
Tuesday 20 August 2019

IPFS

IPFS (the InterPlanetary File System) is a peer-to-peer distributed hypermedia distribution protocol. It consolidates some of the best distributed-systems ideas of recent years, including Git, the self-certifying file system SFS, BitTorrent, and DHTs, to give everyone a single globally addressable namespace, and it is widely regarded as the strongest candidate for a next-generation internet protocol to replace HTTP.
IPFS replaces traditional domain-name-based addressing with content-based addressing: users no longer care where a server is located, or what a file is named and where it is stored. When we put a file on an IPFS node, we get back a unique cryptographic hash computed from its content. The hash reflects the content directly; change even a single bit and the hash is completely different. When IPFS is asked for a file hash, it uses a distributed hash table to find the nodes holding the file, retrieves it, and verifies the data.
This article draws on the official Getting Started guide.
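The flip-one-bit property described above is easy to see with ordinary tools. The sketch below uses sha256sum rather than IPFS's actual multihash/CID encoding, but the principle is the same: the content alone determines the address.

```shell
cd "$(mktemp -d)"

# Two files that differ by one character get completely
# unrelated digests - the basis of content addressing.
printf 'hello world\n' > a.txt
printf 'hello worlt\n' > b.txt

hash_a=$(sha256sum a.txt | cut -d' ' -f1)
hash_b=$(sha256sum b.txt | cut -d' ' -f1)

echo "$hash_a"
echo "$hash_b"
[ "$hash_a" != "$hash_b" ] && echo "one byte changed, entirely new address"
```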

Download go-ipfs

On node 1 (an Ubuntu 16 VM), run:
mkdir ~/ipfs && cd ~/ipfs
wget https://dist.ipfs.io/go-ipfs/v0.4.13/go-ipfs_v0.4.13_linux-amd64.tar.gz
tar xzvf go-ipfs_v0.4.13_linux-amd64.tar.gz && rm go-ipfs_v0.4.13_linux-amd64.tar.gz && cd go-ipfs

Quick start

Initialize

sudo ./install.sh
ipfs init
initializing IPFS node at /home/vagrant/.ipfs
generating 2048-bit RSA keypair...done
peer identity: QmfMFY37oG4HLZrATjdfD1mrvv32p9DSr4zT23yMnfiF6Z
to get started, enter:
       ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme

Add a file to IPFS

echo "hello world" >hello.txt
ipfs add hello.txt
added QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o hello.txt
(With the -r flag you can add an entire directory to IPFS, e.g. ipfs add -r ~/opt.)

Start the IPFS daemon

ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8082
ipfs daemon
API server listening on /ip4/127.0.0.1/tcp/5001
Gateway (readonly) server listening on /ip4/0.0.0.0/tcp/8082
Daemon is ready
We ran ipfs config first because the default local gateway port 8080 was already in use.
Another address commonly changed is Addresses.API:
ipfs config Addresses.API /ip4/0.0.0.0/tcp/8081
The command above changes the IPFS API port to 8081. If the VM's port 8081 is NAT-mapped to port 8081 on the host, then opening http://127.0.0.1:8081/webui in a browser on the Windows host brings up the IPFS admin UI (after an automatic redirect).

Accessing local files remotely

QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o is the hash of hello.txt from earlier. Now fetch the file through the public gateway:
curl https://ipfs.io/ipfs/QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o
hello world
Syncing content to https://ipfs.io is slow; the URL above may only become reachable after a few minutes.
I deliberately installed IPFS on another VM and added a file with the same content ("hello world"); the hash was identical. No doubt many people around the world use "hello world" as test content when trying out IPFS.
So when you open https://ipfs.io/ipfs/QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o, the data served is not necessarily coming from the hello.txt on your own machine, unless the content is so unique that only your machine holds it.
The document at https://ipfs.io/ipfs/QmXZXP8QRMG7xB4LDdKeRL5ZyZGZdhxkkLUSqoJDV1WRAp is fairly unique and probably exists only on my machine; if my machine is off, or the ipfs daemon is not running, others most likely cannot retrieve it.

API

Note: the IPFS HTTP API requires a running ipfs daemon. The IPFS CLI maps one-to-one onto the HTTP API.
For example, the CLI command ipfs swarm peers corresponds to this HTTP request:
curl http://127.0.0.1:5001/api/v0/swarm/peers

Arguments

The following CLI command and HTTP request are equivalent:
ipfs swarm disconnect /ip4/54.93.113.247/tcp/48131/ipfs/QmUDS3nsBD1X4XK5Jo836fed7SErTyTuQzRqWaiQAyBYMP
curl "http://127.0.0.1:5001/api/v0/swarm/disconnect?arg=/ip4/54.93.113.247/tcp/48131/ipfs/QmUDS3nsBD1X4XK5Jo836fed7SErTyTuQzRqWaiQAyBYMP"
{
  "Strings": [
    "disconnect QmUDS3nsBD1X4XK5Jo836fed7SErTyTuQzRqWaiQAyBYMP success"
  ]
}

Flags

CLI flags (options) are passed as query parameters. For example, the flag --encoding=json becomes the query parameter &encoding=json:
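The translation is mechanical: the subcommand path maps to the URL path, positional arguments become arg= query parameters, and each --flag=value becomes flag=value. A small sketch, reusing the object/get example from this section:

```shell
# Build the HTTP API URL for: ipfs object get <hash> --encoding=json
api="http://127.0.0.1:5001/api/v0"
subcommand="object/get"     # CLI subcommand, joined with '/'
arg="QmaaqrHyAQm7gALkRW8DcfGX3u8q9rWKnxEMmf7m9z515w"
flag="encoding=json"        # --encoding=json becomes a query parameter

url="${api}/${subcommand}?arg=${arg}&${flag}"
echo "$url"
# With a local daemon running, you would now: curl "$url"
```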
curl "http://127.0.0.1:5001/api/v0/object/get?arg=QmaaqrHyAQm7gALkRW8DcfGX3u8q9rWKnxEMmf7m9z515w&encoding=json"

Commands

ipfs config show
Show the current configuration.
ipfs id
Show the local node's ID, public key, addresses, and more.
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "GET", "POST", "OPTIONS"]'
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["*"]'
Configure CORS headers.

cat and ls

$ ipfs cat  /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv
Error: this dag node is a directory
$ ipfs ls  /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv
QmZTR5bcpQD7cFgTorqxZDYaew1Wqgfbd2ud9QqGPAkK2V 1688 about
QmYCvbfNbCwFR45HiNP45rwJgvatpiW38D961L5qAhUM5Y 200  contact
QmY5heUM5qgRubMDD1og9fhCPA6QdkMp3QCwd4s7gJsyE7 322  help
QmejvEPop4D7YUadeGqYWmZxHhLc4JBUCzJJHWMzdcMe2y 12   ping
QmXgqKTbzdh83pQtKFb19SpMCpDDcKR2ujqk3pKph9aCNF 1692 quick-start
QmPZ9gcCEpqKTo6aq61g2nXGUhM4iCL3ewB6LDXZCtioEB 1102 readme
QmQ5vhrL7uv6tuoN9KeVBwd4PwfQkXdVVmDLUZuTNxqgvm 1173 security-notes
$ ipfs cat /ipfs/QmPZ9gcCEpqKTo6aq61g2nXGUhM4iCL3ewB6LDXZCtioEB
(prints the contents of the readme file)
cat prints a single file; ls lists a directory.

File operations

Original article
$ ipfs cat /ipfs/QmUcfdnf8jDHKytxa4z8YEG3SsMXr6iWdepfvzKqpnBwU7
webb wang
$ ipfs files mkdir /webb
$ ipfs files ls
webb
$ ipfs files cp /ipfs/QmUcfdnf8jDHKytxa4z8YEG3SsMXr6iWdepfvzKqpnBwU7 /webb/webb.txt
$ ipfs files ls /webb
webb.txt
$ ipfs files read /webb/webb.txt
webb wang
The above demonstrates creating a directory, adding a file to it, and listing its contents. files cp does not change a file's hash, while files mv changes the hash-based address.
Directories have hash values too:
$ ipfs files ls /  -l
webb    Qme738bWGaVkATZtU2CDasoTZJWYrVqrff3RwqxuKweNP4  0
$ ipfs cat Qme738bWGaVkATZtU2CDasoTZJWYrVqrff3RwqxuKweNP4/webb.txt
webb wang
$ ipfs files ls /webb -l           (shows the hash of webb.txt)
webb.txt        QmUcfdnf8jDHKytxa4z8YEG3SsMXr6iWdepfvzKqpnBwU7  10
$ ipfs cat QmUcfdnf8jDHKytxa4z8YEG3SsMXr6iWdepfvzKqpnBwU7  
webb wang
If you read a file with ipfs files read, the argument cannot be a hash address.
The directory /webb now has the hash Qme738bWGaVkATZtU2CDasoTZJWYrVqrff3RwqxuKweNP4. You can open this hash in a browser at http://127.0.0.1:8080/ipfs/Qme738bWGaVkATZtU2CDasoTZJWYrVqrff3RwqxuKweNP4, which shows the directory listing. Then fetch a file inside the directory:
$ curl http://127.0.0.1:8080/ipfs/Qme738bWGaVkATZtU2CDasoTZJWYrVqrff3RwqxuKweNP4/webb.txt
webb wang

Publishing the directory to IPNS

$ ipfs name publish Qme738bWGaVkATZtU2CDasoTZJWYrVqrff3RwqxuKweNP4
Published to QmQiqapf8V2DZ439uTAfEiBuXUBB3wQLZH8EreKaUDaxUo: /ipfs/Qme738bWGaVkATZtU2CDasoTZJWYrVqrff3RwqxuKweNP4
Verify the published result:
$ ipfs name resolve QmQiqapf8V2DZ439uTAfEiBuXUBB3wQLZH8EreKaUDaxUo
/ipfs/Qme738bWGaVkATZtU2CDasoTZJWYrVqrff3RwqxuKweNP4
The response is exactly what we published; so far that is just the /webb directory.
The published content can now be reached via IPNS (note the path is /ipns, not /ipfs):
https://ipfs.io/ipns/QmQiqapf8V2DZ439uTAfEiBuXUBB3wQLZH8EreKaUDaxUo
Accessing that URL can be very slow, probably due to the network. Hitting the local gateway (port 8080 by default, or 8082 as configured above) gives the same result:
curl http://127.0.0.1:8080/ipns/QmQiqapf8V2DZ439uTAfEiBuXUBB3wQLZH8EreKaUDaxUo
(returns a page full of HTML)
With IPNS the directory is published to the network and its contents are reached via the node ID, which keeps the file names stable.
---------------------

A frontend for an IPFS node. 

IPFS Web UI

A web interface to IPFS.
Check on your node stats, explore the IPLD powered merkle forest, see peers around the world and manage your files, without needing to touch the CLI.
The IPFS WebUI is a work-in-progress. Help us make it better! We use the issues on this repo to track the work and it's part of the wider IPFS GUI project.
The app uses ipfs-http-client to communicate with your local IPFS node.
The app is built with create-react-app. Please read the docs.

Install

With node >= 8.12 and npm >= 6.4.1 installed, run
> npm install

Usage

When working on the code, run an ipfs daemon, the local dev server, the unit tests, and the storybook component viewer and see the results of your changes as you save files.
In separate shells run the following:
# Run IPFS
> ipfs daemon
# Run the dev server @ http://localhost:3000
> npm start
# Run the unit tests
> npm test
# Run the UI component viewer @ http://localhost:9009
> npm run storybook

Configure IPFS API CORS headers

You must configure your IPFS API at http://127.0.0.1:5001 to allow cross-origin (CORS) requests from your dev server at http://localhost:3000.
Similarly, if you want to try out pre-release versions at https://webui.ipfs.io, you need to add that as an allowed domain too.

Easy mode

Run the cors-config.sh script with:
> ./cors-config.sh

The manual way

> ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["http://localhost:3000", "https://webui.ipfs.io"]'
> ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "GET", "POST"]'

Reverting

To reset your config back to the default configuration, run the following command.
> ipfs config --json API.HTTPHeaders {}
You might also like to copy the ~/.ipfs/config file somewhere with a useful name so you can use ipfs config replace to switch your node between default and dev mode easily.

Build

To create an optimized static build of the app, output to the build directory:
# Build out the html, css & js to ./build
> npm run build

Test

The following command will run the app tests, watch source files and re-run the tests when changes are made:
> npm test
The WebUI uses Jest to run the isolated unit tests. Unit test files are located next to the component they test and have the same file name, but with the extension .test.js

End-to-end tests

The end-to-end tests (e2e) test the full app in a headless Chromium browser. They require an http server to be running to serve the app.
In dev, run npm start in another shell before starting the tests.
# Run the end-to-end tests
> npm run test:e2e
By default the tests run headless, so you won't see the browser. To debug test errors, it can be helpful to see the robot clicking around the site. To disable headless mode and see the browser, set the environment variable DEBUG=true.
# See the end-to-end tests in a browser
> DEBUG=true npm run test:e2e
In a continuous integration environment we lint the code, run the unit tests, build the app, start an http server, and run the e2e tests.
> npm run lint
> npm test
> npm run build
> npm run test:ci:e2e

Coverage

To do a single run of the tests and generate a coverage report, run the following:
> npm run test:coverage

Lint

Perform standard linting on the code:
> npm run lint

Analyze

To inspect the built bundle for bundled modules and their size, first build the app then:
# Run bundle
> npm run analyze

Translations

The translations are stored in ./public/locales, and the English version is the source of truth. We use Transifex to help us translate the WebUI into other languages.
If you're interested in contributing a translation, go to our page on Transifex, create an account, pick a language and start translating.
You can read more on how we use Transifex and i18next in this app at docs/LOCALIZATION.md

Releasing a new version of the WebUI

  1. PR master with the result of tx pull -a to pull the latest translations from Transifex
  2. Tag it: npm version, git push, git push --tags.
  3. Add release notes to https://github.com/ipfs-shipyard/ipfs-webui/releases
  4. Wait for master to build on CI, and grab the CID for the build
  5. Update the hash at:
-----

The InterPlanetary File System (IPFS): a distributed-storage protocol from the blockchain world that safeguards user privacy

Many of you, like me, may have first heard of IPFS because of Filecoin, the token Elon Musk once "endorsed." In fact, IPFS is a network protocol, while Filecoin is a decentralized storage project built on IPFS. Filecoin is an application of the IPFS protocol, but by no means the only one.

The InterPlanetary File System (IPFS) is a network transfer protocol designed for persistent, distributed storage and sharing of files. It is a content-addressable, peer-to-peer hypermedia distribution protocol; the nodes of an IPFS network form a distributed file system. It is an open-source project, developed since 2014 by Protocol Labs with the help of the open-source community, and was originally designed by Juan Benet.

The Brave browser has integrated the IPFS protocol since version 1.19, making it the first privacy-focused browser to support it. IPFS is a network transfer protocol for distributed storage and file sharing; compared with HTTP (Hypertext Transfer Protocol) and HTTPS (Hypertext Transfer Protocol Secure), which have served for decades, it offers a completely different way of moving data.

Given that IPFS can improve page load speed, connection stability, security, and privacy, the author (一灯不是和尚) very much hopes more browsers will support the IPFS transfer protocol; that would have an enormous impact on the internet as a whole.

Contents

1. What is the IPFS protocol?

When we browse the web over HTTP or HTTPS, the browser uses a URL to fetch content from the fixed server hosting the site. The physical distance between your location and that server affects how long pages take to load.

IPFS distributes a site's data across the network, replacing URLs and origin servers, and uses a URI (Uniform Resource Identifier) to access data. In short, IPFS resembles BitTorrent and blockchains: every computer or mobile device in the network (called a "node") temporarily stores site data, so whenever you access a target site over IPFS, the data is loaded from the nodes nearest to you, much like the CDNs we use today. If you prefer that your device not act as a local node, you can access IPFS content through a "public gateway" instead.

2. Pros and cons of the IPFS protocol

(1) Advantages of IPFS

Because IPFS's distributed hosting works much like the CDN networks offered by IDC providers, it can noticeably improve page load speed: your device fetches data from the nearest nodes rather than from the origin server, so load times and bandwidth requirements drop sharply, and file transfers and streaming get faster. If the major browser vendors adopted IPFS, every user would effectively become a free CDN node, which would be very bad news for website-hosting providers. With IPFS, even if the target site goes offline, users can still access it, because they read data already cached on other nodes. It is precisely this distributed storage that makes firewall censorship so difficult: there is no single target URL or IP to block, because the site's content is spread across every node in the network.

(2) Drawbacks of IPFS

If a firewall cannot block this style of transfer, it may simply block IPFS traffic outright. And whether or not Brave's IPFS support runs as a local node, there are privacy issues. If you act as a node, the network assigns you a unique ID that other users can see and can use to find out what you actually host and access. Also, when someone fetches the IPFS data you host via that ID, they consume your hardware and local bandwidth.

You can also choose not to run a node and only access IPFS content through a "public gateway", but then the gateway can see and log your IP address. You might think of combining a VPN with IPFS to get past censorship; but once your IP has changed and you are already through the wall, adding IPFS is gilding the lily: there is little point, and it may even slow down access.

As for circumvention itself, it is a niche need. IPFS is very unlikely to replace HTTP or HTTPS; it will most likely end up like Tor. And just as with Tor, every website would need to add dedicated IPFS support, which is unrealistic for most small and mid-sized sites: they lack the in-house expertise, and the market offers few providers of such services.

If you want to try Brave's IPFS support, download the latest Brave browser and visit ipfs://bafybeiemxf5abjwjbikoz4mc3a3dla6ual3jsgpdr4cjr3oz3evfyavhwq/wiki/Vincent_van_Gogh.html, one of Brave's demo addresses, which loads a Wiki page. Brave's IPFS feature is off by default and must be enabled under "Settings > Extensions".

3. The relationship between IPFS and Filecoin

IPFS is a network protocol, while Filecoin is a decentralized storage project built on it; Filecoin is an application of IPFS, though not the only one. Both are flagship projects of Protocol Labs. IPFS is a peer-to-peer, versioned, content-addressed hypermedia transfer protocol that positions itself against the traditional HTTP protocol and aims to build a distributed Web 3.0. But IPFS itself is only an open-source, low-level communication protocol that anyone can use for free. At present every IPFS node both offers storage and needs other nodes to help store its own data, "one for all, all for one": you need others' storage, and you are expected to share yours.

That is why IPFS needs Filecoin's incentive mechanism: to attract professional storage providers who can offer more professional, secure, and stable storage. Filecoin is a decentralized distributed storage network and the sole incentive layer of IPFS; through a blockchain token system it issues a token, abbreviated FIL. In short, Filecoin is a major application built on top of IPFS.

----------------

A Beginner's Guide to IPFS

IPFS stands for InterPlanetary File System, a name that sounds wonderfully sci-fi.

It is a network transfer protocol designed for persistent, distributed storage and file sharing: a content-addressable, peer-to-peer hypermedia distribution protocol. All IPFS nodes worldwide form a single distributed file system, and anyone in the world can store and access the files in it through an IPFS gateway.

This cool project was originally designed by Juan Benet and has been developed since 2014 by Protocol Labs with the help of the open-source community; it is fully open source.

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/compare/ipfs-illustration-http.svg

HTTP downloads a file from one computer at a time instead of fetching pieces from multiple computers simultaneously. Peer-to-peer IPFS saves a great deal of bandwidth, up to 60% for video, making it possible to distribute large volumes of data efficiently and without duplication.

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/compare/ipfs-illustration-history.svg

The average lifespan of a web page is 100 days before it is gone forever. The primary medium of our era is simply too fragile. IPFS keeps every version of a file and makes it simple to set up resilient networks for mirroring data.

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/compare/ipfs-illustration-centralized.svg

The internet has been one of humanity's great equalizers and a driver of innovation, but the increasing consolidation of control threatens that progress. IPFS avoids this through distributed technology.

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/compare/ipfs-illustration-network.svg

IPFS supports the creation of diverse, resilient networks for persistent availability, with or without an internet backbone connection. That means better connectivity for the developing world, during natural disasters, or when you're just on coffee-shop wi-fi.

IPFS claims that whatever you are doing with existing Web technology today, IPFS can do it better.

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/usefull/usefull_1.png

  • For archivists

    IPFS provides block-level deduplication, high performance, and cluster-based data persistence, which helps store the world's information for the benefit of future generations

  • For service providers

    IPFS provides secure P2P content delivery that can save service providers millions in bandwidth costs

  • For researchers

    If you use or distribute large data sets, IPFS can help you with fast performance and decentralized archiving

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/usefull/usefull_2.png

  • For the developing world

    High-latency networks are a major obstacle for people with poor internet infrastructure. IPFS provides resilient access to data, independent of latency or backbone connectivity

  • For blockchains

    With IPFS, you can address large amounts of data and place immutable, permanent links in transactions, timestamping and securing content without having to put the data itself on the chain

  • For content creators

    IPFS embodies the free and independent spirit of the web and can help you deliver content at a much lower cost.

    Let's walk through the process of adding a file to IPFS to get a simple picture of how IPFS works.

    https://blog.zuik.ren/posts/tutorials/p2p/ipfs/work/work_1.png

    IPFS splits the file into blocks of 256 KB each; the number of blocks depends on the file's size. It then computes a hash of each block, which acts as that block's fingerprint.

    https://blog.zuik.ren/posts/tutorials/p2p/ipfs/work/work_2.png

    Because many files contain repeated regions of data, some of the blocks produced by splitting are exactly identical, which shows up as identical fingerprint hashes. Blocks with the same fingerprint hash are treated as one and the same block, so identical data exists only once in IPFS, eliminating the overhead of storing duplicates.
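Block-level deduplication can be simulated with standard tools: build a file whose 256 KB chunks partly repeat, split it the way IPFS does by default, and count distinct chunk hashes. (A simplification: real IPFS links blocks into a Merkle DAG, but the dedup arithmetic is the same.)

```shell
cd "$(mktemp -d)"

# A 1 MiB file made of four 256 KiB chunks, three of which are
# the same all-zero chunk.
head -c 262144 /dev/zero    > chunk_zero
head -c 262144 /dev/urandom > chunk_rand
cat chunk_zero chunk_rand chunk_zero chunk_zero > file

# Split into 256 KiB blocks, IPFS's default block size.
split -b 262144 file block_

total=$(ls block_* | wc -l)
unique=$(sha256sum block_* | awk '{print $1}' | sort -u | wc -l)

echo "total blocks:  $total"
echo "unique blocks: $unique"   # duplicates are stored only once
```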

    https://blog.zuik.ren/posts/tutorials/p2p/ipfs/work/work_3.png

    Each node in the IPFS network stores only the content it is interested in, that is, the content its user accesses often or has explicitly pinned.

    Beyond that, a node stores some index information that assists addressing during lookups. When we need a particular block, the index tells IPFS which nodes hold a copy of that block.

    https://blog.zuik.ren/posts/tutorials/p2p/ipfs/work/work_4.png

    When we want to view or download a file from IPFS, IPFS uses that file's fingerprint hash to query the index information and ask the nodes it is connected to, in order to discover which nodes in the network store the file's data blocks.

    https://blog.zuik.ren/posts/tutorials/p2p/ipfs/work/one-ipfs-node-only.png

    https://blog.zuik.ren/posts/tutorials/p2p/ipfs/work/work_5.png

    The fingerprint hash of a file stored in IPFS is a very long string that is hard to remember, and in fact you don't have to: IPFS provides IPNS, which maps human-readable names to fingerprint hashes, so you only need to remember the name you registered in IPNS.

    Set the IPFS_PATH environment variable; this directory will be used as the local IPFS repository during initialization and afterwards. If it is not set, IPFS defaults to the .ipfs folder in your home directory.

    https://blog.zuik.ren/posts/tutorials/p2p/ipfs/tutorial/ipfs_env.png

    Run ipfs init to initialize. This generates the keypair and creates the initial files in the IPFS_PATH directory specified above.

    https://blog.zuik.ren/posts/tutorials/p2p/ipfs/tutorial/ipfs_init.png

    Run ipfs id to view your own node's identity, including the node ID, public key, addresses, agent version, protocol version, supported protocols, and so on.

    You can run ipfs id <someone else's ID> to view another node's information.

    Check availability with the command shown in the init output; here we use ipfs cat to view the content behind the suggested CID.

    https://blog.zuik.ren/posts/tutorials/p2p/ipfs/tutorial/ipfs_init.png

    Run the following command to start the daemon:

    ipfs daemon

    Fetching files in IPFS is implicit: viewing or downloading with the commands below is what tells IPFS to go and retrieve the file you want.

    To view text, use ipfs cat, just as in the availability check above.

    Images, videos, and other binary files cannot be viewed with cat (you get a screenful of garbage); use ipfs get <cid> to download the file locally instead. Downloaded this way, the file is named after the CID, a long string with no mnemonic value, so give it a proper name with ipfs get <cid> -o newname.png.

    https://blog.zuik.ren/posts/tutorials/p2p/ipfs/tutorial/ipfs_get.png

    Use ipfs ls to list a directory.

    https://blog.zuik.ren/posts/tutorials/p2p/ipfs/tutorial/ipfs_ls.png

    Use ipfs add <filename> to add a file to IPFS.

    To add a folder, pass the -r flag so it is processed recursively.

    https://blog.zuik.ren/posts/tutorials/p2p/ipfs/tutorial/ipfs_add.png

     

    Before digging deeper, let's look at a few IPFS concepts you simply have to know. They are fundamental building blocks of IPFS and essential for everything that follows.

    A peer is an equal node. Because IPFS is built on P2P technology, there is no notion of server and client; everyone is both server and client at once: one for all, all for one.

    A content identifier (CID) is a label used to point to content in IPFS. It does not indicate where the content is stored; instead it forms a kind of address based on the content itself. A CID is short, regardless of the size of the content it points to.

    For details, see the official IPFS docs: Content addressing and CIDs

    Online CID viewer: CID Inspector
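A CID is not a bare digest (it also encodes a version, codec, and multihash), but its fixed length comes from the underlying hash: a digest is the same size whether the input is one byte or many megabytes. A quick sketch with sha256sum:

```shell
cd "$(mktemp -d)"

printf 'x' > tiny                 # 1 byte
head -c 10485760 /dev/zero > big  # 10 MiB

hash_tiny=$(sha256sum tiny | cut -d' ' -f1)
hash_big=$(sha256sum big | cut -d' ' -f1)

# Both digests are 64 hex characters, regardless of input size.
echo "${#hash_tiny} ${#hash_big}"
```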

  • The official IPFS gateway: https://ipfs.io/
  • Cloudflare's IPFS gateway service: https://cf-ipfs.com
  • A list of other public gateways: https://ipfs.github.io/public-gateway-checker/

https://www.cloudflare.com/distributed-web-gateway/

See: IPFS docs: Gateway

IPFS uses content-based addressing. Simply put, IPFS derives the CID from the hash of the file's data, so the CID depends only on the content, which also means that modifying the file changes its CID. If we share a file with someone over IPFS, every content update means sending them a new link.

To solve this problem, the InterPlanetary Name System (IPNS) provides an address that can be updated.

See: IPFS docs: IPNS

https://docs.ipfs.io/concepts/ipld/

Since IPFS claims it can build a new generation of distributed Web, let's deploy a website to IPFS and experience decentralized, distributed Web 3.0 technology for ourselves.

I generate my blog with the Hugo static site generator; the generated output lives in the public directory, so the first step is to add public and everything inside it to IPFS.

# -r means add recursively
ipfs add -r public

# actual output
PS D:\blog> ipfs add -r public
added QmZT5jXEi2HFVv8tzuDqULBaiEPc8geZFVjXxb9iAsBqbg public/404.html
added QmcGDfkg6mcboba3MkNeamGQvRgdnHiD4HZhvCRwEnSdSj public/CNAME
(many lines of output later...)
added QmT61SS4ykbnt1ECQFDfX27QJdyhsVfRrLJztDvbcR7Kc1 public/tags
added QmdoJ8BiuN8H7K68hJhk8ZrkFXjU8T9Wypi9xAyAzt2zoj public
 35.12 MiB / 35.12 MiB [===========================================] 100.00%

If you don't want to watch that long scroll and only need the final hash, add the -Q (quiet) flag:

PS D:\blog\blog> ipfs add -rQ public
QmdoJ8BiuN8H7K68hJhk8ZrkFXjU8T9Wypi9xAyAzt2zoj

In the output above, the hash on the line named public is the CID of the public directory; we can now access what we just added through an IPFS gateway using this CID.

Let's first check through the local IPFS gateway that the add succeeded. Note that this step requires the local IPFS daemon to be running.

Visit: http://localhost:8080/ipfs/QmdoJ8BiuN8H7K68hJhk8ZrkFXjU8T9Wypi9xAyAzt2zoj

The browser redirects automatically, and our page loads correctly.

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/web/ipfs_local_web.png

注意

You will notice the address bar now shows a domain built from a different long string:

<long string>.ipfs.localhost:8080

That long string relates to another IPFS concept: IPLD.

If your page shows the content but the styling is broken, as below:

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/web/local_error.png

This happens because the site uses absolute URLs; we need relative URLs instead. If you use Hugo like me, just add relativeURLs = true to your configuration file.

We have just reached the site in IPFS through the local gateway; now let's try one of the other, public IPFS gateways.

Here I pick the gateway maintained by the IPFS project, https://ipfs.io, and visit: https://ipfs.io/ipfs/QmdoJ8BiuN8H7K68hJhk8ZrkFXjU8T9Wypi9xAyAzt2zoj

Note that at this point the site still exists only on our own machine, and it takes other IPFS gateways a while to locate our files in the IPFS network. Keep the IPFS daemon running and connected to hundreds or thousands of peers; that helps the official gateway find us sooner.

After many refreshes and some anxious waiting, it finally renders:

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/web/ipfs_web.png

Use ipfs name publish <CID> to publish an IPNS record; this may take a little while.

PS D:\blog\blog> ipfs name publish QmdoJ8BiuN8H7K68hJhk8ZrkFXjU8T9Wypi9xAyAzt2zoj
Published to k51qzi5uqu5djhbknypxifn09wxhtf3y1bce8oriud1ojqz5r71mpu75rru520: /ipfs/QmdoJ8BiuN8H7K68hJhk8ZrkFXjU8T9Wypi9xAyAzt2zoj

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/web/ipns_web.png

With the IPNS mapping in place we can keep updating the site. If we published raw CIDs instead of using IPNS, readers of an old link would never see the latest version.

Note

If you use IPNS, back up your node's private key and the key generated when the IPNS address was created.

They are stored, respectively, in the config file and the keystore folder under the directory shown during init.

IPNS is not the only way to create a mutable address on IPFS. There is also DNSLink, which is currently much faster than IPNS and uses human-readable names.

For example, to bind the domain ipfs.lgf.im to the site just published on IPFS, I need to create a TXT record at _dnslink.ipfs.lgf.im.

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/web/dnslink.png
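For reference, DNSLink expects the TXT value to have the form dnslink=/ipfs/<CID> (or dnslink=/ipns/<name>). Using the domain and CID from this walkthrough, the record would look roughly like this zone-file sketch (TTL and formatting are illustrative):

```
; hypothetical zone-file entry for ipfs.lgf.im
_dnslink.ipfs.lgf.im.  300  IN  TXT  "dnslink=/ipfs/QmdoJ8BiuN8H7K68hJhk8ZrkFXjU8T9Wypi9xAyAzt2zoj"
```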

Then anyone can find my site at /ipns/ipfs.lgf.im; try http://localhost:8080/ipns/ipfs.lgf.im

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/web/ipfs_dnslink_web.png

Detailed documentation: IPFS docs: DNSLink

When updating content, just add it again and republish the IPNS record; if you use the DNSLink approach, update the DNS record as well.

Each Merkle structure is a directed acyclic graph (DAG), because each node is addressed by its name. Each Merkle branch is the hash of its local content, and child nodes are named by their hashes rather than their full contents. A node therefore cannot be edited after creation. This prevents cycles (assuming no hash collisions), because the first node created can never be linked to the last to close a loop.

For any Merkle structure, creating a new branch or verifying an existing one generally means applying a hash algorithm over some combination of local content (for example, a list's child hashes plus other bytes). Multiple hash algorithms are available in IPFS.

The data fed into the hash algorithm is described at https://github.com/ipfs/go-ipfs/tree/master/merkledag
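The "children named by hashes" idea can be sketched in plain shell: name two leaves by their content hashes, then name the parent by hashing the concatenation of its children's names. Editing a leaf renames the parent too. (A toy model: real IPFS DAG nodes are protobuf-encoded before hashing.)

```shell
cd "$(mktemp -d)"

printf 'leaf one' > l1
printf 'leaf two' > l2

h1=$(sha256sum l1 | cut -d' ' -f1)
h2=$(sha256sum l2 | cut -d' ' -f1)

# A parent is named by a hash over its children's names, so any
# edit below it ripples up and produces a brand-new parent name.
parent=$(printf '%s %s' "$h1" "$h2" | sha256sum | cut -d' ' -f1)

printf 'leaf one, edited' > l1
h1_new=$(sha256sum l1 | cut -d' ' -f1)
parent_new=$(printf '%s %s' "$h1_new" "$h2" | sha256sum | cut -d' ' -f1)

[ "$parent" != "$parent_new" ] && echo "editing a leaf renamed the parent"
```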

See: IPFS docs: Merkle DAG

See: IPFS docs: DHT

IPFS is a file system at heart; it exists to store files, and its properties have attracted many applications built on top of it.

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/ipfs-applications-diagram.png

https://blog.zuik.ren/posts/tutorials/p2p/ipfs/filecoin.png

IPFS provides Go and JavaScript implementations of the protocol, which makes it very convenient to integrate IPFS into our own applications and take full advantage of it.

On P2P: https://t.lgf.im/post/618818179793371136/%E5%85%B3%E4%BA%8Eresilio-sync

Many people mistakenly believe IPFS stores files forever. The technology is indeed well suited to long-term preservation, but content must keep being accessed, pinned, and propagated; once every node in the network has garbage-collected it, the data is lost after all.

Some people assume P2P implies anonymity, like Tor or Ethereum. In reality, most P2P applications are not anonymous, and neither is IPFS, so protect yourself when publishing sensitive information. IPFS does not yet support the Tor network.

In theory, with enough nodes, P2P-based IPFS could saturate your bandwidth with latency even lower than the centralized Web. In practice, not many people use IPFS right now; you will connect to at most around 1,000 nodes (at least, that is the most I have ever managed), so the theoretical ideal is out of reach. IPFS is not very fast today, cold data that few people access has high latency, and quite often it cannot be found at all.

It is true that there are plenty of opportunists trying to profit by selling so-called IPFS mining rigs (really just ordinary PCs with large disks attached). They deliberately conflate IPFS, Filecoin, Bitcoin, and blockchain, push the false notion of permanent storage, and ride the blockchain hype to defraud people who understand none of it. That behaviour is shameless.

In fact, IPFS itself is not a scam, and neither is Filecoin, the incentive layer built on it. In my experience nobody needs to buy any so-called IPFS mining rig; it is enough to run an IPFS daemon in the background while your computer is on. Don't let the hype go to your head.

 

  • from http://web.archive.org/web/20220510104601/https://zu1k.com/posts/tutorials/p2p/ipfs/ 
--------------

https://docs.ipfs.io/install/ipfs-desktop/#macos

https://github.com/ipfs/ipfs-desktop/releases

https://docs.ipfs.io/how-to/command-line-quick-start/ 

---------------------------------------------------------------

An IPFS implementation in Go

ipfs.tech

 

What is Kubo?

Kubo was the first IPFS implementation and is the most widely used one today. It implements the InterPlanetary File System, the Web3 standard for content addressing that is interoperable with HTTP, and is powered by IPLD's data models and libp2p for network communication. Kubo is written in Go.

Featureset

Other implementations

See List

What is IPFS?

IPFS is a global, versioned, peer-to-peer filesystem. It combines good ideas from previous systems such as Git, BitTorrent, Kademlia, SFS, and the Web. It is like a single BitTorrent swarm, exchanging git objects. IPFS provides an interface as simple as the HTTP web, but with permanence built-in. You can also mount the world at /ipfs.

For more info see: https://docs.ipfs.tech/concepts/what-is-ipfs/

Before opening an issue, consider using one of the following locations to ensure you are opening your thread in the right place:


Next milestones

Milestones on GitHub

Table of Contents

Security Issues

Please follow SECURITY.md.

Install

The canonical download instructions for IPFS are over at: https://docs.ipfs.tech/install/. It is highly recommended you follow those instructions if you are not interested in working on IPFS development.

System Requirements

IPFS can run on most Linux, macOS, and Windows systems. We recommend running it on a machine with at least 2 GB of RAM and 2 CPU cores (kubo is highly parallel). On systems with less memory, it may not be completely stable.

If your system is resource-constrained, we recommend:

  1. Installing OpenSSL and rebuilding kubo manually with make build GOTAGS=openssl. See the download and compile section for more information on compiling kubo.
  2. Initializing your daemon with ipfs init --profile=lowpower

Docker

Official images are published at https://hub.docker.com/r/ipfs/kubo/:


More info on how to run Kubo (go-ipfs) inside Docker can be found here.

Official prebuilt binaries

The official binaries are published at https://dist.ipfs.tech#kubo:


From there:

  • Click the blue "Download Kubo" on the right side of the page.
  • Open/extract the archive.
  • Move kubo (ipfs) to your path (install.sh can do it for you).

If you are unable to access dist.ipfs.tech, you can also download kubo (go-ipfs) from:

Updating

Using ipfs-update

IPFS has an updating tool that can be accessed through ipfs update. The tool is not installed alongside IPFS in order to keep that logic independent of the main codebase. To install the ipfs-update tool, download it here.

Downloading builds using IPFS

List the available versions of Kubo (go-ipfs) implementation:

$ ipfs cat /ipns/dist.ipfs.tech/kubo/versions

Then, to view available builds for a version from the previous command ($VERSION):

$ ipfs ls /ipns/dist.ipfs.tech/kubo/$VERSION

To download a given build of a version:

$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_darwin-386.tar.gz    # darwin 32-bit build
$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_darwin-amd64.tar.gz  # darwin 64-bit build
$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_freebsd-amd64.tar.gz # freebsd 64-bit build
$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_linux-386.tar.gz     # linux 32-bit build
$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_linux-amd64.tar.gz   # linux 64-bit build
$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_linux-arm.tar.gz     # linux arm build
$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_windows-amd64.zip    # windows 64-bit build

Unofficial Linux packages

Arch Linux

kubo via Community Repo

# pacman -S kubo

kubo-git via AUR

Nix

With the purely functional package manager Nix you can install kubo (go-ipfs) like this:

$ nix-env -i kubo

You can also install the Package by using its attribute name, which is also kubo.

Solus

Package for Solus

$ sudo eopkg install kubo

You can also install it through the Solus software center.

openSUSE

Community Package for go-ipfs

Guix

Community Package for go-ipfs is now out-of-date.

Snap

No longer supported, see rationale in kubo#8688.

Unofficial Windows packages

Chocolatey

No longer supported, see rationale in kubo#9341.

Scoop

Scoop provides kubo as kubo in its 'extras' bucket.

PS> scoop bucket add extras
PS> scoop install kubo

Unofficial macOS packages

MacPorts

The package ipfs currently points to kubo (go-ipfs) and is being maintained.

$ sudo port install ipfs

Nix

In macOS you can use the purely functional package manager Nix:

$ nix-env -i kubo

You can also install the Package by using its attribute name, which is also kubo.

Homebrew

A Homebrew formula ipfs is maintained too.

$ brew install --formula ipfs

Build from Source


kubo's build system requires Go and some standard POSIX build tools:

  • GNU make
  • Git
  • GCC (or some other go compatible C Compiler) (optional)

To build without GCC, build with CGO_ENABLED=0 (e.g., make build CGO_ENABLED=0).

Install Go


If you need to update: Download latest version of Go.

You'll need to add Go's bin directories to your $PATH environment variable e.g., by adding these lines to your /etc/profile (for a system-wide installation) or $HOME/.profile:

export PATH=$PATH:/usr/local/go/bin
export PATH=$PATH:$GOPATH/bin

(If you run into trouble, see the Go install instructions).

Download and Compile IPFS

$ git clone https://github.com/ipfs/kubo.git

$ cd kubo
$ make install

Alternatively, you can run make build to build the go-ipfs binary (storing it in cmd/ipfs/ipfs) without installing it.

NOTE: If you get an error along the lines of "fatal error: stdlib.h: No such file or directory", you're missing a C compiler. Either re-run make with CGO_ENABLED=0 or install GCC.

Cross Compiling

Compiling for a different platform is as simple as running:

make build GOOS=myTargetOS GOARCH=myTargetArchitecture

OpenSSL

To build go-ipfs with OpenSSL support, append GOTAGS=openssl to your make invocation. Building with OpenSSL should significantly reduce the background CPU usage on nodes that frequently make or receive new connections.

Note: OpenSSL requires CGO support and, by default, CGO is disabled when cross-compiling. To cross-compile with OpenSSL support, you must:

  1. Install a compiler toolchain for the target platform.
  2. Set the CGO_ENABLED=1 environment variable.

Troubleshooting

  • Separate instructions are available for building on Windows.
  • git is required in order for go get to fetch all dependencies.
  • Package managers often contain out-of-date golang packages. Ensure that go version reports at least 1.10. See above for how to install go.
  • If you are interested in development, please install the development dependencies as well.
  • Shell command completions can be generated with one of the ipfs commands completion subcommands. Read docs/command-completion.md to learn more.
  • See the misc folder for how to connect IPFS to systemd or whatever init system your distro uses.

Getting Started

Usage

docs: Command-line quick start docs: Command-line reference

To start using IPFS, you must first initialize IPFS's config files on your system; this is done with ipfs init. See ipfs init --help for information on the optional arguments it takes. After initialization is complete, you can use ipfs mount, ipfs add and any of the other commands to explore!

Some things to try

Basic proof of 'ipfs working' locally:

echo "hello world" > hello
ipfs add hello
# This should output a hash string that looks something like:
# QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o
ipfs cat <that hash>

HTTP/RPC clients

For programmatic interaction with Kubo, see our list of HTTP/RPC clients.

Troubleshooting

If you have previously installed IPFS before and you are running into problems getting a newer version to work, try deleting (or backing up somewhere else) your IPFS config directory (~/.ipfs by default) and rerunning ipfs init. This will reinitialize the config file to its defaults and clear out the local datastore of any bad entries.

Please direct general questions and help requests to our forums.

If you believe you've found a bug, check the issues list and, if you don't see your problem there, either come talk to us on Matrix chat, or file an issue of your own!

Packages

See IPFS in GO documentation.

Development

Some places to get you started on the codebase:

Map of Implemented Subsystems

WIP: This is a high-level architecture diagram of the various sub-systems of this specific implementation. To be updated with how they interact. Anyone who has suggestions is welcome to comment here on how we can improve this!

CLI, HTTP-API, Architecture Diagram

Origin

Description: Dotted means "likely going away". The "Legacy" parts are thin wrappers around some commands to translate between the new system and the old system. The grayed-out parts on the "daemon" diagram are there to show that the code is all the same, it's just that we turn some pieces on and some pieces off depending on whether we're running on the client or the server.

Testing

make test

Development Dependencies

If you make changes to the protocol buffers, you will need to install the protoc compiler.

Developer Notes

Find more documentation for developers on docs

Maintainer Info

from https://github.com/ipfs/kubo 

----------------------------------------

Gx: a universal package manager built on IPFS

gx is a general-purpose package manager built on IPFS, the distributed, content-addressed file system. It is language-agnostic, very flexible, powerful, and simple. gx is still in alpha, but it has proven reliable for dependency management in go-ipfs.

Usage

Add a new repository:

$ gx repo add myrepo /ipns/QmPupmUqXHBxikXxuptYECKaq8tpGNDSetx1Ed44irmew3

List the configured repositories:

$ gx repo list
myrepo       /ipns/QmPupmUqXHBxikXxuptYECKaq8tpGNDSetx1Ed44irmew3

List the packages in a repository:

$ gx repo list myrepo
events      QmeJjwRaGJfx7j6LkPLjyPfzcD2UHHkKehDPkmizqSpcHT
smalltree   QmRgTZA6jGi49ipQxorkmC75d3pLe69N6MZBKfQaN6grGY
stump       QmebiJS1saSNEPAfr9AWoExvpfGoEK4QCtdLKCK4z6Qw7U

Import a package from a repository:

$ gx repo import events

Now hosted on GitHub:

https://github.com/whyrusleeping/gx

--------------------------------------------

A package management tool. 

gx

The language-agnostic, universal package manager

gx is a packaging tool built around the distributed, content addressed filesystem IPFS. It aims to be flexible, powerful and simple.

gx is Alpha Quality. While not perfect, gx is reliable enough to manage dependencies in go-ipfs and is ready for use by developers of all skill levels.

Table of Contents

Background

gx was originally designed to handle dependencies in Go projects in a distributed fashion, and pulls ideas from other beloved package managers (like npm).

gx was designed with the following major goals in mind:

  1. Be language/ecosystem agnostic by providing git-like hooks for adding new ecosystems.
  2. Provide completely reproducible packages through content addressing.
  3. Use a flexible, distributed storage backend.

Requirements

Users are encouraged to have a running IPFS daemon of at least version 0.4.2 on their machines. If not present, gx will use the public gateway. If you wish to publish a package, a local running daemon is a hard requirement. If your IPFS repo is in a non-standard location, remember to set $IPFS_PATH. Alternatively, you can explicitly set $IPFS_API to $IPFS_API_IPADDR:$PORT.
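For instance, a non-standard repo location and an explicit API address could be configured like this before running gx (the path and address below are illustrative, not defaults gx requires):

```shell
# Point IPFS (and therefore gx) at a non-standard repo location,
# and pin the API address explicitly.
export IPFS_PATH="$HOME/.ipfs-gx"   # hypothetical repo path
export IPFS_API="127.0.0.1:5001"    # $IPFS_API_IPADDR:$PORT

echo "repo: $IPFS_PATH"
echo "api:  $IPFS_API"
```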

Installation

$ (cd ~ && GO111MODULE=on go get github.com/whyrusleeping/gx)

This will download, build, and install a binary to $GOPATH/bin. To modify gx, just change the source in that directory, and run go build.

Usage

Creating and publishing new generic package:

$ gx init
$ gx publish

This will output a 'package-hash' unique to the content of the package at the time of publishing. If someone downloads the package and republishes it, the exact same hash will be produced.

package.json

It should be noted that gx is meant to work with existing package.json files. If you are adding a package to gx that already has a package.json file in its root, gx will try to work with it. Any shared fields will have the same types, and any fields unique to gx will be kept separate.

E.g. A single package.json file could be used to serve both gx and another packaging tool, such as npm. Since gx is Alpha Quality there may be some exceptions to the above statements, if you notice one, please file an issue.
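For illustration, here is roughly what such a shared package.json might look like. The npm-style fields (name, version) sit next to gx-specific ones; the gx-specific field names shown (gxVersion, gxDependencies) are based on the files gx generates for Go packages, but treat the exact shape as an assumption, not a spec:

```json
{
  "name": "mypkg",
  "version": "1.0.0",
  "language": "go",
  "gxVersion": "0.12.1",
  "gxDependencies": [
    {
      "name": "go-log",
      "hash": "QmSpJByNKFX1sCsHBEp3R73FL4NF6FnQTEGyNAXHm2GS52",
      "version": "1.2.0"
    }
  ]
}
```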

Installing a gx package

If you've cloned down a gx package, simply run gx install or gx i to install it (and its dependencies).

Dependencies

To add a dependency of another package to your package, simply import it by its hash:

$ gx import QmaDFJvcHAnxpnMwcEh6VStYN4v4PB4S16j4pAuC2KSHVr

This downloads the package specified by the hash into the vendor directory in your workspace. It also adds an entry referencing the package to the local package.json.

Gx has a few nice tools to view and analyze dependencies. First off, the simple:

$ gx deps
go-log              QmSpJByNKFX1sCsHBEp3R73FL4NF6FnQTEGyNAXHm2GS52 1.2.0
go-libp2p-peer      QmWXjJo15p4pzT7cayEwZi2sWgJqLnGDof6ZGMh9xBgU1p 2.0.4
go-libp2p-peerstore QmYkwVGkwoPbMVQEbf6LonZg4SsCxGP3H7PBEtdNCNRyxD 1.2.5
go-testutil         QmYpVUnnedgGrp6cX2pBii5HRQgcSr778FiKVe7o7nF5Z3 1.0.2
go-ipfs-util        QmZNVWh8LLjAavuQ2JXuFmuYH3C11xo988vSgp7UQrTRj1 1.0.0

This just lists out the immediate dependencies of this package. To see dependencies of dependencies, use the -r option: (and optionally the -s option to sort them)

$ gx deps -r -s
go-base58           QmT8rehPR3F6bmwL6zjUN8XpiDBFFpMP2myPdC6ApsWfJf 0.0.0
go-crypto           Qme1boxspcQWR8FBzMxeppqug2fYgYc15diNWmqgDVnvn2 0.0.0
go-datastore        QmbzuUusHqaLLoNTDEVLcSF6vZDHZDLPC7p4bztRvvkXxU 1.0.0
go-ipfs-util        QmZNVWh8LLjAavuQ2JXuFmuYH3C11xo988vSgp7UQrTRj1 1.0.0
go-keyspace         QmUusaX99BZoELh7dmPgirqRQ1FAmMnmnBn3oiqDFGBUSc 1.0.0
go-libp2p-crypto    QmVoi5es8D5fNHZDqoW6DgDAEPEV5hQp8GBz161vZXiwpQ 1.0.4
go-libp2p-peer      QmWXjJo15p4pzT7cayEwZi2sWgJqLnGDof6ZGMh9xBgU1p 2.0.4
go-libp2p-peerstore QmYkwVGkwoPbMVQEbf6LonZg4SsCxGP3H7PBEtdNCNRyxD 1.2.5
go-log              QmSpJByNKFX1sCsHBEp3R73FL4NF6FnQTEGyNAXHm2GS52 1.2.0
go-logging          QmQvJiADDe7JR4m968MwXobTCCzUqQkP87aRHe29MEBGHV 0.0.0
go-multiaddr        QmYzDkkgAEmrcNzFCiYo6L1dTX4EAG1gZkbtdbd9trL4vd 0.0.0
go-multiaddr-net    QmY83KqqnQ286ZWbV2x7ixpeemH3cBpk8R54egS619WYff 1.3.0
go-multihash        QmYf7ng2hG5XBtJA3tN34DQ2GUN5HNksEw1rLDkmr6vGku 0.0.0
go-net              QmZy2y8t9zQH2a1b8q2ZSLKp17ATuJoCNxxyMFG5qFExpt 0.0.0
go-testutil         QmYpVUnnedgGrp6cX2pBii5HRQgcSr778FiKVe7o7nF5Z3 1.0.2
go-text             Qmaau1d1WjnQdTYfRYfFVsCS97cgD8ATyrKuNoEfexL7JZ 0.0.0
go.uuid             QmcyaFHbyiZfoX5GTpcqqCPYmbjYNAhRDekXSJPFHdYNSV 1.0.0
gogo-protobuf       QmZ4Qi3GaRbjcx28Sme5eMH7RQjGkt8wHxt2a65oLaeFEV 0.0.0
goprocess           QmSF8fPo3jgVBAy8fpdjjYqgG87dkJgUprRBHRd2tmfgpP 1.0.0
mafmt               QmeLQ13LftT9XhNn22piZc3GP56fGqhijuL5Y8KdUaRn1g 1.1.1

That's pretty useful: I now know the full set of packages my package depends on. But it's still hard to tell what is imported where. To address that, gx has a --tree option:

$ gx deps --tree
├─ go-base58          QmT8rehPR3F6bmwL6zjUN8XpiDBFFpMP2myPdC6ApsWfJf 0.0.0
├─ go-multihash       QmYf7ng2hG5XBtJA3tN34DQ2GUN5HNksEw1rLDkmr6vGku 0.0.0
│  ├─ go-base58       QmT8rehPR3F6bmwL6zjUN8XpiDBFFpMP2myPdC6ApsWfJf 0.0.0
│  └─ go-crypto       Qme1boxspcQWR8FBzMxeppqug2fYgYc15diNWmqgDVnvn2 0.0.0
├─ go-ipfs-util       QmZNVWh8LLjAavuQ2JXuFmuYH3C11xo988vSgp7UQrTRj1 1.0.0
│  ├─ go-base58       QmT8rehPR3F6bmwL6zjUN8XpiDBFFpMP2myPdC6ApsWfJf 0.0.0
│  └─ go-multihash    QmYf7ng2hG5XBtJA3tN34DQ2GUN5HNksEw1rLDkmr6vGku 0.0.0
│     ├─ go-base58    QmT8rehPR3F6bmwL6zjUN8XpiDBFFpMP2myPdC6ApsWfJf 0.0.0
│     └─ go-crypto    Qme1boxspcQWR8FBzMxeppqug2fYgYc15diNWmqgDVnvn2 0.0.0
├─ go-log             QmNQynaz7qfriSUJkiEZUrm2Wen1u3Kj9goZzWtrPyu7XR 1.1.2
│  ├─ randbo          QmYvsG72GsfLgUeSojXArjnU6L4Wmwk7wuAxtNLuyXcc1T 0.0.0
│  ├─ go-net          QmZy2y8t9zQH2a1b8q2ZSLKp17ATuJoCNxxyMFG5qFExpt 0.0.0
│  │  ├─ go-text      Qmaau1d1WjnQdTYfRYfFVsCS97cgD8ATyrKuNoEfexL7JZ 0.0.0
│  │  └─ go-crypto    Qme1boxspcQWR8FBzMxeppqug2fYgYc15diNWmqgDVnvn2 0.0.0
│  └─ go-logging      QmQvJiADDe7JR4m968MwXobTCCzUqQkP87aRHe29MEBGHV 0.0.0
└─ go-libp2p-crypto   QmUEUu1CM8bxBJxc3ZLojAi8evhTr4byQogWstABet79oY 1.0.2
   ├─ gogo-protobuf   QmZ4Qi3GaRbjcx28Sme5eMH7RQjGkt8wHxt2a65oLaeFEV 0.0.0
   ├─ go-log          Qmazh5oNUVsDZTs2g59rq8aYQqwpss8tcUWQzor5sCCEuH 0.0.0
   │  ├─ go.uuid      QmPC2dW6jyNzzBKYuHLBhxzfWaUSkyC9qaGMz7ciytRSFM 0.0.0
   │  ├─ go-logging   QmQvJiADDe7JR4m968MwXobTCCzUqQkP87aRHe29MEBGHV 0.0.0
   │  ├─ go-net       QmZy2y8t9zQH2a1b8q2ZSLKp17ATuJoCNxxyMFG5qFExpt 0.0.0
   │  │  ├─ go-text   Qmaau1d1WjnQdTYfRYfFVsCS97cgD8ATyrKuNoEfexL7JZ 0.0.0
   │  │  └─ go-crypto Qme1boxspcQWR8FBzMxeppqug2fYgYc15diNWmqgDVnvn2 0.0.0
   │  └─ randbo       QmYvsG72GsfLgUeSojXArjnU6L4Wmwk7wuAxtNLuyXcc1T 0.0.0
   ├─ go-ipfs-util    QmZNVWh8LLjAavuQ2JXuFmuYH3C11xo988vSgp7UQrTRj1 1.0.0
   │  ├─ go-base58    QmT8rehPR3F6bmwL6zjUN8XpiDBFFpMP2myPdC6ApsWfJf 0.0.0
   │  └─ go-multihash QmYf7ng2hG5XBtJA3tN34DQ2GUN5HNksEw1rLDkmr6vGku 0.0.0
   │     ├─ go-base58 QmT8rehPR3F6bmwL6zjUN8XpiDBFFpMP2myPdC6ApsWfJf 0.0.0
   │     └─ go-crypto Qme1boxspcQWR8FBzMxeppqug2fYgYc15diNWmqgDVnvn2 0.0.0
   └─ go-msgio        QmRQhVisS8dmPbjBUthVkenn81pBxrx1GxE281csJhm2vL 0.0.0
      └─ go-randbuf   QmYNGtJHgaGZkpzq8yG6Wxqm6EQTKqgpBfnyyGBKbZeDUi 0.0.0

Now you can see the entire tree of dependencies for this project, although for larger projects this will get messy. If you're just interested in the dependency tree of a single package, you can use the --highlight option to filter the tree's printing:

$ gx deps --tree --highlight=go-crypto
├─ go-multihash       QmYf7ng2hG5XBtJA3tN34DQ2GUN5HNksEw1rLDkmr6vGku 0.0.0
│  └─ go-crypto       Qme1boxspcQWR8FBzMxeppqug2fYgYc15diNWmqgDVnvn2 0.0.0
├─ go-ipfs-util       QmZNVWh8LLjAavuQ2JXuFmuYH3C11xo988vSgp7UQrTRj1 1.0.0
│  └─ go-multihash    QmYf7ng2hG5XBtJA3tN34DQ2GUN5HNksEw1rLDkmr6vGku 0.0.0
│     └─ go-crypto    Qme1boxspcQWR8FBzMxeppqug2fYgYc15diNWmqgDVnvn2 0.0.0
├─ go-log             QmNQynaz7qfriSUJkiEZUrm2Wen1u3Kj9goZzWtrPyu7XR 1.1.2
│  └─ go-net          QmZy2y8t9zQH2a1b8q2ZSLKp17ATuJoCNxxyMFG5qFExpt 0.0.0
│     └─ go-crypto    Qme1boxspcQWR8FBzMxeppqug2fYgYc15diNWmqgDVnvn2 0.0.0
└─ go-libp2p-crypto   QmUEUu1CM8bxBJxc3ZLojAi8evhTr4byQogWstABet79oY 1.0.2
   ├─ go-log          Qmazh5oNUVsDZTs2g59rq8aYQqwpss8tcUWQzor5sCCEuH 0.0.0
   │  └─ go-net       QmZy2y8t9zQH2a1b8q2ZSLKp17ATuJoCNxxyMFG5qFExpt 0.0.0
   │     └─ go-crypto Qme1boxspcQWR8FBzMxeppqug2fYgYc15diNWmqgDVnvn2 0.0.0
   └─ go-ipfs-util    QmZNVWh8LLjAavuQ2JXuFmuYH3C11xo988vSgp7UQrTRj1 1.0.0
      └─ go-multihash QmYf7ng2hG5XBtJA3tN34DQ2GUN5HNksEw1rLDkmr6vGku 0.0.0
         └─ go-crypto Qme1boxspcQWR8FBzMxeppqug2fYgYc15diNWmqgDVnvn2 0.0.0

This tree is a subset of the previous one, filtered to only show leaves that end in the selected package.

The gx deps command also has two other smaller subcommands, dupes and stats. gx deps dupes prints out packages that are imported multiple times with the same name but different hashes. This is useful for spotting different versions of the same package imported in different places in the dependency tree, allowing the user to more easily address the discrepancy. gx deps stats outputs the total number of packages imported (total and unique) as well as the average depth of imports in the tree, which gives you a rough idea of the complexity of your package.
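The figures gx deps stats reports can be computed with a straightforward tree walk. A hypothetical Go sketch (not gx's own implementation), counting total imports, unique packages by hash, and average import depth:

```go
package main

import "fmt"

// dep is a minimal stand-in for a node in a gx dependency tree.
type dep struct {
	hash string
	deps []dep
}

// stats walks the tree and returns the total number of imports, the
// number of unique packages (by hash), and the average depth at which
// a package is imported (root-level imports have depth 1).
func stats(roots []dep) (total, unique int, avgDepth float64) {
	seen := map[string]bool{}
	depthSum := 0
	var walk func(d dep, depth int)
	walk = func(d dep, depth int) {
		total++
		depthSum += depth
		seen[d.hash] = true
		for _, c := range d.deps {
			walk(c, depth+1)
		}
	}
	for _, r := range roots {
		walk(r, 1)
	}
	unique = len(seen)
	avgDepth = float64(depthSum) / float64(total)
	return
}

func main() {
	// Toy tree mirroring the go-base58 / go-multihash shape above.
	base58 := dep{hash: "QmT8re"}
	multihash := dep{hash: "QmYf7n", deps: []dep{base58}}
	util := dep{hash: "QmZNVW", deps: []dep{base58, multihash}}
	total, unique, avg := stats([]dep{base58, multihash, util})
	fmt.Println(total, unique, avg)
}
```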

The gx dependency graph manifesto

I firmly believe that packages are better when:

1. The depth of the dependency tree is minimized.

This means restructuring your code in such a way that flattens (and perhaps widens as a consequence) the tree. For example, in Go, this often means making an interface its own package and putting implementations into their own separate packages. The benefit is that flatter trees are far easier to update: for every level deep a dependency is, you have to update, test, commit, review and merge another package. That's a lot of work, and also a lot of extra room for problems to sneak in.

2. The width of the tree is minimized, but not at the cost of increasing depth.

This should be fairly common sense, but striving to import packages only where they are actually needed helps to improve code quality. Imagine having a helper function in one package, simply because it's convenient to have it there, that depends on a bunch of other imports from elsewhere in the tree. Sure it's nice, and doesn't actually increase the 'total' number of packages you depend on. But now you've created an extra batch of work for you to do any time any of these are updated, and you also now force anyone who wants to import the package with your helper function to also import all those other dependencies.

Adhering to the above two rules should (I'm very open to discussion on this) improve overall code quality, and make your codebase far easier to navigate and work on.

Updating

Updating packages in gx is simple:

$ gx update mypkg QmbH7fpAV1FgMp6J7GZXUV6rj6Lck5tDix9JJGBSjFPgUd

This looks into your package.json for a dependency named mypkg and replaces its hash reference with the one given.

Alternatively, you can just specify the hash you want to update to:

$ gx update QmbH7fpAV1FgMp6J7GZXUV6rj6Lck5tDix9JJGBSjFPgUd

Doing it this way will pull down the package, check its name, and then update that dependency.

Note that by default, this will not touch your code at all, so any references to that hash you have in your code will need to be updated. If you have a language tool (e.g. gx-go) installed, and it has a post-update hook, references to the given package should be updated correctly. If not, you may have to run sed over the package to update everything. The bright side of that is that you are very unlikely to have those hashes sitting around for any other reason so a global find-replace should be just fine.

Publishing and Releasing

Gx by default will not let you publish a package twice if you haven't updated its version. To get around this, you can pass the -f flag. Though this is not recommended, it's still perfectly possible to do.

To update the version easily, use the gx version subcommand. You can either set the version manually:

$ gx version 5.11.4

Or just do a 'version bump':

$ gx version patch
updated version to: 5.11.5
$ gx version minor
updated version to: 5.12.0
$ gx version major
updated version to: 6.0.0
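The bump semantics shown above follow the usual semver rules: patch increments the last number, minor resets patch, major resets both. A hypothetical reimplementation in Go (illustrative, not gx's own code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// bump applies a semver-style "patch", "minor", or "major" bump to a
// version string of the form major.minor.patch.
func bump(version, kind string) string {
	parts := strings.SplitN(version, ".", 3)
	major, _ := strconv.Atoi(parts[0])
	minor, _ := strconv.Atoi(parts[1])
	patch, _ := strconv.Atoi(parts[2])
	switch kind {
	case "major":
		major, minor, patch = major+1, 0, 0
	case "minor":
		minor, patch = minor+1, 0
	case "patch":
		patch++
	}
	return fmt.Sprintf("%d.%d.%d", major, minor, patch)
}

func main() {
	fmt.Println(bump("5.11.4", "patch")) // 5.11.5
	fmt.Println(bump("5.11.5", "minor")) // 5.12.0
	fmt.Println(bump("5.12.0", "major")) // 6.0.0
}
```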

Most of the time, your process will look something like:

$ gx version minor
updated version to: 6.1.0
$ gx publish
package whys-awesome-package published with hash: QmaoaEi6uNMuuXKeYcXM3gGUEQLzbDWGcFUdd3y49crtZK
$ git commit -a -m "gx publish 6.1.0"
[master 5c4d36c] gx publish 6.1.0
 2 files changed, 3 insertions(+), 2 deletions(-)

The release subcommand can be used to automate the above process. gx release <version> will do a version update (using the same inputs as the normal version command), run a gx publish, and then execute whatever you have set in your package.json as your releaseCmd. To get the above git commit flow, you can set it to: git commit -a -m \"gx publish $VERSION\" and gx will replace $VERSION with the newly changed version before executing the git commit.
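The $VERSION substitution amounts to a simple string replacement over the configured releaseCmd. A minimal sketch, assuming plain textual substitution (which matches the git commit example above):

```go
package main

import (
	"fmt"
	"strings"
)

// expandReleaseCmd substitutes the new version into a releaseCmd string
// from package.json. Sketch only; gx's actual expansion may differ in
// edge cases.
func expandReleaseCmd(cmd, version string) string {
	return strings.ReplaceAll(cmd, "$VERSION", version)
}

func main() {
	fmt.Println(expandReleaseCmd(`git commit -a -m "gx publish $VERSION"`, "6.1.0"))
	// git commit -a -m "gx publish 6.1.0"
}
```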

Ignoring files from a publish

You can use a .gxignore file to make gx ignore certain files during a publish. This has the same behaviour as a .gitignore.

Gx also respects a .gitignore file if present, and will not publish any file excluded by it.

Repos

gx supports named packages via user configured repositories. A repository is simply an ipfs object whose links name package hashes. You can add a repository as either an ipns or ipfs path.

Usage

Add a new repo

$ gx repo add myrepo /ipns/QmPupmUqXHBxikXxuptYECKaq8tpGNDSetx1Ed44irmew3

List configured repos

$ gx repo list
myrepo       /ipns/QmPupmUqXHBxikXxuptYECKaq8tpGNDSetx1Ed44irmew3

List packages in a given repo

$ gx repo list myrepo
events      QmeJjwRaGJfx7j6LkPLjyPfzcD2UHHkKehDPkmizqSpcHT
smalltree   QmRgTZA6jGi49ipQxorkmC75d3pLe69N6MZBKfQaN6grGY
stump       QmebiJS1saSNEPAfr9AWoExvpfGoEK4QCtdLKCK4z6Qw7U

Import a package from a repo:

$ gx repo import events

Hooks

gx supports a wide array of use cases by having sane defaults that are extensible based on the scenario the user is in. To this end, gx has hooks that get called during certain operations.

These hooks are language specific, and gx will attempt to make calls to a helper binary matching your language to execute the hooks. For example, when writing go, gx calls gx-go hook <hookname> <args> for any given hook.
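The delegation convention described above (gx-<language> hook <hookname> <args>) can be sketched as follows. This hypothetical helper only constructs the command gx would run; it does not execute it:

```go
package main

import (
	"fmt"
	"os/exec"
)

// hookCommand builds the helper invocation for a hook: the helper binary
// is named gx-<language>, and it receives "hook", the hook name, and any
// hook-specific arguments.
func hookCommand(language, hookName string, args ...string) *exec.Cmd {
	helper := "gx-" + language
	return exec.Command(helper, append([]string{"hook", hookName}, args...)...)
}

func main() {
	cmd := hookCommand("go", "post-update", "QmOldRef", "QmNewHash")
	fmt.Println(cmd.Args) // [gx-go hook post-update QmOldRef QmNewHash]
}
```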

Currently available hooks are:

  • post-import
    • called after a new package is imported and its info written to package.json.
    • takes the hash of the newly imported package as an argument.
  • post-init
    • called after a new package is initialized.
    • takes an optional argument of the directory of the newly init'ed package.
  • pre-publish
    • called during gx publish before the package is bundled up and added to ipfs.
    • currently takes no arguments.
  • post-publish
    • called during gx publish after the package has been added to ipfs.
    • takes the hash of the newly published package as an argument.
  • post-update
    • called during gx update after a dependency has been updated.
    • takes the old package ref and the new hash as arguments.
  • post-install
    • called after a new package is downloaded, during install and import.
    • takes the path to the new package as an argument.
  • install-path
    • called during package installs and imports.
    • sets the location for gx to install packages to.

Package directories

Gx by default will install packages 'globally' in the global install location for your given project type. Global gx packages are shared across all packages that depend on them. The location of this directory can be changed if desired: add a hook to your environment's extension tool named install-path (see above) and gx will use that path instead. If your language does not set a global install path, gx will fall back to installing locally by default, meaning it will create a folder named vendor in the current directory and install things into it.

When running gx install in the directory of your package, gx will recursively fetch all of the dependencies specified in the package.json and save them to the install path specified.

Gx supports both local and global installation paths. Since the default is global, to install locally, use --local or --global=false. The global flag is passed to the install-path hook for your extension code to use in its logic.

Using gx as a Go package manager

If you want (like me) to use gx as a package manager for go, it's pretty easy. You will need the gx go extensions before starting your project:

$ go get -u github.com/whyrusleeping/gx-go

Once that's installed, use gx like normal to import dependencies. You can import code from the vendor directory using:

import "gx/ipfs/<hash>/packagename"

For example, if I have a package foobar, you can import it with gx like so:

$ gx import QmR5FHS9TpLbL9oYY8ZDR3A7UWcHTBawU1FJ6pu9SvTcPa

And then in your go code, you can use it with:

import "gx/ipfs/QmR5FHS9TpLbL9oYY8ZDR3A7UWcHTBawU1FJ6pu9SvTcPa/foobar"

Then simply set the environment variable GO15VENDOREXPERIMENT to 1 and run go build or go install like you normally would. Alternatively, install your dependencies globally (gx install --global) and you can leave off the environment variable part.

See the gx-go repo for more details.

Using gx as a Javascript package manager

Please take a look at gx-js.

Using gx as a package manager for language/environment X

If you want to use gx with a big bunch of repositories/packages please take a look at gx-workspace.

If you want to extend gx to work with any other language or environment, you can implement the relevant hooks in a binary named gx-X, where 'X' is the name of your environment. After that, any package whose language is set to 'X' will call out to that tool's hooks during normal gx operations. For example, a 'go' package would call gx-go hook pre-publish during a gx publish invocation, before the package is actually published. For more information on hooks, check out the hooks section above.

See also the examples directory.

Why is it called gx?

No reason. "gx" stands for nothing.

Getting Involved

If you're interested in gx, please stop by #gx and #ipfs on freenode irc!

from https://github.com/whyrusleeping/gx

--------------------------------------

https://matters.town/@penfarming/388272-%E9%96%B1%E8%AE%80%E7%AD%86%E8%80%95-matters-%E7%B7%9A%E4%B8%8B%E6%B4%BB%E5%8B%95%E9%80%9F%E8%A8%98-ipfs-%E4%BB%A5%E5%8F%8A-ipns-%E6%A6%82%E5%BF%B5%E8%88%87%E5%AF%A6%E4%BD%9C-bafybeib3drfhgheqdfgjciolwisz6a3o5fq3jk72kxsrcxiadyjfnjzj7y
