IPFS Multinode Private Cluster

Ravinayag
4 min read · Nov 9, 2022


IPFS — A Peer-to-Peer distributed file system

A Guide to creating a private distributed network to enable secure storage, sharing, and data replication on AWS.
Here’s a walkthrough of what I did to get the nodes running. All feedback/comments/questions are very much appreciated! If you decide to give this a run-through, let us know how it goes so that we can patch up any confusing parts or mistakes in the documentation.

This guide includes 5 parts:

  • Deploy IPFS Nodes over EC2
  • Configure the s3 bucket as a datastore
  • Deploy IPFS Swarm/Cluster
  • Deploy ipfs-http-client react app for file upload
  • Securing the deployment using TLS + (Secure) Websockets

1. Deploy IPFS Nodes over EC2

Create a security group (SG) named ipfs-nodes and open ports 22, 80, 443, and 4001.
Install ipfs:

  • We are going to build the ipfs binary from the source repo:
git clone https://github.com/ipfs/kubo.git
cd kubo
export GO111MODULE=on
go get github.com/ipfs/go-ds-s3/plugin@latest
go get github.com/aws/aws-sdk-go@v1.27.0
echo -en "\ns3ds github.com/ipfs/go-ds-s3/plugin 0" >> plugin/loader/preload_list
make install
go mod tidy
make install
cmd/ipfs/ipfs version
cp cmd/ipfs/ipfs /usr/local/bin/

Above, we cloned the kubo (go-ipfs) repository and added the s3 plugin to the build. The first make install will throw an error; we fix it with go mod tidy and rerun make install. This generates the ipfs binary at cmd/ipfs/ipfs inside the kubo directory, which you can then copy to the execution path. If you can see your ipfs version, you are good to proceed to the next steps.

Initialise the server: ipfs init -p server

ipfs creates its directory under your home folder, e.g. /home/ubuntu. Next we will create a systemd service and make a few config changes. Use your favorite editor; for me it's vim.

$ sudo vi /etc/systemd/system/ipfs.service

And add to it the following config:
[Unit]
Description=IPFS server Daemon
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
Type=simple
ExecStart=/usr/local/bin/ipfs daemon --enable-namesys-pubsub --enable-gc
Restart=always
User=ubuntu
Group=ubuntu
[Install]
WantedBy=multi-user.target

Enable the service:

sudo systemctl daemon-reload
sudo systemctl enable ipfs

Do the config changes.

Edit the config (vi ~/.ipfs/config) and set:

Datastore.StorageMax to 100GB
Addresses.Gateway to /ip4/0.0.0.0/tcp/8080

Alternatively, you can change them by running commands directly:

Set max storage with $ ipfs config Datastore.StorageMax 100GB

Enable gateway: $ ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080

2. Configure the S3 bucket as a datastore

Create a dedicated IAM user to access the S3 bucket. Name the bucket s3ipfsdatastore and give this user read/write permission on it. While creating the IAM user, note down the access credentials; we need them for the configuration below.
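For reference, a minimal IAM policy for that user might look like the sketch below. This is an assumption about your setup, not a prescribed policy: it grants only list/read/write/delete on the s3ipfsdatastore bucket named above; adjust the ARNs to your bucket.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::s3ipfsdatastore"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::s3ipfsdatastore/*"
    }
  ]
}
```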

      {
        "child": {
          "type": "s3ds",
          "region": "us-east-1",
          "bucket": "s3ipfsdatastore",
          "accessKey": "",
          "secretKey": ""
        },
        "mountpoint": "/blocks",
        "prefix": "s3.datastore",
        "type": "measure"
      },

Edit the config file again (vim ~/.ipfs/config). There should be 2 entries under Datastore.Spec.mounts; replace the first with the content above, and make sure the region, bucket name, accessKey, and secretKey you created earlier are set.
Now edit ~/.ipfs/datastore_spec (vim ~/.ipfs/datastore_spec) to match the new datastore, as below:

{"mounts":[{"bucket":"$bucketname","mountpoint":"/blocks","region":"us-east-1","rootDirectory":""},{"mountpoint":"/","path":"datastore","type":"levelds"}],"type":"mount"}
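Since datastore_spec is a single line of JSON and a typo in it stops the daemon from starting, it is worth validating before the restart. A small sketch (the /tmp path is just for illustration; on a real node point SPEC at ~/.ipfs/datastore_spec, and the bucket name assumes the s3ipfsdatastore bucket from above):

```shell
# Validate the datastore_spec JSON before restarting ipfs.
SPEC=/tmp/datastore_spec        # on a real node: SPEC=~/.ipfs/datastore_spec
cat > "$SPEC" <<'EOF'
{"mounts":[{"bucket":"s3ipfsdatastore","mountpoint":"/blocks","region":"us-east-1","rootDirectory":""},{"mountpoint":"/","path":"datastore","type":"levelds"}],"type":"mount"}
EOF
# json.tool exits non-zero on malformed JSON, so this catches typos early.
python3 -m json.tool "$SPEC" > /dev/null && echo "datastore_spec OK"
```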

Now start ipfs and check the status of the service:

sudo systemctl start ipfs
sudo systemctl status ipfs
wget https://4.bp.blogspot.com/-iSvQAJWAwRU/UQfatfJDchI/AAAAAAAAhhk/GvmjRzB1Rug/s1600/Rocky-Mountains-Jasper-Alberta-Canada-1024x1280.jpg
ipfs add Rocky-Mountains-Jasper-Alberta-Canada-1024x1280.jpg

You get the CID (content hash) of the uploaded file. Note it down, open the browser, and check:
http://1.2.3.1:8080/ipfs/QmSXkhhYJskgQpUaPFpxrqMvaMBcJFgkTxGXAJPYaAfUCJ

Check your S3 bucket; the new file should have been added there.

3. Deploy IPFS Swarm/Cluster

In my opinion, IPFS Cluster is the successor of the Swarm deployment. However, I still need to validate that statement, because the steps and methods are similar (like Debian vs Ubuntu). If you have other thoughts, please share them in the comments.

In this tutorial, we use CRDT consensus for the cluster; see the IPFS Cluster documentation for more details.

IPFS Cluster needs these ports to be open:

9094 > Used for HTTP API endpoint
9095 > Used for IPFS proxy endpoint
9096 > Used for Cluster swarm, communication between cluster nodes
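The steps above can be sketched with the aws CLI. This is an assumption about your setup: SG_ID and PEER_CIDR are placeholders for your ipfs-nodes security group and your nodes' network. The commands are printed rather than executed so you can review them first; 9096 must be reachable between the cluster nodes, while 9094/9095 are usually kept restricted.

```shell
# Print the ingress rules for the cluster ports (review, then run or pipe to sh).
SG_ID="sg-0123456789abcdef0"    # placeholder: your ipfs-nodes security group id
PEER_CIDR="10.0.0.0/16"         # placeholder: restrict to your nodes' network
for PORT in 9094 9095 9096; do
  echo aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$PORT" --cidr "$PEER_CIDR"
done
```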

For reference, my instances are:

Node 1 : 1.2.3.1  > Ip Address
Node 2 : 1.2.3.2
Node 3 : 1.2.3.3

Repeat the same steps above for node 2 & node 3.

We need to obtain the ipfs-cluster-service, ipfs-cluster-ctl and ipfs-cluster-follow binaries.
Download all three binaries from the links below. Please mind the versions; v1.0.4 was the latest while writing this doc.

  • Place the binaries where they can run unattended. IPFS Cluster should be installed and run alongside ipfs (go-ipfs) in the executable path /usr/local/bin:
https://dist.ipfs.tech/ipfs-cluster-service/v1.0.4/ipfs-cluster-service_v1.0.4_linux-amd64.tar.gz
https://dist.ipfs.tech/ipfs-cluster-ctl/v1.0.4/ipfs-cluster-ctl_v1.0.4_linux-amd64.tar.gz
https://dist.ipfs.tech/ipfs-cluster-follow/v1.0.4/ipfs-cluster-follow_v1.0.4_linux-amd64.tar.gz
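Since all three tarballs follow the same URL pattern, a small loop can generate the download-and-install commands. A sketch, assuming linux-amd64 and v1.0.4 as above; the commands are printed so you can review them before running:

```shell
# Print wget/tar/cp commands for the three cluster binaries (review, then run).
VER=v1.0.4
for BIN in ipfs-cluster-service ipfs-cluster-ctl ipfs-cluster-follow; do
  TARBALL="${BIN}_${VER}_linux-amd64.tar.gz"
  echo "wget https://dist.ipfs.tech/${BIN}/${VER}/${TARBALL}"
  echo "tar xzf ${TARBALL} && sudo cp ${BIN}/${BIN} /usr/local/bin/"
done
```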

When you run for the first time, you need to generate the cluster key, store it in a CLUSTER_SECRET variable, and run with the init option.

$ export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')   
$ echo $CLUSTER_SECRET
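A malformed secret makes ipfs-cluster-service refuse to start, so it is worth sanity-checking before copying it around: 32 random bytes hex-encoded must be exactly 64 lowercase hex characters. A quick check (it reuses an existing CLUSTER_SECRET, or generates one if the variable is unset):

```shell
# Generate the secret only if it is not already set, then verify its format.
: "${CLUSTER_SECRET:=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')}"
if printf '%s\n' "$CLUSTER_SECRET" | grep -Eq '^[0-9a-f]{64}$'; then
  echo "CLUSTER_SECRET OK"
else
  echo "CLUSTER_SECRET malformed" >&2
fi
```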

Copy this secret key and update it in the system service file. Create a systemd service like the ipfs one above, or pass the environment variable any other way you wish; when the cluster starts, it looks for this variable to initiate or join. The same secret has to be passed on all the nodes participating in the cluster.

$ sudo vi /etc/systemd/system/ipfs-cluster.service

And add to it the following config:
[Unit]
Description=IPFS-Cluster server Daemon
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
Type=simple
ExecStart=/bin/bash -lc 'CLUSTER_SECRET=21fsdfs09788ab9e0bf7gsds684d375db80064efcfd3easfd34524d1649394289c /usr/local/bin/ipfs-cluster-service daemon'
Restart=always
User=ubuntu
Group=ubuntu
[Install]
WantedBy=multi-user.target

Enable the service:

$ sudo systemctl daemon-reload
$ sudo systemctl enable ipfs-cluster
$ ipfs-cluster-service init

This generates the respective files and folders in your home path, in my case /home/ubuntu. The cluster id is generated by the above command; you can use the command below to see the ipfs-cluster id:

$ ipfs-cluster-ctl id
13D3KooWF8M8fHEM8fHE6VqNWvEQRUFXUTBYFixcuHJRGsd7i7QnEN

Up to here, do the same steps on all nodes, i.e. node 2 and node 3, except generating the CLUSTER_SECRET: reuse the same secret on every node.
Note: the cluster peer id is unique for each cluster node.

To get the ipfs-cluster id

ipfs-cluster-ctl id

To list the cluster peers

ipfs-cluster-ctl peers ls

— continuing in part 2


Ravinayag

Blockchain enthusiast & Research | DevOps Explorer | Hyperledger Explorer