Running Multi-miner Container
Architecture
The architecture of a multi-bucket container installation can be illustrated as below:
watchTower: When the local bucket image differs from the official bucket image, watchtower automatically pulls the new official image, creates a new storage node, and then deletes the old storage node.
bucket: A storage node. Multiple storage nodes communicate with each other via P2P. The ports configured in the example config.yaml are 15001 and 15002.
chain: A chain node. Storage nodes query block information through the chain node's port 9944 by default; chain nodes synchronize data with each other through the default port 30336.
System requirements
Minimum Configuration Requirements:
Each storage node requires at least 4GB of RAM and 1 processor, and the chain node requires at least 2GB of RAM and 1 processor.
At least 10GB of RAM and 3 processors are required to run 2 buckets and 1 chain node at the same time (2 × 4GB + 2GB of RAM, and 2 + 1 processors).
Method 1: Run multi-buckets containers with admin client
Storage environment requirements
The installation places certain requirements on the host's storage environment, and different configurations are needed depending on the disk layout.
Multiple Disks
As shown in the figure below, /dev/sda is the system disk, while /dev/sdb and /dev/sdc are the data disks. Users can directly partition the data disks and create file systems on them, and finally mount the file systems to the working directories of the buckets.
Repeat the above steps to partition /dev/sdc and create a file system, then mount it to the directory /mnt/cess_storage2.
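The partition/format/mount steps described above can be sketched as below. This is an illustrative outline only, run as root; the device names follow the example in this section, and you should adapt them to your own disks.

```sh
# Create a single GPT partition spanning /dev/sdb (adjust device names!)
parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
# Create a file system on the new partition
mkfs.ext4 /dev/sdb1
# Mount it at the bucket's working directory
mkdir -p /mnt/cess_storage1
mount /dev/sdb1 /mnt/cess_storage1
# Repeat for /dev/sdc, mounting at /mnt/cess_storage2
# (add entries to /etc/fstab so the mounts persist across reboots)
```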
If a disk is divided into many partitions, then when that disk is damaged, all storage nodes using its partitions will be affected.
Single Disk
This procedure is suitable for environments with only one system disk.
Scene 1
As shown in the following example, there is only one 50GB system disk, and the Last sector value of partition /dev/sda3 on disk /dev/sda is already at its maximum (the 50GB disk cannot be partitioned any further).
As shown above, the current system kernel is using this partition, so it cannot be modified to build the running environment required for multi-bucket.
If the partition does not take up the entire disk and there is still space available for partitioning, you can configure partitions by referring to the Multiple Disks method above. (In this situation, the multi-bucket deployment will depend on this single disk.)
Scene 2
As shown in the figure below, the current environment has only one system disk, /dev/nvme0n1, with about 1.8TB of storage space, divided into three partitions: /dev/nvme0n1p1, /dev/nvme0n1p2, and /dev/nvme0n1p3.
The current system relies on the virtual logical volume /dev/ubuntu-vg/ubuntu-lv created in the third partition, /dev/nvme0n1p3. Since this logical volume occupies only 100GB of storage space, you can configure a multi-bucket environment by using lvm to create multiple additional logical volumes on the remaining space.
Users can create multiple logical volumes on a single disk with lvm and mount them at different diskPaths, but when the disk is damaged, all storage nodes relying on those volumes will be affected!
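The lvm approach described above might look like the following sketch, assuming the volume group ubuntu-vg from the scenario has free space remaining; the volume names and sizes are illustrative, run as root, and must be adapted to your environment.

```sh
# Create two additional logical volumes in the existing volume group
lvcreate -L 800G -n cess-lv1 ubuntu-vg
lvcreate -L 800G -n cess-lv2 ubuntu-vg
# Create file systems on the new volumes
mkfs.ext4 /dev/ubuntu-vg/cess-lv1
mkfs.ext4 /dev/ubuntu-vg/cess-lv2
# Mount each logical volume at its own diskPath
mkdir -p /mnt/cess_storage1 /mnt/cess_storage2
mount /dev/ubuntu-vg/cess-lv1 /mnt/cess_storage1
mount /dev/ubuntu-vg/cess-lv2 /mnt/cess_storage2
```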
1. Download and install cess-multi-bucket client
2. Customize your own configuration
After executing the above installation command, customize your own config file at /opt/cess/cess-multibucket-admin/config.yaml.
UseSpace: Storage capacity of the storage node, measured in GB.
UseCpu: Number of logical cores used by the storage node.
port: The port a storage node uses to communicate with the other nodes; each storage node's port must be different and must not be occupied by another process.
diskPath: Absolute path where the storage node runs; a file system must be mounted at this path.
earningsAcc: Used to receive mining rewards. Get earningsAcc and mnemonic
stakingAcc: Used to pay for staking TCESS. 4000 TCESS is required for every 1TB of storage space provided. If this field is deleted from config.yaml, the stake is paid by earningsAcc.
mnemonic: Account mnemonic, consisting of 12 words, with each storage node requiring a different mnemonic.
chainWsUrl: By default, the local RPC node is used for data synchronization. buckets[].chainWsUrl takes priority over node.chainWsUrl.
backupChainWsUrls: Backup RPC nodes, which can be official RPC nodes or other RPC nodes you know. buckets[].backupChainWsUrls takes priority over node.backupChainWsUrls.
You can run multi-bucket on a single disk by using lvm and mounting each logical volume at a different diskPath, but if that single disk fails, all storage nodes depending on it will be affected!
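Putting the fields above together, a config.yaml might look roughly like the sketch below. The field names come from the descriptions in this section, but the exact layout of the real file may differ, and all values (accounts, URLs, sizes) are placeholders.

```yaml
# Illustrative sketch only -- check the installed config.yaml for the real layout
node:
  chainWsUrl: "ws://127.0.0.1:9944"          # local RPC node (default)
  backupChainWsUrls:
    - "wss://example-rpc.cess.network/ws/"   # placeholder backup RPC node
buckets:
  - port: 15001                # must be unique and unoccupied
    UseSpace: 1000             # storage capacity, in GB
    UseCpu: 1                  # logical cores used by this node
    diskPath: "/mnt/cess_storage1"
    earningsAcc: "cXxxx..."    # receives mining rewards
    stakingAcc: "cXxxx..."     # pays the TCESS stake (optional)
    mnemonic: "word1 word2 ... word12"   # must differ per storage node
  - port: 15002
    UseSpace: 1000
    UseCpu: 1
    diskPath: "/mnt/cess_storage2"
    earningsAcc: "cXxxx..."
    mnemonic: "another twelve word mnemonic ..."
```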
3. Generate configuration
The following command generates a config.yaml for each storage node, as well as a docker-compose.yaml, based on the file located at /opt/cess/cess-multibucket-admin/config.yaml.
Each bucket's configuration is generated at $diskPath/bucket/config.yaml. For example, bucket1's configuration is generated at /mnt/cess_storage1/bucket/config.yaml.
docker-compose.yaml is generated at /opt/cess/cess-multibucket-admin/build/docker-compose.yaml.
If you want other servers to access the local RPC node, add --rpc-external to services.chain.command in /opt/cess/cess-multibucket-admin/build/docker-compose.yaml, and also add --rpc-cors all to services.chain.command to allow CORS (Cross-Origin Resource Sharing):
'--rpc-external'
'--rpc-cors'
'all'
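In the generated compose file, the addition might look like the fragment below. Only the two flags are prescribed by this guide; the surrounding shape of services.chain.command is an assumption about the generated file.

```yaml
# Fragment of build/docker-compose.yaml (shape assumed; flags from this guide)
services:
  chain:
    command:
      # ... existing chain arguments ...
      - '--rpc-external'   # allow other servers to reach the RPC node
      - '--rpc-cors'
      - 'all'              # allow Cross-Origin Resource Sharing
```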
4. Installation
Install all services
Install the watchTower, RPC, and multi-bucket services
Skip installing the RPC node
If an official RPC node or another known RPC node is configured in the configuration file, you can skip starting a local RPC node with --skip-chain.
5. Common Operations
Stop all services
Stop one or more specific services
For example, execute sudo cess-multibucket-admin stop bucket_1 bucket_2 to stop bucket_1 and bucket_2.
Stop and remove all services
Stop and remove one or more specific services
For example, execute sudo cess-multibucket-admin down bucket_1 to remove bucket_1.
Restart all services
Restart one or more specific services
For example, execute sudo cess-multibucket-admin restart bucket_1 to restart bucket_1.
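Following the pattern of the examples above, running the same subcommands without service names applies them to all services (an assumption inferred from the "all services" headings, not a verified CLI reference):

```sh
sudo cess-multibucket-admin stop       # stop all services
sudo cess-multibucket-admin down       # stop and remove all services
sudo cess-multibucket-admin restart    # restart all services
```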
Get version information
Check services status
Pull images
Check disk usage
View all storage nodes' status
Please allow several hours for data synchronization when a storage node runs for the first time.
Increase all storage nodes' stake
For example, execute sudo cess-multibucket-admin buckets increase staking 4000 to increase every node's stake.
Increase a specific storage node's stake
For example, execute sudo cess-multibucket-admin buckets increase staking bucket_1 4000.
Query all storage nodes' rewards
Claim all storage nodes' rewards
Claim a specific storage node's reward
For example, execute sudo cess-multibucket-admin buckets claim bucket_1.
Update an earnings account
For example, execute sudo cess-multibucket-admin buckets update earnings cXxxx.
Exiting the CESS network takes several hours, and forcing an exit partway through will cause the storage node to be punished.
Make all storage nodes exit the CESS network
Make a specific storage node exit the CESS network
For example, execute sudo cess-multibucket-admin buckets exit bucket_1.
Withdraw all storage nodes' stake
After all storage nodes have exited the CESS network (see above), run
Withdraw a specific storage node's stake
After the node has exited the CESS network (see above), run
Remove the data in chain and bucket
6. Upgrade the cess-multibucket-admin client
Upgrade the cess-multibucket-admin client by executing the command below:
After the program update is completed, please regenerate your configuration as below:
Options help: