Bored of syncing nodes? Let EOS Infra take care of that for you 👌

Blocks Archive

Regular backups of the blocks data directory so you can fully sync with the EOS network.

Title                           Download    Size        MD5 Checksum
blocks_2019-07-16-07-03.tar.gz  Wasabi S3   209.71 GiB  4dc1521cb334576991e7b55d1b9116d6
blocks_2019-07-14-07-03.tar.gz  Wasabi S3   208.2 GiB   439ac0eecaf28ffef3b7c3582d615a13
blocks_2019-07-13-07-04.tar.gz  Wasabi S3   207.56 GiB  69d3856b0f4de741f1f3512330bc69dd
blocks_2019-07-12-07-03.tar.gz  Wasabi S3   206.88 GiB  08cb55cfb4b81514cb33e4316ecb2611
blocks_2019-07-11-07-03.tar.gz  Wasabi S3   206.24 GiB  940c729630da773839db9ff30a704b10
blocks_2019-07-09-07-03.tar.gz  Wasabi S3   204.95 GiB  905b990170cbd602c1ae93e338adc721

The blocks archives are taken daily from our bank of API nodes. These backups can be used across all node configurations and have been tested on Ubuntu, CentOS and Debian.

How To Use

Download the archive, uncompress it into your data directory and start nodeos with a hard replay, which deletes the state database. Nodeos will then validate the blocks, rebuild your state and sync with the live chain.
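
Before uncompressing, it is worth checking the download against the MD5 checksum published in the table above. For example, using the newest archive listed:

# Verify the download against the MD5 checksum from the table above
md5sum blocks_2019-07-16-07-03.tar.gz
# Expected output: 4dc1521cb334576991e7b55d1b9116d6  blocks_2019-07-16-07-03.tar.gz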

The example assumes you have used our automation framework to install and configure the EOS application. It includes handy bash helpers that daemonise the nodeos process and capture all output in a single log file.
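
If you are not using the framework, a rough sketch of an equivalent raw nodeos invocation is shown below. The directory paths and log file name are assumptions, and it assumes the helper maps --hard-replay onto nodeos's --hard-replay-blockchain option; it is not a copy of what start.sh actually runs.

# Sketch of a raw nodeos invocation (paths and log file name are assumptions)
cd /opt/mainnet
nodeos --data-dir /opt/mainnet --config-dir /opt/mainnet \
       --hard-replay-blockchain --wasm-runtime wabt \
       >> log.txt 2>&1 &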

You can use the one-liner in the example to always download the latest backup. We also have a Blocks API which lists the archives in reverse chronological order, newest first.
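
As a sketch of how you might use the API directly (assuming the response has the same shape the one-liner below relies on, with each entry exposing an s3 download URL), you can list the most recent backups like this:

# List the download URLs of the five most recent blocks backups
wget --quiet "https://eosnode.tools/api/blocks?limit=5" -O- | jq -r '.data[].s3'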

# Move to your local eos directory, removing the existing data directories (if relevant)
cd /opt/mainnet
rm -rf blocks state

# Download the latest blocks backup
wget $(wget --quiet "https://eosnode.tools/api/blocks?limit=1" -O- | jq -r '.data[0].s3') -O blocks_backup.tar.gz

# Uncompress to ./blocks
tar xvzf blocks_backup.tar.gz

# Start the chain and replay from the blocks backup
./start.sh --hard-replay --wasm-runtime wabt

# Tail the logs to watch the sync process
tail -f log.txt
2018-08-13T09:42:10.168 initializing chain plugin
2018-08-13T09:42:10.170 Hard replay requested: deleting state database
2018-08-13T09:42:10.171 Recovering Block Log...
2018-08-13T09:42:10.171 Moved existing blocks directory to backup location: '/mnt/blocks-2018-08-13T09:42:10.171'
2018-08-13T09:42:10.172 Reconstructing '/mnt/blocks/blocks.log' from backed up block log
2018-08-13T09:44:33.490 Existing block log was undamaged. Recovered all irreversible blocks up to block number 10887835.
2018-08-13T09:44:33.493 Reversible blocks database was not corrupted. Copying from backup to blocks directory.
2018-08-13T09:44:38.833 Log is nonempty
2018-08-13T09:44:38.833 Index is empty
2018-08-13T09:44:38.833 Reconstructing Block Log Index...
...
2018-08-13T09:47:12.722 No head block in fork db, perhaps we need to replay
2018-08-13T09:47:12.722 Initializing new blockchain with genesis state
2018-08-13T09:47:12.755 existing block log, attempting to replay 10887835 blocks
    140700 of 10887835

How Long To Replay?

Once you kick off the hard replay, the sync will take hours; exactly how long depends on your system configuration. The replay process is mostly CPU-bound, and because nodeos is single-threaded, the important factor is your CPU clock speed rather than the total number of cores.
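
For a quick read on the clock speed of the box you are replaying on, something like the following works on the distributions mentioned above (lscpu ships with util-linux):

# Check the current and maximum CPU clock speed
lscpu | grep -i 'mhz'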

When you replay, you should follow the nodeos log. The snippet above shows an example of the log messages you should see when you execute the hard replay. After the initial validation you get a progress output that gives a better indication of how long the replay will take.
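
The progress lines have the form "<blocks replayed> of <total blocks>", so you can pull a rough percentage out of the log. A small sketch, assuming the log.txt location from the example above:

# Print the latest replay progress as a percentage
grep -E '^[[:space:]]*[0-9]+ of [0-9]+' log.txt | tail -n 1 | awk '{printf "%.1f%% replayed\n", 100 * $1 / $3}'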