Bored of syncing nodes? Let EOS Infra take care of that for you 👌

Blocks Archive

Regular backups of the blocks data directory so you can fully sync with the EOS network.

Title                           Download   Size        MD5 Checksum
blocks_2019-06-18-07-02.tar.gz  Wasabi S3  190.78 GiB  169da6fac0d1fba0d8e52f1fd5b1c589
blocks_2019-06-17-07-03.tar.gz  Wasabi S3  190.09 GiB  0d65cdd38d623ccb3b6680966a97d1bb
blocks_2019-06-16-07-03.tar.gz  Wasabi S3  189.48 GiB  f5c0214394f5cfefd3053a726b3ad601
blocks_2019-06-15-07-02.tar.gz  Wasabi S3  188.88 GiB  67325345e24fc041f13f97ba448c2278
blocks_2019-06-14-07-02.tar.gz  Wasabi S3  188.30 GiB  1909e776a8e2ba59cd6978ffab04e09f
blocks_2019-06-13-07-03.tar.gz  Wasabi S3  187.66 GiB  66eebe238da519285d07925f22d32771
blocks_2019-06-12-07-03.tar.gz  Wasabi S3  187.06 GiB  6a3760f5245bad89dc73cb57fde01405
blocks_2019-06-11-07-03.tar.gz  Wasabi S3  186.40 GiB  b95569d6718690b3d9ad7f00589e8d9a

The blocks archives are taken daily from our bank of API nodes. These backups can be used with all node configurations and have been tested on Ubuntu, CentOS and Debian.
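Each download can be checked against its published MD5 checksum before uncompressing. A minimal sketch; the filename and checksum in the example comment are taken from the table above, so substitute the values for the archive you actually downloaded:

```shell
# verify_md5 FILE EXPECTED_MD5 - compare a downloaded archive's md5sum
# against the checksum published in the table above.
verify_md5() {
    actual=$(md5sum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then
        echo "checksum OK"
    else
        echo "checksum MISMATCH for $1" >&2
        return 1
    fi
}

# Example, using the 2019-06-11 row of the table:
# verify_md5 blocks_2019-06-11-07-03.tar.gz b95569d6718690b3d9ad7f00589e8d9a
```

A mismatch usually means a truncated download, so re-fetch the archive before replaying from it.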

How To Use

Download the archive, uncompress it into your data directory and start nodeos with a hard replay, which deletes the state database. nodeos will then validate the blocks, rebuild your state and sync with the live chain.

The example assumes you have used our automation framework to install and configure the EOS software. It includes handy bash helpers that daemonise the nodeos process and capture all output into a single log file.

You can use the one-liner in the example to always download the latest backup. We also have a Blocks API which lists the archives in reverse chronological order, newest first.

# Move to your local eos directory, removing the existing data directories (if relevant)
cd /opt/mainnet
rm -rf blocks state

# Download the latest blocks backup
wget $(wget --quiet "https://eosnode.tools/api/blocks?limit=1" -O- | jq -r '.data[0].s3') -O blocks_backup.tar.gz

# Uncompress to ./blocks
tar xvzf blocks_backup.tar.gz

# Start the chain and replay from the blocks backup
./start.sh --hard-replay --wasm-runtime wabt

# Tail the logs to watch the sync process
tail -f log.txt
2018-08-13T09:42:10.168 initializing chain plugin
2018-08-13T09:42:10.170 Hard replay requested: deleting state database
2018-08-13T09:42:10.171 Recovering Block Log...
2018-08-13T09:42:10.171 Moved existing blocks directory to backup location: '/mnt/blocks-2018-08-13T09:42:10.171'
2018-08-13T09:42:10.172 Reconstructing '/mnt/blocks/blocks.log' from backed up block log
2018-08-13T09:44:33.490 Existing block log was undamaged. Recovered all irreversible blocks up to block number 10887835.
2018-08-13T09:44:33.493 Reversible blocks database was not corrupted. Copying from backup to blocks directory.
2018-08-13T09:44:38.833 Log is nonempty
2018-08-13T09:44:38.833 Index is empty
2018-08-13T09:44:38.833 Reconstructing Block Log Index...
...
2018-08-13T09:47:12.722 No head block in fork db, perhaps we need to replay
2018-08-13T09:47:12.722 Initializing new blockchain with genesis state
2018-08-13T09:47:12.755 existing block log, attempting to replay 10887835 blocks
    140700 of 10887835
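The jq filter in the download one-liner pulls the newest archive's URL out of the Blocks API response. The sketch below runs the same filter against a hand-made sample; only the .data[].s3 field shape is assumed from the one-liner, and the URLs are placeholders, not real archive locations:

```shell
# Illustrative sample of the response shape the one-liner assumes:
# a "data" array ordered newest first, each entry carrying an "s3"
# download URL. The URLs below are placeholders.
sample='{"data":[
  {"s3":"https://example.com/blocks_2019-06-18-07-02.tar.gz"},
  {"s3":"https://example.com/blocks_2019-06-17-07-03.tar.gz"}
]}'

# '.data[0].s3' selects the newest archive; '.data[].s3' would list all.
echo "$sample" | jq -r '.data[0].s3'
```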

How Long To Replay?

Once you kick off the hard replay, the sync will take hours; exactly how long depends on your system configuration. The replay process is mostly CPU bound, and because nodeos is single threaded the important factor is your CPU clock speed, not the overall number of cores.

When you replay, you should follow the nodeos log. The snippet above shows an example of the log messages you should see during a hard replay. After the initial validation you get progress output that gives a better indication of how long the replay will take.
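Those progress lines can be turned into a rough percentage with a small helper. A sketch, assuming the "X of Y" log format shown in the example above (log.txt is the log file name from the earlier example):

```shell
# replay_progress LOGFILE - report percent complete from the most recent
# "X of Y" progress line, e.g. "140700 of 10887835".
replay_progress() {
    grep -E '^[[:space:]]*[0-9]+ of [0-9]+' "$1" | tail -n 1 |
        awk '{printf "replayed %d of %d blocks (%.1f%%)\n", $1, $3, 100*$1/$3}'
}

# Example: replay_progress log.txt
```

Note that the reported percentage is relative to the blocks in the backup; once the replay completes, nodeos still has to sync the blocks produced since the archive was taken.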