Database
Nethermind uses RocksDB to store its data, including the chain state. By default, the database is located in the
same directory as the Nethermind executable. You can change the database location using the `-d, --baseDbPath`
command-line option.
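For example, to keep the database on a dedicated disk, you could start the node like this (the executable name and path are illustrative and may differ in your setup):

```bash
# Store the database under /data/nethermind_db instead of the default location
# (executable name and path are illustrative; adjust them to your setup)
./nethermind --config mainnet --baseDbPath /data/nethermind_db
```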
Database directory structure
Directory | Description |
---|---|
blockInfos | Information about blocks at each level of the block tree (canonical chain and branches) |
blocks | Block bodies (block transactions and uncles) |
bloom | Bloom indices for fast log searches |
canonicalHashTrie | LES protocol-related data |
code | Contract bytecodes |
discoveryNodes | Peers discovered via the discovery protocol, used for quick peering after restarts. You can copy this database between nodes to speed up peering (see the example below the table). |
headers | Block headers only |
pendingTx | The second-level cache of pending transactions (the mempool); the first level is in memory. Wiped on each restart. |
peers | Additional sync peer information, such as peer reputation. You can copy this database between nodes to speed up peering on a fresh sync. |
receipts | Transaction receipts |
state | Blockchain state including accounts and contract storage (Patricia trie nodes) |
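As noted above, the discoveryNodes and peers databases can be copied to a new node to speed up peering. A minimal sketch using scp (the host and paths are hypothetical; stop both nodes before copying):

```bash
# Copy the peering databases from an existing node to a new one
# (host and paths are hypothetical; stop both nodes first)
scp -r /var/lib/nethermind/nethermind_db/mainnet/discoveryNodes \
       /var/lib/nethermind/nethermind_db/mainnet/peers \
       user@new-node:/var/lib/nethermind/nethermind_db/mainnet/
```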
You can use rsync between your nodes to clone the database. (One of our users copied the entire 4.5 TB archive state this way while the node was running and stopped the node only for the very last stage of rsync.) You can also copy the database between Linux, Windows, and macOS.
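A minimal sketch of that workflow (the host and paths are hypothetical):

```bash
# First pass while the source node is still running; this copies the bulk of the data
rsync -avh --delete /var/lib/nethermind/nethermind_db/mainnet/ \
      user@new-node:/var/lib/nethermind/nethermind_db/mainnet/

# Stop the node, then run the same command again; this final pass only
# transfers what changed since the first pass
rsync -avh --delete /var/lib/nethermind/nethermind_db/mainnet/ \
      user@new-node:/var/lib/nethermind/nethermind_db/mainnet/
```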
Database size
Below is a breakdown of the database directories for each supported chain. For reference, the sizes listed were measured using the default configuration for each chain.
Mainnet
- state: 156 GB
- receipts: 204 GB
- blocks: 584 GB
- bloom: 6.3 GB
- headers: 8.8 GB
- code: 4.6 GB
- blobTransactions: 1.4 GB
- ...
- Total: 965 GB

Sepolia
- state: 39 GB
- receipts: 37 GB
- blocks: 251 GB
- bloom: 2.0 GB
- headers: 2.2 GB
- code: 6.0 GB
- blobTransactions: 496 MB
- ...
- Total: 337 GB

Holesky
- state: 17 GB
- receipts: 12 GB
- blocks: 50 GB
- bloom: 648 MB
- headers: 828 MB
- code: 434 MB
- blobTransactions: 760 MB
- ...
- Total: 81 GB

Gnosis
- state: 64 GB
- receipts: 215 GB
- blocks: 196 GB
- bloom: 9.0 GB
- headers: 10 GB
- code: 658 MB
- blobTransactions: 75 MB
- ...
- Total: 497 GB

Chiado
- state: 2.6 GB
- receipts: 1.4 GB
- blocks: 8.5 GB
- bloom: 2.9 GB
- headers: 2.2 GB
- code: 60 MB
- blobTransactions: 825 MB
- ...
- Total: 20 GB

Energyweb
- state: 27 GB
- receipts: 4.4 GB
- blocks: 24 GB
- bloom: 9.6 GB
- headers: 6.8 GB
- code: 14 MB
- blobTransactions:
- ...
- Total: 74 GB

Volta
- state: 34 GB
- receipts: 8.3 GB
- blocks: 32 GB
- bloom: 8.9 GB
- headers: 6.8 GB
- code: 94 MB
- blobTransactions:
- ...
- Total: 92 GB
Reducing database size
The Nethermind database can grow substantially over time from its initial size of approximately 650 GB. As a result, many node setups run on 1 TB disks, and even with settings that slow the growth rate, those disks may eventually run out of free space.
The current options to reduce the database size are resyncing the node from scratch and full pruning (optionally tuned with a memory budget).
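For instance, full pruning can be configured through the Pruning options. Below is a sketch of a configuration that triggers pruning automatically when free disk space runs low (the values are illustrative; verify the option names against your Nethermind version):

```bash
# Enable in-memory plus full pruning, trigger it when free disk space drops
# below ~250 GB, and give the pruning process a 4 GB memory budget
# (values are illustrative; verify option names for your version)
./nethermind --config mainnet \
  --Pruning.Mode Hybrid \
  --Pruning.FullPruningTrigger VolumeFreeSpace \
  --Pruning.FullPruningThresholdMb 256000 \
  --Pruning.FullPruningMemoryBudgetMb 4000
```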
The table below presents a short comparison of these methods, including possible fine-tuning of each. The data was collected from a node running on a machine with the following specifications:
- Nethermind: v1.18.0
- Consensus client: Lighthouse
- CPU: AMD EPYC 7713 (16 cores allocated for the VM)
- RAM: 64 GB
- Disk size: 1.2 TB
- Disk IOPS: 70,000 to 80,000
Metric | Resync | Pruning | Pruning with memory budget (4 GB) |
---|---|---|---|
Execution time | ~4 h | ~24 h | ~12 h |
Minimum free disk space | N/A: a resync can be started even with zero free space (though you should avoid that situation). | 250 GB | 250 GB |
Attestation rate drop | 100%: attestation rewards are lost or highly reduced during that time. | 5–10% during that time | N/A |
Average block processing time of new blocks during the process | N/A: new blocks are processed once the state is synced but remain significantly slower until old bodies and receipts are downloaded; afterward, about 0.35 s on average. | 0.7 s | 1.0 s |
Is the node online during the process? | No, unless the state is already synced. | Yes. The node follows the chain, and all modules remain enabled. | Yes. The node follows the chain, and all modules remain enabled. |
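If you prefer to start full pruning on demand, it can be triggered over JSON-RPC, assuming Pruning.FullPruningTrigger is set to Manual and the Admin module is enabled on your JSON-RPC endpoint (the port below is the default 8545):

```bash
# Trigger a manual full pruning run via the Admin JSON-RPC module
# (assumes Pruning.FullPruningTrigger=Manual and the module is enabled)
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","id":1,"method":"admin_prune","params":[]}'
```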
The command used for testing disk IOPS was as follows:
```bash
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
    --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw
```