
Database

Nethermind uses the RocksDB database for data storage. By default, the database is located in the same directory as the Nethermind executable. You can change the database location with the -d, --baseDbPath command-line option.
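For example, a minimal invocation that points the database at a dedicated data disk might look like this (assuming the mainnet configuration; the path is illustrative, so adjust it to your setup):

nethermind -c mainnet --baseDbPath /data/nethermind_db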

Database directory structure

| Directory | Description |
| --- | --- |
| blockInfos | Information about blocks at each level of the block tree (canonical chain and branches) |
| blocks | Block bodies (block transactions and uncles) |
| bloom | Bloom indices for fast log searches |
| canonicalHashTrie | LES protocol-related data |
| code | Contract bytecodes |
| discoveryNodes | Peers discovered via the discovery protocol - used for quick peering after restarts (you can copy this DB between nodes to speed up peering) |
| headers | Block headers only |
| pendingTx | The second-level cache of pending transactions/mempool (the first level is in memory). Wiped on each restart. |
| peers | Additional sync peer information (like peer reputation) - you can copy this DB between nodes to speed up peering on a fresh sync |
| receipts | Transaction receipts |
| state | Blockchain state, including accounts and contract storage (Patricia trie nodes) |

You can use rsync between your nodes to clone the database. (One of our users copied the entire 4.5 TB archive state this way while the node was running and stopped the node only for the very last stage of the rsync.) You can also copy the database between Linux, Windows, and macOS.
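As a rough sketch of that two-stage approach (hostname and paths are illustrative), you can do the bulk copy while the source node is still running and stop it only for a short final pass:

rsync -a --progress source-node:/data/nethermind_db/ /data/nethermind_db/
# Stop Nethermind on the source node, then run a final pass to catch up and drop stale files:
rsync -a --delete source-node:/data/nethermind_db/ /data/nethermind_db/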

Database size

Below is a detailed breakdown of the database directories and their respective sizes. For reference, the sizes listed are based on data from July 2023 and were measured using the standard configuration.

  • state: 153 GB
  • receipts: 196 GB
  • blocks: 571 GB
  • bloom: 6.2 GB
  • headers: 8.6 GB
  • code: 4.4 GB
  • blobTransactions: 1.4 GB
  • ...
  • Total: 942 GB
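To get the corresponding breakdown for your own node, you can inspect the database directory directly, for example (the path is illustrative and depends on your --baseDbPath and network):

du -h --max-depth=1 /data/nethermind_db/mainnet | sort -h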

Reducing database size

The Nethermind database can experience substantial growth over time, starting from an initial size of approximately 650 GB. As a result, many node setups are configured to run on 1 TB disks. However, even with settings designed to slow the growth rate, these disks may eventually run out of free space.

There are currently two ways to reduce the database size: resyncing the node from scratch and full pruning.
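As a minimal sketch of enabling full pruning with a memory budget, the command line below uses Pruning configuration option names as assumptions; verify them against the configuration reference for your Nethermind version:

nethermind -c mainnet --Pruning.Mode=Full --Pruning.FullPruningMemoryBudgetMb=4096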

The table below presents a short comparison of these methods, including possible fine-tuning of each. The data was collected from a node running on a machine with the following specifications:

  • Nethermind: v1.18.0
  • Consensus client: Lighthouse
  • CPU: AMD EPYC 7713 (16 cores allocated for the VM)
  • RAM: 64 GB
  • Disk size: 1.2 TB
  • Disk IOPS: 70,000 to 80,000
| Metric | Resync | Pruning | Pruning and memory budget (4 GB) |
| --- | --- | --- | --- |
| Execution time | ~4h | ~24h | ~12h |
| Minimum free disk space | N/A. You can execute a resync even if there is 0 free space (avoid such a case). | 250 GB | 250 GB |
| Attestation rate drop | 100%. No attestation rewards during that time, or highly reduced. | 5–10% during that time | N/A |
| Average block processing time of new blocks during the process | N/A. New blocks are processed after the state is synced but are significantly slower until old bodies/receipts are downloaded; afterward, about 0.35s on average. | 0.7s | 1.0s |
| Is the node online during the process? | No, unless the state is synced. | Yes. The node follows the chain, and all modules are still enabled. | Yes. The node follows the chain, and all modules are still enabled. |
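To confirm that the node is indeed online and following the chain while pruning runs, one simple check is the standard eth_syncing JSON-RPC call (assuming the JSON-RPC interface is enabled on the default localhost:8545 endpoint):

curl -s -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' http://localhost:8545

A result of false indicates the node considers itself synced to the chain head.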

The command used for testing disk IOPS was as follows:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw