- [My beacon node logs `WARN Error signalling fork choice waiter`, what should I do?](#bn-fork-choice)
- [My beacon node logs `ERRO Aggregate attestation queue full`, what should I do?](#bn-queue-full)
- [My beacon node logs `WARN Failed to finalize deposit cache`, what should I do?](#bn-deposit-cache)
- [How can I construct only partial state history?](#bn-partial-history)

## [Validator](#validator-1)

If the node is syncing or downloading historical blocks, the error should disappear.

This is a known [bug](https://github.com/sigp/lighthouse/issues/3707) that will resolve by itself.

### <a name="bn-partial-history"></a> How can I construct only partial state history?

Lighthouse prunes finalized states by default. However, users are often interested in the state history of the few epochs just before finalization. To access these pruned states, Lighthouse normally requires a full reconstruction of states using the flag `--reconstruct-historic-states`, which usually takes about a week. Partial state history can be achieved with some "tricks". Here are the general steps:

 1. Delete the current database. You can do so with the flag `--purge-db-force`, or by manually deleting the database from the data directory `$datadir/beacon` (see the launch sketch after this list).

 1. If you are interested in the states from the current slot and beyond, perform a checkpoint sync with the flag `--reconstruct-historic-states`, then you can skip the steps below and jump straight to Step 5 to check the database.

    If you are interested in states before the current slot, identify the slot from which to perform a manual checkpoint sync. With the default configuration, this slot should be divisible by 2<sup>21</sup>, as this is where a full state snapshot is stored. With the flag `--reconstruct-historic-states`, the state upper limit is adjusted to the next full snapshot slot, i.e., a slot that satisfies `slot % 2**21 == 0`. In other words, to have state history available before the current slot, we have to checkpoint sync from the slot that is 2<sup>21</sup> slots before the next full snapshot slot.

    Example: say mainnet is currently at slot `12000000`. The next full state snapshot is at slot `12582912`, so the slot we want is `10485760`. You can calculate this (in Python) using `12000000 // 2**21 * 2**21`; a bash equivalent is sketched after this list.

 1. [Export](./advanced_checkpoint_sync.md#manual-checkpoint-sync) the blob, block and state data for the slot identified in Step 2 (example commands after this list). This can be done from another beacon node that you have access to, or from any available public beacon API, e.g., [QuickNode](https://www.quicknode.com/docs/ethereum).

 1. Perform a [manual checkpoint sync](./advanced_checkpoint_sync.md#manual-checkpoint-sync) using the data from the previous step, and provide the flag `--reconstruct-historic-states` (see the launch sketch after this list).

 1. Check the database:

    ```bash
    curl "http://localhost:5052/lighthouse/database/info" | jq '.anchor'
    ```

    and look for the field `state_upper_limit`. It should show the slot of the snapshot:

    ```json
    "state_upper_limit": "10485760",
    ```
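
For reference, the Step 2 slot arithmetic as a minimal bash sketch, using the example numbers above (not live mainnet data):

```bash
# Latest full-snapshot slot at or below the current slot.
# With the default configuration, snapshots fall every 2^21 slots.
CURRENT_SLOT=12000000
INTERVAL=$(( 2**21 ))                            # 2097152
echo $(( CURRENT_SLOT / INTERVAL * INTERVAL ))   # prints 10485760
```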
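
For Step 3, a sketch of the export, assuming a synced node at `http://localhost:5052` whose standard beacon API serves SSZ responses via `Accept: application/octet-stream`; adjust the URL for your own node or provider:

```bash
BN="http://localhost:5052"   # any synced beacon node or provider endpoint
SLOT=10485760                # the slot identified in Step 2

# Download the SSZ-encoded state, block and blob data for the chosen slot.
# Note: block and blob queries by slot fail if the slot is empty.
curl -H "Accept: application/octet-stream" "$BN/eth/v2/debug/beacon/states/$SLOT" -o state.ssz
curl -H "Accept: application/octet-stream" "$BN/eth/v2/beacon/blocks/$SLOT" -o block.ssz
curl -H "Accept: application/octet-stream" "$BN/eth/v1/beacon/blob_sidecars/$SLOT" -o blobs.ssz
```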
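
And a combined sketch of Steps 1 and 4, using the flags described in [manual checkpoint sync](./advanced_checkpoint_sync.md#manual-checkpoint-sync); the file names match the export above, and other flags your setup needs (e.g. the execution endpoint) are omitted:

```bash
# Step 1: delete the old database (or start with --purge-db-force instead).
rm -rf "$datadir/beacon"

# Step 4: manual checkpoint sync from the exported data,
# with state reconstruction enabled.
lighthouse bn \
  --network mainnet \
  --checkpoint-state state.ssz \
  --checkpoint-block block.ssz \
  --checkpoint-blobs blobs.ssz \
  --reconstruct-historic-states
```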

Lighthouse will now start to reconstruct historic states from slot `10485760`. At this point, if you do not want a full state reconstruction, you may remove the flag `--reconstruct-historic-states` (and restart). When the process is complete, you will have the state data from slot `10485760` onwards. Going forward, Lighthouse will continue retaining all historical states newer than the snapshot. Eventually this can lead to increased disk usage, which presently can only be reduced by repeating the process from a more recent snapshot.
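
Once reconstruction is complete, a quick sanity check (assuming the default HTTP API address) is to request any standard state endpoint at the snapshot slot; it should return data rather than an error:

```bash
curl "http://localhost:5052/eth/v1/beacon/states/10485760/finality_checkpoints"
```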

> Note: You may only be interested in very recent historic states. In that case, you can configure full snapshots to be taken more frequently, for example every 2<sup>11</sup> slots, using the flag `--hierarchy-exponents 5,7,11` together with the flag `--reconstruct-historic-states`; see [database configuration](./advanced_database.md#hierarchical-state-diffs) for more details. This changes the slot calculation in Step 2, while the other steps remain the same. Note that this comes at the expense of a higher storage requirement.
>
> With `--hierarchy-exponents 5,7,11`, using the same example as above, the next full state snapshot is at slot `12001280`, so the slot to checkpoint sync from is slot `11999232`.
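
The corresponding calculation for this configuration, as a bash sketch:

```bash
# With --hierarchy-exponents 5,7,11, full snapshots fall every 2^11 = 2048 slots.
CURRENT_SLOT=12000000
echo $(( CURRENT_SLOT / 2**11 * 2**11 ))   # prints 11999232; the next snapshot is at 12001280
```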

## Validator

### <a name="vc-redundancy"></a> Can I use redundancy in my staking setup?