Releases: ArweaveTeam/arweave

Release N.2.9.5

13 Oct 16:32

This is a substantial update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

This release is primarily a bug fix, stability, and performance release. It includes all changes from all of the 2.9.5 alpha releases. Full details can be found in the release notes for each alpha below.

Some of the changes in those alphas addressed regressions introduced in an earlier alpha. The full set of changes you can expect when upgrading from 2.9.4.1 is described below.

A special call-out to the mining community members who installed and tested each of the alpha releases. Their help was critical in addressing regressions, fixing bugs, and implementing improvements. Thank you! The full list of contributors is in the Community involvement section.

New Binaries

This release includes an updated set of pre-built binaries:

  • Ubuntu 22.04, erlang R26
  • Ubuntu 24.04, erlang R26
  • rocky9, erlang R26
  • MacOS, erlang R26

The default Linux release is the Ubuntu 22.04, erlang R26 build.

Going forward we recommend Arweave be built with erlang R26 rather than erlang R24.

The MacOS binaries are intended to be used for VDF Servers. Packing and mining on MacOS is still unsupported.

Changes to miner config

  • Several changes to options related to repack-in-place. See Support for repack-in-place from the replica.2.9 format.
  • vdf: see Optimized VDF.
  • Several changes to options related to the verify tool. See verify Tool Improvements.
  • disable_replica_2_9_device_limit: Disable the device limit for the replica.2.9 format. By default, at most one worker will be active per physical disk at a time; setting this flag removes that limit, allowing multiple workers to be active on a given physical disk.
  • Several options to manually configure low-level network performance. See the help output for options starting with network., http_client., and http_api.
  • mining_cache_size_mb: the default is set to 100 MiB per partition being mined (e.g. if you leave mining_cache_size_mb unset while mining 64 partitions, your mining cache will be set to 6,400 MiB). A combined example follows the script below.
  • The process for running multiple nodes on a single server has changed. Each instance needs to set distinct values for the ARNODE and ARCOOKIE environment variables. Here is an example script that launches two nodes, one named exit and one named miner:
#!/usr/bin/env bash
ARNODE=exit@127.0.0.1 \
ARCOOKIE=exit \
screen -dmSL arweave.exit -Logfile ./screenlog.exit \
    ./bin/start config_file config.exit.json;

ARNODE=miner@127.0.0.1 \
ARCOOKIE=miner \
screen -dmSL arweave.miner -Logfile ./screenlog.miner \
    ./bin/start config_file config.miner.json
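
For a single node, here is a hedged sketch that combines two of the new options above into one start command; the flag names (disable_replica_2_9_device_limit, mining_cache_size_mb) come from this section, while the values and config file name are illustrative:

#!/usr/bin/env bash
# Illustrative values only: remove the per-disk worker limit and size the
# mining cache for roughly 64 partitions (64 x 100 MiB).
./bin/start \
    disable_replica_2_9_device_limit \
    mining_cache_size_mb 6400 \
    config_file config.json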

Optimized VDF

This release includes the optimized VDF algorithm developed by Discord user hihui.

To use the optimized VDF algorithm, set the vdf hiopt_m4 config option. By default, the node runs with the legacy openssl implementation.
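
For example, a minimal sketch of a start command enabling the optimized VDF; the vdf hiopt_m4 option comes from this section, while the config file name is illustrative:

./bin/start vdf hiopt_m4 config_file config.json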

Support for repack-in-place from the replica.2.9 format

This release introduces support for repack-in-place from replica.2.9 to unpacked or to a different replica.2.9 address. In addition we've made several performance improvements and fixed a number of edge case bugs which may previously have caused some chunks to be skipped by the repack process.

Performance

Due to how replica.2.9 chunks are processed, the parameters for tuning the repack-in-place performance have changed. The main considerations are:

  • Repack footprint size: replica.2.9 chunks are grouped in footprints of chunks. A full footprint is 1024 chunks distributed evenly across a partition.
  • Repack batch size: The repack-in-place process reads some number of chunks, repacks them, and then writes them back to disk. The batch size controls how many contiguous chunks are read at once. Previously a batch size of 10 meant that 10 chunks would be read, repacked, and written. However, to handle replica.2.9 data efficiently, the batch size now indicates the number of footprints to process at once. So a batch size of 10 means that 10 footprints will be read, repacked, and written. Since a full footprint is 1024 chunks, the amount of memory required to process a batch size of 10 is now 10,240 chunks, or roughly 2.5 GiB.
  • Available RAM: The footprint size and batch size drive how much RAM is required by the repack-in-place process. If you're repacking multiple partitions at once, the RAM requirements can grow quickly.
  • Disk IO: If disk IO is your bottleneck, increase the batch size as much as you can, since reading contiguous chunks is generally much faster than reading non-contiguous chunks.
  • CPU: In some cases CPU may be your bottleneck instead - this can happen when repacking from a legacy format like spora_2_6, or when repacking many partitions between two replica.2.9 addresses. The saving grace is that if CPU is the bottleneck, you can reduce your batch size or footprint size to ease off on memory utilization.

To control all these factors, repack-in-place has 2 config options:

  • repack_batch_size: controls the batch size - i.e. the number of footprints processed at once
  • repack_cache_size_mb: sets the total amount of memory to allocate to the repack-in-place process per partition. So if you set repack_cache_size_mb to 2000 and are repacking 4 partitions, you can expect the repack-in-place process to consume roughly 8 GiB of memory. Note: the node will automatically set the footprint size based on your configured batch and cache sizes - this typically means that it will reduce the footprint size as much as needed. A smaller footprint size will increase your CPU load as it will result in your node generating the same entropy multiple times. For example, if your footprint size is 256 the node will need to generate the same entropy 4 times in order to process all 1024 chunks in the full footprint.
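
As an illustration, here is a hedged sketch that sets just these two tuning options on a start command; the flag names come from this section, the values and config file name are illustrative, and configuring the repack target itself is not covered here:

# 10 footprints per batch: roughly 10 x 1024 chunks x 256 KiB ~ 2.5 GiB per batch,
# plus a 4000 MiB repack cache per partition being repacked.
./bin/start \
    repack_batch_size 10 \
    repack_cache_size_mb 4000 \
    config_file config.repack.json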

Debugging

This release also includes a new option on the data-doctor inspect tool that may help with debugging packing issues.

/bin/data-doctor inspect bitmap <data_dir> <storage_module>

Example: /bin/data-doctor inspect bitmap /opt/data 36,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.replica.2.9

This command generates a bitmap where every pixel represents the packing state of a specific chunk. The bitmap is laid out so that each vertical column of pixels is a complete entropy footprint. Here is an example bitmap:

[Example bitmap for storage_module 5,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.replica.2.9]

This bitmap shows the state of one node's partition 5 that has been repacked to replica.2.9. The green pixels are chunks that are in the expected replica.2.9 format, the black pixels are chunks that are missing from the miner's dataset, and the pink pixels are chunks that are too small to be packed (prior to partition ~9, users were allowed to pay for chunks that were smaller than 256KiB - these chunks are stored unpacked and can't be packed).

Performance Improvements

  • Improvements to both syncing speed and memory use while syncing
  • In our tests using solo as well as coordinated miners configured to mine while syncing many partitions, we observed steady memory use and full expected hashrate. This improves on 2.9.4.1 performance. Notably: the same tests run on 2.9.4.1 showed growing memory use, ultimately causing an OOM.
  • Reduce the volume of unnecessary network traffic due to a flood of 404 requests when trying to sync chunks from a node which only serves replica.2.9 data. Note: the benefit of this change will only be seen when most of the nodes in the network upgrade.
  • Performance improvements to HTTP handling that should improve performance more generally.
  • Optimization to speed up the collection of peer intervals when syncing. This can improve syncing performance in some situations.
  • Fix a bug which could cause syncing to occasionally stall out.
  • Optimize the shutdown process. This should help with, but not fully address, the slow node shutdown issues.
  • Fix a bug where a VDF client might get pinned to a slow or stalled VDF server.
  • Several updates to the mining cache logic. These changes address a number of edge case performance and memory bloat issues that can occur while mining.
  • Improve transaction validation performance; this should reduce the frequency of "desyncs", i.e. nodes should now be able to handle a higher network transaction volume without stalling:
    • Do not delay ready_for_mining on validator nodes
    • Make sure identical tx-status pairs do not cause extra mempool updates
    • Cache the owner address once computed for every TX
  • Reduce the time it takes for a node to join the network:
    • Do not re-download local blocks on join
    • Do not re-write written txs on join
    • Reduce the per-peer retry budget on join from 10 to 5
  • Fix edge case that could occasionally cause a mining pool to reject a replica.2.9 solution.
  • Fix an edge case crash that occurred when a coordinated miner timed out while fetching partitions from peers.
  • Fix a bug where a storage module crossing the end of the weave could cause a syncing stall.
  • Fix a bug where a crash during peer interval collection could cause a syncing stall.
  • Fix a race condition where we may not detect double-signing.

Release N.2.9.5-alpha6

29 Sep 17:40

Pre-release

This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

This release addresses several of the mining performance issues that had been reported on previous alphas. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you wish to take advantage of the new performance improvements.

Fix: Crash during coordinated mining when a solution is found

(Reported by discord user Vidiot)

Symptoms

  • After mining well for some time, hashrate dropped to 0
  • Logs had messages like: Generic server ar_mining_server terminating. Reason: {badarg,[{ar_block,compute_h1,3

Fix: session_not_found error during coordinated mining

(Reported by discord user Qwinn)

Symptoms

  • Hashrate lower than expected
  • Logs had mining_worker_failed_to_add_chunk_to_cache errors, with reason set to session_not_found

Guidance: cache_limit_exceeded warning during solo and coordinated mining

(Reported by discord users BerryCZ, mousetu, radion_nizametdinov, qq87237850, Qwinn, Vidiot)

Symptoms

  • Logs show mining_worker_failed_to_reserve_cache_space warnings, with reason set to cache_limit_exceeded

Resolution

The warning, if seen periodically, is expected and safe to ignore.

Root Cause

All VDF servers - even those with the exact same VDF time - will be on slightly different steps. This is because new VDF epochs are opened roughly every 20 minutes, when a block is added to the chain. Depending on when your VDF server receives that block, it may start calculating the new VDF chain earlier or later than other VDF servers.

This can cause there to be a gap in the VDF steps generated by two different servers even if they are able to compute new VDF steps at the exact same speed.

When a VDF server receives a block that is ahead of it in the VDF chain, it is able to quickly validate and use all the new VDF steps. This can cause the associated miners to receive a batch of VDF steps all at once. In these situations, the miner may exceed its mining cache causing the cache_limit_exceeded warning.

However this ultimately does not materially impact the miner's true hashrate. A miner will process VDF steps in reverse order (latest steps first) as those are the most valuable steps. The steps being dropped from the cache will be the oldest steps. Old steps may still be useful, but there is a far greater chance that any solution mined off an old step will be orphaned. The older the VDF step, the less useful it is.

TLDR: the warning, if seen periodically, is expected and safe to ignore.

Exception: If you are continually seeing the warning (i.e. not in periodic batches, but constantly) it may indicate that your miner is not able to keep up with its workload. This can indicate a hardware configuration issue (e.g. disk read rates are too slow), a hardware capacity issue (e.g. a CPU not fast enough to hash against all attached storage modules), or some other performance-related issue.

Guidance

  • This alpha increases the default cache size from 4 to 20 VDF steps. This should noticeably reduce (but not eliminate) the frequency of the cache_limit_exceeded warning.
  • If you want to increase it further, set the mining_cache_size_mb option.
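
For instance, a hedged sketch of passing an explicit cache size at startup; the mining_cache_size_mb option comes from this release, while the value and config file name are illustrative:

# Roughly 100 MiB per mined partition, e.g. 64 partitions -> 6400 MiB.
./bin/start mining_cache_size_mb 6400 config_file config.miner.json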

Guidance: 2.9.5-alphaX hashrate appears to be slower than 2.9.4.1

(Reported by discord users EvM, Lawso2517, Qwinn)

Symptoms

  • 2.9.4.1 hashrate is higher than 2.9.5-alphaX
  • 2.9.4.1 hashrate when solo mining might even be higher than the "ideal" hashrate listed in the mining report or Grafana metrics

Resolution

The 2.9.4.1 hashrate included invalid hashes; the 2.9.5-alpha6 hashrate, although lower, includes only valid hashes.

Root Cause

2.9.4.1 (and earlier releases) had a bug which caused miners to generate hashes off of entropy in addition to valid packed data. The replica.2.9 data format lays down a full covering of entropy in each storage module before adding packed chunks. The result is that for any storage module with less than 3.6TB of packed data, there is some amount of data on disk that is just entropy. A bug in the 2.9.4.1 mining algorithm generated hashes off of this entropy, causing an inflated hashrate. As a result, the 2.9.4.1 hashrate is often above the estimated "ideal" hashrate even when solo mining.

Another symptom of this bug is the chunk_not_found error occasionally reported by miners. This occurs under 2.9.4.1 (and earlier releases) when the miner hashes a range of entropy and generates a hash that exceeds the network difficulty. The miner believes this to be a valid solution and begins to build a block. At some point in the block building process the miner has to validate and include the packed chunk data. However since no packed chunk data exists (only entropy), the solution fails and the error is printed.

2.9.5-alpha2 fixed this bug so that miners correctly exclude entropy data when mining. This means that under 2.9.5-alpha2 and later releases miners spend fewer resources hashing entropy data, and generate fewer failed solution errors. The reported hashrate on 2.9.5-alpha2 is lower than 2.9.4.1 because the invalid hashes are no longer being counted.

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • BerryCZ
  • Evalcast
  • EvM
  • JanP
  • lawso2517
  • mousetu
  • qq87237850
  • Qwinn
  • radion_nizametdinov
  • smash
  • T777
  • Vidiot

Release N.2.9.5-alpha5

21 Aug 14:50

Pre-release

This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

This release includes several syncing and mining performance improvements. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you wish to take advantage of the new performance improvements.

Note regarding the release binaries:
We have changed the naming of the tarballs, dropped ubuntu20, and added ubuntu24.

All of the provided tarballs are built with erlang R26. Going forward (barring any serious issues that show up), Arweave will be built with R26. We've been running R26 internally for a while now with no issues, and with some small stability and performance improvements.

Performance improvements

In all cases we ran tests on a full-weave solo miner, as well as a full-weave coordinated mining cluster. We believe the observed performance improvements are generalizable to other miners, but, as always, the performance observed by a given miner is often influenced by many factors that we are not able to test for. TLDR: your mileage may vary.

Syncing

Improvements to both syncing speed and memory use while syncing. The improvements address some regressions that were reported in the 2.9.5 alphas, but also improve on 2.9.4.1 performance.

Mining

This release addresses the significant hashrate loss that was observed during Coordinated Mining on the 2.9.5 alphas.

Syncing + Mining

In our tests using solo as well as coordinated miners configured to mine while syncing many partitions, we observed steady memory use and full expected hashrate. This addresses some regressions that were reported in the 2.9.5 alphas, but also improves on 2.9.4.1 performance. Notably: the same tests run on 2.9.4.1 showed growing memory use, ultimately causing an OOM.

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • BerryCZ
  • Butcher_
  • edzo
  • Evalcast
  • EvM
  • JF
  • lawso2517
  • MaSTeRMinD
  • qq87237850
  • Qwinn
  • radion_nizametdinov
  • RedMOoN
  • smash
  • T777
  • Vidiot

Release 2.9.5-alpha4

06 Aug 22:44

Pre-release

This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

This release includes the VDF optimization as well as several bug fixes. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you wish to use the new VDF optimization or if you believe one of the listed bug fixes will improve your mining experience.

New Binaries

This release includes an updated set of pre-built binaries:

  • Ubuntu 20.04, erlang r24 (arweave-2.9.5-alpha4.ubuntu20.r24-x86_64.tar.gz)
  • Ubuntu 20.04, erlang r26 (arweave-2.9.5-alpha4.ubuntu20.r26-x86_64.tar.gz)
  • Ubuntu 22.04, erlang r24 (arweave-2.9.5-alpha4.ubuntu22.r24-x86_64.tar.gz)
  • Ubuntu 22.04, erlang r26 (arweave-2.9.5-alpha4.ubuntu22.r26-x86_64.tar.gz)
  • MacOS, erlang r24 (N.2.9.5-alpha4-Darwin-arm64-R24.tar.gz)
  • MacOS, erlang r26 (N.2.9.5-alpha4-Darwin-arm64-R26.tar.gz)

The default Linux release is the Ubuntu 22.04, erlang r26 build.

We recommend trying the appropriate "erlang r26" binary first. Internal testing shows it to be more stable and slightly more performant.

The MacOS binaries are intended to be used for VDF Servers. Packing and mining on MacOS is still unsupported.

Optimized VDF

This release includes the optimized VDF algorithm developed by Discord user hihui.

To use the optimized VDF algorithm, set the vdf hiopt_m4 config option. By default, the node runs with the legacy openssl implementation.

Mining Fixes

This release fixes a number of performance and memory issues that were observed while mining on previous 2.9.5 alpha releases.

Other Fixes and Improvements

Full Changelog: N.2.9.5-alpha3...N.2.9.5-alpha4

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • BerryCZ
  • bigbang
  • BloodHunter
  • Butcher_
  • doesn't stay up late
  • edzo
  • Evalcast
  • EvM
  • hihui
  • Iba Shinu
  • JamsJun
  • JF
  • jimmyjoe7768
  • lawso2517
  • MaSTeRMinD
  • Merdi Kim
  • Niiiko
  • qq87237850
  • Qwinn
  • RedMOoN
  • sk
  • smash
  • sumimi
  • T777
  • tashilo
  • U genius
  • Vidiot

Release 2.9.5-alpha3

20 May 19:34

Pre-release

This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

This release includes several bug fixes. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you believe one of the listed bug fixes will improve your mining experience.

Support for repack-in-place from the replica.2.9 format

This release introduces support for repack-in-place from replica.2.9 to unpacked or to a different replica.2.9 address. In addition we've made several performance improvements and fixed a number of edge case bugs which may previously have caused some chunks to be skipped by the repack process.

Performance

Due to how replica.2.9 chunks are processed, the parameters for tuning the repack-in-place performance have changed. The main considerations are:

  • Repack footprint size: replica.2.9 chunks are grouped in footprints of chunks. A full footprint is 1024 chunks distributed evenly across a partition.
  • Repack batch size: The repack-in-place process reads some number of chunks, repacks them, and then writes them back to disk. The batch size controls how many contiguous chunks are read at once. Previously a batch size of 10 meant that 10 chunks would be read, repacked, and written. However, to handle replica.2.9 data efficiently, the batch size now indicates the number of footprints to process at once. So a batch size of 10 means that 10 footprints will be read, repacked, and written. Since a full footprint is 1024 chunks, the amount of memory required to process a batch size of 10 is now 10,240 chunks, or roughly 2.5 GiB.
  • Available RAM: The footprint size and batch size drive how much RAM is required by the repack-in-place process. If you're repacking multiple partitions at once, the RAM requirements can grow quickly.
  • Disk IO: If disk IO is your bottleneck, increase the batch size as much as you can, since reading contiguous chunks is generally much faster than reading non-contiguous chunks.
  • CPU: In some cases CPU may be your bottleneck instead - this can happen when repacking from a legacy format like spora_2_6, or when repacking many partitions between two replica.2.9 addresses. The saving grace is that if CPU is the bottleneck, you can reduce your batch size or footprint size to ease off on memory utilization.

To control all these factors, repack-in-place has 2 config options:

  • repack_batch_size: controls the batch size - i.e. the number of footprints processed at once
  • repack_cache_size_mb: sets the total amount of memory to allocate to the repack-in-place process per partition. So if you set repack_cache_size_mb to 2000 and are repacking 4 partitions, you can expect the repack-in-place process to consume roughly 8 GiB of memory. Note: the node will automatically set the footprint size based on your configured batch and cache sizes - this typically means that it will reduce the footprint size as much as needed. A smaller footprint size will increase your CPU load as it will result in your node generating the same entropy multiple times. For example, if your footprint size is 256 the node will need to generate the same entropy 4 times in order to process all 1024 chunks in the full footprint.

Debugging

This release also includes a new option on the data-doctor inspect tool that may help with debugging packing issues.

/bin/data-doctor inspect bitmap <data_dir> <storage_module>

Example: /bin/data-doctor inspect bitmap /opt/data 36,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.replica.2.9

This command generates a bitmap where every pixel represents the packing state of a specific chunk. The bitmap is laid out so that each vertical column of pixels is a complete entropy footprint. Here is an example bitmap:

[Example bitmap for storage_module 5,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.replica.2.9]

This bitmap shows the state of one node's partition 5 that has been repacked to replica.2.9. The green pixels are chunks that are in the expected replica.2.9 format, the black pixels are chunks that are missing from the miner's dataset, and the pink pixels are chunks that are too small to be packed (prior to partition ~9, users were allowed to pay for chunks that were smaller than 256KiB - these chunks are stored unpacked and can't be packed).

New prometheus metrics

  • ar_mempool_add_tx_duration_milliseconds: The duration in milliseconds it took to add a transaction to the mempool.
  • reverify_mempool_chunk_duration_milliseconds: The duration in milliseconds it took to reverify a chunk of transactions in the mempool.
  • drop_txs_duration_milliseconds: The duration in milliseconds it took to drop a chunk of transactions from the mempool
  • del_from_propagation_queue_duration_milliseconds: The duration in milliseconds it took to remove a transaction from the propagation queue after it was emitted to peers.
  • chunk_storage_sync_record_check_duration_milliseconds: The time in milliseconds it took to check the fetched chunk range is actually registered by the chunk storage.
  • fixed_broken_chunk_storage_records: The number of fixed broken chunk storage records detected when reading a range of chunks.
  • mining_solution: replaced the mining_solution_failure and mining_solution_total with a single metric, using labels to differentiate the mining solution state.
  • chunks_read: The counter is incremented every time a chunk is read from chunk_storage
  • chunk_read_rate_bytes_per_second: The rate, in bytes per second, at which chunks are read from storage. The type label can be 'raw' or 'repack'.
  • chunk_write_rate_bytes_per_second: The rate, in bytes per second, at which chunks are written to storage.
  • repack_chunk_states: The count of chunks in each state. 'type' can be 'cache' or 'queue'.
  • replica_2_9_entropy_generated: The number of bytes of replica.2.9 entropy generated.
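
As a quick way to check the new metrics, here is a hedged sketch that assumes the node's HTTP API is listening on the default port 1984 and exposes Prometheus metrics at /metrics:

# Show the new mining solution and chunk read rate metrics, if present.
curl -s http://localhost:1984/metrics | grep -E 'mining_solution|chunk_read_rate_bytes_per_second'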

Bug fixes and improvements

  • Several updates to the mining cache logic. These changes address a number of edge case performance and memory bloat issues that can occur while mining.
    • Guidance on setting the mining_cache_size_mb config: for now you can set it to 100x the number of partitions you are mining against. So if you are mining against 64 partitions on your node you would set it to 6400.
  • Improve transaction validation performance; this should reduce the frequency of "desyncs", i.e. nodes should now be able to handle a higher network transaction volume without stalling:
    • Do not delay ready_for_mining on validator nodes
    • Make sure identical tx-status pairs do not cause extra mempool updates
    • Cache the owner address once computed for every TX
  • Reduce the time it takes for a node to join the network:
    • Do not re-download local blocks on join
    • Do not re-write written txs on join
    • Reduce the per-peer retry budget on join from 10 to 5
  • Fix edge case that could occasionally cause a mining pool to reject a replica.2.9 solution.
  • Fix an edge case crash that occurred when a coordinated miner timed out while fetching partitions from peers.
  • Fix a bug where a storage module crossing the end of the weave could cause a syncing stall.
  • Fix a bug where a crash during peer interval collection could cause a syncing stall.
  • Fix a bug where VDF sessions could be missed when disable vdf_server_pull is set.
  • Fix a race condition where we may not detect double-signing.
  • Optionally fix broken chunk storage records on the fly.
    • Set enable fix_broken_chunk_storage_record to turn the feature on (see the example after this list).
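
For example, a hedged sketch of turning the feature on at startup; the enable fix_broken_chunk_storage_record flag comes from this release, while the config file name is illustrative:

./bin/start enable fix_broken_chunk_storage_record config_file config.json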

Full Changelog: N.2.9.5-alpha2...N.2.9.5-alpha3

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • BerryCZ
  • bigbang
  • BloodHunter
  • Butcher_
  • core_1_
  • doesn't stay up late
  • dzeto
  • edzo
  • Evalcast
  • EvM
  • grumpy.003
  • Iba Shinu
  • JamsJun
  • jimmyjoe7768
  • lawso2517
  • MaSTeRMinD
  • metagravity
  • Qwinn
  • radion_nizametdinov
  • RedMOoN
  • sam
  • smash
  • sumimi
  • tashilo
  • Vidiot
  • wybiacx

Release 2.9.4.1

03 Apr 17:40

This release fixes a bug in the mining logic that would cause replica.2.9 hashrate to drop to zero at block height 1642850. We strongly recommend all miners upgrade to this release as soon as possible - block height 1642850 is estimated to arrive at roughly 11:30 UTC on April 4.

If you are not mining, you do not need to upgrade to this release.

This release is incremental on the 2.9.4 release and does not include any changes from the 2.9.5-alpha1 release.

Full Changelog: N.2.9.4...N.2.9.4.1

Release 2.9.5-alpha2

03 Apr 22:27

Pre-release

This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

This release includes several bug fixes. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you believe one of the listed bug fixes will improve your mining experience.

Changes

  • Apply the 2.9.4.1 patch to the 2.9.5 branch. More info on Discord.
  • Optimization to speed up the collection of peer intervals when syncing. This can improve syncing performance in some situations. Code changes.
  • Fix a bug which could cause syncing to occasionally stall out. Code changes
  • Bug fixes to address chunk_not_found and sub_chunk_mismatch errors. Code changes
  • Add support for DNS pools (multiple IPs behind a single DNS address). Code changes
  • Publish some more protocol values as metrics. Code changes
  • Optimize the shutdown process. This should help with, but not fully address, the slow node shutdown issues. Code changes
  • Add webhooks for the entire mining solution lifecycle. New solution webhook added with multiple states solution_rejected, solution_stale, solution_partial, solution_orphaned, solution_accepted, and solution_confirmed. Code changes
  • Add metrics to allow tracking mining solutions: mining_solution_failure, mining_solution_success, mining_solution_total. Code changes
  • Fix a bug where a VDF client might get pinned to a slow or stalled VDF server. Code changes

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • BerryCZ
  • bigbang
  • BloodHunter
  • Butcher_
  • dlmx
  • doesn't stay up late
  • edzo
  • Iba Shinu
  • JF
  • lawso2517
  • MaSTeRMinD
  • MCB
  • qq87237850
  • Qwinn
  • RedMOoN
  • smash
  • sumimi
  • T777
  • Thaseus
  • Vidiot
  • Wednesday

Release 2.9.5-alpha1

09 Mar 13:23

Pre-release

This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

This release includes several bug fixes. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you believe one of the listed bug fixes will improve your mining experience.

verify Tool Improvements

This release contains several improvements to the verify tool. Several miners have reported block failures due to invalid or missing chunks. The hope is that the verify tool improvements in this release will either allow those errors to be healed, or provide more information about the issue.

New verify modes

The verify tool can now be launched in log or purge modes. In log mode the tool will log errors but will not flag the chunks for healing. In purge mode all bad chunks will be marked as invalid and flagged to be resynced and repacked.

To launch in log mode specify the verify log flag. To launch in purge mode specify the verify purge flag. Note: verify true is no longer valid and will print an error on launch.

Chunk sampling

The verify tool will now sample 1,000 chunks and do a full unpack and validation of each sampled chunk. This sampling mode is intended to give a statistical measure of how much data might be corrupt. To change the number of chunks sampled you can use the verify_samples option. E.g. verify_samples 500 will have the node sample 500 chunks.
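
For example, a hedged sketch of launching the verify tool in log mode with a custom sample count; the verify and verify_samples options come from this release, while the data_dir path and storage_module are illustrative:

./bin/start verify log verify_samples 500 \
    data_dir /opt/data \
    storage_module 36,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.replica.2.9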

More invalid scenarios tested

This latest version of the verify tool detects several new types of bad data. The first time you run the verify tool we recommend launching it in log mode and running it on a single partition. This should avoid any surprises due to the more aggressive detection logic. If the results are as you expect, then you can relaunch in purge mode to clean up any bad data. In particular, if you've misnamed your storage_module the verify tool will invalidate all chunks and force a full repack - running in log mode first will allow you to catch this error and rename your storage_module before purging all data.

Bug Fixes

  • Fix several issues which could cause a node to "desync". Desyncing occurs when a node gets stuck at one block height and stops advancing.
  • Reduce the volume of unnecessary network traffic due to a flood of 404 requests when trying to sync chunks from a node which only serves replica.2.9 data. Note: the benefit of this change will only be seen when most of the nodes in the network upgrade.
  • Performance improvements to HTTP handling that should improve performance more generally.
  • Add TX polling so that a node will pull missing transactions in addition to receiving them via gossip

Known issues

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • AraAraTime
  • BerryCZ
  • bigbang
  • BloodHunter
  • Butcher_
  • dlmx
  • dzeto
  • edzo
  • EvM
  • Fox Malder
  • Iba Shinu
  • JF
  • jimmyjoe7768
  • lawso2517
  • MaSTeRMinD
  • MCB
  • Methistos
  • Michael | Artifact
  • qq87237850
  • Qwinn
  • RedMOoN
  • smash
  • sumimi
  • T777
  • Thaseus
  • Vidiot
  • Wednesday
  • wybiacx

What's Changed

Full Changelog: N.2.9.4...N.2.9.5-alpha1

Release 2.9.4

09 Feb 22:20

This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

This release includes several bug fixes. We recommend upgrading, but it's not required. All releases 2.9.1 and higher implement the consensus rule changes for the 2.9 hard fork and should be sufficient to participate in the network.

Note: this release fixes a packing bug that affects any storage module that does not start on a partition boundary. If you have previously packed replica.2.9 data in a storage module that does not start on a partition boundary, we recommend discarding the previously packed data and repacking the storage module with the 2.9.4 release. This applies only to storage modules that do not start on a partition boundary, all other storage modules are not impacted.

Example of an impacted storage module:

  • storage_module 3,1800000000000,addr.replica.2.9

Example of storage modules that are not impacted:

  • storage_module 10,addr.replica.2.9
  • storage_module 2,1800000000000,addr.replica.2.9
  • storage_module 0,3400000000000,addr.replica.2.9
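
To illustrate the distinction, here is a hedged sketch that checks whether a storage module's start offset lands on a 3.6TB partition boundary, assuming the storage_module format is index,size,address and the start offset is index multiplied by size:

#!/usr/bin/env bash
# Hedged sketch: a module is impacted only if its start offset is NOT a
# multiple of the 3.6TB partition size. Modules declared as index,address
# are assumed to use the default partition-sized module and so always
# start on a boundary.
PARTITION_SIZE=3600000000000

check_module() {
    local index=$1 size=$2
    local start=$(( index * size ))
    if (( start % PARTITION_SIZE == 0 )); then
        echo "storage_module $index,$size starts on a partition boundary (not impacted)"
    else
        echo "storage_module $index,$size does not start on a partition boundary (impacted)"
    fi
}

check_module 3 1800000000000   # impacted
check_module 2 1800000000000   # not impacted
check_module 0 3400000000000   # not impacted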

Other bug fixes and improvements:

  • Fix a regression that caused GET /tx/id/data to fail
  • Fix a regression that could cause a node to get stuck on a single peer while syncing (both sync_from_local_peers_only and syncing from the network)
  • Limit the resources used to sync the tip data. This may address some memory issues reported by miners.
  • Limit the resources used to gossip new transactions. This may address some memory issues reported by miners.
  • Allow the node to heal itself after encountering a not_prepared_yet error. The error has also been downgraded to a warning.

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • AraAraTime
  • bigbang
  • BloodHunter
  • Butcher_
  • dlmx
  • dzeto
  • Iba Shinu
  • JF
  • jimmyjoe7768
  • lawso2517
  • MaSTeRMinD
  • MCB
  • Methistos
  • qq87237850
  • Qwinn
  • RedMOoN
  • sam
  • T777
  • U genius
  • Vidiot
  • Wednesday

What's Changed

Full Changelog: N.2.9.3...N.2.9.4

Release 2.9.3

04 Feb 01:26

This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

This is a minor release that fixes a few bugs:

  • sync and pack stalling
  • ready_for_work error when sync_jobs = 0
  • unnecessary entropy generated on storage modules that are smaller than 3.6TB
  • remove some overly verbose error logs

What's Changed

Full Changelog: N.2.9.2...N.2.9.3