Releases: kaspanet/rusty-kaspa

Mainnet Crescendo Release - v1.0.0

31 Mar 22:33
eb71df4

This release introduces the Crescendo Hardfork, transitioning the Kaspa network from 1 BPS to 10 BPS. This marks a significant increase in transaction throughput and network capacity, as well as improved network responsiveness due to shorter block intervals, enhancing the overall user experience. Crescendo is scheduled to activate on mainnet at DAA Score 110,165,000, projected to occur on May 5, 2025, at approximately 15:00 UTC.

Starting 24 hours before activation, nodes will connect only to peers using the new P2P protocol version 7. Ensure your node is updated to maintain network connectivity.

Key highlights for Kaspa node maintainers

  • 10 BPS Activation: Mainnet will transition from 1 BPS to 10 BPS.
  • Retention Period Configuration: Operators now have greater control over data management with a new retention-period-days configuration. Due to the higher block rate, the pruning period has shortened from approximately 50 hours to 30 hours. If operators wish to retain the same amount of historical data as before, they should specify the desired retention period using the new configuration and ensure sufficient storage capacity is available.
  • Protocol Version Update: Nodes will switch to P2P protocol version 7. Ensure your node is upgraded to maintain connectivity.

Retention period configuration

The new retention-period-days parameter provides flexibility for node operators by determining how many days of historical data to retain.

Configuration: retention-period-days
Type: f64
Usage: The number of days of data to keep. Must be at least 2. If not set, the node defaults to keeping data only for the pruning period.

Example: Keep 2.5 days (=60 hours) of data using the following command:

./kaspad --utxoindex --retention-period-days=2.5
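As a rough back-of-envelope check, the retention period maps to a block count at the post-Crescendo rate of 10 BPS. The sketch below is only an order-of-magnitude sanity aid (actual disk usage depends on block sizes and enabled indexes), not part of kaspad itself:

```python
# Approximate number of blocks retained for a given retention-period-days
# value on a 10 BPS network. Per-block disk usage varies, so treat this
# as an order-of-magnitude estimate only.

BPS = 10  # blocks per second after Crescendo activation

def retained_blocks(retention_days: float) -> int:
    # The configuration requires a minimum of 2 days
    if retention_days < 2:
        raise ValueError("retention-period-days must be at least 2")
    return int(retention_days * 24 * 60 * 60 * BPS)

print(retained_blocks(2.5))  # 2.5 days = 60 hours -> 2,160,000 blocks
```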

Crescendo specification

For full details of the changes activated in this hardfork, refer to KIP-14.

Node upgrade guide

Ensure your node is updated to stay compatible with the Crescendo Hardfork. For detailed instructions on upgrading and configuring your node, refer to the Crescendo Guide.


Full Changelog: v0.16.1...v1.0.0

Release v0.17.2

27 Mar 04:08
47abc36
Pre-release

What's Changed

Full Changelog: v0.17.1...v0.17.2

Release v0.17.1

12 Mar 19:06
ac677a0
Pre-release

Notes

  • This version includes an optimization that significantly reduces base RAM usage, especially on 10 BPS networks.
  • It also includes a new retention-period configuration; see below.

New Configuration

Configuration: retention-period-days
Type: f64
Usage: The number of days of data to keep. Must be at least 2. If not set, the node defaults to keeping data only for the pruning period.

Sample usage - keep 30 days' worth of data:

./kaspad --utxoindex --retention-period-days=30

What's Changed

Full Changelog: v0.17.0...v0.17.1

Testnet 10 Crescendo Release - v0.17.0

04 Mar 18:42
430c8ad
Pre-release

This pre-release of Kaspa 0.17.0 introduces support for the upcoming Crescendo Hardfork on Testnet 10 (TN10), scheduled to shift from 1 to 10 BPS on March 6, 2025. It is not recommended for production mainnet miners, but non-mining mainnet nodes may upgrade for early stability testing. Note that this version does not support TN11—those needing TN11 should remain on the latest stable release or stable branch.

For detailed instructions on setting up a TN10 node, generating transactions, and mining with this release, see the TN10 Crescendo Hardfork Node Setup Guide.

Stable Release - v0.16.1

08 Feb 06:49
cdd4379

What's Changed

New Contributors

Full Changelog: v0.16.0...v0.16.1

Stable Release - v0.16.0

29 Jan 16:19
178c060

Instructions for Node Maintainers

This is the first stable release that includes the database version upgrade introduced in #494. When you run this version with an existing datadir, you may be asked in the CLI to upgrade your database version. To proceed, type y or yes and press Enter. Alternatively, run this version with the --yes flag to skip the interactive prompt.

This database upgrade is a one-way path, which means after updating your datadir to the new version, you won't be able to go back to an earlier version.

Database Upgrade to v4

This release upgrades the database from v3 to v4. The upgrade includes:

  • introducing new temporary GHOSTDAG stores used when building pruning proofs
  • removing deprecated higher-level GHOSTDAG data entries; these are now calculated on the fly when needed (when validating or building pruning proofs)

What's Changed

New Contributors

Full Changelog: v0.15.2...v0.16.0

Release Candidate v0.15.4

20 Dec 00:38
8fe4663
Pre-release

Instructions for Node Maintainers

This is the first release that includes the database version upgrade introduced in #494. When you run this version with an existing datadir, you may be asked in the CLI to upgrade your database version. To proceed, type y or yes and press Enter. Alternatively, run this version with the --yes flag to skip the interactive prompt.

This database upgrade is a one-way path, which means after updating your datadir to the new version, you won't be able to go back to an earlier version.

What's Changed

New Contributors

Full Changelog: v0.15.2...v0.15.4-rc1

Testnet 11 Only v0.15.4

29 Nov 06:58
Pre-release

WARNING: INTENDED FOR USE IN TESTNET 11 ONLY

This is a special release with code that is not yet in mainline. It is intended to hardfork the testnet-11 environment and should be used only by Testnet 11 node maintainers.

The TN11 HF includes:

  • Enabling KIP10
  • Switching KIP9 to use the Beta version in consensus
  • Enabling Payloads for transactions

In addition, this release includes all the optimizations we have been working on, which will allow us to determine a minimum recommended hardware spec for 10 BPS on mainnet.

Instructions for Node Maintainers

  • This is the first release that includes the database version upgrade introduced in #494. When you run this version with an existing datadir, you may be asked in the CLI to upgrade your database version. To proceed, type y or yes and press Enter. Alternatively, run this version with the --yes flag to skip the interactive prompt.
    • This database upgrade is a one-way path: after updating your datadir to the new version, you won't be able to go back to an earlier version.

Instruction to build from source

  1. Git pull and check out the kip10-tn11-hf branch
  2. Build your binary from there. This release corresponds to commit 14b1e10.

Release v0.15.2

20 Sep 11:37
9fae376

Released mainly for Integrators using GetVirtualChainFromBlock

NOTE: If you do not use GetVirtualChainFromBlock in your integration, you do not need to update to this version.

This release updates GetVirtualChainFromBlock so that it operates in batches. This solves an issue where a call to GetVirtualChainFromBlock could take a long time to complete when the client had to sync virtual chain state from a deep chain block after a period of being unsynced. Each call now returns promptly, but you must call GetVirtualChainFromBlock multiple times when syncing from a deep chain block.

To take advantage of this new batching mechanism, you only need to make sure that you continue calling GetVirtualChainFromBlock until your software has caught up to the tips of the network. For reference, the pseudo-code is:

startHash = <the_last_hash_you_synced_from>
isCatchingUp = true

// Catch-up loop: repeatedly request batches until we reach the network tips
while isCatchingUp:
  batch = GetVirtualChainFromBlock(startHash, includeTransactionIds: true)
  // Do your processing with batch
  // ...
  startHash = batch.added[<last_element_index>]

  // A batched (partial) response contains at least 10 chain blocks, since the
  // number of merged blocks is limited to the mergeset limit x 10. Fewer than
  // 10 added blocks therefore means we've caught up.
  if len(batch.added) < 10:
    isCatchingUp = false

// Continue your normal pace of processing next batches with GetVirtualChainFromBlock
// ...
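To make the loop above concrete, here is a minimal, runnable Python sketch using a stubbed client in place of a real gRPC connection. The FakeRpcClient class, its 25-block batch size, and the hash strings are hypothetical stand-ins for illustration, not part of the node's API:

```python
# Sketch of the catch-up loop. FakeRpcClient simulates a node returning
# batched virtual-chain responses; a real integration would issue
# GetVirtualChainFromBlock over gRPC instead.

BATCH_THRESHOLD = 10  # a batched (partial) response holds at least 10 chain blocks

class FakeRpcClient:
    def __init__(self, chain):
        self.chain = chain  # full virtual chain, oldest to newest

    def get_virtual_chain_from_block(self, start_hash, include_transaction_ids=True):
        # Return up to 25 chain blocks after start_hash (simulated batch size)
        idx = self.chain.index(start_hash) + 1
        return {"added": self.chain[idx:idx + 25], "removed": []}

def catch_up(client, start_hash):
    """Fetch batches until a response smaller than the threshold arrives."""
    processed = []
    while True:
        batch = client.get_virtual_chain_from_block(start_hash)
        if not batch["added"]:
            break  # already at the tip
        processed.extend(batch["added"])  # your processing goes here
        start_hash = batch["added"][-1]   # resume from the last added block
        if len(batch["added"]) < BATCH_THRESHOLD:
            break  # short response: we've caught up
    return processed

chain = [f"hash{i}" for i in range(60)]
client = FakeRpcClient(chain)
synced = catch_up(client, "hash0")
print(len(synced))  # all 59 blocks after hash0
```

After catch_up returns, an integrator would continue polling GetVirtualChainFromBlock at its normal pace, always resuming from the last added chain block it has processed.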

What's Changed

  • Fix WASM interface typo for normalBuckets and lowBuckets in IFeerateBucket by @Cryptok777 in #557
  • Fix new gRPC methods to use camel case (non-breaking change) by @michaelsutton in #560
  • virtual chain from block batching. by @D-Stacks in #454
  • A few CLI rpc query fixes by @michaelsutton in #563
  • Deploy linux binary without musl in its name + various minor miscellaneous things towards v0.15.2 by @michaelsutton in #564

New Contributors

Full Changelog: v0.15.1...v0.15.2

Release Candidate v0.15.2-rc1

19 Sep 20:41
b14537f
Pre-release

Note to Integrators using GetVirtualChainFromBlock

IMPORTANT: If you do not use GetVirtualChainFromBlock in your integration, you do not need to update to this version.

This release updates GetVirtualChainFromBlock so that it operates in batches. This solves an issue where a call to GetVirtualChainFromBlock could take a long time to complete when the client had to sync virtual chain state from a deep chain block after a period of being unsynced. Each call now returns promptly, but you must call GetVirtualChainFromBlock multiple times when syncing from a deep chain block.

To take advantage of this new batching mechanism, you only need to make sure that you continue calling GetVirtualChainFromBlock until your software has caught up to the tips of the network. For reference, the pseudo-code is:

startHash = <the_last_hash_you_synced_from>
isCatchingUp = true

// Catch-up loop: repeatedly request batches until we reach the network tips
while isCatchingUp:
  batch = GetVirtualChainFromBlock(startHash, includeTransactionIds: true)
  // Do your processing with batch
  // ...
  startHash = batch.added[<last_element_index>]

  // A batched (partial) response contains at least 10 chain blocks, since the
  // number of merged blocks is limited to the mergeset limit x 10. Fewer than
  // 10 added blocks therefore means we've caught up.
  if len(batch.added) < 10:
    isCatchingUp = false

// Continue your normal pace of processing next batches with GetVirtualChainFromBlock
// ...

What's Changed

  • Fix WASM interface typo for normalBuckets and lowBuckets in IFeerateBucket by @Cryptok777 in #557
  • Fix new gRPC methods to use camel case (non-breaking change) by @michaelsutton in #560
  • virtual chain from block batching. by @D-Stacks in #454

New Contributors

Full Changelog: v0.15.1...v0.15.2-rc1