Releases: kaspanet/rusty-kaspa
Mainnet Crescendo Release - v1.0.0
This release introduces the Crescendo Hardfork, transitioning the Kaspa network from 1 BPS to 10 BPS. This marks a significant increase in transaction throughput and network capacity, as well as improved network responsiveness due to shorter block intervals, enhancing the overall user experience. Crescendo is scheduled to activate on mainnet at DAA Score 110,165,000, projected to occur on May 5, 2025, at approximately 15:00 UTC.
Starting 24 hours before activation, nodes will connect only to peers using the new P2P protocol version 7. Ensure your node is updated to maintain network connectivity.
Key highlights for Kaspa node maintainers
- 10 BPS Activation: Mainnet will transition from 1 BPS to 10 BPS.
- Retention Period Configuration: Operators now have greater control over data management with the new `retention-period-days` configuration. Due to the higher block rate, the pruning period has shortened from approximately 50 hours to 30 hours. Operators who wish to retain the same amount of historical data as before should specify the desired retention period using the new configuration and ensure sufficient storage capacity is available.
- Protocol Version Update: Nodes will switch to P2P protocol version 7. Ensure your node is upgraded to maintain connectivity.
Retention period configuration
The new `retention-period-days` parameter provides flexibility for node operators by determining how many days of historical data to retain.
Configuration | Type | Usage |
---|---|---|
`retention-period-days` | `f64` | The number of days of data to keep. Must be at least 2. If not set, the node defaults to keeping data only for the pruning period. |
Example: keep 2.5 days (= 60 hours) of data using the following command:

```
./kaspad --utxoindex --retention-period-days=2.5
```
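As an illustration of how the flag relates to the pruning period, here is a minimal Python sketch. The function name, the 30-hour pruning-period constant, and the validation logic are assumptions for illustration only; they do not mirror the node's actual implementation in rusty-kaspa.

```python
# Hypothetical sketch: resolve the effective data-retention window.
# Names and constants here are illustrative, not the node's real code.

PRUNING_PERIOD_HOURS = 30.0  # approximate pruning period at 10 BPS
MIN_RETENTION_DAYS = 2.0     # minimum accepted by --retention-period-days

def effective_retention_hours(retention_period_days=None):
    """Return how many hours of history the node would keep."""
    if retention_period_days is None:
        # Without the flag, only the pruning period is retained.
        return PRUNING_PERIOD_HOURS
    if retention_period_days < MIN_RETENTION_DAYS:
        raise ValueError("retention-period-days must be at least 2")
    return retention_period_days * 24.0

print(effective_retention_hours(2.5))  # 60.0, i.e. --retention-period-days=2.5
print(effective_retention_hours())     # 30.0, the default pruning window
```

The point of the sketch is the default: omitting the flag keeps only the (now shorter) pruning window, so operators who relied on the old ~50-hour window must opt in explicitly.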
Crescendo specification
For full details of the changes activated in this hardfork, refer to KIP-14.
Node upgrade guide
Ensure your node is updated to stay compatible with the Crescendo Hardfork. For detailed instructions on upgrading and configuring your node, refer to the Crescendo Guide.
Full Changelog: v0.16.1...v1.0.0
Release v0.17.2
What's Changed
- Retain index data up to retention period root by @coderofstuff in #657
- Mining Rule Engine Scaffolding and Sync Rate Rule implementation by @coderofstuff in #654
- Identify & Warn miners with outdated mining rpc flow by @michaelsutton in #658
- Bump to version 0.17.2 + misc by @michaelsutton in #661
Full Changelog: v0.17.1...v0.17.2
Release v0.17.1
Notes
- This version includes an optimization that significantly reduces base RAM usage, especially on 10 BPS networks
- It also includes a new configuration for the retention period. See below
New Configuration
Configuration | Type | Usage |
---|---|---|
`retention-period-days` | `f64` | The number of days of data to keep. Must be at least 2. If not set, the node defaults to keeping data only for the pruning period. |
Sample usage - keep 30 days' worth of data:

```
./kaspad --utxoindex --retention-period-days=30
```
What's Changed
- Crescendo TN10 node setup guide by @michaelsutton in #645
- Start the pruning proof per level search at the lowest required depth by @coderofstuff in #648
- Crescendo-related RAM optimizations & miscellaneous by @michaelsutton in #650
- Implement Retention Period Support by @coderofstuff in #592
Full Changelog: v0.17.0...v0.17.1
Testnet 10 Crescendo Release - v0.17.0
This pre-release of Kaspa 0.17.0 introduces support for the upcoming Crescendo Hardfork on Testnet 10 (TN10), scheduled to shift from 1 to 10 BPS on March 6, 2025. It is not recommended for production mainnet miners, but non-mining mainnet nodes may upgrade for early stability testing. Note that this version does not support TN11—those needing TN11 should remain on the latest stable release or stable branch.
For detailed instructions on setting up a TN10 node, generating transactions, and mining with this release, see the TN10 Crescendo Hardfork Node Setup Guide.
Stable Release - v0.16.1
What's Changed
- Implement UTXO Return Address RPC command by @coderofstuff in #436
- fix clippy, bump wasm dependencies by @biryukovmaxim in #624
- Instant time instead of SystemTime by @miningexperiments in #625
- calculate sig ops on fly by @biryukovmaxim in #597
- Proper tracking of the accumulative circulating supply store value by @michaelsutton in #629
New Contributors
- @miningexperiments made their first contribution in #625
Full Changelog: v0.16.0...v0.16.1
Stable Release - v0.16.0
Instructions for Node Maintainers
This is the first stable release that includes the database version upgrade introduced in #494. When you run this version with an existing datadir, you may be asked to upgrade your database version in the CLI. To proceed, type `y` or `yes` and hit enter. Alternatively, you can run this version with the `--yes` flag to skip the interactive question.
This database upgrade is a one-way path, which means after updating your datadir to the new version, you won't be able to go back to an earlier version.
Database Upgrade to v4
This release includes a database upgrade from v3 to v4. This upgrade includes:
- introducing new data stores for temp ghostdag stores used when building pruning proofs
- removing deprecated higher level ghostdag data entries - these are now calculated on-the-fly when needed (validating pruning proofs, building pruning proofs)
What's Changed
- rothschild: donate funds to external address with custom priority fee by @demisrael in #482
- fix wrong combiner condition by @biryukovmaxim in #567
- fix wRPC JSON notification message format by @aspect in #571
- Documentation updates by @aspect in #570
- WASM RPC method type updates by @aspect in #572
- Cleanup legacy bip39 cfg flags interfering with docs.rs documentation builds by @aspect in #573
- Bump tonic and prost versions, adapt middlewares by @biryukovmaxim in #553
- Fix README.md layout and add linting section by @gvbgduh in #488
- Bump tonic version by @michaelsutton in #579
- replace statrs and statest deps & upgrade some deps. by @D-Stacks in #425
- enhance tx inputs processing by @biryukovmaxim in #495
- Parallelize MuHash calculations by @coderofstuff in #575
- Muhash parallel reduce -- optimize U3072 mul when LHS = one by @michaelsutton in #581
- Rust 1.82 fixes + mempool std sig op count check by @michaelsutton in #583
- typo(cli/utils): kaspa wording by @IzioDev in #582
- On-demand calculation for Ghostdag for Higher Levels by @coderofstuff in #494
- Standartize fork activation logic by @someone235 in #588
- Refactoring for cleaner pruning proof module by @coderofstuff in #589
- Pruning proof minor improvements by @coderofstuff in #590
- Add KIP-10 Transaction Introspection Opcodes, 8-byte arithmetic and Hard Fork Support by @biryukovmaxim in #487
- Some simplification to script number types by @someone235 in #594
- feat: add signMessageWithoutRand method for kaspa wasm by @witter-deland in #587
- Optimize window cache building for ibd by @D-Stacks in #576
- Enable payloads for non coinbase transactions by @someone235 in #591
- Small fixes related to enabling payload by @someone235 in #605
- Fix new lints required by Rust 1.83 by @michaelsutton in #606
- IBD sync: recover sampled window by @michaelsutton in #598
- Track the average transaction mass throughout the mempool's lifespan by @michaelsutton in #599
- Create TN11 KIP10 HF activation and KIP9 beta switch by @coderofstuff in #595
- CI Update by @saefstroem in #622
- Bump version to 0.16.0 by @coderofstuff in #623
New Contributors
- @demisrael made their first contribution in #482
- @IzioDev made their first contribution in #582
- @witter-deland made their first contribution in #587
Full Changelog: v0.15.2...v0.16.0
Release Candidate v0.15.4
Instructions for Node Maintainers
This is the first release that includes the database version upgrade introduced in #494. When you run this version with an existing datadir, you may be asked to upgrade your database version in the CLI. To proceed, type `y` or `yes` and hit enter. Alternatively, you can run this version with the `--yes` flag to skip the interactive question.
This database upgrade is a one-way path, which means after updating your datadir to the new version, you won't be able to go back to an earlier version.
What's Changed
- rothschild: donate funds to external address with custom priority fee by @demisrael in #482
- fix wrong combiner condition by @biryukovmaxim in #567
- fix wRPC JSON notification message format by @aspect in #571
- Documentation updates by @aspect in #570
- WASM RPC method type updates by @aspect in #572
- Cleanup legacy bip39 cfg flags interfering with docs.rs documentation builds by @aspect in #573
- Bump tonic and prost versions, adapt middlewares by @biryukovmaxim in #553
- Fix README.md layout and add linting section by @gvbgduh in #488
- Bump tonic version by @michaelsutton in #579
- replace statrs and statest deps & upgrade some deps. by @D-Stacks in #425
- enhance tx inputs processing by @biryukovmaxim in #495
- Parallelize MuHash calculations by @coderofstuff in #575
- Muhash parallel reduce -- optimize U3072 mul when LHS = one by @michaelsutton in #581
- Rust 1.82 fixes + mempool std sig op count check by @michaelsutton in #583
- typo(cli/utils): kaspa wording by @IzioDev in #582
- On-demand calculation for Ghostdag for Higher Levels by @coderofstuff in #494
- Standartize fork activation logic by @someone235 in #588
- Refactoring for cleaner pruning proof module by @coderofstuff in #589
- Pruning proof minor improvements by @coderofstuff in #590
- Add KIP-10 Transaction Introspection Opcodes, 8-byte arithmetic and Hard Fork Support by @biryukovmaxim in #487
- Some simplification to script number types by @someone235 in #594
- feat: add signMessageWithoutRand method for kaspa wasm by @witter-deland in #587
- Optimize window cache building for ibd by @D-Stacks in #576
- Enable payloads for non coinbase transactions by @someone235 in #591
- Small fixes related to enabling payload by @someone235 in #605
- Fix new lints required by Rust 1.83 by @michaelsutton in #606
- IBD sync: recover sampled window by @michaelsutton in #598
- Track the average transaction mass throughout the mempool's lifespan by @michaelsutton in #599
- Create TN11 KIP10 HF activation and KIP9 beta switch by @coderofstuff in #595
New Contributors
- @demisrael made their first contribution in #482
- @IzioDev made their first contribution in #582
- @witter-deland made their first contribution in #587
Full Changelog: v0.15.2...v0.15.4-rc1
Testnet 11 Only v0.15.4
WARNING: INTENDED FOR USE IN TESTNET 11 ONLY
This is a special release with code not in the mainline yet. It is intended to hardfork the testnet-11 environment and should only be used by Testnet-11 node maintainers.
The TN11 HF includes:
- Enabling KIP10
- Switching KIP9 to use the Beta version in consensus
- Enabling Payloads for transactions
In addition, this release includes all the optimizations we have been working on, which will allow us to determine a minimum recommended hardware spec for 10 BPS on mainnet.
Instructions for Node Maintainers
- This is the first release that includes the database version upgrade introduced in #494. When you run this version with an existing datadir, you may be asked to upgrade your database version in the CLI. To proceed, type `y` or `yes` and hit enter. Alternatively, you can run this version with the `--yes` flag to skip the interactive question.
- This database upgrade is a one-way path, which means after updating your datadir to the new version, you won't be able to go back to an earlier version.
Instructions to build from source
- Git pull and check out the `kip10-tn11-hf` branch
- Build your binary from there. This release is on commit 14b1e10
Release v0.15.2
Released mainly for Integrators using `GetVirtualChainFromBlock`
NOTE: If you do not use `GetVirtualChainFromBlock` in your integration, you do not need to update to this version.
This release updates `GetVirtualChainFromBlock` so that it operates in batches. This solves an issue where a call to `GetVirtualChainFromBlock` would previously take a long time to complete if the client had to sync virtual chain state from a deep chain block after a period of being unsynced. Each call now returns promptly, but it also means you have to call `GetVirtualChainFromBlock` multiple times if you're syncing from a deep chain block.
To take advantage of this new batching mechanism, make sure that you continue calling `GetVirtualChainFromBlock` until your software has caught up to the tips of the network. For reference, the pseudo-code is:
```
startHash = <the_last_hash_you_synced_from>
isCatchingUp = true

// Catch-up loop. Expecting to do faster catch-up logic here
while isCatchingUp:
    batch = GetVirtualChainFromBlock(startHash, includeTransactionIds: true)
    // Do your processing with batch
    // ...
    startHash = batch.added[<last_element_index>]
    if len(batch.added) < 10:
        // If the response was batched it will contain at least 10 chain blocks
        // (because we limit the number of merged blocks by mergeset limit x 10);
        // otherwise, we've caught up and can proceed with normal batch processing
        isCatchingUp = false

// Continue your normal pace of processing next batches with GetVirtualChainFromBlock
// ...
```
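The catch-up loop above can be sketched as runnable Python. The `FakeClient` below is a stand-in stub (a real integration would issue the gRPC/wRPC call instead), so only the loop structure and termination condition are meaningful:

```python
# Sketch of the GetVirtualChainFromBlock catch-up loop. FakeClient stands in
# for a real RPC client; its batch sizes mimic a node that returns large
# batches while the caller is behind and a small final batch once caught up.

class FakeClient:
    def __init__(self, batches):
        self._batches = iter(batches)

    def get_virtual_chain_from_block(self, start_hash, include_transaction_ids=True):
        # A real client would perform the RPC call here.
        return next(self._batches)

def sync_virtual_chain(client, start_hash, process_batch):
    """Call GetVirtualChainFromBlock repeatedly until a small batch arrives,
    which signals we have caught up to the network tips."""
    catching_up = True
    while catching_up:
        batch = client.get_virtual_chain_from_block(start_hash, include_transaction_ids=True)
        process_batch(batch)
        start_hash = batch["added"][-1]
        if len(batch["added"]) < 10:
            # Batched responses contain at least 10 chain blocks;
            # fewer means we've reached the tips.
            catching_up = False
    return start_hash

# Two large batches while behind, then a small batch once caught up.
client = FakeClient([
    {"added": [f"hash{i}" for i in range(100)]},
    {"added": [f"hash{i}" for i in range(100, 200)]},
    {"added": ["tip_hash"]},
])
final = sync_virtual_chain(client, "last_synced_hash", lambda batch: None)
print(final)  # tip_hash
```

Note that each iteration advances `start_hash` to the last added chain block, so no blocks are skipped between calls even if the process restarts mid-sync.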
What's Changed
- Fix WASM interface typo for `normalBuckets` and `lowBuckets` in `IFeerateBucket` by @Cryptok777 in #557
- Fix new gRPC methods to use camel case (non-breaking change) by @michaelsutton in #560
- `virtual chain from block` batching. by @D-Stacks in #454
- A few CLI rpc query fixes by @michaelsutton in #563
- Deploy linux binary without musl in its name + various minor miscellaneous things towards v0.15.2 by @michaelsutton in #564
New Contributors
- @Cryptok777 made their first contribution in #557
Full Changelog: v0.15.1...v0.15.2
Release Candidate v0.15.2-rc1
Note to Integrators using `GetVirtualChainFromBlock`
IMPORTANT: If you do not use `GetVirtualChainFromBlock` in your integration, you do not need to update to this version.
This release updates `GetVirtualChainFromBlock` so that it operates in batches. This solves an issue where a call to `GetVirtualChainFromBlock` would previously take a long time to complete if the client had to sync virtual chain state from a deep chain block after a period of being unsynced. Each call now returns promptly, but it also means you have to call `GetVirtualChainFromBlock` multiple times if you're syncing from a deep chain block.
To take advantage of this new batching mechanism, make sure that you continue calling `GetVirtualChainFromBlock` until your software has caught up to the tips of the network. For reference, the pseudo-code is:
```
startHash = <the_last_hash_you_synced_from>
isCatchingUp = true

// Catch-up loop. Expecting to do faster catch-up logic here
while isCatchingUp:
    batch = GetVirtualChainFromBlock(startHash, includeTransactionIds: true)
    // Do your processing with batch
    // ...
    startHash = batch.added[<last_element_index>]
    if len(batch.added) < 10:
        // If the response was batched it will contain at least 10 chain blocks
        // (because we limit the number of merged blocks by mergeset limit x 10);
        // otherwise, we've caught up and can proceed with normal batch processing
        isCatchingUp = false

// Continue your normal pace of processing next batches with GetVirtualChainFromBlock
// ...
```
What's Changed
- Fix WASM interface typo for `normalBuckets` and `lowBuckets` in `IFeerateBucket` by @Cryptok777 in #557
- Fix new gRPC methods to use camel case (non-breaking change) by @michaelsutton in #560
- `virtual chain from block` batching. by @D-Stacks in #454
New Contributors
- @Cryptok777 made their first contribution in #557
Full Changelog: v0.15.1...v0.15.2-rc1