
Replace generate and verify pop into original key ownership PR #9263


Open
wants to merge 1,440 commits into base: bkchr-set-keys-proof

Conversation

dharjeezy
Contributor

It also references and closes this issue.

dmitry-markin and others added 30 commits April 30, 2025 13:09
…h#8072)

Implement [RFC-0008 "DHT
bootnodes"](https://polkadot-fellows.github.io/RFCs/approved/0008-parachain-bootnodes-dht.html).
Close paritytech#1825.

With this mechanism, every parachain node is eligible to act as a
bootnode. If its peer ID is close to the parachain key for the current
relay chain epoch, it becomes discoverable by other parachain nodes via
the relay chain DHT. This removes the need to specify bootnodes in the
parachain chainspec, eliminating a single point of failure and
simplifying things for parachain operators.

The mechanism is enabled by default. The embedded DHT bootnode can be
disabled using the `--no-dht-bootnode` flag, and discovery of such nodes
can be disabled with the `--no-dht-bootnode-discovery` flag.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Update the SHA for the evm-test suite

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Litep2p Release

This release brings several improvements and fixes to litep2p, advancing
its stability and readiness for production use.

### Performance Improvements

This release addresses an issue where notification protocols failed to
exit on handle drop, lowering CPU usage in scenarios like
minimal-relay-chains from 7% to 0.1%.

### Robustness Improvements

- Kademlia:
  - Optimized the address store by sorting addresses based on dialing score, bounding memory consumption and improving efficiency.
  - Limited `FIND_NODE` responses to the replication factor, reducing data stored in the routing table.
  - Address store improvements enhance robustness against routing table alterations.

- Identify Codec:
  - Enhanced message decoding to manage malformed or unexpected messages gracefully.

- Bitswap:
  - Introduced a write timeout for sending frames, preventing protocol hangs or delays.

### Testing and Reliability

- Fuzzing Harness: Added a fuzzing harness by SRLabs to uncover and
resolve potential issues, improving code robustness. Thanks to @R9295
for the contribution!

- Testing Enhancements: Improved notification state machine testing.
Thanks to Dominique (@Imod7) for the contribution!

### Dependency Management

- Updated all dependencies for stable feature flags (default and
"websocket") to their latest versions.

- Reorganized dependencies under specific feature flags, shrinking the
default feature set and avoiding exposure of outdated dependencies from
experimental features.

### Fixed

- notifications: Exit protocols on handle drop to save CPU on
`minimal-relay-chains`
([paritytech#376](paritytech/litep2p#376))
- identify: Improve identify message decoding
([paritytech#379](paritytech/litep2p#379))
- crypto/noise: Set timeout limits for the noise handshake
([paritytech#373](paritytech/litep2p#373))
- kad: Improve robustness of addresses from the routing table
([paritytech#369](paritytech/litep2p#369))
- kad: Bound kademlia messages to the replication factor
([paritytech#371](paritytech/litep2p#371))
- codec: Decode smaller payloads for identity to None
([paritytech#362](paritytech/litep2p#362))

### Added

- bitswap: Add write timeout for sending frames
([paritytech#361](paritytech/litep2p#361))
- notif/tests: check test state
([paritytech#360](paritytech/litep2p#360))
- SRLabs: Introduce simple fuzzing harness
([paritytech#367](paritytech/litep2p#367))
- SRLabs: Introduce Fuzzing Harness
([paritytech#365](paritytech/litep2p#365))

### Changed

- features: Move quic related dependencies under feature flag
([paritytech#359](paritytech/litep2p#359))
- tests/substrate: Remove outdated substrate specific conformance testing
([paritytech#370](paritytech/litep2p#370))
- ci: Update stable dependencies
([paritytech#375](paritytech/litep2p#375))
- build(deps): bump hex-literal from 0.4.1 to 1.0.0
([paritytech#381](paritytech/litep2p#381))
- build(deps): bump tokio from 1.44.1 to 1.44.2 in /fuzz/structure-aware
([paritytech#378](paritytech/litep2p#378))
- build(deps): bump Swatinem/rust-cache from 2.7.7 to 2.7.8
([paritytech#363](paritytech/litep2p#363))
- build(deps): bump tokio from 1.43.0 to 1.43.1
([paritytech#368](paritytech/litep2p#368))
- build(deps): bump openssl from 0.10.70 to 0.10.72
([paritytech#366](paritytech/litep2p#366))

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
Within the staking elections, while balances are `u128`, we want to
downscale them all to `u64` for further calculations. For this, we divide
the total issuance by `u64::MAX` and downscale everyone by that factor.
This is what `U128CurrencyToVote` does, and it is fed to `pallet-staking`
as `type CurrencyToVote`.

At the moment, the WND total issuance is around 100 times more than
`u64::MAX`, so all stakes in the election process get downscaled by a
factor of 100. Note that this downscaled version is also used in
`voter-list`, which is why we see (as reported by Nova) that a nominator
with 100 WND lands in the `voter-list` bag associated with 1 WND.
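
A simplified sketch of this downscaling, assuming the shape of `U128CurrencyToVote` rather than its exact code:

```rust
// The scale factor grows with total issuance: ~1 while issuance fits in a
// u64, ~100 in the Westend situation described above.
fn factor(total_issuance: u128) -> u128 {
    (total_issuance / u64::MAX as u128).max(1)
}

// Every stake is divided by the factor before entering the election,
// which is why 100 WND can land in the 1 WND bag.
fn to_vote(stake: u128, total_issuance: u128) -> u64 {
    (stake / factor(total_issuance)).min(u64::MAX as u128) as u64
}
```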

The fix proposed here is also a sane way to fix this: even if the total
issuance is more than `u64::MAX`, the likelihood of a single staker's
stake being more than `u64::MAX` is very low. And if it is, so be it.
This would mean that whatever stake they have above `u64::MAX` cannot be
used in the staking election process and would remain unused.

Beyond changing westend, this PR will add a check to the `try-state` of
both `pallet-staking` and `pallet-staking-async` to warn us about this.

Long term fix: paritytech#406

---------

Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
…fig (paritytech#8339)

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
# Description

Currently `assert_expected_events` does not fail (but it should) when an
event is missing because nothing matched the requested pattern.

E.g. in the example below, the test will succeed whether `success ==
true` or `success == false`: if there is an event matching the pattern,
it succeeds as expected, but if there is no event matching the pattern,
it silently passes as well.
```
assert_expected_events!(
		AssetHubRococo,
		vec![
			RuntimeEvent::MessageQueue(
				pallet_message_queue::Event::Processed { success: true, .. }
			) => {},
		]
	);
```
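
The essence of the fix, as a hand-rolled sketch of what the macro should assert (not the actual macro expansion): every pattern must match at least one event, otherwise the test fails instead of silently passing.

```rust
// Sketch: a pattern that matches no event must fail the test.
let mut matched = false;
for event in events.iter() {
    if matches!(
        event,
        RuntimeEvent::MessageQueue(pallet_message_queue::Event::Processed { success: true, .. })
    ) {
        matched = true;
    }
}
assert!(matched, "no event matched the expected pattern");
```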

## Review Notes
I was looking for a way to implement some unit tests for the macro in
the `xcm-emulator` crate itself, but many traits are involved (`Chain`,
`TestExt`, `Network`), and since `assert_expected_events` internally
calls `let mut events = <$chain as $crate::Chain>::events();`, it
requires a lot of mocking code (it is not really possible to override
the traits due to the `$crate::` paths).

I think this macro could benefit from changing its interface to allow
passing the `events` vec via parameters, but that is a major change and
would not allow for backports. It can be done in a separate PR of
course, if you also like the idea.

E.g. proposed new usage (not part of this PR, just an idea for the future):
```
assert_expected_events!(
		<AssetHubRococo as Chain>::events(),
		vec![
			RuntimeEvent::MessageQueue(
				pallet_message_queue::Event::Processed { success: true, .. }
			) => {},
		]
	);
```
Adds two self-explanatory view functions to staking/election-related
pallets. To be used by wallets wishing to perform the rebag operation,
and by staking miner(s) to know how much deposit they need in advance.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
This PR removes some output from cmd-bot and moves it into artifacts.

cc paritytech#8195
## Description

Removed `TakeFirstAssetTrader` from Asset Hub Westend and Rococo.
Improved macros, fixed tests.

This implies asset sufficiency no longer guarantees that weight can also
be bought with it. `SwapFirstAssetTrader` is used instead which will
attempt to swap some of the given asset for the required amount of
native asset to buy weight. This may or may not succeed depending on
whether there is a local pool present with enough liquidity to serve the
swap.

## Review notes

Additionally parametrised the macro and fixed the Westend test: weight
swapping was failing at [this
line](https://github.com/paritytech/polkadot-sdk/blob/44ae6a8bebd23a8ffac02d71c6e74ee889c3ab00/substrate/frame/asset-conversion/src/lib.rs#L903)
with around a 100x difference, so the macro had to be modified.

Fixes paritytech#8233

---------

Co-authored-by: Adrian Catangiu <adrian@parity.io>
… just the statement data. (paritytech#8314)

Some statements contain a proof with the signature of the statement;
this proof is useful to assert that the statement comes from the
expected account.

Alternatively, we could always add a signature inside the encrypted
data, but this means we sign twice, and it isn't necessary in some
cases.

What do you think @arkpar ?

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This PR contains the following fixes for issues found in the Snowbridge
V2 audit.

## Failure to advance nonce on error allows repeated execution of failed
events without relayer incentives (issue 1)

_Severity: Critical_

In `bridges/snowbridge/pallets/inbound-queue-v2/src/lib.rs:216`, the
`submit` extrinsic allows submission of inbound events originating from
Ethereum. These events are decoded into messages and processed by the
`process_message` function.

However, if the `send_xcm` operation fails, the transaction reverts
without storing the nonce or registering rewards. Consequently, the same
inbound message remains unmarked and eligible for re-execution.

This creates multiple risks: the same message can be replayed multiple
times, potentially in varying orders and at a timing chosen by the
relayer under different conditions, potentially causing unintended
state transitions.

Additionally, since rewards are only registered upon successful
execution, relayers are incentivized to reorder messages to maximize
successful transactions, potentially at the expense of the user and
system fairness.

**Recommendation**
We recommend storing the nonce and reward relayers also in case the
send_xcm operation fails.

**Snowbridge notes:**
We dispute this finding, because the inbound-queue can never replay
messages because it checks the nonce. Also, by allowing send_xcm to fail
without reverting the whole transaction, we are essentially killing the
message and preventing it from being retried at a later time. This will
lead to a loss of user funds. Message sends can fail for a number of
reasons, like if the HRMP queues between parachains are full, and that's
something we don't have control over. So we need to allow processing a
message again if the send XCM fails. We did however move setting the
nonce to before the send XCM method, even though it does not make a
difference functionally, it is better semantically.
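
A rough sketch of that reordering (the `Nonce` storage item and error name here are illustrative, not the actual pallet code):

```rust
// Record the nonce before dispatching, then let a send failure revert the
// whole extrinsic: the message stays retryable (e.g. when HRMP queues are
// full) and no user funds are stranded.
Nonce::<T>::set(envelope.nonce);
send_xcm::<T::XcmSender>(destination, xcm).map_err(|_| Error::<T>::Send)?;
```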

## Reward misallocation due to ignored `reward_address` field (issue 5)
_Severity: Major_

In `bridges/snowbridge/pallets/outbound-queue-v2/src/lib.rs:374`, the
`process_delivery_receipt` function processes a `DeliveryReceipt` parameter.

However, while the function correctly validates the gateway and nonce
fields, it disregards the `reward_address` field contained within the
`DeliveryReceipt` structure.

As a result, delivery rewards are always assigned to the transaction
sender rather than to the beneficiary specified by the `reward_address`.

**Recommendation**
We recommend validating and utilizing all fields of the DeliveryReceipt
structure to correctly allocate rewards.

## Non-sequential call indexes in systemv2 pallet (issue 15)
_Severity: Informational_

In `bridges/snowbridge/pallets/system-v2/src/lib.rs`, the call indexes
for extrinsics are inconsistently assigned.

Specifically, while indexes 0, 3, and 4 are implemented, indexes 1 and 2
remain undefined.

Although this does not currently pose a functional or security risk, the
non-sequential indexing undermines the clarity and consistency of the
codebase. In the context of a newly developed pallet, maintaining
orderly call indexes aids in readability, developer experience, and
future maintainability.

**Recommendation:**
We recommend assigning sequential call indexes to all implemented
extrinsics within the `systemv2` pallet.

---

## Incorrect NatSpec in register_token function (issue 17)
_Severity: Informational_

In `bridges/snowbridge/pallets/system-v2/src/lib.rs:182`, the
`register_token` function is documented to include a `fee` parameter,
but no such parameter is actually used in the function.

**Recommendation:**
We recommend updating the NatSpec documentation to accurately reflect
the function parameters and behavior, removing any reference to a
non-existent fee.

---

## Potentially confusing message hashing implementation (issue 21)
_Severity: Informational_

In `bridges/snowbridge/pallets/system-v2/src/lib.rs:230-235`, the
`Message` struct includes an `id` field, which is intended to store the
`blake2_256` hash of the message itself.

However, during the hash computation, the `id` field is first
initialized with its default value. The hash is then calculated over the
struct containing this default `id`, and only afterward is the computed
hash assigned back to the `id` field.

This approach, while functional, introduces complexity for external
systems or tools attempting to verify the message. Verifiers must
replicate this specific sequence: extract the `id` value from the
message, reset the `id` field in the struct to its default value,
compute the hash, and then compare it against the originally extracted
`id`.
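
In other words, a verifier has to do something like the following sketch (assuming `Message: Clone + Encode` and `blake2_256` as in `sp_io::hashing`):

```rust
// Sketch of the verification sequence described above.
fn verify_message_id(message: &Message) -> bool {
    let claimed_id = message.id;
    let mut scratch = message.clone();
    scratch.id = Default::default(); // reset `id` to its default value
    // hash the struct with the default id, then compare with the claim
    blake2_256(&scratch.encode()) == claimed_id
}
```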

**Recommendation:**
We recommend implementing a wrapper data structure over the Message that
would contain its hash as a separate member, not part of the Message
itself.

**Snowbridge notes:**
We used `unique_id` to generate an ID instead.

---

## Optimize SparseBitmap (issue 22)
_Severity: Informational_

In `bridges/snowbridge/primitives/core/src/sparse_bitmap.rs:9`, the
mapping from buckets to masks that defines the `SparseBitmap` structure
is declared. The type of keys in this mapping is `u128`.

Then, within lines 22-27, the function `compute_bucket_and_mask` is
defined, which accepts an `index` parameter of type `u128`. This
function is called from both the `get` and `set` functions, which are
called only with nonces of type `u64`. The type of the `index` parameter
can therefore be replaced by `u64`.

The function `compute_bucket_and_mask` computes the index of the
corresponding bucket by dividing the index by 128, so the value range of
the output is less than the value range of the input. Hence, the return
type of the function, as well as the type of keys in the mapping
declared on line 9, can be declared as `u64`.

Currently, the Rust implementation of SparseBitmap utilizes 128 times
more space than it could.

**Recommendation:**
We recommend optimizing the storage consumption by adopting `u64` as the
types of keys in the internal mappings of both Rust and Solidity
implementations of SparseBitmap.
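
A sketch of the suggested narrowing (the reviewer's recommendation, not the shipped code):

```rust
// With a u64 index, the bucket key also fits in u64: index / 128 only
// shrinks the value range. The mask remains a 128-bit word.
fn compute_bucket_and_mask(index: u64) -> (u64, u128) {
    let bucket = index / 128;          // which 128-bit word holds the bit
    let mask = 1u128 << (index % 128); // bit position within that word
    (bucket, mask)
}
```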

---

## Misleading implementation details (issue 25)
_Severity: Informational_

In `bridges/snowbridge/primitives/inbound-queue/src/lib.rs:23`, the
commentary states that the structure `EthereumLocationsConverterFor` is
deprecated. However, it is still used in the V2 source file:
`bridges/snowbridge/primitives/inbound-queue/src/v2/converter.rs`.

**Recommendation:**
We recommend resolving the aforementioned misleading implementation
details.

---

---------

Co-authored-by: Ron <yrong1997@gmail.com>
Co-authored-by: Vincent Geddes <117534+vgeddes@users.noreply.github.com>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
…ribute and `AuthorizeCall` system transaction extension (paritytech#6324)

## Meta 

This PR is part of a series of 4 PRs:
* paritytech#6323
* paritytech#6324
* paritytech#6325
* paritytech#6326

## Description

* new attribute `#[pallet::authorize(..)]`: this attribute takes a
function which returns the validity of the call.
* new attribute `#[pallet::weight_of_authorize(..)]`: like
`#[pallet::weight(..)]`, it defines the pre-dispatch weight of the
`authorize` function. It can also be retrieved from `WeightInfo` under
the name `authorize_$call_name`.
* new trait `Authorize` in frame-support: implemented on the call for
pallets and runtime, and used by the `AuthorizeCall` transaction
extension in frame-system.
* new origin variant in the frame origin: `Origin::Authorized`, a bit
similar to `Unsigned` but used for general transactions.
* new transaction extension: `AuthorizeCall` in frame-system. This is
meant to be used first in the transaction extension pipeline. It will
call the authorize function and change the origin to `Origin::Authorized`.
* new method: `ensure_authorized`.

## Usage

```rust
# #[allow(unused)]
#[frame_support::pallet]
pub mod pallet {
    use frame_support::pallet_prelude::*;
    use frame_system::pallet_prelude::*;
                                                                                   
    #[pallet::pallet]
    pub struct Pallet<T>(_);
                                                                                   
    #[pallet::config]
    pub trait Config: frame_system::Config {}
                                                                                   
    #[pallet::call]
    impl<T: Config> Pallet<T> {
        #[pallet::weight(Weight::zero())]
        #[pallet::authorize(|_source, foo| if *foo == 42 {
            let refund = Weight::zero();
            let validity = ValidTransaction::default();
            Ok((validity, refund))
        } else {
            Err(TransactionValidityError::Invalid(InvalidTransaction::Call))
        })]
        #[pallet::weight_of_authorize(Weight::zero())]
        #[pallet::call_index(0)]
        pub fn some_call(origin: OriginFor<T>, arg: u32) -> DispatchResult {
            ensure_authorized(origin)?;
                                                                                   
            Ok(())
        }
                                                                                   
        #[pallet::weight(Weight::zero())]
        // We can also give the callback as a function
        #[pallet::authorize(Pallet::<T>::authorize_some_other_call)]
        #[pallet::weight_of_authorize(Weight::zero())]
        #[pallet::call_index(1)]
        pub fn some_other_call(origin: OriginFor<T>, arg: u32) -> DispatchResult {
            ensure_authorized(origin)?;
                                                                                   
            Ok(())
        }
    }
                                                                                   
    impl<T: Config> Pallet<T> {
        fn authorize_some_other_call(
            source: TransactionSource,
            foo: &u32
        ) -> TransactionValidityWithRefund {
            if *foo == 42 {
                let refund = Weight::zero();
                let validity = ValidTransaction::default();
                Ok((validity, refund))
            } else {
                Err(TransactionValidityError::Invalid(InvalidTransaction::Call))
            }
        }
    }
                                                                                   
    #[frame_benchmarking::v2::benchmarks]
    mod benchmarks {
        use super::*;
        use frame_benchmarking::v2::BenchmarkError;
                                                                                   
        #[benchmark]
        fn authorize_some_call() -> Result<(), BenchmarkError> {
            let call = Call::<T>::some_call { arg: 42 };
                                                                                   
            #[block]
            {
                use frame_support::pallet_prelude::Authorize;
                call.authorize(TransactionSource::External)
                    .ok_or("Call must give some authorization")??;
            }
                                                                                   
            Ok(())
        }
    }
}
```

---------

Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This one properly addresses
paritytech#8378
Now the template `Cargo.toml` will be populated at release time, so we
don't need to alter it in the SDK.

---------

Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
…check prdoc for the backports PRs (paritytech#8435)

This PR makes the same adjustment that was done for the `check-prdoc`
job:
- In backport PRs, the original PR number will be used to check the
prdoc and the introduced changes
…ch#8281)

# Description

Add a common implementation for
`XcmPaymentApi::query_weight_to_asset_fee` to `pallet-xcm`.

This PR is a simple alternative to paritytech#8202 (which could still be useful
for other reasons).
It uses a workaround instead of a big refactoring.

The workaround computes the weight cost using the provided
`WeightTrader`. This function is supposed to be used ONLY in the
`XcmPaymentApi::query_weight_to_asset_fee` runtime API implementation,
as it can introduce a massive change to the total issuance. The provided
`WeightTrader` must be the same as the one used in the XcmExecutor to
ensure uniformity in the weight cost calculation.

NOTE: Currently this function uses a workaround that should be good
enough for all practical uses: it passes `u128::MAX / 2 == 2^127` of the
specified asset to the `WeightTrader` as payment and computes the weight
cost as the difference between this and the unspent amount.
Some weight traders could add the provided payment to some account's
balance. However,
it should practically never result in overflow because even currencies
with a lot of decimal digits
(say 18) usually have the total issuance of billions (`x * 10^9`) or
trillions (`x * 10^12`) at max,
much less than `2^127 / 10^18 =~ 1.7 * 10^20` (170 billion billion).
Thus, any account's balance
most likely holds less than `2^127`, so adding `2^127` won't result in
`u128` overflow.
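
Condensed, the trick looks like this (a sketch: `ctx()` and `amount_of()` are hypothetical helpers for illustration, not pallet-xcm API):

```rust
// Hand the trader 2^127 units of the asset, then read back what was left
// unspent; the difference is the fee for `weight`.
fn weight_to_asset_fee(trader: &mut impl WeightTrader, weight: Weight, asset: AssetId) -> u128 {
    let offered: u128 = u128::MAX / 2; // 2^127, see the overflow note above
    let unspent = trader
        .buy_weight(weight, (asset.clone(), offered).into(), &ctx()) // hypothetical context helper
        .expect("asset must be accepted for fee payment");
    offered - amount_of(&asset, &unspent) // hypothetical amount extraction
}
```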


## Integration

The Runtime builders can use the `query_weight_to_asset_fee` provided by
`pallet-xcm` in
their XcmPaymentApi implementation.

---------

Co-authored-by: Adrian Catangiu <adrian@parity.io>
…ech#8405)

Use runtime calls to get epoch randomness only on startup, and later get
it from the next epoch descriptor at the first block of every epoch.

Resolves paritytech#8377.

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
)

Follow-up from: paritytech#5567
Read comment here: 

paritytech#5567 (comment)

1. The idea is to introduce an `export-chain-spec` command that does the
chain-spec exporting. `build-spec` will work as before, but emit a
deprecation message on usage.
2. For now, this new command will focus on the ability to take the
chain-specs embedded in the node and "export" them to a JSON file.
3. We will not carry the bootnode functionality that the `build-spec`
command has right now over into the new command; if along the way we
decide otherwise, we can include it.
4. Part of this PR is also to display the message that `build-spec`
will soon be on a deprecation path.
5. It also exports a trait that allows extra subcommand definition and
usage for a `polkadot-omni-node-lib`-based node.

---------

Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
Co-authored-by: Iulian Barbu <iulian.barbu@parity.io>
Co-authored-by: Serban Iorga <serban300@gmail.com>
Co-authored-by: Serban Iorga <serban@parity.io>
Part of paritytech#6504

---------

Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
Co-authored-by: Guillaume Thiolliere <guillaume.thiolliere@parity.io>
Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
…ue (paritytech#8441)

The prdoc from paritytech#8327 did not reference all crates that were touched. I
added them here.
Not the root cause of the block time issue, but the rep change is
certainly not helping.

---------

Co-authored-by: Robert <robert@gonimo.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Use a 50% slash fraction for fork voting and future block voting (in
accordance with the https://eprint.iacr.org/2025/057.pdf paper).

In order to account for the possible risk of accidental/non-malicious
double voting, keep the current formula for double voting proof.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
…sitable dust (paritytech#8351)

# Description

This PR enhances the XCM executor’s `deposit_assets_with_retry` so that
**dust deposits** (below the chain’s existential deposit) no longer
abort the entire batch. Instead, any `TokenError::BelowMinimum` is
treated as non‐fatal: the dust portion is burned, and the executor
continues depositing the rest of the assets.
Once this lands, complex XCMs that transfer multiple assets will no
longer fail outright when the leftover fee asset after buying execution
becomes dust.
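
Conceptually, the per-asset deposit loop now treats the below-minimum error as soft (a sketch with hypothetical `deposit_asset`/`is_below_minimum`/`burn` helpers, not the executor's exact code):

```rust
for asset in assets.into_iter() {
    match deposit_asset(&asset, &beneficiary) {
        Ok(()) => {}
        // Dust below the existential deposit: burn it and keep depositing
        // the remaining assets instead of aborting the whole batch.
        Err(err) if is_below_minimum(&err) => burn(asset),
        Err(err) => return Err(err),
    }
}
```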

Fixes issue paritytech#4408

## Integration

No downstream runtime APIs are changed. Existing parachain and
relay‐chain integrations of the XCM executor will automatically benefit
from this fix without modification.
No deprecation or migration steps are required by users of
`XcmExecutor`.

---------

Co-authored-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
… to master (paritytech#8428)

This PR backports regular version bumps from the release branch
`stable2503` back to `master`
…aritytech#8370)

## Issue
Fixes paritytech#6733.

## TODO
- [x] Fix the issue;
- [x] Add unit tests;
- [x] Add PRdoc.
Replaced broken links to corresponding new sources
Populate build profiles only after all other modifications are done

---------

Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
also closes paritytech#2650

## Summary

This PR removes the concept of slashing spans (`SlashingSpans`,
`SpanSlash`) and all related metadata from `pallet-staking-async`.

While working on this, I noticed some issues with the current span
logic:
- Spans were always starting and ending during the processing of an
offence, meaning they never actually spanned across multiple eras
(likely due to changes in logic over time).
- We don’t chill validators either, so the core reason for slashing
spans isn't exercised.

Because of these factors, slashing spans were not serving their intended
purpose. Removing them simplifies the slashing logic significantly and
allows us to drop a good chunk of unnecessary code.

## API Changes (pallet-staking-async)
### Removed
- StorageMap `SlashingSpans`
- StorageMap `SpanSlash`
- Error `IncorrectSlashingSpans`

### Deprecated
For the following extrinsics, the parameter `num_slashing_spans` is
deprecated and has no effect. It is kept for backward compatibility.
- `withdraw_unbonded`
- `force_unstake`
- `reap_stash`


## Functional Changes:
The key functional change is around slashing rewards:

Previously:
- The reward was 50% of 10% (`SlashRewardFraction`) of the slashed
amount.
- For each successive slash in the same era, the reward would halve
again (e.g., 50%, then 25%, then 12.5%, etc.).
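
For illustration, with a 1,000-token slash under the old rules: 10% (100 tokens) is set aside via `SlashRewardFraction`; the first report in an era pays out 50% of that (50 tokens), a second offence in the same era pays 25 tokens, a third 12.5, and so on.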

With this PR:
- Successive offences are still filtered to only keep the highest slash
per validator/nominator per era.
- Halving the reward on successive offences is removed.
- My take: this seems reasonable, since we already filter out weaker
offences.
- However, if we want to preserve this behaviour, we could still add a
counter of slashes per validator/nominator per era to implement the
halving logic.

## TODO
- [x] Race condition of offence test: a second offence comes before the
first is applied. This is already covered by `offence_discarded_correctly`.
- [x] Preserve extrinsic signatures.

---------

Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>
…release (paritytech#8469)

This PR backports regular version bumps and prdocs reordering from the
stable2503 branch back to master
raymondkfcheung and others added 19 commits July 2, 2025 10:06
Corrected markdown and indentation for the `emit_sent_event` function
parameters in the `EventEmitter` trait documentation for better
readability.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…9050)

Fixes a potential race with off-chain disabling when we learned about
disablement after importing a dispute from that validator.

I think there's no need to handle startup to do deactivation. This will
only be relevant when a node upgrades to the release with the fix, and
writing a migration for that seems like overkill since this scenario
is very low probability.
Implementation of paritytech#8758

# Description
The Authority Discovery crate has been changed so that the `AddrCache`
is persisted to `persisted_cache_file_path`, a JSON file in the
`net_config_path` folder controlled by `NetworkConfiguration`.

The `AddrCache` is JSON-serialized (`serde_json::to_string_pretty`) and
persisted to file:
- periodically (every 10 minutes)
- on shutdown

Furthermore, this persisted `AddrCache` file will be read upon start of
the worker; if it does not exist, or we fail to deserialize it, a new
empty cache is used.

`AddrCache` is made Serialize/Deserialize thanks to `PeerId` and
`Multiaddr` being made Serialize/Deserialize.

# Implementation
The worker uses a spawner in its [run loop, where at an interval we try
to persist the
AddrCache](https://github.com/paritytech/polkadot-sdk/blob/cyon/persist_peers_cache/substrate/client/authority-discovery/src/worker.rs#L361-L372).
We won't persist the `AddrCache` if `persisted_cache_file_path:
Option<PathBuf>` is `None`, which it would be if
[`NetworkConfiguration`'s
`net_config_path`](https://github.com/paritytech/polkadot-sdk/blob/master/substrate/client/network/src/config.rs#L591)
is `None`. We spawn a new task each time the `interval` "ticks" (once
every 10 minutes), and it uses `fs::write` (there is also a
`tokio::fs::write`, which requires the `fs` feature flag of `tokio`;
that flag is not activated, and I chose not to use it). If the worker
shuts down, we will try to persist without using the `spawner`.
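
Condensed, the periodic persistence looks roughly like this (a sketch; the real worker drives this from its own run loop via the spawner):

```rust
use std::{path::PathBuf, time::Duration};

// Every 10 minutes, serialize the cache to pretty JSON and write it out;
// skip persistence entirely when no path is configured.
async fn persist_loop(cache: AddrCache, path: Option<PathBuf>) {
    let Some(path) = path else { return };
    let mut interval = tokio::time::interval(Duration::from_secs(10 * 60));
    loop {
        interval.tick().await;
        if let Ok(json) = serde_json::to_string_pretty(&cache) {
            let _ = std::fs::write(&path, json);
        }
    }
}
```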

# Changes
- New crate dependency: `serde_with` for the `SerializeDisplay` and
`DeserializeFromStr` macros
- `WorkerConfig` in authority-discovery crate has a new field
`persisted_cache_directory : Option<PathBuf>`
- `Worker` in authority-discovery crate constructor now takes a new
parameter, `spawner: Arc<dyn SpawnNamed>`

## Tests
- The [authority-discovery
tests](substrate/client/authority-discovery/src/tests.rs) are changed to
use the tokio runtime (`#[tokio::test]`), and we pass a test worker
config with a `tempdir` for `persisted_cache_directory`

# `net_config_path`
Here is the `net_config_path` (from `NetworkConfiguration`), the folder
used by this PR to save a serialized `AddrCache` in:

## `dev`
```sh
cargo build --release && ./target/release/polkadot --dev
```

shows =>

`/var/folders/63/fs7x_3h16svftdz4g9bjk13h0000gn/T/substratey5QShJ/chains/rococo_dev/network/authority_discovery_addr_cache.json`

## `kusama`
```sh
cargo build --release && ./target/release/polkadot --chain kusama --validator
```

shows => `~/Library/Application Support/polkadot/chains/ksmcc3/network/authority_discovery_addr_cache.json`

> [!CAUTION]
> The node shut down automatically with a scary error.
> ```
> Essential task `overseer` failed. Shutting down service.
> TCP listener terminated with error error=Custom { kind: Other, error:
"A Tokio 1.x context was found, but it is being shutdown." }
> Installed transports terminated, ignore if the node is stopping
> Litep2p backend terminated`
>Error:
>   0: Other: Essential task failed.
> ```
> This is maybe expected/correct, but I just wanted to flag it; expand
`output` below to see the log
> 
> Or did I break anything?

<details><summary>Full Log with scary error (expand me 👈)</summary>
The log

```sh
$ ./target/release/polkadot --chain kusama --validator
2025-06-19 14:34:35 ----------------------------
2025-06-19 14:34:35 This chain is not in any way
2025-06-19 14:34:35       endorsed by the
2025-06-19 14:34:35      KUSAMA FOUNDATION
2025-06-19 14:34:35 ----------------------------
2025-06-19 14:34:35 Parity Polkadot
2025-06-19 14:34:35 ✌️  version 1.18.5-e6b86b54d31
2025-06-19 14:34:35 ❤️  by Parity Technologies <admin@parity.io>, 2017-2025
2025-06-19 14:34:35 📋 Chain specification: Kusama
2025-06-19 14:34:35 🏷  Node name: glamorous-game-6626
2025-06-19 14:34:35 👤 Role: AUTHORITY
2025-06-19 14:34:35 💾 Database: RocksDb at /Users/alexandercyon/Library/Application Support/polkadot/chains/ksmcc3/db/full
2025-06-19 14:34:39 Creating transaction pool txpool_type=SingleState ready=Limit { count: 8192, total_bytes: 20971520 } future=Limit { count: 819, total_bytes: 2097152 }
2025-06-19 14:34:39 🚀 Using prepare-worker binary at: "/Users/alexandercyon/Developer/Rust/polkadot-sdk/target/release/polkadot-prepare-worker"
2025-06-19 14:34:39 🚀 Using execute-worker binary at: "/Users/alexandercyon/Developer/Rust/polkadot-sdk/target/release/polkadot-execute-worker"
2025-06-19 14:34:39 Local node identity is: 12D3KooWPVh77R44wZwySBys262Jh4BSbpMFxtvQNmi1EpdcwDDW
2025-06-19 14:34:39 Running litep2p network backend
2025-06-19 14:34:40 💻 Operating system: macos
2025-06-19 14:34:40 💻 CPU architecture: aarch64
2025-06-19 14:34:40 📦 Highest known block at #1294645
2025-06-19 14:34:40 〽️ Prometheus exporter started at 127.0.0.1:9615
2025-06-19 14:34:40 Running JSON-RPC server: addr=127.0.0.1:9944,[::1]:9944
2025-06-19 14:34:40 🏁 CPU single core score: 1.35 GiBs, parallelism score: 1.44 GiBs with expected cores: 8
2025-06-19 14:34:40 🏁 Memory score: 63.75 GiBs
2025-06-19 14:34:40 🏁 Disk score (seq. writes): 2.92 GiBs
2025-06-19 14:34:40 🏁 Disk score (rand. writes): 727.56 MiBs
2025-06-19 14:34:40 CYON: 🔮 Good, path set to: /Users/alexandercyon/Library/Application Support/polkadot/chains/ksmcc3/network/authority_discovery_addr_cache.json
2025-06-19 14:34:40 🚨 Your system cannot securely run a validator.
Running validation of malicious PVF code has a higher risk of compromising this machine.
Secure mode is enabled only for Linux
and a full secure mode is enabled only for Linux x86-64.
You can ignore this error with the `--insecure-validator-i-know-what-i-do` command line argument if you understand and accept the risks of running insecurely. With this flag, security features are enabled on a best-effort basis, but not mandatory.
More information: https://docs.polkadot.com/infrastructure/running-a-validator/operational-tasks/general-management/#secure-your-validator
2025-06-19 14:34:40 Successfully persisted AddrCache on disk
2025-06-19 14:34:40 subsystem exited with error subsystem="candidate-validation" err=FromOrigin { origin: "candidate-validation", source: Context("could not enable Secure Validator Mode for non-Linux; check logs") }
2025-06-19 14:34:40 Starting workers
2025-06-19 14:34:40 Starting approval distribution workers
2025-06-19 14:34:40 👶 Starting BABE Authorship worker
2025-06-19 14:34:40 Starting approval voting workers
2025-06-19 14:34:40 Starting main subsystem loop
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="candidate-validation"
2025-06-19 14:34:40 Starting with an empty approval vote DB.
2025-06-19 14:34:40 subsystem finished unexpectedly subsystem=Ok(())
2025-06-19 14:34:40 🥩 BEEFY gadget waiting for BEEFY pallet to become available...
2025-06-19 14:34:40 Received `Conclude` signal, exiting
2025-06-19 14:34:40 Conclude
2025-06-19 14:34:40 received `Conclude` signal, exiting
2025-06-19 14:34:40 received `Conclude` signal, exiting
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="availability-recovery"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="bitfield-distribution"
2025-06-19 14:34:40 Approval distribution worker 3, exiting because of shutdown
2025-06-19 14:34:40 Approval distribution worker 2, exiting because of shutdown
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="dispute-distribution"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="chain-selection"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="pvf-checker"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="availability-store"
2025-06-19 14:34:40 Approval distribution worker 1, exiting because of shutdown
2025-06-19 14:34:40 Approval distribution worker 0, exiting because of shutdown
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="approval-voting"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="approval-distribution"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="chain-api"
2025-06-19 14:34:40 Approval distribution stream finished, most likely shutting down
2025-06-19 14:34:40 Approval distribution stream finished, most likely shutting down
2025-06-19 14:34:40 Approval distribution stream finished, most likely shutting down
2025-06-19 14:34:40 Approval distribution stream finished, most likely shutting down
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="provisioner"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="availability-distribution"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="runtime-api"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="candidate-backing"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="collation-generation"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="gossip-support"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="approval-voting-parallel"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="bitfield-signing"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="collator-protocol"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="statement-distribution"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="network-bridge-tx"
2025-06-19 14:34:40 Terminating due to subsystem exit subsystem="network-bridge-rx"
2025-06-19 14:34:41 subsystem exited with error subsystem="prospective-parachains" err=FromOrigin { origin: "prospective-parachains", source: SubsystemReceive(Generated(Context("Signal channel is terminated and empty."))) }
2025-06-19 14:34:41 subsystem exited with error subsystem="dispute-coordinator" err=FromOrigin { origin: "dispute-coordinator", source: SubsystemReceive(Generated(Context("Signal channel is terminated and empty."))) }
2025-06-19 14:34:41 Essential task `overseer` failed. Shutting down service.
2025-06-19 14:34:41 TCP listener terminated with error error=Custom { kind: Other, error: "A Tokio 1.x context was found, but it is being shutdown." }
2025-06-19 14:34:41 Installed transports terminated, ignore if the node is stopping
2025-06-19 14:34:41 Litep2p backend terminated
Error:
   0: Other: Essential task failed.

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
```

🤔

</details>

## `kusama -d /my/custom/path`
```sh
cargo build --release && ./target/release/polkadot --chain kusama --validator --unsafe-force-node-key-generation -d /my/custom/path
```
shows => `./my/custom/path/chains/ksmcc3/network/` for `net_config_path`

## `test`

I've configured a `WorkerConfig` with a `tempfile` for all tests. To my
surprise I had to call `fs::create_dir_all` in order for the tempdir to
actually be created.

---------

Co-authored-by: Alexandru Vasile <60601340+lexnv@users.noreply.github.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: alvicsam <alvicsam@gmail.com>
paritytech#9034)

Tiny follow-up to
https://github.com/paritytech/polkadot-sdk/pull/8701/files

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
The crate already exposes testing-related features by default, so
there is no real need to hide the rest behind some feature. Also,
because of feature unification, the feature is always enabled in the
workspace.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…saction (paritytech#9047)

# Description

Fixes paritytech#5936

Since we are still receiving reports about this error, I suggest adding
an extra line to prevent further questions.

---------

Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Updates the `MaxEncodedLen` implementation for the XCM `Error` enum by
switching to a derived implementation, which correctly calculates the
maximum encoded size instead of returning a hardcoded value of `1`.

This change partially addresses issue paritytech#323 related to improving size
bounding and metadata accuracy in runtime pallets.

## Problem

The previous manual implementation returned only `1` byte, accounting
solely for the enum discriminant, while ignoring encoded data in some
variants:

* `WeightLimitReached(Weight)` encodes a `Weight` value
* `Trap(u64)` encodes a `u64` value

This underestimated the size and could lead to incorrect proof or buffer
size calculations.

## Solution

Use `#[derive(MaxEncodedLen)]` along with other relevant derives on the
`Error` enum to:

* Automatically compute the maximum encoded length, correctly including
all encoded fields
* Respect `#[codec(skip)]` annotations to exclude fields from the
calculation

The derived implementation properly accounts for the largest variant
(`WeightLimitReached(Weight)`), ensuring accurate size estimations.
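
Schematically (a trimmed illustration, not the full XCM `Error` enum):

```rust
use codec::{Decode, Encode, MaxEncodedLen};
use sp_weights::Weight;

#[derive(Encode, Decode, MaxEncodedLen)]
pub enum Error {
    Overflow,                   // discriminant only: 1 byte
    Trap(u64),                  // 1-byte discriminant + 8-byte u64
    WeightLimitReached(Weight), // largest variant; drives max_encoded_len()
}
```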

## Testing

* [x] Existing XCM tests pass
* [x] No breaking changes to public API
…tech#8828)

Fixes paritytech#8811

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This PR adds a fix for the release pipelines. The sync flow needs a
secret to be passed when it is called from another flow and syncing
between the release org and the main repo is needed.
Missing secrets were added to the appropriate flows.
…king-async ah-client pallet (paritytech#9049)

## 🤔 Why
This addresses potential memory issues and improves the efficiency of
offence handling during the buffered operating mode (see
paritytech-secops/srlabs_findings#525)


## 🔑 Key changes

- Prevents duplicate offences for the same offender in the same session
by keeping only the highest slash fraction
- Introduces `BufferedOffence` struct with optional reporter and slash
fraction fields
- Restructures buffered offences storage from `Vec<(SessionIndex,
Vec<Offence>)>` to nested `BTreeMap<SessionIndex, BTreeMap<AccountId,
BufferedOffence>>`
- Adds `MaxOffenceBatchSize` configuration parameter for batching
control
- Processes offences in batches with configurable size limits, sending
only first session's offences per block
- Implements proper benchmarking infrastructure for
`process_buffered_offences` function
- Adds WeightInfo trait with benchmarked weights for batch processing in
`on_initialize` hook
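
A minimal sketch of the dedup rule above (types are simplified stand-ins for the pallet's concrete `AccountId` and `Perbill`):

```rust
use std::collections::{btree_map::Entry, BTreeMap};

type SessionIndex = u32;
type AccountId = u64; // simplified for the sketch
type Perbill = u32;   // stand-in for sp_runtime::Perbill

pub struct BufferedOffence {
    pub reporter: Option<AccountId>,
    pub slash_fraction: Perbill,
}

/// Keep only the highest slash fraction per offender per session.
fn buffer_offence(
    buffered: &mut BTreeMap<SessionIndex, BTreeMap<AccountId, BufferedOffence>>,
    session: SessionIndex,
    offender: AccountId,
    offence: BufferedOffence,
) {
    match buffered.entry(session).or_default().entry(offender) {
        Entry::Occupied(mut existing) => {
            if offence.slash_fraction > existing.get().slash_fraction {
                *existing.get_mut() = offence; // higher slash wins
            }
        }
        Entry::Vacant(slot) => {
            slot.insert(offence);
        }
    }
}
```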

## ✍️ Co-authors
@Ank4n 
@sigurpol

---------

Co-authored-by: Paolo La Camera <paolo@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
- Derive `DecodeWithMemTracking` on structs
- Make some fields public

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
As one example, this allows us to use the latest version of Subxt: 0.42.
Also if-let chains :)

Main changes:
- Update CI image
- Remove `forklift` from Build step in
`check-revive-stable-uapi-polkavm`; it seemed to [cause an
error](https://github.com/paritytech/polkadot-sdk/actions/runs/16004536662/job/45148002314?pr=8592).
Perhaps we can open an issue for this to fix/try again after this
merges.
- Bump `polkavm` deps to 0.26 to avoid [this
error](https://github.com/paritytech/polkadot-sdk/actions/runs/16004991577/job/45150325849?pr=8592#step:5:1967)
(thanks @koute!)
- Add a `result_large_err` clippy allowance to avoid a bunch of clippy
warnings about a 176-byte error (again, we could fix this more properly later).
- Clippy fixes (mainly inlining args into `format!`s where possible),
remove one `#[no_mangle]` on a `#[panic_hook]` and a few other misc
automatic fixes.
- `#[allow(clippy::useless_conversion)]` in frame macro to avoid the
generated `.map(Into::into).map_err(Into::into)` code causing an issue
when not necessary (it is sometimes; depends on the return type in
pallet calls)
- UI test updates

As a side note, I haven't added a `prdoc` since I'm not making any
breaking changes (despite touching a bunch of pallets), just clippy/fmt
type things. Please comment if this isn't ok!

Also, thank you @bkchr for the wasmtime update PR, which fixed a blocker
here!

---------

Co-authored-by: Evgeny Snitko <evgeny@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
…aritytech#9102)

# Description

This should allow Aura runtimes to check timestamp inherent data when
syncing/importing blocks that include it.
Closes paritytech#8907 

## Integration

Runtime developers can check timestamp inherent data while using
`polkadot-omni-node-lib`/`polkadot-omni-node`/`polkadot-parachain`
binaries. This change is backwards compatible and doesn't require
runtimes to check the timestamp inherent, but they are able to do it now
if needed.

## Review Notes

N/A

---------

Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
@dharjeezy dharjeezy requested review from a team as code owners July 18, 2025 12:00
@cla-bot-2021

cla-bot-2021 bot commented Jul 18, 2025

User @VolodymyrBg, please sign the CLA here.

User @Stephenlawrence00, please sign the CLA here.

@dharjeezy
Contributor Author

@drskalman I have reopened against @bkchr's branch, but it seems somewhat large.

@drskalman
Contributor

drskalman commented Jul 18, 2025

@dharjeezy yeah, my bad. Maybe also open a PR against my branch as well. Then I'll mention that in the RFC so people can actually see only the changes relevant to the proof-of-possession protocol ¯\_(ツ)_/¯

@dharjeezy
Contributor Author

@syed

issue


I can't seem to open a pull request against the w3f/polkadot-sdk base repository.

@drskalman
Contributor

You seem to be making a PR from your master to the Parity master? They are probably in sync :-/ I'll make a PR to https://github.com/paritytech/polkadot-sdk/tree/bkchr-set-keys-proof just for rebasing, and if @bkchr accepts that, it should solve the problem.

@dharjeezy
Contributor Author

issue

Ok. I couldn't find w3f/polkadot-sdk when trying to choose a base repository

@dharjeezy
Contributor Author

dharjeezy commented Jul 18, 2025

> Ok. I couldn't find w3f/polkadot-sdk when trying to choose a base repository

@drskalman I think because I am not part of w3f, I can't open a PR from my fork to your branch.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.