Open
Rebase bkchr-set-keys-proof to master #9266
drskalman wants to merge 1,499 commits into paritytech:bkchr-set-keys-proof from w3f:set-keys-proof-pop
+519,865 −169,882
Conversation
This PR adds a temporary fix for cmd-bot. cc paritytech#8195
…#7691) This PR resolves issue paritytech#6119 by ensuring consistent topic ID inclusion in all XCM messages processed or sent via `XcmExecutor`, addressing instances where message topics were absent or inconsistent across multiple hops. To guarantee topic assignment and enhance traceability, this PR implements:

* [**`WithUniqueTopic`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_builder/struct.WithUniqueTopic.html): This structure automatically appends a unique topic ID to any XCM that does not already contain a `SetTopic` instruction, guaranteeing that every message has an identifier.
* [**`XcmContext.topic_or_message_id()`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm/v5/struct.XcmContext.html): The `topic_or_message_id()` function is used to append a `SetTopic` to any outbound XCM that lacks one, ensuring that each message is consistently traceable by falling back to the context's message ID when no topic is set.
* **Removal of `forward_id_for`**: The `forward_id_for` function, which was used to derive a new ID based on an original, has been removed, as the focus is now on maintaining a consistent topic ID throughout the XCM lifecycle.

Together, these changes guarantee that all XCMs, whether executed locally or dispatched to other chains, carry an associated topic ID throughout their journey. This significantly improves debugging and observability by enabling comprehensive tracing of message flows within logs and events. This enhancement is particularly beneficial for complex, multi-hop XCM scenarios where topic consistency was previously unreliable, making it easier to follow the path and effects of each cross-chain message.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
Co-authored-by: Serban Iorga <serban@parity.io>
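As a rough illustration of the topic-guarantee idea, a minimal self-contained sketch; the enum and the `ensure_topic` helper below are simplified stand-ins, not the real `staging_xcm` types:

```rust
/// Simplified stand-in for an XCM instruction; the real type lives in `staging_xcm`.
#[derive(Debug, PartialEq)]
enum Instruction {
    TransferAsset,      // placeholder for any non-topic instruction
    SetTopic([u8; 32]), // tags the message with a 32-byte topic ID
}

/// Append a `SetTopic` only if the message does not already end with one,
/// mirroring what `WithUniqueTopic` does before handing off to the router.
fn ensure_topic(message: &mut Vec<Instruction>, fresh_id: [u8; 32]) -> [u8; 32] {
    match message.last() {
        Some(Instruction::SetTopic(id)) => *id, // keep the existing topic
        _ => {
            message.push(Instruction::SetTopic(fresh_id));
            fresh_id
        }
    }
}

fn main() {
    let mut msg = vec![Instruction::TransferAsset];
    let id = ensure_topic(&mut msg, [7u8; 32]);
    assert_eq!(msg.last(), Some(&Instruction::SetTopic(id)));
}
```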
…h#8546) Fixing paritytech#8215 based on paritytech#8185: improve try-state for pallet-xcm-bridge-hub. It removes `try_as` and uses the `try_from` implementation instead. --------- Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Update `parity-scale-codec` to `v3.7.5` --------- Co-authored-by: Andrii <ndk@parity.io>
…ech#8504)

## Description

When dry-running a contract deployment through the runtime API, the returned address does not match the actual address that will be used when the transaction is submitted. This inconsistency occurs because the address derivation logic doesn't properly account for the difference between transaction execution and dry-run execution contexts.

The issue stems from the `create1` address derivation logic in `exec.rs`:

```rust
address::create1(
	&deployer,
	// the Nonce from the origin has been incremented pre-dispatch, so we
	// need to subtract 1 to get the nonce at the time of the call.
	if origin_is_caller {
		account_nonce.saturating_sub(1u32.into()).saturated_into()
	} else {
		account_nonce.saturated_into()
	},
)
```

The code correctly subtracts 1 from the account nonce during a transaction execution (because the nonce is incremented pre-dispatch), but doesn't account for the execution context: whether it's a real transaction or a dry run through the RPC.

## Review Notes

This PR adds a new condition to check for `IncrementOnce` when calculating the nonce for address derivation:

```rust
address::create1(
	&deployer,
	// the Nonce from the origin has been incremented pre-dispatch, so we
	// need to subtract 1 to get the nonce at the time of the call.
	if origin_is_caller && matches!(exec_context, IncrementOnce::AlreadyIncremented) {
		account_nonce.saturating_sub(1u32.into()).saturated_into()
	} else {
		account_nonce.saturated_into()
	},
)
```

## Before Fix

- Dry-run contract deployment returns the address derived with nonce N
- Actual transaction deployment creates the contract at the address derived with nonce N-1
- Result: Inconsistent addresses between simulation and actual execution

## After Fix

- Dry-run and actual transaction deployments both create contracts at the same address
- Result: Consistent contract addresses regardless of execution context
- Added test case to verify nonce handling in different execution contexts

This fix ensures that users can rely on the address returned by a dry run to match the actual address that will be used when the transaction is submitted.

Fixes paritytech/contract-issues#37

# Checklist

* [x] My PR includes a detailed description as outlined in the "Description" and its two subsections above.
* [x] My PR follows the [labeling requirements](https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process) of this project (at minimum one label for `T` required)
  * External contributors: ask maintainers to put the right label on your PR.
* [x] I have made corresponding changes to the documentation (if applicable)
* [x] I have added tests that prove my fix is effective or that my feature works (if applicable)

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: pgherveou <pgherveou@gmail.com>
paritytech#8585) Backports a part of paritytech#8422 to master so it can be included in ongoing releases sooner. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Dónal Murray <donal.murray@parity.io>
Related to paritytech#8308 Follow-up for paritytech#8021 --------- Co-authored-by: command-bot <> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

Removing a tx subtree means partly removing some txs from the unlocks set of other txs. This logic is buggy and the PR attempts to fix it.

Closes paritytech#8498

## Integration

N/A

## Review Notes

This doesn't seem to be an important bug. Unit tests for txpool still pass after the fix, so txpool behavior isn't changing much.

### TODOs

- [x] Test with a heavy load (5 million txs): all txs were validated successfully
- [x] Added a unit test

---------

Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

Proposing a new block on top of an existing parent block considers the best ready transactions provided by the txpool for inclusion in the new block. Whenever the given parent hash to build on is part of a fork, up to the finalized block, that has no best block notified to the pool, the proposer may rely on `ready_at_light` (due to various reasons not in our control), and when that's the case, the ready transaction set will be empty. This PR adds a fallback to `ready_at_light` where we consider the ready txs of the most recent view processed by the txpool, even if those txs might be invalid.

Closes paritytech#8213
Closes paritytech#6056

## Integration

N/A

## Review Notes

In terms of testing, I updated an existing test which already exercises `ready_at_light` in the scope of the newly added fallback.

---------

Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
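A minimal sketch of the fallback shape described above; the `Tx` alias and the function are hypothetical, the real logic lives in the fork-aware txpool's `ready_at_light`:

```rust
type Tx = u64; // stand-in for a real transaction handle

/// Prefer the ready set of the view matching the requested fork; if no such
/// view exists, fall back to the most recent view's ready set instead of
/// handing the proposer an empty iterator.
fn ready_txs(view_for_fork: Option<Vec<Tx>>, most_recent_view: Option<Vec<Tx>>) -> Vec<Tx> {
    view_for_fork.or(most_recent_view).unwrap_or_default()
}

fn main() {
    // No view for the fork: previously this yielded an empty set.
    assert_eq!(ready_txs(None, Some(vec![1, 2, 3])), vec![1, 2, 3]);
}
```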
Related to paritytech#7575. This change introduces a few metrics (and corresponding logs) to track the state of the produced collation:

- time till collation fetched
- backing latency (counting from RP)
- backing latency (counting from collation fetch)
- inclusion latency
- expired collations (not backed, not advertised, not fetched)

This information should help us understand the causes of, and possible improvements for, higher parachain block times.

---------

Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Revert to using the changed-files action (pointing to the commit of the latest release). cc: @alvicsam --------- Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com>
Fixes paritytech#7987 Fixes paritytech#7868 --------- Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io> Co-authored-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io> Co-authored-by: Alexandru Gheorghe <49718502+alexggh@users.noreply.github.com> Co-authored-by: Bastian Köcher <git@kchr.de>
Buckets for a maximum unincluded segment size of 24. --------- Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Commenting out all flaky tests and tracking them here: paritytech#48

Changes:
- Disable flaky Rust tests by adding a new `disabled` feature. The `#[ignore]` attribute is not possible since CI runs with `--ignored`.
- Disable all Zombienet tests.
- [ ] Waiting on CI to see what other tests fail.

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Since zombienet [has been disabled](paritytech#8600) to improve stability, it makes no sense to keep the GitLab configuration.
Discovered while profiling paritytech#6131 (comment) with the benchmark from paritytech#8069: when running in validation, a big chunk of the time is spent inserting into and retrieving data from the BTreeMap/BTreeSet. By switching to hashbrown HashMap/HashSet in the validation TrieCache and TrieRecorder and in memory-db (paritytech/trie#221), read costs improve by around 40% and writes by about 20%. --------- Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
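Conceptually the swap is just a container change; a sketch with assumed key/value types (the actual cache types in the trie crates differ):

```rust
use std::collections::BTreeMap; // before: ordered tree, O(log n) with pointer-chasing
use hashbrown::HashMap;         // after: SwissTable-based hashing from the `hashbrown` crate

type NodeKey = [u8; 32];

// Ordering of cached trie nodes is never needed, only point lookups,
// so a hash map is the better fit inside the validation hot path.
type OldNodeCache = BTreeMap<NodeKey, Vec<u8>>;
type NewNodeCache = HashMap<NodeKey, Vec<u8>>;
```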
Rename `CreateInherent` to `CreateBare`, add a method `create_bare`, and deprecate `create_inherent`.

Both unsigned transactions and inherents use the extrinsic type `Bare`. Before this PR, the `CreateInherent` trait was used to generate unsigned transactions; now unsigned transactions can be generated using the properly named trait `CreateBare`.

How to upgrade (see the sketch below):
* Change usage of `CreateInherent` to `CreateBare` and `create_inherent` to `create_bare`.
* Implement `CreateBare` for the runtime; the method `create_bare` is usually implemented using `Extrinsic::new_bare`.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
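A minimal self-contained sketch of that upgrade shape, with stand-in types; the real traits and the runtime's `UncheckedExtrinsic` live in FRAME, not here:

```rust
/// Stand-in for the new trait; the real definition lives in FRAME.
trait CreateBare<Call> {
    type Extrinsic;
    fn create_bare(call: Call) -> Self::Extrinsic;
}

/// Toy extrinsic standing in for the runtime's `UncheckedExtrinsic`.
struct Extrinsic<Call>(Call);

impl<Call> Extrinsic<Call> {
    /// Mirrors the `Extrinsic::new_bare` constructor mentioned above.
    fn new_bare(call: Call) -> Self {
        Extrinsic(call)
    }
}

struct Runtime;

impl<Call> CreateBare<Call> for Runtime {
    type Extrinsic = Extrinsic<Call>;
    fn create_bare(call: Call) -> Self::Extrinsic {
        Extrinsic::new_bare(call)
    }
}

fn main() {
    let _xt = <Runtime as CreateBare<u32>>::create_bare(42);
}
```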
…h#8615) Fixing paritytech#8215 based on paritytech#8185: improve try-state for pallet-xcm-bridge-hub. It removes `try_as` and uses the `try_into` implementation instead. --------- Co-authored-by: Branislav Kontur <bkontur@gmail.com> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…Watch` (paritytech#8345) This PR adds metrics for the following RPC subscription: [transactionWatch_v1_submitAndWatch](https://paritytech.github.io/json-rpc-interface-spec/api/transactionWatch_v1_submitAndWatch.html).

Metrics are exposed in two ways:
- simple counters of how many events we've seen globally
- a histogram vector of execution times, labeled by `initial event` -> `final event`; this helps us identify how long it takes the transaction pool to advance the state of the events, and further debug issues

Part of: paritytech#8336

### (outdated) PoC Dashboards



### Next steps

- [x] initial dashboards with a live node
- [x] adjust testing

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
- Subscription tasks are "essential tasks" and the service should go down when they fail.
- Upgrade subxt to 0.41.
- Update zombienet-sdk to use its re-export of subxt so it does not conflict with the workspace version.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
We were only charging storage deposit based on value length but not based on key length. Since we allow variable-length keys, this has to be done. Needs to be backported, since changing this in an already deployed system will be nasty. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
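The gist, as a sketch with a made-up per-byte price; the pallet derives the real price from its configuration:

```rust
/// Hypothetical per-byte deposit price, for illustration only.
const DEPOSIT_PER_BYTE: u128 = 100;

/// Charge deposit on key *and* value bytes. Before the fix only the
/// value length was charged, letting variable-length keys grow state
/// without a matching deposit.
fn storage_deposit(key: &[u8], value: &[u8]) -> u128 {
    DEPOSIT_PER_BYTE * (key.len() as u128 + value.len() as u128)
}
```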
…aritytech#8650) This PR rejects non-reserved peers in the reserved-only mode of the litep2p notification peerset. Previously, litep2p completely ignored the reserved-only state while accepting inbound connections, although it handled it properly during the slot allocation phase.

- The main changes are in the `report_inbound_substream` function, which now propagates a `Rejected` response to litep2p in the reserved-only state.
- In response, litep2p should never open an inbound substream after receiving the rejected response.
- The state of peers is not advanced while in the `Disconnected` or `Backoff` states.
- The opening state is moved to `Cancelled`.
- For consistency (and fuzzing) purposes, `report_substream_opened` handles the `Disconnected` state more robustly.
- While at it, replaced a panic with a `debug_assert` and an instant reject.

## Testing Done

- Started 2 nodes in Kusama and Polkadot with litep2p.
- Added the `reserved_only_rejects_non_reserved_peers` test to ensure litep2p handles peers properly from different states.

This PR has been extracted from paritytech#8461 to ease the review process.

cc @paritytech/networking

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
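The acceptance rule itself is tiny; a sketch (the function name is illustrative, the real check sits inside `report_inbound_substream`):

```rust
/// Whether an inbound substream should be accepted. Previously the
/// `reserved_only` flag was ignored here and only enforced later,
/// during slot allocation.
fn accept_inbound(reserved_only: bool, peer_is_reserved: bool) -> bool {
    !reserved_only || peer_is_reserved
}

fn main() {
    assert!(!accept_inbound(true, false)); // reserved-only mode rejects outsiders
    assert!(accept_inbound(true, true));
    assert!(accept_inbound(false, false));
}
```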
# Description

* This PR adds a new extrinsic `poke_deposit` to `pallet-bounties`. This extrinsic will be used to re-adjust the deposits made in the pallet to create a new bounty.
* Part of paritytech#5591

## Review Notes

* Added a new extrinsic `poke_deposit` in `pallet-bounties`.
* This extrinsic checks and adjusts the deposits made for creating a bounty (see the sketch below).
* Added a new event `DepositPoked` to be emitted upon a successful call of the extrinsic.
* Although the immediate use of the extrinsic will be to give back some of the deposit after the AH migration, the extrinsic is written such that it works whether the deposit decreases or increases.
* The call to the extrinsic is `free` if an actual adjustment is made to the deposit and `paid` otherwise (when no deposit is changed).
* Added tests covering all scenarios.
* Added benchmarks.

## TO-DOs

* [x] Run CI cmd bot to benchmark

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
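Conceptually the adjustment reduces to a signed delta; a sketch with hypothetical names (the pallet itself works with reserves and its own types):

```rust
/// Difference the pallet must settle for a bounty's creation deposit.
/// Positive means reserve more from the proposer; negative means refund.
/// Returns `None` when nothing changed (the case where the call is paid).
fn deposit_delta(current: u128, required: u128) -> Option<i128> {
    if current == required {
        None
    } else {
        Some(required as i128 - current as i128)
    }
}

fn main() {
    assert_eq!(deposit_delta(100, 60), Some(-40)); // refund 40, e.g. after the AH migration
    assert_eq!(deposit_delta(60, 100), Some(40));  // top up 40
    assert_eq!(deposit_delta(80, 80), None);       // no-op, call is paid
}
```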
…#8473) The `TokenIdOf` [convert](https://github.com/paritytech/polkadot-sdk/blob/4b83d24f4bc96a7b17964be94b178dd7b8f873b5/bridges/snowbridge/primitives/core/src/location.rs#L40) is XCM version-agnostic, meaning we will get the same token ID for both a V5 and a legacy V4 asset. However, the extra check is unnecessary, as the `ConvertAssetId::convert(&token_id).ok_or(InvalidAsset)?;` alone is sufficient to verify whether the token is registered.
Update rpc-types: remove unnecessary derive traits and fix JSON decoding for `BlockNumberOrTagOrHash`. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Fix different commit SHAs for the Google registry and Docker Hub images.
Move the pallet-revive runtime API implementation into a macro, so that we don't repeat the code for every runtime. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Historically, the collection of storage deposits was running in an infallible context, meaning we needed to make sure that the caller was able to pay the deposits when the last contract execution returned. To achieve that, we capped the storage deposit limit to the maximum balance of the origin. This made the code more complex: it conflated the deposit **limit** with the amount of balance the origin has. In the meantime, we made the deposit collection fallible, but never changed this aspect. This PR rectifies that by doing the following:

- The root storage meter and all its nested meters' limits are completely independent of the origin's balance. This makes it much easier to reason about the limit that a nested meter has at any point.
- Consistently use `StorageDepositNotEnoughFunds` (limit not reached) and `StorageDepositLimitExhausted` (limit reached).
- The origin not being able to pay the ED for a new account is now `StorageDepositNotEnoughFunds` and traps the caller rather than being a `TransferFailed` return code. This is important since we are hiding the ED from contracts, so it should also not be an error code that must be handled.

In preparation for: paritytech/contract-issues#38

---------

Co-authored-by: xermicus <cyrill@parity.io>
Co-authored-by: PG Herveou <pgherveou@gmail.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Instead of just checking the slot, we also take the block number and the relay parent into account (as we actually allow building multiple blocks per slot). This PR also ensures that we are still able to import blocks from availability recovery. This ensures that a network doesn't get stuck on a storm of equivocations. The next step after this pull request would be to implement on-chain slashing for equivocations and probably disabling of the offending author. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
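A sketch of the widened check with simplified stand-in types; the real code compares full header claims inside the import pipeline:

```rust
/// What an author claims when producing a block; simplified stand-in.
#[derive(PartialEq)]
struct Claim {
    slot: u64,
    block_number: u32,
    relay_parent: [u8; 32],
}

/// Two distinct blocks are an equivocation only if *all* of slot, block
/// number, and relay parent match. Comparing the slot alone wrongly
/// flagged the allowed multiple-blocks-per-slot case.
fn is_equivocation(a: &Claim, b: &Claim, hash_a: [u8; 32], hash_b: [u8; 32]) -> bool {
    hash_a != hash_b && a == b
}
```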
Bump memory-db to pick up paritytech#8606 and paritytech/trie#221. Additionally, polkavm needs to be bumped to get rid of https://github.com/paritytech/polkadot-sdk/actions/runs/15180236627/job/42688141374#step:5:1869 --------- Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Closes: paritytech#9116 --------- Co-authored-by: Branislav Kontur <bkontur@gmail.com> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Karol Kokoszka <karol@parity.io>
This removes `subwasmlib` and replaces it with some custom code to fetch the metadata. The main point of this change is the removal of an external dependency. Closes: paritytech#9203 --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Fix to correctly display the logs (URLs) for paras.
…ain validation data inherent (paritytech#9262) Adds the possibility for parachain clients to collect additional relay state keys into the validation data inherent. With this change, other consensus engines can collect additional relay keys into the parachain inherent data:

```rs
let paras_inherent_data = ParachainInherentDataProvider::create_at(
    relay_parent,
    relay_client,
    validation_data,
    para_id,
    vec![
        relay_well_known_keys::EPOCH_INDEX.to_vec() // <----- Example
    ],
)
.await;
```
Locking is a system-level operation and may only increment the consumer count at most once; therefore, it should use `inc_consumers_without_limit`. This behavior is optional and is only used in the call path of `LockableCurrency`. Reserves, Holds and Freezes (and other operations like transfers etc.) have the ability to return `DispatchResult` and don't need this bypass. This is demonstrated in the unit tests added.

Beyond this, this PR:
* uses the correct way to get the account data in tests
* adds an `Unexpected` event instead of a silent `debug_assert!`
* adds `try_state` checks for correctness of the `account.frozen` invariant

---------

Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
… to the import queue (paritytech#9147) We agreed to split paritytech#8446 into two PRs: one for BABE (this one) and one for AURA. This is the easier one. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Regardless of the descriptor version, the `CandidateDescriptor` was logged as a `CandidateDescriptorV2` instance. To address this issue, we now derive `RuntimeDebug` only when `std` is not enabled, so we get the empty implementation that does not bloat the runtime WASM. When `std` is enabled, we implement `core::fmt::Debug` by hand and print the structure differently depending on the `CandidateDescriptor` version. Fixes: paritytech#8457 --------- Signed-off-by: Alexandru Cihodaru <alexandru.cihodaru@parity.io> Co-authored-by: Bastian Köcher <git@kchr.de>
Fixes paritytech#9085 --------- Signed-off-by: Alexandru Cihodaru <alexandru.cihodaru@parity.io>
All is not well when a validator is not properly connected; examples of things that might happen:

- Finality might be slightly delayed, because the validator will be a no-show, unable to retrieve PoVs to validate approval work: paritytech#8915.
- When they author blocks, they won't back things, because gossiping of backing statements happens using the grid topology. E.g. blocks authored by validators with a low number of peers:
  https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc-polkadot.helixstreet.io#/explorer/query/26931262
  https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc-polkadot.helixstreet.io#/explorer/query/26931260
  https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fpolkadot.api.onfinality.io%2Fpublic-ws#/explorer/query/26931334
  https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fpolkadot-public-rpc.blockops.network%2Fws#/explorer/query/26931314
  https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fpolkadot-public-rpc.blockops.network%2Fws#/explorer/query/26931292
  https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fpolkadot-public-rpc.blockops.network%2Fws#/explorer/query/26931447

The problem shows up in the `polkadot_parachain_peer_count` metric, but it seems people are not monitoring that well enough, so let's make it more visible that nodes with low connectivity are not working in good conditions. I also reduced the threshold to 85%, so that we don't trigger this error too eagerly.

---------

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…tions (paritytech#9251) Allow setting the idle connection timeout value. This can be helpful in custom networks to allow maintaining long-lived connections. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
On Ethereum, 1 ETH is represented as 10^18 wei (wei being the smallest unit). On Polkadot, 1 DOT is defined as 10^10 plancks. It means that any value smaller than 10^8 wei cannot be expressed with the native balance. Any contract that attempts to use such a value currently reverts with a `DecimalPrecisionLoss` error. In theory, an RPC can define a decimal representation different from Ethereum mainnet (10^18). In practice, tools (frontend libraries, wallets, and compilers) ignore it and expect 18 decimals. The current behaviour breaks Ethereum compatibility and needs to be updated. See issue paritytech#109 for more details. Fix paritytech/contract-issues#109 [weights compare](https://weights.tasty.limo/compare?unit=weight&ignore_errors=true&threshold=10&method=asymptotic&repo=polkadot-sdk&old=master&new=pg/eth-decimals&path_pattern=substrate/frame/**/src/weights.rs,polkadot/runtime/*/src/weights/**/*.rs,polkadot/bridges/modules/*/src/weights.rs,cumulus/**/weights/*.rs,cumulus/**/weights/xcm/*.rs,cumulus/**/src/weights.rs) --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Alexander Theißen <alex.theissen@me.com> Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
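To make the mismatch concrete, a sketch of the conversion arithmetic (illustrative only; the pallet's actual conversion code differs): with 18 vs 10 decimals, one planck corresponds to 10^8 wei, so any wei amount that is not a multiple of 10^8 cannot round-trip.

```rust
/// 10^8: wei per planck, assuming 18 Ethereum decimals and 10 DOT decimals.
const WEI_PER_PLANCK: u128 = 100_000_000;

/// Convert a wei amount to plancks, rejecting sub-planck precision.
/// This rejection is the behaviour that previously surfaced as the
/// `DecimalPrecisionLoss` revert.
fn wei_to_planck(wei: u128) -> Result<u128, &'static str> {
    if wei % WEI_PER_PLANCK != 0 {
        return Err("sub-planck wei amount cannot be represented");
    }
    Ok(wei / WEI_PER_PLANCK)
}

fn main() {
    assert_eq!(wei_to_planck(1_000_000_000_000_000_000), Ok(10_000_000_000)); // 1 ETH in wei
    assert!(wei_to_planck(1).is_err()); // 1 wei is below native precision
}
```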
Fixes: paritytech#9256 --------- Signed-off-by: Alexandru Cihodaru <alexandru.cihodaru@parity.io>
## 🔄 Zombienet CI Refactor: Matrix-Based Workflows

This PR refactors the Zombienet CI workflows to use a **matrix-based approach**, resulting in:

- ✅ **Easier test maintenance** – easily add or remove tests without duplicating workflow logic.
- 🩹 **Improved flaky test handling** – flaky tests are excluded by default but can be explicitly included by pattern.
- 🔍 **Pattern-based test selection** – run only tests matching a name pattern, ideal for debugging.

---

## 🗂️ Structure Changes

- **Test definitions** are now stored in `.github/zombienet-tests/`.
- Each workflow (`Cumulus`, `Substrate`, `Polkadot`, `Parachain Template`) has its own YAML file with test configurations.

---

## 🧰 Added Scripts

### `.github/scripts/parse-zombienet-tests.py`
- Parses test definitions and generates a GitHub Actions matrix.
- Filters out flaky tests by default.
- If a `test_pattern` is provided, matching tests are **included even if flaky**.

### `.github/scripts/dispatch-zombienet-workflow.sh`
- Triggers a Zombienet workflow multiple times, optionally filtered by test name pattern.
- Stores results in a **CSV file** for analysis.
- Useful for debugging flaky tests or stress-testing specific workflows.
- Intended to be run from the local machine.

---------

Co-authored-by: Javier Viola <363911+pepoviola@users.noreply.github.com>
Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com>
Co-authored-by: Javier Viola <javier@parity.io>
…ustification (paritytech#9015) A GRANDPA race condition has been identified in the versi-net stack around authority set changes, which leads to the following:

- T0 / Node A: Completes round (15)
- T1 / Node A: Applies new authority set change and increments the SetID (from 0 to 1)
- T2 / Node B: Sends precommit for round (15) with SetID (0) -- the previous set ID
- T3 / Node B: Applies new authority set change and increments the SetID (1)

In this scenario, Node B is not aware at the moment of sending justifications that the SetID has changed. The downstream effect is that Node A will not be able to verify the signature of justifications, since a different SetID is taken into account. This cascades through the sync engine, where Node B is wrongfully banned and disconnected.

This PR aims to fix the edge case by making GRANDPA resilient to verifying prior SetIDs for signatures. When the signature of a GRANDPA justification fails to verify, the prior SetID is also tried. If the prior SetID produces a valid signature, the outdated-justification error is propagated through the code (i.e. `SignatureResult::OutdatedSet`). The sync engine will handle outdated justifications as invalid, but without banning the peer. This leads to increased stability of the network during authority changes, which caused frequent disconnects on versi-net in the past.

### Review Notes

- The main changes that verify the prior SetID on failure are placed in [check_message_signature_with_buffer](https://github.com/paritytech/polkadot-sdk/pull/9015/files#diff-359d7a46ea285177e5d86979f62f0f04baabf65d595c61bfe44b6fc01af70d89R458-R501)
- The sync engine no longer disconnects peers over outdated justifications in [process_service_command](https://github.com/paritytech/polkadot-sdk/pull/9015/files#diff-9ab3391aa82ee2b2868ece610100f84502edcf40638dba9ed6953b6e572dfba5R678-R703)

### Testing Done

- Deployed the PR to versi-net with 40 validators
- Previously we noticed 10/40 validators disconnecting every 15-20 minutes, leading to instability
- Over the past 24h the issue has been mitigated: https://grafana.teleport.parity.io/goto/FPNWlmsHR?orgId=1
- Note: bootnodes 0 and 1 are currently running outdated versions that do not incorporate this SetID verification improvement

Closes: paritytech#8872
Closes: paritytech#1147

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
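The fallback shape, as a self-contained sketch; the enum name matches the PR's `SignatureResult`, while the `verify` closure is a stand-in for the real signature check:

```rust
enum SignatureResult {
    Valid,
    OutdatedSet, // valid, but signed under the previous authority set
    Invalid,
}

/// Try the current set ID first; on failure, retry with the prior one.
/// An `OutdatedSet` result must not get the peer banned by sync.
fn check_with_fallback(verify: impl Fn(u64) -> bool, current_set_id: u64) -> SignatureResult {
    if verify(current_set_id) {
        SignatureResult::Valid
    } else if current_set_id > 0 && verify(current_set_id - 1) {
        SignatureResult::OutdatedSet
    } else {
        SignatureResult::Invalid
    }
}

fn main() {
    // A vote signed under set 4 still verifies once we have moved on to set 5.
    let made_under = 4u64;
    assert!(matches!(
        check_with_fallback(|set_id| set_id == made_under, 5),
        SignatureResult::OutdatedSet
    ));
}
```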
## litep2p v0.10.0

This release adds the ability to use the system DNS resolver and to change the Kademlia memory store capacity. It also fixes the Bitswap protocol implementation and correctly handles dropped notification substreams by unregistering them from the protocol list.

### Added

- kad: Expose memory store configuration ([paritytech#407](paritytech/litep2p#407))
- transport: Allow changing DNS resolver config ([paritytech#384](paritytech/litep2p#384))

### Fixed

- notification: Unregister dropped protocols ([paritytech#391](paritytech/litep2p#391))
- bitswap: Fix protocol implementation ([paritytech#402](paritytech/litep2p#402))
- transport-manager: stricter supported multiaddress check ([paritytech#403](paritytech/litep2p#403))

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…ech#9233)

# Description

Deduplicate some dependencies between the `dependencies` and `dev-dependencies` sections.

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
Add `solc` and `resolc` binaries to the image:

```
$ solc --version
solc, the solidity compiler commandline interface
Version: 0.8.30+commit.73712a01.Linux.g++

$ resolc --version
Solidity frontend for the revive compiler
version 0.3.0+commit.ed60869.llvm-18.1.8
```

You can update or install a specific version with `/builds/download-bin.sh <solc | resolc> [version | latest]`, e.g.:

```
/builds/download-bin.sh solc v0.8.30
```
Closes paritytech#9277. Still WIP testing --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Replaces regular addition with saturating addition when accumulating era reward points in `pallet-staking-async` to prevent potential overflow. --------- Co-authored-by: Bastian Köcher <git@kchr.de>
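The essence of the change, sketched (the field and function names are illustrative, not the pallet's exact layout):

```rust
/// Accumulate era reward points without risking overflow; plain `+=`
/// can overflow and, in a release-built runtime, wrap silently.
fn add_era_points(total: &mut u32, earned: u32) {
    *total = total.saturating_add(earned);
}

fn main() {
    let mut total = u32::MAX - 1;
    add_era_points(&mut total, 10);
    assert_eq!(total, u32::MAX); // clamped instead of wrapping
}
```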
) This PR replaces `log` with `tracing` instrumentation on `pallet-bridge-grandpa` by providing structured logging. Partially addresses paritytech#9211
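A hedged before/after illustration of the swap (the target and field names are placeholders, not the pallet's actual ones; `tracing` is assumed as a dependency):

```rust
fn on_header_imported(hash: [u8; 32]) {
    // Before: the value was formatted into an opaque string.
    // log::debug!(target: "runtime::bridge-grandpa", "Imported header {:?}", hash);

    // After: `tracing` records `hash` as a structured field.
    tracing::debug!(target: "runtime::bridge-grandpa", ?hash, "Imported header");
}
```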
…h#9179) `subsume_assets` fails to correctly subsume two instances of `AssetsInHolding` under certain conditions, which can result in loss of funds (as assets are overridden rather than summed together).

E.g. consider the following test:

```
#[test]
fn subsume_assets_different_length_holdings() {
	let mut t1 = AssetsInHolding::new();
	t1.subsume(CFP(400));
	let mut t2 = AssetsInHolding::new();
	t2.subsume(CF(100));
	t2.subsume(CFP(100));
	t1.subsume_assets(t2);
```

Current result (without this PR's change):

```
	let mut iter = t1.into_assets_iter();
	assert_eq!(Some(CF(100)), iter.next());
	assert_eq!(Some(CFP(100)), iter.next());
```

Expected result:

```
	let mut iter = t1.into_assets_iter();
	assert_eq!(Some(CF(100)), iter.next());
	assert_eq!(Some(CFP(500)), iter.next());
```

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
This fixes the YAP parachain runtimes in case you encounter a panic in the collator similar to paritytech/zombienet#2050:

```
Failed to retrieve the parachain id
```

(which we do have zombienet-sdk tests for [here](https://github.com/paritytech/polkadot-sdk/blob/master/substrate/client/transaction-pool/tests/zombienet/yap_test.rs))

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…#9309) Fixes paritytech#782 --------- Signed-off-by: Alexandru Cihodaru <alexandru.cihodaru@parity.io>
Force-pushed from 90ee6bd to 11b036d.
This is just a rebase of @bkchr's proof-of-ownership branch onto master, so that we can concentrate only on the proof-of-possession changes in subsequent commits.