-
1. Why should we use a shared sequencer?
-
2. Why should we have a standard interface between shared sequencers and rollup frameworks?
-
3. How does a shared sequencer communicate with an executor?

For this proposal, we will show how we think about what a shared sequencer does and does not do. A shared sequencer does ordering and inclusion. It does not execute the transactions and, therefore, does not commit over the state root or app hash. A "Header Producer", or executor, does the execution and produces the rollup header: metadata about the block that, at minimum, includes a commitment to the transactions in that block. The header producer and the shared sequencer are two different logical entities. Additional reading can be done in Redefining Sequencers.

Let's see how a shared sequencer works with an optimistic rollup. In this diagram, a centralized header producer performs the execution for simplicity.

```mermaid
graph TB
    style U fill:#FFA07A
    style A fill:#87CEFA
    style HP fill:#FA8072
    style FN fill:#98FB98
    style LN fill:#D8BFD8
    subgraph HP["Execution"]
        CHP -- "Header and Stateroots" --> DAL2["DA-Layer"]
    end
    CHP -- "Soft Commitment Headers" --> LN["Rollup Light Node"]
    DAL2 -- "Header" --> LN
    DAL2 -- "Header and Stateroots" --> FN["Rollup Full Node"]
    CHP -- "Soft Commitment Rollup Headers \n and Stateroots" --> FN
    FN -- "Fraud Proof" --> LN
    U["User"] -- "Transaction" --> SA["Shared Sequencer"]
    subgraph A["Ordering"]
        SA -- "Ordered Batch" --> DAL["DA-Layer"]
    end
    SA -- "Soft Committed Ordered Batch" --> CHP
    SA -- "Shared Sequencer Header+" --> CHP
    DAL -- "Ordered Batch" --> FN
    SA -- "Soft Committed Ordered Batch" --> FN
    SA -- "Shared Sequencer Header+" --> FN
    SA -- "Shared Sequencer Header+" --> LN
    DAL -- "Ordered Batch" --> CHP["Centralized Header Producer"]
    linkStyle 0 stroke:#00FF00,stroke-width:2px;
    linkStyle 1 stroke:#000000,stroke-width:2px;
    linkStyle 2 stroke:#00FF00,stroke-width:2px;
    linkStyle 3 stroke:#00FF00,stroke-width:2px;
    linkStyle 4 stroke:#000000,stroke-width:2px;
    linkStyle 5 stroke:#00FF00,stroke-width:2px;
    linkStyle 6 stroke:#FF0000,stroke-width:2px;
    linkStyle 7 stroke:#FF0000,stroke-width:2px;
    linkStyle 8 stroke:#000000,stroke-width:2px;
    linkStyle 9 stroke:#00FF00,stroke-width:2px;
    linkStyle 10 stroke:#FF0000,stroke-width:2px;
    linkStyle 11 stroke:#000000,stroke-width:2px;
    linkStyle 12 stroke:#FF0000,stroke-width:2px;
    linkStyle 13 stroke:#FF0000,stroke-width:2px;
    linkStyle 14 stroke:#00FF00,stroke-width:2px;
```

The arrows in black are optional and used for better UX. The arrows in red are necessary for Rollup Full Nodes to function. The arrows in green are necessary for Rollup Light Nodes to function.
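To make the separation of concerns concrete, here is a minimal Go sketch of the two logical roles. All type and method names are hypothetical illustrations, not part of the proposed API: the point is only that the sequencer side has no execution method and the executor side produces the header and state root.

```go
package roles

// OrderedBatch is the shared sequencer's output: an ordered list of raw
// transactions plus a commitment over that ordering. It contains no
// state roots, because the sequencer never executes.
type OrderedBatch struct {
	Txs             [][]byte
	OrderCommitment []byte
}

// SharedSequencer orders and includes transactions. Note the absence of
// any execution or state-root method.
type SharedSequencer interface {
	SubmitTx(tx []byte) error
	// NextBatch returns the ordered batch for the next sequencer height.
	NextBatch() (OrderedBatch, error)
}

// HeaderProducer executes an ordered batch and produces the rollup
// header, which commits (at minimum) to the transactions in the block.
type HeaderProducer interface {
	// Execute runs the batch against rollup state and returns the new
	// state root and the rollup header.
	Execute(batch OrderedBatch) (stateRoot []byte, header []byte, err error)
}
```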
-
4. Why must the shared sequencer publish the ordered batches to a DA-Layer?

```mermaid
graph TB
    style U fill:#FFA07A
    style A fill:#87CEFA
    style FN fill:#98FB98
    U["User"] -- "Transaction" --> SA["Shared Sequencer"]
    subgraph A["Ordering"]
        SA -- "Ordered Batch" --> DAL["DA-Layer"]
    end
    DAL -- "Ordered Batch" --> FN["Rollup Full Node"]
    DAL -- "Ordered Batch" --> CHP["Centralized Header Producer"]
```
We need the ordered batches on a DA-Layer, as this is how we get a data availability guarantee for rollups. This is also a requirement for rollup full nodes to execute the blocks canonically, reading from the DA-Layer as the source of truth and creating a ZK or fraud proof when needed to enable trust-minimized rollup light nodes (execution light nodes).
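As a sketch of the "source of truth" property, a rollup full node's sync loop could look roughly like the following; the `DALayer` and `Executor` interfaces are hypothetical stand-ins for real clients.

```go
package sync

// DALayer abstracts reading blobs from the data availability layer.
// Names are illustrative, not a real client API.
type DALayer interface {
	// BatchesAt returns all ordered batches posted at the given DA height.
	BatchesAt(height uint64) ([][]byte, error)
}

// Executor applies ordered batches to rollup state.
type Executor interface {
	ExecuteBatch(batch []byte) (stateRoot []byte, err error)
}

// SyncFromDA replays every DA height in order, so every honest full node
// derives the same canonical state from the same source of truth.
func SyncFromDA(da DALayer, exec Executor, from, to uint64) error {
	for h := from; h <= to; h++ {
		batches, err := da.BatchesAt(h)
		if err != nil {
			return err
		}
		for _, b := range batches {
			if _, err := exec.ExecuteBatch(b); err != nil {
				return err
			}
		}
	}
	return nil
}
```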
-
5. Why does a shared sequencer have to commit over the ordering?

Let's imagine we are posting ordered batches to the DA-Layer. The picture shows three blobs in the DA-Layer, but only 1 and 3 are from the shared sequencer we want to follow. To differentiate between which transactions to execute, we need to know which transactions are from the shared sequencer and which are not. That way, we can ignore the second blob. We can only start executing the transactions after we get and trust the commitment over the ordered batch. Let's call this commitment the order-commitment. The following threads assume we commit over the ordering. There are other ways to prove that a blob comes from a shared sequencer, like signing over each blob. We are happy to discuss other paths, but we found this solution to be one of the more optimal ones.
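A minimal sketch of the filtering step, assuming for illustration that the order-commitment is a plain SHA-256 over the batch bytes (thread 8 explains why the real scheme must match the DA-Layer's commitment scheme):

```go
package filter

import (
	"bytes"
	"crypto/sha256"
)

// orderCommitment is an illustrative commitment: a plain SHA-256 over
// the batch bytes. The real scheme is discussed in later threads.
func orderCommitment(batch []byte) []byte {
	h := sha256.Sum256(batch)
	return h[:]
}

// FilterBlobs keeps only the blobs whose commitment matches one of the
// trusted order-commitments (blobs 1 and 3 in the picture above) and
// ignores everything else (blob 2).
func FilterBlobs(blobs [][]byte, trusted [][]byte) [][]byte {
	var out [][]byte
	for _, blob := range blobs {
		c := orderCommitment(blob)
		for _, t := range trusted {
			if bytes.Equal(c, t) {
				out = append(out, blob)
				break
			}
		}
	}
	return out
}
```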
-
6. How do we validate that the order-commitment is correct?

We validate that the order-commitment is correct through a shared sequencer light client. This light client can validate the shared sequencer header under its trust assumptions while following the chain. The shared sequencer set would sign over this commitment, or over a reference to it that the shared sequencer light client can derive and validate. This light client can either be on-chain or a node in the network communicating through P2P. On-chain light clients are light clients built in as modules or smart contracts on another blockchain. A classic smart contract rollup will have an on-chain rollup light client to verify the ZK or fraud proofs. The same goes for the Tendermint light clients that back IBC connections; these are also on-chain light clients verifying Tendermint consensus. The limitation is that you need a successful transaction or interaction with the blockchain to update the on-chain light client. A light node is something a user can run to do the same verification locally, getting the proofs through the P2P network without waiting for transaction inclusion and without running a node of the blockchain where the on-chain light client lives.
-
7. Why does the order-commitment need to be in the header?

This is an implementation detail of the shared sequencer, so feedback on how other teams think about it would be appreciated. Let's imagine we have a simple merkle tree over all order-commitments; let's call its root the commitment root. The sequencer signs over this commitment root. A shared sequencer full node can reconstruct the header and check the validity of the commitment root. It can then answer light-node queries for a certain order-commitment against this root, responding with a merkle proof from the order-commitment to the root. The shared sequencer light node depends on getting the proof of the order-commitment to the commitment root. We can eliminate the need for queries by creating an extended header containing the original header and a list of all order-commitments. That way, a light client can check the commitment root's correct construction. An on-chain light client would only have a commitment root, but an SS light node could get an extended header with all order-commitments. One argument against this is that having all those commitments in the extended header is too big a storage overhead, but a light node can safely prune them after verification. This will make trust-minimized SS execution light clients lighter. However, it could be over-optimized, as SS light nodes still have a 2/3 honest-majority assumption at this point in time.
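To make the query flow concrete, here is a sketch of the light-node check, assuming a plain binary SHA-256 merkle tree with the proof given as sibling hashes; the actual tree and proof format are up to the shared sequencer implementation.

```go
package merkle

import (
	"bytes"
	"crypto/sha256"
)

// ProofStep is one sibling hash on the path from an order-commitment to
// the commitment root; Left says whether the sibling is the left child.
type ProofStep struct {
	Sibling []byte
	Left    bool
}

// VerifyOrderCommitment walks the merkle proof from the order-commitment
// up to the commitment root that the sequencer set signed over. This is
// the check a shared sequencer light node performs on a queried proof.
func VerifyOrderCommitment(orderCommitment, commitmentRoot []byte, proof []ProofStep) bool {
	leaf := sha256.Sum256(orderCommitment)
	cur := leaf[:]
	for _, step := range proof {
		var h [32]byte
		if step.Left {
			h = sha256.Sum256(append(append([]byte{}, step.Sibling...), cur...))
		} else {
			h = sha256.Sum256(append(append([]byte{}, cur...), step.Sibling...))
		}
		cur = h[:]
	}
	return bytes.Equal(cur, commitmentRoot)
}
```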
-
8. Why do the shared sequencer and, by extension, the order-commitment need to be DA-aware?

The first assumption we are making is that a shared sequencer will want to support all kinds of rollups, and those rollups might want to run on different DA-Layers. Even if your shared sequencer only supports one DA-Layer, we want to point out that the order-commitment has to use the same commitment scheme as the DA-Layer. The argument will be the following: we will try to construct a DA-agnostic commitment scheme and show how that scheme runs into a problem.
-
9. Why do the shared sequencer and rollup network have to agree on serialization/deserialization?

For a header producer and rollup full nodes to read the transactions from the DA-Layer and execute on top of them, you must be able to deserialize the bytes correctly. The SS might only see raw bytes from the user. This means that it will order raw bytes next to each other. The challenge then lies with the header producer: knowing when one transaction ends and when to start reading the bytes of the next. If we use a compression algorithm, we must also be aware of it to decompress accordingly. Another challenge will be optimistic rollups that use single-round fraud proofs, which need an inclusion proof of the rollup's transaction inside a blob. The related discussion on this topic is here. The solution we have been following so far is to post ISRs (intermediate state roots) and transactions next to each other. This will not be possible anymore, as we are now separating ordering and execution: the shared sequencer cannot produce the ISRs, so they will have to live in separate namespaces/blobs. Furthermore, Rollkit wraps transactions in Celestia's compact-shares transaction format. This enables us to read any share in the blob and parse out the transactions. If we want to keep this feature, a shared sequencer might want to adopt this and make transactions self-parsable. This is an open problem, and suggestions are welcome.
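For illustration, the simplest scheme both sides could agree on is length-prefixing each transaction. This is a hypothetical example of the problem, not the compact-shares format mentioned above:

```go
package serde

import (
	"encoding/binary"
	"errors"
)

// EncodeBatch concatenates transactions, each prefixed with its length
// as a little-endian uint32, so a header producer knows where one
// transaction ends and the next begins.
func EncodeBatch(txs [][]byte) []byte {
	var out []byte
	for _, tx := range txs {
		var l [4]byte
		binary.LittleEndian.PutUint32(l[:], uint32(len(tx)))
		out = append(out, l[:]...)
		out = append(out, tx...)
	}
	return out
}

// DecodeBatch reverses EncodeBatch. Without an agreed scheme like this,
// the raw bytes the sequencer ordered cannot be split back into
// transactions.
func DecodeBatch(blob []byte) ([][]byte, error) {
	var txs [][]byte
	for len(blob) > 0 {
		if len(blob) < 4 {
			return nil, errors.New("truncated length prefix")
		}
		l := binary.LittleEndian.Uint32(blob[:4])
		blob = blob[4:]
		if uint32(len(blob)) < l {
			return nil, errors.New("truncated transaction")
		}
		txs = append(txs, blob[:l])
		blob = blob[l:]
	}
	return txs, nil
}
```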
-
10. Why do we need an accumulated DataHash?

The problem statement is the following. We assume the shared sequencer's block time will define the rollup's block time in the general case. This, of course, is not a must; a rollup can interpret the bytes however it wants. With the original assumption, each rollup height will be at least one shared sequencer height. This only goes in one direction, as not every shared sequencer height will have rollup transactions for that height. How does a rollup light node know whether or not a shared sequencer height had a rollup blob for a given namespace? And what if the header producer skipped a shared sequencer height? We need to differentiate whether he did that maliciously or not. We could create a fraud proof for that behavior so that a light node can detect it. Another option would be to have an exclusion proof for a given namespace. Still, a shared sequencer might not use a namespaced merkle tree, so creating an exclusion proof for a given namespace would be difficult. An open question is whether a rollup skips an SS height if it only contains invalid transactions/random bytes, or whether that is just an empty block. Having empty blocks could be a solution in itself, but that would be wasteful posting of bytes to the DA-Layer from the header producer's perspective. The next option is to bake this distinction into the verification logic itself. We can let the DataHash/order-commitment for rollup height x in shared sequencer height a influence the DataHash/order-commitment for rollup height x+1 in shared sequencer height b. We gave it the name accumulated DataHash, and it works as follows. Let DH be the DataHash for a batch of transactions at a rollup height, and let CR be the commitment root over the DataHashes. Then `DataHash(height x) = Hash(DataHash(height x-1), DataHash(txBatch))`. Imagine it as a chain of hashes of hashes. The diagram shows 3 SS heights and 3 namespaces (green, blue, red). You can see how the namespace green skips SS height 2 but is still connected through the chain of hashes. Here is another diagram of blue's accumulated DataHash.
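In code, the accumulated DataHash is just a per-namespace hash chain; a minimal sketch, assuming SHA-256 for both hashes:

```go
package datahash

import "crypto/sha256"

// AccumulatedDataHash chains the previous height's DataHash with the
// hash of the current transaction batch:
//
//	DataHash(height x) = Hash(DataHash(height x-1), DataHash(txBatch))
//
// A skipped shared sequencer height leaves the chain untouched; the next
// batch still links to the last DataHash, so a light node can tell a
// legitimately skipped height from a maliciously dropped batch: a
// dropped batch breaks the chain.
func AccumulatedDataHash(prevDataHash, txBatch []byte) []byte {
	batchHash := sha256.Sum256(txBatch)
	h := sha256.New()
	h.Write(prevDataHash)
	h.Write(batchHash[:])
	return h.Sum(nil)
}
```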
-
11. Why are we calling it the namespace, namespaceHeight, and namespaceDA, not rollupID, rollupHeight, and rollupDA?

While researching, we discovered that multiple rollups can use the same namespace; it is just a matter of how you interpret it. One example of how you can explore this is if the rollups inside the namespace are aware of each other's state. For simplicity, let's assume the person executing the transactions (the header producer) is the same for both rollups. This could give us atomic execution between rollups. It would be like taking Shared Validity Sequencing but separating ordering and execution. It also aligns with the recent blog post by Anoma; to use their terminology, each namespace would be a chimera state partition in this case. Either way, there are many unexplored ways to use a namespace for cross-communication, so we should not limit it to one rollup. Of course, if anybody can post to the namespace, then you are susceptible to the woods attack.
-
12. Who is running what?
-
Thanks for posting this and starting the discussion!
Astria calls Astria also doesn't have
-
Is the user signing twice with this `SequencerTx`?

```go
// End user uses BroadcastMetaTx to send their transaction to a
// shared sequencer.
type SequencerTx struct {
	AppTxData               []byte // raw rollup transaction bytes
	SharedSequencerFee      uint64 // fee paid to the shared sequencer
	SharedSequencerGasLimit uint64 // gas limit on the shared sequencer
	Namespace               uint64 // namespace the transaction targets
	NamespaceDA             string // DA-Layer that namespace lives on
	SequencerTxSignature    []byte // user's signature on this SequencerTx
}
```
-
For the read path, I think we should instead do something by timestamp, because it may be easier to standardize across the board and easier for rollups to integrate, especially in frameworks like the OP Stack. It also gives a bit more flexibility in general across DA layers.
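For illustration, a timestamp-based read path could look like the following sketch; the interface and method names are hypothetical:

```go
package readpath

import "time"

// Batch is an ordered batch together with the sequencer timestamp at
// which it was committed.
type Batch struct {
	Time time.Time
	Txs  [][]byte
}

// ReadPath is keyed by timestamp instead of DA height, which may be
// easier to standardize across DA layers and to integrate into
// frameworks like the OP Stack.
type ReadPath interface {
	// BatchesBetween returns all batches for a namespace whose
	// timestamps fall in the half-open interval [from, to).
	BatchesBetween(namespace uint64, from, to time.Time) ([]Batch, error)
}
```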
-
This is quite a thorough exploration of sequencing design. I learned a lot in just reading it. Thanks for putting it together. A few initial thoughts and clarifications:
Overall, I like the simplicity of the interface. Lends itself to iteration nicely.
-
This API proposal goes through how there could be a common interface between shared sequencers and rollup frameworks. We are making some assumptions about the architecture of shared sequencers and rollup frameworks, so we invite rollup frameworks, shared sequencers, DA-Layers, and others to comment.
This top comment will display the most up-to-date API and will be edited while the discussion is ongoing, keeping a log of edits. The reasoning is explained throughout the other threads. We split the assumptions into threads so we can have more focused discussions. There are still open questions, and solutions to some are proposed. Please comment with your thoughts on each problem and whether you agree with our assumptions.
Shared Sequencer Write Path
Shared Sequencer Read Path
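For orientation, here is a condensed, hypothetical sketch of how the two paths could fit together. Only `BroadcastMetaTx` and `SequencerTx` come from this thread; the rest is illustrative and will change as the discussion evolves.

```go
package api

// WritePath is what users call to get transactions ordered.
type WritePath interface {
	// BroadcastMetaTx submits a SequencerTx for inclusion and ordering.
	BroadcastMetaTx(tx SequencerTx) error
}

// ReadPath is what header producers and rollup full nodes call to fetch
// ordered batches for their namespace. GetBatch is a hypothetical name.
type ReadPath interface {
	// GetBatch returns the ordered batch and its order-commitment for a
	// namespace at a given shared sequencer height.
	GetBatch(namespace, height uint64) (batch [][]byte, orderCommitment []byte, err error)
}

// SequencerTx mirrors the struct discussed elsewhere in this thread.
type SequencerTx struct {
	AppTxData               []byte
	SharedSequencerFee      uint64
	SharedSequencerGasLimit uint64
	Namespace               uint64
	NamespaceDA             string
	SequencerTxSignature    []byte
}
```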