64 changes: 64 additions & 0 deletions .github/workflows/book.yml
@@ -0,0 +1,64 @@
--- # MD Book generation and deployment workflow

name: Plonky3 recursion mdbook

on:
push:
branches: [main]
pull_request:
branches:
- "**"

jobs:
build:
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: actions-rs/toolchain@v1
with:
toolchain: nightly
override: true

- name: Install mdbook
uses: actions-rs/cargo@v1
with:
command: install
args: mdbook

- name: Install preprocessors
uses: actions-rs/cargo@v1
with:
command: install
args: mdbook-katex mdbook-bib mdbook-mermaid

- name: Initialize mermaid preprocessor
run: mdbook-mermaid install book

- name: Build book
run: mdbook build book

- name: Upload built book
uses: actions/upload-artifact@v4
with:
name: built-mdbook
path: ./book/book

deploy:
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
runs-on: ubuntu-latest
needs: build
steps:
- uses: actions/checkout@v4

- name: Download built book
        uses: actions/download-artifact@v4 # artifacts uploaded with upload-artifact@v4 must be downloaded with v4
with:
name: built-mdbook
path: ./book/book

- name: Deploy to GitHub Pages
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./book/book
25 changes: 25 additions & 0 deletions README.md
@@ -1,2 +1,27 @@
# Plonky3-recursion
Plonky3 native support for uni-stark recursion.

## Production Use

This codebase is under active development and hasn't been audited yet. As such, we do not recommend its use in any
production software.

## Documentation

Documentation is still incomplete and will be improved over time.
You can go through the [Plonky3 recursion book](https://Plonky3.github.io/Plonky3-recursion/)
for a walkthrough of the recursion approach.

## License

Licensed under either of

* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)

at your option.

### Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you,
as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
6 changes: 6 additions & 0 deletions book/.gitignore
@@ -0,0 +1,6 @@
book

# Mermaid initialization files
# Obtained with `mdbook-mermaid install book`.
mermaid-init.js
mermaid.min.js
35 changes: 35 additions & 0 deletions book/book.toml
@@ -0,0 +1,35 @@
[book]
authors = ["Polygon Labs"]
language = "en"
multilingual = false
src = "src"
title = "The Plonky3 recursion book"

[rust]
edition = "2024"

[build]
create-missing = true

[preprocessor.index]

[preprocessor.links]

[preprocessor.katex]

[preprocessor.bib]
bibliography = "bibliography.bib"
link-citations = true

[preprocessor.mermaid]
command = "mdbook-mermaid"

[output.html]
additional-js = ["mermaid.min.js", "mermaid-init.js"]

[output.html.print]
# Disable page break
page-break = false

[output.html.search]
limit-results = 15
5 changes: 5 additions & 0 deletions book/src/README.md
@@ -0,0 +1,5 @@
# Plonky3 recursion book

This book introduces the **Plonky3 recursion** stack, a [Polygon Labs](https://polygon.technology/) project aimed at providing sufficient modularity and flexibility for developers, while maintaining high efficiency through its fixed recursive verifier design.

The material is organized to help potential contributors and users understand the motivation, construction and extensions of the recursive verifier.
7 changes: 7 additions & 0 deletions book/src/SUMMARY.md
@@ -0,0 +1,7 @@
# Summary

[Overview](README.md)
* [Introduction](introduction.md)
* [Construction](construction.md)
* [Handling arbitrary programs](extensions.md)
* [Benchmarks](benchmark.md)
7 changes: 7 additions & 0 deletions book/src/benchmark.md
@@ -0,0 +1,7 @@
# Benchmarks

This section will present empirical performance results for the Plonky3 recursion system,
including instructions for reproducibility across target machines.


*To be filled soon.*
23 changes: 23 additions & 0 deletions book/src/bibliography.bib
@@ -0,0 +1,23 @@
@misc{rap,
author = {Ariel Gabizon},
title = {From AIRs to RAPs - how PLONK-style arithmetization works},
note = {HackMD},
year = {2021},
url = {https://hackmd.io/@aztec-network/plonk-arithmetiization-air},
}

@misc{zktree,
author = {Sai Deng and Bo Du},
title = {{zkTree}: A Zero-Knowledge Recursion Tree with {ZKP} Membership Proofs},
howpublished = {Cryptology {ePrint} Archive, Paper 2023/208},
year = {2023},
url = {https://eprint.iacr.org/2023/208}
}

@misc{fri_lift,
author = {Adrian Hamelink},
title = {Lifting plonky3},
note = {HackMD},
year = {2025},
url = {https://hackmd.io/HkfET6x1Qh-yNvm4fKc7zA},
}
104 changes: 104 additions & 0 deletions book/src/construction.md
@@ -0,0 +1,104 @@
# Recursion Approach and Construction

## High-level architecture

Recursion in zero-knowledge proofs means using one proof to verify another: an (outer) prover will generate a proof
to assert validity of an (inner) STARK proof. By applying this recursively, one obtains a (possibly compact) outer proof that attests to arbitrarily deep chains of computation.

Our approach to recursion for Plonky3 differs from a traditional zkVM approach: there is **no program counter, instruction set, or branching logic**. Instead, a fixed program is chosen, and the verifier circuit is specialized to this program only.

## Why fix the program shape?

- **Performance**: without program counter logic, branching, or instruction decoding,
the verifier’s constraints are much lighter.

- **Recursion efficiency**: since the shape of the trace is predetermined,
the recursion circuit can be aggressively optimized.

- **Simplicity**: all inputs follow the same structural pattern, which keeps
implementation complexity low.

## Limitations

- **Rigidity**: only the supported program(s) can be proven.

- **No variable-length traces**: input size must fit the circuit’s predefined structure.

- **Reusability**: adapting to a new program requires a new circuit.

The rest of this book explains how this approach is built, [how to soften its rigidity](extensions.md#strategies),
and why it provides a powerful foundation for recursive proof systems.

## Execution IR

An **Execution IR** (intermediate representation) is defined to describe the steps of the verifier.
This IR is *not itself proven*; it only guides trace population.
The actual soundness comes from the constraints inside the operation-specific STARK chips along with their lookups into the central witness table.
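
A minimal sketch of what such an IR could look like is given below; the names (`WitnessId`, `Op`, `Program`) and the set of variants are illustrative stand-ins, not the crate's actual types.

```rust
/// Illustrative sketch only: the real IR types in this repository may differ.
/// An index into the central Witness table.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct WitnessId(pub u32);

/// One step of the (unproven) Execution IR. Each variant names the Witness
/// slots it reads and writes; the matching chip enforces the semantics.
#[derive(Clone, Debug)]
pub enum Op {
    /// Bind a public input to a Witness slot.
    PublicInput { out: WitnessId },
    /// Bind a constant to a Witness slot.
    Const { value: u64, out: WitnessId },
    /// out = a * b
    Mul { a: WitnessId, b: WitnessId, out: WitnessId },
    /// out = a - b
    Sub { a: WitnessId, b: WitnessId, out: WitnessId },
}

/// A fixed program is a fixed list of IR steps: the prover replays it to
/// populate the chip traces, but the list itself is never proven.
pub type Program = Vec<Op>;
```

The toy program used in the Lookups section below (checking `37 * x - 111 == 0`) would then simply be a short, fixed `Program` of `PublicInput`, `Const`, `Mul`, and `Sub` steps.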


## Witness Table

The Witness table is a central bus that stores values shared across operations. It gathers the pairs `(index, value)` that will be accessed by
the different chips via lookups to enforce consistency.

- The index column is *transparent*, or *preprocessed* [@@rap]: it is known to both prover and verifier in advance, requiring no online commitment.[^1]
> **Contributor:** I believe we only need one @, not two @@. Also for other citations.
>
> **Contributor (Author):** A single @ does not render the hyperlink when building on my side (`[@@foo]` is typical for the bib preprocessor AFAIK). An alternative solution (a tad prettier, as it removes the brackets) is to forget about a common bibliography/references page and have everything inline as footnote references.
- The Witness table values are stored directly as extension field elements (base field elements are padded with zeros in the higher coordinates) for addressing efficiency, as sketched below.
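
Below is a minimal sketch of the table layout, assuming an illustrative `WitnessTable` type with a generic extension field `Ext`; the actual implementation builds on Plonky3's field and matrix abstractions rather than plain vectors.

```rust
/// Illustrative sketch only; `Ext` stands in for a Plonky3 extension field type.
pub struct WitnessTable<Ext> {
    /// Transparent / preprocessed column: indices known to the verifier in
    /// advance, so no online commitment is needed.
    pub index: Vec<u32>,
    /// Committed column: one extension field element per index. Base field
    /// values are embedded by padding the higher coordinates with zeros.
    pub value: Vec<Ext>,
}

impl<Ext: Clone> WitnessTable<Ext> {
    /// The `(index, value)` pairs that the chips look up for consistency.
    pub fn pairs(&self) -> impl Iterator<Item = (u32, Ext)> + '_ {
        self.index.iter().copied().zip(self.value.iter().cloned())
    }
}
```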


## Operation-specific STARK Chips

Each operation family (e.g. addition, multiplication, Merkle path verification, FRI folding) has its own chip.

A chip contains (see the sketch after this list):

- Local columns for its variables.
- Lookup ports into the witness table.
- An AIR that enforces its semantics.
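
The hypothetical `Chip` trait below sketches this shape; the trait and method names are illustrative and do not reflect the crate's actual API.

```rust
/// Illustrative sketch only.
/// A lookup port: the Witness index a chip row reads or writes, together
/// with the value it claims for that index.
pub struct Port<Ext> {
    pub index: u32,
    pub value: Ext,
}

pub trait Chip<Ext> {
    /// Number of local trace columns holding this chip's variables.
    fn width(&self) -> usize;

    /// The `(index, value)` pairs this row sends to the Witness table lookup.
    fn ports(&self, local_row: &[Ext]) -> Vec<Port<Ext>>;

    /// AIR constraints over a local row (e.g. `x * 37 - p` for the MUL chip
    /// of the running example); every returned expression must vanish.
    fn eval(&self, local_row: &[Ext]) -> Vec<Ext>;
}
```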


## Lookups

All chip interactions are performed via a lookup argument against the central Witness table. Enforcing multiset equality between chip ports and the Witness table entries ensures correctness without proving the execution order of the entire IR itself.

Below is a representation of the interactions between the main Witness table and the different chips.

> **Contributor:** Somehow I got: `Syntax error in text` (mermaid version 11.6.0).
>
> **Contributor (Author):** I've tweaked it lightly; it compiles locally, so I'm not sure what's wrong. Could you retry with the latest version, please?

```mermaid
%%{init: {'theme':'dark',"flowchart":{"htmlLabels":true}}}%%
flowchart TB
subgraph P[PI Chip]
P1["• Purpose: bind index=0 to the declared public input x."]
P2["• Lookup: (0, x) must appear in Witness; also exposed as a public value."]
end

subgraph C[CONST Chip]
C1["• Transparent rows: (1, 37), (3, 111), (4, 0)"]
C2["• Lookup: transparent pairs must be present in Witness (aggregated lookup)."]
end

subgraph W[Witness Table]
        %% Note: the "&nbsp;" entities below are just whitespace padding for alignment; I couldn't find a better way.
W0["0: 3 // &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;public input x"]
W1["1: 37 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// constant"]
W2["2: 111 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// p = 37 * x"]
W3["3: 111 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// constant"]
W4["4: 0 // const (y = p - 111)"]
end

subgraph M[MUL chip]
M1["• Ports: (0, x), (1, 37) → inputs; (2, p) → output."]
M2["• AIR: x * 37 = p."]
end

subgraph S [ SUB Chip ]
S1["• Ports: (2, p), (3, 111) → inputs; (4, y) → output."]
S2["• AIR: p - 111 = y."]
end

W --- P
W --- C
W --- M
W --- S
```
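
To make the diagram concrete, the plain-Rust replay below walks through the same toy program (assert that `37 * x - 111 == 0` for the public input `x = 3`) over `u64` arithmetic, with the lookup argument replaced by a naive membership check. It illustrates the bookkeeping only and is not the actual prover code.

```rust
fn main() {
    let x = 3u64; // public input, bound to index 0 by the PI chip
    let p = 37 * x; // MUL chip output
    let y = p - 111; // SUB chip output

    // Central Witness table: (index, value) pairs, matching the diagram above.
    let witness: Vec<(u32, u64)> = vec![(0, x), (1, 37), (2, p), (3, 111), (4, y)];

    // Ports emitted by each chip. Every port must appear in the Witness table;
    // here this is a naive membership check, in the real system a lookup argument.
    let pi_ports = [(0, x)];
    let const_ports = [(1, 37), (3, 111), (4, 0)];
    let mul_ports = [(0, x), (1, 37), (2, p)];
    let sub_ports = [(2, p), (3, 111), (4, y)];

    for port in pi_ports
        .iter()
        .chain(&const_ports)
        .chain(&mul_ports)
        .chain(&sub_ports)
    {
        assert!(witness.contains(port), "lookup failed for {port:?}");
    }

    // Chip AIR constraints over their own rows.
    assert_eq!(p, x * 37); // MUL chip: x * 37 = p
    assert_eq!(y, p - 111); // SUB chip: p - 111 = y
    assert_eq!(y, 0); // CONST chip pins index 4 to 0, so the program asserts 37 * x - 111 == 0
}
```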


[^1]: Transparent columns / polynomials can be reconstructed by the verifier itself, removing the need for the prover to commit to them and later run the FRI protocol on them. However, the verifier needs $O(n)$ work when these columns are unstructured, as it still has to interpolate them. To alleviate this, the Plonky3 recursion stack performs an *offline* commitment to unstructured transparent columns, so that a single instance of the FRI protocol suffices to verify all transparent column evaluations.
24 changes: 24 additions & 0 deletions book/src/extensions.md
@@ -0,0 +1,24 @@
# Handling arbitrary programs

The fixed recursive verifier described in this book supports only fixed, predetermined programs.
This design choice maximizes performance but raises the question: **how can one prove statements of varying size or complexity?**

We highlight below two distinct approaches that alleviate this limitation and allow for arbitrary recursion.

## Tree-style recursion for variable-length inputs

One can split a large computation into chunks and prove each chunk in parallel using a fixed inner circuit.
These proofs can then be *recursively aggregated* in a tree structure, where each leaf of the tree corresponds to a proof of one chunk of the computation. The tree root yields a single proof attesting to the validity of the entire computation.

A formal description of this tree-style recursion for STARKs can be seen in [@@zktree].
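
A minimal sketch of this pairwise aggregation is shown below; `Proof`, `prove_chunk`, and `prove_merge` are hypothetical stand-ins for the fixed inner circuit and a fixed 2-to-1 merge circuit.

```rust
/// Illustrative sketch only.
#[derive(Clone)]
struct Proof(Vec<u8>); // opaque stand-in for a STARK proof

fn prove_chunk(chunk: &[u8]) -> Proof {
    // Fixed inner circuit: proves one chunk of the computation.
    Proof(chunk.to_vec())
}

fn prove_merge(left: &Proof, right: &Proof) -> Proof {
    // Fixed 2-to-1 circuit: verifies two child proofs inside a new proof.
    Proof([left.0.as_slice(), right.0.as_slice()].concat())
}

/// Aggregate the leaf proofs pairwise until a single root proof remains.
fn aggregate(mut layer: Vec<Proof>) -> Proof {
    assert!(!layer.is_empty(), "need at least one leaf proof");
    while layer.len() > 1 {
        layer = layer
            .chunks(2)
            .map(|pair| match pair {
                [left, right] => prove_merge(left, right),
                [odd_one_out] => odd_one_out.clone(), // carried up unchanged
                _ => unreachable!(),
            })
            .collect();
    }
    layer.remove(0)
}

fn main() {
    // Three fixed-circuit leaf proofs for three chunks of the computation.
    let leaves = vec![
        prove_chunk(b"chunk 0"),
        prove_chunk(b"chunk 1"),
        prove_chunk(b"chunk 2"),
    ];
    let _root = aggregate(leaves); // single proof for the whole computation
}
```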

## Flexible FRI verification

To support proofs with different FRI shapes, one can:

* **Lift the proofs** to a larger domain, as described in [@@fri_lift].
Lifting allows a fixed circuit to efficiently verify proofs of varying trace sizes
by projecting smaller domains into larger ones, reusing the original LDE and commitments without recomputation.

* **Verify distinct proof shapes together** inside a fixed FRI verifier circuit. Instead of supporting a single proof size,
a given FRI verifier circuit can be extended to accept a whole range of sizes at minimal overhead cost, as sketched below; see a related implementation in `plonky2` recursion ([Plonky2 PR #1635](https://github.com/0xPolygonZero/plonky2/pull/1635)) for more details.
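
One way this can be realized, sketched below under the assumption that smaller proofs are padded with no-op folding rounds up to the circuit's maximum, is to fix a supported range of round counts when the verifier circuit is built. The constants and types are illustrative and are not the actual Plonky3 FRI interfaces.

```rust
/// Illustrative sketch only.
const MIN_ROUNDS: usize = 10;
const MAX_ROUNDS: usize = 20;

/// Stand-in for the per-round data of a FRI proof (query openings, folded
/// evaluations, ...).
struct FriRound {}

/// A step of the fixed verifier circuit: either a real folding round or a
/// padded round whose constraints are switched off.
enum Step<'a> {
    Real(&'a FriRound),
    Noop,
}

/// Lay out a proof with `MIN_ROUNDS..=MAX_ROUNDS` folding rounds onto the
/// fixed-size circuit, padding the tail with no-op steps.
fn layout_rounds(proof_rounds: &[FriRound]) -> Option<Vec<Step<'_>>> {
    if !(MIN_ROUNDS..=MAX_ROUNDS).contains(&proof_rounds.len()) {
        return None; // outside the range this fixed circuit supports
    }
    let mut steps: Vec<Step<'_>> = proof_rounds.iter().map(Step::Real).collect();
    steps.resize_with(MAX_ROUNDS, || Step::Noop);
    Some(steps)
}
```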
9 changes: 9 additions & 0 deletions book/src/introduction.md
@@ -0,0 +1,9 @@
# Introduction

[Plonky3](https://github.com/Plonky3/Plonky3) offers a comprehensive toolbox of cryptographic building blocks (hash functions, finite fields, polynomial commitment schemes, and more) for building tailored STARK proof systems. However, its adoption has been limited by the lack of native recursion, which would allow proofs of arbitrary program executions to be verified inside other proofs, reducing proof sizes and the related on-chain verification costs.

This project aims to address this limitation by proposing a minimal, fixed recursive verifier for Plonky3, whose conceptual simplicity allows for blazing-fast recursion performance. A key distinction from its predecessor [plonky2](https://github.com/0xPolygonZero/plonky2) is that, rather than wrapping a STARK proof in a separate Plonkish SNARK, the Plonky3 recursion stack is itself built using Plonky3’s STARK primitives.

The source code is open-source, available at [Plonky3 recursion](https://github.com/Plonky3/Plonky3-recursion), and dual-licensed under MIT and Apache-2.0.

***NOTE***: *This project is under active development, unaudited and as such not ready for production use. We welcome all external contributors who would like to support the development effort.*