[General] ZK app chain with 16GB RAM HW-based provers #1018
-
Team or Project: Dema ZK chain (new chain)
Chain Environment: Testnet
L2 block number: No response

Provide a brief description of the functionality you're trying to implement and the issue you are running into.

Hi 👋🏼, after searching/reviewing all "hardware"-related threads in the zkSync GitHub discussions and reviewing https://docs.zksync.io/zk-stack/running/proving#enabling-boojum-prover for potentially more minimal hardware requirements, we'd like to know whether it's possible to run a prover with 8–16GB RAM and a consumer-grade GPU.

We're considering setting up an Elastic app chain with inexpensive prover/sequencer nodes. The nodes would run on 16-core x86_64 machines (8–16GB RAM) plus an RTX 3000-class CUDA GPU. We'll be proving at most ~100 txs/sec, with parallelized proving, and each tx includes ~150–200 constraints (simple checks such as user validation, geo-policy, and escrow).

Our plan:
- Keep only one day of state (Merkle trie < 1GB).
- Minimize prover memory to 8–16GB, covering the Merkle trie, execution context, proof buffer, prover binaries, and OS, plus a margin (a rough budget sketch follows this post).
- Use a custom data layout (i.e., not full state history).
- Deploy 4 custom Solidity contracts (ID, escrow, marketplace logic).
- Support wallets across chains (e.g. Optimism/Base) via account abstraction (EIP-4337 + EIP-7702).

Question: Can we reasonably run a zkSync app chain on the above hardware? Any limitations to consider?

To clarify: I'm trying to decide, given that I want control over prover/sequencer infra with lower hardware needs, whether I should fork the zkSync stack as a DIY app chain using open-sourced zkSync tech, or build a light zkRollup with optimized circuits (e.g. Circom + Halo2).

Thanks! 🙏🏼

Note: I posted this on Discord in the AI Developer channel and received an answer. However, I'm not comfortable with the AI-generated answer, since it might lead us astray only for us to find out later that these requirements can't be met. The AI-generated answer indicated this is possible, but I'd like confirmation from a human. Thanks. 🙏🏼

Repo Link (Optional): No response
Additional Details: No response
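To make the memory target concrete, here is a back-of-envelope sketch of the budget described above. Apart from the < 1GB Merkle trie figure taken from the post, every component size is an illustrative assumption, not a measured zkSync prover figure:

```python
# Rough prover RAM budget for the setup described above.
# Only the Merkle trie figure comes from the post; the rest are
# placeholder assumptions to show how tight 8-16 GiB can get.

budget_gib = {
    "merkle_trie": 1.0,        # 1-day state, stated as < 1 GiB
    "execution_context": 2.0,  # assumed; grows with txs per batch
    "proof_buffer": 4.0,       # assumed; depends on circuit sizes
    "prover_binaries": 1.0,    # assumed
    "os_and_services": 2.0,    # assumed
}

total = sum(budget_gib.values())
margin = 0.25 * total  # arbitrary 25% headroom

print(f"Estimated working set: {total:.1f} GiB")
print(f"With 25% margin:       {total + margin:.1f} GiB")
# -> 10.0 GiB working set, 12.5 GiB with margin: near the top of the
#    8-16 GiB target, so txs per batch is the main remaining knob.
```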
Replies: 1 comment 2 replies
-
Hey @AtabeTuatara. Running chains on lower-end hardware is possible, but keep in mind that performance will suffer. The amount of RAM needed is directly proportional to the number of txs you include per batch (which can be configured via seal criteria). The RTX 3000-series card will need to be a 3090, as proving needs > 20GB of VRAM (you can fit within less for circuit proving, but the last step, proof compression, requires > 20GB).
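For reference, batch size is capped through the state keeper's seal criteria. Below is a minimal sketch of the relevant knobs, using field names that mirror zksync-era's state keeper config; names and defaults vary by stack version, so treat this as an assumption and verify against your chain's generated config:

```yaml
# Sketch only -- check your stack version's actual config schema.
state_keeper:
  transaction_slots: 50           # fewer txs per batch => smaller RAM footprint
  block_commit_deadline_ms: 2500  # seal the batch after this deadline regardless
  max_single_tx_gas: 6000000      # cap per-tx gas so one tx can't blow up a batch
```

Lowering `transaction_slots` trades throughput for memory, which is exactly the RAM-to-batch-size proportionality mentioned above.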
The use case isn't entirely clear to me, but if you want to unblock yourself, forking might be faster. If you want to contribute the circuits upstream afterwards, that's a possibility as well. Hope this helps!