Index any EVM chain and query in SQL
Getting Started | Examples | Design Goals & Features | RoadMap | Contributing
📊 Here is what indexing and tracking owners of your favorite NFTs looks like:
```rust
use chaindexing::states::{ContractState, Filters, Updates};
use chaindexing::{EventContext, EventHandler};

use crate::states::Nft;

pub struct TransferHandler;

#[chaindexing::augmenting_std::async_trait]
impl EventHandler for TransferHandler {
    fn abi(&self) -> &'static str {
        "event Transfer(address indexed from, address indexed to, uint256 indexed tokenId)"
    }

    async fn handle_event<'a, 'b>(&self, context: EventContext<'a, 'b>) {
        let event_params = context.get_event_params();

        // Extract the Transfer event's parameters
        let _from = event_params.get_address_string("from");
        let to = event_params.get_address_string("to");
        let token_id = event_params.get_u32("tokenId");

        if let Some(existing_nft) =
            Nft::read_one(&Filters::new("token_id", token_id), &context).await
        {
            // The NFT is already indexed: record its new owner
            let updates = Updates::new("owner_address", &to);
            existing_nft.update(&updates, &context).await;
        } else {
            // First time we see this token: create its state
            let new_nft = Nft {
                token_id,
                owner_address: to,
            };
            new_nft.create(&context).await;
        }
    }
}
```
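The handler above reads and writes an `Nft` state from `crate::states`. A minimal sketch of what that state might look like follows. Note that `ContractState` is redefined locally here as a stand-in so the snippet is self-contained; in a real project you would implement `chaindexing::states::ContractState` instead, whose exact required methods may differ from this sketch:

```rust
// Stand-in for chaindexing's `ContractState` trait (the real one lives in
// `chaindexing::states` and may require additional methods). Shown here
// only to illustrate the shape of an indexed state.
pub trait ContractState {
    /// Name of the table this state is persisted to.
    fn table_name() -> &'static str;
}

// The state tracked by `TransferHandler`: one row per token, updated
// whenever a Transfer event changes the owner.
#[derive(Clone, Debug)]
pub struct Nft {
    pub token_id: u32,
    pub owner_address: String,
}

impl ContractState for Nft {
    fn table_name() -> &'static str {
        "nfts"
    }
}
```

Because states are plain structs persisted to ordinary tables, any ORM or raw SQL can read them back out, which is what the "ORM-agnostic" goal below refers to.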
A quick and effective way to get started is to explore the comprehensive examples at https://github.com/chaindexing/chaindexing-examples/tree/main/rust.
- 💸 Free forever
- ⚡ Real-time use-cases
- 🌐 Multi-chain
- 🧂 Granular, 🧩 Modular & 📈 Scalable
- 🌍 Environment-agnostic to allow inspecting 🔍 & replicating indexes anywhere!
- 🔓 ORM-agnostic, use any ORM to access indexed data
- 📤 Easy export to any data lake: S3, Snowflake, etc.
- 🚫 No complex YAML/JSON/CLI config
- 💪 Index contracts discovered at runtime
- ✨ Handles re-orgs with no UX impact
- 🔥 Side effect handling for notifications & bridging use cases
- 💸 Optimize RPC cost by indexing when certain activities happen in your DApp
- 💎 Language-agnostic, so no macros!
- ⬜ Expose `is_at_block_tail` flag to improve op heuristics for applications
- ⬜ Support SQLite database (currently supports only Postgres)
- ⬜ Support indexing raw transactions & call traces.
- ⬜ Improved error handling/messages/reporting (Please feel free to open an issue when an opaque runtime error is encountered)
- ⬜ Support TLS connections
- ⬜ Minimal UI for inspecting events and indexed states
Chaindexing is still young and optimized for ergonomics rather than raw throughput. The default configuration works well for real-time indexing of a few contracts, but historical backfills or very high-volume workloads may expose the following constraints:
- 🐢 Historical Throughput: Each chain ingester pulls `blocks_per_batch` blocks every `ingestion_rate_ms` milliseconds. With the defaults (450 blocks / 20 000 ms) this translates to roughly 22 blocks/s per chain. Tune these knobs to trade throughput for RPC cost.
- 🔗 Chain Concurrency: Only `chain_concurrency` chains are ingested in parallel (default 4). Additional chains are processed sequentially.
- ⚙️ Handler Cadence: Event handlers execute every `handler_rate_ms` (default 4 000 ms). If a contract emits thousands of events per block, this cycle can lag behind ingestion.
- 🗄️ Database Bottlenecks: Chaindexing currently supports Postgres only. Inserts are batched inside transactions over a limited connection pool; disk or network latency can throttle the pipeline.
- 🌐 RPC Provider Limits: Latency and rate-limits of your JSON-RPC provider (e.g. Alchemy, Infura) directly affect indexing speed. Public endpoints often cap block ranges and requests per second.
- ⏳ Deep Backfills: Indexing hundreds of millions of historical blocks has not been fully optimized and may require substantial time and memory. Consider chunked backfills or starting closer to the present block.
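As a back-of-envelope check on the numbers above, the sustained ingestion rate and the wall-clock cost of a deep backfill can be computed directly. The figures below are the quoted defaults (450 blocks per batch, one batch every 20 000 ms), not API constants, and the 18M-block example is an illustrative round number:

```rust
/// Sustained ingestion rate for one chain ingester, given the two
/// tuning knobs described above.
fn blocks_per_second(blocks_per_batch: u64, ingestion_rate_ms: u64) -> f64 {
    blocks_per_batch as f64 / (ingestion_rate_ms as f64 / 1000.0)
}

/// Rough wall-clock time (in days) to backfill `total_blocks` at a
/// sustained rate, ignoring RPC latency and handler lag.
fn backfill_days(total_blocks: u64, blocks_per_sec: f64) -> f64 {
    total_blocks as f64 / blocks_per_sec / 86_400.0
}

fn main() {
    // Default knobs: 450 blocks every 20_000 ms.
    let rate = blocks_per_second(450, 20_000);
    println!("~{rate:.1} blocks/s per chain"); // ~22.5 blocks/s

    // Backfilling 18M historical blocks at that rate takes on the
    // order of nine days, which is why chunked backfills or starting
    // near the present block are suggested above.
    let days = backfill_days(18_000_000, rate);
    println!("~{days:.1} days to backfill 18M blocks");
}
```

Raising `blocks_per_batch` or lowering `ingestion_rate_ms` shortens this linearly, at the cost of proportionally more RPC requests per second against your provider's rate limits.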
Work on these limitations is ongoing; community benchmarks and pull requests are highly appreciated!
All contributions are welcome. Before working on a PR, please consider opening an issue detailing the feature/bug. Equally, when submitting a PR, please ensure that all checks pass to facilitate a smooth review process.