Argus Panoptes (“the All-Seeing”) was the sleepless giant of Greek mythology. With a hundred eyes, he could watch without rest, making him a perfect guardian.
Note: This project is under active development and is not yet ready for production use. The API and architecture are subject to change.
Argus is a next-generation, open-source, self-hosted monitoring tool for EVM chains. It's designed to be API-first, highly reliable, and deeply flexible, serving as a foundational piece of infrastructure for any EVM project.
- Real-Time EVM Monitoring: Connects to an EVM RPC endpoint to monitor new blocks in real time, processing transactions and logs as they occur.
- Flexible Filtering with Rhai: Uses the embedded Rhai scripting language to create highly specific and powerful filters for any on-chain event, from simple balance changes to complex DeFi interactions.
- EVM Value Wrappers: Convenient functions like `ether`, `gwei`, and `usdc` for handling common token denominations, plus a generic `decimals` function for custom tokens. These make filter scripts more readable and less error-prone (see the sketch after this list).
- Multi-Channel Notifications: Supports webhook, Slack, Discord, and Telegram notifications, allowing for easy integration with your favorite services.
- Advanced Notification Policies: Supports alert aggregation and throttling to reduce noise.
- Message Queue Integration: Streams filtered events into Kafka, RabbitMQ, or NATS for scalable data pipelines and event-driven architectures.
- Stateful Processing: Tracks its progress in a local SQLite database, allowing it to stop and resume from where it left off without missing any blocks.
- REST API for Introspection: A read-only HTTP API to observe the status and configuration of running monitors (an example request appears after the roadmap list below).
- CLI Dry-Run Mode: A `dry-run` command allows you to test your monitor against a range of historical blocks to ensure it works as expected before deploying it live.
- Docker Support: Comes with a multi-stage `Dockerfile` and `docker-compose.yml` for easy, portable deployments.
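To make the value wrappers concrete, here are a few illustrative Rhai filter bodies. Each expression is a standalone alternative, not one script; the exact `decimals` signature shown is an assumption based on the feature description above, so check the project documentation before relying on it.

```rhai
// Native transfers above 10 ETH; ether(10) expands to 10 * 10^18 wei.
tx.value > ether(10)

// USDC transfers above 1,000,000; usdc() applies the token's 6 decimals.
log.name == "Transfer" && log.params.value > usdc(1_000_000)

// A custom 18-decimal token; the decimals(amount, places) signature shown
// here is an assumption based on the feature description.
log.name == "Transfer" && log.params.value > decimals(500, 18)
```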
Planned features on the roadmap include:

- Dynamic Configuration via REST API: Enhance the existing API to allow adding, updating, and removing monitors on the fly without any downtime.
- Stateful Filtering: The ability for a filter to remember past events and make decisions based on a time window (e.g., "alert if an address withdraws more than 5 times in 10 minutes").
- Data Enrichment & Cross-Contract Checks: Make monitors "smarter" by allowing them to fetch external data (e.g., from a price API) or check the state of another contract as part of their filtering logic.
- Automatic ABI Fetching: Automatically fetch contract ABIs from public registries like Etherscan, reducing the amount of manual configuration required.
- Web UI/Dashboard: A simple web interface for managing monitors and viewing a live feed of recent alerts.
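As promised above, the read-only introspection API can be exercised with any HTTP client once the server is running. The `/monitors` route below is an assumption for illustration only; consult the API documentation for the actual paths.

```bash
# Hypothetical route; the server listens on server.listen_address from app.yaml.
curl -s http://localhost:8080/monitors
```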
Benchmarks were run on a MacBook Pro (Apple M1 Pro) over a consistent 100-block range (23,545,500 to 23,545,600), using a local RPC cache to eliminate network latency. The numbers below represent the mean execution time.
| Scenario | Objective | Mean Time (± σ) |
|---|---|---|
| A: Baseline Throughput | Raw block ingestion and simple `tx.value` filtering | 418.0 ms ± 25.2 ms |
| B: Log-Heavy Workload | Global ERC20 `Transfer` log decoding and matching | 1.506 s ± 0.053 s |
| C: Calldata-Heavy | Calldata decoding for a high-traffic contract | 259.2 ms ± 8.6 ms |
For more details on how to run the benchmarks yourself, see `benchmarks/README.md`.
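The mean ± σ figures above follow hyperfine's reporting format; a rough reproduction of scenario A might look like the sketch below. Using hyperfine is an assumption on our part; `benchmarks/README.md` documents the supported workflow.

```bash
# Sketch: time a dry run over the same 100-block range used above.
# The dry-run flags are documented later in this README; hyperfine is assumed.
hyperfine --warmup 1 \
  'cargo run --release -- dry-run --from 23545500 --to 23545600'
```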
- Clone the repository:

  ```bash
  git clone https://github.com/isSerge/argus-rs
  cd argus-rs
  ```

- Install `sqlx-cli`:

  ```bash
  cargo install sqlx-cli
  ```
The application's behavior is primarily configured through three YAML files located in the `configs` directory: `app.yaml`, `monitors.yaml`, and `actions.yaml`. You can specify an alternative directory using the `--config-dir` CLI argument.

The `app.yaml` file contains the main application settings, such as the database connection, RPC endpoints, and performance tuning parameters. For a complete list of parameters and their descriptions, see the app.yaml documentation.
Example `app.yaml`:

```yaml
database_url: "sqlite:data/monitor.db"
rpc_urls:
  - "https://eth.llamarpc.com"
  - "https://1rpc.io/eth"
  - "https://rpc.mevblocker.io"
network_id: "mainnet"
# Start 100 blocks behind the chain tip to avoid issues with block reorganizations.
initial_start_block: -100
block_chunk_size: 5
polling_interval_ms: 10000
confirmation_blocks: 12
notification_channel_capacity: 1024
abi_config_path: abis/
# API server configuration
server:
  # Address and port for the HTTP server to listen on.
  listen_address: "0.0.0.0:8080"
  # API key for securing write endpoints (can also be set via the ARGUS_API_KEY env var).
  api_key: "your-secret-api-key-here"
```

The `monitors.yaml` file is where you define what you want to monitor on the blockchain. Each monitor specifies a network, an optional contract address, and a Rhai filter script. If your script needs to inspect event logs (i.e., access the `log` variable), you must also provide the name of the contract's ABI. This name should correspond to a `.json` file (without the `.json` extension) located in the `abis/` directory (or the directory configured for ABIs in `app.yaml`); for example, `abi: "usdc"` refers to `abis/usdc.json`. For a complete list of parameters and their descriptions, see the monitors.yaml documentation.
See `configs/monitors.example.yaml` for more detailed examples.

Example `monitors.yaml`:

```yaml
monitors:
  # Monitor for large native ETH transfers (no ABI needed).
  - name: "Large ETH Transfers"
    network: "mainnet"
    # No address means it runs on every transaction.
    # This type of monitor inspects transaction data directly.
    filter_script: |
      tx.value > ether(10)
    # Actions are used to send notifications when the monitor triggers.
    actions:
      # This monitor will use the "my-generic-webhook" action defined in `actions.yaml`.
      - "my-generic-webhook"

  # Monitor for large USDC transfers (ABI is required).
  - name: "Large USDC Transfers"
    network: "mainnet"
    address: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"
    # The name of the ABI (Application Binary Interface) for the contract being monitored.
    abi: "usdc"
    filter_script: |
      log.name == "Transfer" && log.params.value > usdc(1_000_000)
    actions:
      # This monitor will use the "slack-notifications" action defined in `actions.yaml`.
      - "slack-notifications"
```

The `actions.yaml` file defines how you want to be notified when a monitor finds a match. You can configure various notification channels such as webhooks, Slack, or Discord. For a complete list of parameters and their descriptions, see the actions.yaml documentation.
See `configs/actions.example.yaml` for more detailed examples.

Example `actions.yaml`:

```yaml
actions:
  - name: "my-generic-webhook"
    webhook:
      url: "https://my-service.com/webhook-endpoint"
      message:
        title: "New Transaction Alert: {{ monitor_name }}"
        body: |
          A new event was detected on contract {{ log.address }}.
          - **Block Number**: {{ block_number }}
          - **Transaction Hash**: {{ transaction_hash }}

  - name: "slack-notifications"
    slack:
      slack_url: "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"
      message:
        title: "Large USDC Transfer Detected"
        body: |
          A transfer of over 1,000,000 USDC was detected.
          <https://etherscan.io/tx/{{ transaction_hash }}|View on Etherscan>
```

This project includes a variety of examples to help you get started. Each example is self-contained and demonstrates a specific use case.
For a full list of examples, see the Examples README.
Argus uses the `tracing` crate for structured logging. You can control the verbosity of the logs using the `RUST_LOG` environment variable.
Examples:

- `RUST_LOG=info cargo run --release`: Only shows `INFO` level messages and above.
- `RUST_LOG=debug cargo run --release`: Shows `DEBUG` level messages and above.
- `RUST_LOG=argus=trace cargo run --release`: Shows `TRACE` level messages and above specifically for the `argus` crate.
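Directives can also be combined to set different levels per crate, using standard `EnvFilter` syntax (the particular combination below is illustrative):

```bash
# Quiet dependencies at WARN while keeping the argus crate at DEBUG.
RUST_LOG=warn,argus=debug cargo run --release
```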
For more detailed control, refer to the `tracing-subscriber` documentation on `EnvFilter`.
The application uses `sqlx` to manage database migrations. The state is stored in a local SQLite database file, configured via the `database_url` in `app.yaml`.
The database file will be created automatically on the first run if it doesn't exist.
- Ensure `database_url` is set in your `app.yaml`.

- Run migrations. This command will create the necessary tables in the database:

  ```bash
  sqlx migrate run
  ```
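Note that `sqlx-cli` resolves the database from the `DATABASE_URL` environment variable (or a `.env` file) rather than from `app.yaml`, so if the migration command cannot locate your database, export the URL first:

```bash
# sqlx-cli reads DATABASE_URL, so mirror the database_url value from app.yaml.
export DATABASE_URL="sqlite:data/monitor.db"
sqlx migrate run
```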
Once the setup is complete, you can run the application.
- Build the project:

  ```bash
  cargo build --release
  ```

- Run the application:

  ```bash
  # Run the main monitoring service.
  cargo run --release -- run

  # Run a dry run over a block range.
  cargo run --release -- dry-run --from <START> --to <END>

  # Run the main monitoring service with a custom config directory.
  cargo run --release -- run --config-dir <CUSTOM CONFIG DIR>

  # Run a dry run with a custom config directory.
  cargo run --release -- dry-run --from <START> --to <END> --config-dir <CUSTOM CONFIG DIR>
  ```
The easiest way to run Argus is with Docker Compose. This method handles the database, configuration, and application lifecycle for you.
- Clone the repository:

  ```bash
  git clone https://github.com/isSerge/argus-rs
  cd argus-rs
  ```

- Configure secrets: copy the example environment file and fill in your action secrets (e.g., Telegram token, Slack webhook URL).

  ```bash
  cp .env.example .env
  # Now edit .env with your secrets.
  ```

- Create a local data directory to persist the database:

  ```bash
  mkdir -p data
  ```

- Run the application in detached mode:

  ```bash
  docker compose up -d
  ```
You can view logs with `docker compose logs -f` and stop the application with `docker compose down`. For more details, see the Deployment with Docker documentation.
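As a rough sketch of what such a setup involves, the outline below shows the moving parts the compose workflow wires together. The repository ships its own `docker-compose.yml`, which is authoritative; the service name, build target, and mount paths here are assumptions for illustration.

```yaml
# Hypothetical compose sketch; the real docker-compose.yml in the repository
# is the source of truth. Service names and mount paths are assumptions.
services:
  argus:
    build: .                    # build from the repository's multi-stage Dockerfile
    env_file: .env              # action secrets (Slack webhook, Telegram token, ...)
    volumes:
      - ./configs:/app/configs  # app.yaml, monitors.yaml, actions.yaml
      - ./abis:/app/abis        # contract ABI JSON files
      - ./data:/app/data        # persists the SQLite database across restarts
    ports:
      - "8080:8080"             # REST API (matches server.listen_address)
```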
The repository is organized to separate application logic, configuration, and documentation.
- `configs`: Holds the default YAML configuration files (`app.yaml`, `monitors.yaml`, `actions.yaml`).
- `benchmarks`: Contains performance benchmark configurations and scripts.
- `examples`: Contains a collection of self-contained, runnable examples, each demonstrating a specific feature or use case.
- `docs`: The source for the project's official documentation, built with `mdbook`.
- `abis`: The default directory for storing contract ABI JSON files, which are used to decode event logs.
- `migrations`: Contains the SQL migration files for setting up and updating the application's database schema.
The `src` directory contains all the Rust source code, organized into the following modules:

- `abi`: Handles ABI parsing, decoding, and management.
- `action_dispatcher`: Manages sending notifications to services like webhooks and Slack, and to message queues like Kafka.
- `context`: Encapsulates all necessary components for use throughout the application.
- `cmd`: Contains the definitions for the command-line interface (CLI) commands like `run` and `dry-run`.
- `config`: Manages application configuration loading and validation.
- `engine`: The core processing and filtering logic, including the Rhai script executor.
- `http_client`: Provides a retryable HTTP client and a pool for managing clients.
- `loader`: Handles the loading and parsing of configuration files.
- `models`: Defines the core data structures (e.g., `Monitor`, `BlockData`, `Transaction`).
- `monitor`: Manages the lifecycle and validation of monitor configurations.
- `persistence`: Manages the application's state via the `StateRepository` trait.
- `providers`: Fetches data from external sources like EVM nodes.
- `supervisor`: The top-level orchestrator that initializes and coordinates all services.
- `test_helpers`: Utilities and mock objects for tests.
- `main.rs`: The application's entry point, handling CLI parsing and startup.
- `lib.rs`: The library crate root.
Contributions are welcome! Please see our Contributing Guide for more details on how to get started, our development process, and coding standards.
