Update development documentation with improved format and REPL commands #208

Merged 3 commits on Apr 10, 2025
docs/development.md: 140 changes (86 additions, 54 deletions)
# Development Guide

## Installation Prerequisites

Before you can start developing, you'll need to install the following tools:

```bash
# For working with gRPC
brew install grpc protobuf

# For generating ABI bindings
go install github.com/ethereum/go-ethereum/cmd/abigen@latest

# For gRPC tools
npm install -g grpc-tools
```
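
Once installed, a typical code-generation run looks something like the sketch below. Note this is illustrative: the proto and ABI paths are hypothetical, and `protoc` needs the Go plugins (`protoc-gen-go`, `protoc-gen-go-grpc`) on your `PATH`.

```bash
# Hypothetical paths: regenerate Go gRPC stubs from a proto definition
protoc --go_out=. --go-grpc_out=. protobuf/avs.proto

# Hypothetical paths: generate Go bindings from a contract ABI
abigen --abi build/MyContract.abi --pkg bindings --out bindings.go
```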

## Running a Local Node

For a node that connects to Ava's pre-deployed AVS contract on the Holesky testnet, you need to create a `config/aggregator.yaml` file. Ask a developer for help constructing this file.

Once you have the config file, you can run the aggregator:

```bash
make dev-build
make dev-agg
```

Or use Docker Compose directly:

```bash
# Only needed when code changes
docker compose build

# Start the services
docker compose up
```
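
To follow the node's output while it runs, you can tail the Compose logs. The service name `aggregator` below is an assumption; check `docker-compose.yml` for the actual name.

```bash
# Follow logs for a single service (service name assumed)
docker compose logs -f aggregator
```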

Once Docker Compose is up, there are two services running:

1. The aggregator on `localhost:2206`
2. A gRPC web UI to inspect and debug content on `localhost:8080`

Visit http://localhost:8080 to interactively construct and send requests to the gRPC node.
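
If you prefer the command line, `grpcurl` can hit the same endpoint, assuming the node exposes gRPC server reflection (unverified here):

```bash
# List the services the aggregator exposes (requires server reflection)
grpcurl -plaintext localhost:2206 list
```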

For details on methods and payloads, check the `protocol.md` documentation. Look into the `examples` directory to run example workflows against the local node.

## Running an Operator Locally

Running an operator locally allows you to schedule jobs and observe their execution. First, prepare a `config/operator.yaml` file, then run:

```bash
make dev-op
```

This local operator will connect to the EigenLayer Holesky environment, so you will need to make sure you have existing operator keys onboarded with EigenLayer.

## Live Reload

To automatically compile and live reload the node during development, run:

```bash
make dev-live
```

## Client SDK

We generate the client SDK for JavaScript. The code is generated based on our protobuf definition files.
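
The `grpc-tools` package from the prerequisites ships a `grpc_tools_node_protoc` wrapper that can produce the JavaScript stubs. A sketch, with a hypothetical proto path and output directory:

```bash
# Hypothetical paths: emit CommonJS messages plus a grpc-js client stub
grpc_tools_node_protoc \
  --js_out=import_style=commonjs,binary:./sdk \
  --grpc_out=grpc_js:./sdk \
  protobuf/avs.proto
```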

## Storage REPL

To inspect storage, we use a simple REPL (Read-Eval-Print Loop) interface. Connect to it with:

```bash
telnet /tmp/ap.sock
```
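
Note that `/tmp/ap.sock` is a Unix domain socket; if your telnet build cannot dial Unix sockets, netcat is a common alternative, assuming a build that supports the `-U` flag:

```bash
# Alternative: connect to the REPL's Unix socket with netcat
nc -U /tmp/ap.sock
```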

### Supported Commands

The REPL supports the following commands:

#### `list <prefix>*`
Lists all keys that match the given prefix pattern. The asterisk (*) is used as a wildcard.

Example:
```bash
# List all keys in the database
list *
# List all active tasks
list t:a:*
```

#### `get <key>`
Retrieves and displays the value associated with the specified key.

Example:
```bash
# Get the value of a specific task
get t:a:01JD3252QZKJPK20CPH0S179FH
```

#### `set <key> <value>`
Sets a key to the specified value. You can also load values from files by prefixing the file path with an @ symbol.

Examples:
```bash
# Set a key with a direct value
set t:a:01JD3252QZKJPK20CPH0S179FH 'value here'

# Set a key with the contents of a file
set t:a:01JD3252QZKJPK20CPH0S179FH @/path/to/file
```
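
A quick round trip with a throwaway key (the key name below is purely illustrative) confirms the write:

```bash
# Write a value, then read it back; "demo:key" is an illustrative name
set demo:key '{"status":"ok"}'
get demo:key
```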

#### `gc`
Triggers garbage collection on the database with a ratio of 0.7. This helps reclaim space from deleted data.

```bash
gc
```

#### `exit`
Closes the REPL connection.

```bash
exit
```

#### `rm <prefix>*`
Deletes all keys that match the given prefix pattern. The command first lists all matching keys and then deletes them.

```bash
# Delete keys with a specific prefix
rm prefix:*
```
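
Since `rm` removes everything matching the prefix, it is worth previewing the match set with `list` first:

```bash
# Preview the keys a prefix matches, then delete them
list t:a:*
rm t:a:*
```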

#### `backup <directory>`
Creates a backup of the database to the specified directory. The backup is stored in a timestamped subdirectory with a filename of `badger.backup`.

```bash
# Backup the database to the /tmp/backups directory
backup /tmp/backups
```
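
Because the REPL listens on a plain socket, a backup can also be scripted non-interactively, for example from cron. This sketch assumes a netcat build with Unix-socket support and that the REPL accepts one-shot piped input:

```bash
# One-shot backup over the REPL socket (assumes nc -U support)
echo "backup /tmp/backups" | nc -U /tmp/ap.sock
```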

#### `trigger`
This command is currently under development and not fully implemented; see `repl.go` for details.

## Resetting Storage

During development, you may need to reset storage to erase bad data due to schema changes. In the future, we will implement migration to properly migrate storage. For now, to wipe out storage, run:

```bash
make dev-clean
```