Welcome to AptScan's Aptos Indexer repository! This repository provides an efficient and comprehensive indexing solution for the Aptos blockchain, enabling easy access to and analysis of on-chain data.
The purpose of this repository is to enhance the existing Aptos indexer by adding Kafka support. This addition allows for real-time streaming of data updates and enables seamless integration with Kafka-based data processing pipelines.
As the maintainers of this repository, aptscan.ai is committed to maintaining and enhancing the Aptos data index, making it a reliable resource for researchers, practitioners, and developers working with the Aptos blockchain.
- Indexing of Aptos blockchain data for easy retrieval and exploration.
- Seamless integration with Kafka for real-time data streaming and processing.
To get started with using the Aptos data index repository, please follow the instructions below:
- Replace the aptos-core indexer with the indexer from this repository.
- Rename `config.json.example` to `config.json` and customize it to adapt to your environment. Modify the configuration settings in `config.json` to match your specific setup, including Kafka broker addresses, topic names, and other relevant parameters.
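As an illustration, a `config.json` for this setup might look like the sketch below. The exact field names depend on this repository's config schema, so treat these keys (`kafka_brokers`, `kafka_topic`, `postgres_uri`) as assumptions and check `config.json.example` for the real ones.

```json
{
  "kafka_brokers": ["localhost:9092"],
  "kafka_topic": "aptos-transactions",
  "postgres_uri": "postgres://postgres@localhost:5432/postgres"
}
```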
By completing these steps, you will replace the aptos-core indexer with the updated version from this repository and configure it to suit your environment. We appreciate your interest in AptScan's Aptos Indexer repository.
Happy exploring and analyzing the Aptos dataset!
PRs are welcome! This is the quickest way to get your changes ingested into the Aptos system. PRs should be made against the `master` branch. Please include testing details.
Tails the blockchain's transactions and pushes them into a Postgres DB.
A fullnode can run an indexer with the proper configs. If enabled, the indexer will tail transactions in the fullnode with business logic from each registered `TransactionProcessor`. On startup, by default, it will restart from the first gap (e.g. version 5 if the versions that have succeeded are 0, 1, 2, 3, 4, 6).
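The gap-restart behavior above can be sketched as follows. This is an illustrative helper, not the indexer's actual implementation (the real indexer tracks processed versions in Postgres, not in memory):

```rust
/// Returns the first missing version given a sorted list of
/// successfully processed versions expected to start at 0,
/// or None if there is no gap.
fn first_gap(processed: &[u64]) -> Option<u64> {
    for (expected, &actual) in (0u64..).zip(processed.iter()) {
        if actual != expected {
            return Some(expected); // first version never processed
        }
    }
    None
}

fn main() {
    // Versions 0..=4 and 6 succeeded, so the indexer restarts at 5.
    assert_eq!(first_gap(&[0, 1, 2, 3, 4, 6]), Some(5));
    // No gap: nothing to backfill.
    assert_eq!(first_gap(&[0, 1, 2]), None);
}
```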
Each `TransactionProcessor` will need to be run in a separate fullnode. Please note that it may be difficult to run several transaction processors simultaneously on a single machine due to port conflicts.

When developing your own, ensure each `TransactionProcessor` is idempotent: being called with the same input won't result in an error if some or all of the processing had previously been completed.
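The idempotency requirement can be illustrated with the simplified, hypothetical processor below. The real `TransactionProcessor` trait in aptos-core has a different signature and writes to Postgres; the point here is only the pattern: re-processing a version performs an upsert (analogous to `INSERT ... ON CONFLICT DO UPDATE` in SQL) rather than failing on a duplicate insert.

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for a TransactionProcessor's storage:
// version -> number of events, keyed like an upserted table row.
struct EventCountProcessor {
    store: BTreeMap<u64, usize>,
}

impl EventCountProcessor {
    fn new() -> Self {
        Self { store: BTreeMap::new() }
    }

    // Idempotent: processing the same version twice leaves the
    // store in the same state and never errors on replay.
    fn process(&mut self, version: u64, event_count: usize) -> Result<(), String> {
        self.store.insert(version, event_count); // upsert semantics
        Ok(())
    }
}

fn main() {
    let mut p = EventCountProcessor::new();
    p.process(7, 3).unwrap();
    p.process(7, 3).unwrap(); // replaying the same version is safe
    assert_eq!(p.store.get(&7), Some(&3));
    assert_eq!(p.store.len(), 1);
}
```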
- `brew install libpq` (this is a Postgres C API library). Also perform all export commands post-installation.
- `brew install postgres`
- Start Postgres: `pg_ctl -D /opt/homebrew/var/postgres start` or `brew services start postgresql`
- Create the user: `/opt/homebrew/bin/createuser -s postgres`
- Ensure you're able to do: `psql postgres`
- `cargo install diesel_cli --no-default-features --features postgres`
- Make sure that you're in the indexer folder (run `cd crates/indexer` from the base directory), then run `diesel migration run --database-url postgresql://localhost/postgres`
  a. If for some reason this database is already being used, try a different db, e.g. `DATABASE_URL=postgres://postgres@localhost:5432/indexer_v2 diesel database reset`
Please follow the standard fullnode installation guide on aptos.dev (https://aptos.dev/nodes/full-node/fullnode-source-code-or-docker)
cargo run -p aptos-node --features "indexer" --release -- -f <some_path>/fullnode.yaml
- Example `fullnode.yaml` modification:

```yaml
storage:
  enable_indexer: true
  # This is to avoid the node being pruned
  storage_pruner_config:
    ledger_pruner_config:
      enable: false

indexer:
  enabled: true
  postgres_uri: "postgres://postgres@localhost:5432/postgres"
  processor: "default_processor"
  check_chain_id: true
  emit_every: 500
```
- Complete Installation Guide above
brew install --cask pgadmin4
- Open PgAdmin4
- Create a master password
- Right click Servers > `Register` > `Server`
- Enter the information in the registration Modal:
General:
Name: Indexer
Connection:
Hostname / Address: 127.0.0.1
Port: 5432
Maintenance Database: postgres
Username: postgres
- Save
Notes:
- Diesel uses the `DATABASE_URL` env var to connect to the database, or the `--database-url` argument.
- Diesel CLI can be installed via cargo, e.g., `cargo install diesel_cli --no-default-features --features postgres`.
- `diesel migration run` sets up the database and runs all available migrations.
- Aptos tests use the `INDEXER_DATABASE_URL` env var. It needs to be set for the relevant tests to run.
- Postgres can be installed and run via brew.
- `diesel migration generate <your_migration_name>` generates a new folder containing `up.sql` + `down.sql` for your migration.
- `diesel migration run` to apply the missing migrations. This will re-generate `schema.rs` as required.
- `diesel migration redo` to rollback and apply the last migration.
- `diesel database reset` drops the existing database and reruns all the migrations.
- You can find more information in the Diesel documentation.
- If you run into:

```
= note: ld: library not found for -lpq
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```

first make sure you have `postgres` and `libpq` installed via homebrew; see the installation guide above for more details. This is complaining about the `libpq` library, a Postgres C API library which diesel needs to run; more on this issue here
2. Postgresql Mac M1 installation guide
3. Stop postgresql: `brew services stop postgresql`
4. Since homebrew installs packages in `/opt/homebrew/bin/postgres`, your `pg_hba.conf` should be in `/opt/homebrew/var/postgres/` for Apple Silicon users
5. Likewise, your `postmaster.pid` should be retrievable via `cat /opt/homebrew/var/postgres/postmaster.pid`. Sometimes you may have to remove this if you are unable to start your server, with an error like:

```
waiting for server to start....2022-05-17 12:49:42.735 PDT [4936] FATAL:  lock file "postmaster.pid" already exists
2022-05-17 12:49:42.735 PDT [4936] HINT:  Is another postmaster (PID 4885) running in data directory "/opt/homebrew/var/postgres"?
 stopped waiting
pg_ctl: could not start server
```

then run `brew services restart postgresql`
6. Alias for starting testnet (put this in `~/.zshrc`):

```shell
alias testnet="cd ~/Desktop/aptos-core; CARGO_NET_GIT_FETCH_WITH_CLI=true cargo run -p aptos-node -- --test"
```

Then run `source ~/.zshrc`, and start testnet by running `testnet` in your terminal.