
Antelope Token API


Token information from the Antelope blockchains, powered by Substreams

Swagger API

Usage

| Method | Path | Query parameters (* = required) | Description |
| --- | --- | --- | --- |
| GET `text/html` | `/` | - | Swagger API playground |
| GET `application/json` | `/balance` | `account`*, `contract`, `symcode`, `limit`, `page` | Balances of an account |
| GET `application/json` | `/balance/historical` | `account`*, `block_num`, `contract`, `symcode`, `limit`, `page` | Historical token balances |
| GET `application/json` | `/head` | `limit`, `page` | Head block information |
| GET `application/json` | `/holders` | `contract`*, `symcode`*, `limit`, `page` | List of holders of a token |
| GET `application/json` | `/supply` | `block_num`, `issuer`, `contract`*, `symcode`*, `limit`, `page` | Total supply for a token |
| GET `application/json` | `/tokens` | `limit`, `page` | List of available tokens |
| GET `application/json` | `/transfers` | `block_range`, `contract`*, `symcode`*, `limit`, `page` | All transfers related to a token |
| GET `application/json` | `/transfers/account` | `account`*, `block_range`, `from`, `to`, `contract`, `symcode`, `limit`, `page` | All transfers related to an account |
| GET `application/json` | `/transfers/id` | `trx_id`*, `limit`, `page` | Specific transfer related to a token |
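
For example, fetching the balances of an account is a single GET request with query parameters. The snippet below is a minimal TypeScript sketch (runnable with Bun); the base URL and the account, contract and symcode values are placeholders, not values taken from this repository.

```ts
// Minimal sketch: list the token balances of an account.
// The base URL, account, contract and symcode values are placeholders.
const base = "http://localhost:8080";

const params = new URLSearchParams({
  account: "eosio",        // * required
  contract: "eosio.token", // optional: restrict to one token contract
  symcode: "EOS",          // optional: restrict to one symbol
  limit: "10",
  page: "1",
});

const res = await fetch(`${base}/balance?${params}`);
if (!res.ok) throw new Error(`HTTP ${res.status}`);
console.log(await res.json());
```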

Docs

| Method | Path | Description |
| --- | --- | --- |
| GET `application/json` | `/openapi` | OpenAPI specification |
| GET `application/json` | `/version` | API version and Git short commit hash |

Monitoring

| Method | Path | Description |
| --- | --- | --- |
| GET `text/plain` | `/health` | Checks database connection |
| GET `text/plain` | `/metrics` | Prometheus metrics |
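
Both monitoring endpoints return plain text, so a lightweight probe only needs to look at the status code and body. A minimal sketch, assuming the API runs on the default localhost:8080:

```ts
// Lightweight probe against the monitoring endpoints.
// Assumes the API is reachable on the default localhost:8080.
const base = "http://localhost:8080";

for (const path of ["/health", "/metrics"]) {
  const res = await fetch(`${base}${path}`);
  const body = await res.text();
  console.log(path, res.status, body.slice(0, 120)); // first bytes of the text/plain body
}
```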

X-Api-Key

Use the Variables tab at the bottom to add your API key:

{
  "X-Api-Key": "changeme"
}
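
When calling the API programmatically instead of through the playground, the same key is presumably sent as a regular HTTP request header of that name. A minimal sketch; the key value and the `/head` endpoint are placeholders:

```ts
// Sketch: send the API key as the X-Api-Key request header (assumption based
// on the header name). The key value and the endpoint are placeholders.
const res = await fetch("http://localhost:8080/head", {
  headers: { "X-Api-Key": "changeme" },
});
console.log(await res.json());
```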

Additional notes

  • For the block_range parameter in transfers, you can pass either a single integer value (lower bound) or an array of two values (inclusive range).
  • Use the from and to fields on account transfers to further filter the results (i.e. incoming or outgoing transfers from/to another account).
  • Don't forget to request the meta fields in the response to get access to pagination and statistics! See the example below.
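
Putting these notes together, the sketch below queries /transfers/account for a block range with from/to filters and then reads the meta section of the response. The account names, the block numbers, the query-string encoding of the block_range array and the exact response shape are illustrative assumptions:

```ts
// Sketch: transfers sent by `alice` to `bob` within an inclusive block range,
// then the `meta` section for pagination/statistics. Account names, block
// numbers, the encoding of block_range and the response shape are assumptions.
const params = new URLSearchParams({
  account: "alice",
  from: "alice", // only transfers sent by this account
  to: "bob",     // ...and received by this account
  limit: "50",
  page: "1",
});
// A single value is treated as the lower bound; two values form an inclusive range.
params.append("block_range", "300000000");
params.append("block_range", "300001000");

const res = await fetch(`http://localhost:8080/transfers/account?${params}`);
const body = await res.json();
console.log(body.meta); // pagination and statistics
console.log(body.data); // the transfers themselves (field name assumed)
```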

Requirements

API stack architecture

Token API architecture diagram

Setting up the database backend (ClickHouse)

Without a cluster

Example of how to set up the ClickHouse backend for sinking EOS data.

  1. Start the ClickHouse server
     clickhouse server
  2. Create the token database
     echo "CREATE DATABASE eos_tokens_v1" | clickhouse client -h <host> --port 9000 -d <database> -u <user> --password <password>
  3. Run the create_schema.sh script
     ./create_schema.sh -o /tmp/schema.sql
  4. Execute the schema
     cat /tmp/schema.sql | clickhouse client -h <host> --port 9000 -d <database> -u <user> --password <password>
  5. Run the sink
     substreams-sink-sql run clickhouse://<username>:<password>@<host>:9000/eos_tokens_v1 \
       https://github.com/pinax-network/substreams-antelope-tokens/releases/download/v0.4.0/antelope-tokens-v0.4.0.spkg `#Substreams package` \
       -e eos.substreams.pinax.network:443 `#Substreams endpoint` \
       1: `#Block range <start>:<end>` \
       --final-blocks-only --undo-buffer-size 1 --on-module-hash-mismatch=warn --batch-block-flush-interval 100 --development-mode `#Additional flags`
  6. Start the API (a quick check follows after this list)
     # Will be available on localhost:8080 by default
     antelope-token-api --host <host> --database eos_tokens_v1 --username <username> --password <password> --verbose
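
Once the sink has written some data and the API is running, the documented /head endpoint is a quick end-to-end check. A minimal sketch, assuming the default localhost:8080:

```ts
// Quick end-to-end check: /head should return head block information once the
// sink has populated the eos_tokens_v1 database. Assumes the API default address.
const res = await fetch("http://localhost:8080/head");
if (!res.ok) throw new Error(`API returned HTTP ${res.status}`);
console.log(await res.json());
```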

With a cluster

If you run ClickHouse in a cluster, change steps 2 and 3:

  2. Create the token database
     echo "CREATE DATABASE eos_tokens_v1 ON CLUSTER <cluster>" | clickhouse client -h <host> --port 9000 -d <database> -u <user> --password <password>
  3. Run the create_schema.sh script
     ./create_schema.sh -o /tmp/schema.sql -c <cluster>

Warning

Linux x86 only

$ wget https://github.com/pinax-network/antelope-token-api/releases/download/v4.0.0/antelope-token-api
$ chmod +x ./antelope-token-api
$ ./antelope-token-api --help                                                                                                       
Usage: antelope-token-api [options]

Token balances, supply and transfers from the Antelope blockchains

Options:
  -V, --version            output the version number
  -p, --port <number>      HTTP port on which to attach the API (default: "8080", env: PORT)
  --hostname <string>      Server listen on HTTP hostname (default: "localhost", env: HOSTNAME)
  --host <string>          Database HTTP hostname (default: "http://localhost:8123", env: HOST)
  --database <string>      The database to use inside ClickHouse (default: "default", env: DATABASE)
  --username <string>      Database user (default: "default", env: USERNAME)
  --password <string>      Password associated with the specified username (default: "", env: PASSWORD)
  --max-limit <number>     Maximum LIMIT queries (default: 10000, env: MAX_LIMIT)
  -v, --verbose <boolean>  Enable verbose logging (choices: "true", "false", default: false, env: VERBOSE)
  -h, --help               display help for command

.env Environment variables

# API Server
PORT=8080
HOSTNAME=localhost

# ClickHouse Database
HOST=http://127.0.0.1:8123
DATABASE=default
USERNAME=default
PASSWORD=
MAX_LIMIT=500

# Logging
VERBOSE=true

Docker environment

  • Pull from the GitHub Container Registry

    For the latest tagged release:
    docker pull ghcr.io/pinax-network/antelope-token-api:latest

    For the head of the main branch:
    docker pull ghcr.io/pinax-network/antelope-token-api:develop

  • Build from source
    docker build -t antelope-token-api .

  • Run with a .env file
    docker run -it --rm --env-file .env ghcr.io/pinax-network/antelope-token-api

Contributing

See CONTRIBUTING.md.

Quick start

Install Bun

bun install
bun dev

Tests

bun lint
bun test
