# Gas Benchmarks

This repository contains scripts to run gas benchmarks across multiple Ethereum execution clients. Follow the instructions below to run the benchmarks locally.

## Prerequisites

Make sure you have the following installed on your system:

- Python 3.10
- Docker
- Docker Compose
- .NET 8.0.x
- `make` (for running `make` commands)

## Setup

1. Clone the repository:

   ```bash
   git clone https://github.com/nethermindeth/gas-benchmarks.git
   cd gas-benchmarks
   ```

2. Install Python dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Prepare Kute dependencies (specific to Nethermind):

   ```bash
   make prepare_tools
   ```

4. Create a results directory:

   ```bash
   mkdir -p results
   ```

## Running the Benchmarks

### Script: Run all

To run the whole pipeline, use the `run.sh` script:

```bash
bash run.sh -t "testPath" -w "warmupFilePath" -c "client1,client2" -r runNumber -i "image1,image2"
```

Example run:

```bash
bash run.sh -t "tests/" -w "warmup/warmup-1000bl-16wi-24tx.txt" -c "nethermind,geth,reth" -r 8
```

Flags:

- `-t`: path to the directory where the tests are located.
- `-w`: path to the warmup file.
- `-c`: comma-separated list of clients to benchmark.
- `-r`: number of benchmark iterations to run (a numeric value).
- `-i`: comma-separated list of Docker images to use, matched positionally to the clients. Use `default` to keep a client's default image.
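The pairing rule between `-c` and `-i` can be sketched in Python. This is a hypothetical helper for illustration only (`build_run_command` is not part of this repository; `run.sh` does the real parsing):

```python
# Hypothetical helper: builds a run.sh invocation, enforcing that each
# client gets exactly one image (or "default" to keep the client's
# default image). Illustration only; run.sh does the real parsing.
def build_run_command(tests, warmup, clients, runs, images=None):
    if images is None:
        images = ["default"] * len(clients)  # keep defaults for every client
    if len(images) != len(clients):
        raise ValueError("provide one image (or 'default') per client")
    return [
        "bash", "run.sh",
        "-t", tests,
        "-w", warmup,
        "-c", ",".join(clients),
        "-r", str(runs),
        "-i", ",".join(images),
    ]

cmd = build_run_command(
    "tests/",
    "warmup/warmup-1000bl-16wi-24tx.txt",
    ["nethermind", "geth", "reth"],
    8,
)
print(" ".join(cmd))
```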

Now you're ready to run the benchmarks locally!

## Populating the PostgreSQL Database with Benchmark Data

After running benchmarks and generating report files, you can populate a PostgreSQL database with the results for further analysis. This process involves two main scripts: `generate_postgres_schema.py` to set up the database table, and `fill_postgres_db.py` to load the data.

### 1. Setting up the Database Schema

The `generate_postgres_schema.py` script creates the necessary table in your PostgreSQL database to store the benchmark data.

Usage:

```bash
python generate_postgres_schema.py \
    --db-host <your_db_host> \
    --db-port <your_db_port> \
    --db-user <your_db_user> \
    --db-name <your_db_name> \
    --table-name <target_table_name> \
    --log-level <DEBUG|INFO|WARNING|ERROR|CRITICAL>
```

- You will be prompted to enter the password for the specified database user.
- `--table-name`: defaults to `benchmark_data`.
- `--log-level`: defaults to `INFO`.

Example:

```bash
python generate_postgres_schema.py \
    --db-host localhost \
    --db-port 5432 \
    --db-user myuser \
    --db-name benchmarks \
    --table-name gas_benchmark_results
```

This will create a table named `gas_benchmark_results` (if it doesn't already exist) in the `benchmarks` database.

### 2. Populating the Database with Benchmark Data

Once the schema is set up, use `fill_postgres_db.py` to parse the benchmark report files (generated by `run.sh` or other means) and insert the data into the PostgreSQL table.

Usage:

```bash
python fill_postgres_db.py \
    --reports-dir <path_to_reports_directory> \
    --db-host <your_db_host> \
    --db-port <your_db_port> \
    --db-user <your_db_user> \
    --db-password <your_db_password> \
    --db-name <your_db_name> \
    --table-name <target_table_name> \
    --log-level <DEBUG|INFO|WARNING|ERROR|CRITICAL>
```

- `--reports-dir`: path to the directory containing the benchmark output files (e.g., `output_*.csv`, `raw_results_*.csv`, and `index.html` or `computer_specs.txt`).
- `--db-password`: the password for the database user.
- `--table-name`: should match the table name used with `generate_postgres_schema.py`. Defaults to `benchmark_data`.
- `--log-level`: defaults to `INFO`.
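The file patterns above suggest how report files might be discovered inside `--reports-dir`. A rough sketch (the function name and return shape are invented for illustration; the actual parsing lives in `fill_postgres_db.py`):

```python
import glob
import os

# Hypothetical discovery of report files inside --reports-dir, based on
# the patterns listed above. fill_postgres_db.py implements the real logic.
def discover_reports(reports_dir):
    return {
        "outputs": sorted(glob.glob(os.path.join(reports_dir, "output_*.csv"))),
        "raw_results": sorted(glob.glob(os.path.join(reports_dir, "raw_results_*.csv"))),
        "specs": [
            p for p in (os.path.join(reports_dir, "index.html"),
                        os.path.join(reports_dir, "computer_specs.txt"))
            if os.path.exists(p)
        ],
    }
```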

Example:

```bash
python fill_postgres_db.py \
    --reports-dir ./results/my_benchmark_run_01 \
    --db-host localhost \
    --db-port 5432 \
    --db-user myuser \
    --db-password "securepassword123" \
    --db-name benchmarks \
    --table-name gas_benchmark_results
```

This script will scan the specified reports directory, parse the client benchmark data and computer specifications, and insert individual run records into the `gas_benchmark_results` table.
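Since each record corresponds to an individual run, the loaded data lends itself to per-client aggregation. A toy sketch of that idea over CSV-shaped rows (the column names `client` and `gas_per_second` are invented for illustration; inspect the table created by `generate_postgres_schema.py` for the real schema):

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Toy aggregation over rows shaped like individual run records.
# Column names are hypothetical; check the generated schema for the real ones.
def mean_per_client(csv_text):
    buckets = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        buckets[row["client"]].append(float(row["gas_per_second"]))
    return {client: mean(values) for client, values in buckets.items()}

sample = "client,gas_per_second\nnethermind,100\nnethermind,120\ngeth,90\n"
print(mean_per_client(sample))  # {'nethermind': 110.0, 'geth': 90.0}
```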

## Continuous Metrics Posting

The `run_and_post_metrics.sh` script runs the benchmarks and posts metrics continuously in an infinite loop. On each iteration it updates the local repository with `git pull`, runs the benchmark tests, populates the PostgreSQL database, and cleans up the reports directory.

Usage:

```bash
./run_and_post_metrics.sh --table-name gas_limit_benchmarks --db-user nethermind --db-host perfnet.core.nethermind.dev --db-password "MyPass" [--warmup warmup/warmup-1000bl-16wi-24tx.txt]
```

Parameters:

- `--table-name`: the database table name where benchmark data will be inserted.
- `--db-user`: the database user.
- `--db-host`: the database host.
- `--db-password`: the database password.
- `--warmup`: (optional) the warmup file to use. Defaults to `warmup/warmup-1000bl-16wi-24tx.txt`.
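The loop itself can be outlined roughly as follows. This is a hypothetical sketch only: the flag layout, paths, and database name are assumptions, and `run_and_post_metrics.sh` contains the real logic and error handling.

```python
import subprocess

# Hypothetical outline of one loop iteration: update the repository,
# run the benchmarks, load results into PostgreSQL, clean up.
# The real implementation is run_and_post_metrics.sh.
def iteration_commands(table, user, host, password,
                       warmup="warmup/warmup-1000bl-16wi-24tx.txt"):
    return [
        ["git", "pull"],
        ["bash", "run.sh", "-t", "tests/", "-w", warmup,
         "-c", "nethermind,geth,reth", "-r", "8"],
        ["python", "fill_postgres_db.py",
         "--reports-dir", "./results",
         "--db-host", host, "--db-user", user, "--db-password", password,
         "--db-name", "benchmarks", "--table-name", table],
        ["rm", "-rf", "./results"],
    ]

def run_forever(table, user, host, password):
    while True:  # infinite loop, as the script runs continuously
        for cmd in iteration_commands(table, user, host, password):
            subprocess.run(cmd, check=True)
```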

Examples:

Using the default warmup file:

```bash
./run_and_post_metrics.sh --table-name gas_limit_benchmarks --db-user nethermind --db-host perfnet.core.nethermind.dev --db-password "MyPass"
```

Using a custom warmup file and running in the background:

```bash
nohup ./run_and_post_metrics.sh --table-name gas_limit_benchmarks --db-user nethermind --db-host perfnet.core.nethermind.dev --db-password "MyPass" --warmup "warmup/custom_warmup.txt" &
```

To prevent creation of `nohup.out` (and save disk space):

```bash
nohup ./run_and_post_metrics.sh --table-name gas_limit_benchmarks --db-user nethermind --db-host perfnet.core.nethermind.dev --db-password "MyPass" --warmup "warmup/custom_warmup.txt" > /dev/null 2>&1 &
```

## About

Gas benchmark research repository.
