This repository contains reproducible artifacts for evaluating Data Processing Unit (DPU) performance and capabilities. All artifacts are packaged as Docker images and uploaded to the GitHub Container Registry (GHCR) with tags of the form `ghcr.io/btreemap/hephaestus:<experiment-name>`. Users are encouraged to run these pre-built images instead of building them locally to ensure consistent results.
The main objective of this repository is to provide standardized, reproducible experiments for evaluating Data Processing Units (DPUs) across various workloads and configurations. These artifacts enable consistent benchmarking and validation of DPU performance claims.
- Docker installed on your machine (a quick check is sketched after this list)
- Basic knowledge of containerization
- Access to DPU hardware (for hardware-specific tests)
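To confirm the Docker prerequisite before pulling any images, a minimal check is sketched below. It assumes only the standard Docker CLI and daemon and does not require DPU hardware; the GHCR pull at the end assumes the published tags are publicly readable.

```bash
# Verify the Docker CLI is installed and the daemon is reachable.
docker --version
docker info > /dev/null && echo "Docker daemon is reachable"

# Optional: confirm you can pull one of the pre-built images from GHCR.
docker pull ghcr.io/btreemap/hephaestus:network-offload-benchmark
```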
All experiment images are built and uploaded to the GitHub Container Registry (GHCR) with tags of the form `ghcr.io/btreemap/hephaestus:<experiment-name>`.
You can pull and run the images directly:

```bash
docker run ghcr.io/btreemap/hephaestus:<experiment-name>
```

For example, to run the network offload benchmark:

```bash
docker run ghcr.io/btreemap/hephaestus:network-offload-benchmark
```
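Hardware-specific tests typically need access to the host network stack and the DPU's device nodes. The flags below are a hedged sketch of how such access might be granted; the exact device path, and whether host networking or additional privileges are required, depend on your DPU and are not specified by this repository.

```bash
# Sketch (assumptions): expose host networking and a DPU device node to the container.
# Replace /dev/<dpu-device> with the device path for your hardware.
docker run --rm \
  --network host \
  --device /dev/<dpu-device> \
  ghcr.io/btreemap/hephaestus:network-offload-benchmark
```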
You can also use the images in your `docker-compose.yml` file:
```yaml
services:
  dpu-benchmark:
    image: ghcr.io/btreemap/hephaestus:network-offload-benchmark
    # Additional configuration as needed
```
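Assuming a compose file containing the service above, the benchmark can then be started and torn down with the standard Compose commands:

```bash
# Start the benchmark service defined in docker-compose.yml.
docker compose up dpu-benchmark

# Remove the container when finished.
docker compose down
```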
This repository includes reproducible artifacts for the following DPU experiments:
- Network Offload Benchmarks: Evaluate network processing performance when offloaded to DPUs
- Security Function Benchmarks: Measure performance of security functions (encryption, firewall, etc.)
- Storage Acceleration Tests: Evaluate NVMe-oF and storage processing performance
- CPU Offload Measurements: Quantify host CPU savings from DPU offloading
Each artifact contains detailed documentation on its methodology, expected results, and configuration parameters.
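To run several experiments back to back and keep their output for later comparison, one might loop over the image tags as sketched below. Only `network-offload-benchmark` appears verbatim above; the other tag names are hypothetical placeholders and should be replaced with the actual tags published on GHCR.

```bash
# Hypothetical tag list: substitute the real experiment tags from GHCR.
EXPERIMENTS="network-offload-benchmark security-function-benchmark storage-acceleration-test cpu-offload-measurement"

for exp in $EXPERIMENTS; do
  echo "Running $exp ..."
  # Run each experiment and keep its console output for later comparison.
  docker run --rm ghcr.io/btreemap/hephaestus:"$exp" | tee "results-$exp.log"
done
```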
For consistent results:
- Use the pre-built images with the exact version tags (a digest-pinning sketch follows this list)
- Follow the hardware setup instructions included in each artifact
- Run experiments with the provided scripts to ensure methodology consistency
- Compare your results with the reference results included in each artifact
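One way to make the "exact version" requirement concrete is to resolve a tag to its immutable digest and run by digest, so that later re-runs use a byte-identical image. This is a general Docker technique sketched here, not a procedure prescribed by the artifacts themselves.

```bash
# Pull the tag, then resolve it to an immutable digest.
docker pull ghcr.io/btreemap/hephaestus:network-offload-benchmark
DIGEST=$(docker image inspect --format '{{index .RepoDigests 0}}' \
  ghcr.io/btreemap/hephaestus:network-offload-benchmark)

# Record the digest alongside your results, then run by digest for exact reproducibility.
echo "$DIGEST"
docker run --rm "$DIGEST"
```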
Contributions that improve reproducibility or add new experiments are welcome. Please ensure that any contribution maintains the same rigorous standard of reproducibility.
This project is licensed under the MIT License.
- Thanks to all researchers and engineers who contributed benchmark methodologies
- Special appreciation to the open-source community for providing tools that made these artifacts possible