A modular, pluggable pipeline to detect energy regressions across Git commits, branches, or tags. Ideal for research and diagnostics in performance-aware software engineering.
⚡ energytrackr - Energy Measurement Pipeline
- 🔌 Modular architecture: add/remove stages easily
- 🔁 Batch & repeat execution: ensures statistical significance
- 📈 Energy regression detection: based on Intel RAPL or perf
- 📦 Multi-language support: via custom build/test stages
- 📊 Automated plots: violin charts + change point detection
- 🛠️ CLI-based: easy to use and integrate into scripts
[main.py]
    ↓
[Load Config & Repo]
    ↓
[Pre-Stages]        → Check setup
[Pre-Test Stages]   → Checkout, Build, Prepare
[Batch Stages]      → Measure energy across N repetitions
    ↓
[Results: CSV + PNG]
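Conceptually, every stage implements one small interface and receives a shared context that flows through the pipeline. A minimal sketch of that idea (names are illustrative, not the project's actual API):

from abc import ABC, abstractmethod

class PipelineStage(ABC):
    """Base class each stage implements (illustrative)."""

    @abstractmethod
    def run(self, context: dict) -> None:
        """Read inputs from and write results into the shared context."""

def run_pipeline(stages: list[PipelineStage], context: dict) -> None:
    # Stages execute in order; each one mutates the shared context.
    for stage in stages:
        stage.run(context)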
Your pipeline is controlled by a config.json file:
{
  "repo": {
    "url": "https://github.com/example/project.git",
    "branch": "main"
  },
  "execution_plan": {
    "granularity": "commits",
    "num_commits": 10,
    "num_runs": 1,
    "num_repeats": 30,
    "randomize_tasks": true
  },
  "test_command": "pytest",
  "setup_commands": ["pip install -r requirements.txt"]
}
📖 See the docs on configuration for the full schema.
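Since the pipeline is built on pydantic, the config file can be validated up front. The models below are a sketch inferred from the example above, not the project's actual schema:

import json
from pydantic import BaseModel

class RepoConfig(BaseModel):
    url: str
    branch: str = "main"

class ExecutionPlan(BaseModel):
    granularity: str = "commits"
    num_commits: int = 10
    num_runs: int = 1
    num_repeats: int = 30
    randomize_tasks: bool = True

class PipelineConfig(BaseModel):
    repo: RepoConfig
    execution_plan: ExecutionPlan
    test_command: str
    setup_commands: list[str] = []

with open("config.json") as f:
    config = PipelineConfig(**json.load(f))  # raises ValidationError on bad input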
git clone https://github.com/yourusername/energy-pipeline.git
./setup.sh
Prepare your system for accurate measurements:
sudo ./system_setup.sh first-setup
reboot
sudo ./system_setup.sh setup
For more details: Installation Guide
Run a stability check (recommended before measurement):
energytrackr stability-test
Measure energy across commits:
energytrackr measure --config path/to/config.json
Sort CSV by Git history:
energytrackr sort unsorted.csv /repo/path sorted.csv
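Sorting follows the repository's commit order rather than the CSV's row order. The idea, sketched with GitPython (an illustration, not the actual implementation; the "commit" column name is assumed from the output schema below):

import csv
from git import Repo

def sort_by_history(unsorted_csv: str, repo_path: str, sorted_csv: str) -> None:
    # Map each commit hash to its position in history, oldest first.
    repo = Repo(repo_path)
    order = {c.hexsha: i for i, c in enumerate(repo.iter_commits(reverse=True))}

    with open(unsorted_csv, newline="") as f:
        rows = list(csv.DictReader(f))
    rows.sort(key=lambda r: order.get(r["commit"], len(order)))  # unknowns last

    with open(sorted_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)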
Generate plots:
energytrackr plot sorted.csv
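The violin charts are per-commit energy distributions; stripped of styling, the plot boils down to something like this matplotlib sketch (column names and units are assumptions based on the output schema below):

import csv
from collections import defaultdict
import matplotlib.pyplot as plt

# Group package-energy samples by commit, preserving the sorted order.
samples: dict[str, list[float]] = defaultdict(list)
with open("sorted.csv", newline="") as f:
    for row in csv.DictReader(f):
        samples[row["commit"]].append(float(row["energy-pkg"]))

fig, ax = plt.subplots(figsize=(12, 4))
ax.violinplot(list(samples.values()), showmedians=True)
ax.set_xticks(range(1, len(samples) + 1))
ax.set_xticklabels([c[:7] for c in samples], rotation=90)
ax.set_ylabel("energy-pkg (J, assumed)")
fig.tight_layout()
fig.savefig("energy_violin.png")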
Want to support another language or measurement tool? Just add a Python file to modules/, e.g.:
class MyStage(PipelineStage):
    def run(self, context):
        print("Running custom stage")
Expose it via get_stage() and list it in your config:
"modules_enabled": ["my_stage.py"]
- CSV: [commit, energy-pkg, energy-core, energy-gpu]
- PNG plots with:
  - Violin distribution per commit
  - Median & error bars
  - Normality testing
  - Change point markers
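The change point markers come from ruptures. Detecting a shift in per-commit energy looks roughly like this (a sketch: the model and penalty are assumptions, and the numbers are toy data):

import numpy as np
import ruptures as rpt

# One aggregated energy value per commit, in Git order (toy data).
medians = np.array([120.4, 119.8, 121.0, 135.2, 134.9, 135.6])

algo = rpt.Pelt(model="rbf").fit(medians.reshape(-1, 1))
breakpoints = algo.predict(pen=3)  # indices where each regime ends
print(breakpoints)  # e.g. [3, 6]: a shift after the third commit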
For developers:
- Create and initialize a virtual environment with all necessary dependencies:
make install-dev
This will:
- Create the virtual environment in .venv/
- Install runtime, test, and documentation dependencies
- Install developer tools like pre-commit, coverage, pylint, pyright, etc.

ℹ️ Requires make and python>=3.13.
- Install Git hooks (done once):
pre-commit install
- Run all hooks manually:
make precommit
Hooks include formatting (ruff format), linting (ruff, pylint), YAML and whitespace checks, and test + coverage validation.
Run tests:
make test
Run tests with coverage (fails if coverage < 80%):
make coverage
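Under the hood this is the usual pytest-cov gate; the Makefile recipe is presumably something along these lines (package name and exact flags are assumptions):

pytest --cov=energytrackr --cov-report=term-missing --cov-fail-under=80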
Run all linters (Ruff, Pylint, Pyright):
make lint
Run full quality gate (format + lint + tests + coverage):
make check
We use Sphinx for documentation, and it's hosted on ReadTheDocs.
To build the docs locally:
make docs
The HTML output will be in docs/_build/html.
To clean generated files:
make clean-docs
To maximize developer experience, we recommend these VSCode extensions:
- Python (core support)
- Ruff (lint + format)
- Pylance (type checking)
- Pylint
- Python Debugger
- Docker
- Git Extension Pack
Configure VSCode to use .venv/ as the Python interpreter.
- We require at least 80% test coverage.
- All code must pass ruff formatting and linting.
- All tests must pass before pushing.
- All code must have strict typing (pyright).
- Use pre-commit to catch issues before they reach CI.
make install-dev # Full dev environment setup
make format # Auto-format code with Ruff
make lint # Run Ruff, Pylint, Pyright
make test # Run pytest
make coverage # Run tests + enforce coverage threshold
make check        # Full pipeline: format + lint + tests + coverage
make precommit # Manually run pre-commit hooks
make docs # Build documentation with Sphinx
make clean-docs # Remove generated doc files
- 📖 Full Documentation (Sphinx)
- 🧱 Pipeline Architecture
- ⚙️ Usage Guide
- 🧩 Writing Custom Stages
- Don't erase produced CSV files; name them with a timestamp and the project name
- Automatically detect the Java version in the pom and export it, so tests don't fail
- Build each commit only once: copy the project once per batch entry, check out and compile a commit in each copy, then the 30 runs per commit only need to run the tests; copying and compiling can happen in parallel and with unlocked frequencies
- Add documentation, probably with Sphinx
- Save run conditions (temperature, CPU governor, number of CPU cycles, etc.); perf could be used for part of this and fastfetch for the rest. Also save the config file. Place all this metadata either in the CSV or in a separate file
- Display run conditions on the graph (e.g. temperature)
- From measured CPU cycles, use the CPU power usage provided by the manufacturer to draw a second line on the graph, comparing energy consumption from RAPL with theoretical estimates
- Run only the same tests between commits
- Do a warm-up run before the actual measurement
- Add a cooldown between measurements, 1 second by default
- Check that the PC won't go into sleep mode during the test
- Check that most background processes are disabled
- Unload drivers/modules that could interfere with the measurement
- Add tests with code coverage
- Add GitHub Actions
- Inspired by energy-efficient software engineering research
- Powered by: GitPython, perf, tqdm, matplotlib, ruptures, pydantic
This project is licensed under the MIT License.