This Jupyter-based toolset supports energy analysis, modeling, and prediction for network experiment testbeds. It utilizes RO-Crate metadata and CSV energy logs to generate rich visualizations, create machine learning models, and simulate power draw under configurable load conditions.
The evaluation notebook analyzes raw energy CSV files for multiple nodes and runs.
- Loads and processes CSV energy data dynamically.
- Extracts node-level metadata from `ro-crate-metadata.json`.
- Provides in-depth visualizations:
  - Power over time
  - Cumulative energy usage
  - Energy rate (mW/s)
  - Power vs. CPU load (if CPU data available)
  - Current and voltage trends
  - Per-node energy bar charts
- Generates formatted metadata summaries and clickable topology links.
Input: `energy/` folder & `ro-crate-metadata.json`
Output: Visual plots + metadata tables
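
A minimal sketch of the kind of loading and plotting the notebook performs, assuming each per-node CSV has `timestamp` and `power_mW` columns (the real column names may differ):

```python
# Sketch only: assumes each per-node CSV has "timestamp" and "power_mW" columns;
# adjust the column names to the actual measurement format.
from pathlib import Path

import matplotlib.pyplot as plt
import pandas as pd

energy_dir = Path("results/<timestamped_result_folder>/energy")  # placeholder path

fig, (ax_power, ax_energy) = plt.subplots(2, 1, sharex=True, figsize=(10, 6))

for csv_file in sorted(energy_dir.glob("*.csv")):
    df = pd.read_csv(csv_file, parse_dates=["timestamp"]).sort_values("timestamp")

    # Power over time
    ax_power.plot(df["timestamp"], df["power_mW"], label=csv_file.stem)

    # Cumulative energy via trapezoidal integration: mW * s = mJ
    dt_s = df["timestamp"].diff().dt.total_seconds()
    energy_mj = (df["power_mW"].rolling(2).mean() * dt_s).cumsum()
    ax_energy.plot(df["timestamp"], energy_mj / 1000.0, label=csv_file.stem)

ax_power.set_ylabel("Power (mW)")
ax_energy.set_ylabel("Cumulative energy (J)")
ax_energy.set_xlabel("Time")
ax_power.legend()
plt.tight_layout()
plt.show()
```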
Example energy consumption plot:
The energy_model notebook fits regression models (linear/polynomial) to stress-test results.
- Uses stress-run outputs from previous testbed executions.
- Fits two model types:
  - Linear (with or without idle intercept)
  - Polynomial (quadratic)
- Stores each trained model as a `.json` file for later prediction.
Input: CPU-only energy runs (per node)
Output: Model file `cpu_model_<node>.json` stored in `data/cpu_models/`
Example `cpu_model.json`:
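
A minimal sketch of how such a file could be produced (the JSON field names below are illustrative assumptions, not necessarily the notebook's actual schema):

```python
# Sketch only: fits a linear and a quadratic model to (CPU load, power) samples
# and writes them to a JSON file. Field names are illustrative assumptions.
import json
from pathlib import Path

import numpy as np

# Hypothetical stress-run samples: CPU load in %, measured power in mW
cpu_load = np.array([0, 25, 50, 75, 100], dtype=float)
power_mw = np.array([3200, 4100, 5000, 5850, 6700], dtype=float)

# Linear fit: power = idle + slope * load
slope, idle = np.polyfit(cpu_load, power_mw, deg=1)

# Quadratic fit: power = a * load^2 + b * load + c
a, b, c = np.polyfit(cpu_load, power_mw, deg=2)

model = {
    "node": "node1",  # hypothetical node name
    "linear": {"idle_mw": idle, "slope_mw_per_pct": slope},
    "polynomial": {"coefficients": [a, b, c]},  # highest degree first
}

out_dir = Path("data/cpu_models")
out_dir.mkdir(parents=True, exist_ok=True)
(out_dir / "cpu_model_node1.json").write_text(json.dumps(model, indent=2))
```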
The prediction notebook estimates server power draw using trained models and user-defined configurations.
- Select multiple nodes to simulate total or individual power draw.
- For each node:
  - Choose active NICs
  - Set number of active CPU cores
  - Select target CPU load (0–100%)
- Visual prediction modes:
  - Per-node stacked plots
  - System-wide stacked summary
- Fully interactive and updates live on input change.
Input: CPU model files (`cpu_model_<node>.json`)
Output: Live power prediction visualizations
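
A non-interactive sketch of the underlying calculation (the notebook itself uses live widgets; the model field names and the fixed per-NIC offset below are assumptions):

```python
# Sketch only: predicts per-node power from a stored CPU model plus a fixed
# per-NIC offset. Field names and the NIC offset value are assumptions.
import json

def predict_node_power(model_path, cpu_load_pct, active_cores, total_cores,
                       active_nics, nic_power_mw=500.0):
    """Estimate power draw (mW) for one node under a given configuration."""
    with open(model_path) as f:
        model = json.load(f)

    linear = model["linear"]
    # Scale the load-dependent part by the fraction of active cores
    effective_load = cpu_load_pct * active_cores / total_cores
    cpu_power = linear["idle_mw"] + linear["slope_mw_per_pct"] * effective_load

    return cpu_power + active_nics * nic_power_mw

total = sum(
    predict_node_power(f"data/cpu_models/cpu_model_{node}.json",
                       cpu_load_pct=60, active_cores=8, total_cores=16, active_nics=2)
    for node in ["node1", "node2"]  # hypothetical node names
)
print(f"Predicted system power: {total / 1000:.1f} W")
```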
Example prediction plot:
The api_integration notebook extracts metadata from a local RO-Crate and publishes it to the GreenDIGIT catalogue using the gCat API.
- Extracts title, description, keywords, and authors from `ro-crate-metadata.json`
- Prompts user to upload the zipped RO-Crate to their D4Science Workspace and input the public link
- Builds and submits a package_create-compatible metadata entry
- Automatically detects and prints the final dataset URL in the catalogue
Input: RO-Crate folder (e.g. `./result_folder_examples/...`)
Output: Published dataset visible at `https://data.d4science.org/ctlg/GreenDIGIT/...`
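
A rough sketch of the metadata-extraction step, assuming a standard RO-Crate layout where the root dataset entity has `@id` `"./"` (the payload keys mirror CKAN's `package_create` conventions and may differ from the notebook's exact mapping; the actual gCat submission call is omitted):

```python
# Sketch only: pulls basic fields from ro-crate-metadata.json and assembles a
# package_create-style payload. Key names follow CKAN conventions and may not
# match the notebook's exact mapping; the gCat submission itself is omitted.
import json
from pathlib import Path

crate_dir = Path("./result_folder_examples/my_crate")  # hypothetical crate folder
metadata = json.loads((crate_dir / "ro-crate-metadata.json").read_text())

# The root dataset entity is conventionally the one with @id "./" in the @graph
root = next(e for e in metadata["@graph"] if e.get("@id") == "./")

def as_list(value):
    """RO-Crate fields may be a single value or a list; normalize to a list."""
    if value is None:
        return []
    return value if isinstance(value, list) else [value]

authors = [a.get("name", "") if isinstance(a, dict) else str(a)
           for a in as_list(root.get("author"))]

package = {
    "name": root.get("name", "untitled-dataset").lower().replace(" ", "-"),
    "title": root.get("name", ""),
    "notes": root.get("description", ""),
    "tags": [{"name": str(kw)} for kw in as_list(root.get("keywords"))],
    "author": ", ".join(authors),
    "resources": [{
        "url": "<public D4Science Workspace link to the zipped RO-Crate>",  # user-provided
        "name": "RO-Crate archive",
        "format": "ZIP",
    }],
}
print(json.dumps(package, indent=2))
```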
Install the required Python packages:

pip install pandas matplotlib seaborn
Or use the virtual environment setup:
python3 -m venv .venv_energy
source .venv_energy/bin/activate
pip install -r requirements.txt
The notebooks expect the following folder layout:

results/
└── <timestamped_result_folder>/
├── energy/ # CSV measurements per node
├── ro-crate-metadata.json # RO-Crate metadata
└── config/ # Optional extra info (e.g., variable sets)
data/
└── cpu_models/ # Fitted model files for prediction
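
For example, a notebook cell might locate the newest result folder like this (a sketch based on the layout above):

```python
# Sketch only: pick the newest timestamped result folder and list its inputs.
from pathlib import Path

result_dirs = sorted(p for p in Path("results").iterdir() if p.is_dir())
latest = result_dirs[-1]  # newest, assuming lexicographically sortable timestamp names

csv_files = sorted((latest / "energy").glob("*.csv"))
metadata_file = latest / "ro-crate-metadata.json"
print(f"Using {latest.name}: {len(csv_files)} energy logs, metadata at {metadata_file}")
```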
| Notebook | Input | Output |
|---|---|---|
| evaluation | CSV + RO-Crate metadata | Energy trend plots & hardware summary |
| energy_model | Energy runs from stress tests | Fitted model `.json` files |
| prediction | Model files + interactive inputs | Live power prediction per scenario |
| api_integration | Local RO-Crate + public ZIP URL | Dataset published to GreenDIGIT catalogue |
Jupyter notebooks track metadata such as `execution_count`, which can cause unnecessary changes in Git. To prevent Git from detecting these changes after each run, follow these steps:
`nbstripout` removes unnecessary metadata before committing:
pip install nbstripout
Run the following command inside your Git repository:
nbstripout --install
Check that `nbstripout` is active:
nbstripout --status
Add the following rule to `.gitattributes` in your repository:
*.ipynb filter=jupyter
Then set up the Git filter:
git config filter.jupyter.clean nbstripout
git config filter.jupyter.smudge cat
Apply the filter to existing files (run this again after each notebook execution):
git add --renormalize .
To automate this, use pre-commit hooks.
If you prefer a manual approach, you can clear metadata using `nbconvert` before committing:
jupyter nbconvert --ClearMetadataPreprocessor.enabled=True --to notebook --inplace my_notebook.ipynb
This ensures that execution counts and other unnecessary metadata do not clutter your Git history.