Continuous Benchmarking #936
base: master
Conversation
PR Reviewer Guide 🔍
Here are some key observations to aid the review process:
PR Code Suggestions ✨
Explore these optional code suggestions:
Pull Request Overview
This PR introduces continuous benchmarking for performance-critical workflows, including a new test script, documentation updates, sample benchmark data, and a dedicated GitHub Actions workflow. It also removes existing CI workflows in favor of a single continuous benchmarking pipeline.
- Added `test-components.sh` to validate component execution and JSON conversions.
- Generated documentation `docs/documentation/cont-bench.md` with sample benchmark results.
- Introduced `bench.yaml`/`bench.json`, `bench-google.json`, and `.github/workflows/cont-bench.yml` to automate benchmark collection and Google Benchmark conversion.
- Note: an `.env` file with sensitive tokens was added, and multiple legacy workflows were removed.
Reviewed Changes
Copilot reviewed 28 out of 28 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| test-components.sh | Script for testing component execution and JSON/YAML conversion |
| docs/documentation/cont-bench.md | Documentation page for continuous benchmarking |
| bench.yaml / bench.json / bench-google.json | Sample benchmark data |
| .github/workflows/cont-bench.yml | New continuous benchmarking workflow |
| .env | Environment file with sensitive tokens |
| .github/workflows/*.yml (other CI workflows) | Legacy workflows deleted |
Comments suppressed due to low confidence (4)
.github/workflows/cont-bench.yml:64
- The Python script reads 'bench.json' in the root, but the workflow converts and writes to 'pr/bench.json'. Update the path to match where the file is written.
with open('bench.json', 'r') as f:
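A minimal sketch of the suggested fix, assuming the conversion step writes its output to `pr/bench.json` as the review comment states (the surrounding workflow script is not shown in the comment, so the simulated write below is illustrative only):

```python
import json
import os

# Simulate the earlier conversion step's output so the snippet is
# self-contained (in the workflow, this file is produced beforehand).
os.makedirs('pr', exist_ok=True)
with open('pr/bench.json', 'w') as f:
    json.dump({"benchmarks": []}, f)

# Read from 'pr/bench.json', the path the workflow actually writes to,
# instead of 'bench.json' in the repository root.
with open('pr/bench.json', 'r') as f:
    bench = json.load(f)
```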
bench.yaml:5
- Avoid hard-coded absolute paths; use relative paths or environment variables to improve portability.
path: /home/mohammed/Desktop/cont-bench/benchmarks/5eq_rk3_weno3_hllc/case.py
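One hedged way to address this, assuming the benchmark cases live under a `benchmarks/` directory relative to the repository root (directory name inferred from the flagged path):

```yaml
# Repository-relative path instead of a user-specific absolute one.
path: benchmarks/5eq_rk3_weno3_hllc/case.py
```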
.env:1
- The .env file commits personal access tokens, exposing secrets. Remove it from version control and use GitHub Secrets instead.
TOKEN=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
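A hedged sketch of the suggested alternative: store the token as a repository secret and reference it from the workflow rather than committing an `.env` file (the secret name `BENCH_TOKEN` here is hypothetical):

```yaml
# .github/workflows/cont-bench.yml (fragment)
# The token is injected from GitHub Secrets at run time; nothing is committed.
env:
  TOKEN: ${{ secrets.BENCH_TOKEN }}
```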
Posted my GitHub account tokens; I will disable them right now.
You can put back your workflow files. You will want to mimic the setup of the new
User description
Description
Concerning (#462):
This PR is intended to keep track of benchmark results (`./mfc.sh bench`) for performance-critical improvements. Since there is no specific benchmark procedure, the results of the four existing MFC benchmark cases are reported. To ensure standardized performance with no hardware bias, all benchmarking occurs on a GitHub runner until we determine what resources/clusters/allocations/runners to utilize. Once the proof of concept is finalized, the rest should be straightforward.
Debugging info:
Not much besides reviewing the .md pages.
To-dos:
Note to self:
Look into retroactively recording benchmark results for the previous 10-50 base-repo commits to provide valuable historical datapoints.
PR Type
Enhancement
Description
Implement continuous benchmarking with GitHub Action workflow
Remove legacy cluster-specific benchmark scripts
Add Google Benchmark format conversion for MFC results
Create automated performance tracking and documentation
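A hedged sketch of what the Google Benchmark format conversion could look like, assuming MFC results arrive as a simple case-name-to-seconds mapping (the real MFC output structure is richer; the `context`/`benchmarks` layout is Google Benchmark's documented JSON format):

```python
import json


def to_google_benchmark(mfc_results):
    """Convert a {case_name: runtime_seconds} mapping into
    Google Benchmark's JSON layout."""
    return {
        "context": {"executable": "mfc"},
        "benchmarks": [
            {
                "name": case,
                "run_type": "iteration",
                "iterations": 1,
                "real_time": seconds,
                "cpu_time": seconds,
                "time_unit": "s",
            }
            for case, seconds in mfc_results.items()
        ],
    }


# Hypothetical timing for one of the existing MFC benchmark cases.
converted = to_google_benchmark({"5eq_rk3_weno3_hllc": 123.4})
print(json.dumps(converted, indent=2))
```

Emitting this layout lets existing continuous-benchmarking tooling that understands Google Benchmark output consume MFC timings without custom parsing.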
Changes diagram
Changes walkthrough 📝
21 files
Add continuous benchmarking GitHub Action workflow
Remove legacy benchmark workflow
Remove Frontier cluster benchmark script
Remove Frontier cluster build script
Remove Frontier cluster benchmark submission script
Remove Frontier cluster submission script
Remove Frontier cluster test script
Remove Phoenix cluster benchmark script
Remove Phoenix cluster benchmark submission script
Remove Phoenix cluster submission script
Remove Phoenix cluster test script
Remove code cleanliness workflow
Remove coverage check workflow
Remove documentation workflow
Remove formatting workflow
Remove line count workflow
Remove source linting workflow
Remove toolchain linting workflow
Remove PMD source analysis workflow
Remove spell check workflow
Remove test suite workflow
5 files
Add component testing script for benchmarks
Add benchmark results YAML file
Add Google Benchmark format JSON results
Add benchmark results JSON file
Add benchmark results YAML file
1 file
Add environment configuration with GitHub tokens
1 file
Add continuous benchmarking documentation