[CI][Benchmark] Add additional precommit testing for changes modifying benchmarking scripts #19311

Draft: wants to merge 6 commits into sycl

Changes from all commits
2 changes: 2 additions & 0 deletions .github/workflows/sycl-detect-changes.yml
@@ -64,6 +64,8 @@ jobs:
             - devops/scripts/install_drivers.sh
           devigccfg:
             - devops/dependencies-igc-dev.json
+          benchmarks:
+            - 'devops/scripts/benchmarks/**'
           perf-tests:
             - sycl/test-e2e/PerformanceTests/**
           esimd:
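For context on how these entries behave: each filter name maps to a list of globs matched against the files a PR changes, and the names of matching filters end up in the `filters` output that downstream jobs query with `contains()`. A minimal sketch of that matching semantics in Python (illustrative only; `FILTERS` and `matched_filters` are hypothetical names, and the real matching is performed by the paths-filter action configured in this workflow):

```python
# Illustrative sketch of the path-filter matching; the real work is done by
# the paths-filter action in sycl-detect-changes.yml.
from fnmatch import fnmatch

FILTERS = {
    "benchmarks": ["devops/scripts/benchmarks/**"],
    "perf-tests": ["sycl/test-e2e/PerformanceTests/**"],
}

def matched_filters(changed_files):
    # fnmatch's '*' matches across '/' separators, which approximates the
    # '**' recursive-glob semantics used by the workflow patterns.
    return [
        name
        for name, globs in FILTERS.items()
        if any(fnmatch(path, glob) for path in changed_files for glob in globs)
    ]

print(matched_filters(["devops/scripts/benchmarks/compare.py"]))
# -> ['benchmarks']
```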
22 changes: 22 additions & 0 deletions .github/workflows/sycl-linux-precommit.yml
@@ -165,6 +165,28 @@ jobs:
       skip_run: ${{matrix.use_igc_dev && contains(github.event.pull_request.labels.*.name, 'ci-no-devigc') || 'false'}}
       env: ${{ contains(needs.detect_changes.outputs.filters, 'esimd') && '{}' || '{"LIT_FILTER_OUT":"ESIMD/"}' }}

+  test_benchmark_scripts:
+    needs: [build, detect_changes]
+    if: |
+      always() && !cancelled()
+      && needs.build.outputs.build_conclusion == 'success'
+      && contains(needs.detect_changes.outputs.filters, 'benchmarks')
+    uses: ./.github/workflows/sycl-linux-run-tests.yml
+    with:
+      name: Benchmark suite precommit testing
+      runner: '["PVC_PERF"]'
+      image: ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest
+      image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
+      target_devices: 'level_zero:gpu'
+      tests_selector: benchmarks
+      benchmark_upload_results: false
+      benchmark_preset: 'Minimal'
+      benchmark_dry_run: true
+      repo_ref: ${{ github.sha }}
+      sycl_toolchain_artifact: sycl_linux_default
+      sycl_toolchain_archive: ${{ needs.build.outputs.artifact_archive_name }}
+      sycl_toolchain_decompress_command: ${{ needs.build.outputs.artifact_decompress_command }}
+
   test-perf:
     needs: [build, detect_changes]
     if: |
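A note on the gating: `always() && !cancelled()` forces the `if:` to be evaluated even when sibling jobs were skipped, and the remaining clauses restrict the job to runs where the toolchain build succeeded and the `benchmarks` filter fired. Roughly, as a Python predicate (function and parameter names are illustrative):

```python
def should_run_benchmark_precommit(cancelled: bool,
                                   build_conclusion: str,
                                   filters: str) -> bool:
    # Mirrors: always() && !cancelled()
    #   && needs.build.outputs.build_conclusion == 'success'
    #   && contains(needs.detect_changes.outputs.filters, 'benchmarks')
    return (not cancelled
            and build_conclusion == "success"
            and "benchmarks" in filters)

assert should_run_benchmark_precommit(False, "success", '["benchmarks"]')
assert not should_run_benchmark_precommit(False, "success", '["esimd"]')
```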
7 changes: 7 additions & 0 deletions .github/workflows/sycl-linux-run-tests.yml
@@ -132,6 +132,12 @@ on:
         type: string
         default: 'Minimal'
         required: False
+      benchmark_dry_run:
+        description: |
+          Whether or not to fail the workflow upon a regression.
+        type: string
+        default: 'false'
+        required: False

   workflow_dispatch:
     inputs:

@@ -335,6 +341,7 @@ jobs:
           upload_results: ${{ inputs.benchmark_upload_results }}
           save_name: ${{ inputs.benchmark_save_name }}
           preset: ${{ inputs.benchmark_preset }}
+          dry_run: ${{ inputs.benchmark_dry_run }}
         env:
           RUNNER_TAG: ${{ inputs.runner }}
           GITHUB_TOKEN: ${{ secrets.LLVM_SYCL_BENCHMARK_TOKEN }}
5 changes: 4 additions & 1 deletion devops/actions/run-tests/benchmark/action.yml
@@ -25,6 +25,9 @@ inputs:
   preset:
     type: string
     required: True
+  dry_run:
+    type: string
+    required: False

 runs:
   using: "composite"

@@ -162,7 +165,7 @@ runs:
           --name "$SAVE_NAME" \
           --compare-file "./llvm-ci-perf-results/results/${SAVE_NAME}_${SAVE_TIMESTAMP}.json" \
           --results-dir "./llvm-ci-perf-results/results/" \
-          --regression-filter '^[a-z_]+_sycl '
+          --regression-filter '^[a-z_]+_sycl ' ${{ inputs.dry_run == 'true' && '--dry-run' }}
       echo "-----"

     - name: Cache changes to benchmark folder for archival purposes
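On the modified line: `${{ inputs.dry_run == 'true' && '--dry-run' }}` is intended to append the flag only when the string input equals 'true' (composite-action inputs are always strings). A sketch of the resulting command assembly in Python; the compare.py subcommand and any arguments not shown in this hunk are omitted:

```python
# Sketch of the command line the action's shell step effectively builds;
# the subcommand and unshown arguments are elided.
def compare_command(save_name: str, timestamp: str, dry_run: str) -> list[str]:
    cmd = [
        "python3", "devops/scripts/benchmarks/compare.py",
        "--name", save_name,
        "--compare-file",
        f"./llvm-ci-perf-results/results/{save_name}_{timestamp}.json",
        "--results-dir", "./llvm-ci-perf-results/results/",
        "--regression-filter", "^[a-z_]+_sycl ",
    ]
    if dry_run == "true":  # inputs arrive as strings, not booleans
        cmd.append("--dry-run")
    return cmd
```

One caveat worth noting: in GitHub Actions expressions a falsy `&&` result interpolates as the literal string 'false', so the defensive form `${{ inputs.dry_run == 'true' && '--dry-run' || '' }}` is often used to keep the falsy branch empty.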
8 changes: 7 additions & 1 deletion devops/scripts/benchmarks/compare.py
@@ -326,6 +326,11 @@ def to_hist(
         help="If provided, only regressions matching provided regex will cause exit status 1.",
         default=None,
     )
+    parser_avg.add_argument(
+        "--dry-run",
+        action="store_true",
+        help="Do not return error upon regressions.",
+    )

     args = parser.parse_args()

@@ -372,7 +377,8 @@ def print_regression(entry: dict):
             print("#\n# Regressions:\n#\n")
             for test in regressions_of_concern:
                 print_regression(test)
-            exit(1)  # Exit 1 to trigger github test failure
+            if not args.dry_run:
+                exit(1)  # Exit 1 to trigger github test failure
         print("\nNo unexpected regressions found!")
     else:
         print("Unsupported operation: exiting.")
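Taken together, the two compare.py hunks implement a common CLI pattern: regressions are always reported, but the failing exit code is gated behind the flag. A self-contained sketch of that pattern (the regression list is placeholder data, not compare.py's real comparison logic):

```python
import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument(
    "--dry-run",
    action="store_true",  # defaults to False; passing the flag sets True
    help="Do not return error upon regressions.",
)
args = parser.parse_args()

regressions_of_concern = ["hypothetical_benchmark_sycl"]  # placeholder data

if regressions_of_concern:
    print("#\n# Regressions:\n#\n")
    for test in regressions_of_concern:
        print(test)
    if not args.dry_run:
        sys.exit(1)  # non-zero exit fails the CI job
    print("Dry run: regressions reported without failing the job.")
```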