2024 December Updates #69


Merged: 69 commits, Dec 27, 2024
Commits
- 9526dac Fixes for igbh dataset download (arjunsuresh, Dec 12, 2024)
- f277ba3 Merge pull request #1 from GATEOverflow/dev (arjunsuresh, Dec 12, 2024)
- 663d6be fixes for rgat reference implementation (arjunsuresh, Dec 12, 2024)
- e0b6ded Added tqdm deps for get-dataset-igbh (arjunsuresh, Dec 12, 2024)
- 45a08cb Fix old repo name usage (arjunsuresh, Dec 12, 2024)
- 0b5bcfe Fix for avoiding user prompt in download-igbh (arjunsuresh, Dec 12, 2024)
- 25d903b Remove deprecated gui usage (arjunsuresh, Dec 12, 2024)
- ccb1cef run on pull request (anandhu-eng, Dec 12, 2024)
- 036b4e9 change base branch to dev (anandhu-eng, Dec 12, 2024)
- 1daee91 Merge pull request #2 from anandhu-eng/clean_links (arjunsuresh, Dec 12, 2024)
- 006f23f Cleanup for mlperf-inference-rgat (arjunsuresh, Dec 12, 2024)
- 8e896ed Fix torch and numpy version for mlperf-inference-rgat (arjunsuresh, Dec 12, 2024)
- 1717a56 Support pytorch 2.4 for app-mlperf-inference-rgat (arjunsuresh, Dec 12, 2024)
- 0bc8416 Merge branch 'mlcommons:dev' into dev (arjunsuresh, Dec 12, 2024)
- 17d7c08 Support igbh dataset from host (arjunsuresh, Dec 12, 2024)
- 58b3bfb Fix fstring formatting in app-mlperf-inference-mlcommons-python (arjunsuresh, Dec 12, 2024)
- c93b9c2 Fix use_dataset_from_host for igbh (arjunsuresh, Dec 12, 2024)
- 64f69e6 Remove torchvision deps for mlperf-inference-rgat (arjunsuresh, Dec 12, 2024)
- c1e00cc Remove torchvision deps for mlperf inference rgat cuda (arjunsuresh, Dec 12, 2024)
- 4ef87fa Create test-mlperf-inference-rgat.yml (arjunsuresh, Dec 12, 2024)
- 31c3143 Fix default cm-repo-branch for build-dockerfile (arjunsuresh, Dec 12, 2024)
- b3deeef Merge pull request #49 from GATEOverflow/dev (arjunsuresh, Dec 12, 2024)
- b899c20 capture docker tool (anandhu-eng, Dec 13, 2024)
- 71fd59a docker tool -> container tool (anandhu-eng, Dec 13, 2024)
- ddee591 Merge pull request #50 from anandhu-eng/podmanCapture (arjunsuresh, Dec 13, 2024)
- 216081d [Automated Commit] Format Codebase (#51) (arjunsuresh, Dec 13, 2024)
- 9136723 Update test-mlperf-inference-rgat.yml (arjunsuresh, Dec 13, 2024)
- edcf36c Test (#52) (arjunsuresh, Dec 13, 2024)
- a1b8a48 Test (#53) (arjunsuresh, Dec 14, 2024)
- 48f7a91 Update VERSION | rgat-fixes (arjunsuresh, Dec 14, 2024)
- 3d9715f Updated git_commit_hash.txt (mlcommons-bot, Dec 14, 2024)
- 90a4412 Update MLPerf automation repo in github actions (#54) (arjunsuresh, Dec 19, 2024)
- af15e72 Support nvmitten for aarch64 (#55) (arjunsuresh, Dec 19, 2024)
- 8b92713 Increment version to 0.6.13 (mlcommons-bot, Dec 19, 2024)
- b3a34ec Updated git_commit_hash.txt (mlcommons-bot, Dec 19, 2024)
- 3f25d3c Copy bert model for nvidia-mlperf-inference implementation instead of… (arjunsuresh, Dec 20, 2024)
- a09686d Update version (#57) (arjunsuresh, Dec 20, 2024)
- e6ad511 Updated git_commit_hash.txt (mlcommons-bot, Dec 20, 2024)
- f399c2c Update github actions - use master branch of inference repository (#58) (arjunsuresh, Dec 20, 2024)
- d2db3b4 Migrate MLPerf inference unofficial results repo to MLCommons (#59) (arjunsuresh, Dec 21, 2024)
- 2b1e23c Updated git_commit_hash.txt (mlcommons-bot, Dec 21, 2024)
- 3439a72 Create reset-fork.yml (arjunsuresh, Dec 21, 2024)
- 5ddfc95 Update pyproject.toml (arjunsuresh, Dec 21, 2024)
- f5eb712 Update VERSION (arjunsuresh, Dec 21, 2024)
- 17833df Updated git_commit_hash.txt (mlcommons-bot, Dec 21, 2024)
- cfd76e1 Fix scc24 github action (#61) (arjunsuresh, Dec 21, 2024)
- d0c6c3e Fix dangling softlink issue with nvidia-mlperf-inference-bert (#64) (arjunsuresh, Dec 21, 2024)
- 188708b Update VERSION (arjunsuresh, Dec 21, 2024)
- 26cf833 Updated git_commit_hash.txt (mlcommons-bot, Dec 21, 2024)
- 7f48c88 Support pull_inference_changes in run-mlperf-inference-app (#65) (arjunsuresh, Dec 21, 2024)
- b051bb1 Added pull_inference_changes support to run-mlperf-inference-app (arjunsuresh, Dec 21, 2024)
- 7bc5f0d Fix github action failures (#68) (arjunsuresh, Dec 22, 2024)
- 225220c Update test-cm4mlops-wheel-ubuntu.yml (arjunsuresh, Dec 22, 2024)
- bb79019 support --outdirname for ml models, partially fixed #63 (#71) (sahilavaran, Dec 23, 2024)
- a9e8329 Update test-cm-based-submission-generation.yml (#73) (arjunsuresh, Dec 23, 2024)
- 7dcef66 Fix exit code for docker run failures (#74) (arjunsuresh, Dec 23, 2024)
- d28df7e Support --outdirname for datasets fixes #63 (#75) (sahilavaran, Dec 23, 2024)
- cf575d0 Support version in preprocess-submission, cleanups for coco2014 scrip… (arjunsuresh, Dec 23, 2024)
- 1fc32ab Fixed stable-diffusion-xl name in SUT configs (arjunsuresh, Dec 24, 2024)
- 79fb471 Fix tensorrt detect on aarch64 (arjunsuresh, Dec 24, 2024)
- 5189696 Added torch deps for get-ml-model-gptj-nvidia (arjunsuresh, Dec 24, 2024)
- 76796b4 Update VERSION (arjunsuresh, Dec 24, 2024)
- a90475d Updated git_commit_hash.txt (mlcommons-bot, Dec 24, 2024)
- 3551660 Fix coco2014 sample ids path (arjunsuresh, Dec 25, 2024)
- c465378 Fixes for podman support (#79) (arjunsuresh, Dec 27, 2024)
- c3550d2 Not use SHELL command in CM docker (#82) (arjunsuresh, Dec 27, 2024)
- f79e2f3 Support adding dependent CM script commands in CM dockerfile (arjunsuresh, Dec 27, 2024)
- 6ba3117 Fixes for igbh dataset detection (#85) (arjunsuresh, Dec 27, 2024)
- 467517e Merge branch 'main' into dev (arjunsuresh, Dec 27, 2024)
5 changes: 3 additions & 2 deletions .github/workflows/check-broken-links.yml
@@ -1,9 +1,9 @@
name: "Check .md README files for broken links"

on:
push:
pull_request:
branches:
- master
- dev

jobs:
markdown-link-check:
@@ -18,3 +18,4 @@ jobs:
  with:
    use-quiet-mode: 'yes'
    check-modified-files-only: 'yes'
+   base-branch: dev
42 changes: 42 additions & 0 deletions .github/workflows/reset-fork.yml
@@ -0,0 +1,42 @@
name: Reset Current Branch to Upstream After Squash Merge

on:
  workflow_dispatch:
    inputs:
      branch:
        description: 'Branch to reset (leave blank for current branch)'
        required: false
        default: ''

jobs:
  reset-branch:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Detect Current Branch
        if: ${{ inputs.branch == '' }}
        run: echo "branch=$(git rev-parse --abbrev-ref HEAD)" >> $GITHUB_ENV

      - name: Use Input Branch
        if: ${{ inputs.branch != '' }}
        run: echo "branch=${{ inputs.branch }}" >> $GITHUB_ENV

      - name: Add Upstream Remote
        run: |
          git remote add upstream https://github.com/mlcommons/mlperf-automations.git
          git fetch upstream

      - name: Reset Branch to Upstream
        if: success()
        run: |
          git checkout ${{ env.branch }}
          git reset --hard upstream/${{ env.branch }}

      - name: Force Push to Origin
        if: success()
        run: |
          git push origin ${{ env.branch }} --force-with-lease
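Functionally, the reset-fork workflow above discards local commits on a fork branch and makes it identical to upstream again after a squash merge. The same steps can be sketched as a dry-run shell script; the `run` wrapper and the `branch="dev"` value are illustrative additions (the workflow derives the branch from its input or from HEAD), and the wrapper only prints each command instead of executing it:

```shell
#!/usr/bin/env bash
# Dry-run sketch of what the reset-fork workflow does for one branch.
# `branch` is an example value; remove the `run` wrapper to apply for real.
branch="dev"
run() { echo "+ $*"; }   # print the command instead of executing it

run git remote add upstream https://github.com/mlcommons/mlperf-automations.git
run git fetch upstream
run git checkout "${branch}"
run git reset --hard "upstream/${branch}"
run git push origin "${branch}" --force-with-lease
```

Note the use of `--force-with-lease` rather than plain `--force`: the push is refused if someone else updated the fork branch since it was last fetched, which is the safer choice for an automated reset.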
@@ -22,5 +22,5 @@ jobs:
  export CM_REPOS=$HOME/GH_CM
  pip install --upgrade cm4mlops
  cm pull repo
- cm run script --tags=run-mlperf,inference,_all-scenarios,_full,_r4.1-dev --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=IntelSPR.24c --implementation=amd --backend=pytorch --category=datacenter --division=open --scenario=Offline --docker_dt=yes --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --device=rocm --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet --docker_skip_run_cmd=yes
- # cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=main --commit_message="Results from GH action on SPR.24c" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=IntelSPR.24c
+ cm run script --tags=run-mlperf,inference,_all-scenarios,_full,_r4.1-dev --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=IntelSPR.24c --implementation=amd --backend=pytorch --category=datacenter --division=open --scenario=Offline --docker_dt=yes --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --docker_cm_repo_branch=dev --adr.compiler.tags=gcc --device=rocm --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet --docker_skip_run_cmd=yes
+ # cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=dev --commit_message="Results from GH action on SPR.24c" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=IntelSPR.24c
18 changes: 14 additions & 4 deletions .github/workflows/test-cm-based-submission-generation.yml
@@ -80,19 +80,29 @@ jobs:
  fi
  # Dynamically set the log group to simulate a dynamic step name
  echo "::group::$description"
- cm ${{ matrix.action }} script --tags=generate,inference,submission --adr.submission-checker-src.tags=_branch.dev --clean --preprocess_submission=yes --results_dir=$PWD/submission_generation_tests/${{ matrix.case }}/ --run-checker --submitter=MLCommons --tar=yes --env.CM_TAR_OUTFILE=submission.tar.gz --division=${{ matrix.division }} --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes --quiet $extra_run_args
+ cm ${{ matrix.action }} script --tags=generate,inference,submission --version=v4.1 --clean --preprocess_submission=yes --results_dir=$PWD/submission_generation_tests/${{ matrix.case }}/ --run-checker --submitter=MLCommons --tar=yes --env.CM_TAR_OUTFILE=submission.tar.gz --division=${{ matrix.division }} --env.CM_DETERMINE_MEMORY_CONFIGURATION=yes --quiet $extra_run_args
  exit_status=$?
  echo "Exit status for the job ${description} ${exit_status}"
  if [[ "${{ matrix.case }}" == "case-5" || "${{ matrix.case }}" == "case-6" ]]; then
    # For cases 5 and 6, exit status should be 0 if cm command fails, 1 if it succeeds
    if [[ ${exit_status} -ne 0 ]]; then
-     exit 0
+     echo "STEP_FAILED=false" >> $GITHUB_ENV
    else
-     exit ${exit_status}
+     echo "STEP_FAILED=true" >> $GITHUB_ENV
    fi
  else
    # For other cases, exit with the original status
-   test ${exit_status} -eq 0 || exit ${exit_status}
+   if [[ ${exit_status} -eq 0 ]]; then
+     echo "STEP_FAILED=false" >> $GITHUB_ENV
+   else
+     echo "STEP_FAILED=true" >> $GITHUB_ENV
+   fi
  fi
  echo "::endgroup::"
+ - name: Fail if Step Failed
+   if: env.STEP_FAILED == 'true'
+   continue-on-error: false
+   run: |
+     echo "Manually failing the workflow because the step failed."
+     exit 1
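The hunk above replaces immediate `exit` calls with a recorded `STEP_FAILED` flag, so the log group always closes and a later step decides whether the job fails. The pattern can be sketched in plain bash; `false` is a stand-in for the long `cm run script` invocation, and the flag is echoed here where the workflow would append it to `$GITHUB_ENV`:

```shell
#!/usr/bin/env bash
# Capture a command's exit status without letting a failure abort the
# script, then report the outcome for a later step to act on.
set +e
false            # stand-in for the real `cm run script ...` command
exit_status=$?
set -e

if [ "${exit_status}" -ne 0 ]; then
  echo "STEP_FAILED=true"     # in CI: >> $GITHUB_ENV
else
  echo "STEP_FAILED=false"
fi
```

Deferring the failure this way keeps `echo "::endgroup::"` reachable on every path, which is why the workflow gained the separate "Fail if Step Failed" step.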

3 changes: 1 addition & 2 deletions .github/workflows/test-cm4mlops-wheel-ubuntu.yml
@@ -5,7 +5,6 @@ on:
  branches:
    - main
    - dev
-   - mlperf-inference
  paths:
    - '.github/workflows/test-cm4mlops-wheel-ubuntu.yml'
    - 'setup.py'
@@ -16,7 +15,7 @@ jobs:
  fail-fast: false
  matrix:
    os: [ubuntu-latest, ubuntu-20.04]
-   python-version: ['3.7', '3.8', '3.11', '3.12']
+   python-version: ['3.8', '3.11', '3.12']
  exclude:
    - os: ubuntu-latest
      python-version: "3.8"
@@ -22,5 +22,5 @@ jobs:
  export CM_REPOS=$HOME/GH_CM
  pip install --upgrade cm4mlops
  pip install tabulate
- cm run script --tags=run-mlperf,inference,_all-scenarios,_submission,_full,_r4.1-dev --preprocess_submission=yes --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=IntelSPR.24c --implementation=intel --backend=pytorch --category=datacenter --division=open --scenario=Offline --docker_dt=yes --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --device=cpu --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet
- cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=main --commit_message="Results from GH action on SPR.24c" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=IntelSPR.24c
+ cm run script --tags=run-mlperf,inference,_all-scenarios,_submission,_full,_r4.1-dev --preprocess_submission=yes --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=IntelSPR.24c --implementation=intel --backend=pytorch --category=datacenter --division=open --scenario=Offline --docker_dt=yes --docker_it=no --docker_cm_repo=mlcommons@mlperf-automations --docker_cm_repo_branch=dev --adr.compiler.tags=gcc --device=cpu --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet
+ cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from GH action on SPR.24c" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=IntelSPR.24c
@@ -60,4 +60,4 @@ jobs:
  git config --global credential.https://github.com.helper "!gh auth git-credential"
  git config --global credential.https://gist.github.com.helper ""
  git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
- cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/mlperf_inference_test_submissions_v5.0 --repo_branch=main --commit_message="Results from Bert GH action on ${{ matrix.os }}" --quiet
+ cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from Bert GH action on ${{ matrix.os }}" --quiet
4 changes: 2 additions & 2 deletions .github/workflows/test-mlperf-inference-dlrm.yml
@@ -25,7 +25,7 @@ jobs:
  export CM_REPOS=$HOME/GH_CM
  python3 -m pip install cm4mlops
  cm pull repo
- cm run script --tags=run-mlperf,inference,_performance-only --adr.mlperf-implementation.tags=_branch.dev --adr.mlperf-implementation.version=custom --submitter="MLCommons" --model=dlrm-v2-99 --implementation=reference --backend=pytorch --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=1 --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --clean
+ cm run script --tags=run-mlperf,inference,_performance-only --pull_changes=yes --pull_inference_changes=yes --submitter="MLCommons" --model=dlrm-v2-99 --implementation=reference --backend=pytorch --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=1 --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --clean

build_intel:
if: github.repository_owner == 'gateoverflow_off'
@@ -45,4 +45,4 @@ jobs:
  export CM_REPOS=$HOME/GH_CM
  python3 -m pip install cm4mlops
  cm pull repo
- cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --model=dlrm-v2-99 --implementation=intel --batch_size=1 --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=1 --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean
+ cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --model=dlrm-v2-99 --implementation=intel --batch_size=1 --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=1 --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean
4 changes: 2 additions & 2 deletions .github/workflows/test-mlperf-inference-gptj.yml
@@ -26,6 +26,6 @@ jobs:
  export CM_REPOS=$HOME/GH_CM
  python3 -m pip install cm4mlops
  cm pull repo
- cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --docker --model=gptj-99 --backend=${{ matrix.backend }} --device=cuda --scenario=Offline --test_query_count=1 --precision=${{ matrix.precision }} --target_qps=1 --quiet --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --beam_size=1 --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --get_platform_details=yes --implementation=reference --clean
- cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/mlperf_inference_test_submissions_v5.0 --repo_branch=main --commit_message="Results from self hosted Github actions - NVIDIARTX4090" --quiet --submission_dir=$HOME/gh_action_submissions
+ cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --docker --pull_changes=yes --pull_inference_changes=yes --model=gptj-99 --backend=${{ matrix.backend }} --device=cuda --scenario=Offline --test_query_count=1 --precision=${{ matrix.precision }} --target_qps=1 --quiet --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --docker_cm_repo_branch=dev --adr.compiler.tags=gcc --beam_size=1 --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --get_platform_details=yes --implementation=reference --clean
+ cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from self hosted Github actions - NVIDIARTX4090" --quiet --submission_dir=$HOME/gh_action_submissions

8 changes: 4 additions & 4 deletions .github/workflows/test-mlperf-inference-llama2.yml
@@ -1,7 +1,7 @@
  # This workflow will install Python dependencies, run tests and lint with a variety of Python versions
  # For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

- name: MLPerf inference LLAMA 2 70B
+ name: MLPerf inference LLAMA2-70B

  on:
    schedule:
@@ -20,7 +20,7 @@ jobs:
  precision: [ "bfloat16" ]

  steps:
-   - name: Test MLPerf Inference LLAMA 2 70B reference implementation
+   - name: Test MLPerf Inference LLAMA2-70B reference implementation
      run: |
        source gh_action/bin/deactivate || python3 -m venv gh_action
        source gh_action/bin/activate
@@ -31,5 +31,5 @@ jobs:
  pip install "huggingface_hub[cli]"
  git config --global credential.helper store
  huggingface-cli login --token ${{ secrets.HF_TOKEN }} --add-to-git-credential
- cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --model=llama2-70b-99 --implementation=reference --backend=${{ matrix.backend }} --precision=${{ matrix.precision }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=0.001 --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --env.CM_MLPERF_MODEL_LLAMA2_70B_DOWNLOAD_TO_HOST=yes --clean
- cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/mlperf_inference_test_submissions_v5.0 --repo_branch=main --commit_message="Results from self hosted Github actions" --quiet --submission_dir=$HOME/gh_action_submissions
+ cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --model=llama2-70b-99 --implementation=reference --backend=${{ matrix.backend }} --precision=${{ matrix.precision }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=0.001 --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --env.CM_MLPERF_MODEL_LLAMA2_70B_DOWNLOAD_TO_HOST=yes --clean
+ cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from self hosted Github actions" --quiet --submission_dir=$HOME/gh_action_submissions
7 changes: 4 additions & 3 deletions .github/workflows/test-mlperf-inference-mixtral.yml
@@ -5,11 +5,12 @@ name: MLPerf inference MIXTRAL-8x7B

  on:
    schedule:
-     - cron: "08 23 * * *" # 23:08 UTC
+     - cron: "59 19 * * *" # 19:59 UTC (1:29 AM IST)

  jobs:
    build_reference:
      if: github.repository_owner == 'gateoverflow'
+     timeout-minutes: 1440
      runs-on: [ self-hosted, phoenix, linux, x64 ]
      strategy:
        fail-fast: false
@@ -30,5 +31,5 @@ jobs:
  git config --global credential.helper store
  huggingface-cli login --token ${{ secrets.HF_TOKEN }} --add-to-git-credential
  cm pull repo
- cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --model=mixtral-8x7b --implementation=reference --batch_size=1 --precision=${{ matrix.precision }} --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker_it=no --docker_cm_repo=gateoverflow@cm4mlops --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --docker --quiet --test_query_count=3 --target_qps=0.001 --clean --env.CM_MLPERF_MODEL_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --env.CM_MLPERF_DATASET_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --adr.openorca-mbxp-gsm8k-combined-preprocessed.tags=_size.1
- cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/mlperf_inference_test_submissions_v5.0 --repo_branch=main --commit_message="Results from self hosted Github actions - GO-phoenix" --quiet --submission_dir=$HOME/gh_action_submissions
+ cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --model=mixtral-8x7b --implementation=reference --batch_size=1 --precision=${{ matrix.precision }} --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --docker --quiet --test_query_count=3 --target_qps=0.001 --clean --env.CM_MLPERF_MODEL_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --env.CM_MLPERF_DATASET_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --adr.openorca-mbxp-gsm8k-combined-preprocessed.tags=_size.1
+ cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from self hosted Github actions - GO-phoenix" --quiet --submission_dir=$HOME/gh_action_submissions
@@ -59,4 +59,4 @@ jobs:
  git config --global credential.https://github.com.helper "!gh auth git-credential"
  git config --global credential.https://gist.github.com.helper ""
  git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
- cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/mlperf_inference_test_submissions_v5.0 --repo_branch=main --commit_message="Results from MLCommons C++ ResNet50 GH action on ${{ matrix.os }}" --quiet
+ cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from MLCommons C++ ResNet50 GH action on ${{ matrix.os }}" --quiet