Sync Dev #173

Merged (37 commits, Jan 31, 2025)
Commits (37)
62f0cc8  Added zenodo alternative link for bert fake quantized model (arjunsuresh, Jan 26, 2025)
59e2ce1  Update VERSION (arjunsuresh, Jan 26, 2025)
a87f4cd  Updated git_commit_hash.txt (mlcommons-bot, Jan 26, 2025)
83a09e2  Update setup.py (arjunsuresh, Jan 26, 2025)
99b3674  Update pyproject.toml (arjunsuresh, Jan 26, 2025)
d652858  Update VERSION (arjunsuresh, Jan 26, 2025)
52aa051  Updated git_commit_hash.txt (mlcommons-bot, Jan 26, 2025)
3fa4446  Fixes for nvidia mlperf inference (#156) (arjunsuresh, Jan 26, 2025)
93a3be6  Fix cuda-python version (arjunsuresh, Jan 26, 2025)
a63e5e2  Fix typo in docker_utils (arjunsuresh, Jan 26, 2025)
61e7be5  Cleanup (#158) (arjunsuresh, Jan 26, 2025)
819953c  Update test-nvidia-mlperf-inference-implementations.yml (#159) (arjunsuresh, Jan 27, 2025)
a4a380e  Update build_wheel.yml (arjunsuresh, Jan 27, 2025)
97bca70  Update VERSION (arjunsuresh, Jan 27, 2025)
3a02626  Update test-mlc-script-features.yml (arjunsuresh, Jan 27, 2025)
f8572f9  Fix getuser inside container (#160) (arjunsuresh, Jan 27, 2025)
29eddc1  Dataset and model scripts for automotive reference implementation (#161) (anandhu-eng, Jan 27, 2025)
c2abbc7  Update VERSION (arjunsuresh, Jan 27, 2025)
4b39d54  Update build_wheel.yml (arjunsuresh, Jan 27, 2025)
f0d5b30  Update VERSION (arjunsuresh, Jan 27, 2025)
9291850  Updated git_commit_hash.txt (mlcommons-bot, Jan 27, 2025)
3a8dad3  Update pyproject.toml (arjunsuresh, Jan 27, 2025)
90fbda0  Update VERSION (arjunsuresh, Jan 27, 2025)
606c2b6  Updated git_commit_hash.txt (mlcommons-bot, Jan 27, 2025)
3657548  Use search and not find in script module (#162) (arjunsuresh, Jan 27, 2025)
6161058  Fixes to docker detached mode (#163) (arjunsuresh, Jan 27, 2025)
09e07b4  Use global logger (#164) (arjunsuresh, Jan 28, 2025)
30a1c0c  Support image_name in docker_settings (#166) (arjunsuresh, Jan 28, 2025)
fb9214c  Fix logging (#167) (arjunsuresh, Jan 28, 2025)
4fe42de  Updated git_commit_hash.txt (mlcommons-bot, Jan 28, 2025)
3dacfdc  Fix clean-nvidia-scratch-space (#168) (arjunsuresh, Jan 29, 2025)
615cc25  Updated git_commit_hash.txt (mlcommons-bot, Jan 29, 2025)
7cccb35  Bug fix (#169) (arjunsuresh, Jan 29, 2025)
7f1550a  Fix typo in clean-nvidia-scratch-space (#170) (arjunsuresh, Jan 29, 2025)
1dd7264  Fix duplicate cache entries, added CacheAction inside ScriptAutomatio… (arjunsuresh, Jan 30, 2025)
7ee9cf4  Added support for automotive pointpainting benchmark (#172) (anandhu-eng, Jan 31, 2025)
02683cf  Merge branch 'main' into dev (arjunsuresh, Jan 31, 2025)
1 change: 0 additions & 1 deletion .github/workflows/build_wheel.yml
@@ -5,7 +5,6 @@ on:
types: [published]
push:
branches:
- main
- dev
paths:
- VERSION
(file header not captured)
@@ -20,7 +20,7 @@ jobs:
python3 -m venv gh_action
source gh_action/bin/activate
export MLC_REPOS=$HOME/GH_MLC
pip install --upgrade cm4mlops
cm pull repo
pip install --upgrade mlc-scripts
mlc pull repo
mlcr --tags=run-mlperf,inference,_all-scenarios,_full,_r4.1-dev --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=IntelSPR.24c --implementation=amd --backend=pytorch --category=datacenter --division=open --scenario=Offline --docker_dt=yes --docker_it=no --docker_mlc_repo=gateoverflow@mlperf-automations --docker_mlc_repo_branch=dev --adr.compiler.tags=gcc --device=rocm --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet --docker_skip_run_cmd=yes
# mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=dev --commit_message="Results from GH action on SPR.24c" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=IntelSPR.24c
(file header not captured)
@@ -20,7 +20,7 @@ jobs:
python3 -m venv gh_action_conda
source gh_action_conda/bin/activate
export MLC_REPOS=$HOME/GH_MLC
pip install --upgrade cm4mlops
pip install --upgrade mlc-scripts
pip install tabulate
mlcr --tags=run-mlperf,inference,_all-scenarios,_submission,_full,_r4.1-dev --preprocess_submission=yes --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=IntelSPR.24c --implementation=intel --backend=pytorch --category=datacenter --division=open --scenario=Offline --docker_dt=yes --docker_it=no --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --adr.compiler.tags=gcc --device=cpu --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet
mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from GH action on SPR.24c" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=IntelSPR.24c
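The two hunks above capture the migration that repeats throughout this PR: the cm4mlops package and the cm CLI give way to mlc-scripts with the mlc and mlcr commands. A minimal sketch of the updated setup step, assembled only from commands that appear in these diffs (the step name is illustrative, and the mlcr flags are trimmed and vary per workflow):

- name: Install MLC automation tooling and run a benchmark   # illustrative step name
  run: |
    python3 -m venv gh_action
    source gh_action/bin/activate
    export MLC_REPOS=$HOME/GH_MLC
    pip install --upgrade mlc-scripts   # replaces: pip install --upgrade cm4mlops
    mlc pull repo                       # replaces: cm pull repo
    # mlcr then drives the benchmark run; the flags below are trimmed for illustration
    mlcr --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --quiet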
88 changes: 83 additions & 5 deletions .github/workflows/test-mlc-script-features.yml
@@ -1,7 +1,7 @@
name: MLC script automation features test

on:
pull_request:
pull_request_target:
branches: [ "main", "dev" ]
paths:
- '.github/workflows/test-mlc-script-features.yml'
@@ -61,23 +61,101 @@ jobs:
mlcr --tags=python,src,install,_shared --version=3.9.10 --quiet
mlc search cache --tags=python,src,install,_shared,version-3.9.10

test_docker:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.12", "3.8"]

steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Pull MLOps repository
run: |
pip install mlcflow
mlc pull repo ${{ github.event.pull_request.head.repo.html_url }} --branch=${{ github.event.pull_request.head.ref }}

- name: Run docker container from dockerhub on linux
if: runner.os == 'linux'
run: |
mlcr --tags=run,docker,container --adr.compiler.tags=gcc --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --image_name=cm-script-app-image-classification-onnx-py --env.MLC_DOCKER_RUN_SCRIPT_TAGS=app,image-classification,onnx,python --env.MLC_DOCKER_IMAGE_BASE=ubuntu:22.04 --env.MLC_DOCKER_IMAGE_REPO=cknowledge --quiet

- name: Run docker container locally on linux
if: runner.os == 'linux'
run: |
mlcr --tags=run,docker,container --adr.compiler.tags=gcc --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --image_name=mlc-script-app-image-classification-onnx-py --env.MLC_DOCKER_RUN_SCRIPT_TAGS=app,image-classification,onnx,python --env.MLC_DOCKER_IMAGE_BASE=ubuntu:22.04 --env.MLC_DOCKER_IMAGE_REPO=local --quiet

test_mlperf_retinanet_cpp_venv:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.12", "3.8"]

steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Pull MLOps repository
run: |
pip install mlcflow
mlc pull repo ${{ github.event.pull_request.head.repo.html_url }} --branch=${{ github.event.pull_request.head.ref }}

- name: Run MLPerf Inference Retinanet with native and virtual Python
if: runner.os == 'linux'
run: |
mlcr --tags=app,mlperf,inference,generic,_cpp,_retinanet,_onnxruntime,_cpu --adr.python.version_min=3.8 --adr.compiler.tags=gcc --adr.openimages-preprocessed.tags=_50 --scenario=Offline --mode=accuracy --test_query_count=10 --rerun --quiet

mlcr --tags=app,mlperf,inference,generic,_cpp,_retinanet,_onnxruntime,_cpu --adr.python.version_min=3.8 --adr.compiler.tags=gcc --adr.openimages-preprocessed.tags=_50 --scenario=Offline --mode=performance --test_query_count=10 --rerun --quiet

mlcr --tags=install,python-venv --version=3.10.8 --name=mlperf --quiet

mlcr --tags=run,mlperf,inference,generate-run-cmds,_submission,_short --adr.python.name=mlperf --adr.python.version_min=3.8 --adr.compiler.tags=gcc --adr.openimages-preprocessed.tags=_50 --submitter=Community --implementation=cpp --hw_name=default --model=retinanet --backend=onnxruntime --device=cpu --scenario=Offline --quiet
mlcr --tags=run,mlperf,inference,_submission,_short --adr.python.name=mlperf --adr.python.version_min=3.8 --adr.compiler.tags=gcc --adr.openimages-preprocessed.tags=_50 --submitter=MLCommons --implementation=cpp --hw_name=default --model=retinanet --backend=onnxruntime --device=cpu --scenario=Offline --quiet

# Step for Linux/MacOS
- name: Randomly Execute Step (Linux/MacOS)
if: runner.os != 'Windows'
run: |
RANDOM_NUMBER=$((RANDOM % 10))
echo "Random number is $RANDOM_NUMBER"
if [ "$RANDOM_NUMBER" -eq 0 ]; then
echo "run_step=true" >> $GITHUB_ENV
else
echo "run_step=false" >> $GITHUB_ENV
fi

# Step for Windows
- name: Randomly Execute Step (Windows)
if: runner.os == 'Windows'
run: |
$RANDOM_NUMBER = Get-Random -Maximum 10
Write-Host "Random number is $RANDOM_NUMBER"
if ($RANDOM_NUMBER -eq 0) {
Write-Host "run_step=true" | Out-File -FilePath $Env:GITHUB_ENV -Append
} else {
Write-Host "run_step=false" | Out-File -FilePath $Env:GITHUB_ENV -Append
}

- name: Retrieve secrets from Keeper
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
id: ksecrets
uses: Keeper-Security/ksm-action@master
with:
keeper-secret-config: ${{ secrets.KSM_CONFIG }}
secrets: |-
ubwkjh-Ii8UJDpG2EoU6GQ/field/Access Token > env:PAT
- name: Push Results
env:
GITHUB_TOKEN: ${{ env.PAT }}
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
run: |
git config --global user.name "mlcommons-bot"
git config --global user.email "mlcommons-bot@users.noreply.github.com"
git config --global credential.https://github.com.helper ""
git config --global credential.https://github.com.helper "!gh auth git-credential"
git config --global credential.https://gist.github.com.helper ""
git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet
(file header not captured)
@@ -1,12 +1,12 @@
name: Build Python Wheel
name: Build mlc-scripts Wheel

on:
pull_request:
branches:
- main
- dev
paths:
- '.github/workflows/test-mlperf-wheel.yml'
- '.github/workflows/test-mlcscripts-wheel.yml'
- 'setup.py'

jobs:
@@ -16,6 +16,9 @@ jobs:
matrix:
os: [macos-latest, ubuntu-latest, windows-latest]
python-version: [ '3.8', '3.13']
exclude:
- os: windows-latest
python-version: "3.8"

runs-on: ${{ matrix.os }}

4 changes: 2 additions & 2 deletions .github/workflows/test-mlperf-inference-gptj.yml
@@ -24,8 +24,8 @@ jobs:
source gh_action/bin/deactivate || python3 -m venv gh_action
source gh_action/bin/activate
export MLC_REPOS=$HOME/GH_MLC
python3 -m pip install cm4mlops
cm pull repo
python3 -m pip install --upgrade mlc-scripts
mlc pull repo
mlcr --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --docker --pull_changes=yes --pull_inference_changes=yes --model=gptj-99 --backend=${{ matrix.backend }} --device=cuda --scenario=Offline --test_query_count=1 --precision=${{ matrix.precision }} --target_qps=1 --quiet --docker_it=no --docker_mlc_repo=gateoverflow@mlperf-automations --docker_mlc_repo_branch=dev --adr.compiler.tags=gcc --beam_size=1 --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --get_platform_details=yes --implementation=reference --clean
mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from self hosted Github actions - NVIDIARTX4090" --quiet --submission_dir=$HOME/gh_action_submissions

4 changes: 2 additions & 2 deletions .github/workflows/test-mlperf-inference-llama2.yml
@@ -25,9 +25,9 @@ jobs:
source gh_action/bin/deactivate || python3 -m venv gh_action
source gh_action/bin/activate
export MLC_REPOS=$HOME/GH_MLC
pip install cm4mlops
pip install mlc-scripts
pip install tabulate
cm pull repo
mlc pull repo
pip install "huggingface_hub[cli]"
git config --global credential.helper store
huggingface-cli login --token ${{ secrets.HF_TOKEN }} --add-to-git-credential
4 changes: 2 additions & 2 deletions .github/workflows/test-mlperf-inference-mixtral.yml
@@ -26,10 +26,10 @@ jobs:
source gh_action/bin/deactivate || python3 -m venv gh_action
source gh_action/bin/activate
export MLC_REPOS=$HOME/GH_MLC
pip install cm4mlops
pip install --upgrade mlc-scripts
pip install "huggingface_hub[cli]"
git config --global credential.helper store
huggingface-cli login --token ${{ secrets.HF_TOKEN }} --add-to-git-credential
cm pull repo
mlc pull repo
mlcr --tags=run-mlperf,inference,_submission,_short --adr.inference-src.tags=_branch.dev --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --model=mixtral-8x7b --implementation=reference --batch_size=1 --precision=${{ matrix.precision }} --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker_it=no --docker_mlc_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --docker --quiet --test_query_count=3 --target_qps=0.001 --clean --env.MLC_MLPERF_MODEL_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --env.MLC_MLPERF_DATASET_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --adr.openorca-mbxp-gsm8k-combined-preprocessed.tags=_size.1
mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from self hosted Github actions - GO-phoenix" --quiet --submission_dir=$HOME/gh_action_submissions
(file header not captured)
@@ -1,11 +1,8 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: MLPerf inference MLCommons C++ ResNet50

on:
pull_request:
branches: [ "main", "dev", "mlperf-inference" ]
pull_request_target:
branches: [ "main", "dev" ]
paths:
- '.github/workflows/test-mlperf-inference-mlcommons-cpp-resnet50.yml'
- '**'
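Several workflows in this PR also switch their trigger from pull_request to pull_request_target, which runs the workflow definition from the base branch, and then pull the PR's own code explicitly inside the job, as the test_docker job above does. A minimal sketch of that combined pattern, assuming an illustrative job name and reusing only commands and expressions that appear in the diffs above:

on:
  pull_request_target:
    branches: [ "main", "dev" ]

jobs:
  example-job:            # illustrative job name, not taken from this PR
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Pull MLOps repository
        run: |
          pip install mlcflow
          # Check out the contributor's branch explicitly, since pull_request_target
          # itself runs against the base repository.
          mlc pull repo ${{ github.event.pull_request.head.repo.html_url }} --branch=${{ github.event.pull_request.head.ref }}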
32 changes: 23 additions & 9 deletions .github/workflows/test-mlperf-inference-resnet50.yml
@@ -58,16 +58,30 @@ jobs:
if: matrix.os != 'windows-latest'
run: |
mlcr --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name=gh_${{ matrix.os }}_x86 --model=resnet50 --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=500 --target_qps=1 -v --quiet
- name: Randomly Execute Step
id: random-check
# Step for Linux/MacOS
- name: Randomly Execute Step (Linux/MacOS)
if: runner.os != 'Windows'
run: |
RANDOM_NUMBER=$((RANDOM % 10))
echo "Random number is $RANDOM_NUMBER"
if [ "$RANDOM_NUMBER" -eq 0 ]; then
echo "run_step=true" >> $GITHUB_ENV
else
echo "run_step=false" >> $GITHUB_ENV
fi
RANDOM_NUMBER=$((RANDOM % 10))
echo "Random number is $RANDOM_NUMBER"
if [ "$RANDOM_NUMBER" -eq 0 ]; then
echo "run_step=true" >> $GITHUB_ENV
else
echo "run_step=false" >> $GITHUB_ENV
fi

# Step for Windows
- name: Randomly Execute Step (Windows)
if: runner.os == 'Windows'
run: |
$RANDOM_NUMBER = Get-Random -Maximum 10
Write-Host "Random number is $RANDOM_NUMBER"
if ($RANDOM_NUMBER -eq 0) {
Write-Host "run_step=true" | Out-File -FilePath $Env:GITHUB_ENV -Append
} else {
Write-Host "run_step=false" | Out-File -FilePath $Env:GITHUB_ENV -Append
}

- name: Retrieve secrets from Keeper
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
id: ksecrets
33 changes: 24 additions & 9 deletions .github/workflows/test-mlperf-inference-retinanet.yml
@@ -52,16 +52,31 @@ jobs:
if: matrix.os != 'windows-latest'
run: |
mlcr --tags=run,mlperf,inference,generate-run-cmds,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name=gh_${{ matrix.os }}_x86 --model=retinanet --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=5 --quiet -v --target_qps=1
- name: Randomly Execute Step
id: random-check

# Step for Linux/MacOS
- name: Randomly Execute Step (Linux/MacOS)
if: runner.os != 'Windows'
run: |
RANDOM_NUMBER=$((RANDOM % 10))
echo "Random number is $RANDOM_NUMBER"
if [ "$RANDOM_NUMBER" -eq 0 ]; then
echo "run_step=true" >> $GITHUB_ENV
else
echo "run_step=false" >> $GITHUB_ENV
fi

# Step for Windows
- name: Randomly Execute Step (Windows)
if: runner.os == 'Windows'
run: |
RANDOM_NUMBER=$((RANDOM % 10))
echo "Random number is $RANDOM_NUMBER"
if [ "$RANDOM_NUMBER" -eq 0 ]; then
echo "run_step=true" >> $GITHUB_ENV
else
echo "run_step=false" >> $GITHUB_ENV
fi
$RANDOM_NUMBER = Get-Random -Maximum 10
Write-Host "Random number is $RANDOM_NUMBER"
if ($RANDOM_NUMBER -eq 0) {
Write-Host "run_step=true" | Out-File -FilePath $Env:GITHUB_ENV -Append
} else {
Write-Host "run_step=false" | Out-File -FilePath $Env:GITHUB_ENV -Append
}

- name: Retrieve secrets from Keeper
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
id: ksecrets
4 changes: 2 additions & 2 deletions .github/workflows/test-mlperf-inference-rnnt.yml
@@ -30,10 +30,10 @@ jobs:
python-version: ${{ matrix.python-version }}
- name: Install dependencies on Unix Platforms
run: |
MLC_PULL_DEFAULT_MLOPS_REPO=no pip install cm4mlops
pip install mlcflow
- name: Pull MLOps repository
run: |
cm pull repo --url=${{ github.event.pull_request.head.repo.html_url }} --checkout=${{ github.event.pull_request.head.ref }}
mlc pull repo --url=${{ github.event.pull_request.head.repo.html_url }} --checkout=${{ github.event.pull_request.head.ref }}
mlcr --quiet --tags=get,sys-utils-cm
- name: Test MLPerf Inference RNNT
run: |
4 changes: 2 additions & 2 deletions .github/workflows/test-mlperf-inference-sdxl.yaml
@@ -19,7 +19,7 @@ jobs:
source gh_action/bin/deactivate || python3 -m venv gh_action
source gh_action/bin/activate
export MLC_REPOS=$HOME/GH_MLC
python3 -m pip install cm4mlops
cm pull repo
python3 -m pip install mlc-scripts
mlc pull repo
mlcr --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --docker --model=sdxl --backend=${{ matrix.backend }} --device=cuda --scenario=Offline --test_query_count=1 --precision=${{ matrix.precision }} --quiet --docker_it=no --docker_mlc_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --env.MLC_MLPERF_MODEL_SDXL_DOWNLOAD_TO_HOST=yes --clean
mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from self hosted Github actions - NVIDIARTX4090" --quiet --submission_dir=$HOME/gh_action_submissions