
Commit 5bcde4c

Merge pull request #105 from mlcommons/dev
Merge from dev
2 parents bd5ed77 + 4ac4fc4 · commit 5bcde4c

16 files changed: +161 -39 lines changed

.github/workflows/format.yml

Lines changed: 3 additions & 3 deletions
@@ -53,10 +53,10 @@ jobs:
       run: |
         HAS_CHANGES=$(git diff --staged --name-only)
         if [ ${#HAS_CHANGES} -gt 0 ]; then
-          git config --global user.name mlcommons-bot
-          git config --global user.email "mlcommons-bot@users.noreply.github.com"
+          # Use the GitHub actor's name and email
+          git config --global user.name "${GITHUB_ACTOR}"
+          git config --global user.email "${GITHUB_ACTOR}@users.noreply.github.com"
           # Commit changes
           git commit -m '[Automated Commit] Format Codebase'
-          # Use the PAT to push changes
           git push
         fi
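
As a rough sketch, the auto-format step after this change would read something like the following (reconstructed from the hunk above; the step name and exact indentation are assumptions, and GITHUB_ACTOR is the built-in GitHub Actions variable naming the user who triggered the run):

      # Sketch of the reworked step (step name and indentation assumed); commands are taken from the hunk above
      - name: Commit and push formatting changes
        run: |
          HAS_CHANGES=$(git diff --staged --name-only)
          if [ ${#HAS_CHANGES} -gt 0 ]; then
            # Use the GitHub actor's name and email for the automated commit
            git config --global user.name "${GITHUB_ACTOR}"
            git config --global user.email "${GITHUB_ACTOR}@users.noreply.github.com"
            git commit -m '[Automated Commit] Format Codebase'
            git push
          fi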

.github/workflows/test-mlperf-inference-dlrm.yml

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ jobs:
         export CM_REPOS=$HOME/GH_CM
         python3 -m pip install cm4mlops
         cm pull repo
-        cm run script --tags=run-mlperf,inference,_performance-only --pull_changes=yes --pull_inference_changes=yes --submitter="MLCommons" --model=dlrm-v2-99 --implementation=reference --backend=pytorch --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=1 --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --clean
+        cm run script --tags=run-mlperf,inference,_performance-only --pull_changes=yes --pull_inference_changes=yes --submitter="MLCommons" --model=dlrm-v2-99 --implementation=reference --backend=pytorch --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=1 --docker_it=no --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --clean

   build_intel:
     if: github.repository_owner == 'gateoverflow_off'

.github/workflows/test-mlperf-inference-gptj.yml

Lines changed: 1 addition & 1 deletion
@@ -27,5 +27,5 @@ jobs:
         python3 -m pip install cm4mlops
         cm pull repo
         cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --docker --pull_changes=yes --pull_inference_changes=yes --model=gptj-99 --backend=${{ matrix.backend }} --device=cuda --scenario=Offline --test_query_count=1 --precision=${{ matrix.precision }} --target_qps=1 --quiet --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --docker_cm_repo_branch=dev --adr.compiler.tags=gcc --beam_size=1 --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --get_platform_details=yes --implementation=reference --clean
-        cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from self hosted Github actions - NVIDIARTX4090" --quiet --submission_dir=$HOME/gh_action_submissions
+        cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from self hosted Github actions - NVIDIARTX4090" --quiet --submission_dir=$HOME/gh_action_submissions

.github/workflows/test-mlperf-inference-llama2.yml

Lines changed: 1 addition & 1 deletion
@@ -32,4 +32,4 @@ jobs:
         git config --global credential.helper store
         huggingface-cli login --token ${{ secrets.HF_TOKEN }} --add-to-git-credential
         cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --model=llama2-70b-99 --implementation=reference --backend=${{ matrix.backend }} --precision=${{ matrix.precision }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=0.001 --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --env.CM_MLPERF_MODEL_LLAMA2_70B_DOWNLOAD_TO_HOST=yes --clean
-        cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from self hosted Github actions" --quiet --submission_dir=$HOME/gh_action_submissions
+        cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from self hosted Github actions" --quiet --submission_dir=$HOME/gh_action_submissions

.github/workflows/test-mlperf-inference-mixtral.yml

Lines changed: 1 addition & 1 deletion
@@ -32,4 +32,4 @@ jobs:
         huggingface-cli login --token ${{ secrets.HF_TOKEN }} --add-to-git-credential
         cm pull repo
         cm run script --tags=run-mlperf,inference,_submission,_short --adr.inference-src.tags=_branch.dev --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --model=mixtral-8x7b --implementation=reference --batch_size=1 --precision=${{ matrix.precision }} --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --docker --quiet --test_query_count=3 --target_qps=0.001 --clean --env.CM_MLPERF_MODEL_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --env.CM_MLPERF_DATASET_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --adr.openorca-mbxp-gsm8k-combined-preprocessed.tags=_size.1
-        cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from self hosted Github actions - GO-phoenix" --quiet --submission_dir=$HOME/gh_action_submissions
+        cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from self hosted Github actions - GO-phoenix" --quiet --submission_dir=$HOME/gh_action_submissions

.github/workflows/test-mlperf-inference-resnet50.yml

Lines changed: 27 additions & 16 deletions
@@ -1,11 +1,10 @@
-# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
-# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
+# Run MLPerf inference ResNet50

 name: MLPerf inference ResNet50

 on:
   pull_request_target:
-    branches: [ "main", "dev", "mlperf-inference" ]
+    branches: [ "main", "dev" ]
     paths:
       - '.github/workflows/test-mlperf-inference-resnet50.yml'
       - '**'

@@ -39,10 +38,20 @@ jobs:
       if: matrix.os == 'windows-latest'
       run: |
         git config --system core.longpaths true
-    - name: Install dependencies
+
+    - name: Install cm4mlops on Windows
+      if: matrix.os == 'windows-latest'
+      run: |
+        $env:CM_PULL_DEFAULT_MLOPS_REPO = "no"; pip install cm4mlops
+    - name: Install dependencies on Unix Platforms
+      if: matrix.os != 'windows-latest'
+      run: |
+        CM_PULL_DEFAULT_MLOPS_REPO=no pip install cm4mlops
+
+    - name: Pull MLOps repo
       run: |
-        pip install "cmind @ git+https://git@github.com/mlcommons/ck.git@mlperf-inference#subdirectory=cm"
         cm pull repo --url=${{ github.event.pull_request.head.repo.html_url }} --checkout=${{ github.event.pull_request.head.ref }}
+
     - name: Test MLPerf Inference ResNet50 (Windows)
       if: matrix.os == 'windows-latest'
       run: |

@@ -51,17 +60,19 @@ jobs:
       if: matrix.os != 'windows-latest'
       run: |
         cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name=gh_${{ matrix.os }}_x86 --model=resnet50 --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=500 --target_qps=1 -v --quiet
+    - name: Retrieve secrets from Keeper
+      id: ksecrets
+      uses: Keeper-Security/ksm-action@master
+      with:
+        keeper-secret-config: ${{ secrets.KSM_CONFIG }}
+        secrets: |-
+          ubwkjh-Ii8UJDpG2EoU6GQ/field/Access Token > env:PAT # Fetch PAT and store in environment variable
+
     - name: Push Results
-      if: github.repository_owner == 'gateoverflow'
+      if: github.repository_owner == 'mlcommons'
       env:
-        USER: "GitHub Action"
-        EMAIL: "admin@gateoverflow.com"
-        GITHUB_TOKEN: ${{ secrets.TEST_RESULTS_GITHUB_TOKEN }}
+        GITHUB_TOKEN: ${{ env.PAT }}
       run: |
-        git config --global user.name "${{ env.USER }}"
-        git config --global user.email "${{ env.EMAIL }}"
-        git config --global credential.https://github.com.helper ""
-        git config --global credential.https://github.com.helper "!gh auth git-credential"
-        git config --global credential.https://gist.github.com.helper ""
-        git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
-        cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet
+        git config --global user.name mlcommons-bot
+        git config --global user.email "mlcommons-bot@users.noreply.github.com"
+        cm run script --tags=push,github,mlperf,inference,submission --env.CM_GITHUB_PAT=${{ env.PAT }} --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet
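
To make the new result-push flow easier to follow, here is a sketch of the two steps as they would appear after this change (assembled from the hunk above; indentation and surrounding context are assumptions). The Keeper Secrets Manager action exports the fetched access token as the PAT environment variable, which the push step then passes to the CM submission script:

    # Sketch of the new push flow (indentation assumed); contents taken from the hunk above
    - name: Retrieve secrets from Keeper
      id: ksecrets
      uses: Keeper-Security/ksm-action@master
      with:
        keeper-secret-config: ${{ secrets.KSM_CONFIG }}
        secrets: |-
          ubwkjh-Ii8UJDpG2EoU6GQ/field/Access Token > env:PAT  # Fetch PAT and store it in an environment variable

    - name: Push Results
      if: github.repository_owner == 'mlcommons'
      env:
        GITHUB_TOKEN: ${{ env.PAT }}
      run: |
        git config --global user.name mlcommons-bot
        git config --global user.email "mlcommons-bot@users.noreply.github.com"
        cm run script --tags=push,github,mlperf,inference,submission --env.CM_GITHUB_PAT=${{ env.PAT }} --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet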

.github/workflows/test-mlperf-inference-retinanet.yml

Lines changed: 12 additions & 5 deletions
@@ -1,11 +1,10 @@
-# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
-# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
+# Run MLPerf inference Retinanet

 name: MLPerf inference retinanet

 on:
   pull_request_target:
-    branches: [ "main", "dev", "mlperf-inference" ]
+    branches: [ "main", "dev" ]
     paths:
       - '.github/workflows/test-mlperf-inference-retinanet.yml'
       - '**'

@@ -39,10 +38,18 @@ jobs:
       if: matrix.os == 'windows-latest'
       run: |
         git config --system core.longpaths true
-    - name: Install dependencies
+    - name: Install cm4mlops on Windows
+      if: matrix.os == 'windows-latest'
+      run: |
+        $env:CM_PULL_DEFAULT_MLOPS_REPO = "no"; pip install cm4mlops
+    - name: Install dependencies on Unix Platforms
+      if: matrix.os != 'windows-latest'
+      run: |
+        CM_PULL_DEFAULT_MLOPS_REPO=no pip install cm4mlops
+    - name: Pull MLOps repo
       run: |
-        python3 -m pip install "cmind @ git+https://git@github.com/mlcommons/ck.git@mlperf-inference#subdirectory=cm"
         cm pull repo --url=${{ github.event.pull_request.head.repo.html_url }} --checkout=${{ github.event.pull_request.head.ref }}
+
     - name: Test MLPerf Inference Retinanet using ${{ matrix.backend }} on ${{ matrix.os }}
       if: matrix.os == 'windows-latest'
       run: |
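
Both this workflow and the ResNet50 one above switch to OS-conditional install steps. The two steps differ only in how they set CM_PULL_DEFAULT_MLOPS_REPO inline: the windows-latest runner defaults to PowerShell, which needs an explicit $env: assignment before the command, while the Linux/macOS runners use a POSIX shell that accepts the VAR=value prefix form. A minimal sketch (taken from the hunk above; indentation assumed):

    - name: Install cm4mlops on Windows
      if: matrix.os == 'windows-latest'
      run: |
        # PowerShell: set the variable for the session, then run pip
        $env:CM_PULL_DEFAULT_MLOPS_REPO = "no"; pip install cm4mlops
    - name: Install dependencies on Unix Platforms
      if: matrix.os != 'windows-latest'
      run: |
        # POSIX shell: the VAR=value prefix applies only to this command
        CM_PULL_DEFAULT_MLOPS_REPO=no pip install cm4mlops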

HISTORY.md

Lines changed: 72 additions & 0 deletions
@@ -0,0 +1,72 @@
+## Timeline of CM developments
+
+### **🚀 2022: Foundation and Early Developments**
+
+- **March 2022:** Grigori Fursin began developing **CM (Collective Mind)**, also referred to as **CK2**, as a successor to CK [at OctoML](https://github.com/octoml/ck/commits/master/?since=2022-03-01&until=2022-03-31).
+- **April 2022:** **Arjun Suresh** joined OctoML and collaborated with Grigori on developing **CM Automation** tools.
+- **May 2022:** The **CM CLI** and **Python interface** were successfully [implemented and stabilized](https://github.com/octoml/ck/commits/master/?since=2022-04-01&until=2022-05-31) by Grigori.
+
+---
+
+### **🛠️ July–September 2022: MLPerf Integration and First Submission**
+
+- Arjun completed the development of the **MLPerf Inference Script** within CM.
+- OctoML achieved its **first MLPerf Inference submission (v2.1)** using **CM Automation** ([progress here](https://github.com/octoml/ck/commits/master/?since=2022-06-01&until=2022-09-30)).
+
+---
+
+### **📊 October 2022 – March 2023: End-to-End Automation**
+
+- End-to-end MLPerf inference automations were successfully [completed in CM](https://github.com/octoml/ck/commits/master/?since=2022-10-01&until=2023-03-31).
+- **Additional benchmarks** and **Power Measurement support** were integrated into CM.
+- **cTuning** achieved a successful MLPerf Inference **v3.0 submission** using CM Automation.
+
+---
+
+### **🔄 April 2023: Transition and New Funding**
+
+- Arjun and Grigori departed OctoML and resumed **CM development** under funding from **cKnowledge.org** and **cTuning**.
+
+---
+
+### **🚀 April–October 2023: Expanded Support and Milestone Submission**
+
+- MLPerf inference automations were [extended](https://github.com/mlcommons/ck/commits/master?since=2023-04-01&until=2023-10-31) to support **NVIDIA implementations**.
+- **cTuning** achieved the **largest-ever MLPerf Inference submission (v3.1)** using CM Automation.
+
+---
+
+### **🤝 November 2023: MLCommons Partnership**
+
+- **MLCommons** began funding CM development to enhance support for **NVIDIA MLPerf inference** and introduce support for the **Intel** and **Qualcomm MLPerf inference** implementations.
+
+---
+
+### **🌐 October 2023 – March 2024: Multi-Platform Expansion**
+
+- MLPerf inference automations were [expanded](https://github.com/mlcommons/ck/commits/master?since=2023-10-01&until=2024-03-15) to support **NVIDIA, Intel, and Qualcomm implementations**.
+- **cTuning** completed the **MLPerf Inference v4.0 submission** using CM Automation.
+
+---
+
+### **📝 April 2024: Documentation Improvements**
+
+- MLCommons contracted **Arjun Suresh** via **GATEOverflow** to improve the **MLPerf inference documentation** and enhance CM Automation on various platforms.
+
+---
+
+### **👥 May 2024: Team Expansion**
+
+- **Anandhu Sooraj** joined MLCommons to collaborate with **Arjun Suresh** on CM development.
+
+---
+
+### **📖 June–December 2024: Enhanced Documentation and Automation**
+
+- A **dedicated documentation site** was launched for **MLPerf inference**.
+- **CM scripts** were developed for **MLPerf Automotive**.
+- **CM Docker support** was stabilized.
+- **GitHub Actions workflows** were added for **MLPerf inference reference implementations** and **NVIDIA integrations** ([see updates](https://github.com/mlcommons/mlperf-automations/commits/main?since=2024-06-01&until=2024-12-31)).
+
+---

VERSION

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-0.6.18
+0.6.19

git_commit_hash.txt

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-76796b4c3966b04011c3cb6118412516c90ba50b
+81816f94c4a396a012412cb3a1cf4096b4ad103e
