Sync Dev #154

Merged — 17 commits, Jan 26, 2025
1 change: 0 additions & 1 deletion .github/workflows/build_wheel.yml
@@ -9,7 +9,6 @@ on:
- dev
paths:
- VERSION
- setup.py

jobs:
build_wheels:
2 changes: 2 additions & 0 deletions .github/workflows/check-broken-links.yml
@@ -8,10 +8,12 @@ on:
jobs:
markdown-link-check:
runs-on: ubuntu-latest

# check out the latest version of the code
steps:
- uses: actions/checkout@v4


# Checks the status of hyperlinks in .md files in verbose mode
- name: Check links
uses: gaurav-nelson/github-action-markdown-link-check@v1
20 changes: 11 additions & 9 deletions .github/workflows/test-mlc-script-features.yml
@@ -35,42 +35,44 @@ jobs:

- name: Test Python venv
run: |
mlc run script --tags=install,python-venv --name=test --quiet
mlcr --tags=install,python-venv --name=test --quiet
mlc search cache --tags=get,python,virtual,name-test --quiet

- name: Test variations
run: |
mlc run script --tags=get,dataset,preprocessed,imagenet,_NHWC --quiet
mlcr --tags=get,dataset,preprocessed,imagenet,_NHWC --quiet
mlc search cache --tags=get,dataset,preprocessed,imagenet,-_NCHW
mlc search cache --tags=get,dataset,preprocessed,imagenet,-_NHWC

- name: Test versions
continue-on-error: true
if: runner.os == 'linux'
run: |
mlc run script --tags=get,generic-python-lib,_package.scipy --version=1.9.3 --quiet
mlcr --tags=get,generic-python-lib,_package.scipy --version=1.9.3 --quiet
test $? -eq 0 || exit $?
mlc run script --tags=get,generic-python-lib,_package.scipy --version=1.9.2 --quiet
mlcr --tags=get,generic-python-lib,_package.scipy --version=1.9.2 --quiet
test $? -eq 0 || exit $?
mlc run script --tags=get,generic-python-lib,_package.scipy --version=1.9.3 --quiet --only_execute_from_cache=True
test $? -eq 0 || exit 0
# Need to add find cache here
# mlcr --tags=get,generic-python-lib,_package.scipy --version=1.9.3 --quiet --only_execute_from_cache=True
# test $? -eq 0 || exit 0

- name: Test python install from src
run: |
mlc run script --tags=python,src,install,_shared --version=3.9.10 --quiet
mlcr --tags=python,src,install,_shared --version=3.9.10 --quiet
mlc search cache --tags=python,src,install,_shared,version-3.9.10

- name: Run docker container from dockerhub on linux
if: runner.os == 'linux'
run: |
mlc run script --tags=run,docker,container --adr.compiler.tags=gcc --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --image_name=cm-script-app-image-classification-onnx-py --env.MLC_DOCKER_RUN_SCRIPT_TAGS=app,image-classification,onnx,python --env.MLC_DOCKER_IMAGE_BASE=ubuntu:22.04 --env.MLC_DOCKER_IMAGE_REPO=cknowledge --quiet
mlcr --tags=run,docker,container --adr.compiler.tags=gcc --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --image_name=cm-script-app-image-classification-onnx-py --env.MLC_DOCKER_RUN_SCRIPT_TAGS=app,image-classification,onnx,python --env.MLC_DOCKER_IMAGE_BASE=ubuntu:22.04 --env.MLC_DOCKER_IMAGE_REPO=cknowledge --quiet

- name: Run docker container locally on linux
if: runner.os == 'linux'
run: |
mlc run script --tags=run,docker,container --adr.compiler.tags=gcc --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --image_name=mlc-script-app-image-classification-onnx-py --env.MLC_DOCKER_RUN_SCRIPT_TAGS=app,image-classification,onnx,python --env.MLC_DOCKER_IMAGE_BASE=ubuntu:22.04 --env.MLC_DOCKER_IMAGE_REPO=local --quiet
mlcr --tags=run,docker,container --adr.compiler.tags=gcc --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --image_name=mlc-script-app-image-classification-onnx-py --env.MLC_DOCKER_RUN_SCRIPT_TAGS=app,image-classification,onnx,python --env.MLC_DOCKER_IMAGE_BASE=ubuntu:22.04 --env.MLC_DOCKER_IMAGE_REPO=local --quiet

- name: Run MLPerf Inference Retinanet with native and virtual Python
if: runner.os == 'linux'
run: |
mlcr --tags=app,mlperf,inference,generic,_cpp,_retinanet,_onnxruntime,_cpu --adr.python.version_min=3.8 --adr.compiler.tags=gcc --adr.openimages-preprocessed.tags=_50 --scenario=Offline --mode=accuracy --test_query_count=10 --rerun --quiet

8 changes: 4 additions & 4 deletions .github/workflows/test-mlperf-inference-abtf-poc.yml
@@ -18,7 +18,7 @@ jobs:
python-version: [ "3.8", "3.12" ]
backend: [ "pytorch" ]
implementation: [ "python" ]
docker: [ "", " --docker --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --docker_dt" ]
docker: [ "", " --docker --docker_mlc_repo=${{ github.event.pull_request.head.repo.html_url }} --docker_mlc_repo_branch=${{ github.event.pull_request.head.ref }} --docker_dt" ]
extra-args: [ "--adr.compiler.tags=gcc", "--env.MLC_MLPERF_LOADGEN_BUILD_FROM_SRC=off" ]
exclude:
- os: ubuntu-24.04
@@ -28,16 +28,16 @@
- os: windows-latest
extra-args: "--adr.compiler.tags=gcc"
- os: windows-latest
docker: " --docker --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --docker_dt"
docker: " --docker --docker_mlc_repo=${{ github.event.pull_request.head.repo.html_url }} --docker_mlc_repo_branch=${{ github.event.pull_request.head.ref }} --docker_dt"
# windows docker image is not supported in CM yet
- os: macos-latest
python-version: "3.8"
- os: macos-13
python-version: "3.8"
- os: macos-latest
docker: " --docker --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --docker_dt"
docker: " --docker --docker_mlc_repo=${{ github.event.pull_request.head.repo.html_url }} --docker_mlc_repo_branch=${{ github.event.pull_request.head.ref }} --docker_dt"
- os: macos-13
docker: " --docker --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --docker_dt"
docker: " --docker --docker_mlc_repo=${{ github.event.pull_request.head.repo.html_url }} --docker_mlc_repo_branch=${{ github.event.pull_request.head.ref }} --docker_dt"

steps:
- uses: actions/checkout@v3
@@ -47,7 +47,18 @@ jobs:
if: matrix.os != 'windows-latest'
run: |
mlcr --tags=run,mlperf,inference,generate-run-cmds,_submission,_short --submitter="MLCommons" --hw_name=gh_${{ matrix.os }}_x86 --model=bert-99 --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=5 --precision=${{ matrix.precision }} --target_qps=1 -v --quiet
- name: Randomly Execute Step
id: random-check
run: |
RANDOM_NUMBER=$((RANDOM % 10))
echo "Random number is $RANDOM_NUMBER"
if [ "$RANDOM_NUMBER" -eq 0 ]; then
echo "run_step=true" >> $GITHUB_ENV
else
echo "run_step=false" >> $GITHUB_ENV
fi
- name: Retrieve secrets from Keeper
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
id: ksecrets
uses: Keeper-Security/ksm-action@master
with:
@@ -57,6 +68,7 @@
- name: Push Results
env:
GITHUB_TOKEN: ${{ env.PAT }}
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
run: |
git config --global user.name "mlcommons-bot"
git config --global user.email "mlcommons-bot@users.noreply.github.com"
@@ -48,12 +48,30 @@ jobs:
if: matrix.os != 'windows-latest'
run: |
mlcr --tags=app,mlperf,inference,mlcommons,cpp --submitter="MLCommons" --hw_name=gh_${{ matrix.os }} -v --quiet
- name: Randomly Execute Step
id: random-check
run: |
RANDOM_NUMBER=$((RANDOM % 10))
echo "Random number is $RANDOM_NUMBER"
if [ "$RANDOM_NUMBER" -eq 0 ]; then
echo "run_step=true" >> $GITHUB_ENV
else
echo "run_step=false" >> $GITHUB_ENV
fi
- name: Retrieve secrets from Keeper
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
id: ksecrets
uses: Keeper-Security/ksm-action@master
with:
keeper-secret-config: ${{ secrets.KSM_CONFIG }}
secrets: |-
ubwkjh-Ii8UJDpG2EoU6GQ/field/Access Token > env:PAT
- name: Push Results
if: github.repository_owner == 'gateoverflow'
env:
USER: "GitHub Action"
EMAIL: "admin@gateoverflow.com"
GITHUB_TOKEN: ${{ secrets.TEST_RESULTS_GITHUB_TOKEN }}
GITHUB_TOKEN: ${{ env.PAT }}
USER: mlcommons-bot
EMAIL: mlcommons-bot@users.noreply.github.com
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
run: |
git config --global user.name "${{ env.USER }}"
git config --global user.email "${{ env.EMAIL }}"
27 changes: 13 additions & 14 deletions .github/workflows/test-mlperf-inference-resnet50.yml
@@ -58,8 +58,18 @@ jobs:
if: matrix.os != 'windows-latest'
run: |
mlcr --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name=gh_${{ matrix.os }}_x86 --model=resnet50 --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=500 --target_qps=1 -v --quiet
- name: Randomly Execute Step
id: random-check
run: |
RANDOM_NUMBER=$((RANDOM % 10))
echo "Random number is $RANDOM_NUMBER"
if [ "$RANDOM_NUMBER" -eq 0 ]; then
echo "run_step=true" >> $GITHUB_ENV
else
echo "run_step=false" >> $GITHUB_ENV
fi
- name: Retrieve secrets from Keeper
if: github.repository_owner == 'mlcommons'
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
id: ksecrets
uses: Keeper-Security/ksm-action@master
with:
@@ -69,7 +79,7 @@
- name: Push Results
env:
GITHUB_TOKEN: ${{ env.PAT }}
if: github.repository_owner == 'mlcommons'
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
run: |
git config --global user.name "mlcommons-bot"
git config --global user.email "mlcommons-bot@users.noreply.github.com"
@@ -78,15 +88,4 @@
git config --global credential.https://gist.github.com.helper ""
git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet
- name: Push Results
env:
GITHUB_TOKEN: ${{ secrets.PAT1 }}
if: github.repository_owner == 'gateoverflow'
run: |
git config --global user.name "mlcommons-bot"
git config --global user.email "mlcommons-bot@users.noreply.github.com"
git config --global credential.https://github.com.helper ""
git config --global credential.https://github.com.helper "!gh auth git-credential"
git config --global credential.https://gist.github.com.helper ""
git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/gateoverflow/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet

12 changes: 12 additions & 0 deletions .github/workflows/test-mlperf-inference-retinanet.yml
@@ -52,7 +52,18 @@ jobs:
if: matrix.os != 'windows-latest'
run: |
mlcr --tags=run,mlperf,inference,generate-run-cmds,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name=gh_${{ matrix.os }}_x86 --model=retinanet --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=5 --quiet -v --target_qps=1
- name: Randomly Execute Step
id: random-check
run: |
RANDOM_NUMBER=$((RANDOM % 10))
echo "Random number is $RANDOM_NUMBER"
if [ "$RANDOM_NUMBER" -eq 0 ]; then
echo "run_step=true" >> $GITHUB_ENV
else
echo "run_step=false" >> $GITHUB_ENV
fi
- name: Retrieve secrets from Keeper
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
id: ksecrets
uses: Keeper-Security/ksm-action@master
with:
@@ -62,6 +73,7 @@
- name: Push Results
env:
GITHUB_TOKEN: ${{ env.PAT }}
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
run: |
git config --global user.name "mlcommons-bot"
git config --global user.email "mlcommons-bot@users.noreply.github.com"
20 changes: 17 additions & 3 deletions .github/workflows/test-mlperf-inference-tvm-resnet50.yml
@@ -34,8 +34,19 @@ jobs:
mlcr --quiet --tags=get,sys-utils-cm
- name: Test MLC Tutorial TVM
run: |
mlcr --tags=run-mlperf,inference,_submission,_short --adr.python.name=mlperf --adr.python.version_min=3.8 --submitter=Community --implementation=python --hw_name=default --model=resnet50 --backend=tvm-onnx --device=cpu --scenario=Offline --mode=accuracy --test_query_count=5 --clean --quiet ${{ matrix.extra-options }}
mlcr --tags=run-mlperf,inference,_submission,_short --adr.python.name=mlperf --adr.python.version_min=3.8 --submitter=MLCommons --implementation=python --hw_name=gh_ubuntu-latest --model=resnet50 --backend=tvm-onnx --device=cpu --scenario=Offline --mode=accuracy --test_query_count=5 --clean --quiet ${{ matrix.extra-options }}
- name: Randomly Execute Step
id: random-check
run: |
RANDOM_NUMBER=$((RANDOM % 10))
echo "Random number is $RANDOM_NUMBER"
if [ "$RANDOM_NUMBER" -eq 0 ]; then
echo "run_step=true" >> $GITHUB_ENV
else
echo "run_step=false" >> $GITHUB_ENV
fi
- name: Retrieve secrets from Keeper
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
id: ksecrets
uses: Keeper-Security/ksm-action@master
with:
@@ -45,9 +56,12 @@
- name: Push Results
env:
GITHUB_TOKEN: ${{ env.PAT }}
USER: mlcommons-bot
EMAIL: mlcommons-bot@users.noreply.github.com
if: github.repository_owner == 'mlcommons' && env.run_step == 'true'
run: |
git config --global user.name "mlcommons-bot"
git config --global user.email "mlcommons-bot@users.noreply.github.com"
git config --global user.name "${{ env.USER }}"
git config --global user.email "${{ env.EMAIL }}"
git config --global credential.https://github.com.helper ""
git config --global credential.https://github.com.helper "!gh auth git-credential"
git config --global credential.https://gist.github.com.helper ""
20 changes: 12 additions & 8 deletions README.md
@@ -2,7 +2,7 @@

[![License](https://img.shields.io/badge/License-Apache%202.0-green)](LICENSE.md)
[![Downloads](https://static.pepy.tech/badge/mlcflow)](https://pepy.tech/project/mlcflow)
[![MLC Script Automation Test](https://github.com/mlcommons/mlperf-automations/actions/workflows/test-mlc-script-features.yml/badge.svg)](https://github.com/mlcommons/mlperf-automations/actions/workflows/test-mlc-script-features.yml)
[![MLC script automation features test](https://github.com/mlcommons/mlperf-automations/actions/workflows/test-mlc-script-features.yml/badge.svg?cache-bust=1)](https://github.com/mlcommons/mlperf-automations/actions/workflows/test-mlc-script-features.yml)
[![MLPerf Inference ABTF POC Test](https://github.com/mlcommons/mlperf-automations/actions/workflows/test-mlperf-inference-abtf-poc.yml/badge.svg)](https://github.com/mlcommons/mlperf-automations/actions/workflows/test-mlperf-inference-abtf-poc.yml)


@@ -23,12 +23,9 @@ Starting **January 2025**, MLPerf automation scripts are built on the powerful [

## 🧰 MLCFlow (MLC) Automations

Building on the foundation of its predecessor, the **Collective Mind (CM)** framework, MLCFlow takes ML workflows to the next level by streamlining complex tasks like Docker container management and caching. The `mlcflow` package, written in Python, provides seamless support through both a command-line interface (CLI) and an API, making it easy to access and manage automation scripts.

### Core Automations
- **Script Automation** – Automates script execution across different environments.
- **Cache Management** – Manages reusable cached results to accelerate workflow processes.
Building upon the robust foundation of its predecessor, the Collective Mind (CM) framework, MLCFlow elevates machine learning workflows by simplifying complex tasks such as Docker container management and caching. Written in Python, the mlcflow package offers a versatile interface, supporting both a user-friendly command-line interface (CLI) and a flexible API for effortless automation script management.

At its core, MLCFlow relies on a single powerful automation, the Script, which is extended by two actions: CacheAction and DockerAction. Together, these components provide streamlined functionality to optimize and enhance your ML workflow automation experience.
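A minimal sketch of this flow, assuming the command tags used in the CI workflows above (illustrative examples, not commands prescribed by this README):

```bash
# Script automation: run a script selected by tags; its outputs are cached for reuse
mlcr --tags=install,python-venv --name=test --quiet

# CacheAction: list the cache entries produced by the run
mlc search cache --tags=get,python,virtual,name-test --quiet

# DockerAction: run a containerized script flow (arguments trimmed; see the workflow above for the full invocation)
mlcr --tags=run,docker,container --adr.compiler.tags=gcc --quiet
```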

---

@@ -40,10 +37,17 @@ We welcome contributions from the community! To contribute:

Your contributions help drive the project forward!


---

## 💬 Join the Discussion
Connect with us on the [MLCommons Benchmark Infra Discord channel](https://discord.gg/T9rHVwQFNX) to engage in discussions about **MLCFlow** and **MLPerf Automations**. We’d love to hear your thoughts, questions, and ideas!

---

## 📰 News
Stay tuned for upcoming updates and announcements.
## 📰 Stay Updated
Keep track of the latest development progress and tasks on our [MLPerf Automations Development Board](https://github.com/orgs/mlcommons/projects/50/views/7?sliceBy%5Bvalue%5D=_noValue).
Stay tuned for exciting updates and announcements!

---
