Merge with dev #99

Merged: 87 commits, Jan 3, 2025

Commits (87)
9526dac
Fixes for igbh dataset download
arjunsuresh Dec 12, 2024
f277ba3
Merge pull request #1 from GATEOverflow/dev
arjunsuresh Dec 12, 2024
663d6be
fixes for rgat reference implementation
arjunsuresh Dec 12, 2024
e0b6ded
Added tqdm deps for get-dataset-igbh
arjunsuresh Dec 12, 2024
45a08cb
Fix old repo name usage
arjunsuresh Dec 12, 2024
0b5bcfe
Fix for avoiding user prompt in download-igbh
arjunsuresh Dec 12, 2024
25d903b
Remove deprecated gui usage
arjunsuresh Dec 12, 2024
ccb1cef
run on pull request
anandhu-eng Dec 12, 2024
036b4e9
change base branch to dev
anandhu-eng Dec 12, 2024
1daee91
Merge pull request #2 from anandhu-eng/clean_links
arjunsuresh Dec 12, 2024
006f23f
Cleanup for mlperf-inference-rgat
arjunsuresh Dec 12, 2024
8e896ed
Fix torch and numpy version for mlperf-inference-rgat
arjunsuresh Dec 12, 2024
1717a56
Support pytorch 2.4 for app-mlperf-inference-rgat
arjunsuresh Dec 12, 2024
0bc8416
Merge branch 'mlcommons:dev' into dev
arjunsuresh Dec 12, 2024
17d7c08
Support igbh dataset from host
arjunsuresh Dec 12, 2024
58b3bfb
Fix fstring formatting in app-mlperf-inference-mlcommons-python
arjunsuresh Dec 12, 2024
c93b9c2
Fix use_dataset_from_host for igbh
arjunsuresh Dec 12, 2024
64f69e6
Remove torchvision deps for mlperf-inference-rgat
arjunsuresh Dec 12, 2024
c1e00cc
Remove torchvision deps for mlperf inference rgat cuda
arjunsuresh Dec 12, 2024
4ef87fa
Create test-mlperf-inference-rgat.yml
arjunsuresh Dec 12, 2024
31c3143
Fix default cm-repo-branch for build-dockerfile
arjunsuresh Dec 12, 2024
b3deeef
Merge pull request #49 from GATEOverflow/dev
arjunsuresh Dec 12, 2024
b899c20
capture docker tool
anandhu-eng Dec 13, 2024
71fd59a
docker tool -> container tool
anandhu-eng Dec 13, 2024
ddee591
Merge pull request #50 from anandhu-eng/podmanCapture
arjunsuresh Dec 13, 2024
216081d
[Automated Commit] Format Codebase (#51)
arjunsuresh Dec 13, 2024
9136723
Update test-mlperf-inference-rgat.yml
arjunsuresh Dec 13, 2024
edcf36c
Test (#52)
arjunsuresh Dec 13, 2024
a1b8a48
Test (#53)
arjunsuresh Dec 14, 2024
48f7a91
Update VERSION | rgat-fixes
arjunsuresh Dec 14, 2024
3d9715f
Updated git_commit_hash.txt
mlcommons-bot Dec 14, 2024
90a4412
Update MLPerf automation repo in github actions (#54)
arjunsuresh Dec 19, 2024
af15e72
Support nvmitten for aarch64 (#55)
arjunsuresh Dec 19, 2024
8b92713
Increment version to 0.6.13
mlcommons-bot Dec 19, 2024
b3a34ec
Updated git_commit_hash.txt
mlcommons-bot Dec 19, 2024
3f25d3c
Copy bert model for nvidia-mlperf-inference implementation instead of…
arjunsuresh Dec 20, 2024
a09686d
Update version (#57)
arjunsuresh Dec 20, 2024
e6ad511
Updated git_commit_hash.txt
mlcommons-bot Dec 20, 2024
f399c2c
Update github actions - use master branch of inference repository (#58)
arjunsuresh Dec 20, 2024
d2db3b4
Migrate MLPerf inference unofficial results repo to MLCommons (#59)
arjunsuresh Dec 21, 2024
2b1e23c
Updated git_commit_hash.txt
mlcommons-bot Dec 21, 2024
3439a72
Create reset-fork.yml
arjunsuresh Dec 21, 2024
5ddfc95
Update pyproject.toml
arjunsuresh Dec 21, 2024
f5eb712
Update VERSION
arjunsuresh Dec 21, 2024
17833df
Updated git_commit_hash.txt
mlcommons-bot Dec 21, 2024
cfd76e1
Fix scc24 github action (#61)
arjunsuresh Dec 21, 2024
d0c6c3e
Fix dangling softlink issue with nvidia-mlperf-inference-bert (#64)
arjunsuresh Dec 21, 2024
188708b
Update VERSION
arjunsuresh Dec 21, 2024
26cf833
Updated git_commit_hash.txt
mlcommons-bot Dec 21, 2024
7f48c88
Support pull_inference_changes in run-mlperf-inference-app (#65)
arjunsuresh Dec 21, 2024
b051bb1
Added pull_inference_changes support to run-mlperf-inference-app
arjunsuresh Dec 21, 2024
7bc5f0d
Fix github action failures (#68)
arjunsuresh Dec 22, 2024
225220c
Update test-cm4mlops-wheel-ubuntu.yml
arjunsuresh Dec 22, 2024
bb79019
support --outdirname for ml models, partially fixed #63 (#71)
sahilavaran Dec 23, 2024
a9e8329
Update test-cm-based-submission-generation.yml (#73)
arjunsuresh Dec 23, 2024
7dcef66
Fix exit code for docker run failures (#74)
arjunsuresh Dec 23, 2024
d28df7e
Support --outdirname for datasets fixes #63 (#75)
sahilavaran Dec 23, 2024
cf575d0
Support version in preprocess-submission, cleanups for coco2014 scrip…
arjunsuresh Dec 23, 2024
1fc32ab
Fixed stable-diffusion-xl name in SUT configs
arjunsuresh Dec 24, 2024
79fb471
Fix tensorrt detect on aarch64
arjunsuresh Dec 24, 2024
5189696
Added torch deps for get-ml-model-gptj-nvidia
arjunsuresh Dec 24, 2024
76796b4
Update VERSION
arjunsuresh Dec 24, 2024
a90475d
Updated git_commit_hash.txt
mlcommons-bot Dec 24, 2024
3551660
Fix coco2014 sample ids path
arjunsuresh Dec 25, 2024
c465378
Fixes for podman support (#79)
arjunsuresh Dec 27, 2024
c3550d2
Not use SHELL command in CM docker (#82)
arjunsuresh Dec 27, 2024
f79e2f3
Support adding dependent CM script commands in CM dockerfile
arjunsuresh Dec 27, 2024
6ba3117
Fixes for igbh dataset detection (#85)
arjunsuresh Dec 27, 2024
467517e
Merge branch 'main' into dev
arjunsuresh Dec 27, 2024
c52956b
Copied mlperf automotive CM scripts (#86)
arjunsuresh Dec 28, 2024
ca9263a
Generated docker image name - always lower case (#87)
anandhu-eng Dec 29, 2024
664215f
Fixes for podman (#88)
arjunsuresh Dec 29, 2024
59785a1
Dont use ulimit in docker extra args
arjunsuresh Dec 29, 2024
b3149a2
CM_MLPERF_PERFORMANCE_SAMPLE_COUNT -> CM_MLPERF_LOADGEN_PERFORMANCE_S…
arjunsuresh Dec 29, 2024
477f80f
Fix env corruption in docker run command (#92)
arjunsuresh Dec 30, 2024
48ea6b4
Fixes for R-GAT submission generation (#93)
arjunsuresh Dec 31, 2024
5faf15a
Fix mounting of host cache entries inside docker for mlperf-inference…
arjunsuresh Jan 1, 2025
19aed59
Fixes for podman run, github actions (#95)
arjunsuresh Jan 2, 2025
d3babb6
Fix docker detached mode with podman
arjunsuresh Jan 2, 2025
ed8d525
[Automated Commit] Format Codebase
mlcommons-bot Jan 2, 2025
5e8daea
Fix bug in docker container detect
arjunsuresh Jan 2, 2025
53ebc8d
[Automated Commit] Format Codebase
mlcommons-bot Jan 2, 2025
e20bcae
Update format.yml
arjunsuresh Jan 2, 2025
d9551f8
Fixed merge conflicts
arjunsuresh Jan 2, 2025
9825d35
Fix SUT name update in mlperf-inference-submission-generation (#96)
arjunsuresh Jan 2, 2025
7be8b1c
Update format.yml
arjunsuresh Jan 2, 2025
62ed33d
Added submit-mlperf-results CM script for automatic mlperf result sub…
arjunsuresh Jan 3, 2025
17 changes: 9 additions & 8 deletions .github/workflows/format.yml
@@ -3,7 +3,7 @@ name: "Code formatting"
on:
push:
branches:
- "**"
- "**"

env:
python_version: "3.9"
@@ -12,16 +12,17 @@ jobs:
format-code:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
ssh-key: ${{ secrets.DEPLOY_KEY }}

- name: Set up Python ${{ env.python_version }}
uses: actions/setup-python@v3
with:
python-version: ${{ env.python_version }}

- name: Format modified python files
- name: Format modified Python files
env:
filter: ${{ github.event.before }}
run: |
@@ -48,14 +49,14 @@ jobs:
fi
done

- name: Commit and Push
- name: Commit and push changes
run: |
HAS_CHANGES=$(git diff --staged --name-only)
if [ ${#HAS_CHANGES} -gt 0 ]; then
git config --global user.name mlcommons-bot
git config --global user.email "mlcommons-bot@users.noreply.github.com"
# Commit changes
git commit -m '[Automated Commit] Format Codebase'
git push

fi
# Use the PAT to push changes
git push
fi
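The commit step in the format.yml diff above fires only when `git diff --staged --name-only` printed something (`[ ${#HAS_CHANGES} -gt 0 ]`). A minimal Python sketch of that guard, with a hypothetical function name (the workflow itself uses plain shell):

```python
def should_commit(staged_output: str) -> bool:
    """Mirror the workflow's `[ ${#HAS_CHANGES} -gt 0 ]` test: commit and
    push only when `git diff --staged --name-only` printed at least one
    character (i.e. at least one staged file)."""
    return len(staged_output) > 0


print(should_commit("automation/script/module_misc.py\n"))  # True
print(should_commit(""))  # False
```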
6 changes: 3 additions & 3 deletions .github/workflows/run-individual-script-tests.yml
@@ -3,10 +3,10 @@ name: Individual CM script Tests

on:
pull_request:
branches: [ "main", "mlperf-inference", "dev" ]
branches: [ "main", "dev" ]
paths:
- 'script/**_cm.json'
- 'script/**_cm.yml'
- 'script/**_cm.yaml'

jobs:
run-script-tests:
@@ -34,4 +34,4 @@ jobs:
done
python3 -m pip install "cmind @ git+https://git@github.com/mlcommons/ck.git@mlperf-inference#subdirectory=cm"
cm pull repo --url=${{ github.event.pull_request.head.repo.html_url }} --checkout=${{ github.event.pull_request.head.ref }}
DOCKER_CM_REPO=${{ github.event.pull_request.head.repo.html_url }} DOCKER_CM_REPO_BRANCH=${{ github.event.pull_request.head.ref }} TEST_INPUT_INDEX=${{ matrix.test-input-index }} python3 tests/script/process_tests.py ${{ steps.getfile.outputs.files }}
DOCKER_CM_REPO=${{ github.event.pull_request.head.repo.html_url }} DOCKER_CM_REPO_BRANCH=${{ github.event.pull_request.head.ref }} TEST_INPUT_INDEX=${{ matrix.test-input-index }} python3 script/test-cm-core/src/script/process_tests.py ${{ steps.getfile.outputs.files }}
6 changes: 3 additions & 3 deletions .github/workflows/test-mlperf-inference-abtf-poc.yml
@@ -1,10 +1,10 @@
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: MLPerf inference ABTF POC Test
name: MLPerf Automotive POC Test

on:
pull_request:
branches: [ "main", "mlperf-inference" ]
branches: [ "main", "dev" ]
paths:
- '.github/workflows/test-mlperf-inference-abtf-poc.yml'
- '**'
@@ -55,7 +55,7 @@ jobs:
run: |
pip install "cmind @ git+https://git@github.com/mlcommons/ck.git@mlperf-inference#subdirectory=cm"
cm pull repo --url=${{ github.event.pull_request.head.repo.html_url }} --checkout=${{ github.event.pull_request.head.ref }}
cm pull repo mlcommons@cm4abtf --branch=poc
#cm pull repo mlcommons@cm4abtf --branch=poc

- name: Install Docker on macos
if: runner.os == 'macOS-deactivated'
2 changes: 1 addition & 1 deletion .github/workflows/test-mlperf-inference-mixtral.yml
@@ -31,5 +31,5 @@ jobs:
git config --global credential.helper store
huggingface-cli login --token ${{ secrets.HF_TOKEN }} --add-to-git-credential
cm pull repo
cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --model=mixtral-8x7b --implementation=reference --batch_size=1 --precision=${{ matrix.precision }} --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --docker --quiet --test_query_count=3 --target_qps=0.001 --clean --env.CM_MLPERF_MODEL_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --env.CM_MLPERF_DATASET_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --adr.openorca-mbxp-gsm8k-combined-preprocessed.tags=_size.1
cm run script --tags=run-mlperf,inference,_submission,_short --adr.inference-src.tags=_branch.dev --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --model=mixtral-8x7b --implementation=reference --batch_size=1 --precision=${{ matrix.precision }} --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --docker --quiet --test_query_count=3 --target_qps=0.001 --clean --env.CM_MLPERF_MODEL_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --env.CM_MLPERF_DATASET_MIXTRAL_8X7B_DOWNLOAD_TO_HOST=yes --adr.openorca-mbxp-gsm8k-combined-preprocessed.tags=_size.1
cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from self hosted Github actions - GO-phoenix" --quiet --submission_dir=$HOME/gh_action_submissions
2 changes: 1 addition & 1 deletion .github/workflows/test-mlperf-inference-resnet50.yml
@@ -64,4 +64,4 @@ jobs:
git config --global credential.https://github.com.helper "!gh auth git-credential"
git config --global credential.https://gist.github.com.helper ""
git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet
cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet
2 changes: 1 addition & 1 deletion .github/workflows/test-mlperf-inference-retinanet.yml
@@ -64,4 +64,4 @@ jobs:
git config --global credential.https://github.com.helper "!gh auth git-credential"
git config --global credential.https://gist.github.com.helper ""
git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from Retinanet GH action on ${{ matrix.os }}" --quiet
cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from Retinanet GH action on ${{ matrix.os }}" --quiet
4 changes: 2 additions & 2 deletions .github/workflows/test-mlperf-inference-rgat.yml
@@ -31,7 +31,7 @@ jobs:
cm pull repo --url=${{ github.event.pull_request.head.repo.html_url }} --checkout=${{ github.event.pull_request.head.ref }}
- name: Test MLPerf Inference R-GAT using ${{ matrix.backend }} on ${{ matrix.os }}
run: |
cm run script --tags=run,mlperf,inference,generate-run-cmds,_submission,_short --pull_changes=yes --pull_inference_changes=yes --submitter="MLCommons" --hw_name=gh_${{ matrix.os }}_x86 --model=rgat --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=500 --adr.compiler.tags=gcc --category=datacenter --quiet -v --target_qps=1
cm run script --tags=run,mlperf,inference,generate-run-cmds,_submission,_short --adr.inference-src.tags=_branch.dev --pull_changes=yes --pull_inference_changes=yes --submitter="MLCommons" --hw_name=gh_${{ matrix.os }}_x86 --model=rgat --implementation=${{ matrix.implementation }} --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=500 --adr.compiler.tags=gcc --category=datacenter --quiet -v --target_qps=1
- name: Push Results
if: github.repository_owner == 'gateoverflow'
env:
@@ -45,4 +45,4 @@ jobs:
git config --global credential.https://github.com.helper "!gh auth git-credential"
git config --global credential.https://gist.github.com.helper ""
git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from R-GAT GH action on ${{ matrix.os }}" --quiet
cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R-GAT GH action on ${{ matrix.os }}" --quiet
2 changes: 1 addition & 1 deletion .github/workflows/test-mlperf-inference-sdxl.yaml
@@ -22,4 +22,4 @@ jobs:
python3 -m pip install cm4mlops
cm pull repo
cm run script --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --docker --model=sdxl --backend=${{ matrix.backend }} --device=cuda --scenario=Offline --test_query_count=1 --precision=${{ matrix.precision }} --quiet --docker_it=no --docker_cm_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --env.CM_MLPERF_MODEL_SDXL_DOWNLOAD_TO_HOST=yes --clean
cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=dev --commit_message="Results from self hosted Github actions - NVIDIARTX4090" --quiet --submission_dir=$HOME/gh_action_submissions
cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from self hosted Github actions - NVIDIARTX4090" --quiet --submission_dir=$HOME/gh_action_submissions
@@ -2,7 +2,7 @@ name: MLPerf Inference Nvidia implementations

on:
schedule:
- cron: "08 01 * * */3" #to be adjusted
- cron: "58 23 * * *" #to be adjusted

jobs:
run_nvidia:
@@ -17,20 +17,31 @@ jobs:
strategy:
fail-fast: false
matrix:
system: [ "GO-spr", "phoenix-Amd-Am5", "GO-i9" ]
# system: [ "GO-spr", "phoenix-Amd-Am5", "GO-i9", "mlc-server" ]
system: [ "mlc-server" ]
python-version: [ "3.12" ]
model: [ "resnet50", "retinanet", "bert-99", "bert-99.9", "gptj-99.9", "3d-unet-99.9", "sdxl" ]
exclude:
- model: gptj-99.9

steps:
- name: Test MLPerf Inference NVIDIA ${{ matrix.model }}
env:
gpu_name: rtx_4090
run: |
# Set hw_name based on matrix.system
if [ "${{ matrix.system }}" = "GO-spr" ]; then
hw_name="RTX4090x2"
gpu_name=rtx_4090
docker_string=" --docker"
elif [ "${{ matrix.system }}" = "mlc-server" ]; then
hw_name="H100x8"
gpu_name=h100
docker_string=" "
else
hw_name="RTX4090x1"
gpu_name=rtx_4090
docker_string=" --docker"
fi

if [ -f "gh_action/bin/deactivate" ]; then source gh_action/bin/deactivate; fi
@@ -40,6 +51,6 @@ jobs:
pip install --upgrade cm4mlops
cm pull repo

cm run script --tags=run-mlperf,inference,_all-scenarios,_submission,_full,_r4.1-dev --preprocess_submission=yes --pull_changes=yes --pull_inference_changes=yes --execution_mode=valid --gpu_name=rtx_4090 --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=$hw_name --implementation=nvidia --backend=tensorrt --category=datacenter,edge --division=closed --docker_dt=yes --docker_it=no --docker_cm_repo=mlcommons@mlperf-automations --docker_cm_repo_branch=dev --adr.compiler.tags=gcc --device=cuda --use_model_from_host=yes --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet
cm run script --tags=run-mlperf,inference,_all-scenarios,_submission,_full,_r4.1-dev --preprocess_submission=yes --pull_changes=yes --pull_inference_changes=yes --execution_mode=valid --gpu_name=$gpu_name --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=$hw_name --implementation=nvidia --backend=tensorrt --category=datacenter,edge --division=closed --docker_dt=yes --docker_it=no --docker_cm_repo=mlcommons@mlperf-automations --docker_cm_repo_branch=dev --adr.compiler.tags=gcc --device=cuda --use_model_from_host=yes --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean $docker_string --quiet

cm run script --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from GH action on NVIDIA_$hw_name" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=$hw_name
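The shell block added to the NVIDIA workflow above picks `hw_name`, `gpu_name`, and the docker flag from `matrix.system` via an if/elif/else chain. The same mapping can be sketched in Python; the table values come from the workflow, while the function name and dict layout are hypothetical:

```python
# Per-system settings mirrored from the workflow's if/elif/else chain.
SYSTEM_SETTINGS = {
    "GO-spr": {"hw_name": "RTX4090x2", "gpu_name": "rtx_4090", "docker_string": " --docker"},
    "mlc-server": {"hw_name": "H100x8", "gpu_name": "h100", "docker_string": " "},
}
# Any other system (e.g. "phoenix-Amd-Am5", "GO-i9") takes the else branch.
DEFAULT_SETTINGS = {"hw_name": "RTX4090x1", "gpu_name": "rtx_4090", "docker_string": " --docker"}


def settings_for(system: str) -> dict:
    """Return the per-system run settings, falling back to the default."""
    return SYSTEM_SETTINGS.get(system, DEFAULT_SETTINGS)
```

A dict lookup like this keeps the system-to-hardware table in one place, which is why the workflow was refactored from a hard-coded `--gpu_name=rtx_4090` to variables.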
30 changes: 19 additions & 11 deletions automation/script/module_misc.py
@@ -1902,6 +1902,9 @@ def docker(i):

noregenerate_docker_file = i.get('docker_noregenerate', False)
norecreate_docker_image = i.get('docker_norecreate', True)
recreate_docker_image = i.get('docker_recreate', False)
if recreate_docker_image: # force recreate
norecreate_docker_image = False

if i.get('docker_skip_build', False):
noregenerate_docker_file = True
@@ -1974,8 +1977,6 @@ def docker(i):
env['CM_DOCKER_CACHE'] = docker_cache

image_repo = i.get('docker_image_repo', '')
if image_repo == '':
image_repo = 'local'

# Host system needs to have docker
r = self_module.cmind.access({'action': 'run',
@@ -2169,7 +2170,7 @@

# env keys corresponding to container mounts are explicitly passed to
# the container run cmd
container_env_string = ''
container_env = {}
for index in range(len(mounts)):
mount = mounts[index]
# Since windows may have 2 :, we search from the right
@@ -2211,7 +2212,6 @@
new_container_mount, new_container_mount_env = get_container_path(
env[tmp_value])
container_env_key = new_container_mount_env
# container_env_string += " --env.{}={} ".format(tmp_value, new_container_mount_env)
else: # we skip those mounts
mounts[index] = None
skip = True
@@ -2223,8 +2223,7 @@
continue
mounts[index] = new_host_mount + ":" + new_container_mount
if host_env_key:
container_env_string += " --env.{}={} ".format(
host_env_key, container_env_key)
container_env[host_env_key] = container_env_key

for v in docker_input_mapping:
if docker_input_mapping[v] == host_env_key:
@@ -2255,10 +2254,16 @@
for key in proxy_keys:
if os.environ.get(key, '') != '':
value = os.environ[key]
container_env_string += " --env.{}={} ".format(key, value)
container_env[key] = value
env['+ CM_DOCKER_BUILD_ARGS'].append(
"{}={}".format(key, value))

if container_env:
if not i_run_cmd.get('env'):
i_run_cmd['env'] = container_env
else:
i_run_cmd['env'] = {**i_run_cmd['env'], **container_env}

docker_use_host_group_id = i.get(
'docker_use_host_group_id',
docker_settings.get('use_host_group_id'))
@@ -2400,8 +2405,7 @@ def docker(i):
'docker_run_cmd_prefix': i.get('docker_run_cmd_prefix', '')})
if r['return'] > 0:
return r
run_cmd = r['run_cmd_string'] + ' ' + \
container_env_string + ' --docker_run_deps '
run_cmd = r['run_cmd_string'] + ' ' + ' --docker_run_deps '

env['CM_RUN_STATE_DOCKER'] = True

@@ -2432,10 +2436,8 @@
'docker_os_version': docker_os_version,
'cm_repo': cm_repo,
'env': env,
'image_repo': image_repo,
'interactive': interactive,
'mounts': mounts,
'image_name': image_name,
# 'image_tag': script_alias,
'image_tag_extra': image_tag_extra,
'detached': detached,
Expand All @@ -2452,6 +2454,12 @@ def docker(i):
}
}

if image_repo:
cm_docker_input['image_repo'] = image_repo

if image_name:
cm_docker_input['image_name'] = image_name

if all_gpus:
cm_docker_input['all_gpus'] = True

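The `module_misc.py` hunks above replace string concatenation of `--env.KEY=VAL` flags (`container_env_string`) with a `container_env` dict that is merged into `i_run_cmd['env']`, with the collected container values winning on key collisions. A standalone sketch of that merge logic, with a hypothetical function name:

```python
def merge_container_env(i_run_cmd: dict, container_env: dict) -> dict:
    """Merge collected container env vars into the run command, as in the
    diff above: create the 'env' dict if it is absent, otherwise let
    container_env override existing keys."""
    if container_env:
        if not i_run_cmd.get('env'):
            i_run_cmd['env'] = container_env
        else:
            i_run_cmd['env'] = {**i_run_cmd['env'], **container_env}
    return i_run_cmd
```

Passing the mappings as a structured dict rather than a flag string avoids the quoting and env-corruption issues the earlier commits in this PR were fixing.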
@@ -0,0 +1 @@
# CM script