
Commit 035b611

Auto-merge updates from auto-update branch

2 parents: 9ed4c53 + e0a5d7c

File tree: 23 files changed, +543 −566 lines changed

Lines changed: 1 addition & 1 deletion
````diff
@@ -1,3 +1,3 @@
 | Model | Scenario | Accuracy | Throughput | Latency (in ms) |
 |-----------|------------|------------|--------------|-------------------|
-| retinanet | offline | 49.593 | 0.416 | - |
+| retinanet | offline | 49.593 | 0.422 | - |
````
Lines changed: 10 additions & 11 deletions
````diff
@@ -1,32 +1,31 @@
-*Check [MLC MLPerf docs](https://docs.mlcommons.org/inference) for more details.*
+*Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*
 
 ## Host platform
 
-* OS version: Linux-6.11.0-1018-azure-x86_64-with-glibc2.34
+* OS version: Linux-6.11.0-1018-azure-x86_64-with-glibc2.39
 * CPU version: x86_64
-* Python version: 3.8.18 (default, Dec 12 2024, 19:15:30)
-[GCC 13.2.0]
+* Python version: 3.12.11 (main, Jun 4 2025, 04:14:08) [GCC 13.3.0]
 * MLC version: unknown
 
-## MLC Run Command
+## CM Run Command
 
-See [MLC installation guide](https://docs.mlcommons.org/inference/install/).
+See [CM installation guide](https://docs.mlcommons.org/inference/install/).
 
 ```bash
 pip install -U mlcflow
 
 mlc rm cache -f
 
-mlc pull repo GATEOverflow@mlperf-automations --checkout=e5a1b2043c6aa97bf4b5dee7d2d78b66a819808a
+mlc pull repo mlcommons@mlperf-automations --checkout=19e76c14a701b57e24b4e458336fe4423ae09c1b
 
 
 ```
 *Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf,
-you should simply reload GATEOverflow@mlperf-automations without checkout and clean MLC cache as follows:*
+you should simply reload mlcommons@mlperf-automations without checkout and clean MLC cache as follows:*
 
 ```bash
-mlc rm repo GATEOverflow@mlperf-automations
-mlc pull repo GATEOverflow@mlperf-automations
+mlc rm repo mlcommons@mlperf-automations
+mlc pull repo mlcommons@mlperf-automations
 mlc rm cache -f
 
 ```
@@ -41,4 +40,4 @@ Model Precision: fp32
 `mAP`: `49.593`, Required accuracy for closed division `>= 37.1745`
 
 ### Performance Results
-`Samples per second`: `0.416222`
+`Samples per second`: `0.422059`
````
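The README diff above records the retinanet accuracy result and an updated throughput. As a quick sanity check (a sketch, not part of the commit; the helper name is illustrative and the numbers are taken directly from the diff), the closed-division floor and the run-to-run throughput delta can be verified:

```python
# Sketch: verify the values recorded in the README diff above.
# All numbers come from the diff; meets_closed_division is a hypothetical helper.

def meets_closed_division(measured: float, required: float) -> bool:
    """A closed-division result is valid when accuracy >= the required floor."""
    return measured >= required

mAP = 49.593
required = 37.1745
assert meets_closed_division(mAP, required)

# Relative throughput change between the two auto-update runs:
old_qps, new_qps = 0.416222, 0.422059
change_pct = 100 * (new_qps - old_qps) / old_qps
print(f"throughput change: {change_pct:+.2f}%")  # roughly +1.4%
```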

open/MLCommons/measurements/default-mlcommons_cpp-cpu-onnxruntime-default_config/retinanet/offline/accuracy_console.out

Lines changed: 3 additions & 3 deletions
````diff
@@ -1,6 +1,6 @@
-User Conf path: /home/runner/MLC/repos/GATEOverflow@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/32700038f5444f31a2df57738cadb5f8.conf
-Dataset Preprocessed path: /home/runner/MLC/repos/local/cache/get-preprocessed-dataset-openimages_9a1d2151
-Dataset List filepath: /home/runner/MLC/repos/local/cache/get-preprocessed-dataset-openimages_9a1d2151/annotations/openimages-mlperf.json
+User Conf path: /home/runner/MLC/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/3670b880f4bd45c1b907438c7cbb28d4.conf
+Dataset Preprocessed path: /home/runner/MLC/repos/local/cache/get-preprocessed-dataset-openimages_578222ac
+Dataset List filepath: /home/runner/MLC/repos/local/cache/get-preprocessed-dataset-openimages_578222ac/annotations/openimages-mlperf.json
 Scenario: Offline
 Mode: AccuracyOnly
 Batch size: 1
````

open/MLCommons/measurements/default-mlcommons_cpp-cpu-onnxruntime-default_config/retinanet/offline/mlc-deps.mmd

Lines changed: 12 additions & 12 deletions
````diff
@@ -1,14 +1,14 @@
 graph TD
 app-mlperf-inference,d775cac873ee4231_(_cpp,_retinanet,_onnxruntime,_cpu,_test,_r5.1-dev_default,_offline_) --> detect,os
-app-mlperf-inference,d775cac873ee4231_(_cpp,_retinanet,_onnxruntime,_cpu,_test,_r5.1-dev_default,_offline_) --> get,sys-utils-mlc
+app-mlperf-inference,d775cac873ee4231_(_cpp,_retinanet,_onnxruntime,_cpu,_test,_r5.1-dev_default,_offline_) --> get,sys-utils-cm
 app-mlperf-inference,d775cac873ee4231_(_cpp,_retinanet,_onnxruntime,_cpu,_test,_r5.1-dev_default,_offline_) --> get,python
 app-mlperf-inference,d775cac873ee4231_(_cpp,_retinanet,_onnxruntime,_cpu,_test,_r5.1-dev_default,_offline_) --> get,mlcommons,inference,src
 get-mlperf-inference-utils,e341e5f86d8342e5 --> get,mlperf,inference,src
 app-mlperf-inference,d775cac873ee4231_(_cpp,_retinanet,_onnxruntime,_cpu,_test,_r5.1-dev_default,_offline_) --> get,mlperf,inference,utils
-app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_cpu,_onnxruntime,_offline_) --> detect,os
+app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_offline,_onnxruntime,_retinanet,_cpu_) --> detect,os
 detect-cpu,586c8a43320142f7 --> detect,os
-app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_cpu,_onnxruntime,_offline_) --> detect,cpu
-app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_cpu,_onnxruntime,_offline_) --> get,sys-utils-mlc
+app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_offline,_onnxruntime,_retinanet,_cpu_) --> detect,cpu
+app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_offline,_onnxruntime,_retinanet,_cpu_) --> get,sys-utils-cm
 get-mlperf-inference-loadgen,64c3d98d0ba04950_(_wg-inference_) --> detect,os
 get-mlperf-inference-loadgen,64c3d98d0ba04950_(_wg-inference_) --> get,python3
 get-mlperf-inference-loadgen,64c3d98d0ba04950_(_wg-inference_) --> get,mlcommons,inference,src
@@ -40,27 +40,27 @@ graph TD
 get-generic-python-lib,94b62a682bc44791_(_pip_) --> get,python3
 get-generic-python-lib,94b62a682bc44791_(_package.setuptools_) --> get,generic-python-lib,_pip
 get-mlperf-inference-loadgen,64c3d98d0ba04950_(_wg-inference_) --> get,generic-python-lib,_package.setuptools
-app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_cpu,_onnxruntime,_offline_) --> get,loadgen,_wg-inference
-app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_cpu,_onnxruntime,_offline_) --> get,mlcommons,inference,src
-app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_cpu,_onnxruntime,_offline_) --> get,lib,onnxruntime,lang-cpp,_cpu
-app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_cpu,_onnxruntime,_offline_) --> get,dataset,preprocessed,openimages,_validation,_NCHW,_50
-app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_cpu,_onnxruntime,_offline_) --> get,ml-model,retinanet,_onnx,_fp32
+app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_offline,_onnxruntime,_retinanet,_cpu_) --> get,loadgen,_wg-inference
+app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_offline,_onnxruntime,_retinanet,_cpu_) --> get,mlcommons,inference,src
+app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_offline,_onnxruntime,_retinanet,_cpu_) --> get,lib,onnxruntime,lang-cpp,_cpu
+app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_offline,_onnxruntime,_retinanet,_cpu_) --> get,dataset,preprocessed,openimages,_validation,_NCHW,_50
+app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_offline,_onnxruntime,_retinanet,_cpu_) --> get,ml-model,retinanet,_onnx,_fp32
 generate-mlperf-inference-user-conf,3af4475745964b93_(_wg-inference_) --> detect,os
 detect-cpu,586c8a43320142f7 --> detect,os
 generate-mlperf-inference-user-conf,3af4475745964b93_(_wg-inference_) --> detect,cpu
 generate-mlperf-inference-user-conf,3af4475745964b93_(_wg-inference_) --> get,python
 get-mlperf-inference-sut-configs,c2fbf72009e2445b --> get,cache,dir,_name.mlperf-inference-sut-configs
 generate-mlperf-inference-user-conf,3af4475745964b93_(_wg-inference_) --> get,sut,configs
 generate-mlperf-inference-user-conf,3af4475745964b93_(_wg-inference_) --> get,mlcommons,inference,src
-app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_cpu,_onnxruntime,_offline_) --> generate,user-conf,mlperf,inference,_wg-inference
+app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_offline,_onnxruntime,_retinanet,_cpu_) --> generate,user-conf,mlperf,inference,_wg-inference
 detect-cpu,586c8a43320142f7 --> detect,os
 compile-program,c05042ba005a4bfa --> detect,cpu
 compile-program,c05042ba005a4bfa --> get,compiler,gcc
 detect-cpu,586c8a43320142f7 --> detect,os
 get-compiler-flags,31be8b74a69742f8 --> detect,cpu
 compile-program,c05042ba005a4bfa --> get,compiler-flags
-app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_cpu,_onnxruntime,_offline_) --> compile,cpp-program
+app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_offline,_onnxruntime,_retinanet,_cpu_) --> compile,cpp-program
 detect-cpu,586c8a43320142f7 --> detect,os
 benchmark-program,19f369ef47084895 --> detect,cpu
 benchmark-program-mlperf,cfff0132a8aa4018 --> benchmark-program,program
-app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_cpu,_onnxruntime,_offline_) --> benchmark-mlperf
+app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_offline,_onnxruntime,_retinanet,_cpu_) --> benchmark-mlperf
````
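The `mlc-deps.mmd` file diffed above is plain Mermaid `graph TD` text, so the dependency edges are easy to inspect programmatically. A minimal sketch (illustrative, not part of the MLC tooling; the sample input is abbreviated from the diff):

```python
# Sketch: extract (source, target) dependency pairs from a Mermaid "graph TD"
# file like mlc-deps.mmd by splitting each edge line on the arrow token.

def parse_mermaid_edges(text: str) -> list[tuple[str, str]]:
    """Return (source, target) pairs from 'A --> B' lines of a Mermaid graph."""
    edges = []
    for line in text.splitlines():
        line = line.strip()
        if " --> " in line:
            src, dst = line.split(" --> ", 1)
            edges.append((src.strip(), dst.strip()))
    return edges

mmd = """graph TD
detect-cpu,586c8a43320142f7 --> detect,os
compile-program,c05042ba005a4bfa --> detect,cpu
compile-program,c05042ba005a4bfa --> get,compiler,gcc
"""
for src, dst in parse_mermaid_edges(mmd):
    print(f"{src} depends on {dst}")
```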

open/MLCommons/measurements/default-mlcommons_cpp-cpu-onnxruntime-default_config/retinanet/offline/mlc-version-info.json

Lines changed: 15 additions & 15 deletions
````diff
@@ -11,9 +11,9 @@
 }
 },
 {
-"get,sys-utils-mlc": {
+"get,sys-utils-cm": {
 "script_uid": "bc90993277e84b8e",
-"script_alias": "get-sys-utils-mlc",
+"script_alias": "get-sys-utils-cm",
 "script_tags": "get,sys-utils-cm,sys-utils-mlc",
 "script_variations": "",
 "version": "",
@@ -67,7 +67,7 @@
 "script_tags": "detect-os,detect,os,info",
 "script_variations": "",
 "version": "",
-"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( retinanet,_cpu,_onnxruntime,_offline )"
+"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( offline,_onnxruntime,_retinanet,_cpu )"
 }
 },
 {
@@ -87,17 +87,17 @@
 "script_tags": "detect,cpu,detect-cpu,info",
 "script_variations": "",
 "version": "",
-"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( retinanet,_cpu,_onnxruntime,_offline )"
+"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( offline,_onnxruntime,_retinanet,_cpu )"
 }
 },
 {
-"get,sys-utils-mlc": {
+"get,sys-utils-cm": {
 "script_uid": "bc90993277e84b8e",
-"script_alias": "get-sys-utils-mlc",
+"script_alias": "get-sys-utils-cm",
 "script_tags": "get,sys-utils-cm,sys-utils-mlc",
 "script_variations": "",
 "version": "",
-"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( retinanet,_cpu,_onnxruntime,_offline )"
+"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( offline,_onnxruntime,_retinanet,_cpu )"
 }
 },
 {
@@ -417,7 +417,7 @@
 "script_tags": "get,loadgen,inference,inference-loadgen,mlperf,mlcommons",
 "script_variations": "wg-inference",
 "version": "master",
-"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( retinanet,_cpu,_onnxruntime,_offline )"
+"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( offline,_onnxruntime,_retinanet,_cpu )"
 }
 },
 {
@@ -427,7 +427,7 @@
 "script_tags": "get,src,source,inference,inference-src,inference-source,mlperf,mlcommons",
 "script_variations": "",
 "version": "master-git-50de99161e33f32b569c7a00b6ccf56f274d418d",
-"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( retinanet,_cpu,_onnxruntime,_offline )"
+"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( offline,_onnxruntime,_retinanet,_cpu )"
 }
 },
 {
@@ -437,7 +437,7 @@
 "script_tags": "install,onnxruntime,get,prebuilt,lib,lang-c,lang-cpp",
 "script_variations": "cpu",
 "version": "",
-"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( retinanet,_cpu,_onnxruntime,_offline )"
+"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( offline,_onnxruntime,_retinanet,_cpu )"
 }
 },
 {
@@ -447,7 +447,7 @@
 "script_tags": "get,dataset,openimages,open-images,object-detection,preprocessed",
 "script_variations": "validation,NCHW,50",
 "version": "",
-"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( retinanet,_cpu,_onnxruntime,_offline )"
+"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( offline,_onnxruntime,_retinanet,_cpu )"
 }
 },
 {
@@ -457,7 +457,7 @@
 "script_tags": "get,ml-model,raw,resnext50,retinanet,object-detection",
 "script_variations": "onnx,fp32",
 "version": "",
-"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( retinanet,_cpu,_onnxruntime,_offline )"
+"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( offline,_onnxruntime,_retinanet,_cpu )"
 }
 },
 {
@@ -537,7 +537,7 @@
 "script_tags": "generate,mlperf,inference,user-conf,inference-user-conf",
 "script_variations": "wg-inference",
 "version": "",
-"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( retinanet,_cpu,_onnxruntime,_offline )"
+"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( offline,_onnxruntime,_retinanet,_cpu )"
 }
 },
 {
@@ -607,7 +607,7 @@
 "script_tags": "compile,program,c-program,cpp-program,compile-program,compile-c-program,compile-cpp-program",
 "script_variations": "",
 "version": "",
-"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( retinanet,_cpu,_onnxruntime,_offline )"
+"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( offline,_onnxruntime,_retinanet,_cpu )"
 }
 },
 {
@@ -647,7 +647,7 @@
 "script_tags": "mlperf,benchmark-mlperf",
 "script_variations": "",
 "version": "",
-"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( retinanet,_cpu,_onnxruntime,_offline )"
+"parent": "app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf ( offline,_onnxruntime,_retinanet,_cpu )"
 }
 }
 ]
````

open/MLCommons/measurements/default-mlcommons_cpp-cpu-onnxruntime-default_config/retinanet/offline/os_info.json

Lines changed: 1 addition & 1 deletion
````diff
@@ -28,6 +28,6 @@
 "MLC_HOST_PYTHON_BITS": "64",
 "MLC_HOST_SYSTEM_NAME": "pkrvmpptgkbjq6m",
 "+PATH": [
-"/home/runner/MLC/repos/local/cache/install-python-src_a7e61b92/install/bin"
+"/home/runner/MLC/repos/local/cache/install-python-src_6e192641/install/bin"
 ]
 }
````

open/MLCommons/measurements/default-mlcommons_cpp-cpu-onnxruntime-default_config/retinanet/offline/performance_console.out

Lines changed: 3 additions & 3 deletions
````diff
@@ -1,6 +1,6 @@
-User Conf path: /home/runner/MLC/repos/GATEOverflow@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/b55b1a7afba84ff795459637992336d6.conf
-Dataset Preprocessed path: /home/runner/MLC/repos/local/cache/get-preprocessed-dataset-openimages_9a1d2151
-Dataset List filepath: /home/runner/MLC/repos/local/cache/get-preprocessed-dataset-openimages_9a1d2151/annotations/openimages-mlperf.json
+User Conf path: /home/runner/MLC/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/96d6f73fdbc147118e26f128e7835389.conf
+Dataset Preprocessed path: /home/runner/MLC/repos/local/cache/get-preprocessed-dataset-openimages_578222ac
+Dataset List filepath: /home/runner/MLC/repos/local/cache/get-preprocessed-dataset-openimages_578222ac/annotations/openimages-mlperf.json
 Scenario: Offline
 Mode: PerformanceOnly
 Batch size: 1
````
Lines changed: 1 addition & 1 deletion
````diff
@@ -1,3 +1,3 @@
 | Model | Scenario | Accuracy | Throughput | Latency (in ms) |
 |----------|------------|------------|--------------|-------------------|
-| resnet50 | offline | 76 | 21.153 | - |
+| resnet50 | offline | 76 | 21.21 | - |
````

open/MLCommons/measurements/gh_ubuntu-latest_x86-reference-cpu-tf_v2.19.0-default_config/resnet50/offline/README.md

Lines changed: 5 additions & 5 deletions
````diff
@@ -16,16 +16,16 @@ pip install -U mlcflow
 
 mlc rm cache -f
 
-mlc pull repo GATEOverflow@mlperf-automations --checkout=73fdb2c57595cdb675257d6f6cabad4a59c4600e
+mlc pull repo mlcommons@mlperf-automations --checkout=19e76c14a701b57e24b4e458336fe4423ae09c1b
 
 
 ```
 *Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf,
-you should simply reload GATEOverflow@mlperf-automations without checkout and clean MLC cache as follows:*
+you should simply reload mlcommons@mlperf-automations without checkout and clean MLC cache as follows:*
 
 ```bash
-mlc rm repo GATEOverflow@mlperf-automations
-mlc pull repo GATEOverflow@mlperf-automations
+mlc rm repo mlcommons@mlperf-automations
+mlc pull repo mlcommons@mlperf-automations
 mlc rm cache -f
 
 ```
@@ -40,4 +40,4 @@ Model Precision: fp32
 `acc`: `76.0`, Required accuracy for closed division `>= 75.6954`
 
 ### Performance Results
-`Samples per second`: `21.1529`
+`Samples per second`: `21.2098`
````
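Both READMEs in this commit state a "required accuracy for closed division" floor. As a hedged observation, the two floors are consistent with a 99%-of-reference rule, assuming reference accuracies of 37.55 mAP (retinanet) and 76.46% top-1 (resnet50); these reference values are inferred here from the thresholds and are not stated in the diff:

```python
# Sketch: the closed-division floors in the READMEs above match 99% of the
# assumed reference accuracies (37.55 mAP retinanet, 76.46% top-1 resnet50).

def closed_division_floor(reference: float, fraction: float = 0.99) -> float:
    """Closed division requires at least `fraction` of the reference accuracy."""
    return reference * fraction

assert abs(closed_division_floor(37.55) - 37.1745) < 1e-6   # retinanet README floor
assert abs(closed_division_floor(76.46) - 75.6954) < 1e-6   # resnet50 README floor
```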
