Commit 9591af4

clean up docs

1 parent 378ed68 commit 9591af4

File tree

3 files changed: +209 -209 lines changed

cm/README.md

Lines changed: 31 additions & 209 deletions
```diff
@@ -9,18 +9,26 @@
 
 ### About
 
-Collective Mind (CM) is a collection of portable, extensible, technology-agnostic and ready-to-use automation recipes
-with a human-friendly interface (aka CM scripts) to unify and automate all the manual steps required to compose, run, benchmark and optimize complex ML/AI applications
-on any platform with any software and hardware: see [CM4MLOps online catalog](https://access.cknowledge.org/playground/?action=scripts),
-[source code](https://github.com/mlcommons/ck/blob/master/cm-mlops/script), [ArXiv white paper](https://arxiv.org/abs/2406.16791).
-
-CM scripts require Python 3.7+ with minimal dependencies and are
+Collective Mind (CM) is a small, modular, cross-platform and decentralized workflow automation framework
+with a human-friendly interface to make it easier to build, run, benchmark and optimize applications
+across diverse models, data sets, software and hardware.
+
+CM is a part of [Collective Knowledge (CK)](https://github.com/mlcommons/ck) -
+an educational community project to learn how to run emerging workloads
+in the most efficient and cost-effective way across diverse
+and continuously changing systems.
+
+CM includes a collection of portable, extensible and technology-agnostic automation recipes
+with a common API and CLI (aka CM scripts) to unify and automate different steps
+required to compose, run, benchmark and optimize complex ML/AI applications
+on any platform with any software and hardware.
+
+CM scripts extend the concept of `cmake` with simple Python automations, native scripts
+and JSON/YAML meta descriptions. They require Python 3.7+ with minimal dependencies and are
 [continuously extended by the community and MLCommons members](https://github.com/mlcommons/ck/blob/master/CONTRIBUTING.md)
 to run natively on Ubuntu, MacOS, Windows, RHEL, Debian, Amazon Linux
 and any other operating system, in a cloud or inside automatically generated containers
-while keeping backward compatibility - please don't hesitate
-to report encountered issues [here](https://github.com/mlcommons/ck/issues)
-to help this collaborative engineering effort!
+while keeping backward compatibility.
 
 CM scripts were originally developed based on the following requirements from the
 [MLCommons members](https://mlcommons.org)
```
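The new wording above describes CM scripts as automation recipes with JSON/YAML meta descriptions that are selected through a common API and CLI by human-readable tags. As a rough, self-contained sketch of that idea only (the script aliases, tag sets and field names below are hypothetical illustrations, not the real `cmind` implementation), the tag-based lookup can be pictured like this:

```python
# Hypothetical sketch of tag-based recipe lookup, loosely inspired by the way
# CM scripts are selected by tags. All names below are illustrative only.

# Each "script" carries a meta description similar in spirit to the
# JSON/YAML meta files mentioned in the README.
SCRIPTS = [
    {"alias": "get-python", "tags": {"get", "python"}},
    {"alias": "app-image-classification-onnx",
     "tags": {"python", "app", "image-classification", "onnx"}},
    {"alias": "run-mlperf-inference", "tags": {"run-mlperf-inference"}},
]

def find_scripts(query: str):
    """Return every script whose tag set contains all tags in the query.

    Mirrors the spirit of `cm run script "python app image-classification onnx"`,
    where the quoted string is a space-separated list of required tags.
    """
    wanted = set(query.split())
    return [s for s in SCRIPTS if wanted <= s["tags"]]

matches = find_scripts("python app image-classification onnx")
print([m["alias"] for m in matches])  # → ['app-image-classification-onnx']
```

A real resolver also handles variations (the `_`-prefixed tags such as `_r4.1`), dependencies and caching; this sketch covers only the matching step.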
````diff
@@ -36,210 +44,24 @@ from Nvidia, Intel, AMD, Google, Qualcomm, Amazon and other vendors:
 and simple JSON/YAML descriptions instead of inventing new workflow languages;
 * must have the same interface to run all automations natively, in a cloud or inside containers.
 
-[CM scripts](https://access.cknowledge.org/playground/?action=scripts)
-are used by MLCommons, cTuning.org and cKnowledge.org to modularize MLPerf inference benchmarks
-(see [this white paper](https://arxiv.org/abs/2406.16791))
-and help anyone run them across different models, datasets, software and hardware:
-https://docs.mlcommons.org/inference .
-
-For example, you should be able to run the MLPerf inference benchmark on Linux, Windows and MacOS
-using a few CM commands:
-
-```bash
-
-pip install cmind -U
-
-cm pull repo mlcommons@cm4mlops --branch=dev
-
-cm run script "run-mlperf-inference _r4.1 _accuracy-only _short" \
-    --device=cpu \
-    --model=resnet50 \
-    --precision=float32 \
-    --implementation=reference \
-    --backend=onnxruntime \
-    --scenario=Offline \
-    --clean \
-    --quiet \
-    --time
-
-cm run script "run-mlperf-inference _r4.1 _submission _short" \
-    --device=cpu \
-    --model=resnet50 \
-    --precision=float32 \
-    --implementation=reference \
-    --backend=onnxruntime \
-    --scenario=Offline \
-    --clean \
-    --quiet \
-    --time
-
-...
-
-0
-Organization               CTuning
-Availability               available
-Division                   open
-SystemType                 edge
-SystemName                 ip_172_31_87_92
-Platform                   ip_172_31_87_92-reference-cpu-onnxruntime-v1.1...
-Model                      resnet50
-MlperfModel                resnet
-Scenario                   Offline
-Result                     14.3978
-Accuracy                   80.0
-number_of_nodes            1
-host_processor_model_name  Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
-host_processors_per_node   1
-host_processor_core_count  2
-accelerator_model_name     NaN
-accelerators_per_node      0
-Location                   open/CTuning/results/ip_172_31_87_92-reference...
-framework                  onnxruntime v1.18.1
-operating_system           Ubuntu 24.04 (linux-6.8.0-1009-aws-glibc2.39)
-notes                      Automated by MLCommons CM v2.3.2.
-compliance                 1
-errors                     0
-version                    v4.1
-inferred                   0
-has_power                  False
-Units                      Samples/s
-
-
-```
-
-You can also run the same commands using a unified CM Python API:
-
-```python
-import cmind
-output=cmind.access({
-    'action':'run', 'automation':'script',
-    'tags':'run-mlperf-inference,_r4.1,_performance-only,_short',
-    'device':'cpu',
-    'model':'resnet50',
-    'precision':'float32',
-    'implementation':'reference',
-    'backend':'onnxruntime',
-    'scenario':'Offline',
-    'clean':True,
-    'quiet':True,
-    'time':True,
-    'out':'con'
-})
-if output['return']==0: print (output)
-```
-
-
-We suggest you to use this [online CM interface](https://access.cknowledge.org/playground/?action=howtorun)
-to generate CM commands that will prepare and run MLPerf benchmarks and AI applications across different platforms.
-
-
-See more examples of CM scripts and workflows to download Stable Diffusion, GPT-J and LLAMA2 models, process datasets, install tools and compose AI applications:
-
-
-```bash
-pip install cmind -U
-
-cm pull repo mlcommons@cm4mlops --branch=dev
-
-cm show repo
-
-cm run script "python app image-classification onnx"
-cmr "python app image-classification onnx"
-
-cmr "download file _wget" --url=https://cKnowledge.org/ai/data/computer_mouse.jpg --verify=no --env.CM_DOWNLOAD_CHECKSUM=45ae5c940233892c2f860efdf0b66e7e
-cmr "python app image-classification onnx" --input=computer_mouse.jpg
-cmr "python app image-classification onnx" --input=computer_mouse.jpg --debug
-
-cm find script "python app image-classification onnx"
-cm load script "python app image-classification onnx" --yaml
-
-cmr "get python" --version_min=3.8.0 --name=mlperf-experiments
-cmr "install python-venv" --version_max=3.10.11 --name=mlperf
-
-cmr "get ml-model stable-diffusion"
-cmr "get ml-model sdxl _fp16 _rclone"
-cmr "get ml-model huggingface zoo _model-stub.alpindale/Llama-2-13b-ONNX" --model_filename=FP32/LlamaV2_13B_float32.onnx --skip_cache
-cmr "get dataset coco _val _2014"
-cmr "get dataset openimages" -j
-
-cm show cache
-cm show cache "get ml-model stable-diffusion"
-
-cmr "get generic-python-lib _package.onnxruntime" --version_min=1.16.0
-cmr "python app image-classification onnx" --input=computer_mouse.jpg
-
-cm rm cache -f
-cmr "python app image-classification onnx" --input=computer_mouse.jpg --adr.onnxruntime.version_max=1.16.0
-
-cmr "get cuda" --version_min=12.0.0 --version_max=12.3.1
-cmr "python app image-classification onnx _cuda" --input=computer_mouse.jpg
-
-cm gui script "python app image-classification onnx"
-
-cm docker script "python app image-classification onnx" --input=computer_mouse.jpg
-cm docker script "python app image-classification onnx" --input=computer_mouse.jpg -j -docker_it
-
-cm docker script "get coco dataset _val _2017" --to=d:\Downloads\COCO-2017-val --store=d:\Downloads --docker_cm_repo=ctuning@mlcommons-ck
-
-cmr "run common mlperf inference" --implementation=nvidia --model=bert-99 --category=datacenter --division=closed
-cm find script "run common mlperf inference"
-
-cmr "get generic-python-lib _package.torch" --version=2.1.2
-cmr "get generic-python-lib _package.torchvision" --version=0.16.2
-cmr "python app image-classification torch" --input=computer_mouse.jpg
-
-
-cm rm repo mlcommons@cm4mlops
-cm pull repo --url=https://zenodo.org/records/12528908/files/cm4mlops-20240625.zip
-
-cmr "install llvm prebuilt" --version=17.0.6
-cmr "app image corner-detection"
-
-cm run experiment --tags=tuning,experiment,batch_size -- echo --batch_size={{ print_str("VAR1{range(1,8)}") }}
-
-cm replay experiment --tags=tuning,experiment,batch_size
-
-cmr "get conda"
-
-cm pull repo ctuning@cm4research
-cmr "reproduce paper micro-2023 victima _install_deps"
-cmr "reproduce paper micro-2023 victima _run"
-
-```
-
-
-See a few examples of modular containers and GitHub actions with CM commands:
-
-* [GitHub action with CM commands to test MLPerf inference benchmark](https://github.com/mlcommons/inference/blob/master/.github/workflows/test-bert.yml)
-* [Dockerfile to run MLPerf inference benchmark via CM](https://github.com/mlcommons/ck/blob/master/cm-mlops/script/app-mlperf-inference/dockerfiles/bert-99.9/ubuntu_22.04_python_onnxruntime_cpu.Dockerfile)
-
-
-Please check the [**Getting Started Guide**](https://github.com/mlcommons/ck/blob/master/docs/getting-started.md)
-to understand how CM automation recipes work, how to use them to automate your own projects,
-and how to implement and share new automations in your public or private projects.
-
-
-### Documentation
-
-**MLCommons is updating the CM documentation based on user feedback - please stay tuned for more details**.
-
-* [Getting Started Guide and FAQ](https://github.com/mlcommons/ck/tree/master/docs/getting-started.md)
-* [Common CM interface to run MLPerf inference benchmarks](https://github.com/mlcommons/ck/tree/master/docs/mlperf/inference)
-* [Common CM interface to re-run experiments from ML and Systems papers including MICRO'23 and the Student Cluster Competition @ SuperComputing'23](https://github.com/mlcommons/ck/tree/master/docs/tutorials/common-interface-to-reproduce-research-projects.md)
-* [Other CM tutorials](https://github.com/mlcommons/ck/tree/master/docs/tutorials)
-* [Full documentation](https://github.com/mlcommons/ck/tree/master/docs/tutorials/README.md)
-
-### Projects modularized and automated by CM
+### Resources
 
-* [cm4research](https://github.com/ctuning/cm4research)
-* [cm4mlops](https://github.com/mlcommons/cm4mlops)
-* [cm4abtf](https://github.com/mlcommons/cm4abtf)
+* CM v2.x (stable version 2022-cur): [installation on Linux, Windows, MacOS](https://access.cknowledge.org/playground/?action=install);
+  [docs](https://docs.mlcommons.org/ck); [popular commands](https://github.com/mlcommons/ck/tree/master/cm/docs/demos/some-cm-commands.md);
+  [getting started guide](https://github.com/mlcommons/ck/blob/master/docs/getting-started.md)
+* CM v3.x (prototype 2024-cur): [docs](https://github.com/mlcommons/ck/tree/master/cm/docs/cmx)
+* MLPerf inference benchmark automated via CM
+  * [Run MLPerf for submissions](https://docs.mlcommons.org/inference)
+  * [Run MLPerf at the Student Cluster Competition'24](https://docs.mlcommons.org/inference/benchmarks/text_to_image/reproducibility/scc24)
+* Examples of modular containers and GitHub actions with CM commands:
+  * [GitHub action with CM commands to test MLPerf inference benchmark](https://github.com/mlcommons/inference/blob/master/.github/workflows/test-bert.yml)
+  * [Dockerfile to run MLPerf inference benchmark via CM](https://github.com/mlcommons/ck/blob/master/cm-mlops/script/app-mlperf-inference/dockerfiles/bert-99.9/ubuntu_22.04_python_onnxruntime_cpu.Dockerfile)
 
 ### License
 
 [Apache 2.0](LICENSE.md)
 
-### Citing CM
+### Citing CM and CM4MLOps
 
 If you found CM useful, please cite this article:
 [ [ArXiv](https://arxiv.org/abs/2406.16791) ], [ [BibTex](https://github.com/mlcommons/ck/blob/master/citation.bib) ].
````
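The Python example removed in the hunk above called `cmind.access`, which takes a single dictionary and returns a dictionary whose `return` key is 0 on success. The following self-contained mock (not the real `cmind` package; the dispatcher body and the `ran_tags` field are invented purely for illustration) sketches just that dict-in/dict-out calling convention:

```python
# Hypothetical stand-in for the dict-in/dict-out convention used by
# cmind.access in the removed example; 'return' == 0 signals success.
# This mock is illustrative only and does not run any real CM scripts.

def access(request: dict) -> dict:
    """Toy dispatcher mimicking the shape of the unified CM Python API."""
    action = request.get('action')
    automation = request.get('automation')
    if action != 'run' or automation != 'script':
        # The real API supports many actions; this mock handles only one.
        return {'return': 1, 'error': f'unsupported: {action}/{automation}'}
    # Pretend to run the script selected by its comma-separated tags.
    return {'return': 0, 'ran_tags': request.get('tags', '').split(',')}

output = access({
    'action': 'run', 'automation': 'script',
    'tags': 'run-mlperf-inference,_r4.1,_performance-only,_short',
})
if output['return'] == 0:
    print(output['ran_tags'][0])  # → run-mlperf-inference
```

Keeping every call behind one dict-based entry point is what lets the CLI (`cm run script ...`) and the Python API accept the same flags with no translation layer.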
```diff
@@ -248,11 +70,11 @@ You can learn more about the motivation behind these projects from the following
 
 * "Enabling more efficient and cost-effective AI/ML systems with Collective Mind, virtualized MLOps, MLPerf, Collective Knowledge Playground and reproducible optimization tournaments": [ [ArXiv](https://arxiv.org/abs/2406.16791) ]
 * ACM REP'23 keynote about the MLCommons CM automation framework: [ [slides](https://doi.org/10.5281/zenodo.8105339) ]
-* ACM TechTalk'21 about automating research projects: [ [YouTube](https://www.youtube.com/watch?v=7zpeIVwICa4) ] [ [slides](https://learning.acm.org/binaries/content/assets/leaning-center/webinar-slides/2021/grigorifursin_techtalk_slides.pdf) ]
+* ACM TechTalk'21 about the Collective Knowledge project: [ [YouTube](https://www.youtube.com/watch?v=7zpeIVwICa4) ] [ [slides](https://learning.acm.org/binaries/content/assets/leaning-center/webinar-slides/2021/grigorifursin_techtalk_slides.pdf) ]
 
 ### Acknowledgments
 
-Collective Knowledge (CK) and Collective Mind (CM) were created by [Grigori Fursin](https://cKnowledge.org/gfursin),
+The Collective Mind framework (CM) was created by [Grigori Fursin](https://cKnowledge.org/gfursin),
 sponsored by cKnowledge.org and cTuning.org, and donated to MLCommons to benefit everyone.
 Since then, this open-source technology (CM, CM4MLOps, CM4MLPerf, CM4ABTF, CM4Research, etc)
 is being developed as a community effort thanks to all our
```

cm/docs/cmx/README.md

Lines changed: 3 additions & 0 deletions

```diff
@@ -0,0 +1,3 @@
+# Collective Mind v3 (prototype)
+
+* [Installation (Linux, Windows, MacOS)](install.md)
```
