Collective Mind (CM) is a small, modular, cross-platform and decentralized workflow automation framework
with a human-friendly interface to make it easier to build, run, benchmark and optimize applications
across diverse models, data sets, software and hardware.

CM is a part of [Collective Knowledge (CK)](https://github.com/mlcommons/ck) -
an educational community project to learn how to run emerging workloads
in the most efficient and cost-effective way across diverse
and continuously changing systems.

CM includes a collection of portable, extensible and technology-agnostic automation recipes
with a common API and CLI (aka CM scripts) to unify and automate different steps
required to compose, run, benchmark and optimize complex ML/AI applications
on any platform with any software and hardware.

CM scripts extend the concept of `cmake` with simple Python automations, native scripts
and JSON/YAML meta descriptions. They require Python 3.7+ with minimal dependencies and are
[continuously extended by the community and MLCommons members](https://github.com/mlcommons/ck/blob/master/CONTRIBUTING.md)
to run natively on Ubuntu, MacOS, Windows, RHEL, Debian, Amazon Linux
and any other operating system, in a cloud or inside automatically generated containers
while keeping backward compatibility.
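
For illustration, here is a minimal sketch of this common interface, assuming CM is installed via `pip install cmind`
and the `mlcommons@cm4mlops` recipes are pulled (see the examples further below): every recipe is invoked
through the same `cm run script` command and selected by human-readable tags rather than hard-coded paths.

```bash
# Minimal sketch of the unified CM interface (assumes `pip install cmind`
# and `cm pull repo mlcommons@cm4mlops`); every recipe shares the same CLI:
cm run script --tags=detect,os -j
cm run script --tags=get,python --version_min=3.8.0
```
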
CM scripts were originally developed based on the following requirements from the
[MLCommons members](https://mlcommons.org)
from Nvidia, Intel, AMD, Google, Qualcomm, Amazon and other vendors:
* must rely on plain Python, native scripts, environment variables and simple JSON/YAML descriptions instead of inventing new workflow languages;
* must have the same interface to run all automations natively, in a cloud or inside containers (see the sketch below).
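
For instance, the same recipe can run natively or inside an automatically generated container
by adding a single flag (the `--docker` flag is provided by the cm4mlops scripts;
details may vary across versions):

```bash
# The same automation recipe, run natively and then inside an
# auto-generated container (--docker is provided by the cm4mlops scripts):
cm run script "python app image-classification onnx"
cm run script "python app image-classification onnx" --docker
```
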
We suggest using this [online CM interface](https://access.cknowledge.org/playground/?action=howtorun)
to generate CM commands that prepare and run MLPerf benchmarks and AI applications across different platforms.

See more examples of CM scripts and workflows to download Stable Diffusion, GPT-J and LLAMA2 models,
process datasets, install tools and compose AI applications:

```bash
# Install or update the CM framework (Python 3.7+)
pip install cmind -U

# Pull the repository with portable automation recipes (dev branch)
cm pull repo mlcommons@cm4mlops --branch=dev

# List registered CM repositories
cm show repo

# Compose and run a modular image-classification app with ONNX
cm run script "python app image-classification onnx"

# Alternatively, pull an archived snapshot of the recipes from Zenodo
cm pull repo --url=https://zenodo.org/records/12528908/files/cm4mlops-20240625.zip

# `cmr` is a shortcut for `cm run script`
cmr "install llvm prebuilt" --version=17.0.6
cmr "app image corner-detection"

# Run an experiment while sweeping batch_size from 1 to 7, then replay it
cm run experiment --tags=tuning,experiment,batch_size -- echo --batch_size={{ print_str("VAR1{range(1,8)}") }}
cm replay experiment --tags=tuning,experiment,batch_size

# Detect or install conda
cmr "get conda"

# Reproduce experiments from the MICRO'23 Victima paper
cm pull repo ctuning@cm4research
cmr "reproduce paper micro-2023 victima _install_deps"
cmr "reproduce paper micro-2023 victima _run"
```
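
Note that `cm run experiment` wraps an arbitrary command, expands the `{{ ... }}` template for every value
in the given range, and records inputs and outputs in a CM repository so that `cm replay experiment`
can reproduce a selected run later (a simplified description of the experiment automation).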

Please check the [**Getting Started Guide**](https://github.com/mlcommons/ck/blob/master/docs/getting-started.md)
to understand how CM automation recipes work, how to use them to automate your own projects,
and how to implement and share new automations in your public or private projects.
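
For example, scaffolding a new automation recipe in your own repository takes a few commands
(a minimal sketch; `my-repo`, `my-script` and the tags are placeholder names):

```bash
# Minimal sketch: create your own CM repository and a new CM script
# ("my-repo", "my-script" and the tags below are placeholders):
cm add repo my-repo
cm add script my-script --tags=my,demo
cm run script --tags=my,demo
```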

### Documentation

**MLCommons is updating the CM documentation based on user feedback - please stay tuned for more details**.

* CM v2.x: [getting started guide](https://github.com/mlcommons/ck/blob/master/docs/getting-started.md)
* CM v3.x (prototype 2024-cur): [docs](https://github.com/mlcommons/ck/tree/master/cm/docs/cmx)
* MLPerf inference benchmark automated via CM:
  * [Run MLPerf for submissions](https://docs.mlcommons.org/inference)
  * [Run MLPerf at the Student Cluster Competition'24](https://docs.mlcommons.org/inference/benchmarks/text_to_image/reproducibility/scc24)
* Examples of modular containers and GitHub actions with CM commands (see the sketch after this list):
  * [GitHub action with CM commands to test MLPerf inference benchmark](https://github.com/mlcommons/inference/blob/master/.github/workflows/test-bert.yml)
  * [Dockerfile to run MLPerf inference benchmark via CM](https://github.com/mlcommons/ck/blob/master/cm-mlops/script/app-mlperf-inference/dockerfiles/bert-99.9/ubuntu_22.04_python_onnxruntime_cpu.Dockerfile)
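
Such CI workflows and containers typically reduce to a few CM commands. The sketch below follows
the command style from the MLPerf inference docs; the exact tags and flags may differ across versions:

```bash
# Hedged sketch of the CM commands behind such CI jobs and containers
# (command style from the MLPerf inference docs; flags may vary by version):
pip install cmind
cm pull repo mlcommons@cm4mlops
cm run script --tags=run-mlperf,inference,_find-performance \
   --model=bert-99 --implementation=reference --framework=onnxruntime \
   --device=cpu --quiet
```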

You can learn more about the motivation behind these projects from the following presentations and publications:
* "Enabling more efficient and cost-effective AI/ML systems with Collective Mind, virtualized MLOps, MLPerf, Collective Knowledge Playground and reproducible optimization tournaments": [[ArXiv](https://arxiv.org/abs/2406.16791)]
* ACM REP'23 keynote about the MLCommons CM automation framework: [[slides](https://doi.org/10.5281/zenodo.8105339)]
* ACM TechTalk'21 about Collective Knowledge project: [[YouTube](https://www.youtube.com/watch?v=7zpeIVwICa4)][[slides](https://learning.acm.org/binaries/content/assets/leaning-center/webinar-slides/2021/grigorifursin_techtalk_slides.pdf)]
### Acknowledgments
The Collective Mind framework (CM) was created by [Grigori Fursin](https://cKnowledge.org/gfursin),
sponsored by cKnowledge.org and cTuning.org, and donated to MLCommons to benefit everyone.
Since then, this open-source technology (CM, CM4MLOps, CM4MLPerf, CM4ABTF, CM4Research, etc)
is being developed as a community effort thanks to all our contributors.