@@ -81,14 +81,14 @@ jobs:
   run:
     runs-on: ubuntu-latest
     # optionally use a convenient Ubuntu LTS + DVC + CML image
-    # container: docker://ghcr.io/iterative/cml:0-dvc2-base1
+    # container: ghcr.io/iterative/cml:0-dvc2-base1
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       # may need to setup NodeJS & Python3 on e.g. self-hosted
-      # - uses: actions/setup-node@v2
+      # - uses: actions/setup-node@v3
       #   with:
       #     node-version: '16'
-      # - uses: actions/setup-python@v2
+      # - uses: actions/setup-python@v4
       #   with:
       #     python-version: '3.x'
       - uses: iterative/setup-cml@v1
@@ -103,17 +103,17 @@ jobs:
         run: |
           # Post reports as comments in GitHub PRs
           cat results.txt >> report.md
-          cml send-comment report.md
+          cml comment create report.md
 ```

 ## Usage

 We helpfully provide CML and other useful libraries pre-installed on our
 [custom Docker images](https://github.com/iterative/cml/blob/master/Dockerfile).
 In the above example, uncommenting the field
-(`container: docker://ghcr.io/iterative/cml:0-dvc2-base1`) will make the runner
-pull the CML Docker image. The image already has NodeJS, Python 3, DVC and CML
-set up on an Ubuntu LTS base for convenience.
+(`container: ghcr.io/iterative/cml:0-dvc2-base1`) will make the runner pull the
+CML Docker image. The image already has NodeJS, Python 3, DVC and CML set up on
+an Ubuntu LTS base for convenience.
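For instance, with that field uncommented the job runs entirely inside the CML
image, so the NodeJS/Python setup steps above are no longer needed (a minimal
sketch; the training commands are illustrative):

```yaml
jobs:
  run:
    runs-on: ubuntu-latest
    # every step now executes inside the CML image, where NodeJS, Python 3,
    # DVC and CML are already on the PATH
    container: ghcr.io/iterative/cml:0-dvc2-base1
    steps:
      - uses: actions/checkout@v3
      - name: Train model
        run: |
          pip install -r requirements.txt
          python train.py
```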

 ### CML Functions

@@ -124,18 +124,17 @@ report.
 Below is a table of CML functions for writing markdown reports and delivering
 those reports to your CI system.

-| Function | Description | Example Inputs |
-| ----------------------- | ----------------------------------------------------------------- | ------------------------------------------------------------ |
-| `cml runner` | Launch a runner locally or hosted by a cloud provider | See [Arguments](https://github.com/iterative/cml#arguments) |
-| `cml publish` | Publicly host an image for displaying in a CML report | `<path to image> --title <image title> --md` |
-| `cml send-comment` | Return CML report as a comment in your GitLab/GitHub workflow | `<path to report> --head-sha <sha>` |
-| `cml send-github-check` | Return CML report as a check in GitHub | `<path to report> --head-sha <sha>` |
-| `cml pr` | Commit the given files to a new branch and create a pull request | `<path>...` |
-| `cml tensorboard-dev` | Return a link to a Tensorboard.dev page | `--logdir <path to logs> --title <experiment title> --md` |
+| Function | Description | Example Inputs |
+| ------------------------- | ----------------------------------------------------------------- | ------------------------------------------------------------ |
+| `cml runner launch` | Launch a runner locally or hosted by a cloud provider | See [Arguments](https://github.com/iterative/cml#arguments) |
+| `cml comment create` | Return CML report as a comment in your GitLab/GitHub workflow | `<path to report> --head-sha <sha>` |
+| `cml check create` | Return CML report as a check in GitHub | `<path to report> --head-sha <sha>` |
+| `cml pr create` | Commit the given files to a new branch and create a pull request | `<path>...` |
+| `cml tensorboard connect` | Return a link to a Tensorboard.dev page | `--logdir <path to logs> --title <experiment title> --md` |
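As a quick orientation, a report step might chain several of the renamed
commands like this (a sketch only: the file names, log directory and title are
illustrative, and `cml tensorboard connect` additionally assumes TensorBoard
credentials are available to CML):

```bash
# Post the report as a comment on the triggering commit or PR
cat results.txt >> report.md
cml comment create report.md    # previously: cml send-comment

# Commit generated files to a new branch and open a pull request
cml pr create metrics.txt       # previously: cml pr

# Append a Tensorboard.dev link to the report
cml tensorboard connect --logdir ./logs --title "Train run" --md >> report.md
```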

 #### CML Reports

-The `cml send-comment` command can be used to post reports. CML reports are
+The `cml comment create` command can be used to post reports. CML reports are
 written in markdown ([GitHub](https://github.github.com/gfm),
 [GitLab](https://docs.gitlab.com/ee/user/markdown.html), or
 [Bitbucket](https://confluence.atlassian.com/bitbucketserver/markdown-syntax-guide-776639995.html)
@@ -153,11 +152,12 @@ cat results.txt >> report.md

 :framed_picture: **Images** Display images using markdown or HTML. Note that
 if an image is an output of your ML workflow (i.e., it is produced by your
-workflow), you will need to use the `cml publish` function to include it in a
-CML report. For example, if `graph.png` is output by `python train.py`, run:
+workflow), it can be uploaded and included automatically in your CML report. For
+example, if `graph.png` is output by `python train.py`, run:

 ```bash
-cml publish graph.png --md >> report.md
+echo "![](./graph.png)" >> report.md
+cml comment create report.md
 ```

 ### Getting Started
@@ -189,8 +189,8 @@ jobs:
   run:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v2
-      - uses: actions/setup-python@v2
+      - uses: actions/checkout@v3
+      - uses: actions/setup-python@v4
       - uses: iterative/setup-cml@v1
       - name: Train model
         env:
@@ -200,8 +200,8 @@ jobs:
           python train.py

           cat metrics.txt >> report.md
-          cml publish plot.png --md >> report.md
-          cml send-comment report.md
+          echo "![](./plot.png)" >> report.md
+          cml comment create report.md
 ```

 3. In your text editor of choice, edit line 16 of `train.py` to `depth = 5`.
@@ -253,9 +253,9 @@ on: [push]
 jobs:
   run:
     runs-on: ubuntu-latest
-    container: docker://ghcr.io/iterative/cml:0-dvc2-base1
+    container: ghcr.io/iterative/cml:0-dvc2-base1
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - name: Train model
         env:
           REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -278,16 +278,16 @@ jobs:
           echo "## Plots" >> report.md
           echo "### Class confusions" >> report.md
           dvc plots diff --target classes.csv --template confusion -x actual -y predicted --show-vega master > vega.json
-          vl2png vega.json -s 1.5 > plot.png
-          cml publish --md plot.png >> report.md
+          vl2png vega.json -s 1.5 > confusion_plot.png
+          echo "![](./confusion_plot.png)" >> report.md

           # Publish regularization function diff
           echo "### Effects of regularization" >> report.md
           dvc plots diff --target estimators.csv -x Regularization --show-vega master > vega.json
           vl2png vega.json -s 1.5 > plot.png
-          cml publish --md plot.png >> report.md
+          echo "![](./plot.png)" >> report.md

-          cml send-comment report.md
+          cml comment create report.md
 ```

 > :warning: If you're using DVC with cloud storage, take note of environment
@@ -423,14 +423,14 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: iterative/setup-cml@v1
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - name: Deploy runner on EC2
         env:
           REPO_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
           AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
           AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
         run: |
-          cml runner \
+          cml runner launch \
             --cloud=aws \
             --cloud-region=us-west \
             --cloud-type=g4dn.xlarge \
@@ -440,10 +440,10 @@ jobs:
     runs-on: [self-hosted, cml-gpu]
     timeout-minutes: 50400 # 35 days
     container:
-      image: docker://iterativeai/cml:0-dvc2-base1-gpu
+      image: ghcr.io/iterative/cml:0-dvc2-base1-gpu
       options: --gpus all
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v3
       - name: Train model
         env:
           REPO_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
@@ -452,7 +452,7 @@ jobs:
           python train.py

           cat metrics.txt > report.md
-          cml send-comment report.md
+          cml comment create report.md
 ```

 In the workflow above, the `deploy-runner` step launches an EC2 `g4dn.xlarge`
@@ -466,72 +466,83 @@ newly-launched instance. See [Environment Variables] below for details on the

 #### Docker Images

-The CML Docker image (`docker://iterativeai/cml`) comes loaded with Python,
-CUDA, `git`, `node` and other essentials for full-stack data science. Different
-versions of these essentials are available from different `iterativeai/cml`
+The CML Docker image (`ghcr.io/iterative/cml` or `iterativeai/cml`) comes loaded
+with Python, CUDA, `git`, `node` and other essentials for full-stack data
+science. Different versions of these essentials are available from different
 image tags. The tag convention is `{CML_VER}-dvc{DVC_VER}-base{BASE_VER}{-gpu}`:

 | `{BASE_VER}` | Software included (`-gpu`) |
 | ------------ | --------------------------------------------- |
 | 0 | Ubuntu 18.04, Python 2.7 (CUDA 10.1, CuDNN 7) |
 | 1 | Ubuntu 20.04, Python 3.8 (CUDA 11.2, CuDNN 8) |

-For example, `docker://iterativeai/cml:0-dvc2-base1-gpu`, or
-`docker://ghcr.io/iterative/cml:0-dvc2-base1`.
+For example, `iterativeai/cml:0-dvc2-base1-gpu`, or
+`ghcr.io/iterative/cml:0-dvc2-base1`.

 #### Arguments

-The `cml runner` function accepts the following arguments:
+The `cml runner launch` function accepts the following arguments:

 ```
---help                      Show help [boolean]
---version                   Show version number [boolean]
---log                       Maximum log level
-                            [choices: "error", "warn", "info", "debug"] [default: "info"]
---labels                    One or more user-defined labels for this runner
-                            (delimited with commas) [default: "cml"]
---idle-timeout              Seconds to wait for jobs before shutting down. Set
-                            to -1 to disable timeout [default: 300]
---name                      Name displayed in the repository once registered
-                            cml-{ID}
---no-retry                  Do not restart workflow terminated due to instance
-                            disposal or GitHub Actions timeout
-                            [boolean] [default: false]
---single                    Exit after running a single job
-                            [boolean] [default: false]
---reuse                     Don't launch a new runner if an existing one has
-                            the same name or overlapping labels
-                            [boolean] [default: false]
---driver                    Platform where the repository is hosted. If not
-                            specified, it will be inferred from the
-                            environment [choices: "github", "gitlab"]
---repo                      Repository to be used for registering the runner.
-                            If not specified, it will be inferred from the
-                            environment
---token                     Personal access token to register a self-hosted
-                            runner on the repository. If not specified, it
-                            will be inferred from the environment
-                            [default: "infer"]
---cloud                     Cloud to deploy the runner
-                            [choices: "aws", "azure", "gcp", "kubernetes"]
---cloud-region              Region where the instance is deployed. Choices:
-                            [us-east, us-west, eu-west, eu-north]. Also
-                            accepts native cloud regions [default: "us-west"]
---cloud-type                Instance type. Choices: [m, l, xl]. Also supports
-                            native types like i.e. t2.micro
---cloud-gpu                 GPU type.
-                            [choices: "nogpu", "k80", "v100", "tesla"]
---cloud-hdd-size            HDD size in GB
---cloud-ssh-private         Custom private RSA SSH key. If not provided an
-                            automatically generated throwaway key will be used
-                            [default: ""]
---cloud-spot                Request a spot instance [boolean]
---cloud-spot-price          Maximum spot instance bidding price in USD.
-                            Defaults to the current spot bidding price
-                            [default: "-1"]
---cloud-startup-script      Run the provided Base64-encoded Linux shell script
-                            during the instance initialization [default: ""]
---cloud-aws-security-group  Specifies the security group in AWS [default: ""]
+--labels                     One or more user-defined labels for
+                             this runner (delimited with commas)
+                             [string] [default: "cml"]
+--idle-timeout               Time to wait for jobs before
+                             shutting down (e.g. "5min"). Use
+                             "never" to disable
+                             [string] [default: "5 minutes"]
+--name                       Name displayed in the repository
+                             once registered
+                             [string] [default: cml-{ID}]
+--no-retry                   Do not restart workflow terminated
+                             due to instance disposal or GitHub
+                             Actions timeout [boolean]
+--single                     Exit after running a single job
+                             [boolean]
+--reuse                      Don't launch a new runner if an
+                             existing one has the same name or
+                             overlapping labels [boolean]
+--reuse-idle                 Creates a new runner only if the
+                             matching labels don't exist or are
+                             already busy [boolean]
+--docker-volumes             Docker volumes, only supported in
+                             GitLab [array] [default: []]
+--cloud                      Cloud to deploy the runner
+                             [string] [choices: "aws", "azure", "gcp", "kubernetes"]
+--cloud-region               Region where the instance is
+                             deployed. Choices: [us-east,
+                             us-west, eu-west, eu-north]. Also
+                             accepts native cloud regions
+                             [string] [default: "us-west"]
+--cloud-type                 Instance type. Choices: [m, l, xl].
+                             Also supports native types like i.e.
+                             t2.micro [string]
+--cloud-permission-set       Specifies the instance profile in
+                             AWS or instance service account in
+                             GCP [string] [default: ""]
+--cloud-metadata             Key Value pairs to associate
+                             cml-runner instance on the provider
+                             i.e. tags/labels "key=value"
+                             [array] [default: []]
+--cloud-gpu                  GPU type. Choices: k80, v100, or
+                             native types e.g. nvidia-tesla-t4
+                             [string]
+--cloud-hdd-size             HDD size in GB [number]
+--cloud-ssh-private          Custom private RSA SSH key. If not
+                             provided an automatically generated
+                             throwaway key will be used [string]
+--cloud-spot                 Request a spot instance [boolean]
+--cloud-spot-price           Maximum spot instance bidding price
+                             in USD. Defaults to the current spot
+                             bidding price [number] [default: -1]
+--cloud-startup-script       Run the provided Base64-encoded
+                             Linux shell script during the
+                             instance initialization [string]
+--cloud-aws-security-group   Specifies the security group in AWS
+                             [string] [default: ""]
+--cloud-aws-subnet,          Specifies the subnet to use within
+--cloud-aws-subnet-id        AWS [string] [default: ""]
+
 ```
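For example, assuming the repository token and cloud credentials are already
exported in the environment (as in the workflow above), a spot runner that
reuses idle capacity and tags its instance might be launched like this (the
label, instance size and metadata values are illustrative):

```bash
# REPO_TOKEN and AWS credentials are expected to be set in the environment
cml runner launch \
  --cloud=aws \
  --cloud-region=us-west \
  --cloud-type=m \
  --cloud-spot \
  --reuse-idle \
  --labels=cml-runner \
  --cloud-metadata="team=ml"
```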

 #### Environment Variables
@@ -556,12 +567,13 @@ CML support proxy via known environment variables `http_proxy` and

 #### On-premise (Local) Runners

-This means using on-premise machines as self-hosted runners. The `cml runner`
-function is used to set up a local self-hosted runner. On a local machine or
-on-premise GPU cluster, [install CML as a package](#local-package) and then run:
+This means using on-premise machines as self-hosted runners. The
+`cml runner launch` function is used to set up a local self-hosted runner. On a
+local machine or on-premise GPU cluster,
+[install CML as a package](#local-package) and then run:

 ```bash
-cml runner \
+cml runner launch \
   --repo=$your_project_repository_url \
   --token=$PERSONAL_ACCESS_TOKEN \
   --labels="local,runner" \
@@ -577,9 +589,13 @@ pre-installed in a custom Docker image pulled by a CI runner. You can also
 install CML as a package:

 ```bash
-npm i -g @dvcorg/cml
+npm install --location=global @dvcorg/cml
 ```

+You can use `cml` without NodeJS by downloading the correct standalone binary
+for your system from the assets section of the
+[releases](https://github.com/iterative/cml/releases).
+
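For example, on a Linux x64 machine this might look roughly like the following
(a hypothetical sketch: the exact asset name differs per release and platform,
so pick the file that matches your system from the release page):

```bash
# Download a release binary (the asset name below is illustrative)
curl -Lo cml https://github.com/iterative/cml/releases/latest/download/cml-linux
chmod +x cml
sudo mv cml /usr/local/bin/
cml --version
```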
 You may need to install additional dependencies to use DVC plots and Vega-Lite
 CLI commands:

@@ -599,15 +615,15 @@ CML and Vega-Lite package installation require the NodeJS package manager
   use a set up action to install NodeJS:

   ```bash
-  uses: actions/setup-node@v2
+  uses: actions/setup-node@v3
   with:
     node-version: '16'
   ```

 - **GitLab**: Requires direct installation.

   ```bash
-  curl -sL https://deb.nodesource.com/setup_12.x | bash
+  curl -sL https://deb.nodesource.com/setup_16.x | bash
   apt-get update
   apt-get install -y nodejs
   ```