
Commit 2bc8ce9

Finalize release for version 0.12.1 (#68)

2 parents 52d00a1 + 2e7d575

1 file changed: +17 −17 lines

README.md (17 additions & 17 deletions)
@@ -57,7 +57,7 @@ The workspace requires **Docker** to be installed on your machine ([📖 Install
Deploying a single workspace instance is as simple as:

```bash
-docker run -p 8080:8080 mltooling/ml-workspace:0.11.0
+docker run -p 8080:8080 mltooling/ml-workspace:0.12.1
```

Voilà, that was easy! Now, Docker will pull the workspace image to your machine. This may take a few minutes, depending on your internet speed. Once the workspace has started, you can access it via http://localhost:8080.
@@ -74,7 +74,7 @@ docker run -d \
--env AUTHENTICATE_VIA_JUPYTER="mytoken" \
--shm-size 512m \
--restart always \
-mltooling/ml-workspace:0.11.0
+mltooling/ml-workspace:0.12.1
```

This command runs the container in the background (`-d`), mounts your current working directory into the `/workspace` folder (`-v`), secures the workspace via a provided token (`--env AUTHENTICATE_VIA_JUPYTER`), provides 512MB of shared memory (`--shm-size`) to prevent unexpected crashes (see the [known issues section](#known-issues)), and keeps the container running even after system restarts (`--restart always`). You can find additional options for docker run [here](https://docs.docker.com/engine/reference/commandline/run/) and workspace configuration options in [the section below](#Configuration).
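
For reference, the full command this hunk updates might look as follows when assembled from the options described above (a sketch; the token and the mounted path are placeholders):

```bash
docker run -d \
    -p 8080:8080 \
    -v "${PWD}:/workspace" \
    --env AUTHENTICATE_VIA_JUPYTER="mytoken" \
    --shm-size 512m \
    --restart always \
    mltooling/ml-workspace:0.12.1
```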
@@ -183,7 +183,7 @@ We strongly recommend enabling authentication via one of the following two optio
Activate the token-based authentication, based on the authentication implementation of Jupyter, via the `AUTHENTICATE_VIA_JUPYTER` variable:

```bash
-docker run -p 8080:8080 --env AUTHENTICATE_VIA_JUPYTER="mytoken" mltooling/ml-workspace:0.11.0
+docker run -p 8080:8080 --env AUTHENTICATE_VIA_JUPYTER="mytoken" mltooling/ml-workspace:0.12.1
```

You can also use `<generated>` to let Jupyter generate a random token that is printed out in the container logs. A value of `true` will not set any token, but every request to any tool in the workspace will then be checked with the Jupyter instance to verify that the user is authenticated. This is used by tools like JupyterHub, which configure their own way of authentication.
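
As a minimal sketch of the generated-token option mentioned above:

```bash
# Jupyter generates a random token and prints it in the container logs:
docker run -p 8080:8080 --env AUTHENTICATE_VIA_JUPYTER="<generated>" mltooling/ml-workspace:0.12.1
```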
@@ -193,7 +193,7 @@ You can also use `<generated>` to let Jupyter generate a random token that is pr
Activate the basic authentication via the `WORKSPACE_AUTH_USER` and `WORKSPACE_AUTH_PASSWORD` variables:

```bash
-docker run -p 8080:8080 --env WORKSPACE_AUTH_USER="user" --env WORKSPACE_AUTH_PASSWORD="pwd" mltooling/ml-workspace:0.11.0
+docker run -p 8080:8080 --env WORKSPACE_AUTH_USER="user" --env WORKSPACE_AUTH_PASSWORD="pwd" mltooling/ml-workspace:0.12.1
```

The basic authentication is configured via the nginx proxy and might be more performant than the other option, since with `AUTHENTICATE_VIA_JUPYTER` every request to any tool in the workspace is checked with the Jupyter instance to verify that the user (based on the request cookies) is authenticated.
@@ -214,7 +214,7 @@ docker run \
-p 8080:8080 \
--env WORKSPACE_SSL_ENABLED="true" \
-v /path/with/certificate/files:/resources/ssl:ro \
-mltooling/ml-workspace:0.11.0
+mltooling/ml-workspace:0.12.1
```

If you want to host the workspace on a public domain, we recommend using [Let's Encrypt](https://letsencrypt.org/getting-started/) to get a trusted certificate for your domain. To use a generated certificate (e.g., via the [certbot](https://certbot.eff.org/) tool) for the workspace, the `privkey.pem` file corresponds to the `cert.key` file and the `fullchain.pem` file to the `cert.crt` file.
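
As a sketch of the certificate mapping described above (assuming certbot placed its output in the standard `/etc/letsencrypt/live/<your-domain>/` directory):

```bash
# Copy the Let's Encrypt files under the names the workspace expects,
# then mount the folder to /resources/ssl as in the command above.
mkdir -p /path/with/certificate/files
cp /etc/letsencrypt/live/<your-domain>/privkey.pem /path/with/certificate/files/cert.key
cp /etc/letsencrypt/live/<your-domain>/fullchain.pem /path/with/certificate/files/cert.crt
```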
@@ -235,7 +235,7 @@ By default, the workspace container has no resource constraints and can use as m
For example, the following command restricts the workspace to use a maximum of 8 CPUs, 16 GB of memory, and 1 GB of shared memory (see [Known Issues](#known-issues)):

```bash
-docker run -p 8080:8080 --cpus=8 --memory=16g --shm-size=1G mltooling/ml-workspace:0.11.0
+docker run -p 8080:8080 --cpus=8 --memory=16g --shm-size=1G mltooling/ml-workspace:0.12.1
```

> 📖 _For more options and documentation on resource constraints, please refer to the [official docker guide](https://docs.docker.com/config/containers/resource_constraints/)._
@@ -264,7 +264,7 @@ In addition to the main workspace image (`mltooling/ml-workspace`), we provide o
The minimal flavor (`mltooling/ml-workspace-minimal`) is our smallest image; it contains most of the tools and features described in the [features section](#features) but omits most of the Python libraries that are pre-installed in our main image. Any Python library or excluded tool can be installed manually at runtime by the user.

```bash
-docker run -p 8080:8080 mltooling/ml-workspace-minimal:0.11.0
+docker run -p 8080:8080 mltooling/ml-workspace-minimal:0.12.1
```
</details>
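
For example, a missing Python library can be added from a terminal inside the running workspace (a sketch; `pandas` is just an example package):

```bash
# Run inside the workspace terminal, not on the host:
pip install pandas
```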

@@ -282,7 +282,7 @@ docker run -p 8080:8080 mltooling/ml-workspace-minimal:0.11.0
The R flavor (`mltooling/ml-workspace-r`) is based on our default workspace image and extends it with the R interpreter, the R Jupyter kernel, RStudio Server (access via `Open Tool -> RStudio`), and a variety of popular packages from the R ecosystem.

```bash
-docker run -p 8080:8080 mltooling/ml-workspace-r:0.11.0
+docker run -p 8080:8080 mltooling/ml-workspace-r:0.12.1
```
</details>

@@ -300,7 +300,7 @@ docker run -p 8080:8080 mltooling/ml-workspace-r:0.11.0
The Spark flavor (`mltooling/ml-workspace-spark`) is based on our R-flavor workspace image and extends it with the Spark runtime, the Spark Jupyter kernel, Zeppelin Notebook (access via `Open Tool -> Zeppelin`), PySpark, Hadoop, a Java kernel, and a few additional libraries & Jupyter extensions.

```bash
-docker run -p 8080:8080 mltooling/ml-workspace-spark:0.11.0
+docker run -p 8080:8080 mltooling/ml-workspace-spark:0.12.1
```

</details>
@@ -324,13 +324,13 @@ The GPU flavor (`mltooling/ml-workspace-gpu`) is based on our default workspace
- (Docker >= 19.03) Nvidia Container Toolkit ([📖 Instructions](https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(Native-GPU-Support))).

```bash
-docker run -p 8080:8080 --gpus all mltooling/ml-workspace-gpu:0.11.0
+docker run -p 8080:8080 --gpus all mltooling/ml-workspace-gpu:0.12.1
```

- (Docker < 19.03) Nvidia Docker 2.0 ([📖 Instructions](https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(version-2.0))).

```bash
-docker run -p 8080:8080 --runtime nvidia --env NVIDIA_VISIBLE_DEVICES="all" mltooling/ml-workspace-gpu:0.11.0
+docker run -p 8080:8080 --runtime nvidia --env NVIDIA_VISIBLE_DEVICES="all" mltooling/ml-workspace-gpu:0.12.1
```

The GPU flavor also comes with a few additional configuration options, as explained below:
@@ -369,7 +369,7 @@ The workspace is designed as a single-user development environment. For a multi-
ML Hub makes it easy to set up a multi-user environment on a single server (via Docker) or a cluster (via Kubernetes) and supports a variety of usage scenarios & authentication providers. You can try out ML Hub via:

```bash
-docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock mltooling/ml-hub:0.11.0
+docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock mltooling/ml-hub:0.12.1
```

For more information and documentation about ML Hub, please take a look at the [GitHub site](https://github.com/ml-tooling/ml-hub).
@@ -728,7 +728,7 @@ To run Python code as a job, you need to provide a path or URL to a code directo
You can execute code directly from Git, Mercurial, Subversion, or Bazaar by using the pip-vcs format as described in [this guide](https://pip.pypa.io/en/stable/reference/pip_install/#vcs-support). For example, to execute code from a [subdirectory](https://github.com/ml-tooling/ml-workspace/tree/main/resources/tests/ml-job) of a git repository, just run:

```bash
-docker run --env EXECUTE_CODE="git+https://github.com/ml-tooling/ml-workspace.git#subdirectory=resources/tests/ml-job" mltooling/ml-workspace:0.11.0
+docker run --env EXECUTE_CODE="git+https://github.com/ml-tooling/ml-workspace.git#subdirectory=resources/tests/ml-job" mltooling/ml-workspace:0.12.1
```

> 📖 _For additional information on how to specify branches, commits, or tags, please refer to [this guide](https://pip.pypa.io/en/stable/reference/pip_install/#vcs-support)._
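
Following the pip VCS syntax referenced above, a branch or tag can be pinned with `@` (a sketch; `v0.12.1` is an example tag, not necessarily one that exists in the repository):

```bash
docker run --env EXECUTE_CODE="git+https://github.com/ml-tooling/ml-workspace.git@v0.12.1#subdirectory=resources/tests/ml-job" mltooling/ml-workspace:0.12.1
```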
@@ -738,7 +738,7 @@ docker run --env EXECUTE_CODE="git+https://github.com/ml-tooling/ml-workspace.gi
In the following example, we mount the current working directory (expected to contain our code) into the `/workspace/ml-job/` directory of the workspace and execute it:

```bash
-docker run -v "${PWD}:/workspace/ml-job/" --env EXECUTE_CODE="/workspace/ml-job/" mltooling/ml-workspace:0.11.0
+docker run -v "${PWD}:/workspace/ml-job/" --env EXECUTE_CODE="/workspace/ml-job/" mltooling/ml-workspace:0.12.1
```

#### Install Dependencies
@@ -764,7 +764,7 @@ python /resources/scripts/execute_code.py /path/to/your/job
It is also possible to embed your code directly into a custom job image, as shown below:

```dockerfile
-FROM mltooling/ml-workspace:0.11.0
+FROM mltooling/ml-workspace:0.12.1

# Add job code to image
COPY ml-job /workspace/ml-job
```
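
A job image built from this Dockerfile could then be executed like this (a sketch; the `my-ml-job` tag is a placeholder):

```bash
docker build -t my-ml-job .
docker run --env EXECUTE_CODE="/workspace/ml-job" my-ml-job
```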
@@ -829,7 +829,7 @@ The workspace can be extended in many ways at runtime, as explained [here](#exte

```dockerfile
# Extend from any of the workspace versions/flavors
-FROM mltooling/ml-workspace:0.11.0
+FROM mltooling/ml-workspace:0.12.1

# Run your customizations, e.g.
RUN \
```
@@ -1082,7 +1082,7 @@ You can do this, but please be aware that this port is <b>not</b> protected by t
Certain desktop tools (e.g., recent versions of [Firefox](https://github.com/jlesage/docker-firefox#increasing-shared-memory-size)) or libraries (e.g., PyTorch - see issues [1](https://github.com/pytorch/pytorch/issues/2244) and [2](https://github.com/pytorch/pytorch/issues/1355)) might crash if the shared memory size (`/dev/shm`) is too small. The default shared memory size of Docker is 64MB, which might not be enough for some tools. You can provide a higher shared memory size via the `shm-size` docker run option:

```bash
-docker run --shm-size=2G mltooling/ml-workspace:0.11.0
+docker run --shm-size=2G mltooling/ml-workspace:0.12.1
```

</details>
