The workspace requires **Docker** to be installed on your machine.
Deploying a single workspace instance is as simple as:
```bash
docker run -p 8080:8080 mltooling/ml-workspace:0.13.2
```
Voilà, that was easy! Docker will now pull the workspace image to your machine. This may take a few minutes, depending on your internet speed. Once the workspace has started, you can access it at http://localhost:8080.
```bash
docker run -d \
    -p 8080:8080 \
    -v "${PWD}:/workspace" \
    --env AUTHENTICATE_VIA_JUPYTER="mytoken" \
    --shm-size 512m \
    --restart always \
    mltooling/ml-workspace:0.13.2
```
This command runs the container in the background (`-d`), mounts your current working directory into the `/workspace` folder (`-v`), secures the workspace via a provided token (`--env AUTHENTICATE_VIA_JUPYTER`), provides 512MB of shared memory (`--shm-size`) to prevent unexpected crashes (see the [known issues section](#known-issues)), and keeps the container running even after system restarts (`--restart always`). You can find additional options for `docker run` [here](https://docs.docker.com/engine/reference/commandline/run/) and workspace configuration options in [the section below](#Configuration).
We strongly recommend enabling authentication via one of the following two options:
Activate token-based authentication, which builds on Jupyter's authentication implementation, via the `AUTHENTICATE_VIA_JUPYTER` variable:

```bash
docker run -p 8080:8080 --env AUTHENTICATE_VIA_JUPYTER="mytoken" mltooling/ml-workspace:0.13.2
```
You can also use `<generated>` to let Jupyter generate a random token that is printed to the container logs. A value of `true` will not set a token; instead, every request to any tool in the workspace is checked with the Jupyter instance to verify that the user is authenticated. This is used by tools like JupyterHub, which configures its own way of authentication.
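For example, assuming the container was started with `--name ml-workspace` and `AUTHENTICATE_VIA_JUPYTER="<generated>"`, the token can be recovered from the container logs (a sketch; the exact log line format may vary between versions):

```shell
# Print log lines containing the generated token
docker logs ml-workspace 2>&1 | grep -o "token=[^&[:space:]]*"
```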
Activate basic authentication via the `WORKSPACE_AUTH_USER` and `WORKSPACE_AUTH_PASSWORD` variables:

```bash
docker run -p 8080:8080 --env WORKSPACE_AUTH_USER="user" --env WORKSPACE_AUTH_PASSWORD="pwd" mltooling/ml-workspace:0.13.2
```
Basic authentication is configured via the nginx proxy and might be more performant than the other option, since with `AUTHENTICATE_VIA_JUPYTER` every request to any tool in the workspace is checked with the Jupyter instance to verify that the user (based on the request cookies) is authenticated.
If you want to host the workspace on a public domain, we recommend using [Let's Encrypt](https://letsencrypt.org/getting-started/) to get a trusted certificate for your domain. When using a certificate generated via the [certbot](https://certbot.eff.org/) tool, the `privkey.pem` file corresponds to the `cert.key` file and `fullchain.pem` to the `cert.crt` file.
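Assuming certbot's default output layout (`/etc/letsencrypt/live/<your-domain>/`), the renaming can be sketched as follows (the target directory is an assumption; place the files wherever your workspace setup expects them):

```shell
# Copy the Let's Encrypt files under the names the workspace expects
# (replace example.com with your domain)
cp /etc/letsencrypt/live/example.com/privkey.pem ./cert.key
cp /etc/letsencrypt/live/example.com/fullchain.pem ./cert.crt
```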
By default, the workspace container has no resource constraints and can use as much of a given resource as the host's kernel scheduler allows.
For example, the following command restricts the workspace to only use a maximum of 8 CPUs, 16 GB of memory, and 1 GB of shared memory (see [Known Issues](#known-issues)):
```bash
docker run -p 8080:8080 --cpus=8 --memory=16g --shm-size=1G mltooling/ml-workspace:0.13.2
```
> 📖 _For more options and documentation on resource constraints, please refer to the [official docker guide](https://docs.docker.com/config/containers/resource_constraints/)._
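To check that such limits are actually in effect, you can read the memory limit from inside the running container (a sketch; the first path applies to cgroup v1, the second to cgroup v2):

```shell
# Prints the container memory limit in bytes (cgroup v1 or v2 layout)
cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null \
  || cat /sys/fs/cgroup/memory.max 2>/dev/null
```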
In addition to the main workspace image (`mltooling/ml-workspace`), we provide other image flavors:
The minimal flavor (`mltooling/ml-workspace-minimal`) is our smallest image; it contains most of the tools and features described in the [features section](#features) but omits most of the Python libraries that are pre-installed in our main image. Any excluded Python library or tool can be installed manually at runtime by the user.
```bash
docker run -p 8080:8080 mltooling/ml-workspace-minimal:0.13.2
```
</details>
The GPU flavor (`mltooling/ml-workspace-gpu`) is based on our default workspace image with additional GPU support:

```bash
docker run -p 8080:8080 --runtime nvidia --env NVIDIA_VISIBLE_DEVICES="all" mltooling/ml-workspace-gpu:0.13.2
```
The GPU flavor also comes with a few additional configuration options, as explained below:
The workspace is designed as a single-user development environment. For a multi-user setup, we recommend [ML Hub](https://github.com/ml-tooling/ml-hub).
ML Hub makes it easy to set up a multi-user environment on a single server (via Docker) or a cluster (via Kubernetes) and supports a variety of usage scenarios & authentication providers. You can try out ML Hub via:
```bash
docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock mltooling/ml-hub:latest
```
For more information and documentation about ML Hub, please take a look at its [GitHub page](https://github.com/ml-tooling/ml-hub).
To run Python code as a job, you need to provide a path or URL to a code directory.
You can execute code directly from Git, Mercurial, Subversion, or Bazaar by using the pip-vcs format as described in [this guide](https://pip.pypa.io/en/stable/reference/pip_install/#vcs-support). For example, to execute code from a [subdirectory](https://github.com/ml-tooling/ml-workspace/tree/main/resources/tests/ml-job) of a git repository, just run:
```bash
docker run --env EXECUTE_CODE="git+https://github.com/ml-tooling/ml-workspace.git#subdirectory=resources/tests/ml-job" mltooling/ml-workspace:0.13.2
```
> 📖 _For additional information on how to specify branches, commits, or tags, please refer to [this guide](https://pip.pypa.io/en/stable/reference/pip_install/#vcs-support)._
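For instance, a branch or commit can be pinned by appending `@<ref>` to the repository URL before the `#subdirectory` fragment (the refs below are illustrative):

```
git+https://github.com/ml-tooling/ml-workspace.git@main#subdirectory=resources/tests/ml-job
git+https://github.com/ml-tooling/ml-workspace.git@<commit-hash>#subdirectory=resources/tests/ml-job
```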
In the following example, we mount and execute the current working directory (expected to contain our code) into the `/workspace/ml-job/` directory of the workspace:
```bash
docker run -v "${PWD}:/workspace/ml-job/" --env EXECUTE_CODE="/workspace/ml-job/" mltooling/ml-workspace:0.13.2
```
It is also possible to embed your code directly into a custom job image, as shown below:
```dockerfile
FROM mltooling/ml-workspace:0.13.2

# Add job code to image
COPY ml-job /workspace/ml-job
```
The workspace can be extended in many ways at runtime, as explained in the extensibility section.
```dockerfile
# Extend from any of the workspace versions/flavors
FROM mltooling/ml-workspace:0.13.2

# Run your customizations, e.g. install additional packages
RUN \
    apt-get update && \
    apt-get install -y <package-name>
```
Certain desktop tools (e.g., recent versions of [Firefox](https://github.com/jlesage/docker-firefox#increasing-shared-memory-size)) or libraries (e.g., PyTorch; see issues [1](https://github.com/pytorch/pytorch/issues/2244), [2](https://github.com/pytorch/pytorch/issues/1355)) might crash if the shared memory size (`/dev/shm`) is too small. Docker's default shared memory size is 64MB, which might not be enough for some tools. You can provide a larger shared memory size via the `shm-size` docker run option:
```bash
docker run --shm-size=2G mltooling/ml-workspace:0.13.2
```
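Inside the running container you can confirm the effective size of the shared memory mount:

```shell
# Shows the size of /dev/shm (e.g., 2.0G after --shm-size=2G)
df -h /dev/shm
```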