
Commit 193561d

Merge branch 'ubuntu20.04'
2 parents 91a3cea + 54ee005 commit 193561d


67 files changed: 18232 additions & 20868 deletions

Dockerfile

Lines changed: 108 additions & 135 deletions
Large diffs are not rendered by default.

README.md

Lines changed: 5 additions & 1 deletion
@@ -572,7 +572,6 @@ Port tunneling is quite useful when you have started any server-based tool withi
 - `8090`: Jupyter server.
 - `8054`: VS Code server.
 - `5901`: VNC server.
-- `3389`: RDP server.
 - `22`: SSH server.

 You can find port information on all the tools in the [supervisor configuration](https://github.com/ml-tooling/ml-workspace/blob/main/resources/supervisor/supervisord.conf).
@@ -1069,6 +1068,11 @@ import sys
 You can do this, but please be aware that this port is <b>not</b> protected by the workspace's authentication mechanism then! For security reasons, we therefore highly recommend to use the <a href="#access-ports">Access Ports</a> functionality of the workspace.
 </details>

+<details>
+<summary><b>System and Tool Translations</b> (click to expand...)</summary>
+If you want to configure another language than English in your workspace and some tools are not translated properly, have a look <a href="https://github.com/ml-tooling/ml-workspace/issues/70#issuecomment-841863145">at this issue</a>. Try to comment out the 'exclude translations' line in `/etc/dpkg/dpkg.cfg.d/excludes` and re-install / configure the package.
+</details>
+
 ---

 <br>
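The new "System and Tool Translations" tip above amounts to re-enabling locale files that the image excludes at the dpkg level. As a rough, hypothetical illustration (the exact rules inside `/etc/dpkg/dpkg.cfg.d/excludes` are not shown in this commit, and `firefox` below is only a placeholder package name), the manual steps could be scripted like this:

```python
# Hypothetical sketch: re-enable dpkg translation files inside the workspace container.
# Assumes the excludes file uses "path-exclude" rules for locale/translation data.
import re
import subprocess

EXCLUDES_FILE = "/etc/dpkg/dpkg.cfg.d/excludes"

with open(EXCLUDES_FILE) as f:
    lines = f.readlines()

with open(EXCLUDES_FILE, "w") as f:
    for line in lines:
        # Comment out any rule that excludes locale or translation files.
        if not line.startswith("#") and re.search(r"path-exclude.*(locale|translations)", line):
            f.write("# " + line)
        else:
            f.write(line)

# Re-install the affected package so its translation files are unpacked again
# ("firefox" is only an example package).
subprocess.run(["apt-get", "install", "--reinstall", "-y", "firefox"], check=True)
```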

build.py

Lines changed: 1 addition & 10 deletions
@@ -13,7 +13,7 @@
 parser = argparse.ArgumentParser(add_help=False)
 parser.add_argument(
     "--" + FLAG_FLAVOR,
-    help="Flavor (full, light, minimal, r, spark, gpu) used for docker container",
+    help="Flavor (full, light, minimal, gpu) used for docker container",
     default="all",
 )

@@ -40,18 +40,9 @@
     args[FLAG_FLAVOR] = "full"
     build_utils.build(".", args)

-    args[FLAG_FLAVOR] = "r"
-    build_utils.build("r-flavor", args)
-
-    args[FLAG_FLAVOR] = "spark"
-    build_utils.build("spark-flavor", args)
-
     args[FLAG_FLAVOR] = "gpu"
     build_utils.build("gpu-flavor", args)

-    args[FLAG_FLAVOR] = "gpu-r"
-    build_utils.build("r-flavor", args)
-
     build_utils.exit_process(0)

 # unknown flavor -> try to build from subdirectory
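With the r, spark, and gpu-r branches removed, the `--flavor=all` path now only cycles through minimal, light, full, and gpu. A condensed, hypothetical sketch of that remaining flow (the real logic lives in `build.py` and uses the project's `build_utils` helpers; the directory mapping below is assumed, not taken from this diff):

```python
# Simplified sketch of the flavor build order after this commit; not the actual build.py.
FLAVOR_BUILD_DIRS = {
    "minimal": "minimal-flavor",  # assumed directory names
    "light": "light-flavor",
    "full": ".",
    "gpu": "gpu-flavor",
}

def build_all(build, args):
    """Build every remaining flavor in order (r/spark/gpu-r are gone)."""
    for flavor, directory in FLAVOR_BUILD_DIRS.items():
        args["flavor"] = flavor
        build(directory, args)

if __name__ == "__main__":
    # Dummy build function so the sketch runs standalone.
    build_all(lambda directory, args: print(f"building {args['flavor']} from {directory}"), {})
```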

docs/update-workspace-image.md

Lines changed: 16 additions & 31 deletions
@@ -1,6 +1,6 @@
 # Workspace Update Process

-We plan to do a full workspace image update (all libraries and tools) about every three month. The full update involves quiet a bit of manual work as documented below:
+We plan to do a full workspace image update (all libraries and tools) about every three months. The full update involves quiet a bit of manual work as documented below:

 1. Refactor incubation zone:

@@ -17,7 +17,7 @@ We plan to do a full workspace image update (all libraries and tools) about ever

 3. Update core (gui) tools:

-    - TigetVNC: [latest release](https://dl.bintray.com/tigervnc/stable/)
+    - TigerVNC: [latest release](https://dl.bintray.com/tigervnc/stable/)
     - noVNC: [latest release](https://github.com/novnc/noVNC/releases/latest)
     - Websockify: [latest release](https://github.com/novnc/websockify/releases/latest)
     - VS Code Server: [latest release](https://github.com/cdr/code-server/releases/latest)
@@ -47,18 +47,16 @@ We plan to do a full workspace image update (all libraries and tools) about ever
     - pycharm.sh: [latest release](https://www.jetbrains.com/pycharm/download/other.html)
     - nteract.sh: [latest release](https://github.com/nteract/nteract/releases/latest)
     - r-runtime.sh: [latest release](https://www.rstudio.com/products/rstudio/download-server/)
-    - rstudio-server.sh: [latest release](https://www.rstudio.com/products/rstudio/download-server/)
-    - rstudio-desktop.sh: [latest release](https://www.rstudio.com/products/rstudio/download/#download)
     - sqlectron.sh: [latest release](https://github.com/sqlectron/sqlectron-gui/releases/latest)
     - zeppelin.sh: [latest release](http://zeppelin.apache.org/download.html)
     - robo3t.sh: [latest release](https://github.com/Studio3T/robomongo/releases/latest)
     - metabase.sh: [latest release](https://github.com/metabase/metabase/releases/latest)
     - fasttext.sh: [latest release](https://github.com/facebookresearch/fastText/releases/latest)
-    - kubernetes-utils.sh: [kube-prompt release](https://github.com/c-bata/kube-prompt/releases/latest), [conftest release](ttps://github.com/open-policy-agent/conftest), [yq release](https://github.com/mikefarah/yq/releases)
+    - kubernetes-utils.sh: [kube-prompt release](https://github.com/c-bata/kube-prompt/releases/latest), [conftest release](https://github.com/open-policy-agent/conftest/releases), [yq release](https://github.com/mikefarah/yq/releases)
     - portainer.sh: [latests release](https://github.com/portainer/portainer/releases/latest)
     - rapids-gpu.sh: [latests release](https://rapids.ai/)

-7. Update `minimmal` and `light` flavor python libraries:
+7. Update `minimmal` and `light` flavor Python libraries:

     - Update requirement files using [piprot](https://github.com/sesh/piprot), [pur](https://github.com/alanhamlett/pip-update-requirements), or [pip-upgrader](https://github.com/simion/pip-upgrader):
        - `piprot ./resources/libraries/requirements-minimal.txt`
@@ -67,7 +65,7 @@ We plan to do a full workspace image update (all libraries and tools) about ever

 8. Build and test `minimal` flavor:

-    - Build minimal workspace flavor via `python build.py --flavor=minimal`
+    - Build minimal workspace flavor via `python build.py --make --flavor=minimal`
     - Run workspace container and check startup logs
     - Check/Compare layer sizes of new image with previous version (via Portainer)
     - Check Image Labels (via Portainer)
@@ -79,16 +77,16 @@ We plan to do a full workspace image update (all libraries and tools) about ever

 9. Build and test `light` flavor:

-    - Build light workspace flavor via `python build.py --flavor=light`
+    - Build light workspace flavor via `python build.py --make --flavor=light`
     - Run workspace container and check startup logs
     - Check/Compare layer sizes of new image with previous version (via Portainer)
     - Check folder sizes via `Disk Usage Analyzer` within the Desktop VNC
-    - Run `/resources/tests/evaluate-python-libraries.ipynb` notebook to update `requirements-full.txt`
+    - Run `/resources/tests/evaluate-py-libraries.ipynb` notebook to update `requirements-full.txt`
     - Run `/resources/tests/test-tool-installers.ipynb` notebook to test installer scripts.

 10. Build and test `full` flavor:

-    - Build main workspace flavor via `python build.py --flavor=full`
+    - Build main workspace flavor via `python build.py --make --flavor=full`
     - Deploy new workspace image and check startup logs
     - Check/Compare layer sizes of new image with previous version (via Portainer)
     - Check Image Labels (via Portainer)
@@ -108,25 +106,12 @@ We plan to do a full workspace image update (all libraries and tools) about ever

 11. Update, build and test `gpu` flavor:

-    - Update CUDA Tooling based on [cuda container images](https://gitlab.com/nvidia/container-images/cuda/)
-    - Decide for CUDA version update based on tensorflow & pytorch support
-    - Update GPU libraries and tooling inside Dockerfile
-    - Build via `python build.py --flavor=gpu`
-    - Test `nvidia-smi` in terminal to check for GPU access
-    - Test image on GPU machine und run `/workspace/tutorials/workspace-test-utilities.ipynb`
-    - Test GPU interface in Netdata and Glances
+    - Update CUDA Tooling based on [cuda container images](https://gitlab.com/nvidia/container-images/cuda/)
+    - Decide for CUDA version update based on tensorflow & pytorch support
+    - Update GPU libraries and tooling inside Dockerfile
+    - Build via `python build.py --flavor=gpu`
+    - Test `nvidia-smi` in terminal to check for GPU access
+    - Test image on GPU machine und run `/workspace/tutorials/workspace-test-utilities.ipynb`
+    - Test GPU interface in Netdata and Glances

-12. Update, build and test `R` flavor:
-
-    - Build via `python build.py --flavor=R`
-    - Run `/workspace/tutorials/test-r-runtime.Rmd` via R kernel.
-    - Test `R Studio Server` tool and run the `/workspace/tutorials/test-r-runtime.Rmd`.
-
-13. Build and test `spark` flavor via `python build.py --flavor=spark`
-
-    - Build via `python build.py --flavor=spark`
-    - Run `/workspace/tutorials/test-spark.ipynb` via Python kernel.
-    - Run `/workspace/tutorials/toree-scala-kernel-tutorial.ipynb` via Toree kernel.
-    - Test `Zeppelin` tool.
-
-14. Build and push all flavors via `python build.py --deploy --version=<VERSION> --flavor=all`
+12. Build and push all flavors via `python build.py --deploy --version=<VERSION> --flavor=all`
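The checklist drives everything through `build.py`; a hedged convenience wrapper chaining the commands documented above (manual testing between the steps is deliberately omitted, and the helper itself is hypothetical) could look like this:

```python
# Hypothetical wrapper around the build commands listed in the update checklist.
import subprocess
import sys

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def update_run(version):
    # Steps 8-10: build minimal, light and full flavors for local testing.
    for flavor in ("minimal", "light", "full"):
        run([sys.executable, "build.py", "--make", "--flavor=" + flavor])
    # Step 11: build the gpu flavor.
    run([sys.executable, "build.py", "--flavor=gpu"])
    # Step 12: build and push all flavors once testing is done.
    run([sys.executable, "build.py", "--deploy", "--version=" + version, "--flavor=all"])

if __name__ == "__main__":
    update_run(sys.argv[1] if len(sys.argv) > 1 else "0.0.0-dev")
```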

gpu-flavor/Dockerfile

Lines changed: 65 additions & 88 deletions
@@ -8,25 +8,27 @@ ENV WORKSPACE_FLAVOR=$ARG_WORKSPACE_FLAVOR
 USER root

 ### NVIDIA CUDA BASE ###
-# https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/10.1/ubuntu18.04-x86_64/base/Dockerfile
-RUN apt-get update && apt-get install -y --no-install-recommends gnupg2 curl ca-certificates && \
-    curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub | apt-key add - && \
-    echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/cuda.list && \
-    echo "deb https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/nvidia-ml.list && \
+# https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/11.2.2/ubuntu20.04-x86_64/base/Dockerfile
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    gnupg2 curl ca-certificates && \
+    curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub | apt-key add - && \
+    echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 /" > /etc/apt/sources.list.d/cuda.list && \
+    echo "deb https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu2004/x86_64 /" > /etc/apt/sources.list.d/nvidia-ml.list && \
     # Cleanup - cannot use cleanup script here, otherwise too much is removed
     apt-get clean && \
     rm -rf $HOME/.cache/* && \
     rm -rf /tmp/* && \
     rm -rf /var/lib/apt/lists/*

-ENV CUDA_VERSION 10.1.243
-ENV CUDA_PKG_VERSION 10-1=$CUDA_VERSION-1
+ENV CUDA_VERSION 11.2.2
+#ENV CUDA_PKG_VERSION 11-2=$CUDA_VERSION-1
+#ENV CUDART_VERSION 11-2=$CUDA_VERSION46-1

 # For libraries in the cuda-compat-* package: https://docs.nvidia.com/cuda/eula/index.html#attachment-a
 RUN apt-get update && apt-get install -y --no-install-recommends \
-    cuda-cudart-$CUDA_PKG_VERSION \
-    cuda-compat-10-1 && \
-    ln -s cuda-10.1 /usr/local/cuda && \
+    cuda-cudart-11-2=11.2.152-1 \
+    cuda-compat-11-2 \
+    && ln -s cuda-11.2 /usr/local/cuda && \
     rm -rf /var/lib/apt/lists/* && \
     # Cleanup - cannot use cleanup script here, otherwise too much is removed
     apt-get clean && \
@@ -35,107 +37,101 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
     rm -rf /var/lib/apt/lists/*

 # Required for nvidia-docker v1
-RUN echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && \
-    echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf
+RUN echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf \
+    && echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf

 ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
-ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}
+ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64

 # nvidia-container-runtime
 # https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec
 # nvidia-container-runtime
 ENV NVIDIA_VISIBLE_DEVICES all
 ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
-ENV NVIDIA_REQUIRE_CUDA "cuda>=10.1 brand=tesla,driver>=396,driver<397 brand=tesla,driver>=410,driver<411 brand=tesla,driver>=418,driver<419"
+ENV NVIDIA_REQUIRE_CUDA "cuda>=11.2 brand=tesla,driver>=418,driver<419 brand=tesla,driver>=440,driver<441 driver>=450"

 ### CUDA RUNTIME ###
-# https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/10.1/ubuntu18.04-x86_64/runtime/Dockerfile
+# https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/11.2.2/ubuntu20.04-x86_64/runtime/Dockerfile

-ENV NCCL_VERSION 2.7.8
+ENV NCCL_VERSION 2.8.4

 RUN apt-get update && apt-get install -y --no-install-recommends \
-    cuda-libraries-$CUDA_PKG_VERSION \
-    cuda-npp-$CUDA_PKG_VERSION \
-    cuda-nvtx-$CUDA_PKG_VERSION \
-    libcublas10=10.2.1.243-1 \
-    libnccl2=$NCCL_VERSION-1+cuda10.1 && \
-    apt-mark hold libnccl2 && \
+    cuda-libraries-11-2=11.2.2-1 \
+    libnpp-11-2=11.3.2.152-1 \
+    cuda-nvtx-11-2=11.2.152-1 \
+    libcublas-11-2=11.4.1.1043-1 \
+    libcusparse-11-2=11.4.1.1152-1 \
+    libnccl2=$NCCL_VERSION-1+cuda11.2 \
+    && rm -rf /var/lib/apt/lists/* \
     # Cleanup - cannot use cleanup script here, otherwise too much is removed
-    apt-get clean && \
-    rm -rf $HOME/.cache/* && \
-    rm -rf /tmp/* && \
-    rm -rf /var/lib/apt/lists/*
+    && apt-get clean \
+    && rm -rf $HOME/.cache/* \
+    && rm -rf /tmp/* \
+    && rm -rf /var/lib/apt/lists/*

-# apt from auto upgrading the cublas package. See https://gitlab.com/nvidia/container-images/cuda/-/issues/88
-RUN apt-mark hold libcublas10
+RUN apt-mark hold libcublas-11-2 libnccl2

 ### END CUDA RUNTIME ###

 ### CUDA DEVEL ###
-# https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/10.1/ubuntu18.04-x86_64/devel/Dockerfile
+# https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/11.2.2/ubuntu20.04-x86_64/devel/Dockerfile
 RUN apt-get update && apt-get install -y --no-install-recommends \
-    cuda-nvml-dev-$CUDA_PKG_VERSION \
-    cuda-command-line-tools-$CUDA_PKG_VERSION \
-    cuda-nvprof-$CUDA_PKG_VERSION \
-    cuda-npp-dev-$CUDA_PKG_VERSION \
-    cuda-libraries-dev-$CUDA_PKG_VERSION \
-    cuda-minimal-build-$CUDA_PKG_VERSION \
-    libcublas-dev=10.2.1.243-1 \
-    libnccl-dev=$NCCL_VERSION-1+cuda10.1 && \
-    apt-mark hold libnccl-dev && \
+    libtinfo5 libncursesw5 \
+    cuda-cudart-dev-11-2=11.2.152-1 \
+    cuda-command-line-tools-11-2=11.2.2-1 \
+    cuda-minimal-build-11-2=11.2.2-1 \
+    cuda-libraries-dev-11-2=11.2.2-1 \
+    cuda-nvml-dev-11-2=11.2.152-1 \
+    libnpp-dev-11-2=11.3.2.152-1 \
+    libnccl-dev=2.8.4-1+cuda11.2 \
+    libcublas-dev-11-2=11.4.1.1043-1 \
+    libcusparse-dev-11-2=11.4.1.1152-1 && \
     # Cleanup - cannot use cleanup script here, otherwise too much is removed
     apt-get clean && \
     rm -rf $HOME/.cache/* && \
     rm -rf /tmp/* && \
     rm -rf /var/lib/apt/lists/*

 # apt from auto upgrading the cublas package. See https://gitlab.com/nvidia/container-images/cuda/-/issues/88
-RUN apt-mark hold libcublas-dev
-
+RUN apt-mark hold libcublas-dev-11-2 libnccl-dev
 ENV LIBRARY_PATH /usr/local/cuda/lib64/stubs

 ### END CUDA DEVEL ###

-### CUDANN7 DEVEL ###
-# https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/10.1/ubuntu18.04-x86_64/devel/cudnn7/Dockerfile
+### CUDANN8 DEVEL ###
+# https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/11.2.2/ubuntu20.04-x86_64/devel/cudnn8/Dockerfile

-ENV CUDNN_VERSION 7.6.5.32
+ENV CUDNN_VERSION 8.1.1.33
 LABEL com.nvidia.cudnn.version="${CUDNN_VERSION}"

-RUN apt-get update && \
-    apt-get install -y --no-install-recommends \
-    libcudnn7=$CUDNN_VERSION-1+cuda10.1 \
-    libcudnn7-dev=$CUDNN_VERSION-1+cuda10.1 && \
-    apt-mark hold libcudnn7 && \
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    libcudnn8=$CUDNN_VERSION-1+cuda11.2 \
+    libcudnn8-dev=$CUDNN_VERSION-1+cuda11.2 \
+    && apt-mark hold libcudnn8 && \
     # Cleanup
     apt-get clean && \
     rm -rf /root/.cache/* && \
     rm -rf /tmp/* && \
     rm -rf /var/lib/apt/lists/*

-### END CUDANN7 ###
+### END CUDANN8 ###

 # Link Cupti:
 ENV LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/usr/local/cuda/extras/CUPTI/lib64

-# Install TensorRT. Requires that libcudnn7 is installed above.
-# https://www.tensorflow.org/install/gpu#ubuntu_1804_cuda_101
-RUN apt-get update && apt-get install -y --no-install-recommends \
-    libnvinfer6=6.0.1-1+cuda10.1 \
-    libnvinfer-dev=6.0.1-1+cuda10.1 \
-    libnvinfer-plugin6=6.0.1-1+cuda10.1 && \
-    # Cleanup
-    clean-layer.sh
-
 ### GPU DATA SCIENCE LIBRARIES ###

 RUN \
     apt-get update && \
     apt-get install -y libomp-dev libopenblas-base && \
-    # Not needed? Install cuda-toolkit (e.g. for pytorch: https://pytorch.org/): https://anaconda.org/anaconda/cudatoolkit
-    conda install -y cudatoolkit=10.1 -c pytorch && \
+    # Install pytorch gpu
+    # uninstall cpu only packages via conda
+    conda remove --force -y pytorch cpuonly && \
+    # https://pytorch.org/get-started/locally/
+    conda install cudatoolkit=11.2 -c pytorch -c nvidia && \
+    pip install --no-cache-dir torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html && \
     # Install cupy: https://cupy.chainer.org/
-    pip install --no-cache-dir cupy-cuda101 && \
+    pip install --no-cache-dir cupy-cuda112 && \
     # Install pycuda: https://pypi.org/project/pycuda
     pip install --no-cache-dir pycuda && \
     # Install gpu utils libs
@@ -144,25 +140,19 @@ RUN \
     pip install --no-cache-dir scikit-cuda && \
     # Install tensorflow gpu
     pip uninstall -y tensorflow tensorflow-cpu intel-tensorflow && \
-    # TODO: tensorflow 2.3.1 installs tenorboard 2.4.0 with problems, use 2.3.0
-    pip install --no-cache-dir tensorflow-gpu==2.3.0 && \
+    pip install --no-cache-dir tensorflow-gpu==2.5.0 && \
     # Install ONNX GPU Runtime
-    # TODO: 1.4.x is latest with cuda 10.1 support
     pip uninstall -y onnxruntime && \
-    pip install --no-cache-dir onnxruntime-gpu==1.4.0 && \
-    # Install pytorch gpu
-    # uninstall cpu only packages via conda
-    conda remove --force -y pytorch cpuonly && \
-    # https://pytorch.org/get-started/locally/
-    conda install -y pytorch -c pytorch && \
-    # Install faiss gpu
-    conda remove --force -y faiss-cpu && \
-    conda install -y faiss-gpu -c pytorch && \
+    pip install --no-cache-dir onnxruntime-gpu==1.8.0 onnxruntime-training==1.8.0 && \
+    # Install faiss gpu - TODO: to large?
+    # conda remove --force -y faiss-cpu && \
+    # conda install -y faiss-gpu -c pytorch && \
     # Update mxnet to gpu edition
     pip uninstall -y mxnet-mkl && \
-    pip install --no-cache-dir mxnet-cu101mkl==1.6.0.post0 && \
+    # cuda111 -> >= 11.1
+    pip install --no-cache-dir mxnet-cu112 && \
     # install jax: https://github.com/google/jax#pip-installation
-    pip install --upgrade jax jaxlib==0.1.57+cuda101 -f https://storage.googleapis.com/jax-releases/jax_releases.html && \
+    pip install --upgrade jax[cuda111] -f https://storage.googleapis.com/jax-releases/jax_releases.html && \
     # Install pygpu - Required for theano: http://deeplearning.net/software/libgpuarray/
     conda install -y pygpu && \
     # Install lightgbm
@@ -177,19 +167,6 @@ RUN \
     # Cleanup
     clean-layer.sh

-# TODO: nvdashboard does not work with relative paths
-# RUN \
-# # Install Jupyterlab GPU Plugin: https://github.com/rapidsai/jupyterlab-nvdashboard
-# pip install jupyterlab-nvdashboard && \
-# jupyter labextension install jupyterlab-nvdashboard && \
-# # Clean jupyter lab cache: https://github.com/jupyterlab/jupyterlab/issues/4930
-# jupyter lab clean && \
-# jlpm cache clean && \
-# # Remove build folder -> should be remove by lab clean as well?
-# rm -rf $CONDA_ROOT/share/jupyter/lab/staging && \
-# # Cleanup
-# clean-layer.sh
-
 # TODO install DALI: https://docs.nvidia.com/deeplearning/dali/user-guide/docs/installation.html#dali-and-ngc
 # TODO: if > Ubuntu 19.04 -> install nvtop: https://github.com/Syllo/nvtop
 # TODO: Install Arrrayfire: https://arrayfire.com/download/ pip install --no-cache-dir arrayfire && \
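Beyond `nvidia-smi` (the check the update doc recommends), the updated frameworks can confirm themselves whether the CUDA 11.2 stack is usable. A minimal smoke-test sketch, assuming it runs inside the built gpu image on a machine with an NVIDIA driver:

```python
# Minimal GPU smoke test for the gpu flavor; versions refer to the pins above
# (torch 1.9.0+cu111, tensorflow-gpu 2.5.0, cupy-cuda112).
import torch
import tensorflow as tf
import cupy

print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda,
      "| GPU available:", torch.cuda.is_available())
print("tensorflow:", tf.__version__, "| GPUs:", tf.config.list_physical_devices("GPU"))

# Tiny computation via CuPy to confirm the CUDA runtime actually executes on the GPU.
x = cupy.arange(10) ** 2
print("cupy sum on GPU:", int(x.sum()))
```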
