This repository was archived by the owner on Apr 28, 2023. It is now read-only.

Commit e96e913

Update BUILD.md
Splitting Caffe2 / dev + C++ benchmark mode build instructions from more mundane PyTorch-only installation.
1 parent a9057d9 commit e96e913


BUILD.md (48 additions, 34 deletions)

@@ -1,5 +1,5 @@
 # Important notice
-***In order to uniformize and simplify the build system we had to make choices. TC is currently only officially supported on Ubuntu 16.04 with gcc 5.4.0, cuda 9.0 and cudnn 7.***
+***In order to uniformize and simplify the build system we had to make choices. TC is currently only officially supported on Ubuntu 16.04 with gcc 5.4.0.***
 Other configurations may work too but are not yet officially supported.
 For more information about setting up the config that we use to build the conda dependencies see the following [Dockerfile](conda_recipes/Dockerfile).
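
Before starting, a quick sanity check of the officially supported toolchain might look like this (assuming standard Ubuntu tooling is installed):
```
gcc --version | head -n 1   # expect gcc 5.4.0 on an officially supported setup
lsb_release -ds             # expect an Ubuntu 16.04.x description
```
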
@@ -24,9 +24,13 @@ Create a new environment in which TC will be built and install core dependencies
 conda create -y --name tc_build python=3.6
 conda activate tc_build
 conda install -y pyyaml mkl-include pytest
+conda install -y -c nicolasvasilache llvm-tapir50 halide
+```
+
+Then install the PyTorch version that corresponds to your system binaries (e.g. for PyTorch with cuda 9.0):
+```
 conda install -y -c pytorch pytorch torchvision cuda90
 conda remove -y cudatoolkit --force
-conda install -y -c nicolasvasilache llvm-tapir50 halide
 ```

 ***Note*** As of PyTorch 0.4, PyTorch links cuda libraries dynamically and it
@@ -36,14 +40,6 @@ As a consequence cudatoolkit only contains redundant libraries and we remove it
 explicitly. In the near future, the unified PyTorch + Caffe2 build system will link
 everything statically and stop pulling the cudatoolkit dependency.
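
After the conda steps above, a quick check that PyTorch still finds cuda from the system installation after cudatoolkit is removed (a hypothetical smoke test, assuming the tc_build environment is active and nvcc is on the PATH):
```
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
nvcc --version | grep release
```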

-# Optional dependencies
-Optionally if you want to use Caffe2 (which is necessary for building the C++ benchmarks
-since Caffe2 is our baseline):
-```
-conda install -y -c conda-forge eigen
-conda install -y -c nicolasvasilache caffe2
-```
-
 # Activate preinstalled conda in your current terminal

 Once the first time configuration above has been completed, one should activate conda in
@@ -54,41 +50,22 @@ equivalent)
 conda activate tc_build
 ```

-# Cudnn version
-***Note*** As of PyTorch 0.4, we need to package our own Caffe2. The current PyTorch + Caffe2
-build system links cudnn dynamically. The version of cudnn that is linked dynamically
-is imposed on us by the docker image supported by NVIDIA
-[Dockerfile](conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile).
-For now this cudnn version is cudnn 7.1.
-If for some reason, one cannot install cudnn 7.1 system-wide, one may resort to the
-following:
-```
-conda install -c anaconda cudnn
-conda remove -y cudatoolkit --force
-```
-
-***Note*** cudnn pulls a cudatoolkit dependency but this can never replace a system
-installation because it cannot package libcuda.so (which comes with the driver,
-not the toolkit).
-As a consequence cudatoolkit only contains redundant libraries and we remove it
-explicitly. In the near future, the unified PyTorch + Caffe2 build system will link
-everything statically and we will not need to worry about cudnn anymore.
-
-# Build TC with dependencies supplied by conda (including cudnn 7.1)
+# Build TC with dependencies supplied by conda
 ```
 CLANG_PREFIX=$(${CONDA_PREFIX}/bin/llvm-config --prefix) ./build.sh
 ```
 You may need to pass the environment variable `CUDA_TOOLKIT_ROOT_DIR` pointing
 to your cuda installation (this is required for `FindCUDA.cmake` to find your cuda installation
-and can be omitted on most systems).
+and can be omitted on most systems). When required, passing `CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda`
+is generally sufficient.

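When the hint is needed, it can simply be prepended to the build command, for example (assuming cuda is installed under /usr/local/cuda):
```
CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda CLANG_PREFIX=$(${CONDA_PREFIX}/bin/llvm-config --prefix) ./build.sh
```
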
 # Test locally
 Run C++ tests:
 ```
 ./test.sh
 ```

-Install the TC Python package locally to `/tmp`:
+Install the TC Python package locally to `/tmp` for smoke checking:
 ```
 python setup.py install --prefix=/tmp
 export PYTHONPATH=${PYTHONPATH}:$(find /tmp/lib -name site-packages)
@@ -103,4 +80,41 @@ python -c 'import tensor_comprehensions'
 Run Python tests:
 ```
 ./test_python/run_test.sh
-```
+```
+
+At this point, if things work as expected you can venture installing as follows
+(always a good idea to record installed files for easy removal):
+```
+python setup.py install --record tc_files.txt
+```
+
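The recorded file list makes removal straightforward later on; a minimal sketch, assuming the recorded paths contain no spaces:
```
xargs rm -rf < tc_files.txt
```
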
+# Advanced / development mode installation
+
+## Optional dependencies
+Optionally if you want to use Caffe2 (this is necessary for building the C++ benchmarks
+since Caffe2 is our baseline):
+```
+conda install -y -c conda-forge eigen
+conda install -y -c nicolasvasilache caffe2
+```
+
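A quick way to confirm that the Caffe2 package imports cleanly from the tc_build environment (a hypothetical check):
```
python -c 'from caffe2.python import core' && echo "Caffe2 OK"
```
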
+## Cudnn version 7.1 in Caffe2 / dev mode
+***Note*** As of PyTorch 0.4, we need to package our own Caffe2. The current PyTorch + Caffe2
+build system links cudnn dynamically. The version of cudnn that is linked dynamically
+is imposed on us by the docker image supported by NVIDIA
+[Dockerfile](conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile).
+For now this cudnn version is cudnn 7.1.
+
+If for some reason, one cannot install cudnn 7.1 system-wide, one may resort to the
+following:
+```
+conda install -c anaconda cudnn
+conda remove -y cudatoolkit --force
+```
+
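To see which cudnn version is already installed system-wide before resorting to the conda package, one can inspect the header; /usr/include/cudnn.h is a common default, but the path may differ on your system:
```
grep -A 2 'CUDNN_MAJOR' /usr/include/cudnn.h | head -n 3
```
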
+***Note*** cudnn pulls a cudatoolkit dependency but this can never replace a system
+installation because it cannot package libcuda.so (which comes with the driver,
+not the toolkit).
+As a consequence cudatoolkit only contains redundant libraries and we remove it
+explicitly. In the near future, the unified PyTorch + Caffe2 build system will link
+everything statically and we will not need to worry about cudnn anymore.
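
One possible way to see which cuda and cudnn libraries are actually resolved dynamically at runtime (a hypothetical check against the PyTorch extension module):
```
ldd $(python -c "import torch; print(torch._C.__file__)") | grep -E 'cudnn|cudart'
```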
