# Important notice
***In order to unify and simplify the build system we had to make choices. TC is currently only officially supported on Ubuntu 16.04 with gcc 5.4.0.***

Other configurations may work too but are not yet officially supported.
For more information about setting up the config that we use to build the conda dependencies, see the following [Dockerfile](conda_recipes/Dockerfile).
Create a new environment in which TC will be built and install core dependencies:
```
conda create -y --name tc_build python=3.6
conda activate tc_build
conda install -y pyyaml mkl-include pytest
conda install -y -c nicolasvasilache llvm-tapir50 halide
```

Then install the PyTorch version that corresponds to your system binaries (e.g. for PyTorch with cuda 9.0):
```
conda install -y -c pytorch pytorch torchvision cuda90
conda remove -y cudatoolkit --force
```
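
Before building, it can be worth sanity-checking that the conda-provided toolchain and PyTorch are the ones being picked up. This is a minimal optional check, assuming the `tc_build` environment is active (the `llvm-config` binary comes with the llvm-tapir50 package used by `build.sh` below):
```
# llvm-config should resolve inside the conda environment.
${CONDA_PREFIX}/bin/llvm-config --version

# PyTorch should report the cuda version it was built against (9.0 here).
python -c 'import torch; print(torch.version.cuda)'
```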

***Note*** As of PyTorch 0.4, PyTorch links cuda libraries dynamically and it
pulls a cudatoolkit dependency. However, cudatoolkit can never replace a system
installation because it cannot package libcuda.so (which comes with the driver,
not the toolkit).
As a consequence cudatoolkit only contains redundant libraries and we remove it
explicitly. In the near future, the unified PyTorch + Caffe2 build system will link
everything statically and stop pulling the cudatoolkit dependency.
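
Since libcuda.so must come from the system driver rather than from conda, a quick way to confirm it is visible is to query the dynamic linker cache. This is an optional check, not part of the official flow:
```
# libcuda.so ships with the NVIDIA driver; it should resolve system-wide.
ldconfig -p | grep libcuda
```
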
# Activate preinstalled conda in your current terminal

Once the first time configuration above has been completed, one should activate conda in
each new terminal window (or add this to your `.bashrc` or equivalent):
```
conda activate tc_build
```

# Build TC with dependencies supplied by conda
```
CLANG_PREFIX=$(${CONDA_PREFIX}/bin/llvm-config --prefix) ./build.sh
```
You may need to pass the environment variable `CUDA_TOOLKIT_ROOT_DIR` pointing
to your cuda installation (this is required for `FindCUDA.cmake` to find your cuda installation
and can be omitted on most systems). When required, passing `CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda`
is generally sufficient.
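
For example, assuming cuda is installed under the default `/usr/local/cuda` prefix (adjust the path for your system), the full invocation would look like:
```
# /usr/local/cuda is an assumption here, not a requirement.
CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
CLANG_PREFIX=$(${CONDA_PREFIX}/bin/llvm-config --prefix) ./build.sh
```
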
# Test locally
Run C++ tests:
```
./test.sh
```

Install the TC Python package locally to `/tmp` for a smoke check:
```
python setup.py install --prefix=/tmp
export PYTHONPATH=${PYTHONPATH}:$(find /tmp/lib -name site-packages)
python -c 'import tensor_comprehensions'
```

Run Python tests:
```
./test_python/run_test.sh
```

At this point, if things work as expected, you can venture installing as follows
(it is always a good idea to record the installed files for easy removal):
```
python setup.py install --record tc_files.txt
```
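
Should you later want to uninstall, the recorded list makes removal straightforward. A minimal sketch, assuming `tc_files.txt` was produced by the step above:
```
# Remove every file recorded at install time
# (paths containing spaces would need extra care).
xargs rm -f < tc_files.txt
```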

# Advanced / development mode installation

## Optional dependencies
Optionally, if you want to use Caffe2 (this is necessary for building the C++ benchmarks,
since Caffe2 is our baseline):
```
conda install -y -c conda-forge eigen
conda install -y -c nicolasvasilache caffe2
```

## Cudnn version 7.1 in Caffe2 / dev mode
***Note*** As of PyTorch 0.4, we need to package our own Caffe2. The current PyTorch + Caffe2
build system links cudnn dynamically. The version of cudnn that is linked dynamically
is imposed on us by the docker image supported by NVIDIA
[Dockerfile](conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile).
For now this cudnn version is cudnn 7.1.

If for some reason one cannot install cudnn 7.1 system-wide, one may resort to the
following:
```
conda install -c anaconda cudnn
conda remove -y cudatoolkit --force
```

***Note*** cudnn pulls a cudatoolkit dependency but this can never replace a system
installation because it cannot package libcuda.so (which comes with the driver,
not the toolkit).
As a consequence cudatoolkit only contains redundant libraries and we remove it
explicitly. In the near future, the unified PyTorch + Caffe2 build system will link
everything statically and we will not need to worry about cudnn anymore.
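
To check which cudnn version conda actually resolved to (it should match the 7.1 pinned by the NVIDIA docker image), one can list the package; a quick optional check:
```
# Show the cudnn build conda installed.
conda list cudnn
```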