# Important notice

***In order to make the build system uniform and simple, we had to make choices. TC is currently only officially supported on Ubuntu 16.04 with gcc 5.4.0, CUDA 9.0 and cuDNN 7.***
Other configurations may work too but are not yet officially supported.
For more information about the configuration we use to build the conda dependencies, see the following [Dockerfile](conda_recipes/Dockerfile).

Our main goal with this decision is to make the build procedure extremely simple: reproducible internally and extensible to new targets in the future.
In particular, the gcc-4 / gcc-5 ABI switch is not something we want to concern ourselves with at this point, so we standardize on gcc 5.4.0.

9 |
# Conda from scratch (first-time configuration)
Choose and set an `INSTALLATION_PATH`, then run the following:

```
wget https://repo.anaconda.com/archive/Anaconda3-5.1.0-Linux-x86_64.sh -O anaconda.sh && \
  chmod +x anaconda.sh && \
  ./anaconda.sh -b -p ${INSTALLATION_PATH} && \
  rm anaconda.sh

. ${INSTALLATION_PATH}/bin/activate
conda update -y -n base conda
```
21 |

Create a new environment in which TC will be built and install core dependencies:
```
conda create -y --name tc_build python=3.6
conda activate tc_build
conda install -y pyyaml mkl-include pytest
conda install -y -c pytorch pytorch torchvision cuda90
conda remove -y cudatoolkit --force
conda install -y -c nicolasvasilache llvm-tapir50 halide
```
31 |

***Note*** As of PyTorch 0.4, PyTorch links the CUDA libraries dynamically and
pulls in cudatoolkit. However, cudatoolkit can never replace a system installation
because it cannot package libcuda.so (which comes with the driver, not the toolkit).
As a consequence, cudatoolkit only contains redundant libraries and we remove it
explicitly. In the near future, the unified PyTorch + Caffe2 build system will link
everything statically and stop pulling in the cudatoolkit dependency.

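The distinction above can be checked on a given machine with a small diagnostic. The sketch below is an illustration, not part of the build: it asks the dynamic loader which CUDA libraries it currently sees, where `libcuda.so` can only come from the NVIDIA driver and `libcudart.so` from a toolkit.

```shell
# Diagnostic sketch (assumes Linux/ldconfig). On a machine without a CUDA
# driver or toolkit the output is simply empty.
ldconfig -p 2>/dev/null | grep -E 'libcuda\.so|libcudart\.so' || true
```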
39 |
# Optional dependencies
Optionally, if you want to use Caffe2 (which is necessary for building the C++ benchmarks,
since Caffe2 is our baseline):
```
conda install -y -c conda-forge eigen
conda install -y -c nicolasvasilache caffe2
```
46 |

# Activate preinstalled conda in your current terminal

Once the first-time configuration above has been completed, activate conda explicitly in
each new terminal window (adding this to your `.bashrc` or equivalent is discouraged):
```
. ${CONDA_PATH}/bin/activate
conda activate tc_build
```
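Since putting the activation in `.bashrc` is discouraged, one convenient alternative is a small on-demand helper. This is only a sketch: the file name `~/tc_env.sh` and the function name `tc_env` are assumptions, and `CONDA_PATH` must be set as above.

```shell
# Hypothetical helper: save as e.g. ~/tc_env.sh, `source` it in a new
# terminal, then run `tc_env` only when you actually need the environment.
tc_env() {
  # CONDA_PATH must point at the Anaconda installation chosen earlier.
  . "${CONDA_PATH}/bin/activate" || return 1
  conda activate tc_build
}
```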
56 |

# cuDNN version
***Note*** As of PyTorch 0.4, we need to package our own Caffe2. The current PyTorch + Caffe2
build system links cudnn dynamically. The version of cudnn that is linked dynamically
is imposed on us by the docker image supported by NVIDIA
[Dockerfile](conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile).
For now this cudnn version is cudnn 7.1.
If for some reason one cannot install cudnn 7.1 system-wide, one may resort to the
following:
```
conda install -c anaconda cudnn
conda remove -y cudatoolkit --force
```

***Note*** cudnn pulls in a cudatoolkit dependency, but this can never replace a system
installation because it cannot package libcuda.so (which comes with the driver,
not the toolkit).
As a consequence, cudatoolkit only contains redundant libraries and we remove it
explicitly. In the near future, the unified PyTorch + Caffe2 build system will link
everything statically and we will not need to worry about cudnn anymore.

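To confirm which cudnn version a given installation provides, one can read the version defines straight out of its header. The sketch below uses a stub header under `/tmp` as a stand-in; against a real installation you would point it at e.g. `/usr/include/cudnn.h` or `${CONDA_PREFIX}/include/cudnn.h` (paths are assumptions, adjust to your system).

```shell
# Extract "major.minor.patch" from a cudnn.h-style header.
cudnn_version() {
  awk '/#define CUDNN_MAJOR/ {maj=$3}
       /#define CUDNN_MINOR/ {min=$3}
       /#define CUDNN_PATCHLEVEL/ {pat=$3}
       END {printf "%s.%s.%s\n", maj, min, pat}' "$1"
}

# Stand-in header for demonstration; a real cudnn.h carries the same defines.
cat > /tmp/cudnn_stub.h <<'EOF'
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 1
#define CUDNN_PATCHLEVEL 4
EOF

cudnn_version /tmp/cudnn_stub.h   # prints 7.1.4
```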
77 |

# Build TC with dependencies supplied by conda (including cudnn 7.1)
```
CLANG_PREFIX=$(${CONDA_PREFIX}/bin/llvm-config --prefix) ./build.sh
```
You may need to set the environment variable `CUDA_TOOLKIT_ROOT_DIR` to point at
your cuda installation: `FindCUDA.cmake` uses it to locate cuda, but on most systems
the installation is found automatically and the variable can be omitted.
84 |

# Test locally
Run C++ tests:
```
./test.sh
```

Install the TC Python package locally to `/tmp`:
```
python setup.py install --prefix=/tmp
export PYTHONPATH=${PYTHONPATH}:$(find /tmp/lib -name site-packages)
```
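The `find` invocation above assumes a single matching directory under `/tmp/lib`. An alternative sketch (an illustration, assuming the same `--prefix=/tmp` install) derives the path from the interpreter version instead of searching the filesystem; `python3` here stands in for the interpreter of the `tc_build` environment:

```shell
# Prefix-style installs place packages under
#   <prefix>/lib/python<major>.<minor>/site-packages
PYVER=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
SITE_PKGS="/tmp/lib/python${PYVER}/site-packages"
export PYTHONPATH="${PYTHONPATH}:${SITE_PKGS}"
echo "${SITE_PKGS}"
```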
96 |

Run Python smoke checks:
```
python -c 'import torch'
python -c 'import tensor_comprehensions'
```

Run Python tests:
```
./test_python/run_test.sh
```