This repository was archived by the owner on Apr 28, 2023. It is now read-only.

Commit 1947c36

Merge pull request #470 from facebookresearch/nicolasvasilache-patch-1
Update installation instructions

2 parents 964a438 + 5bbcf14 · commit 1947c36

11 files changed: +184 −1109 lines

BUILD.md

Lines changed: 1 addition & 106 deletions
@@ -1,106 +1 @@
# Important notice
***In order to unify and simplify the build system we had to make choices. TC is currently only officially supported on Ubuntu 16.04 with gcc 5.4.0, cuda 9.0 and cudnn 7.***
Other configurations may work too but are not yet officially supported.
For more information about setting up the config that we use to build the conda dependencies, see the following [Dockerfile](conda_recipes/Dockerfile).

Our main goal with this decision is to make the build procedure extremely simple, both reproducible internally and extensible to new targets in the future.
In particular, the gcc-4 / gcc-5 ABI switch is not something we want to concern ourselves with at this point; we go with gcc 5.4.0.
# Conda from scratch (first time configuration)
Choose and set an INSTALLATION_PATH then run the following:

```
wget https://repo.anaconda.com/archive/Anaconda3-5.1.0-Linux-x86_64.sh -O anaconda.sh && \
chmod +x anaconda.sh && \
./anaconda.sh -b -p ${INSTALLATION_PATH} && \
rm anaconda.sh

. ${INSTALLATION_PATH}/bin/activate
conda update -y -n base conda
```
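The commands above assume `INSTALLATION_PATH` is already set in your shell. A minimal sketch of choosing it (the directory below is only an illustration, not part of the official instructions):
```
export INSTALLATION_PATH=${HOME}/anaconda3
```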
Create a new environment in which TC will be built and install core dependencies:
```
conda create -y --name tc_build python=3.6
conda activate tc_build
conda install -y pyyaml mkl-include pytest
conda install -y -c pytorch pytorch torchvision cuda90
conda remove -y cudatoolkit --force
conda install -y -c nicolasvasilache llvm-tapir50 halide
```
***Note*** As of PyTorch 0.4, PyTorch links cuda libraries dynamically and it
pulls cudatoolkit. However, cudatoolkit can never replace a system installation
because it cannot package libcuda.so (which comes with the driver, not the toolkit).
As a consequence, cudatoolkit only contains redundant libraries and we remove it
explicitly. In the near future, the unified PyTorch + Caffe2 build system will link
everything statically and stop pulling the cudatoolkit dependency.
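As a quick sanity check (not part of the original instructions), you can confirm that libcuda.so is provided by the system driver rather than by a conda package, for example:
```
ldconfig -p | grep libcuda
```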
# Optional dependencies
If you want to use Caffe2 (which is necessary for building the C++ benchmarks,
since Caffe2 is our baseline), also install:
```
conda install -y -c conda-forge eigen
conda install -y -c nicolasvasilache caffe2
```
# Activate preinstalled conda in your current terminal

Once the first-time configuration above has been completed, activate conda explicitly in
each new terminal window (it is discouraged to add this to your `.bashrc` or
equivalent):
```
. ${CONDA_PATH}/bin/activate
conda activate tc_build
```
# Cudnn version
***Note*** As of PyTorch 0.4, we need to package our own Caffe2. The current PyTorch + Caffe2
build system links cudnn dynamically. The version of cudnn that is linked dynamically
is imposed on us by the docker image supported by NVIDIA
[Dockerfile](conda_recipes/docker-images/tc-cuda9.0-cudnn7.1-ubuntu16.04-devel/Dockerfile).
For now this cudnn version is cudnn 7.1.
If for some reason one cannot install cudnn 7.1 system-wide, one may resort to the
following:
```
conda install -c anaconda cudnn
conda remove -y cudatoolkit --force
```
***Note*** cudnn pulls a cudatoolkit dependency but this can never replace a system
installation because it cannot package libcuda.so (which comes with the driver,
not the toolkit).
As a consequence, cudatoolkit only contains redundant libraries and we remove it
explicitly. In the near future, the unified PyTorch + Caffe2 build system will link
everything statically and we will not need to worry about cudnn anymore.
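As an optional check (not part of the original instructions), you can ask PyTorch which cudnn version it actually loaded, for example:
```
python -c 'import torch; print(torch.backends.cudnn.version())'
```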
# Build TC with dependencies supplied by conda (including cudnn 7.1)
```
CLANG_PREFIX=$(${CONDA_PREFIX}/bin/llvm-config --prefix) ./build.sh
```
You may need to pass the environment variable `CUDA_TOOLKIT_ROOT_DIR` pointing
to your cuda installation (this is required for `FindCUDA.cmake` to locate cuda
and can be omitted on most systems).
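A hypothetical invocation with an explicit toolkit path (the directory `/usr/local/cuda-9.0` is only an illustration; point it at your own cuda installation):
```
CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-9.0 CLANG_PREFIX=$(${CONDA_PREFIX}/bin/llvm-config --prefix) ./build.sh
```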
# Test locally
Run C++ tests:
```
./test.sh
```

Install the TC Python package locally to `/tmp`:
```
python setup.py install --prefix=/tmp
export PYTHONPATH=${PYTHONPATH}:$(find /tmp/lib -name site-packages)
```

Run Python smoke checks:
```
python -c 'import torch'
python -c 'import tensor_comprehensions'
```

Run Python tests:
```
./test_python/run_test.sh
```

The entire file is replaced with the single line:

see the [instructions](docs/source/installation.rst).

docs/source/framework/caffe2_integration/installation_caffe2_integration.rst

Lines changed: 0 additions & 139 deletions
This file was deleted.

docs/source/framework/caffe2_integration/integration_with_example.rst

Lines changed: 0 additions & 71 deletions
This file was deleted.

docs/source/framework/pytorch_integration/autotuning_layers.rst

Lines changed: 1 addition & 1 deletion
@@ -92,7 +92,7 @@ kernel timing. You can adopt the following parameter settings as starters for au
 
 
 Initial CudaMappingOptions
------------------------
+--------------------------
 
 At the beginning of autotuning, the kernel is mapped to whatever :code:`mapping options`
 user passes. If no mapping options are passed by user, then the default :code:`naive`

docs/source/framework/pytorch_integration/writing_layers.rst

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ tc.define
 .. _must_pass_options:
 
 Specifying CudaMappingOptions
---------------------------
+-----------------------------
 
 TC is transformed into :code:`CUDA` kernel by using the :code:`Options` which
 is used to run the layer and hence also determines the performance of the kernel

docs/source/index.rst

Lines changed: 1 addition & 12 deletions
@@ -43,23 +43,12 @@ Machine Learning.
    framework/pytorch_integration/debugging
    framework/pytorch_integration/frequently_asked_questions
 
-.. toctree::
-   :maxdepth: 1
-   :caption: Caffe2 Integration
-
-   framework/caffe2_integration/integration_with_example
-   framework/caffe2_integration/installation_caffe2_integration
-
 .. toctree::
    :maxdepth: 1
    :caption: Installation
 
    installation
-   installation_docker_image
-   installation_conda_dep
-   installation_conda
-   installation_non_conda
+   installation_colab_research
 
 .. toctree::
    :maxdepth: 1
