This repository was archived by the owner on Apr 28, 2023. It is now read-only.

Commit be7037b

Priya Goyal authored and nicolasvasilache committed
Update docs and README.md
1 parent 46bf623 commit be7037b

9 files changed, +68 -11 lines changed


README.md

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ After a few generations of autotuning on a 2-GPU P100 system, we see results res
  We have not yet characterized the precise fraction of peak performance we obtain but it is not uncommon to obtain 80%+ of peak shared memory bandwidth after autotuning. Solid register-level optimizations are still in the works, but TC in its current form already addresses the productivity gap between the needs of research and the needs of production, which is why we are excited to share it with the entire community and bring this collaborative effort into the open.

  # Documentation, Environment and Prerequisites
- We provide pre-built VM images in the docker subdirectory, they can be downloaded from dockerhub. We use and support those VMs as part of our continuous integration. Note that we can cross-compile CUDA (but not execute) even if the machine has no physical GPUs. In any case the CUDA toolkit and libraries should always be installed, for now.
+ We provide pre-built docker images in the docker subdirectory; they can be downloaded from [dockerhub](https://hub.docker.com/u/tensorcomprehensions/). We use and support those images as part of our continuous integration. Note that we can cross-compile CUDA (but not execute it) even if the machine has no physical GPUs. In any case, the CUDA toolkit and libraries should always be installed, for now.

  To get started, see the [docs](master/docs) directory.

docs/Makefile

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ SPHINXOPTS =
  SPHINXBUILD = sphinx-build
  SPHINXPROJ = TensorComprehensions
  SOURCEDIR = source
- BUILDDIR = build
+ BUILDDIR = ../../TensorComprehensions-docs

  # Put it first so that "make" without argument is like "make help".
  help:
@@ -17,4 +17,4 @@ help:
  # Catch-all target: route all unknown targets to Sphinx using the new
  # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
  %: Makefile
- @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+ @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

docs/source/conf.py

Lines changed: 2 additions & 2 deletions
@@ -62,8 +62,8 @@

  # General information about the project.
  project = 'Tensor Comprehensions'
- copyright = '2018, Tensor Comprehensions contributors and users'
- author = 'Tensor Comprehensions contributors and users'
+ copyright = '2017-present, Facebook, Inc.'
+ author = 'Tensor Comprehensions Contributors'

  # The version info for the project you're documenting, acts as replacement for
  # |version| and |release|, also used in various other places throughout the

docs/source/contacts.rst

Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
+ Contacts
+ ========
+
+ Tensor Comprehensions is under active development and constant improvement. We welcome your feedback and contributions. The Tensor Comprehensions team is reachable through several channels; choose the one you prefer or the one that seems most appropriate for your question.
+
+ Bugs and features
+ -----------------
+
+ Found a bug? Want a feature? Open an `issue on GitHub <https://github.com/facebookresearch/TensorComprehensions/issues>`_.
+
+ Mailing list
+ ------------
+
+ Not sure whether the behavior you see is a bug or a feature? Want to contact the team about something other than code, like an idea for collaboration? Drop us an email at tensorcomp@fb.com.
+
+ Contributions
+ -------------
+
+ Want to contribute? Open a `pull request on GitHub <https://github.com/facebookresearch/TensorComprehensions/pulls>`_.
+
+ Don't forget to read the `contributor's instructions <https://github.com/facebookresearch/TensorComprehensions/blob/master/CONTRIBUTING.md>`_. Different parts of Tensor Comprehensions are managed by different people, so make sure to tag the `code owners <https://github.com/facebookresearch/TensorComprehensions/blob/master/CodeOwners.md>`_ of the part you are modifying.
+
+ Slack channel
+ -------------
+
+ For faster and tighter interaction, join our team on Slack: `TensorComprehensions.slack.com <https://tensorcomprehensions.slack.com>`_. You may need an invitation to join; contact us by email at tensorcomp@fb.com to get one.

docs/source/docker_image.rst

Lines changed: 2 additions & 1 deletion
@@ -2,7 +2,8 @@ Installing TC from docker image
  ===============================

  We provide docker runtime images for both :code:`conda` and :code:`non-conda` environments. TC officially supports
- running gcc 4.*, CUDA 8, CUDNN 6 and ubuntu14.04 and gcc 5.*, CUDA9, CUDNN6 on ubuntu16.04
+ running gcc 4.*, CUDA 8, CUDNN 6 on ubuntu14.04 and gcc 5.*, CUDA 9, CUDNN 6 on ubuntu16.04. You can find all available images
+ for Tensor Comprehensions on `dockerhub <https://hub.docker.com/u/tensorcomprehensions/>`_.

  The conda and non-conda images for each setup are below:
docs/source/index.rst

Lines changed: 12 additions & 0 deletions
@@ -38,3 +38,15 @@ Tensor Comprehensions provides framework-Agnostic Abstractions for High-Performa
     installation_conda_dep
     installation_conda
     installation_non_conda
+
+ .. toctree::
+    :maxdepth: 1
+    :caption: Paper
+
+    report
+
+ .. toctree::
+    :maxdepth: 1
+    :caption: Support
+
+    contacts

docs/source/ml_with_tc.rst

Lines changed: 3 additions & 4 deletions
@@ -79,10 +79,9 @@ TC
  ^^

  The current TC implementation sits somewhere here; less verbose than Halide,
- more verbose than matrix algebra. The biggest current negative point is its
- non-intuitive behavior that depends on the inference procedure. But this
- inference procedure can be described properly and follows an intuitive enough
- mental model, see :ref:`inference`.
+ more verbose than matrix algebra. The inference procedure has been one subtle
+ tradeoff in TC. It has been designed to follow an intuitive enough mental model,
+ but may still evolve in the future towards greater expressiveness, see :ref:`inference`.

  Matrix Languages
  ^^^^^^^^^^^^^^^^
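
The shape-inference behavior discussed in this hunk is easiest to see on a concrete definition. Below is a minimal sketch, assuming the `tensor_comprehensions` Python package and a CUDA-capable PyTorch install; the exact Python API may differ between releases, so treat it as illustrative rather than authoritative. The point is that the output tensor `C` is never given explicit sizes: its shape is inferred from the ranges of the indices `m`, `n` and `k`, which are in turn derived from the shapes of `A` and `B`.

    # Illustrative sketch only -- API names assumed from the TC Python bindings.
    import torch
    import tensor_comprehensions as tc

    # The output C carries no explicit sizes; TC infers its shape (M, N)
    # from how m, n and k index the inputs A and B.
    lang = """
    def matmul(float(M, K) A, float(K, N) B) -> (C) {
        C(m, n) +=! A(m, k) * B(k, n)
    }
    """

    matmul = tc.define(lang, name="matmul")
    A = torch.randn(32, 64).cuda()
    B = torch.randn(64, 128).cuda()
    C = matmul(A, B)  # C comes back with the inferred shape (32, 128)

Here `+=!` denotes a reduction whose accumulator is initialized before the sum, and the index ranges are exactly what the inference section of the docs describes.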

docs/source/performance.rst

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@ Performance of TC
  =================

  TC can generate competitive code in a variety of cases thanks to its
- Autotuner (see our companion paper: LINK).
+ Autotuner (see our companion paper: `ArXiV <link>`_).
  We will provide a set of benchmarks to illustrate the cases in
  which it is recommended to use TC.
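
Since this file points readers at the Autotuner, here is a hedged sketch of how autotuning is typically driven from Python, again assuming the `tensor_comprehensions` package and a CUDA-enabled PyTorch; the `autotune` call, its `cache` argument, and the cache filename are assumptions based on the TC Python bindings and may not match every release.

    # Illustrative sketch only -- autotune a TC definition, then reuse the tuned options.
    import torch
    import tensor_comprehensions as tc

    lang = """
    def matmul(float(M, K) A, float(K, N) B) -> (C) {
        C(m, n) +=! A(m, k) * B(k, n)
    }
    """

    matmul = tc.define(lang, name="matmul")
    A = torch.randn(128, 256).cuda()
    B = torch.randn(256, 512).cuda()

    # Search for good CUDA mapping options for these shapes and cache the best
    # candidates on disk (the filename below is made up for the example).
    matmul.autotune(A, B, cache="matmul_128_256_512.tc")

    # Subsequent calls with the same shapes pick up the tuned options.
    C = matmul(A, B)

The generated kernels are specialized to the input sizes used during tuning, which is why the cache name in the sketch encodes those sizes.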

docs/source/report.rst

Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
+ Tech Report
+ ===========
+
+ **Authors**:
+
+ `Nicolas Vasilache <https://scholar.google.com/citations?user=vIGcvLsAAAAJ&hl=en&oi=ao>`_ (FAIR),
+ `Oleksandr Zinenko <https://ozinenko.com>`_ (Inria & DI ENS),
+ `Theodoros Theodoridis <theodort@student.ethz.ch>`_ (ETH Zürich),
+ `Priya Goyal <https://scholar.google.com/citations?user=-9yiQMsAAAAJ&hl=en>`_ (FAIR),
+ `Zachary DeVito <zdevito@fb.com>`_ (FAIR),
+ `William S. Moses <http://wsmoses.com>`_ (MIT CSAIL),
+ `Sven Verdoolaege <sven@cs.kuleuven.be>`_ (FAIR),
+ `Andrew Adams <https://andrew.adams.pub/>`_ (FAIR),
+ `Albert Cohen <https://who.rocq.inria.fr/Albert.Cohen>`_ (Inria & DI ENS & FAIR)
+
+ We provide more details about Tensor Comprehensions in our tech report, which can be found
+ on `ArXiV <link>`_.
