This repository was archived by the owner on Apr 28, 2023. It is now read-only.
@@ -198,6 +200,12 @@ functions. For example, assume one wants to use :code:`fmax` CUDA function in TC
     O = T.relu(torch.randn(100, 128, device='cuda'))

 TC only supports a subset of built-in CUDA functions.
-Built-in functions supported in TC are listed `here <https://github.com/facebookresearch/TensorComprehensions/blob/master/tc/core/libraries.h#L67>`_.
+Built-in functions supported in TC are listed in `this file <https://github.com/facebookresearch/TensorComprehensions/blob/master/tc/core/libraries.h#L67>`_.
 Documentation
-for these functions is available as part of the official CUDA documentation `here <http://docs.nvidia.com/cuda/cuda-math-api/group__CUDA__MATH__SINGLE.html#group__CUDA__MATH__SINGLE>`_.
+for these functions is available as part of the official `CUDA documentation <http://docs.nvidia.com/cuda/cuda-math-api/group__CUDA__MATH__SINGLE.html#group__CUDA__MATH__SINGLE>`_.
+
+
+More examples
+-------------
+You can find more examples in our `unit tests <https://github.com/facebookresearch/TensorComprehensions/blob/master/python/tests/test_tc.py>`_.
+We also provide more elaborate examples on how to `compute argmin <https://github.com/facebookresearch/TensorComprehensions/blob/master/python/examples/min_distance.py#L151>`_ as well as a simple TC + PyTorch `python overhead benchmark <https://github.com/facebookresearch/TensorComprehensions/blob/master/python/benchmarks/python_overhead.py>`_.
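The hunk above documents that built-in CUDA functions such as :code:`fmax` can be called inside a TC definition. As a sketch of what that looks like (assumption: standard TC language syntax; compiling the string requires the `tensor_comprehensions` package and a CUDA GPU, so here the TC is only built as a string and its semantics are checked with a pure-Python reference):

```python
# A ReLU written in the TC language using the CUDA built-in fmax.
# fmax is among the built-ins listed in tc/core/libraries.h.
RELU_TC = """
def relu(float(B, M) I) -> (O) {
    O(b, m) = fmax(I(b, m), 0.0)
}
"""

def relu_reference(rows):
    """Pure-Python reference for what fmax(x, 0.0) computes elementwise."""
    return [[max(x, 0.0) for x in row] for row in rows]

# With tensor_comprehensions installed, the TC would be compiled and run
# on CUDA tensors; without it, the reference shows the expected output:
print(relu_reference([[-1.5, 2.0], [0.0, -3.0]]))  # [[0.0, 2.0], [0.0, 0.0]]
```

The reference function mirrors the elementwise semantics only; the actual TC compilation path (autotuning, kernel generation) is out of scope for this snippet.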