This repository was archived by the owner on Apr 28, 2023. It is now read-only.

Commit 3a86565

docs: minor spelling tweaks

1 parent 02a664f · commit 3a86565

File tree: 3 files changed, +4 −4 lines changed

docs/source/framework/pytorch_integration/autograd_with_tc.rst

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 Autograd with TC
 ================
 
-We provide the TC intergation with PyTorch `autograd` so that it is easy to write
+We provide the TC integration with PyTorch `autograd` so that it is easy to write
 a training layer with TC and be able to run backwards as well if the layer is part
 of a network. We do not support double backwards right now. In order to write a
 training layer with TC, you need to follow the steps below:
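The pattern this doc page describes (a training layer with a forward definition plus a single backward definition, no double backwards) can be sketched in plain Python. This is an illustrative stand-in, not the real TC or PyTorch `autograd` API: `TrainingLayer`, `forward_fn`, and `backward_fn` are hypothetical names.

```python
# Hedged sketch of the forward/backward pairing described above.
# A training layer keeps a forward function and a backward (gradient)
# function; inputs are saved in forward so backward can use them.
class TrainingLayer:
    def __init__(self, forward_fn, backward_fn):
        self.forward_fn = forward_fn
        self.backward_fn = backward_fn
        self._saved_inputs = None

    def forward(self, *inputs):
        # Save inputs so the backward pass can compute gradients from them.
        self._saved_inputs = inputs
        return self.forward_fn(*inputs)

    def backward(self, grad_output):
        # Single backward only: double backwards is not supported.
        return self.backward_fn(grad_output, *self._saved_inputs)

# Toy layer: y = x * w, so dy/dx = g * w and dy/dw = g * x.
layer = TrainingLayer(
    forward_fn=lambda x, w: x * w,
    backward_fn=lambda g, x, w: (g * w, g * x),
)
y = layer.forward(3.0, 4.0)    # -> 12.0
gx, gw = layer.backward(1.0)   # -> (4.0, 3.0)
```

In real TC usage the forward and backward bodies would each be a TC definition compiled for the input sizes; the sketch only shows the control flow autograd relies on.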

docs/source/framework/pytorch_integration/note_about_performance.rst

Lines changed: 2 additions & 2 deletions
@@ -5,8 +5,8 @@ Reuse outputs
 -------------
 
 TC depends on a tensor library to do the allocations for temporary variables or output tensors.
-So everytime TC is run on given input sizes, the output tensor shapes inferred by
-TC backend is passed back to the tensor library and the output variables are allocated
+So every time TC is run on given input sizes, the output tensor shapes inferred by
+TC backend are passed back to the tensor library and the output variables are allocated
 by making a :code:`malloc` call. However, this can be expensive and effect performance
 significantly. Rather, if your input tensor sizes do not change every time TC is run,
 you can keep reusing the output tensor already allocated in previous call. This helps
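The output-reuse idea in this hunk can be sketched as a small cache keyed by input sizes: allocate once per distinct shape, then hand back the same buffer on later calls instead of paying a fresh `malloc` each run. `OutputCache`, `get_outputs`, and `infer_output_shape` are hypothetical names for illustration, not the TC API.

```python
# Hedged sketch: reuse previously allocated outputs when input sizes repeat.
class OutputCache:
    def __init__(self):
        self._cache = {}       # input shape -> preallocated output buffer
        self.allocations = 0   # counts actual "malloc"-style allocations

    def get_outputs(self, input_shape, infer_output_shape):
        key = tuple(input_shape)
        if key not in self._cache:
            out_len = infer_output_shape(input_shape)
            # Stand-in for the tensor library allocating output storage.
            self._cache[key] = [0.0] * out_len
            self.allocations += 1
        return self._cache[key]

cache = OutputCache()
infer = lambda shape: shape[0]        # toy shape inference
a = cache.get_outputs((8, 4), infer)  # first call: allocates
b = cache.get_outputs((8, 4), infer)  # same sizes: reuses the buffer
```

Here `a is b` holds and `cache.allocations` stays at 1 for repeated calls with the same sizes, which is exactly the saving the doc describes.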

docs/source/integrating_any_ml_framework.rst

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ Concretely, following functions need to be defined:
 
 * :code:`compile`: This takes the dlpack tensors converted in previous step and dispatches compilation call to TC backend on those input dlpack tensors.
 
-* :code:`prepareOutputs`: TC backend send back the output tensors infor (strides, shapes, type etc.) and framework should allocate the outputs storage.
+* :code:`prepareOutputs`: TC backend send back the output tensors info (strides, shapes, type etc.) and framework should allocate the outputs storage.
 
 * :code:`run`: This simply dispatches the output tensor pointers to the TC backend and returns the outputs received.
 
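The framework-side hooks named in this hunk (:code:`compile`, :code:`prepareOutputs`, :code:`run`, plus the dlpack conversion they depend on) can be sketched end to end. Everything below is a simplified stand-in for illustration: the dlpack tuples, the backend handle, and the shape info are assumed structures, not the real TC backend types.

```python
# Hedged sketch of the integration pipeline: convert -> compile ->
# prepareOutputs -> run. Names mirror the doc; bodies are stand-ins.

def to_dlpack(tensors):
    # Convert framework tensors to a dlpack-like representation.
    return [("dlpack", t) for t in tensors]

def compile(dlpack_inputs):
    # Dispatch the compilation call to the TC backend for these inputs;
    # the returned handle is a placeholder for the compiled kernel.
    return {"handle": len(dlpack_inputs)}

def prepareOutputs(dlpack_inputs):
    # The backend sends back output tensor info (strides, shapes, type);
    # the framework allocates storage for each output from that info.
    infos = [{"shape": (len(dlpack_inputs),), "dtype": "float32"}]
    return [[0.0] * info["shape"][0] for info in infos]

def run(handle, dlpack_inputs, outputs):
    # Hand the output tensor pointers to the backend and return outputs.
    return outputs

inputs = to_dlpack([1, 2, 3])
handle = compile(inputs)
outputs = run(handle, inputs, prepareOutputs(inputs))
```

A real integration would route each step through the TC backend and the framework's own tensor/dlpack conversion utilities; the sketch only fixes the order and responsibilities of the four functions.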
