
Commit 0d0fbb1

update api urls (#509)
1 parent a099f4a commit 0d0fbb1

File tree: 5 files changed, 7 additions and 7 deletions

- cn/docs/cookies/activation_checkpointing.md
- cn/docs/cookies/amp.md
- en/docs/basics/02_tensor.md
- en/docs/cookies/activation_checkpointing.md
- en/docs/cookies/amp.md

cn/docs/cookies/activation_checkpointing.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -40,7 +40,7 @@ optimizer = flow.optim.SGD([{'params': model_part1.parameters()},
                             lr=1e-3)
 ```
 
-To turn on activation checkpointing, you only need to specify `.config.activation_checkpointing = True` on the Eager model member (i.e. the nn.Module object) in the [nn.Graph](../basics/08_nn_graph.md) model. For more details of this API, please refer to: [activation_checkpointing](https://oneflow.readthedocs.io/en/master/graph.html#oneflow.nn.graph.block_config.BlockConfig.activation_checkpointing). For each nn.Module with "activation checkpointing" turned on, its input activations will be preserved, while other intermediate activations will be recomputed when used during backpropagation.
+To turn on activation checkpointing, you only need to specify `.config.activation_checkpointing = True` on the Eager model member (i.e. the nn.Module object) in the [nn.Graph](../basics/08_nn_graph.md) model. For more details of this API, please refer to: [activation_checkpointing](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.graph.block_config.BlockConfig.activation_checkpointing.html). For each nn.Module with "activation checkpointing" turned on, its input activations will be preserved, while other intermediate activations will be recomputed when used during backpropagation.
 
 ```python
 class CustomGraph(flow.nn.Graph):
````
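For orientation, here is a minimal sketch of how the re-linked flag is used, following the doc context visible in the hunk above. The layer sizes, loss, and training step are assumptions for illustration; only the `model_part1`/`model_part2` names and the `.config.activation_checkpointing = True` idiom come from the doc itself.

```python
import oneflow as flow
import oneflow.nn as nn

# Two Eager module members, echoing the doc's model_part1 / model_part2;
# the layer sizes here are assumptions.
model_part1 = nn.Sequential(nn.Linear(256, 128), nn.ReLU()).to("cuda")
model_part2 = nn.Linear(128, 10).to("cuda")

optimizer = flow.optim.SGD([{'params': model_part1.parameters()},
                            {'params': model_part2.parameters()}],
                           lr=1e-3)

class CustomGraph(flow.nn.Graph):
    def __init__(self):
        super().__init__()
        self.model_part1 = model_part1
        self.model_part2 = model_part2
        # Keep only each module's input activations; other intermediate
        # activations are recomputed during backpropagation.
        self.model_part1.config.activation_checkpointing = True
        self.model_part2.config.activation_checkpointing = True
        self.add_optimizer(optimizer)

    def build(self, x, y):
        y_pred = self.model_part2(self.model_part1(x))
        loss = flow.nn.functional.cross_entropy(y_pred, y)
        loss.backward()
        return loss
```

Instantiating `CustomGraph()` and calling it with a batch then runs one compiled training step with checkpointing applied to both modules.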

cn/docs/cookies/amp.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -27,7 +27,7 @@ loss_fn = nn.CrossEntropyLoss().to(DEVICE)
 optimizer = flow.optim.SGD(model.parameters(), lr=1e-3)
 ```
 
-If you want to enable AMP mode, just add `self.config.enable_amp(True)` to the [nn.Graph](../basics/08_nn_graph.md) model. The details of this API are at: [enable_amp](https://oneflow.readthedocs.io/en/master/graph.html#oneflow.nn.graph.graph_config.GraphConfig.enable_amp).
+If you want to enable AMP mode, just add `self.config.enable_amp(True)` to the [nn.Graph](../basics/08_nn_graph.md) model. The details of this API are at: [enable_amp](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.graph.graph_config.GraphConfig.enable_amp.html).
 
 ```python
 class CustomGraph(flow.nn.Graph):
@@ -61,7 +61,7 @@ for _ in range(100):
 
 **Gradient Scaling** is a method for addressing the numerical overflow that FP16 is prone to. The basic principle is to scale the loss and gradients by a scale factor during backpropagation, changing the magnitude of their values and thereby mitigating numerical overflow as much as possible.
 
-OneFlow provides `GradScaler` to use Gradient Scaling in AMP mode. You only need to instantiate a `GradScaler` object in the `__init__` method of the nn.Graph model, and then specify it through the [set_grad_scaler](https://oneflow.readthedocs.io/en/master/graph.html#oneflow.nn.Graph.set_grad_scaler) interface. nn.Graph will automatically manage the whole process of Gradient Scaling. Taking the `CustomGraph` above as an example, you need to add the following code to its `__init__` method:
+OneFlow provides `GradScaler` to use Gradient Scaling in AMP mode. You only need to instantiate a `GradScaler` object in the `__init__` method of the nn.Graph model, and then specify it through the [set_grad_scaler](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.Graph.set_grad_scaler.html) interface. nn.Graph will automatically manage the whole process of Gradient Scaling. Taking the `CustomGraph` above as an example, you need to add the following code to its `__init__` method:
 
 ```python
 grad_scaler = flow.amp.GradScaler(
````
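The first hunk's `enable_amp` link can be illustrated with a minimal sketch. `DEVICE`, `loss_fn`, and `optimizer` mirror the doc context shown in the hunk; the model itself and the training loop wiring are assumptions.

```python
import oneflow as flow
import oneflow.nn as nn

DEVICE = "cuda"

# loss_fn and optimizer mirror the doc context; the model is an assumption.
model = nn.Linear(128, 10).to(DEVICE)
loss_fn = nn.CrossEntropyLoss().to(DEVICE)
optimizer = flow.optim.SGD(model.parameters(), lr=1e-3)

class CustomGraph(flow.nn.Graph):
    def __init__(self):
        super().__init__()
        self.model = model
        self.loss_fn = loss_fn
        self.add_optimizer(optimizer)
        self.config.enable_amp(True)  # run the compiled graph in mixed precision

    def build(self, x, y):
        loss = self.loss_fn(self.model(x), y)
        loss.backward()
        return loss
```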

en/docs/basics/02_tensor.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -164,7 +164,7 @@ oneflow.int32 cuda:0
 
 ## Operations on Tensors
 
-A large number of operators are provided in OneFlow, most of which are in the namespaces of [oneflow](https://oneflow.readthedocs.io/en/v0.8.1/oneflow.html), [oneflow.Tensor](https://oneflow.readthedocs.io/en/v0.8.1/tensor.html), [oneflow.nn](https://oneflow.readthedocs.io/en/master/nn.html), and [oneflow.nn.functional](https://oneflow.readthedocs.io/en/v0.8.1/nn.functional.html).
+A large number of operators are provided in OneFlow, most of which are in the namespaces of [oneflow](https://oneflow.readthedocs.io/en/v0.8.1/oneflow.html), [oneflow.Tensor](https://oneflow.readthedocs.io/en/v0.8.1/tensor.html), [oneflow.nn](https://oneflow.readthedocs.io/en/v0.8.1/nn.html), and [oneflow.nn.functional](https://oneflow.readthedocs.io/en/v0.8.1/nn.functional.html).
 
 Tensors in OneFlow are as easy to use as NumPy arrays. For example, slicing in NumPy style is supported:
````
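As a quick illustration of the namespaces the updated links point to, here is a short sketch; the specific operators chosen are arbitrary examples, not anything prescribed by the doc.

```python
import oneflow as flow

x = flow.arange(12, dtype=flow.float32).reshape(3, 4)

print(x[1:, :2])                        # NumPy-style slicing: rows 1..2, first two columns
print(x.mean())                         # a method from the oneflow.Tensor namespace
print(flow.nn.functional.relu(x - 5))   # an op from the oneflow.nn.functional namespace
```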

en/docs/cookies/activation_checkpointing.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -40,7 +40,7 @@ optimizer = flow.optim.SGD([{'params': model_part1.parameters()},
                             lr=1e-3)
 ```
 
-To turn on activation checkpointing, you only need to specify `.config.activation_checkpointing = True` on the Eager model member (i.e. the nn.Module object) in the [nn.Graph](../basics/08_nn_graph.md) model. For more details of this API, please refer to: [activation_checkpointing](https://oneflow.readthedocs.io/en/master/graph.html#oneflow.nn.graph.block_config.BlockConfig.activation_checkpointing). For each nn.Module with "activation checkpointing" turned on, its input activations will be preserved, while other intermediate activations will be recomputed when used during backpropagation.
+To turn on activation checkpointing, you only need to specify `.config.activation_checkpointing = True` on the Eager model member (i.e. the nn.Module object) in the [nn.Graph](../basics/08_nn_graph.md) model. For more details of this API, please refer to: [activation_checkpointing](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.graph.block_config.BlockConfig.activation_checkpointing.html). For each nn.Module with "activation checkpointing" turned on, its input activations will be preserved, while other intermediate activations will be recomputed when used during backpropagation.
 
 ```python
 class CustomGraph(flow.nn.Graph):
````

en/docs/cookies/amp.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -27,7 +27,7 @@ loss_fn = nn.CrossEntropyLoss().to(DEVICE)
 optimizer = flow.optim.SGD(model.parameters(), lr=1e-3)
 ```
 
-If you want to enable AMP mode, just add `self.config.enable_amp(True)` to the [nn.Graph](../basics/08_nn_graph.md) model. The details of this API are at: [enable_amp](https://oneflow.readthedocs.io/en/master/graph.html#oneflow.nn.graph.graph_config.GraphConfig.enable_amp).
+If you want to enable AMP mode, just add `self.config.enable_amp(True)` to the [nn.Graph](../basics/08_nn_graph.md) model. The details of this API are at: [enable_amp](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.graph.graph_config.GraphConfig.enable_amp.html).
 
 ```python
 class CustomGraph(flow.nn.Graph):
@@ -61,7 +61,7 @@ for _ in range(100):
 
 **Gradient Scaling** is a method for addressing the numerical overflow that FP16 is prone to. The basic principle is to scale the loss and gradients by a scale factor during backpropagation, changing the magnitude of their values and thereby mitigating numerical overflow as much as possible.
 
-OneFlow provides `GradScaler` to use Gradient Scaling in AMP mode. You only need to instantiate a `GradScaler` object in the `__init__` method of the nn.Graph model, and then specify it through the [set_grad_scaler](https://oneflow.readthedocs.io/en/master/graph.html#oneflow.nn.Graph.set_grad_scaler) interface. nn.Graph will automatically manage the whole process of Gradient Scaling. Taking the `CustomGraph` above as an example, you need to add the following code to its `__init__` method:
+OneFlow provides `GradScaler` to use Gradient Scaling in AMP mode. You only need to instantiate a `GradScaler` object in the `__init__` method of the nn.Graph model, and then specify it through the [set_grad_scaler](https://oneflow.readthedocs.io/en/v0.8.1/generated/oneflow.nn.Graph.set_grad_scaler.html) interface. nn.Graph will automatically manage the whole process of Gradient Scaling. Taking the `CustomGraph` above as an example, you need to add the following code to its `__init__` method:
 
 ```python
 grad_scaler = flow.amp.GradScaler(
````
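The second hunk's `set_grad_scaler` link can be exercised by extending the AMP `CustomGraph` sketch shown earlier. This assumes `flow.amp.GradScaler` takes the same constructor arguments as its PyTorch counterpart, and the specific hyper-parameter values are illustrative, not mandated by the API or this commit.

```python
# Inside CustomGraph.__init__, alongside self.config.enable_amp(True).
# The values below are illustrative assumptions.
grad_scaler = flow.amp.GradScaler(
    init_scale=2.0 ** 10,  # starting loss-scale factor
    growth_factor=2.0,     # grow the scale after growth_interval overflow-free steps
    backoff_factor=0.5,    # shrink the scale whenever an overflow is detected
    growth_interval=1000,
)
self.set_grad_scaler(grad_scaler)
```

From there, per the doc text above, nn.Graph manages the whole scaling process itself: no manual scale/unscale calls appear in the training loop.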
