Commit c90b74d

chore: update submodules (#178)
Co-authored-by: vfdev-5 <vfdev-5@users.noreply.github.com>
1 parent 9d8b6ac commit c90b74d

6 files changed: +10 -10 lines changed

src/how-to-guides/02-convert-pytorch-to-ignite.md

Lines changed: 1 addition & 1 deletion
@@ -122,7 +122,7 @@ Accuracy().attach(evaluator, "accuracy")
 
 ## Organizing code into Events and Handlers
 
-Next, we need to identify any code that is triggered when an event occurs. Examples of events can be the start of an iteration, completion of an epoch, or even the start of backprop. We already provide some predefined events (complete list [here](https://pytorch.org/ignite/generated/ignite.engine.events.Events.html#ignite.engine.events.Events)) however we can also create custom ones (refer [here](https://pytorch.org/ignite/concepts.html#custom-events)). We move the event-specific code to different handlers (named functions, lambdas, class functions) which are attached to these events and executed whenever a specific event happens. Here are some common handlers:
+Next, we need to identify any code that is triggered when an event occurs. Examples of events can be the start of an iteration, completion of an epoch, or even the start of backprop. We already provide some predefined events (complete list [here](https://pytorch.org/ignite/generated/ignite.engine.events.Events.html#ignite.engine.events.Events)), however we can also create custom ones (refer [here](https://pytorch-ignite.ai/concepts/02-events-and-handlers#custom-events)). We move the event-specific code to different handlers (named functions, lambdas, class functions) which are attached to these events and executed whenever a specific event happens. Here are some common handlers:
 
 ### Running `evaluator`
 
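For readers who land on this diff without the guide open, here is a minimal, self-contained sketch of the "running `evaluator`" handler pattern that section describes. The toy `train_step`/`eval_step` and the dummy data are stand-ins invented for the sketch, not code from the guide:

```python
import torch
from ignite.engine import Engine, Events
from ignite.metrics import Accuracy

# Toy stand-ins so the sketch runs on its own; the guide uses a real model and dataloaders.
def train_step(engine, batch):
    return 0.0  # pretend loss

def eval_step(engine, batch):
    y_pred = torch.tensor([[0.9, 0.1], [0.2, 0.8]])
    y = torch.tensor([0, 1])
    return y_pred, y

trainer = Engine(train_step)
evaluator = Engine(eval_step)
Accuracy().attach(evaluator, "accuracy")

# Handler attached to a built-in event: run validation after every epoch.
@trainer.on(Events.EPOCH_COMPLETED)
def run_validation(engine):
    evaluator.run(range(2))  # dummy "validation batches"
    print(f"Epoch {engine.state.epoch} - accuracy: {evaluator.state.metrics['accuracy']:.2f}")

trainer.run(range(4), max_epochs=2)  # dummy "training batches"
```
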
src/how-to-guides/03-time-profiling.md

Lines changed: 3 additions & 3 deletions
@@ -15,8 +15,8 @@ tags:
 This example demonstrates how you can get the time breakdown for:
 - Individual epochs during training
 - Total training time
-- Individual [`Events`](https://pytorch.org/ignite/concepts.html#events-and-handlers)
-- All [`Handlers`](https://pytorch.org/ignite/concepts.html#handlers) correspoding to an `Event`
+- Individual [`Events`](https://pytorch-ignite.ai/concepts/02-events-and-handlers#events)
+- All [`Handlers`](https://pytorch-ignite.ai/concepts/02-events-and-handlers#handlers) corresponding to an `Event`
 - Individual `Handlers`
 - Data loading and Data processing.
 
@@ -251,7 +251,7 @@ basic_profiler.print_results(results);
 
 ## Handler-based profiling using `HandlersTimeProfiler`
 
-We can overcome the above problem by using [`HandlersTimeProfiler`](https://pytorch.org/ignite/generated/ignite.handlers.time_profilers.HandlersTimeProfiler.html#handlerstimeprofiler) which gives us only the necessary information. We can also calculate the time taken by handlers attached to [`Custom Events`](https://pytorch.org/ignite/concepts.html#custom-events), which was not previously possible, via this.
+We can overcome the above problem by using [`HandlersTimeProfiler`](https://pytorch.org/ignite/generated/ignite.handlers.time_profilers.HandlersTimeProfiler.html#handlerstimeprofiler), which gives us only the necessary information. It also lets us calculate the time taken by handlers attached to [`Custom Events`](https://pytorch-ignite.ai/concepts/02-events-and-handlers#custom-events), which was not previously possible.
 
 
 ```python
 
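For orientation, here is a minimal sketch of attaching `HandlersTimeProfiler`, following the same `attach`/`get_results`/`print_results` pattern the guide already uses for `BasicTimeProfiler`. The toy `train_step`, the dummy handler, and the dummy data are invented for the sketch:

```python
from ignite.engine import Engine, Events
from ignite.handlers import HandlersTimeProfiler

def train_step(engine, batch):
    return batch  # toy stand-in for a real training step

trainer = Engine(train_step)

@trainer.on(Events.EPOCH_COMPLETED)
def dummy_handler(engine):
    pass  # any attached handler shows up in the per-handler breakdown

profiler = HandlersTimeProfiler()
profiler.attach(trainer)  # start collecting per-handler timings

trainer.run(range(10), max_epochs=2)

results = profiler.get_results()
profiler.print_results(results)  # per-handler / per-event time breakdown
```
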
src/how-to-guides/08-custom-events.md

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ tags:
 ---
 # How to create Custom Events based on Forward or Backward Pass
 
-This guide demonstrates how you can create [custom events](https://pytorch.org/ignite/concepts.html#custom-events) that depend on the loss calculated and backward pass.
+This guide demonstrates how you can create [custom events](https://pytorch-ignite.ai/concepts/02-events-and-handlers#custom-events) that depend on the calculated loss and the backward pass.
 
 In this example, we will be using a ResNet18 model on the MNIST dataset. The base code is the same as used in the [Getting Started Guide](https://pytorch-ignite.ai/tutorials/getting-started/).
 
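As a pointer to what that guide builds up to, here is a minimal sketch of custom events fired around the backward pass. The event names, the tiny linear model, and the random data are invented for the sketch; the guide itself uses ResNet18 on MNIST:

```python
import torch
import torch.nn as nn
from ignite.engine import Engine, EventEnum

class BackpropEvents(EventEnum):
    # Custom events fired around loss.backward()
    BACKWARD_STARTED = "backward_started"
    BACKWARD_COMPLETED = "backward_completed"

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

def train_step(engine, batch):
    x, y = batch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    engine.fire_event(BackpropEvents.BACKWARD_STARTED)
    loss.backward()
    engine.fire_event(BackpropEvents.BACKWARD_COMPLETED)
    optimizer.step()
    return loss.item()

trainer = Engine(train_step)
trainer.register_events(*BackpropEvents)  # make the custom events attachable

@trainer.on(BackpropEvents.BACKWARD_COMPLETED)
def log_grad_norm(engine):
    total = sum(p.grad.norm().item() for p in model.parameters())
    print(f"grad norm after backward: {total:.4f}")

data = [(torch.randn(8, 4), torch.randint(0, 2, (8,))) for _ in range(5)]
trainer.run(data, max_epochs=1)
```
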
src/tutorials/beginner/02-transformers-text-classification.md

Lines changed: 3 additions & 3 deletions
@@ -135,7 +135,7 @@ model.to(device)
 
 ## Create Trainer
 
-Ignite's [`Engine`](https://pytorch.org/ignite/concepts.html#engine) allows users to define a `process_function` to process a given batch of data. This function is applied to all the batches of the dataset. This is a general class that can be applied to train and validate models. A `process_function` has two parameters `engine` and `batch`.
+Ignite's [`Engine`](https://pytorch-ignite.ai/concepts/01-engine/) allows users to define a `process_function` to process a given batch of data. This function is applied to all the batches of the dataset. This is a general class that can be applied to train and validate models. A `process_function` has two parameters `engine` and `batch`.
 
 The code for processing a batch of training data in the tutorial is as follows:
 
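For readers skimming this change, a minimal, self-contained sketch of the `Engine` + `process_function` pattern that paragraph refers to. The tiny linear model and random batches are stand-ins invented here; the tutorial itself uses a transformers model and real dataloaders:

```python
import torch
import torch.nn as nn
from ignite.engine import Engine

model = nn.Linear(10, 2)  # stand-in for the transformers model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(engine, batch):
    # A process_function always receives (engine, batch); its return value
    # becomes engine.state.output for that iteration.
    model.train()
    x, y = batch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

trainer = Engine(train_step)

data = [(torch.randn(16, 10), torch.randint(0, 2, (16,))) for _ in range(8)]
trainer.run(data, max_epochs=3)
print("last batch loss:", trainer.state.output)
```
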
@@ -192,7 +192,7 @@ trainer = Engine(train_step)
 
 The `lr_scheduler` we defined previously was a handler.
 
-[Handlers](https://pytorch.org/ignite/concepts.html#handlers) can be any type of function (lambda functions, class methods, etc). On top of that, Ignite provides several built-in handlers to reduce redundant code. We attach these handlers to engine which is triggered at a specific [event](https://pytorch.org/ignite/concepts.html#events-and-handlers). These events can be anything like the start of an iteration or the end of an epoch. [Here](https://pytorch.org/ignite/generated/ignite.engine.events.Events.html#events) is a complete list of built-in events.
+[Handlers](https://pytorch-ignite.ai/concepts/02-events-and-handlers/#handlers) can be any type of function (lambda functions, class methods, etc.). On top of that, Ignite provides several built-in handlers to reduce redundant code. We attach these handlers to the engine, and they are triggered at a specific [event](https://pytorch-ignite.ai/concepts/02-events-and-handlers/). These events can be anything like the start of an iteration or the end of an epoch. [Here](https://pytorch.org/ignite/generated/ignite.engine.events.Events.html#events) is a complete list of built-in events.
 
 Therefore, we will attach the `lr_scheduler` (handler) to the `trainer` (`engine`) via [`add_event_handler()`](https://pytorch.org/ignite/generated/ignite.engine.engine.Engine.html#ignite.engine.engine.Engine.add_event_handler) so it can be triggered at `Events.ITERATION_STARTED` (start of an iteration) automatically.
 
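A minimal sketch of that attach-a-handler pattern, using `PiecewiseLinear`, one of Ignite's built-in parameter schedulers (importable from `ignite.handlers` in recent versions, from `ignite.contrib.handlers` in older ones). The milestone values, toy model, and dummy data are invented for the sketch, not taken from the tutorial:

```python
import torch
import torch.nn as nn
from ignite.engine import Engine, Events
from ignite.handlers import PiecewiseLinear

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.0)

def train_step(engine, batch):
    return None  # toy stand-in; see the tutorial's full train_step

trainer = Engine(train_step)

# Warm up linearly to 0.1 over 10 iterations, then decay back to 0.0 by iteration 40.
lr_scheduler = PiecewiseLinear(optimizer, "lr", milestones_values=[(0, 0.0), (10, 0.1), (40, 0.0)])

# Attach the handler so it fires at the start of every iteration.
trainer.add_event_handler(Events.ITERATION_STARTED, lr_scheduler)

@trainer.on(Events.ITERATION_COMPLETED(every=10))
def log_lr(engine):
    print(f"iter {engine.state.iteration}: lr = {optimizer.param_groups[0]['lr']:.4f}")

trainer.run(range(40), max_epochs=1)
```
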
@@ -256,7 +256,7 @@ for batch in eval_dataloader:
 
 Finally, we return the predictions and the actual labels so that we can compute the metrics.
 
-You will notice that we did not compute the metrics in `evaluate_step()`. This is because Ignite provides built-in [metrics](https://pytorch.org/ignite/concepts.html#metrics) which we can later attach to the engine.
+You will notice that we did not compute the metrics in `evaluate_step()`. This is because Ignite provides built-in [metrics](https://pytorch-ignite.ai/concepts/04-metrics/) which we can later attach to the engine.
 
 **Note:** Ignite suggests attaching metrics to evaluators and not trainers because during training the model parameters are constantly changing and it is best to evaluate on a stationary model. This information is important as there is a difference in the functions for training and evaluating: training returns a single scalar loss, while evaluating returns `y_pred` and `y`, as that output is used to calculate metrics per batch for the entire dataset.
 
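A minimal sketch of that attach-metrics-to-the-evaluator pattern, with an `evaluate_step` returning `(y_pred, y)`. The toy tensors below are invented; the tutorial's `evaluate_step` runs the model under `torch.no_grad()`:

```python
import torch
from ignite.engine import Engine
from ignite.metrics import Accuracy

def evaluate_step(engine, batch):
    y_pred, y = batch  # toy passthrough; the tutorial computes y_pred with the model
    return y_pred, y

evaluator = Engine(evaluate_step)

# The metric consumes each (y_pred, y) pair and aggregates over the whole run.
Accuracy().attach(evaluator, "accuracy")

eval_data = [
    (torch.tensor([[0.8, 0.2], [0.3, 0.7]]), torch.tensor([0, 1])),
    (torch.tensor([[0.6, 0.4], [0.9, 0.1]]), torch.tensor([1, 0])),
]
state = evaluator.run(eval_data)
print(state.metrics["accuracy"])  # 0.75 on this toy data
```
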
src/tutorials/intermediate/03-reinforcement-learning.md

Lines changed: 1 addition & 1 deletion
@@ -170,7 +170,7 @@ timesteps = range(10000)
 
 ## Create Trainer
 
-Ignite's [`Engine`](https://pytorch.org/ignite/concepts.html#engine) allows users to define a `process_function` to run one episode. We select an action from the policy, then take the action through `step()` and finally increment our reward. If the problem is solved, we terminate training and save the `timestep`.
+Ignite's [`Engine`](https://pytorch-ignite.ai/concepts/01-engine/) allows users to define a `process_function` to run one episode. We select an action from the policy, then take the action through `step()` and finally increment our reward. If the problem is solved, we terminate training and save the `timestep`.
 
 > An episode is an instance of a game (or life of a game). If the game ends or life decreases, the episode ends. Step, on the other hand, is the time or some discrete value which increases monotonically in an episode. With each change in the state of the game, the value of step increases until the game ends.
 
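To illustrate the one-episode-per-iteration pattern that section describes, here is a minimal sketch with a fake environment standing in for Gym's CartPole; the environment, the random policy, and the "solved" threshold are all invented for the sketch:

```python
import random
from ignite.engine import Engine

class FakeEnv:
    """Stand-in for a Gym-style environment with reset()/step()."""
    def reset(self):
        self.t = 0
        return 0.0

    def step(self, action):
        self.t += 1
        done = self.t >= random.randint(5, 20)
        return 0.0, 1.0, done  # observation, reward, done

env = FakeEnv()
REWARD_THRESHOLD = 15  # invented "solved" criterion

def run_single_episode(engine, timestep):
    # One Engine iteration == one episode: act until the episode ends.
    observation = env.reset()
    episode_reward = 0.0
    done = False
    while not done:
        action = random.choice([0, 1])  # stand-in for sampling from the policy
        observation, reward, done = env.step(action)
        episode_reward += reward
    if episode_reward >= REWARD_THRESHOLD:
        engine.state.solved_timestep = timestep  # save the timestep
        engine.terminate()  # stop training once solved
    return episode_reward

trainer = Engine(run_single_episode)
timesteps = range(10000)
trainer.run(timesteps, max_epochs=1)
print("solved at timestep:", getattr(trainer.state, "solved_timestep", None))
```
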