
Commit 886edba

svilupp and ToucheSir authored
Update src/optimise/train.jl
Co-authored-by: Brian Chen <ToucheSir@users.noreply.github.com>
1 parent cd36179 · commit 886edba

File tree

1 file changed: +1 −1 lines changed


src/optimise/train.jl

Lines changed: 1 addition & 1 deletion
````diff
@@ -87,7 +87,7 @@ Here `pars` is produced by calling [`Flux.params`](@ref) on your model.
 (Or just on the layers you want to train, like `train!(loss, params(model[1:end-2]), data, opt)`.)
 This is the "implicit" style of parameter handling.
 
-Then, this gradient is used by optimizer `opt` to update the parameters:
+This gradient is then used by optimizer `opt` to update the parameters:
 ```
 update!(opt, pars, grads)
 ```
````
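For context, the docstring being edited describes the implicit-parameter style of training, where a gradient taken with respect to `Flux.params(model)` is applied via `update!`. Below is a minimal sketch of that workflow, assuming the implicit-params API of Flux 0.13 (`Flux.params` / `gradient(() -> ...)` / `update!`); the toy model, loss, and data are made up for illustration.

```julia
# Minimal sketch of the step the docstring describes (implicit params).
# The model, loss function, and data below are hypothetical.
using Flux

model = Dense(2 => 1)                        # toy two-input linear model
loss(x, y) = Flux.Losses.mse(model(x), y)    # loss closes over `model`
pars = Flux.params(model)                    # "implicit" parameter collection
opt = Descent(0.1)                           # plain gradient-descent optimiser

x, y = rand(Float32, 2, 8), rand(Float32, 1, 8)
grads = Flux.gradient(() -> loss(x, y), pars)  # gradient w.r.t. `pars`
Flux.update!(opt, pars, grads)                 # the update shown in the diff
```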

0 commit comments