
Commit e7686b2

Merge #1628

1628: Update "Composing Optimisers" docs r=darsnack a=StevenWhitaker

Addresses #1627 (perhaps only partially).

Use `1` instead of `0.001` for the first argument of `ExpDecay` in the example, so that the sentence following the example, i.e.,

> Here we apply exponential decay to the `Descent` optimiser.

makes more sense.

It was also [suggested](#1627 (comment)) in the linked issue that it might be worth changing the default learning rate of `ExpDecay` to `1`. Since this PR doesn't address that, I'm not sure merging this PR should necessarily close the issue.

Co-authored-by: StevenWhitaker <steventwhitaker@gmail.com>
2 parents de76e08 + 6235c2a commit e7686b2

File tree

1 file changed (+1, −1 lines changed)


docs/src/training/optimisers.md

Lines changed: 1 addition & 1 deletion
@@ -107,7 +107,7 @@ Flux defines a special kind of optimiser simply called `Optimiser` which takes i
 that will be fed into the next, and the resultant update will be applied to the parameter as usual. A classic use case is where adding decays is desirable. Flux defines some basic decays including `ExpDecay`, `InvDecay` etc.

 ```julia
-opt = Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), Descent())
+opt = Optimiser(ExpDecay(1, 0.1, 1000, 1e-4), Descent())
 ```

 Here we apply exponential decay to the `Descent` optimiser. The defaults of `ExpDecay` say that its learning rate will be decayed every 1000 steps.
