1612: fix AdamW and improve decays docs r=DhairyaLGandhi a=CarloLucibello
There is great disorder under the sky with optimizers. Since in chaining optimizers
```julia
opt = Optimiser(opt1, opt2)
```
the order generally matters (a lot!), we have to be very careful in documenting how to use decays. In fact, we were giving completely wrong indications for `InvDecay` and `ExpDecay`. The correct ordering for standard use is
```julia
Optimiser(WeightDecay(), ADAM()) # equivalent to L2 regularization
Optimiser(ADAM(), InvDecay())    # learning rate scheduling
Optimiser(ADAM(), ExpDecay())    # learning rate scheduling
```
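To see why the order matters, here is a minimal sketch of how the chain processes a gradient, assuming the `apply!`-based interface of `Flux.Optimise` (the hyperparameter values below are arbitrary):
```julia
using Flux
using Flux.Optimise: Optimiser, WeightDecay, ADAM, apply!

θ = rand(3)                     # a parameter array
∇ = rand(3)                     # its gradient
opt = Optimiser(WeightDecay(1e-4), ADAM(1e-3))

# Each rule transforms the gradient in turn: WeightDecay first adds
# 1e-4 * θ to ∇ (that is L2 regularization), then ADAM rescales the
# result with its moment estimates. Reversing the order would decay
# the already-rescaled update instead, which is something else entirely.
Δ = apply!(opt, θ, copy(∇))
θ .-= Δ
```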
Different orderings should typically be considered bugs in user code.
This PR fixes the examples and tries to clarify the documentation in this regard.
Also fixes AdamW, which was doing something totally wrong due to the aforementioned confusion.
(see https://towardsdatascience.com/why-adamw-matters-736223f31b5d for how AdamW works).
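Concretely, AdamW applies a *decoupled* weight decay to the parameters themselves, after the Adam step, rather than pushing the decay term through the gradient. A hand-rolled, self-contained sketch of the difference (`adam_step!`, `η`, and `λ` are illustrative names, not Flux internals; bias correction omitted for brevity):
```julia
# Plain Adam rescaling of a gradient (bias correction omitted).
function adam_step!(m, v, ∇; β1=0.9, β2=0.999, ϵ=1e-8)
    @. m = β1 * m + (1 - β1) * ∇
    @. v = β2 * v + (1 - β2) * ∇^2
    return m ./ (sqrt.(v) .+ ϵ)
end

θ, ∇ = randn(3), randn(3)
m, v = zero(θ), zero(θ)
η, λ = 1e-3, 1e-2

# L2 regularization: the decay term λθ enters the gradient and is then
# warped by Adam's per-parameter rescaling.
θ_l2 = θ .- η .* adam_step!(copy(m), copy(v), ∇ .+ λ .* θ)

# AdamW: decoupled weight decay, applied directly to the weights after
# the Adam step, untouched by the moment estimates.
θ_adamw = θ .- η .* adam_step!(copy(m), copy(v), ∇) .- η .* λ .* θ
```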
Related model-zoo PRs: FluxML/model-zoo#303 and FluxML/model-zoo#304
Co-authored-by: CarloLucibello <carlo.lucibello@gmail.com>
Co-authored-by: Carlo Lucibello <carlo.lucibello@unibocconi.it>