
Commit e5f67dd

add a page for Zygote
1 parent a6d93cb commit e5f67dd

2 files changed: 23 additions, 0 deletions


docs/make.jl

Lines changed: 1 addition & 0 deletions
```diff
@@ -28,6 +28,7 @@ makedocs(
     "Training Models" => [
         "Optimisers" => "training/optimisers.md",
         "Training" => "training/training.md",
+        "Zygote.jl" => "training/zygote.md",
     ],
     "GPU Support" => "gpu.md",
     "Model Tools" => [
```

docs/src/training/zygote.md

Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@

# Automatic Differentiation using Zygote.jl

Flux re-exports the `gradient` function from [Zygote](https://github.com/FluxML/Zygote.jl), and uses it within [`train!`](@ref) to differentiate the model. Zygote has its own [documentation](https://fluxml.ai/Zygote.jl/dev/), in particular listing some [limitations](https://fluxml.ai/Zygote.jl/dev/limitations/).

```@docs
Zygote.gradient
Zygote.jacobian
Zygote.withgradient
```
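
For instance, a minimal sketch of what `gradient` and `withgradient` return for an ordinary Julia function (the function and numbers below are purely illustrative):

```julia
using Zygote  # Flux re-exports `gradient` from here

f(x) = 3x^2 + 2x + 1

gradient(f, 2.0)       # (14.0,)  -- the derivative 6x + 2 evaluated at x = 2

# withgradient returns the value of the function as well as the gradient:
withgradient(f, 2.0)   # (val = 17.0, grad = (14.0,))
```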

Sometimes it is necessary to exclude some code, or a whole function, from automatic differentiation. This can be done using [ChainRules](https://github.com/JuliaDiff/ChainRules.jl):

```@docs
ChainRulesCore.ignore_derivatives
ChainRulesCore.@non_differentiable
```
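
As a rough sketch of how these can be used (assuming both Zygote and ChainRulesCore are loaded; the `loss` and `steps_taken` functions here are made up for illustration):

```julia
using ChainRulesCore, Zygote

# Code inside ignore_derivatives runs normally in the forward pass,
# but contributes nothing to the gradient:
function loss(x)
    ignore_derivatives() do
        @info "evaluating loss" length(x)
    end
    sum(abs2, x)
end

gradient(loss, [1.0, 2.0])   # ([2.0, 4.0],)

# Alternatively, declare a whole function to have no derivative:
steps_taken(xs) = length(xs)
@non_differentiable steps_taken(::Any)
```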

To manually supply the gradient for one function, you should define a method of `rrule`. ChainRules has [detailed documentation](https://juliadiff.org/ChainRulesCore.jl/stable/) on how this works.

```@docs
ChainRulesCore.rrule
```
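
As a rough sketch (the function `cube` below is invented for illustration):

```julia
using ChainRulesCore, Zygote

cube(x) = x^3

# rrule returns the primal result plus a pullback mapping the output
# cotangent ȳ back to cotangents for (the function itself, x):
function ChainRulesCore.rrule(::typeof(cube), x)
    y = cube(x)
    cube_pullback(ȳ) = (NoTangent(), 3 * x^2 * ȳ)
    return y, cube_pullback
end

gradient(cube, 2.0)   # (12.0,) -- computed via the hand-written rule
```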
