
Commit d31694d
fixup
1 parent e5f67dd

5 files changed (+10 -8 lines)


docs/make.jl
Lines changed: 3 additions & 3 deletions

@@ -1,13 +1,13 @@
-using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays
+using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore
 
 
 DocMeta.setdocmeta!(Flux, :DocTestSetup, :(using Flux); recursive = true)
 
 makedocs(
-    modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays],
+    modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Base],
     doctest = false,
     sitename = "Flux",
-    strict = [:cross_references,],
+    # strict = [:cross_references,],
     pages = [
         "Home" => "index.md",
         "Building Models" => [

docs/src/data/onehot.md
Lines changed: 2 additions & 0 deletions

@@ -51,6 +51,8 @@ julia> onecold(ans, [:a, :b, :c])
 
 Note that these operations returned `OneHotVector` and `OneHotMatrix` rather than `Array`s. `OneHotVector`s behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly. For example, multiplying a matrix with a one-hot vector simply slices out the relevant row of the matrix under the hood.
 
+### Function listing
+
 ```@docs
 OneHotArrays.onehot
 OneHotArrays.onecold
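The context line in this hunk says a matrix times a one-hot vector reduces to indexing. A minimal sketch of that behaviour (illustrative only, not part of this commit):

```julia
using OneHotArrays

v = onehot(:b, [:a, :b, :c])  # 3-element OneHotVector, hot at index 2
W = rand(3, 3)

# No dense multiplication happens; the product is just a slice of W.
W * v == W[:, 2]  # true
```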

docs/src/models/losses.md
Lines changed: 2 additions & 2 deletions

@@ -3,7 +3,7 @@
 Flux provides a large number of common loss functions used for training machine learning models.
 They are grouped together in the `Flux.Losses` module.
 
-Loss functions for supervised learning typically expect as inputs a target `y`, and a prediction `ŷ`.
+Loss functions for supervised learning typically expect as inputs a target `y`, and a prediction `ŷ` from your model.
 In Flux's convention, the order of the arguments is the following
 
 ```julia
@@ -21,7 +21,7 @@ loss(ŷ, y, agg=x->mean(w .* x)) # weighted mean
 loss(ŷ, y, agg=identity) # no aggregation.
 ```
 
-## Losses Reference
+### Function listing
 
 ```@docs
 Flux.Losses.mae
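For reference, the argument order and `agg` keyword described on this page work as follows; a small sketch using `Flux.Losses.mae` from the listing (not part of this commit):

```julia
using Flux

ŷ = [1.1, 1.9, 3.2]  # prediction first, by Flux's convention
y = [1.0, 2.0, 3.0]  # target second

Flux.Losses.mae(ŷ, y)             # mean(abs.(ŷ .- y)) ≈ 0.1333
Flux.Losses.mae(ŷ, y, agg = sum)  # sum instead of mean, ≈ 0.4
```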

docs/src/outputsize.md
Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-## Model Building
+# Size Propagation
 
 Flux provides some utility functions to help you generate models in an automated fashion.
 
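The renamed page covers size propagation via `Flux.outputsize`, which pushes an input size through a model without allocating real data. A quick sketch (not part of this commit):

```julia
using Flux

m = Chain(Dense(10 => 5, relu), Dense(5 => 2))

# Only the input size (features, batch) is given; no array is created.
Flux.outputsize(m, (10, 1))  # (2, 1)
```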

docs/src/utilities.md
Lines changed: 2 additions & 2 deletions

@@ -27,7 +27,7 @@ julia> Dense(4 => 5, tanh; init=Flux.randn32(MersenneTwister(1)))
 Dense(4 => 5, tanh) # 25 parameters
 ```
 
-## Initialisation Functions
+## Initialisation functions
 
 ```@docs
 Flux.glorot_uniform
@@ -52,7 +52,7 @@ Flux.default_rng_value
 Flux.nfan
 ```
 
-## Changing the type of model parameters
+## Changing the type of all parameters
 
 The default `eltype` for models is `Float32` since models are often trained/run on GPUs.
 The `eltype` of model `m` can be changed to `Float64` by `f64(m)`:
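A minimal sketch of the `f64` conversion described in those context lines (not part of this commit):

```julia
using Flux

m = Dense(2 => 3)
eltype(m.weight)    # Float32, the default

m64 = f64(m)
eltype(m64.weight)  # Float64
```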
