
Commit 26fe1d8

Get rid of documentation warnings and 404 pages (#1987)
* Get rid of documentation warnings
* Fix cross-references
* Add missing docstrings
* Use `onehotbatch` instead of internal struct
* Fix doctests
* Add `Optimisers.jl` as a doc dependency
* Add `cpu` and `gpu` to the manual
1 parent a7f5849 commit 26fe1d8

File tree

11 files changed, +40 −12 lines changed

docs/Project.toml

Lines changed: 1 addition & 0 deletions
@@ -4,6 +4,7 @@ Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
 MLUtils = "f1d291b0-491e-4a28-83b9-f70985020b54"
 NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
+Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
 
 [compat]
 Documenter = "0.26"

docs/make.jl

Lines changed: 2 additions & 2 deletions
@@ -1,10 +1,10 @@
-using Documenter, Flux, NNlib, Functors, MLUtils, BSON
+using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers
 
 
 DocMeta.setdocmeta!(Flux, :DocTestSetup, :(using Flux); recursive = true)
 
 makedocs(
-modules = [Flux, NNlib, Functors, MLUtils, BSON],
+modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers],
 doctest = false,
 sitename = "Flux",
 pages = [

docs/src/data/mlutils.md

Lines changed: 5 additions & 0 deletions
@@ -20,10 +20,15 @@ Below is a non-exhaustive list of such utility functions.
 
 ```@docs
 MLUtils.unsqueeze
+MLUtils.flatten
 MLUtils.stack
 MLUtils.unstack
+MLUtils.numobs
+MLUtils.getobs
+MLUtils.getobs!
 MLUtils.chunk
 MLUtils.group_counts
+MLUtils.group_indices
 MLUtils.batch
 MLUtils.unbatch
 MLUtils.batchseq

docs/src/gpu.md

Lines changed: 5 additions & 0 deletions
@@ -86,6 +86,11 @@ julia> x |> cpu
 0.7766742
 ```
 
+```@docs
+cpu
+gpu
+```
+
 ## Common GPU Workflows
 
 Some of the common workflows involving the use of GPUs are presented below.
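
Since this hunk adds `cpu` and `gpu` to the manual, here is a minimal usage sketch (not part of this diff; the array name is illustrative, and it assumes a Flux 0.13-era setup):

```julia
using Flux

W = rand(Float32, 3, 3)

# `gpu` moves arrays (and model parameters) to the GPU when CUDA is functional;
# otherwise it is a no-op, so this also runs on CPU-only machines.
Wgpu = W |> gpu

# `cpu` brings the data back to a plain Array on the host.
Wcpu = Wgpu |> cpu
```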

docs/src/models/functors.md

Lines changed: 2 additions & 0 deletions
@@ -9,5 +9,7 @@ Functors.isleaf
 Functors.children
 Functors.fcollect
 Functors.functor
+Functors.@functor
 Functors.fmap
+Functors.fmapstructure
 ```

docs/src/models/nnlib.md

Lines changed: 2 additions & 0 deletions
@@ -11,7 +11,9 @@ NNlib.celu
 NNlib.elu
 NNlib.gelu
 NNlib.hardsigmoid
+NNlib.sigmoid_fast
 NNlib.hardtanh
+NNlib.tanh_fast
 NNlib.leakyrelu
 NNlib.lisht
 NNlib.logcosh

docs/src/training/optimisers.md

Lines changed: 14 additions & 0 deletions
@@ -1,3 +1,7 @@
+```@meta
+CurrentModule = Flux
+```
+
 # Optimisers
 
 Consider a [simple linear regression](../models/basics.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
@@ -189,3 +193,13 @@ opt = Optimiser(ClipValue(1e-3), Adam(1e-3))
 ClipValue
 ClipNorm
 ```
+
+# Optimisers.jl
+
+Flux re-exports some utility functions from [`Optimisers.jl`](https://github.com/FluxML/Optimisers.jl)
+and the complete `Optimisers` package under the `Flux.Optimisers` namespace.
+
+```@docs
+Optimisers.destructure
+Optimisers.trainable
+```
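
The new `Optimisers.jl` section documents `destructure` and `trainable`. As a rough illustration (not part of this diff; the model is a toy example), `destructure` flattens a model's parameters into a single vector and returns a function to rebuild the model from such a vector:

```julia
using Flux

m = Dense(2 => 3)                     # toy model
flat, rebuild = Flux.destructure(m)   # flat is a Vector{Float32} of all parameters

m2 = rebuild(flat)                    # a Dense with the same structure and parameters
```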

docs/src/utilities.md

Lines changed: 1 addition & 2 deletions
@@ -59,7 +59,7 @@ Flux.f32
 
 Flux provides some utility functions to help you generate models in an automated fashion.
 
-[`outputsize`](@ref) enables you to calculate the output sizes of layers like [`Conv`](@ref)
+[`Flux.outputsize`](@ref) enables you to calculate the output sizes of layers like [`Conv`](@ref)
 when applied to input samples of a given size. This is achieved by passing a "dummy" array into
 the model that preserves size information without running any computation.
 `outputsize(f, inputsize)` works for all layers (including custom layers) out of the box.
@@ -107,7 +107,6 @@ Flux.outputsize
 
 ```@docs
 Flux.modules
-Flux.destructure
 Flux.nfan
 ```
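
For reference, a small sketch of the `Flux.outputsize` behaviour described in this hunk (not part of this diff; the layer sizes are arbitrary):

```julia
using Flux

m = Chain(Conv((3, 3), 3 => 16), Flux.flatten, Dense(14400 => 10))

# A size-only "dummy" array is pushed through the model; no real computation runs.
Flux.outputsize(m, (32, 32, 3, 1))   # (10, 1)
```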

src/functor.jl

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ Given a model or specific layers from a model, create a `Params` object pointing
 
 This can be used with the `gradient` function, see [Taking Gradients](@ref), or as input to the [`Flux.train!`](@ref Flux.train!) function.
 
-The behaviour of `params` on custom types can be customized using [`Functor.@functor`](@ref) or [`Flux.trainable`](@ref).
+The behaviour of `params` on custom types can be customized using [`Functors.@functor`](@ref) or [`Flux.trainable`](@ref).
 
 # Examples
 ```jldoctest
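
The corrected cross-reference points at `Functors.@functor`. A hedged sketch of the customization it refers to (not part of this diff; the `Affine` type and the tuple form of `trainable` follow the Flux 0.13-era docs and are illustrative):

```julia
using Flux

struct Affine
  W
  b
end

Flux.@functor Affine                  # all fields become visible to `params`, `gpu`, `fmap`, ...

# Optionally restrict which fields are collected as trainable parameters:
Flux.trainable(a::Affine) = (a.W,)    # now `params(a)` contains only `W`
```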

src/layers/basic.jl

Lines changed: 5 additions & 5 deletions
@@ -124,7 +124,7 @@ The out `y` will be a vector of length `out`, or a batch with
 
 Keyword `bias=false` will switch off trainable bias for the layer.
 The initialisation of the weight matrix is `W = init(out, in)`, calling the function
-given to keyword `init`, with default [`glorot_uniform`](@doc Flux.glorot_uniform).
+given to keyword `init`, with default [`glorot_uniform`](@ref Flux.glorot_uniform).
 The weight matrix and/or the bias vector (of length `out`) may also be provided explicitly.
 
 # Examples
@@ -262,7 +262,7 @@ which constructs them, and the number to construct.
 
 Maxout over linear dense layers satisfies the univeral approximation theorem.
 See Goodfellow, Warde-Farley, Mirza, Courville & Bengio "Maxout Networks"
-[https://arxiv.org/abs/1302.4389](1302.4389).
+[https://arxiv.org/abs/1302.4389](https://arxiv.org/abs/1302.4389).
 
 See also [`Parallel`](@ref) to reduce with other operators.
 
@@ -651,7 +651,7 @@ for a vocabulary of size `in`.
 
 This layer is often used to store word embeddings and retrieve them using indices.
 The input to the layer can be either a vector of indexes
-or the corresponding [onehot encoding](@ref Flux.OneHotArray).
+or the corresponding [`onehot encoding`](@ref Flux.onehotbatch).
 
 # Examples
 ```jldoctest
@@ -662,8 +662,8 @@ Embedding(1000 => 4) # 4_000 parameters
 
 julia> vocab_idxs = [1, 722, 53, 220, 3];
 
-julia> x = Flux.OneHotMatrix(vocab_idxs, vocab_size); summary(x)
-"1000×5 OneHotMatrix(::Vector{Int64}) with eltype Bool"
+julia> x = Flux.onehotbatch(vocab_idxs, 1:vocab_size); summary(x)
+"1000×5 OneHotMatrix(::Vector{UInt32}) with eltype Bool"
 
 julia> model(x) |> summary
 "4×5 Matrix{Float32}"
