Documentation headings & sections #2056

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Merged: 9 commits, Sep 19, 2022
2 changes: 2 additions & 0 deletions docs/Project.toml
@@ -1,11 +1,13 @@
[deps]
BSON = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
MLUtils = "f1d291b0-491e-4a28-83b9-f70985020b54"
NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
OneHotArrays = "0b1bfda6-eb8a-41d2-88d8-f5af5cad476f"
Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

[compat]
Documenter = "0.27"
34 changes: 19 additions & 15 deletions docs/make.jl
@@ -1,41 +1,45 @@
using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays
using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore


DocMeta.setdocmeta!(Flux, :DocTestSetup, :(using Flux); recursive = true)

makedocs(
modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays],
modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Base],
doctest = false,
sitename = "Flux",
strict = [:cross_references,],
# strict = [:cross_references,],
pages = [
"Home" => "index.md",
"Building Models" => [
"Overview" => "models/overview.md",
"Basics" => "models/basics.md",
"Recurrence" => "models/recurrence.md",
"Model Reference" => "models/layers.md",
"Layer Reference" => "models/layers.md",
"Loss Functions" => "models/losses.md",
"Regularisation" => "models/regularisation.md",
"Advanced Model Building" => "models/advanced.md",
"Neural Network primitives from NNlib.jl" => "models/nnlib.md",
"Recursive transformations from Functors.jl" => "models/functors.md"
"Custom Layers" => "models/advanced.md",
"NNlib.jl" => "models/nnlib.md",
"Activation Functions" => "models/activation.md",
],
"Handling Data" => [
"One-Hot Encoding with OneHotArrays.jl" => "data/onehot.md",
"Working with data using MLUtils.jl" => "data/mlutils.md"
"MLUtils.jl" => "data/mlutils.md",
"OneHotArrays.jl" => "data/onehot.md",
Comment on lines 24 to +26 (Member Author):

Here I'm trying to (1) shorten these to fit on one line, and (2) put the package Name.jl to make such pages of "foreign API" visually distinct from pages about Flux itself.

I hope that "Working with data" is clear enough from "Handling Data" just above. The actual titles on the pages remain as before.

Member Author: New on the right: [screenshot: Screenshot 2022-08-29 at 19 26 02]

],
"Training Models" => [
"Optimisers" => "training/optimisers.md",
"Training" => "training/training.md"
"Training" => "training/training.md",
"Callback Helpers" => "training/callbacks.md",
"Zygote.jl" => "training/zygote.md",
],
"GPU Support" => "gpu.md",
"Saving & Loading" => "saving.md",
"The Julia Ecosystem" => "ecosystem.md",
"Utility Functions" => "utilities.md",
"Model Tools" => [
"Saving & Loading" => "saving.md",
"Shape Inference" => "outputsize.md",
"Weight Initialisation" => "utilities.md",
"Functors.jl" => "models/functors.md",
],
"Performance Tips" => "performance.md",
"Datasets" => "datasets.md",
"Community" => "community.md"
Comment on lines -37 to -38 (Member Author):

Datasets became one line in the "ecosystem" page.

Community I put on the first page, after "learning flux".

"Flux's Ecosystem" => "ecosystem.md",
],
format = Documenter.HTML(
analytics = "UA-36890222-9",
5 changes: 0 additions & 5 deletions docs/src/community.md

This file was deleted.

2 changes: 2 additions & 0 deletions docs/src/data/onehot.md
@@ -51,6 +51,8 @@ julia> onecold(ans, [:a, :b, :c])

Note that these operations returned `OneHotVector` and `OneHotMatrix` rather than `Array`s. `OneHotVector`s behave like normal vectors but avoid any unnecessary cost compared to using an integer index directly. For example, multiplying a matrix with a one-hot vector simply slices out the relevant column of the matrix under the hood.
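
For example, a minimal sketch of that slicing behaviour (the values here are arbitrary):

```julia
julia> W = [1 2 3; 4 5 6];

julia> W * onehot(:b, [:a, :b, :c])  # same result as W[:, 2], no multiplication performed
2-element Vector{Int64}:
 2
 5
```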

### Function listing

```@docs
OneHotArrays.onehot
OneHotArrays.onecold
6 changes: 0 additions & 6 deletions docs/src/datasets.md

This file was deleted.

6 changes: 5 additions & 1 deletion docs/src/ecosystem.md
@@ -1,4 +1,4 @@
# The Julia Ecosystem
# The Julia Ecosystem around Flux

One of the main strengths of Julia lies in its ecosystem of packages,
which together provide a rich and consistent user experience.
@@ -49,7 +49,10 @@ Utility tools you're unlikely to have met if you never used Flux!

### Datasets

Commonly used machine learning datasets are provided by the following packages in the Julia ecosystem:

- [MLDatasets.jl](https://github.com/JuliaML/MLDatasets.jl) focuses on downloading, unpacking, and accessing benchmark datasets.
- [GraphMLDatasets.jl](https://github.com/yuehhua/GraphMLDatasets.jl): a library for machine learning datasets on graphs.

### Plumbing

@@ -87,6 +90,7 @@ Packages based on differentiable programming but not necessarily related to Machine Learning

- [OnlineStats.jl](https://github.com/joshday/OnlineStats.jl) provides single-pass algorithms for statistics.


## Useful miscellaneous packages

Some useful and random packages!
6 changes: 6 additions & 0 deletions docs/src/index.md
@@ -18,3 +18,9 @@ NOTE: Flux used to have a CuArrays.jl dependency until v0.10.4, replaced by CUDA
## Learning Flux

There are several different ways to learn Flux. If you just want to get started writing models, the [model zoo](https://github.com/FluxML/model-zoo/) gives good starting points for many common ones. This documentation provides a reference to all of Flux's APIs, as well as a from-scratch introduction to Flux's take on models and how they work. Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.

## Community

All Flux users are welcome to join our community on the [Julia forum](https://discourse.julialang.org/), or the [Slack](https://discourse.julialang.org/t/announcing-a-julia-slack/4866) (channel #machine-learning). If you have questions or issues we'll try to help you out.

If you're interested in hacking on Flux, the [source code](https://github.com/FluxML/Flux.jl) is open and easy to understand -- it's all just the same Julia code you work with normally. You might be interested in our [intro issues](https://github.com/FluxML/Flux.jl/labels/good%20first%20issue) to get started or our [contributing guide](https://github.com/FluxML/Flux.jl/blob/master/CONTRIBUTING.md).
39 changes: 39 additions & 0 deletions docs/src/models/activation.md
@@ -0,0 +1,39 @@

# Activation Functions from NNlib.jl

These non-linearities used between layers of your model are exported by the [NNlib](https://github.com/FluxML/NNlib.jl) package.

Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call `σ.(xs)`, `relu.(xs)` and so on. Alternatively, they can be passed to a layer like `Dense(784 => 1024, relu)`, which will handle this broadcasting for you.
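
For instance, a quick sketch of both styles (the input array here is random):

```julia
using Flux

xs = randn(Float32, 784)

relu.(xs)  # apply relu elementwise, by broadcasting

layer = Dense(784 => 1024, relu)
layer(xs)  # the layer broadcasts relu over its output for you
```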

```@docs
celu
elu
gelu
hardsigmoid
sigmoid_fast
hardtanh
tanh_fast
leakyrelu
lisht
logcosh
logsigmoid
mish
relu
relu6
rrelu
selu
sigmoid
softplus
softshrink
softsign
swish
hardswish
tanhshrink
trelu
```

Julia's `Base.Math` also provides `tanh`, which can be used as an activation function:

```@docs
tanh
```
2 changes: 1 addition & 1 deletion docs/src/models/advanced.md
@@ -1,4 +1,4 @@
# Advanced Model Building and Customisation
# Defining Customised Layers

Here we describe some of the more advanced features that Flux provides to give more control over model building.

9 changes: 9 additions & 0 deletions docs/src/models/layers.md
@@ -86,3 +86,12 @@ Many normalisation layers behave differently under training and inference (testi
Flux.testmode!
trainmode!
```


## Listing All Layers

The `modules` command uses Functors to extract a flat list of all layers:

```@docs
Flux.modules
```
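
For example, a minimal sketch of counting all layers of one type (the model here is arbitrary):

```julia
using Flux

model = Chain(Dense(2 => 3, relu), Dense(3 => 1))

# the returned list contains the Chain, the tuple of layers, and each Dense
sum(x -> x isa Dense, Flux.modules(model))  # == 2
```
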
4 changes: 2 additions & 2 deletions docs/src/models/losses.md
@@ -3,7 +3,7 @@
Flux provides a large number of common loss functions used for training machine learning models.
They are grouped together in the `Flux.Losses` module.

Loss functions for supervised learning typically expect as inputs a target `y`, and a prediction `ŷ`.
Loss functions for supervised learning typically expect as inputs a target `y`, and a prediction `ŷ` from your model.
In Flux's convention, the order of the arguments is the following

```julia
@@ -21,7 +21,7 @@ loss(ŷ, y, agg=x->mean(w .* x)) # weighted mean
loss(ŷ, y, agg=identity) # no aggregation.
```

## Losses Reference
### Function listing

```@docs
Flux.Losses.mae
33 changes: 1 addition & 32 deletions docs/src/models/nnlib.md
@@ -1,37 +1,6 @@
# Neural Network primitives from NNlib.jl

Flux re-exports all of the functions exported by the [NNlib](https://github.com/FluxML/NNlib.jl) package.

## Activation Functions

Non-linearities that go between layers of your model. Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call `σ.(xs)`, `relu.(xs)` and so on.

```@docs
celu
elu
gelu
hardsigmoid
sigmoid_fast
hardtanh
tanh_fast
leakyrelu
lisht
logcosh
logsigmoid
mish
relu
relu6
rrelu
selu
sigmoid
softplus
softshrink
softsign
swish
hardswish
tanhshrink
trelu
```
Flux re-exports all of the functions exported by the [NNlib](https://github.com/FluxML/NNlib.jl) package. This includes activation functions, described on the next page. Many of the functions on this page exist primarily as the internal implementation of Flux layers, but can also be used independently.
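
For instance, a short sketch of calling one of these functions directly (sizes here are arbitrary):

```julia
using Flux  # re-exports NNlib's functions

x = randn(Float32, 3, 5)  # scores for 3 classes, over a batch of 5

y = softmax(x)    # turn each column of scores into probabilities
sum(y; dims = 1)  # each column of y now sums to 1
```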

## Softmax

47 changes: 47 additions & 0 deletions docs/src/outputsize.md
@@ -0,0 +1,47 @@
# Shape Inference

To help you generate models in an automated fashion, [`Flux.outputsize`](@ref) lets you
calculate the size returned by layers for a given input size.
This is especially useful for layers like [`Conv`](@ref).

It works by passing a "dummy" array into the model that preserves size information without running any computation.
`outputsize(f, inputsize)` works for all layers (including custom layers) out of the box.
By default, `inputsize` is expected to include the batch dimension,
but you can omit the batch size by calling `outputsize(f, inputsize; padbatch=true)`, which assumes a batch size of one.
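
For instance, a minimal sketch with a single convolutional layer (sizes here are arbitrary):

```julia
using Flux

layer = Conv((3, 3), 3 => 16)

Flux.outputsize(layer, (28, 28, 3, 1))               # (26, 26, 16, 1)
Flux.outputsize(layer, (28, 28, 3); padbatch = true) # same, with batch dimension assumed to be 1
```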

Using this utility function lets you automate model building for various inputs like so:
```julia
"""
make_model(width, height, inchannels, nclasses;
layer_config = [16, 16, 32, 32, 64, 64])
Create a CNN for a given set of configuration parameters.
# Arguments
- `width`: the input image width
- `height`: the input image height
- `inchannels`: the number of channels in the input image
- `nclasses`: the number of output classes
- `layer_config`: a vector of the number of filters per each conv layer
"""
function make_model(width, height, inchannels, nclasses;
layer_config = [16, 16, 32, 32, 64, 64])
# construct a vector of conv layers programmatically
conv_layers = [Conv((3, 3), inchannels => layer_config[1])]
for (infilters, outfilters) in zip(layer_config, layer_config[2:end])
push!(conv_layers, Conv((3, 3), infilters => outfilters))
end

# compute the output dimensions for the conv layers
# use padbatch=true to set the batch dimension to 1
conv_outsize = Flux.outputsize(conv_layers, (width, height, nchannels); padbatch=true)

# the input dimension to Dense is programatically calculated from
# width, height, and nchannels
return Chain(conv_layers..., Dense(prod(conv_outsize) => nclasses))
end
```
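
A hypothetical call, e.g. for 32×32 RGB images and 10 classes:

```julia
model = make_model(32, 32, 3, 10)
```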

```@docs
Flux.outputsize
```
77 changes: 77 additions & 0 deletions docs/src/training/callbacks.md
@@ -0,0 +1,77 @@
# Callback Helpers

```@docs
Flux.throttle
Flux.stop
Flux.skip
```

## Patience Helpers

Flux provides utilities for controlling your training procedure according to some monitored condition and a maximum `patience`. For example, you can use `early_stopping` to stop training when the model is converging or deteriorating, or you can use `plateau` to check if the model is stagnating.

For example, below we create a pseudo-loss function that decreases, bottoms out, and then increases. The early stopping trigger will break the loop before the loss increases too much.
```julia
# create a pseudo-loss that decreases for 4 calls, then starts increasing
# we call this like loss()
loss = let t = 0
    () -> begin
        t += 1
        (t - 4) ^ 2
    end
end

# create an early stopping trigger
# returns true when the loss increases for two consecutive steps
es = early_stopping(loss, 2; init_score = 9)

# this will stop at the 6th (4 decreasing + 2 increasing calls) epoch
@epochs 10 begin
    es() && break
end
```

The keyword argument `distance` of `early_stopping` is a function of the form `distance(best_score, score)`. By default `distance` is `-`, which implies that the monitored metric `f` is expected to be decreasing and minimized. If you use some increasing metric (e.g. accuracy), you can customize the `distance` function: `(best_score, score) -> score - best_score`.
```julia
# create a pseudo-accuracy that increases by 0.01 each time from 0 to 1
# we call this like acc()
acc = let v = 0
    () -> v = min(1, v + 0.01)
end

# create an early stopping trigger for accuracy
es = early_stopping(acc, 3; distance = (best_score, score) -> score - best_score)

# this will iterate until the 10th epoch
@epochs 10 begin
    es() && break
end
```

`early_stopping` and `plateau` are both built on top of `patience`. You can use `patience` to build your own triggers that use a patient counter. For example, if you want to trigger when the loss is below a threshold for several consecutive iterations:
```julia
threshold(f, thresh, delay) = patience(delay) do
    f() < thresh
end
```
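
A usage sketch of this `threshold` trigger, with a made-up loss that decays towards zero:

```julia
# create a pseudo-loss that decreases towards 0: 1, 1/2, 1/3, ...
loss = let t = 0
    () -> (t += 1; 1 / t)
end

# returns true once loss() < 0.2 on 3 consecutive calls
stop_below = threshold(loss, 0.2, 3)

# this will stop at the 8th epoch
@epochs 20 begin
    stop_below() && break
end
```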

Both `predicate` in `patience` and `f` in `early_stopping` / `plateau` can accept extra arguments. You can pass such extra arguments to `predicate` or `f` through the returned function:
```julia
trigger = patience((a; b) -> a > b, 3)

# this will iterate until the 10th epoch
@epochs 10 begin
    trigger(1; b = 2) && break
end

# this will stop at the 3rd epoch
@epochs 10 begin
    trigger(3; b = 2) && break
end
```

```@docs
Flux.patience
Flux.early_stopping
Flux.plateau
```