
Commit a6d93cb

move docs around
1 parent c04210c commit a6d93cb

File tree

10 files changed: +166 additions, -162 deletions

docs/Project.toml

Lines changed: 1 addition & 0 deletions
@@ -1,5 +1,6 @@
 [deps]
 BSON = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
+ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
 MLUtils = "f1d291b0-491e-4a28-83b9-f70985020b54"

docs/make.jl

Lines changed: 12 additions & 11 deletions
@@ -14,28 +14,29 @@ makedocs(
         "Overview" => "models/overview.md",
         "Basics" => "models/basics.md",
         "Recurrence" => "models/recurrence.md",
-        "Model Reference" => "models/layers.md",
+        "Layer Reference" => "models/layers.md",
         "Loss Functions" => "models/losses.md",
         "Regularisation" => "models/regularisation.md",
         "Advanced Model Building" => "models/advanced.md",
-        "Neural Network primitives from NNlib.jl" => "models/nnlib.md",
-        "Recursive transformations from Functors.jl" => "models/functors.md"
+        "NNlib.jl" => "models/nnlib.md",
+        "Functors.jl" => "models/functors.md",
     ],
     "Handling Data" => [
-        "One-Hot Encoding with OneHotArrays.jl" => "data/onehot.md",
-        "Working with data using MLUtils.jl" => "data/mlutils.md"
+        "MLUtils.jl" => "data/mlutils.md",
+        "OneHotArrays.jl" => "data/onehot.md",
     ],
     "Training Models" => [
         "Optimisers" => "training/optimisers.md",
-        "Training" => "training/training.md"
+        "Training" => "training/training.md",
     ],
     "GPU Support" => "gpu.md",
-    "Saving & Loading" => "saving.md",
-    "The Julia Ecosystem" => "ecosystem.md",
-    "Utility Functions" => "utilities.md",
+    "Model Tools" => [
+        "Saving & Loading" => "saving.md",
+        "Size Propagation" => "outputsize.md",
+        "Weight Initialisation" => "utilities.md",
+    ],
     "Performance Tips" => "performance.md",
-    "Datasets" => "datasets.md",
-    "Community" => "community.md"
+    "Flux's Ecosystem" => "ecosystem.md",
 ],
 format = Documenter.HTML(
     analytics = "UA-36890222-9",

docs/src/community.md

Lines changed: 0 additions & 5 deletions
This file was deleted.

docs/src/datasets.md

Lines changed: 0 additions & 6 deletions
This file was deleted.

docs/src/ecosystem.md

Lines changed: 5 additions & 1 deletion
@@ -1,4 +1,4 @@
-# The Julia Ecosystem
+# The Julia Ecosystem around Flux
 
 One of the main strengths of Julia lies in an ecosystem of packages
 globally providing a rich and consistent user experience.
@@ -49,7 +49,10 @@ Utility tools you're unlikely to have met if you never used Flux!
 
 ### Datasets
 
+Commonly used machine learning datasets are provided by the following packages in the Julia ecosystem:
+
 - [MLDatasets.jl](https://github.com/JuliaML/MLDatasets.jl) focuses on downloading, unpacking, and accessing benchmark datasets.
+- [GraphMLDatasets.jl](https://github.com/yuehhua/GraphMLDatasets.jl): a library for machine learning datasets on graphs.
 
 ### Plumbing
 
@@ -87,6 +90,7 @@ Packages based on differentiable programming but not necessarily related to Mach
 
 - [OnlineStats.jl](https://github.com/joshday/OnlineStats.jl) provides single-pass algorithms for statistics.
 
+
 ## Useful miscellaneous packages
 
 Some useful and random packages!

docs/src/index.md

Lines changed: 6 additions & 0 deletions
@@ -18,3 +18,9 @@ NOTE: Flux used to have a CuArrays.jl dependency until v0.10.4, replaced by CUDA
 ## Learning Flux
 
 There are several different ways to learn Flux. If you just want to get started writing models, the [model zoo](https://github.com/FluxML/model-zoo/) gives good starting points for many common ones. This documentation provides a reference to all of Flux's APIs, as well as a from-scratch introduction to Flux's take on models and how they work. Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.
+
+## Community
+
+All Flux users are welcome to join our community on the [Julia forum](https://discourse.julialang.org/), or on [Slack](https://discourse.julialang.org/t/announcing-a-julia-slack/4866) (channel #machine-learning). If you have questions or issues, we'll try to help you out.
+
+If you're interested in hacking on Flux, the [source code](https://github.com/FluxML/Flux.jl) is open and easy to understand -- it's all just the same Julia code you work with normally. You might be interested in our [intro issues](https://github.com/FluxML/Flux.jl/labels/good%20first%20issue) to get started, or our [contributing guide](https://github.com/FluxML/Flux.jl/blob/master/CONTRIBUTING.md).

docs/src/models/layers.md

Lines changed: 9 additions & 0 deletions
@@ -86,3 +86,12 @@ Many normalisation layers behave differently under training and inference (testi
 Flux.testmode!
 trainmode!
 ```
+
+
+## Listing All Layers
+
+The `modules` command uses Functors to extract a flat list of all layers:
+
+```@docs
+Flux.modules
+```
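As a quick, hedged illustration of `Flux.modules` (the two-layer model below is invented for this sketch, not part of the commit):

```julia
using Flux

# a small made-up model; Flux.modules walks it recursively via Functors
m = Chain(Dense(2 => 3, relu), Dense(3 => 1))

# the flat list contains the Chain itself plus every sublayer
count(l -> l isa Dense, Flux.modules(m))  # → 2
```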

docs/src/outputsize.md

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
+## Model Building
+
+Flux provides some utility functions to help you generate models in an automated fashion.
+
+[`Flux.outputsize`](@ref) enables you to calculate the output sizes of layers like [`Conv`](@ref)
+when applied to input samples of a given size. This is achieved by passing a "dummy" array into
+the model that preserves size information without running any computation.
+`outputsize(f, inputsize)` works for all layers (including custom layers) out of the box.
+By default, `inputsize` is expected to include the batch dimension,
+but you can omit the batch size and call `outputsize(f, inputsize; padbatch=true)`, which pads the batch dimension with a size of one.
+
+Using this utility function lets you automate model building for various inputs like so:
+```julia
+"""
+    make_model(width, height, inchannels, nclasses;
+               layer_config = [16, 16, 32, 32, 64, 64])
+
+Create a CNN for a given set of configuration parameters.
+
+# Arguments
+- `width`: the input image width
+- `height`: the input image height
+- `inchannels`: the number of channels in the input image
+- `nclasses`: the number of output classes
+- `layer_config`: a vector of the number of filters per conv layer
+"""
+function make_model(width, height, inchannels, nclasses;
+                    layer_config = [16, 16, 32, 32, 64, 64])
+  # construct a vector of conv layers programmatically
+  conv_layers = [Conv((3, 3), inchannels => layer_config[1])]
+  for (infilters, outfilters) in zip(layer_config, layer_config[2:end])
+    push!(conv_layers, Conv((3, 3), infilters => outfilters))
+  end
+
+  # compute the output dimensions after the conv layers
+  # use padbatch=true to set the batch dimension to 1
+  conv_outsize = Flux.outputsize(Chain(conv_layers...), (width, height, inchannels); padbatch=true)
+
+  # the input dimension to Dense is programmatically calculated from
+  # width, height, and inchannels
+  return Chain(conv_layers..., Dense(prod(conv_outsize) => nclasses))
+end
+```
+
+```@docs
+Flux.outputsize
+```
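As a smaller, hedged sketch of `Flux.outputsize` on its own (the layer sizes below are invented, not from the commit):

```julia
using Flux

# two 3×3 convolutions with default (zero) padding shrink each spatial dim by 2
m = Chain(Conv((3, 3), 3 => 16), Conv((3, 3), 16 => 32))

# inputsize omits the batch dimension; padbatch=true pads it with 1
Flux.outputsize(m, (32, 32, 3); padbatch=true)  # → (28, 28, 32, 1)
```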

docs/src/training/callbacks.md

Lines changed: 77 additions & 0 deletions
@@ -0,0 +1,77 @@
+## Callback Helpers
+
+```@docs
+Flux.throttle
+Flux.stop
+Flux.skip
+```
+
+## Patience Helpers
+
+Flux provides utilities for controlling your training procedure according to some monitored condition and a maximum `patience`. For example, you can use `early_stopping` to stop training when the model is converging or deteriorating, or you can use `plateau` to check if the model is stagnating.
+
+Below, we create a pseudo-loss function that decreases, bottoms out, and then increases. The early stopping trigger will break the loop before the loss increases too much.
+```julia
+# create a pseudo-loss that decreases for 4 calls, then starts increasing
+# we call this like loss()
+loss = let t = 0
+  () -> begin
+    t += 1
+    (t - 4) ^ 2
+  end
+end
+
+# create an early stopping trigger
+# returns true when the loss increases for two consecutive steps
+es = early_stopping(loss, 2; init_score = 9)
+
+# this will stop at the 6th (4 decreasing + 2 increasing calls) epoch
+@epochs 10 begin
+  es() && break
+end
+```
+
+The keyword argument `distance` of `early_stopping` is a function of the form `distance(best_score, score)`. By default, `distance` is `-`, which implies that the monitored metric `f` is expected to be decreasing and minimized. If you use some increasing metric (e.g. accuracy), you can customize the `distance` function: `(best_score, score) -> score - best_score`.
+```julia
+# create a pseudo-accuracy that increases by 0.01 each time from 0 to 1
+# we call this like acc()
+acc = let v = 0
+  () -> v = min(1, v + 0.01)
+end
+
+# create an early stopping trigger for accuracy
+es = early_stopping(acc, 3; distance = (best_score, score) -> score - best_score)
+
+# this will iterate until the 10th epoch
+@epochs 10 begin
+  es() && break
+end
+```
+
+`early_stopping` and `plateau` are both built on top of `patience`. You can use `patience` to build your own triggers that use a patient counter. For example, if you want to trigger when the loss is below a threshold for several consecutive iterations:
+```julia
+threshold(f, thresh, delay) = patience(delay) do
+  f() < thresh
+end
+```
+
+Both `predicate` in `patience` and `f` in `early_stopping` / `plateau` can accept extra arguments. You can pass such extra arguments to `predicate` or `f` through the returned function:
+```julia
+trigger = patience((a; b) -> a > b, 3)
+
+# this will iterate until the 10th epoch
+@epochs 10 begin
+  trigger(1; b = 2) && break
+end
+
+# this will stop at the 3rd epoch
+@epochs 10 begin
+  trigger(3; b = 2) && break
+end
+```
+
+```@docs
+Flux.patience
+Flux.early_stopping
+Flux.plateau
+```
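For `Flux.throttle` in the docstrings above, a minimal hedged sketch (the 5-second timeout and the logging callback are arbitrary choices for illustration):

```julia
using Flux

# wrap a callback so it fires at most once every 5 seconds,
# however often the training loop calls it
log_cb = Flux.throttle(5) do
  println("checkpointing...")
end

# safe to call on every iteration; most calls are no-ops
for _ in 1:1000
  log_cb()
end
```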

0 commit comments