Commit f050eb7 (1 parent: c4837f7)

Create a getting started section and add a new linear regression example

10 files changed: +514 −12 lines changed

docs/Project.toml (3 additions, 0 deletions)

```diff
@@ -2,9 +2,12 @@
 BSON = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
+MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
 MLUtils = "f1d291b0-491e-4a28-83b9-f70985020b54"
 NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
 Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
+Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
+Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
 
 [compat]
 Documenter = "0.26"
```

docs/make.jl (7 additions, 4 deletions)

```diff
@@ -1,17 +1,20 @@
-using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers
+using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers, Plots, MLDatasets, Statistics
 
 
 DocMeta.setdocmeta!(Flux, :DocTestSetup, :(using Flux); recursive = true)
 
 makedocs(
-    modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers],
+    modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers, Plots, MLDatasets, Statistics],
     doctest = false,
     sitename = "Flux",
     pages = [
         "Home" => "index.md",
+        "Getting Started" => [
+            "Overview" => "getting_started/overview.md",
+            "Basics" => "getting_started/basics.md",
+            "Linear Regression" => "getting_started/linear_regression.md",
+        ],
         "Building Models" => [
-            "Overview" => "models/overview.md",
-            "Basics" => "models/basics.md",
             "Recurrence" => "models/recurrence.md",
             "Model Reference" => "models/layers.md",
             "Loss Functions" => "models/losses.md",
```

docs/src/models/basics.md renamed to docs/src/getting_started/basics.md (1 addition, 1 deletion)

```diff
@@ -221,4 +221,4 @@ Flux.@functor Affine
 
 This enables a useful extra set of functionality for our `Affine` layer, such as [collecting its parameters](../training/optimisers.md) or [moving it to the GPU](../gpu.md).
 
-For some more helpful tricks, including parameter freezing, please checkout the [advanced usage guide](advanced.md).
+For some more helpful tricks, including parameter freezing, please checkout the [advanced usage guide](../models/advanced.md).
```
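For context, the renamed basics page ends by registering `Affine` with `Flux.@functor`. The sketch below is not part of this commit; it reconstructs that layer from the guide to show what the macro enables:

```julia
using Flux

struct Affine
    W
    b
end

Affine(in::Integer, out::Integer) = Affine(randn(out, in), randn(out))

(m::Affine)(x) = m.W * x .+ m.b

Flux.@functor Affine       # register the layer with Flux

a = Affine(10, 5)
Flux.params(a)             # now collects a.W and a.b
# gpu(a)                   # and, with CUDA available, moves both fields to the GPU
```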

docs/src/getting_started/linear_regression.md (496 additions, 0 deletions)

Large diffs are not rendered by default.
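Since the new page's 496-line diff is collapsed here, the following sketch gives a rough idea of a minimal Flux linear regression. It is an illustration only; the variable names and training loop are assumptions, not the page's actual code:

```julia
using Flux, Statistics

# Synthetic data around the line y = 3x + 2, with a little noise.
x = reshape(collect(Float32, -3:0.1f0:3), 1, :)
y = 3 .* x .+ 2 .+ 0.2f0 .* randn(Float32, size(x))

model = Dense(1, 1)                          # one weight, one bias
loss(x, y) = Flux.Losses.mse(model(x), y)

opt = Descent(0.01)
for epoch in 1:200
    Flux.train!(loss, Flux.params(model), [(x, y)], opt)
end

@show loss(x, y)    # should be close to the noise floor after training
```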
docs/src/models/overview.md renamed to docs/src/getting_started/overview.md

File renamed without changes.

docs/src/gpu.md (1 addition, 1 deletion)

```diff
@@ -17,7 +17,7 @@ true
 
 Support for array operations on other hardware backends, like GPUs, is provided by external packages like [CUDA](https://github.com/JuliaGPU/CUDA.jl). Flux is agnostic to array types, so we simply need to move model weights and data to the GPU and Flux will handle it.
 
-For example, we can use `CUDA.CuArray` (with the `cu` converter) to run our [basic example](models/basics.md) on an NVIDIA GPU.
+For example, we can use `CUDA.CuArray` (with the `cu` converter) to run our [basic example](getting_started/basics.md) on an NVIDIA GPU.
 
 (Note that you need to have CUDA available to use CUDA.CuArray – please see the [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) instructions for more details.)
```
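As a reminder of the pattern that sentence points at, here is a minimal sketch of moving weights and data with `cu` (assuming a working CUDA.jl installation; not part of this commit):

```julia
using Flux, CUDA

W = cu(rand(2, 5))                 # weights live on the GPU
b = cu(rand(2))

predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = cu(rand(5)), cu(rand(2))    # data must be moved to the GPU too
loss(x, y)                         # the whole computation runs on the GPU
```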

docs/src/models/advanced.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -34,7 +34,7 @@ For an intro to Flux and automatic differentiation, see this [tutorial](https://
3434

3535
## Customising Parameter Collection for a Model
3636

37-
Taking reference from our example `Affine` layer from the [basics](basics.md#Building-Layers-1).
37+
Taking reference from our example `Affine` layer from the [basics](../getting_started/basics.md#Building-Layers-1).
3838

3939
By default all the fields in the `Affine` type are collected as its parameters, however, in some cases it may be desired to hold other metadata in our "layers" that may not be needed for training, and are hence supposed to be ignored while the parameters are collected. With Flux, it is possible to mark the fields of our layers that are trainable in two ways.
4040
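One of the two ways that paragraph alludes to is overloading `Flux.trainable`. A sketch, reusing the guide's `Affine` layer with assumed fields `W` and `b` (again, not part of this diff):

```julia
using Flux

struct Affine
    W
    b
end

(m::Affine)(x) = m.W * x .+ m.b

Flux.@functor Affine

# Mark only W as trainable; b stays in the struct but is skipped
# when parameters are collected for training.
Flux.trainable(a::Affine) = (a.W,)

a = Affine(rand(2, 3), rand(2))
length(Flux.params(a))    # 1, only the weight matrix is collected
```

The other way is restricting the functor itself, e.g. `Flux.@functor Affine (W,)`.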

docs/src/training/optimisers.md (1 addition, 1 deletion)

````diff
@@ -4,7 +4,7 @@ CurrentModule = Flux
 
 # Optimisers
 
-Consider a [simple linear regression](../models/basics.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
+Consider a [simple linear regression](../getting_started/linear_regression.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
 
 ```julia
 using Flux
````
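The hunk cuts off at the start of that code block; the full pattern it describes (dummy data, a loss, and gradients for `W` and `b`) looks roughly like this sketch:

```julia
using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)   # squared-error loss

x, y = rand(5), rand(2)                    # dummy data
gs = gradient(() -> loss(x, y), Flux.params(W, b))

gs[W], gs[b]    # the gradients an optimiser would use to update W and b
```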

docs/src/training/training.md (4 additions, 4 deletions)

```diff
@@ -40,8 +40,8 @@ more information can be found on [Custom Training Loops](../models/advanced.md).
 
 ## Loss Functions
 
-The objective function must return a number representing how far the model is from its target – the *loss* of the model. The `loss` function that we defined in [basics](../models/basics.md) will work as an objective.
-In addition to custom losses, a model can be trained in conjunction with
+The objective function must return a number representing how far the model is from its target – the *loss* of the model. The `loss` function that we defined in [basics](../getting_started/basics.md) will work as an objective.
+In addition to custom losses, model can be trained in conjuction with
 the commonly used losses that are grouped under the `Flux.Losses` module.
 We can also define an objective in terms of some model:
```
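The section being edited goes on to show such a model-based objective. Roughly, with the model name `m` assumed for illustration:

```julia
using Flux

m = Chain(Dense(10, 5, relu), Dense(5, 2))

# An objective built from the model and a stock loss from Flux.Losses.
loss(x, y) = Flux.Losses.mse(m(x), y)

loss(rand(Float32, 10), rand(Float32, 2))
```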

````diff
@@ -64,11 +64,11 @@ At first glance, it may seem strange that the model that we want to train is not
 
 ## Model parameters
 
-The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../models/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.
+The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../getting_started/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.
 
 Such an object contains a reference to the model's parameters, not a copy, such that after their training, the model behaves according to their updated values.
 
-Handling all the parameters on a layer-by-layer basis is explained in the [Layer Helpers](../models/basics.md) section. For freezing model parameters, see the [Advanced Usage Guide](../models/advanced.md).
+Handling all the parameters on a layer by layer basis is explained in the [Layer Helpers](../getting_started/basics.md) section. Also, for freezing model parameters, see the [Advanced Usage Guide](../models/advanced.md).
 
 ```@docs
 Flux.params
````
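Tying the two edited paragraphs together, a minimal sketch (not from the commit; names are illustrative) of how `Flux.params` feeds `Flux.train!`:

```julia
using Flux

m = Chain(Dense(10, 5, relu), Dense(5, 2))
loss(x, y) = Flux.Losses.mse(m(x), y)

data = [(rand(Float32, 10), rand(Float32, 2))]
opt = Descent(0.1)

# Flux.params(m) holds references, not copies, so m itself
# reflects the updated values after training.
Flux.train!(loss, Flux.params(m), data, opt)
```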

xy.jld2 (1.31 KB)

Binary file not shown.

0 commit comments