Commit 0350e03

Move to the existing tutorials section

1 parent 17d167e
8 files changed: +12 -15 lines

docs/make.jl (4 additions, 6 deletions)

````diff
@@ -15,9 +15,6 @@ makedocs(
             "Fitting a Line" => "getting_started/overview.md",
             "Gradients and Layers" => "getting_started/basics.md",
         ],
-        "Tutorials" => [
-            "Linear Regression" => "tutorials/linear_regression.md",
-        ],
         "Building Models" => [
             "Built-in Layers 📚" => "models/layers.md",
             "Recurrence" => "models/recurrence.md",
@@ -44,11 +41,12 @@ makedocs(
             "Flat vs. Nested 📚" => "destructure.md",
             "Functors.jl 📚 (`fmap`, ...)" => "models/functors.md",
         ],
+        "Tutorials" => [
+            "Linear Regression" => "tutorials/linear_regression.md",
+            "Custom Layers" => "tutorials/advanced.md", # TODO move freezing to Training
+        ],
         "Performance Tips" => "performance.md",
         "Flux's Ecosystem" => "ecosystem.md",
-        "Tutorials" => [ # TODO, maybe
-            "Custom Layers" => "models/advanced.md", # TODO move freezing to Training
-        ],
     ],
     format = Documenter.HTML(
         sidebar_sitename = false,
````
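For context on what this `pages` rearrangement does: Documenter.jl turns each nested `"Title" => [...]` entry into a collapsible sidebar section. A minimal, hypothetical sketch (the site name and stub paths below are placeholders, not part of this commit):

```julia
using Documenter

makedocs(
    sitename = "Example",            # placeholder, not the real value in docs/make.jl
    pages = [
        "Home" => "index.md",
        "Tutorials" => [             # a nested vector becomes one sidebar section
            "Linear Regression" => "tutorials/linear_regression.md",
        ],
    ],
    format = Documenter.HTML(sidebar_sitename = false),
)
```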

docs/src/index.md (2 additions, 2 deletions)

````diff
@@ -16,9 +16,9 @@ Other closely associated packages, also installed automatically, include [Zygote
 
 ## Learning Flux
 
-The [quick start](models/quickstart.md) page trains a simple neural network.
+The [quick start](getting_started/quickstart.md) page trains a simple neural network.
 
-This rest of this documentation provides a from-scratch introduction to Flux's take on models and how they work, starting with [fitting a line](models/overview.md). Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.
+This rest of this documentation provides a from-scratch introduction to Flux's take on models and how they work, starting with [fitting a line](getting_started/overview.md). Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.
 
 Sections with 📚 contain API listings. The same text is avalable at the Julia prompt, by typing for example `?gpu`.
 
````

docs/src/models/activation.md (1 addition, 2 deletions)

````diff
@@ -1,5 +1,4 @@
-
-# Activation Functions from NNlib.jl
+# [Activation Functions from NNlib.jl](@id man-activations)
 
 These non-linearities used between layers of your model are exported by the [NNlib](https://github.com/FluxML/NNlib.jl) package.
 
````
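As a quick illustration of what that page documents, a hedged sketch of NNlib activations in use (layer sizes and inputs below are arbitrary examples, not from this commit):

```julia
using Flux   # re-exports NNlib activations such as relu and sigmoid

relu(-2f0), relu(2f0)      # (0.0f0, 2.0f0): zero below 0, identity above
sigmoid(0f0)               # 0.5f0: squashes into (0, 1)

# Applied element-wise between layers of a model:
model = Chain(Dense(10 => 5, relu), Dense(5 => 1, sigmoid))
model(rand(Float32, 10))
```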

docs/src/models/functors.md (1 addition, 1 deletion)

````diff
@@ -4,7 +4,7 @@ Flux models are deeply nested structures, and [Functors.jl](https://github.com/F
 
 New layers should be annotated using the `Functors.@functor` macro. This will enable [`params`](@ref Flux.params) to see the parameters inside, and [`gpu`](@ref) to move them to the GPU.
 
-`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](../models/advanced.md) page covers the use cases of `Functors` in greater details.
+`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](../tutorials/advanced.md) page covers the use cases of `Functors` in greater details.
 
 ```@docs
 Functors.@functor
````
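To make the `@functor` sentence above concrete, a minimal sketch; the `Affine` layer is illustrative, not part of this commit:

```julia
using Flux
import Functors

struct Affine
    W
    b
end

# Annotate the struct so Flux's machinery can recurse into its fields.
Functors.@functor Affine

(a::Affine)(x) = a.W * x .+ a.b

layer = Affine(randn(Float32, 3, 5), zeros(Float32, 3))
Flux.params(layer)   # now collects W and b
# gpu(layer)         # would likewise move W and b, given a GPU
```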

docs/src/training/optimisers.md (1 addition, 1 deletion)

````diff
@@ -4,7 +4,7 @@ CurrentModule = Flux
 
 # Optimisers
 
-Consider a [simple linear regression](../getting_started/linear_regression.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
+Consider a [simple linear regression](../tutorials/linear_regression.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
 
 ```julia
 using Flux
````
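The hunk shows only the first lines of that snippet. A hedged reconstruction of the dummy-data-and-gradients step the paragraph describes (sizes and values are arbitrary placeholders):

```julia
using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = rand(5), rand(2)             # dummy data
gs = gradient(() -> loss(x, y), Flux.params(W, b))

gs[W], gs[b]                        # gradients an optimiser can then apply
```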

docs/src/training/training.md (2 additions, 2 deletions)

````diff
@@ -36,7 +36,7 @@ Flux.Optimise.train!
 ```
 
 There are plenty of examples in the [model zoo](https://github.com/FluxML/model-zoo), and
-more information can be found on [Custom Training Loops](../models/advanced.md).
+more information can be found on [Custom Training Loops](../tutorials/advanced.md).
 
 ## Loss Functions
 
@@ -68,7 +68,7 @@ The model to be trained must have a set of tracked parameters that are used to c
 
 Such an object contains a reference to the model's parameters, not a copy, such that after their training, the model behaves according to their updated values.
 
-Handling all the parameters on a layer by layer basis is explained in the [Layer Helpers](../getting_started/basics.md) section. Also, for freezing model parameters, see the [Advanced Usage Guide](../models/advanced.md).
+Handling all the parameters on a layer by layer basis is explained in the [Layer Helpers](../getting_started/basics.md) section. Also, for freezing model parameters, see the [Advanced Usage Guide](../tutorials/advanced.md).
 
 ```@docs
 Flux.params
````
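For reference, a minimal sketch of the `Flux.Optimise.train!` call this page documents, in the implicit-parameters style of this era of Flux; the model, loss, data, and learning rate below are illustrative placeholders:

```julia
using Flux

model = Dense(5 => 2)
loss(x, y) = Flux.Losses.mse(model(x), y)
data = [(rand(Float32, 5), rand(Float32, 2)) for _ in 1:10]   # dummy batches
opt = Descent(0.1)

# One pass over `data`, updating the tracked parameters in place.
Flux.train!(loss, Flux.params(model), data, opt)
```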
File renamed without changes.

src/losses/functions.jl (1 addition, 1 deletion)

````diff
@@ -273,7 +273,7 @@ Return the binary cross-entropy loss, computed as
 
     agg(@.(-y * log(ŷ + ϵ) - (1 - y) * log(1 - ŷ + ϵ)))
 
-Where typically, the prediction `ŷ` is given by the output of a [sigmoid](@ref Activation-Functions-from-NNlib.jl) activation.
+Where typically, the prediction `ŷ` is given by the output of a [sigmoid](@ref man-activations) activation.
 The `ϵ` term is included to avoid infinity. Using [`logitbinarycrossentropy`](@ref) is recomended
 over `binarycrossentropy` for numerical stability.
````
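To see the formula and the stability note in action, a small hedged example (inputs are arbitrary):

```julia
using Flux

x = randn(Float32, 4)                 # raw model outputs (logits)
ŷ = sigmoid.(x)                       # predictions in (0, 1)
y = Float32[0, 1, 0, 1]               # binary targets

Flux.Losses.binarycrossentropy(ŷ, y)       # the agg(...) formula quoted above
Flux.Losses.logitbinarycrossentropy(x, y)  # same loss from logits; numerically safer
```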
