
Commit f2f94df

Fix cross-references for loss functions
1 parent 1914f38 commit f2f94df

File tree

2 files changed: +6 −5 lines


docs/src/models/layers.md

Lines changed: 1 addition & 0 deletions
@@ -71,6 +71,7 @@ These layers don't affect the structure of the network but may improve training
 Flux.normalise
 BatchNorm
 Dropout
+Flux.dropout
 AlphaDropout
 LayerNorm
 InstanceNorm
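
The new `Flux.dropout` entry documents the functional counterpart of the `Dropout` layer. A minimal usage sketch, not part of this commit, assuming the `dropout(x, p)` call form and that dropping is only applied in training mode:

    using Flux

    x = rand(Float32, 4, 3)

    layer = Dropout(0.5)      # layer form: owns its rate, toggled by train/test mode
    layer(x)

    Flux.dropout(x, 0.5)      # functional form: zeroes entries with probability 0.5
                              # (during training) and rescales the survivors by 1/(1 - 0.5)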

src/losses/functions.jl

Lines changed: 5 additions & 5 deletions
@@ -167,7 +167,7 @@ Cross entropy is typically used as a loss in multi-class classification,
 in which case the labels `y` are given in a one-hot format.
 `dims` specifies the dimension (or the dimensions) containing the class probabilities.
 The prediction `ŷ` is supposed to sum to one across `dims`,
-as would be the case with the output of a [`softmax`](@ref) operation.
+as would be the case with the output of a [softmax](@ref Softmax) operation.

 For numerical stability, it is recommended to use [`logitcrossentropy`](@ref)
 rather than `softmax` followed by `crossentropy` .
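
Not part of the diff, just a sketch of the usage this docstring describes, with made-up data: `crossentropy` expects predictions that already sum to one along `dims` (softmax output), while `logitcrossentropy` takes the raw scores.

    using Flux
    using Flux.Losses: crossentropy, logitcrossentropy

    y = Flux.onehotbatch([1, 2, 3], 1:3)   # one-hot labels, classes along dims = 1
    logits = randn(Float32, 3, 3)          # raw scores
    ŷ = softmax(logits; dims = 1)          # probabilities summing to one along dims = 1

    crossentropy(ŷ, y; dims = 1)           # loss on normalised predictions
    logitcrossentropy(logits, y)           # numerically safer: works on the logits directly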
@@ -225,7 +225,7 @@ Return the cross entropy calculated by

 This is mathematically equivalent to `crossentropy(softmax(ŷ), y)`,
 but is more numerically stable than using functions [`crossentropy`](@ref)
-and [`softmax`](@ref) separately.
+and [softmax](@ref Softmax) separately.

 See also: [`binarycrossentropy`](@ref), [`logitbinarycrossentropy`](@ref), [`label_smoothing`](@ref).

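Again not in the commit: a small check of the stated equivalence, on illustrative data (equal up to the ϵ stabiliser and floating-point error).

    using Flux
    using Flux.Losses: crossentropy, logitcrossentropy

    y = Flux.onehotbatch([2, 1, 3], 1:3)
    ŷ = randn(Float32, 3, 3)               # raw logits

    logitcrossentropy(ŷ, y) ≈ crossentropy(softmax(ŷ), y)   # ≈ true, but more stable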
@@ -262,7 +262,7 @@ Return the binary cross-entropy loss, computed as

 agg(@.(-y * log(ŷ + ϵ) - (1 - y) * log(1 - ŷ + ϵ)))

-Where typically, the prediction `ŷ` is given by the output of a [`sigmoid`](@ref) activation.
+Where typically, the prediction `ŷ` is given by the output of a [sigmoid](@ref Activation-Functions) activation.
 The `ϵ` term is included to avoid infinity. Using [`logitbinarycrossentropy`](@ref) is recomended
 over `binarycrossentropy` for numerical stability.

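A sketch of the binary case described above (not from the commit; data is made up, `σ` is Flux's sigmoid):

    using Flux
    using Flux.Losses: binarycrossentropy, logitbinarycrossentropy

    y = Float32[1, 0, 1, 0]                # binary targets
    logits = randn(Float32, 4)             # raw scores
    ŷ = σ.(logits)                         # sigmoid output in (0, 1)

    binarycrossentropy(ŷ, y)               # uses the ϵ-stabilised formula above
    logitbinarycrossentropy(logits, y)     # recommended: takes the logits directly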
@@ -452,7 +452,7 @@ end
 binary_focal_loss(ŷ, y; agg=mean, γ=2, ϵ=eps(ŷ))

 Return the [binary_focal_loss](https://arxiv.org/pdf/1708.02002.pdf)
-The input, 'ŷ', is expected to be normalized (i.e. [`softmax`](@ref) output).
+The input, 'ŷ', is expected to be normalized (i.e. [softmax](@ref Softmax) output).

 For `γ == 0`, the loss is mathematically equivalent to [`Losses.binarycrossentropy`](@ref).

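Not part of the diff: a sketch exercising the `γ == 0` claim for `binary_focal_loss`, on made-up, already-normalised probabilities.

    using Flux.Losses: binary_focal_loss, binarycrossentropy

    y = Float32[1, 0, 1]
    ŷ = Float32[0.8, 0.3, 0.6]             # already sigmoid-normalised predictions

    binary_focal_loss(ŷ, y; γ = 2)         # default: down-weights easy examples
    binary_focal_loss(ŷ, y; γ = 0) ≈ binarycrossentropy(ŷ, y)   # stated equivalence (up to ϵ)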
@@ -493,7 +493,7 @@ end
 Return the [focal_loss](https://arxiv.org/pdf/1708.02002.pdf)
 which can be used in classification tasks with highly imbalanced classes.
 It down-weights well-classified examples and focuses on hard examples.
-The input, 'ŷ', is expected to be normalized (i.e. [`softmax`](@ref) output).
+The input, 'ŷ', is expected to be normalized (i.e. [softmax](@ref Softmax) output).

 The modulating factor, `γ`, controls the down-weighting strength.
 For `γ == 0`, the loss is mathematically equivalent to [`Losses.crossentropy`](@ref).
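
Likewise not in the commit: a sketch of `focal_loss` on softmax-normalised, one-hot-labelled data, including the `γ == 0` reduction mentioned in the docstring.

    using Flux
    using Flux.Losses: focal_loss, crossentropy

    y = Flux.onehotbatch([1, 3, 2], 1:3)
    ŷ = softmax(randn(Float32, 3, 3); dims = 1)   # normalised class probabilities

    focal_loss(ŷ, y; γ = 2)                       # concentrates the loss on hard examples
    focal_loss(ŷ, y; γ = 0) ≈ crossentropy(ŷ, y)  # reduces to plain cross entropy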

0 commit comments