
Commit 8a53d73

Minor documentation build errors
1 parent ada9671 commit 8a53d73

File tree

8 files changed: +12, -19 lines

src/convnets/convnext.jl

Lines changed: 3 additions & 2 deletions
@@ -8,7 +8,7 @@ Creates a single block of ConvNeXt.

 - `planes`: number of input channels.
 - `drop_path_rate`: Stochastic depth rate.
-- `λ`: Init value for LayerScale
+- `λ`: Initial value for [`LayerScale`](#)
 """
 function convnextblock(planes, drop_path_rate = 0.0, λ = 1.0f-6)
 layers = SkipConnection(Chain(DepthwiseConv((7, 7), planes => planes; pad = 3),

@@ -33,7 +33,8 @@ Creates the layers for a ConvNeXt model.
 - `depths`: list with configuration for depth of each block
 - `planes`: list with configuration for number of output channels in each block
 - `drop_path_rate`: Stochastic depth rate.
-- `λ`: Init value for [LayerScale](https://arxiv.org/abs/2103.17239)
+- `λ`: Initial value for [`LayerScale`](#)
+([reference](https://arxiv.org/abs/2103.17239))
 - `nclasses`: number of output classes
 """
 function convnext(depths, planes; inchannels = 3, drop_path_rate = 0.0, λ = 1.0f-6,
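For orientation, a minimal usage sketch of the two builders shown in this hunk. This is a hedged example: the `Metalhead.` qualification (the builders may be unexported), the depth/width configuration, and the width × height × channels × batch input layout are assumptions, not part of this diff.

    using Flux, Metalhead

    # Hypothetical: a ConvNeXt-style network from the `convnext` builder documented above.
    # The [3, 3, 9, 3] / [96, 192, 384, 768] configuration is illustrative only.
    model = Metalhead.convnext([3, 3, 9, 3], [96, 192, 384, 768]; nclasses = 1000)

    # A single block is a SkipConnection, so it preserves the input shape.
    block = Metalhead.convnextblock(64)        # drop_path_rate = 0.0, λ = 1.0f-6 by default
    x = rand(Float32, 32, 32, 64, 1)
    @assert size(block(x)) == size(x)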

src/convnets/inception.jl

Lines changed: 3 additions & 3 deletions
@@ -332,7 +332,7 @@ Creates an Inceptionv4 model.

 !!! warning

-`Inceptionv4`` does not currently support pretrained weights.
+`Inceptionv4` does not currently support pretrained weights.
 """
 struct Inceptionv4
 layers::Any

@@ -464,8 +464,8 @@ Creates an InceptionResNetv2 model.
 - `nclasses`: the number of output classes.

 !!! warning
-
-`InceptionResNetv2` does not currently support pretrained weights.
+
+`InceptionResNetv2` does not currently support pretrained weights.
 """
 struct InceptionResNetv2
 layers::Any
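As a usage note for the two structs touched here, a hedged sketch; the `pretrain` keyword is assumed from the surrounding docstrings and the package's other constructors, not shown in this diff.

    using Metalhead

    # Per the warnings above, pretrained ImageNet weights are not available for these
    # models, so `pretrain` (assumed keyword) stays at `false`.
    m1 = Inceptionv4(; pretrain = false)
    m2 = InceptionResNetv2(; pretrain = false)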

src/convnets/resnet.jl

Lines changed: 0 additions & 4 deletions
@@ -226,10 +226,6 @@ See also [`Metalhead.resnet`](#).
 - `depth`: depth of the ResNet model. Options include (18, 34, 50, 101, 152).
 - `nclasses`: the number of output classes

-!!! warning
-
-Only `ResNet(50)` currently supports pretrained weights.
-
 For `ResNet(18)` and `ResNet(34)`, the parameter-free shortcut style (type `:A`)
 is used in the first block and the three other blocks use type `:B` connection
 (following the implementation in PyTorch). The published version of
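A short sketch of the depths listed in this docstring; the `pretrain` keyword is an assumption carried over from the package's other constructors rather than something shown in this hunk.

    using Metalhead

    # 18- and 34-layer variants use the type `:A` shortcut in the first block and
    # type `:B` elsewhere, as the docstring above explains.
    resnet18 = ResNet(18)

    # Per the removed warning, only the 50-layer variant shipped pretrained weights at
    # the time; `pretrain = true` (assumed keyword) would load them.
    resnet50 = ResNet(50)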

src/convnets/resnext.jl

Lines changed: 1 addition & 2 deletions
@@ -112,9 +112,8 @@ Create a ResNeXt model with specified configuration. Currently supported values
 Set `pretrain = true` to load the model with pre-trained weights for ImageNet.

 !!! warning
-

-`ResNeXt` does not currently support pretrained weights.
+`ResNeXt` does not currently support pretrained weights.

 See also [`Metalhead.resnext`](#).
 """

src/convnets/vgg.jl

Lines changed: 0 additions & 4 deletions
@@ -154,10 +154,6 @@ Create a VGG style model with specified `depth`. Available values include (11, 1
 ([reference](https://arxiv.org/abs/1409.1556v6)).
 See also [`VGG`](#).

-!!! warning
-
-`VGG` does not currently support pretrained weights.
-
 # Arguments

 - `pretrain`: set to `true` to load pre-trained model weights for ImageNet
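A hedged sketch using one depth from the list above (the hunk header truncates the full set of available values; the exact constructor keywords are assumed):

    using Metalhead

    m = VGG(16)        # `pretrain = true` is documented above for loading ImageNet weights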

src/layers/conv.jl

Lines changed: 2 additions & 2 deletions
@@ -154,7 +154,7 @@ Squeeze and excitation layer used by MobileNet variants

 - `channels`: the number of input/output feature maps
 - `reduction = 4`: the reduction factor for the number of hidden feature maps
-(must be >= 1)
+(must be >= 1)
 """
 function squeeze_excite(channels, reduction = 4)
 @assert (reduction>=1) "`reduction` must be >= 1"

@@ -182,7 +182,7 @@ Create a basic inverted residual block for MobileNet variants
 - `stride`: The stride of the convolutional kernel, has to be either 1 or 2
 - `reduction`: The reduction factor for the number of hidden feature maps
 in a squeeze and excite layer (see [`squeeze_excite`](#)).
-Must be >= 1 or `nothing` for no squeeze and excite layer.
+Must be >= 1 or `nothing` for no squeeze and excite layer.
 """
 function invertedresidual(kernel_size, inplanes, hidden_planes, outplanes,
 activation = relu; stride, reduction = nothing)
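For reference, a minimal sketch of the squeeze-and-excite layer documented above; the `Metalhead.` qualification (the helper may be unexported) and the width × height × channels × batch layout are assumptions.

    using Flux, Metalhead

    # Channel-wise reweighting: the output shape matches the input shape.
    se = Metalhead.squeeze_excite(32, 4)       # reduction must be >= 1, per the assert above
    x = rand(Float32, 16, 16, 32, 1)
    @assert size(se(x)) == size(x)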

src/utilities.jl

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ cat_channels(xy...) = cat(xy...; dims = Val(3))
 """
 inputscale(λ; activation = identity)

-Scale the input by a scalar λ and applies an activation function to it.
+Scale the input by a scalar `λ` and applies an activation function to it.
 Equivalent to `activation.(λ .* x)`.
 """
 inputscale(λ; activation = identity) = x -> _input_scale(x, λ, activation)
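A quick check of the documented equivalence, as a hedged sketch (`Metalhead.inputscale` may be unexported; `relu` comes from Flux; the scale value is arbitrary):

    using Flux, Metalhead

    s = Metalhead.inputscale(1.0f-3; activation = relu)
    x = randn(Float32, 4)
    @assert s(x) ≈ relu.(1.0f-3 .* x)          # matches the documented `activation.(λ .* x)`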

src/vit-based/vit.jl

Lines changed: 2 additions & 1 deletion
@@ -79,7 +79,8 @@ Creates a Vision Transformer (ViT) model.

 # Arguments

-- `mode`: the model configuration, one of [:tiny, :small, :base, :large, :huge, :giant, :gigantic]
+- `mode`: the model configuration, one of
+`[:tiny, :small, :base, :large, :huge, :giant, :gigantic]`
 - `imsize`: image size
 - `inchannels`: number of input channels
 - `patch_size`: size of the patches
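A hedged sketch of choosing one of the documented `mode` values; the exported `ViT` constructor taking `mode` as its first argument is an assumption based on the docstring, not something shown in this diff.

    using Metalhead

    model = ViT(:tiny)         # one of [:tiny, :small, :base, :large, :huge, :giant, :gigantic]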
