
Commit 72f5566

More docs
Co-authored-by: Kyle Daruwalla <daruwalla.k.public@icloud.com>
Parent: af504d3

2 files changed: +20 −20 lines

src/convnets/inception.jl

Lines changed: 18 additions & 18 deletions
@@ -286,9 +286,9 @@ Create an Inceptionv4 model.
 
 # Arguments
 
-  - inchannels: number of input channels.
-  - dropout: rate of dropout in classifier head.
-  - nclasses: the number of output classes.
+  - `inchannels`: number of input channels.
+  - `dropout`: rate of dropout in classifier head.
+  - `nclasses`: the number of output classes.
 """
 function inceptionv4(; inchannels = 3, dropout = 0.0, nclasses = 1000)
     body = Chain(conv_bn((3, 3), inchannels, 32; stride = 2)...,
@@ -326,9 +326,9 @@ Creates an Inceptionv4 model.
 # Arguments
 
   - `pretrain`: set to `true` to load the pre-trained weights for ImageNet
-  - inchannels: number of input channels.
-  - dropout: rate of dropout in classifier head.
-  - nclasses: the number of output classes.
+  - `inchannels`: number of input channels.
+  - `dropout`: rate of dropout in classifier head.
+  - `nclasses`: the number of output classes.
 
 !!! warning
@@ -426,9 +426,9 @@ Creates an InceptionResNetv2 model.
 
 # Arguments
 
-  - inchannels: number of input channels.
-  - dropout: rate of dropout in classifier head.
-  - nclasses: the number of output classes.
+  - `inchannels`: number of input channels.
+  - `dropout`: rate of dropout in classifier head.
+  - `nclasses`: the number of output classes.
 """
 function inceptionresnetv2(; inchannels = 3, dropout = 0.0, nclasses = 1000)
     body = Chain(conv_bn((3, 3), inchannels, 32; stride = 2)...,
@@ -459,9 +459,9 @@ Creates an InceptionResNetv2 model.
 # Arguments
 
   - `pretrain`: set to `true` to load the pre-trained weights for ImageNet
-  - inchannels: number of input channels.
-  - dropout: rate of dropout in classifier head.
-  - nclasses: the number of output classes.
+  - `inchannels`: number of input channels.
+  - `dropout`: rate of dropout in classifier head.
+  - `nclasses`: the number of output classes.
 
 !!! warning
@@ -496,12 +496,12 @@ Create an Xception block.
 
 # Arguments
 
-  - inchannels: number of input channels.
-  - outchannels: number of output channels.
-  - nrepeats: number of repeats of depthwise separable convolution layers.
-  - stride: stride by which to downsample the input.
-  - start_with_relu: if true, start the block with a ReLU activation.
-  - grow_first: if true, increase the number of channels at the first convolution.
+  - `inchannels`: number of input channels.
+  - `outchannels`: number of output channels.
+  - `nrepeats`: number of repeats of depthwise separable convolution layers.
+  - `stride`: stride by which to downsample the input.
+  - `start_with_relu`: if true, start the block with a ReLU activation.
+  - `grow_first`: if true, increase the number of channels at the first convolution.
 """
 function xception_block(inchannels, outchannels, nrepeats; stride = 1,
                         start_with_relu = true,
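The whole point of these hunks is wrapping argument names in backticks so that Julia's documentation system renders them as inline code instead of plain text. A minimal sketch of the difference, using only the `Markdown` standard library (the parse helper `first_inline` is a name introduced here for illustration):

```julia
using Markdown

plain  = Markdown.parse("inchannels: number of input channels.")
ticked = Markdown.parse("`inchannels`: number of input channels.")

# Pull the first inline element out of the first paragraph of a parsed
# Markdown document.
first_inline(md) = md.content[1].content[1]

# Without backticks the argument name is an ordinary text run; with
# backticks it parses as an inline Markdown.Code element, which
# renderers typeset in a monospace font.
println(typeof(first_inline(plain)))
println(typeof(first_inline(ticked)))
```

This is why the change, though cosmetic in the source, matters for the generated docs: `inchannels` and friends now render consistently with the code they name.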

src/layers/conv.jl

Lines changed: 2 additions & 2 deletions
@@ -64,9 +64,9 @@ Create a depthwise separable convolution chain as used in MobileNetv1.
 This is sequence of layers:
 
   - a `kernelsize` depthwise convolution from `inplanes => inplanes`
-  - a batch norm layer + `activation` (if `use_bn1`; otherwise `activation` is applied to the convolution output)
+  - a batch norm layer + `activation` (if `use_bn[1] == true`; otherwise `activation` is applied to the convolution output)
   - a `kernelsize` convolution from `inplanes => outplanes`
-  - a batch norm layer + `activation` (if `use_bn2`; otherwise `activation` is applied to the convolution output)
+  - a batch norm layer + `activation` (if `use_bn[2] == true`; otherwise `activation` is applied to the convolution output)
 
 See Fig. 3 in [reference](https://arxiv.org/abs/1704.04861v1).
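The layer documented here is the depthwise separable convolution of MobileNetv1. A back-of-the-envelope sketch in plain Julia (independent of Metalhead) of why the factorization saves parameters, assuming the usual MobileNetv1 scheme from the referenced paper — a k×k depthwise convolution followed by a 1×1 pointwise convolution, ignoring bias and batch-norm parameters:

```julia
# Parameters of one standard k×k convolution layer: one k×k×inplanes
# filter per output channel.
standard_params(k, inplanes, outplanes) = k * k * inplanes * outplanes

# Parameters of the depthwise separable factorization:
depthwise_separable_params(k, inplanes, outplanes) =
    k * k * inplanes +      # depthwise: one k×k filter per input channel
    inplanes * outplanes    # pointwise: 1×1 convolution mixing channels

k, inplanes, outplanes = 3, 64, 128
println(standard_params(k, inplanes, outplanes))             # 73728
println(depthwise_separable_params(k, inplanes, outplanes))  # 8768
```

For this 3×3, 64 => 128 case the factorization needs roughly 8× fewer parameters, which is the efficiency argument behind MobileNetv1's design.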
