Commit 8708a0e

use rand32 etc
1 parent 3f8084e commit 8708a0e

5 files changed, +23 -21 lines changed
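For context (not part of the diff): `rand32`, `randn32`, `ones32`, and `zeros32` are Flux's `Float32` convenience constructors, shorthand for the corresponding Base functions called with a `Float32` element type. A minimal sketch of what they return, assuming a Flux version that exports these helpers:

```julia
using Flux

x = rand32(10, 32)          # shorthand for rand(Float32, 10, 32)
eltype(x), size(x)          # (Float32, (10, 32))

eltype(randn32(1000, 1))    # Float32, standard-normal entries
eltype(ones32(1, 2))        # Float32, all ones
```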

docs/src/tutorials/linear_regression.md

Lines changed: 7 additions & 5 deletions

@@ -272,6 +272,8 @@ Let's start by initializing our dataset. We will be using the [`BostonHousing`](
 julia> dataset = BostonHousing();
 
 julia> x, y = BostonHousing(as_df=false)[:];
+
+julia> x, y = Float32.(x), Float32.(y)
 ```
 
 We can now split the obtained data into training and testing data -
@@ -287,7 +289,7 @@ This data contains a diverse number of features, which means that the features h
 
 ```jldoctest linear_regression_complex; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> std(x_train)
-134.06784844377117
+134.06786f0
 ```
 
 The data is indeed not normalised. We can use the [`Flux.normalise`](@ref) funct
@@ -296,7 +298,7 @@ The data is indeed not normalised. We can use the [`Flux.normalise`](@ref) funct
 julia> x_train_n = Flux.normalise(x_train);
 
 julia> std(x_train_n)
-1.0000843694328236
+1.0000844f0
 ```
 
 The standard deviation is now close to one! Our data is ready!
@@ -318,7 +320,7 @@ julia> function loss(model, x, y)
        end;
 
 julia> loss(model, x_train_n, y_train)
-676.165591625047
+676.1656f0
 ```
 
 We can now proceed to the training phase!
@@ -363,7 +365,7 @@ Let's have a look at the loss -
 
 ```jldoctest linear_regression_complex; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> loss(model, x_train_n, y_train)
-27.127200028562164
+27.1272f0
 ```
 
 The loss went down significantly! It can be minimized further by choosing an even smaller `δ`.
@@ -376,7 +378,7 @@ The last step of this tutorial would be to test our model using the testing data
 julia> x_test_n = Flux.normalise(x_test);
 
 julia> loss(model, x_test_n, y_test)
-66.91014769713368
+66.91015f0
 ```
 
 The loss is not as small as the loss of the training data, but it looks good! This also shows that our model is not overfitting!
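The doctest outputs above change because the commit converts the Boston Housing arrays to `Float32` up front: once `x` and `y` are `Float32`, reductions such as `std` and the mean-squared-error loss return `Float32` values, which Julia prints with the `f0` suffix (e.g. `134.06786f0`). A minimal sketch of that effect (not part of the commit; assumes MLDatasets provides `BostonHousing`):

```julia
using MLDatasets: BostonHousing
using Statistics

x, y = BostonHousing(as_df=false)[:]    # Float64 matrices by default
x, y = Float32.(x), Float32.(y)         # match Flux's default Float32 parameters

eltype(x)   # Float32
std(x)      # a Float32, printed with the f0 suffix
```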

src/layers/basic.jl

Lines changed: 7 additions & 7 deletions

@@ -16,7 +16,7 @@ true
 
 julia> m = Chain(Dense(10 => 5, tanh), Dense(5 => 2));
 
-julia> x = rand(10, 32);
+julia> x = rand32(10, 32);
 
 julia> m(x) == m[2](m[1](x))
 true
@@ -132,11 +132,11 @@ The weight matrix and/or the bias vector (of length `out`) may also be provided
 julia> d = Dense(5 => 2)
 Dense(5 => 2)       # 12 parameters
 
-julia> d(rand(Float32, 5, 64)) |> size
+julia> d(rand32(5, 64)) |> size
 (2, 64)
 
-julia> d(rand(Float32, 5, 1, 1, 64)) |> size  # treated as three batch dimensions
-(2, 1, 1, 64)
+julia> d(rand32(5, 6, 4, 64)) |> size  # treated as three batch dimensions
+(2, 6, 4, 64)
 
 julia> d1 = Dense(ones(2, 5), false, tanh)  # using provided weight matrix
 Dense(5 => 2, tanh; bias=false)  # 10 parameters
@@ -476,7 +476,7 @@ julia> model = Chain(Dense(3 => 5),
                      Parallel(vcat, Dense(5 => 4), Chain(Dense(5 => 7), Dense(7 => 4))),
                      Dense(8 => 17));
 
-julia> model(rand(3)) |> size
+julia> model(rand32(3)) |> size
 (17,)
 
 julia> model2 = Parallel(+; α = Dense(10, 2, tanh), β = Dense(5, 2))
@@ -486,10 +486,10 @@ Parallel(
   β = Dense(5 => 2),                  # 12 parameters
 )                   # Total: 4 arrays, 34 parameters, 392 bytes.
 
-julia> model2(rand(10), rand(5)) |> size
+julia> model2(rand32(10), rand32(5)) |> size
 (2,)
 
-julia> model2[:α](rand(10)) |> size
+julia> model2[:α](rand32(10)) |> size
 (2,)
 
 julia> model2[:β] == model2[2]
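One hunk above also updates the `Dense` batch-dimension example: `Dense` acts along the first dimension and treats any trailing dimensions as batch dimensions, so the `5 => 2` layer maps a `(5, 6, 4, 64)` input to a `(2, 6, 4, 64)` output. A quick sketch of that behaviour (not part of the commit):

```julia
using Flux

d = Dense(5 => 2)
x = rand32(5, 6, 4, 64)     # first dim = features, the rest are batch dims
size(d(x))                  # (2, 6, 4, 64)
```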

src/layers/conv.jl

Lines changed: 3 additions & 3 deletions

@@ -22,7 +22,7 @@ See also [`Conv`](@ref), [`MaxPool`](@ref).
 
 # Examples
 ```jldoctest
-julia> xs = rand(Float32, 100, 100, 3, 50);  # a batch of images
+julia> xs = rand32(100, 100, 3, 50);  # a batch of images
 
 julia> layer = Conv((2,2), 3 => 7, pad=SamePad())
 Conv((2, 2), 3 => 7, pad=(1, 0, 1, 0))  # 91 parameters
@@ -96,7 +96,7 @@ See also [`ConvTranspose`](@ref), [`DepthwiseConv`](@ref), [`CrossCor`](@ref).
 
 # Examples
 ```jldoctest
-julia> xs = rand(Float32, 100, 100, 3, 50);  # a batch of images
+julia> xs = rand32(100, 100, 3, 50);  # a batch of 50 RGB images
 
 julia> layer = Conv((5,5), 3 => 7, relu; bias = false)
 Conv((5, 5), 3 => 7, relu, bias=false)  # 525 parameters
@@ -238,7 +238,7 @@ See also [`Conv`](@ref) for more detailed description of keywords.
 
 # Examples
 ```jldoctest
-julia> xs = rand(Float32, 100, 100, 3, 50);  # a batch of 50 RGB images
+julia> xs = rand32(100, 100, 3, 50);  # a batch of 50 RGB images
 
 julia> layer = ConvTranspose((5,5), 3 => 7, relu)
 ConvTranspose((5, 5), 3 => 7, relu)  # 532 parameters
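For reference, these convolution examples follow Flux's WHCN layout (width, height, channels, batch), so `rand32(100, 100, 3, 50)` is a batch of 50 three-channel 100×100 images. A minimal sketch of the shapes in the second example above (not part of the commit):

```julia
using Flux

xs = rand32(100, 100, 3, 50)                      # WHCN batch of 50 RGB images
layer = Conv((5, 5), 3 => 7, relu; bias = false)
size(layer(xs))                                   # (96, 96, 7, 50): no padding, stride 1
```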

src/layers/normalise.jl

Lines changed: 1 addition & 1 deletion

@@ -96,7 +96,7 @@ Does nothing to the input once [`testmode!`](@ref) is true.
 ```jldoctest
 julia> using Statistics
 
-julia> x = randn(1000,1);
+julia> x = randn32(1000,1);
 
 julia> m = Chain(Dense(1000 => 1000, selu), AlphaDropout(0.2));
 

src/train.jl

Lines changed: 5 additions & 5 deletions

@@ -27,10 +27,10 @@ It differs from `Optimisers.setup` in that it:
 
 # Example
 ```jldoctest
-julia> model = Dense(2=>1, leakyrelu; init=ones32);
+julia> model = Dense(2=>1, leakyrelu; init=ones);
 
 julia> opt_state = Flux.setup(Momentum(0.1), model)  # this encodes the optimiser and its state
-(weight = Leaf(Momentum{Float64}(0.1, 0.9), Float32[0.0 0.0]), bias = Leaf(Momentum{Float64}(0.1, 0.9), Float32[0.0]), σ = ())
+(weight = Leaf(Momentum{Float64}(0.1, 0.9), [0.0 0.0]), bias = Leaf(Momentum{Float64}(0.1, 0.9), [0.0]), σ = ())
 
 julia> x1, y1 = [0.2, -0.3], [0.4];  # use the same data for two steps:
 
@@ -39,11 +39,11 @@ julia> Flux.train!(model, [(x1, y1), (x1, y1)], opt_state) do m, x, y
       end
 
 julia> model.bias  # was zero, mutated by Flux.train!
-1-element Vector{Float32}:
- 10.190001
+1-element Vector{Float64}:
+ 10.19
 
 julia> opt_state  # mutated by Flux.train!
-(weight = Leaf(Momentum{Float64}(0.1, 0.9), Float32[-2.018 3.027]), bias = Leaf(Momentum{Float64}(0.1, 0.9), Float32[-10.09]), σ = ())
+(weight = Leaf(Momentum{Float64}(0.1, 0.9), [-2.018 3.027]), bias = Leaf(Momentum{Float64}(0.1, 0.9), [-10.09]), σ = ())
 ```
 """
 function setup(rule::Optimisers.AbstractRule, model)
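The switch from `init=ones32` to `init=ones` is what removes the `Float32` prefixes from the doctest output: `Base.ones` builds a `Float64` weight matrix, so the model's parameters, and the optimiser state that `Flux.setup` creates for them, become `Float64` arrays, which print without an element-type prefix. A small sketch of the difference (not part of the commit):

```julia
using Flux

m64 = Dense(2 => 1, leakyrelu; init = ones)     # Base.ones   -> Float64 weights
m32 = Dense(2 => 1, leakyrelu; init = ones32)   # Flux.ones32 -> Float32 weights

eltype(m64.weight), eltype(m32.weight)          # (Float64, Float32)
```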
