Commit c0994c7

maybe this example should run on the GPU, since it easily can, even though this is slower
1 parent 739197d commit c0994c7

File tree: 1 file changed, 5 additions & 5 deletions


docs/src/models/quickstart.md

Lines changed: 5 additions & 5 deletions
````diff
@@ -17,14 +17,14 @@ model = Chain(
     Dense(2 => 3, tanh),   # activation function inside layer
     BatchNorm(3),
     Dense(3 => 2),
-    softmax)
+    softmax) |> gpu        # move model to GPU, if available
 
 # The model encapsulates parameters, randomly initialised. Its initial output is:
-out1 = model(noisy)                                 # 2×1000 Matrix{Float32}
+out1 = model(noisy |> gpu) |> cpu                   # 2×1000 Matrix{Float32}
 
 # To train the model, we use batches of 64 samples, and one-hot encoding:
 target = Flux.onehotbatch(truth, [true, false])     # 2×1000 OneHotMatrix
-loader = Flux.DataLoader((noisy, target), batchsize=64, shuffle=true);
+loader = Flux.DataLoader((noisy, target) |> gpu, batchsize=64, shuffle=true);
 # 16-element DataLoader with first element: (2×64 Matrix{Float32}, 2×64 OneHotMatrix)
 
 pars = Flux.params(model)  # contains references to arrays in model
@@ -34,7 +34,7 @@ opt = Flux.Adam(0.01)      # will store optimiser momentum, etc.
 losses = []
 for epoch in 1:1_000
     for (x, y) in loader
-        loss, grad = withgradient(pars) do
+        loss, grad = Flux.withgradient(pars) do
             # Evaluate model and loss inside gradient context:
             y_hat = model(x)
             Flux.crossentropy(y_hat, y)
@@ -46,7 +46,7 @@ end
 
 pars  # parameters, momenta and output have all changed
 opt
-out2 = model(noisy)                # first row is prob. of true, second row p(false)
+out2 = model(noisy |> gpu) |> cpu  # first row is prob. of true, second row p(false)
 
 mean((out2[1,:] .> 0.5) .== truth)  # accuracy 94% so far!
 ```
````
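The pattern this commit introduces is: move the model and the inputs to the GPU with `|> gpu`, then bring outputs back with `|> cpu`. In Flux, `gpu` falls back to the identity function when no functional GPU device is found, so the same code still runs on CPU-only machines. A minimal sketch of the round-trip, with toy data standing in for the quickstart's `noisy` (the data here is hypothetical, not from the commit):

```julia
using Flux

# Hypothetical stand-in for the quickstart's 2×1000 input matrix:
noisy = rand(Float32, 2, 1000)

# Build the model and move it to the GPU if one is available;
# `gpu` is a no-op without a working GPU backend.
model = Chain(Dense(2 => 3, tanh), BatchNorm(3), Dense(3 => 2), softmax) |> gpu

# Inputs go up with `gpu`, outputs come back with `cpu`:
out1 = model(noisy |> gpu) |> cpu   # 2×1000 Matrix{Float32}
```

The commit message notes this is slower for a model this small: the transfer overhead outweighs any GPU speedup, but the example then demonstrates the full device-movement workflow.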
