Commit 06cb07c

@epochs is deprecated

1 parent a2128ba · commit 06cb07c

File tree: 1 file changed, +2 −21 lines

docs/src/getting_started/linear_regression.md

Lines changed: 2 additions & 21 deletions
@@ -225,35 +225,16 @@ julia> W, b, custom_loss(W, b, x, y)
 It works, and the loss went down again! This was the second epoch of our training procedure. Let's plug this in a for loop and train the model for 40 epochs.
 
 ```jldoctest linear_regression_simple; filter = r"[+-]?([0-9]*[.])?[0-9]+"
-julia> for i = 1:30
+julia> for i = 1:40
            train_custom_model()
        end
 
 julia> W, b, custom_loss(W, b, x, y)
-(Float32[4.2408285], Float32[2.243728], 7.668049f0)
+(Float32[4.2422233], Float32[2.2460847], 7.6680417f0)
 ```
 
 There was a significant reduction in loss, and the parameters were updated!
 
-`Flux` provides yet another convenience functionality, the [`Flux.@epochs`](@ref) macro, which can be used to train a model for a specific number of epochs.
-
-```jldoctest linear_regression_simple; filter = r"[+-]?([0-9]*[.])?[0-9]+"
-julia> Flux.@epochs 10 train_custom_model()
-[ Info: Epoch 1
-[ Info: Epoch 2
-[ Info: Epoch 3
-[ Info: Epoch 4
-[ Info: Epoch 5
-[ Info: Epoch 6
-[ Info: Epoch 7
-[ Info: Epoch 8
-[ Info: Epoch 9
-[ Info: Epoch 10
-
-julia> W, b, custom_loss(W, b, x, y)
-(Float32[4.2422233], Float32[2.2460847], 7.6680417f0)
-```
-
 We can train the model even more or tweak the hyperparameters to achieve the desired result faster, but let's stop here. We trained our model for 42 epochs, and the loss went down from `22.74856` to `7.6680417f0`. Time for some visualization!
 
 ### Results
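For readers who relied on the removed `Flux.@epochs` macro, its behavior is easy to reproduce with a plain loop. A minimal sketch, assuming `train_custom_model` is the zero-argument training function defined earlier in this tutorial; the `@info` call mimics the per-epoch logging the macro used to emit:

```julia
# Equivalent of the deprecated `Flux.@epochs 10 train_custom_model()`:
# run the training step ten times, logging each epoch as the macro did.
for epoch in 1:10
    @info "Epoch $epoch"
    train_custom_model()
end
```

A plain loop is also more flexible than the macro was: you can add early stopping, checkpointing, or loss tracking directly in the loop body.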
