docs/src/getting_started/linear_regression.md
34 additions & 35 deletions
@@ -71,8 +71,8 @@ model(W, b, x) = Wx + b
where `W` is the weight matrix and `b` is the bias. For our case, the weight matrix (`W`) would constitute only a single element, as we have only a single feature. We can define our model in `Julia` using the exact same notation!

```jldoctest linear_regression_simple
-julia> model(W, b, x) = @. W*x + b
-model (generic function with 1 method)
+julia> custom_model(W, b, x) = @. W*x + b
+custom_model (generic function with 1 method)
```

The `@.` macro allows you to perform the calculations by broadcasting the scalar quantities (for example - the bias).
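For readers new to the macro, `@.` rewrites every operation in the expression into its broadcast (element-wise) form, which is what lets the scalar bias be added to each prediction. A quick sketch of the equivalence (the second definition is illustrative, not part of the tutorial):

```julia
# `@.` broadcasts every call, so these two definitions behave the same.
custom_model(W, b, x)  = @. W*x + b
custom_model2(W, b, x) = W .* x .+ b   # explicit broadcast form, for illustration
```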
It does! But the predictions are way off. We need to train the model to improve the predictions, but before training the model we need to define the loss function. The loss function would ideally output a quantity that we will try to minimize during the entire training process. Here we will use the mean sum squared error loss function.
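As a hedged illustration (not necessarily the exact code in the tutorial), such a loss could be sketched as follows; `custom_loss` is a hypothetical name, while `custom_model`, `W`, `b`, `x`, and `y` follow the names used above:

```julia
# Hypothetical mean squared error loss: predict with the current parameters,
# then average the squared residuals over all samples.
function custom_loss(W, b, x, y)
    ŷ = custom_model(W, b, x)
    sum((y .- ŷ) .^ 2) / length(y)
end
```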
@@ -182,9 +182,8 @@ The derivatives are calculated using an Automatic Differentiation tool, and `Flu
Our first step would be to obtain the gradient of the loss function with respect to the weights and the biases. `Flux` re-exports `Zygote`'s `gradient` function; hence, we don't need to import `Zygote` explicitly to use the functionality.
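A minimal sketch of what such a `gradient` call can look like, reusing the hypothetical `custom_loss` helper sketched above (the tutorial's own variable names may differ):

```julia
# Differentiate the loss with respect to every argument at the current data (x, y).
# `gradient` is re-exported by Flux from Zygote, so `using Flux` is enough to call it.
dLdW, dLdb, _, _ = gradient(custom_loss, W, b, x, y)
```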
It works, and the loss went down again! This was the second epoch of our training procedure. Let's plug this into a for loop and train the model for 30 epochs.
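A rough sketch of such a loop, assuming `W` and `b` are arrays (as they are earlier in the tutorial) and using a hypothetical `train_custom_model!` step that applies a plain gradient-descent update with a fixed learning rate:

```julia
# One gradient-descent step: compute gradients and nudge the parameters.
function train_custom_model!(W, b, x, y; lr = 0.1)
    dLdW, dLdb, _, _ = gradient(custom_loss, W, b, x, y)
    W .-= lr .* dLdW
    b .-= lr .* dLdb
end

# Repeat the step for 30 epochs.
for epoch in 1:30
    train_custom_model!(W, b, x, y)
end
```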
@@ -239,7 +238,7 @@ There was a significant reduction in loss, and the parameters were updated!
`Flux` provides yet another convenience functionality, the [`Flux.@epochs`](@ref) macro, which can be used to train a model for a specific number of epochs.
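As a hedged sketch of how that macro is typically invoked in the Flux version this tutorial targets (the exact call in the tutorial may differ), `@epochs` takes the number of epochs followed by the expression to run each epoch:

```julia
# Run the hypothetical training step once per epoch, 30 times in total.
using Flux: @epochs

@epochs 30 train_custom_model!(W, b, x, y)
```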