@@ -28,7 +28,7 @@ During the last lecture, we implemented everything from scratch. This lecture wi
- It automatically computes gradients and trains the model by updating the parameters.
This functionality requires inputs in a specific format.
- Images must be stored in `Float32` instead of the commonly used `Float64` to speed up operations.
- Convolutional layers require that the input has dimension ``n_x\times n_y\times n_c\times n_s``, where ``(n_x,n_y)`` is the number of pixels in each dimension, ``n_c`` is the number of channels (1 for grayscale, and 3 for coloured images) and ``n_s`` is the number of samples.
- In general, samples are always stored in the last dimension.

We use the package [MLDatasets](https://juliaml.github.io/MLDatasets.jl/stable/) to load the data.
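
As a quick illustration of the format requirements above, the sketch below loads the MNIST training set through MLDatasets, checks its element type and size, and adds the missing singleton channel dimension. It assumes the older `MNIST.traindata` API that matches the `dataset.traindata(T)` call used later in this lecture.

```julia
using MLDatasets

# Load the training images directly in Float32 (the commonly used Float64 would be slower).
X_train, y_train = MLDatasets.MNIST.traindata(Float32)

eltype(X_train)  # Float32
size(X_train)    # (28, 28, 60000): pixels × pixels × samples; the channel dimension is missing

# Convolutional layers expect n_x × n_y × n_c × n_s, so we insert a singleton channel dimension.
X_train = reshape(X_train, 28, 28, 1, :)
size(X_train)    # (28, 28, 1, 60000): samples stay in the last dimension
```
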
@@ -59,7 +59,7 @@ The first two exercises visualize the data and transform it into the correct inp

Plot the first 15 images of the digit 0 from the training set.

**Hint**: The `ImageInspector` package written earlier provides the function `imageplot(X_train, inds; nrows=3)`, where `inds` are the desired indices.

**Hint**: To find the correct indices, use the function `findall`.
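
A possible solution sketch combining the two hints; it assumes `X_train` and `y_train` are the arrays loaded from MNIST above, and uses `imageplot` with the signature quoted in the first hint (`ImageInspector` being the package written earlier in the course).

```julia
using ImageInspector

# Indices of all training samples labelled as the digit 0; keep the first 15.
inds = findall(y_train .== 0)[1:15]

# Plot them in a grid with three rows.
imageplot(X_train, inds; nrows=3)
```
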
@@ -165,7 +165,7 @@ using Flux: onehotbatch, onecold
function load_data(dataset; T=Float32, onehot=false, classes=0:9)
    X_train, y_train = dataset.traindata(T)
    X_test, y_test = dataset.testdata(T)

    X_train = reshape_data(X_train)
    X_test = reshape_data(X_test)
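
The function body continues beyond the lines shown above, and `reshape_data` is defined elsewhere in the lecture. The following self-contained sketch shows how the pieces plausibly fit together; the `reshape_data` definition and the one-hot branch are assumptions based on the stated format requirements, the `onehot` and `classes` keyword arguments, and the `onehotbatch` import, not the author's actual code.

```julia
using Flux: onehotbatch

# Assumed helper: add the singleton channel dimension required by Conv layers,
# turning an n_x × n_y × n_s array into n_x × n_y × 1 × n_s.
reshape_data(X::AbstractArray{<:Real,3}) = reshape(X, size(X, 1), size(X, 2), 1, size(X, 3))

function load_data(dataset; T=Float32, onehot=false, classes=0:9)
    X_train, y_train = dataset.traindata(T)
    X_test, y_test = dataset.testdata(T)

    X_train = reshape_data(X_train)
    X_test = reshape_data(X_test)

    # Assumed continuation: optionally one-hot encode the labels.
    if onehot
        y_train = onehotbatch(y_train, classes)
        y_test = onehotbatch(y_test, classes)
    end
    return X_train, y_train, X_test, y_test
end
```
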
@@ -257,7 +257,7 @@ We see that it correctly returned a tuple of four items.

## Training and storing the network

We recall that machine learning minimizes the discrepancy between the predictions ``\operatorname{predict}(w; x_i)`` and the labels ``y_i``. Mathematically, this amounts to minimizing the following objective function.
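
The formula itself is not shown in this excerpt; a standard empirical-risk form such an objective typically takes (an assumption, not necessarily the lecture's exact formula) is

```math
L(w) = \frac{1}{n} \sum_{i=1}^{n} \operatorname{loss}\big(y_i, \operatorname{predict}(w; x_i)\big),
```

where ``\operatorname{loss}`` measures the discrepancy for a single sample and the average runs over all ``n`` training samples.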