
Commit 1e47fb5

Added figures
1 parent eb3c980 commit 1e47fb5

File tree

9 files changed: +4,097 −15 lines


docs/src/lecture_10/Iris_acc.svg

Lines changed: 124 additions & 0 deletions

docs/src/lecture_10/Layers_0.svg

Lines changed: 802 additions & 0 deletions

docs/src/lecture_10/Layers_1.svg

Lines changed: 715 additions & 0 deletions

docs/src/lecture_10/Layers_9.svg

Lines changed: 769 additions & 0 deletions

docs/src/lecture_10/exercises.md

Lines changed: 6 additions & 6 deletions
@@ -16,7 +16,7 @@ function reshape_data(X::AbstractArray{<:Real, 3})
 end
 
 function train_or_load!(file_name, m, args...; force=false, kwargs...)
-
+
     !isdir(dirname(file_name)) && mkpath(dirname(file_name))
 
     if force || !isfile(file_name)
@@ -30,7 +30,7 @@ end
 function load_data(dataset; T=Float32, onehot=false, classes=0:9)
     X_train, y_train = dataset.traindata(T)
     X_test, y_test = dataset.testdata(T)
-
+
     X_train = reshape_data(X_train)
     X_test = reshape_data(X_test)
 
@@ -328,7 +328,7 @@ savefig("miss.svg") # hide
 
 
 
-# ![](miss.svg)
+![](miss.svg)
 
 We see that some of the nines could be recognized as a seven even by humans.
 
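For context, the uncommented miss.svg shows misclassified test digits. A minimal sketch of how such indices can be collected with `onecold` from Flux; the trained model `m`, the test arrays `X_test`, and one-hot labels `y_test` are assumptions taken from the rest of the lecture, not part of this diff.

```julia
using Flux: onecold

# `m`, `X_test` and `y_test` are assumed to exist from the surrounding lecture code.
y_pred = onecold(m(X_test), 0:9)      # predicted digit for every test sample
y_true = onecold(y_test, 0:9)         # labels recovered from the one-hot encoding
miss   = findall(y_pred .!= y_true)   # indices of the misclassified digits
```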
@@ -412,15 +412,15 @@ We plot and comment on three selected digits below.
 
 Digit 0
 
-# ![](Layers_0.svg)
+![](Layers_0.svg)
 
 Digit 1
 
-# ![](Layers_1.svg)
+![](Layers_1.svg)
 
 Digit 9
 
-# ![](Layers_9.svg)
+![](Layers_9.svg)
 
 We may observe several things:
 - The functions inside the neural network do the same operations on all samples. The second row is always a black digit on a grey background.
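The Layers_*.svg figures visualize the outputs of the individual layers. A sketch, assuming the model is a Flux `Chain` named `m` and `x` is a single sample in the layout used in this lecture, of how such intermediate activations could be collected; the plotting code itself is not part of this commit.

```julia
# Output of every prefix of the Chain for one input sample x.
# Indexing a Chain with a range returns the truncated network m[1:i].
activations = [m[1:i](x) for i in 1:length(m.layers)]
```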

docs/src/lecture_10/miss.svg

Lines changed: 450 additions & 0 deletions

docs/src/lecture_10/mnist_intro2.svg

Lines changed: 723 additions & 0 deletions

docs/src/lecture_10/nn.md

Lines changed: 9 additions & 9 deletions
@@ -7,7 +7,7 @@ Core.eval(Main, :(using Flux)) # hide
 ENV["DATADEPS_ALWAYS_ACCEPT"] = true
 X_train = MNIST.traindata()[1]
 
-imageplot(1 .- X_train, 1:15; nrows = 3, size=(800,480))
+# imageplot(1 .- X_train, 1:15; nrows = 3, size=(800,480))
 
 savefig("mnist_intro.svg")
 ```
@@ -28,7 +28,7 @@ During the last lecture, we implemented everything from scratch. This lecture wi
 - It automatically computes gradients and trains the model by updating the parameters.
 This functionality requires inputs in a specific format.
 - Images must be stored in `Float32` instead of the commonly used `Float64` to speed up operations.
-- Convolutional layers require that the input has dimension ``n_x\times n_y\times n_c\times n_s``, where ``(n_x,n_y)`` is the number of pixels in each dimension, ``n_c`` is the number of channels (1 for grayscale, and 3 for coloured images) and ``n_s`` is the number of samples. 
+- Convolutional layers require that the input has dimension ``n_x\times n_y\times n_c\times n_s``, where ``(n_x,n_y)`` is the number of pixels in each dimension, ``n_c`` is the number of channels (1 for grayscale, and 3 for coloured images) and ``n_s`` is the number of samples.
 - In general, samples are always stored in the last dimension.
 
 We use the package [MLDatasets](https://juliaml.github.io/MLDatasets.jl/stable/) to load the data.
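The changed bullet describes the ``n_x\times n_y\times n_c\times n_s`` layout expected by convolutional layers. A minimal sketch of a `reshape_data` method consistent with that description and with the signature visible in the exercises.md hunk above; the file's actual body may differ.

```julia
# Add a singleton channel dimension so a 28×28×n_s grayscale array becomes
# 28×28×1×n_s, the layout expected by Flux's convolutional layers.
function reshape_data(X::AbstractArray{<:Real, 3})
    return reshape(X, size(X, 1), size(X, 2), 1, size(X, 3))
end
```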
@@ -59,7 +59,7 @@ The first two exercises visualize the data and transform it into the correct inp
 
 Plot the first 15 images of the digit 0 from the training set.
 
-**Hint**: The `ImageInspector` package written earlier provides the function `imageplot(X_train, inds; nrows=3)`, where `inds` are the desired indices. 
+**Hint**: The `ImageInspector` package written earlier provides the function `imageplot(X_train, inds; nrows=3)`, where `inds` are the desired indices.
 
 **Hint**: To find the correct indices, use the function `findall`.
 
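A possible solution sketch for this exercise, built only from the two hints (`findall` and `imageplot(X_train, inds; nrows=3)` from the course's `ImageInspector` package); `y_train` holding the raw digit labels is an assumption.

```julia
# First 15 training images labelled 0, plotted in 3 rows.
inds = findall(y_train .== 0)[1:15]       # indices of the digit 0 in the training set
imageplot(1 .- X_train, inds; nrows = 3)  # 1 .- X inverts colours, as in the hunk above
```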
@@ -98,7 +98,7 @@ savefig("mnist_intro2.svg") # hide
 </p></details>
 ```
 
-# ![](mnist_intro2.svg)
+![](mnist_intro2.svg)
 
 
 
@@ -165,7 +165,7 @@ using Flux: onehotbatch, onecold
 function load_data(dataset; T=Float32, onehot=false, classes=0:9)
     X_train, y_train = dataset.traindata(T)
     X_test, y_test = dataset.testdata(T)
-
+
     X_train = reshape_data(X_train)
     X_test = reshape_data(X_test)
 
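The hunk shows only the beginning of `load_data`. A hedged sketch of how the rest might look, inferred from the keyword arguments in the signature and the `onehotbatch` import above the hunk; it relies on the `reshape_data` sketch earlier and returns the tuple of four items mentioned later in nn.md.

```julia
using Flux: onehotbatch

function load_data(dataset; T=Float32, onehot=false, classes=0:9)
    X_train, y_train = dataset.traindata(T)
    X_test, y_test = dataset.testdata(T)

    X_train = reshape_data(X_train)
    X_test = reshape_data(X_test)

    # Everything below is inferred: optionally one-hot encode the labels
    # over the given classes before returning all four arrays.
    if onehot
        y_train = onehotbatch(y_train, classes)
        y_test = onehotbatch(y_test, classes)
    end

    return X_train, y_train, X_test, y_test
end
```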
@@ -257,7 +257,7 @@ We see that it correctly returned a tuple of four items.
 
 ## Training and storing the network
 
-We recall that machine learning minimizes the discrepancy between the predictions ``\operatorname{predict}(w; x_i)`` and labels ``y_i``. Mathematically, this amount to minimizing the following objective function. 
+We recall that machine learning minimizes the discrepancy between the predictions ``\operatorname{predict}(w; x_i)`` and labels ``y_i``. Mathematically, this amount to minimizing the following objective function.
 
 ```math
 L(w) = \frac1n\sum_{i=1}^n \operatorname{loss}(y_i, \operatorname{predict}(w; x_i)).
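A small sketch of how this objective can be evaluated with Flux, assuming the network `m` ends with a softmax so that cross-entropy plays the role of the per-sample loss and `y` is a one-hot label matrix; these choices are assumptions about the lecture's setup.

```julia
using Flux

# Empirical objective: Flux's crossentropy already averages over samples,
# so this computes (1/n) Σᵢ loss(yᵢ, predict(w; xᵢ)) with m(X) as the prediction.
L(X, y) = Flux.crossentropy(m(X), y)
```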
@@ -436,7 +436,7 @@ The function `train_model!` first splits the datasets into minibatches `batches`
 <header class = "exercise-header">Exercise:</header><p>
 ```
 
-Train the model for one epoch and save it to `MNIST_simple.bson`. Print the accuracy on the testing set. 
+Train the model for one epoch and save it to `MNIST_simple.bson`. Print the accuracy on the testing set.
 
 ```@raw html
 </p></div>
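A sketch of the steps this exercise asks for, written directly with Flux's classic training API rather than the lecture's `train_model!`; the batch size, optimiser, and the assumption that `y_train`/`y_test` are one-hot encoded are all choices made here for illustration.

```julia
using Flux, BSON
using Base.Iterators: partition
using Statistics: mean
using Flux: onecold

# Minibatches of 128 samples; X_train is n_x × n_y × 1 × n_s.
batches = [(X_train[:, :, :, i], y_train[:, i]) for i in partition(1:size(X_train, 4), 128)]

loss(x, y) = Flux.crossentropy(m(x), y)
Flux.train!(loss, Flux.params(m), batches, ADAM())   # one pass over the data = one epoch

BSON.bson("MNIST_simple.bson", m=m)                  # store the trained model

test_acc = mean(onecold(m(X_test), 0:9) .== onecold(y_test, 0:9))
println(test_acc)                                    # accuracy on the testing set
```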
@@ -493,7 +493,7 @@ The accuracy is over 93%, which is not bad for training for one epoch only. Let
 Write a function `train_or_load!(file_name, m, args...; ???)` checking whether the file `file_name` exists.
 - If it exists, it loads it and then copies its parameters into `m` using the function `Flux.loadparams!`.
 - If it does not exist, it trains it using `train_model!`.
-In both cases, the model `m` should be modified inside the `train_or_load!` function. Pay special attention to the optional arguments `???`. 
+In both cases, the model `m` should be modified inside the `train_or_load!` function. Pay special attention to the optional arguments `???`.
 
 Use this function to load the model from `data/mnist.bson` and evaluate the performance at the testing set.
 
@@ -509,7 +509,7 @@ First, we should check whether the directory exists ```!isdir(dirname(file_name)
 
 ```@example nn
 function train_or_load!(file_name, m, args...; force=false, kwargs...)
-
+
     !isdir(dirname(file_name)) && mkpath(dirname(file_name))
 
     if force || !isfile(file_name)
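For reference, a sketch of how the body visible in this hunk might continue, following the exercise description above (train with `train_model!` or load with BSON and `Flux.loadparams!`); everything past the `!isdir` line, including the `train_model!` keyword and the `:m` key used when saving, is an assumption rather than the file's actual code.

```julia
using BSON, Flux

function train_or_load!(file_name, m, args...; force=false, kwargs...)

    !isdir(dirname(file_name)) && mkpath(dirname(file_name))

    if force || !isfile(file_name)
        # No stored model (or retraining forced): train and then store it.
        train_model!(m, args...; file_name=file_name, kwargs...)
    else
        # A stored model exists: load it and copy its parameters into `m`.
        m_stored = BSON.load(file_name)[:m]
        Flux.loadparams!(m, Flux.params(m_stored))
    end
end
```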