Commit 71a21e8

More general regex
1 parent f8e7548 commit 71a21e8
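
The doctest filter regexes changed below previously matched only plain decimal numbers; the broadened pattern additionally consumes Julia's `Float32` literal suffix (for example the `f0` in `116.38745f0`, which appears in the overview doctests). A minimal sketch of the difference using only Base Julia's `match`; the names `old`, `new`, and `s` are purely illustrative:

```julia
old = r"[+-]?([0-9]*[.])?[0-9]+"                # previous doctest filter
new = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"  # broadened filter from this commit

s = "116.38745f0"    # a Float32 value as printed in docs/src/models/overview.md

match(old, s).match  # "116.38745"   -- stops before the `f0` suffix
match(new, s).match  # "116.38745f0" -- covers the whole Float32 literal
```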

3 files changed: +14 -14 lines changed

docs/src/models/overview.md

Lines changed: 9 additions & 9 deletions
@@ -42,7 +42,7 @@ Normally, your training and test data come from real world observations, but thi
 
 Now, build a model to make predictions with `1` input and `1` output:
 
-```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> model = Dense(1 => 1)
 Dense(1 => 1) # 2 parameters
 
@@ -66,15 +66,15 @@ Dense(1 => 1) # 2 parameters
 
 This model will already make predictions, though not accurate ones yet:
 
-```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> predict(x_train)
 1×6 Matrix{Float32}:
  0.0 0.906654 1.81331 2.71996 3.62662 4.53327
 ```
 
 In order to make better predictions, you'll need to provide a *loss function* to tell Flux how to objectively *evaluate* the quality of a prediction. Loss functions compute the cumulative distance between actual values and predictions.
 
-```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> loss(x, y) = Flux.Losses.mse(predict(x), y);
 
 julia> loss(x_train, y_train)
@@ -100,7 +100,7 @@ julia> data = [(x_train, y_train)]
 
 Now, we have the optimiser and data we'll pass to `train!`. All that remains are the parameters of the model. Remember, each model is a Julia struct with a function and configurable parameters. Remember, the dense layer has weights and biases that depend on the dimensions of the inputs and outputs:
 
-```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> predict.weight
 1×1 Matrix{Float32}:
  0.9066542
@@ -112,7 +112,7 @@ julia> predict.bias
 
 The dimensions of these model parameters depend on the number of inputs and outputs. Since models can have hundreds of inputs and several layers, it helps to have a function to collect the parameters into the data structure Flux expects:
 
-```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> parameters = Flux.params(predict)
 Params([Float32[0.9066542], Float32[0.0]])
 ```
@@ -135,14 +135,14 @@ julia> train!(loss, parameters, data, opt)
 
 And check the loss:
 
-```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> loss(x_train, y_train)
 116.38745f0
 ```
 
 It went down. Why?
 
-```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> parameters
 Params([Float32[7.5777884], Float32[1.9466728]])
 ```
@@ -153,7 +153,7 @@ The parameters have changed. This single step is the essence of machine learning
 
 In the previous section, we made a single call to `train!` which iterates over the data we passed in just once. An *epoch* refers to one pass over the dataset. Typically, we will run the training for multiple epochs to drive the loss down even further. Let's run it a few more times:
 
-```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> for epoch in 1:200
          train!(loss, parameters, data, opt)
        end
@@ -171,7 +171,7 @@ After 200 training steps, the loss went down, and the parameters are getting clo
 
 Now, let's verify the predictions:
 
-```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest overview; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> predict(x_test)
 1×5 Matrix{Float32}:
  26.1121 30.13 34.1479 38.1657 42.1836

docs/src/models/recurrence.md

Lines changed: 2 additions & 2 deletions
@@ -94,7 +94,7 @@ In this example, each output has only one component.
 
 Using the previously defined `m` recurrent model, we can now apply it to a single step from our sequence:
 
-```jldoctest recurrence; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest recurrence; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> x = rand(Float32, 2);
 
 julia> m(x)
@@ -111,7 +111,7 @@ iterating the model on a sequence of data.
 
 To do so, we'll need to structure the input data as a `Vector` of observations at each time step. This `Vector` will therefore be of `length = seq_length` and each of its elements will represent the input features for a given step. In our example, this translates into a `Vector` of length 3, where each element is a `Matrix` of size `(features, batch_size)`, or just a `Vector` of length `features` if dealing with a single observation.
 
-```jldoctest recurrence; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest recurrence; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> x = [rand(Float32, 2) for i = 1:3];
 
 julia> [m(xi) for xi in x]

docs/src/models/regularisation.md

Lines changed: 3 additions & 3 deletions
@@ -28,7 +28,7 @@ julia> loss(x, y) = logitcrossentropy(m(x), y) + penalty();
 When working with layers, Flux provides the `params` function to grab all
 parameters at once. We can easily penalise everything with `sum`:
 
-```jldoctest regularisation; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest regularisation; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> Flux.params(m)
 Params([Float32[0.34704182 -0.48532376 … -0.06914271 -0.38398427; 0.5201164 -0.033709668 … -0.36169025 -0.5552353; … ; 0.46534058 0.17114447 … -0.4809643 0.04993277; -0.47049698 -0.6206029 … -0.3092334 -0.47857067], Float32[0.0, 0.0, 0.0, 0.0, 0.0]])
 
@@ -40,7 +40,7 @@ julia> sum(sqnorm, Flux.params(m))
 
 Here's a larger example with a multi-layer perceptron.
 
-```jldoctest regularisation; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest regularisation; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> m = Chain(Dense(28^2 => 128, relu), Dense(128 => 32, relu), Dense(32 => 10))
 Chain(
   Dense(784 => 128, relu), # 100_480 parameters
@@ -58,7 +58,7 @@ julia> loss(rand(28^2), rand(10))
 
 One can also easily add per-layer regularisation via the `activations` function:
 
-```jldoctest regularisation; filter = r"[+-]?([0-9]*[.])?[0-9]+"
+```jldoctest regularisation; filter = r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?"
 julia> using Flux: activations
 
 julia> c = Chain(Dense(10 => 5, σ), Dense(5 => 2), softmax)
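
For context, a `filter` on a jldoctest block tells Documenter.jl to strip whatever the regex matches from the expected and actual output before comparing them, so runs that print slightly different floating-point values still pass. A rough sketch of that effect with the old and new patterns; the `strip_matches` helper is illustrative, not Documenter's actual implementation:

```julia
# Illustrative only: delete every match of a filter regex from an output line,
# roughly what a doctest filter does before comparing output.
strip_matches(line, re) = replace(line, re => "")

strip_matches("116.38745f0", r"[+-]?([0-9]*[.])?[0-9]+")                # "f" -- suffix residue remains
strip_matches("116.38745f0", r"[+-]?([0-9]*[.])?[0-9]+(f[+-]*[0-9])?")  # ""  -- the literal is removed entirely
```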
