@@ -211,13 +211,13 @@ Let's print the final MSE on the cross-validation data.
``` {code-cell} ipython3
print("Testing loss on the validation set.")
- regression_model.evaluate(x_validate, y_validate)
+ regression_model.evaluate(x_validate, y_validate, verbose=2)
```
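The `verbose` argument being added in this patch controls how much Keras prints while it runs: `verbose=0` is silent, `verbose=1` shows an interactive progress bar, and `verbose=2` emits a single summary line, which renders more cleanly in the built lecture. A minimal sketch of the three modes, reusing `regression_model`, `x_validate` and `y_validate` from earlier in the lecture:

```python
# Sketch only: the same evaluation under the three Keras verbosity levels
# (assumes regression_model, x_validate and y_validate are already defined).
loss = regression_model.evaluate(x_validate, y_validate, verbose=0)  # silent
loss = regression_model.evaluate(x_validate, y_validate, verbose=1)  # progress bar
loss = regression_model.evaluate(x_validate, y_validate, verbose=2)  # one summary line
print(loss)
```

The same `verbose=2` setting is applied to the `predict` calls below.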
Here are our output predictions on the cross-validation data.
``` {code-cell} ipython3
- y_predict = regression_model.predict(x_validate)
+ y_predict = regression_model.predict(x_validate, verbose=2)
```
We use the following function to plot our predictions along with the data.
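The `plot_results` helper itself is defined outside this hunk. As a rough sketch of what such a function might look like (the signature matches the call `plot_results(x_validate, y_validate, y_predict, ax)` used below, but the body here is an assumption rather than the lecture's actual code):

```python
import numpy as np

def plot_results(x, y, y_predict, ax):
    """Hypothetical sketch: scatter the data and overlay the model's predictions."""
    x = np.asarray(x).flatten()
    order = np.argsort(x)  # sort by x so the fitted curve draws left to right
    ax.scatter(x, np.asarray(y).flatten(), s=10, alpha=0.5, label="data")
    ax.plot(x[order], np.asarray(y_predict).flatten()[order],
            color="red", label="predictions")
    ax.legend()
```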
@@ -265,7 +265,7 @@ Here's the final MSE for the deep learning model.
``` {code-cell} ipython3
print("Testing loss on the validation set.")
- nn_model.evaluate(x_validate, y_validate)
+ nn_model.evaluate(x_validate, y_validate, verbose=2)
```
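For reference, the "loss" both `evaluate` calls report is the mean squared error over the $n$ validation observations, assuming the models were compiled with an MSE loss as the surrounding text indicates:

$$
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2
$$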
You will notice that this loss is much lower than the one we achieved with
@@ -274,7 +274,7 @@ linear regression, suggesting a better fit.
To confirm this, let's look at the fitted function.
``` {code-cell} ipython3
- y_predict = nn_model.predict(x_validate)
+ y_predict = nn_model.predict(x_validate, verbose=2)
```
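As a sanity check, the validation MSE can also be recomputed directly from these predictions; under the same MSE-loss assumption it should agree with the value `evaluate` reported above:

```python
import numpy as np

# Sketch: recompute the validation MSE by hand from the predictions
# (assumes y_validate and y_predict are the arrays used in this section).
mse = np.mean((np.asarray(y_validate).flatten() - y_predict.flatten()) ** 2)
print(f"Hand-computed validation MSE: {mse:.6f}")
```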
``` {code-cell} ipython3
@@ -290,4 +290,3 @@ fig, ax = plt.subplots()
plot_results(x_validate, y_validate, y_predict, ax)
plt.show()
```
-