
Commit 14b39b1 ("more doc tweaks")
1 parent: 499310a

File tree: 1 file changed (+6, -6 lines)


src/types.jl (6 additions, 6 deletions)
@@ -214,7 +214,7 @@ Train the machine with `fit!(mach, rows=...)`.
   will retrain from scratch on `fit!` call, otherwise it will not.
 
 - `acceleration::AbstractResource=CPU1()`: Defines on what hardware training is done. For
-  Training on GPU, use `CudaLibs()`.
+  Training on GPU, use `CUDALibs()`.
 
 - `finaliser=Flux.softmax`: The final activation function of the neural network (applied
   after the network defined by `builder`). Defaults to `Flux.softmax`.
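The corrected spelling matters because `CUDALibs` (from ComputationalResources.jl, re-exported by MLJ) is the actual constructor name; `CudaLibs()` would throw an `UndefVarError`. A minimal sketch of the corrected usage, assuming MLJ and MLJFlux are installed and a CUDA-capable GPU is available:

```julia
using MLJ

# Load the model type documented in src/types.jl (downloads MLJFlux if needed).
NeuralNetworkClassifier = @load NeuralNetworkClassifier pkg=MLJFlux

# `CUDALibs()` (capitalised as fixed by this commit) requests GPU training;
# the default `CPU1()` keeps training on a single CPU thread.
clf = NeuralNetworkClassifier(acceleration=CUDALibs())
```

Without a GPU, constructing the model still works; the acceleration resource only takes effect when the machine is fitted.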
@@ -404,7 +404,7 @@ Train the machine with `fit!(mach, rows=...)`.
   will retrain from scratch on `fit!` call, otherwise it will not.
 
 - `acceleration::AbstractResource=CPU1()`: Defines on what hardware training is done. For
-  Training on GPU, use `CudaLibs()`.
+  Training on GPU, use `CUDALibs()`.
 
 - `finaliser=Flux.softmax`: The final activation function of the neural network (applied
   after the network defined by `builder`). Defaults to `Flux.softmax`.
@@ -641,7 +641,7 @@ Train the machine with `fit!(mach, rows=...)`.
   will retrain from scratch on `fit!` call, otherwise it will not.
 
 - `acceleration::AbstractResource=CPU1()`: Defines on what hardware training is done. For
-  Training on GPU, use `CudaLibs()`.
+  Training on GPU, use `CUDALibs()`.
 
 
 # Operations
@@ -655,7 +655,7 @@ Train the machine with `fit!(mach, rows=...)`.
 The fields of `fitted_params(mach)` are:
 
 - `chain`: The trained "chain" (Flux.jl model), namely the series of layers, functions,
-  and activations which make up the neural network.
+  and activations which make up the neural network.
 
 
 # Report
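The `chain` field described in this docstring is how a user recovers the raw Flux.jl model after training. A short sketch, assuming MLJ and MLJFlux are installed (the iris dataset here is just an illustrative stand-in):

```julia
using MLJ

NeuralNetworkClassifier = @load NeuralNetworkClassifier pkg=MLJFlux

# Illustrative data only; any table X with a categorical target y works.
X, y = @load_iris
mach = machine(NeuralNetworkClassifier(epochs=5), X, y)
fit!(mach)

# As documented above, `fitted_params(mach).chain` is the trained Flux chain:
# the series of layers, functions, and activations making up the network.
chain = fitted_params(mach).chain
```

Having the plain Flux chain in hand is useful for inspecting weights or applying the network outside the MLJ machinery.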
@@ -867,7 +867,7 @@ Here:
   will retrain from scratch on `fit!` call, otherwise it will not.
 
 - `acceleration::AbstractResource=CPU1()`: Defines on what hardware training is done. For
-  Training on GPU, use `CudaLibs()`.
+  Training on GPU, use `CUDALibs()`.
 
 
 # Operations
@@ -882,7 +882,7 @@ Here:
 The fields of `fitted_params(mach)` are:
 
 - `chain`: The trained "chain" (Flux.jl model), namely the series of layers,
-  functions, and activations which make up the neural network.
+  functions, and activations which make up the neural network.
 
 
 # Report
