
Commit 54cf3b2

Remove all reference to LearningTask. (#131)
1 parent 13bb641 commit 54cf3b2

31 files changed: +52 / -109 lines
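The commit is a mechanical rename across 31 files. As a sketch only (not the tooling actually used), a rename like `ImageClassificationTask` → `ImageClassification` can be scripted in Julia; the `rename_in_files` helper and the throwaway file below are hypothetical:

```julia
# Hypothetical helper: rewrite `old` to `new` in each file that mentions it.
function rename_in_files(files, old::AbstractString, new::AbstractString)
    for path in files
        src = read(path, String)
        occursin(old, src) && write(path, replace(src, old => new))
    end
end

# Demonstrate on a throwaway file mimicking one of the touched docs.
dir = mktempdir()
path = joinpath(dir, "example.md")
write(path, "data = loadtaskdata(dir, ImageClassificationTask)\n")
rename_in_files([path], "ImageClassificationTask", "ImageClassification")
result = read(path, String)  # "data = loadtaskdata(dir, ImageClassification)\n"
```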

.gitignore

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,4 +1,4 @@
-**/.ipynb_checkpoints
+**/.ipynb_checkpoints/**
 test/Manifest.toml
 docs/Manifest.toml
 development/**
```

Project.toml

Lines changed: 1 addition & 1 deletion
```diff
@@ -36,7 +36,7 @@ Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
 Animations = "0.4"
 BSON = "0.3"
 Colors = "0.12"
-DLPipelines = "0.2.1"
+DLPipelines = "0.3"
 DataAugmentation = "0.2.2"
 DataDeps = "0.7"
 DataLoaders = "0.1"
```

README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -9,7 +9,7 @@ As an example, training an image classification model from scratch is as simple
 ```julia
 using FastAI
 path = datasetpath("imagenette2-160")
-data = loadtaskdata(path, ImageClassificationTask)
+data = loadtaskdata(path, ImageClassification)
 method = ImageClassification(Datasets.getclassesclassification("imagenette2-160"), (160, 160))
 learner = methodlearner(method, data, Models.xresnet18(), ToGPU(), Metrics(accuracy))
 fitonecycle!(learner, 5)
````

docs/api.md

Lines changed: 0 additions & 1 deletion
```diff
@@ -30,7 +30,6 @@ Quickly get started training and finetuning models using already implemented lea
 
 {.tight}
 - [`LearningMethod`](#)
-- [`LearningTask`](#)
 - [`encode`](#)
 - [`encodeinput`](#)
 - `decodey`
```

docs/background/datapipelines.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -26,7 +26,7 @@ using DataLoaders: batchviewcollated
 using FastAI
 using FastAI.Datasets
 
-data = loadtaskdata(datasetpath("imagenette2-320"), ImageClassificationTask)
+data = loadtaskdata(datasetpath("imagenette2-320"), ImageClassification)
 method = ImageClassification(Datasets.getclassesclassification("imagenette2-320"), (224, 224))
 
 # maps data processing over `data`
@@ -68,7 +68,7 @@ using FastAI
 using FastAI.Datasets
 using FluxTraining: fitbatchphase!
 
-data = loadtaskdata(datasetpath("imagenette2-320"), ImageClassificationTask)
+data = loadtaskdata(datasetpath("imagenette2-320"), ImageClassification)
 method = ImageClassification(Datasets.getclassesclassification("imagenette2-320"), (224, 224))
 
 learner = methodlearner(method, data, xresnet18())
@@ -100,7 +100,7 @@ using FastAI.Datasets
 
 # Since loading times can vary per observation, we'll average the measurements over multiple observations
 N = 10
-data = datasubset(shuffleobs(loadtaskdata(datasetpath("imagenette2"), ImageClassificationTask), 1:N))
+data = datasubset(shuffleobs(loadtaskdata(datasetpath("imagenette2"), ImageClassification), 1:N))
 method = ImageClassification(Datasets.getclassesclassification("imagenette2-320"), (224, 224))
 
 # Time it takes to load an `(image, class)` observation
@@ -132,13 +132,13 @@ If the data loading is still slowing down training, you'll probably have to spee
 For many computer vision tasks, you will resize and crop images to a specific size during training for GPU performance reasons. If the images themselves are large, loading them from disk itself can take some time. If your dataset consists of 1920x1080 resolution images but you're resizing them to 256x256 during training, you're wasting a lot of time loading the large images. *Presizing* means saving resized versions of each image to disk once, and then loading these smaller versions during training. We can see the performance difference using ImageNette since it comes in 3 sizes: original, 320px and 160px.
 
 ```julia
-data_orig = loadtaskdata(datasetpath("imagenette2"), ImageClassificationTask)
+data_orig = loadtaskdata(datasetpath("imagenette2"), ImageClassification)
 @time eachobsparallel(data_orig, buffered = false)
 
-data_320px = loadtaskdata(datasetpath("imagenette2-320"), ImageClassificationTask)
+data_320px = loadtaskdata(datasetpath("imagenette2-320"), ImageClassification)
 @time eachobsparallel(data_320px, buffered = false)
 
-data_160px = loadtaskdata(datasetpath("imagenette2-160"), ImageClassificationTask)
+data_160px = loadtaskdata(datasetpath("imagenette2-160"), ImageClassification)
 @time eachobsparallel(data_160px, buffered = false)
 ```
````
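The presizing idea in that hunk can be sketched with plain arrays standing in for real image I/O; `presize` and the file layout below are illustrative stand-ins, not FastAI API:

```julia
using Serialization

# Stand-in for a real resize: crop to the target size.
presize(img::Matrix{Float32}, sz::Tuple{Int,Int}) = img[1:sz[1], 1:sz[2]]

dir = mktempdir()
original = rand(Float32, 8, 8)          # pretend this is a large on-disk image
small = presize(original, (4, 4))       # resize once, up front
serialize(joinpath(dir, "img1.jls"), small)

# During training, only the small version is ever loaded.
loaded = deserialize(joinpath(dir, "img1.jls"))
size(loaded)  # (4, 4)
```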

docs/data_containers.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -18,7 +18,7 @@ using FastAI.Datasets: datasetpath, loadtaskdata
 
 NAME = "imagenette2-160"
 dir = datasetpath(NAME)
-data = loadtaskdata(dir, ImageClassificationTask)
+data = loadtaskdata(dir, ImageClassification)
 ```
 
 A data container is any type that holds observations of data and allows us to load them with `getobs` and query the number of observations with `nobs`:
````
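The `getobs`/`nobs` contract mentioned in that last context line can be illustrated with a toy in-memory container; `DummyContainer` is hypothetical, and the real functions of these names come from FastAI's data-loading dependencies:

```julia
# Toy data container: any type works as long as it supports nobs/getobs.
struct DummyContainer
    observations::Vector{Tuple{String,Int}}  # (input, target) pairs
end

nobs(data::DummyContainer) = length(data.observations)
getobs(data::DummyContainer, idx::Int) = data.observations[idx]

data = DummyContainer([("img1.jpg", 1), ("img2.jpg", 2), ("img3.jpg", 1)])
nobs(data)       # 3
getobs(data, 2)  # ("img2.jpg", 2)
```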

docs/glossary.md

Lines changed: 0 additions & 5 deletions
```diff
@@ -15,7 +15,6 @@ In many docstrings, generic types are abbreviated with the following symbols. Ma
 
 Some examples of these in use:
 
-- `LearningTask` represents the task of learning to predict `T` from `I`.
 - `LearningMethod` is a concrete approach to learning to predict `T` from `I` by using the encoded representations `X` and `Y`.
 - `encodeinput : (method, context, I) -> X` encodes an input so that a prediction can be made by a model.
 - A task dataset is a `DC{(I, T)}`, i.e. a data container where each observation is a 2-tuple of an input and a target.
@@ -32,10 +31,6 @@ An instance of `DLPipelines.LearningMethod`. A concrete approach to solving a le
 
 See the DLPipelines.jl documentation for more information.
 
-### Learning task
-
-An abstract subtype of `DLPipelines.LearningTask` that represents the problem of learning a mapping from some input type `I` to a target type `T`. For example, `ImageClassificationTask` represents the task of learning to map an image to a class. See [learning method](#learning-method)
-
 ### Task data container / dataset
 
 `DC{(I, T)}`. A data container containing pairs of inputs and targets. Used in [`methoddataset`](#), [`methoddataloaders`](#) and `evaluate`.
```
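The `encodeinput : (method, context, I) -> X` signature kept in the glossary can be sketched with hypothetical toy types; this is not FastAI's implementation, which dispatches on `DLPipelines.LearningMethod` and its context types:

```julia
# Hypothetical method type holding configuration, as the glossary describes.
struct ToyClassification
    classes::Vector{String}
    size::Tuple{Int,Int}
end

struct Training end  # stand-in for a training-context type

# "Encode" an input (here just a matrix) into the model's expected size.
function encodeinput(method::ToyClassification, context::Training, image::Matrix{Float32})
    h, w = method.size
    image[1:h, 1:w]  # toy crop instead of a real resize
end

method = ToyClassification(["cat", "dog"], (2, 2))
x = encodeinput(method, Training(), rand(Float32, 4, 4))
size(x)  # (2, 2)
```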

docs/howto/augmentvision.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ using FastAI
 using CairoMakie; CairoMakie.activate!(type="png")
 
 dir = joinpath(datasetpath("dogscats"), "train")
-data = loadtaskdata(dir, ImageClassificationTask)
+data = loadtaskdata(dir, ImageClassification)
 classes = Datasets.getclassesclassification(dir)
 method = ImageClassification(classes, (100, 128))
 xs, ys = FastAI.makebatch(method, data, fill(4, 9))
```

docs/howto/logtensorboard.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -33,7 +33,7 @@ callbacks = [
     Metrics(accuracy)
 ]
 
-data = Datasets.loadtaskdata(Datasets.datasetpath("imagenette2-160"), ImageClassificationTask)
+data = Datasets.loadtaskdata(Datasets.datasetpath("imagenette2-160"), ImageClassification)
 method = ImageClassification(Datasets.getclassesclassification("imagenette2-160"), (160, 160))
 learner = methodlearner(method, data, Models.xresnet18(), callbacks...)
 fitonecycle!(learner, 5)
```

docs/interfaces.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@ FastAI.jl provides many interfaces that allow extending its functionality.
 
 Learning methods form the core of FastAI.jl's high-level API. See [this tutorial](learning_methods.md) for a motivation and introduction.
 
-Functions for the learning method interfaces always dispatch on a [`LearningMethod`](#)`{T}` where `T` is an abstract subtype of [`LearningTask`](#). `LearningTask` only constrains what input and target data look like, while `LearningMethod` defines everything that needs to happen to turn an input into a target and much more. `LearningMethod` should be a `struct` containing configuration.
+Functions for the learning method interfaces always dispatch on a [`LearningMethod`](#). A `LearningMethod` defines everything that needs to happen to turn an input into a target and much more. `LearningMethod` should be a `struct` containing configuration.
 
 ### Core interface
```
