Commit 1af0202

Merge pull request #218 from JuliaAI/dev
For a 0.8.6 release
2 parents bac0ac9 + 2294da4

File tree

3 files changed: +42 −34 lines changed


Project.toml

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 name = "MLJTuning"
 uuid = "03970b2e-30c4-11ea-3135-d1576263f10f"
 authors = ["Anthony D. Blaom <anthony.blaom@gmail.com>"]
-version = "0.8.5"
+version = "0.8.6"

 [deps]
 ComputationalResources = "ed09eef8-17a6-5b46-8889-db040fac31e3"

README.md

Lines changed: 16 additions & 16 deletions
@@ -1,10 +1,10 @@
 # MLJTuning

 Hyperparameter optimization for
-[MLJ](https://github.com/alan-turing-institute/MLJ.jl) machine
+[MLJ](https://github.com/JuliaAI/MLJ.jl) machine
 learning models.

-See [**Tuning Models · MLJ**](https://alan-turing-institute.github.io/MLJ.jl/dev/tuning_models) for usage examples.
+See [**Tuning Models · MLJ**](https://JuliaAI.github.io/MLJ.jl/dev/tuning_models) for usage examples.

 [![Build Status](https://github.com/JuliaAI/MLJTuning.jl/workflows/CI/badge.svg)](https://github.com/JuliaAI/MLJTuning.jl/actions)
 [![codecov.io](http://codecov.io/github/JuliaAI/MLJTuning.jl/coverage.svg?branch=master)](http://codecov.io/github/JuliaAI/MLJTuning.jl?branch=master)
@@ -17,17 +17,17 @@
 - [How do I implement a new selection heuristic?](#how-do-i-implement-a-new-selection-heuristic)

 *Note:* This component of the [MLJ
-stack](https://github.com/alan-turing-institute/MLJ.jl#the-mlj-universe)
+stack](https://github.com/JuliaAI/MLJ.jl#the-mlj-universe)
 applies to MLJ versions 0.8.0 and higher. Prior to 0.8.0, tuning
 algorithms resided in
-[MLJ](https://github.com/alan-turing-institute/MLJ.jl).
+[MLJ](https://github.com/JuliaAI/MLJ.jl).


 ## Who is this repo for?

 This repository is not intended to be directly imported by the general
 MLJ user. Rather, MLJTuning is a dependency of the
-[MLJ](https://github.com/alan-turing-institute/MLJ.jl) machine
+[MLJ](https://github.com/JuliaAI/MLJ.jl) machine
 learning platform, which allows MLJ users to perform a variety of
 hyperparameter optimization tasks from there.
@@ -38,9 +38,9 @@
 MLJTuning's [tuning strategy interface](#how-do-i-implement-a-new-tuning-strategy).

 MLJTuning is a component of the [MLJ
-stack](https://github.com/alan-turing-institute/MLJ.jl#the-mlj-universe)
+stack](https://github.com/JuliaAI/MLJ.jl#the-mlj-universe)
 which does not have
-[MLJModels](https://github.com/alan-turing-institute/MLJModels.jl)
+[MLJModels](https://github.com/JuliaAI/MLJModels.jl)
 as a dependency (no ability to search and load registered MLJ
 models). It does however depend on
 [MLJBase](https://github.com/JuliaAI/MLJBase.jl) and,
@@ -94,7 +94,7 @@

 - a selection of **implementations** of the tuning strategy interface,
   currently all those accessible from
-  [MLJ](https://github.com/alan-turing-institute/MLJ.jl) itself.
+  [MLJ](https://github.com/JuliaAI/MLJ.jl) itself.

 - the code defining the MLJ functions `learning_curves!` and `learning_curve` as
   these are essentially one-dimensional grid searches
@@ -103,12 +103,12 @@
 ## How do I implement a new tuning strategy?

 This document assumes familiarity with the [Evaluating Model
-Performance](https://alan-turing-institute.github.io/MLJ.jl/dev/evaluating_model_performance/)
+Performance](https://JuliaAI.github.io/MLJ.jl/dev/evaluating_model_performance/)
 and [Performance
-Measures](https://alan-turing-institute.github.io/MLJ.jl/dev/performance_measures/)
+Measures](https://JuliaAI.github.io/MLJ.jl/dev/performance_measures/)
 sections of the MLJ manual. Tuning itself, from the user's
 perspective, is described in [Tuning
-Models](https://alan-turing-institute.github.io/MLJ.jl/dev/tuning_models/).
+Models](https://JuliaAI.github.io/MLJ.jl/dev/tuning_models/).


 ### Overview
@@ -158,11 +158,11 @@
   measures that do not report per-observation values
   (`reports_per_observation(measure) = false`) such as `auc`. See
   [Evaluating Model
-  Performance](https://alan-turing-institute.github.io/MLJ.jl/dev/evaluating_model_performance/)
+  Performance](https://JuliaAI.github.io/MLJ.jl/dev/evaluating_model_performance/)
   for details. There is a trait for measures called `orientation`
   which is `:loss` for measures you ordinarily want to minimize, and
   `:score` for those you want to maximize. See [Performance
-  measures](https://alan-turing-institute.github.io/MLJ.jl/dev/performance_measures/)
+  measures](https://JuliaAI.github.io/MLJ.jl/dev/performance_measures/)
   for further details.

 - A *tuning strategy* is an instance of some subtype `S <:
@@ -233,7 +233,7 @@
 process determines the optimal model, as defined by the selection
 heuristic (see above). To use the optimal model one *predicts* using
 the wrapped model. For more detail, see the [Tuning
-Models](https://alan-turing-institute.github.io/MLJ.jl/dev/tuning_models/)
+Models](https://JuliaAI.github.io/MLJ.jl/dev/tuning_models/)
 section of the MLJ manual.

 In setting up a tuning task, the user constructs an instance of the
@@ -371,7 +371,7 @@
 Use the `iterator` and `sampler` methods to convert ranges into
 one-dimensional grids or for random sampling, respectively. See the
 [tuning
-section](https://alan-turing-institute.github.io/MLJ.jl/dev/tuning_models/#API-1)
+section](https://JuliaAI.github.io/MLJ.jl/dev/tuning_models/#API-1)
 of the MLJ manual or doc-strings for more on these methods and the
 `Grid` and `RandomSearch` implementations.

@@ -481,7 +481,7 @@
 create a history whose length exceeds the user-specified number of
 iterations `tuned_model.n`) then the surplus models are saved, for use
 in a ["warm
-restart"](https://alan-turing-institute.github.io/MLJ.jl/dev/machines/#Warm-restarts)
+restart"](https://JuliaAI.github.io/MLJ.jl/dev/machines/#Warm-restarts)
 of tuning, when the user increases `tuned_model.n`. The remaining
 models are then evaluated and these evaluations are added to the
 history. **In any warm restart, no new call to `models` will be made
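The README edits above are URL migrations only, but the wrapper pattern the README describes (wrap a model, *fit* to run the search, *predict* with the optimum) can be sketched as follows. This is a minimal illustration, not from the diff: the `DecisionTreeClassifier` model, the `max_depth` range bounds, and the iris demo data are all assumed for the example.

```julia
using MLJ

# Load a registered model (assumes the DecisionTree package is installed):
Tree = @load DecisionTreeClassifier pkg=DecisionTree
tree = Tree()

# A one-dimensional range over a hyperparameter:
r = range(tree, :max_depth, lower=1, upper=10)

# Wrap the model; fitting the wrapper runs the grid search:
tuned = TunedModel(model=tree, tuning=Grid(), resampling=CV(nfolds=3),
                   range=r, measure=log_loss)

X, y = @load_iris                # small demo dataset shipped with MLJ
mach = machine(tuned, X, y)
fit!(mach)                       # evaluates each model in the grid
ŷ = predict(mach, X)             # predicts using the optimal model
```

The point of the design, as the README states, is that the user never calls MLJTuning directly: the wrapped model behaves like any other MLJ model.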

src/tuned_models.jl

Lines changed: 25 additions & 17 deletions
@@ -33,7 +33,7 @@
 const ProbabilisticTypes = Union{Probabilistic, MLJBase.MLJModelInterface.ProbabilisticDetector}
 const DeterministicTypes = Union{Deterministic, MLJBase.MLJModelInterface.DeterministicDetector}

-mutable struct DeterministicTunedModel{T,M<:DeterministicTypes} <: MLJBase.Deterministic
+mutable struct DeterministicTunedModel{T,M<:DeterministicTypes,L} <: MLJBase.Deterministic
     model::M
     tuning::T  # tuning strategy
     resampling # resampling strategy
@@ -51,9 +51,10 @@
     check_measure::Bool
     cache::Bool
     compact_history::Bool
+    logger::L
 end

-mutable struct ProbabilisticTunedModel{T,M<:ProbabilisticTypes} <: MLJBase.Probabilistic
+mutable struct ProbabilisticTunedModel{T,M<:ProbabilisticTypes,L} <: MLJBase.Probabilistic
     model::M
     tuning::T  # tuning strategy
     resampling # resampling strategy
@@ -71,10 +72,11 @@
     check_measure::Bool
     cache::Bool
     compact_history::Bool
+    logger::L
 end

-const EitherTunedModel{T,M} =
-    Union{DeterministicTunedModel{T,M},ProbabilisticTunedModel{T,M}}
+const EitherTunedModel{T,M,L} =
+    Union{DeterministicTunedModel{T,M,L},ProbabilisticTunedModel{T,M,L}}

 MLJBase.caches_data_by_default(::Type{<:EitherTunedModel}) = false

@@ -279,6 +281,7 @@
     check_measure=true,
     cache=true,
     compact_history=true,
+    logger=nothing
 )

     # user can specify model as argument instead of kwarg:
@@ -342,6 +345,9 @@
     # get the tuning type parameter:
     T = typeof(tuning)

+    # get the logger type parameter:
+    L = typeof(logger)
+
     args = (
         model,
         tuning,
@@ -360,12 +366,13 @@
         check_measure,
         cache,
         compact_history,
+        logger
     )

     if M <: DeterministicTypes
-        tuned_model = DeterministicTunedModel{T,M}(args...)
+        tuned_model = DeterministicTunedModel{T,M,L}(args...)
     elseif M <: ProbabilisticTypes
-        tuned_model = ProbabilisticTunedModel{T,M}(args...)
+        tuned_model = ProbabilisticTunedModel{T,M,L}(args...)
     else
         throw(ERR_MODEL_TYPE)
     end
@@ -591,7 +598,7 @@
         end
     end
     # One resampling_machine per task
-    machs = [resampling_machine,
+    machs = [resampling_machine,
             [machine(Resampler(
                 model= resampling_machine.model.model,
                 resampling = resampling_machine.model.resampling,
@@ -603,9 +610,9 @@
                 repeats = resampling_machine.model.repeats,
                 acceleration = resampling_machine.model.acceleration,
                 cache = resampling_machine.model.cache,
-                compact = resampling_machine.model.compact
-                ), resampling_machine.args...; cache=false) for
-                _ in 2:length(partitions)]...]
+                compact = resampling_machine.model.compact,
+                logger = resampling_machine.model.logger),
+             resampling_machine.args...; cache=false) for _ in 2:length(partitions)]...]

    @sync for (i, parts) in enumerate(partitions)
        Threads.@spawn begin
@@ -740,8 +747,8 @@
     return fitresult, meta_state, report
 end

-function MLJBase.fit(tuned_model::EitherTunedModel{T,M},
-                     verbosity::Integer, data...) where {T,M}
+function MLJBase.fit(tuned_model::EitherTunedModel{T,M,L},
+                     verbosity::Integer, data...) where {T,M,L}
     tuning = tuned_model.tuning
     model = tuned_model.model
     _range = tuned_model.range
@@ -769,6 +776,7 @@
         acceleration = tuned_model.acceleration_resampling,
         cache = tuned_model.cache,
         compact = tuned_model.compact_history,
+        logger = tuned_model.logger
     )
     resampling_machine = machine(resampler, data...; cache=false)
     history, state = build!(nothing, n, tuning, model, model_buffer, state,
@@ -900,9 +908,9 @@
 ## METADATA

 MLJBase.is_wrapper(::Type{<:EitherTunedModel}) = true
-MLJBase.supports_weights(::Type{<:EitherTunedModel{<:Any,M}}) where M =
+MLJBase.supports_weights(::Type{<:EitherTunedModel{<:Any,M,L}}) where {M,L} =
     MLJBase.supports_weights(M)
-MLJBase.supports_class_weights(::Type{<:EitherTunedModel{<:Any,M}}) where M =
+MLJBase.supports_class_weights(::Type{<:EitherTunedModel{<:Any,M,L}}) where {M,L} =
     MLJBase.supports_class_weights(M)
 MLJBase.load_path(::Type{<:ProbabilisticTunedModel}) =
     "MLJTuning.ProbabilisticTunedModel"
@@ -914,9 +922,9 @@
 MLJBase.package_url(::Type{<:EitherTunedModel}) =
     "https://github.com/alan-turing-institute/MLJTuning.jl"
 MLJBase.package_license(::Type{<:EitherTunedModel}) = "MIT"
-MLJBase.is_pure_julia(::Type{<:EitherTunedModel{T,M}}) where {T,M} =
+MLJBase.is_pure_julia(::Type{<:EitherTunedModel{T,M,L}}) where {T,M,L} =
     MLJBase.is_pure_julia(M)
-MLJBase.input_scitype(::Type{<:EitherTunedModel{T,M}}) where {T,M} =
+MLJBase.input_scitype(::Type{<:EitherTunedModel{T,M,L}}) where {T,M,L} =
     MLJBase.input_scitype(M)
-MLJBase.target_scitype(::Type{<:EitherTunedModel{T,M}}) where {T,M} =
+MLJBase.target_scitype(::Type{<:EitherTunedModel{T,M,L}}) where {T,M,L} =
     MLJBase.target_scitype(M)
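The substantive change in this file is threading a `logger` through both tuned-model types as a new type parameter `L` and field, defaulting to `nothing` and forwarded to each internal `Resampler`. A sketch of how the new keyword surfaces at the constructor (the model `tree`, range `r`, and measure are hypothetical stand-ins; only the `logger` keyword itself comes from the diff):

```julia
# `logger` is stored on the wrapper and passed to every internal Resampler,
# so each per-model evaluation can be reported to an external tracking tool.
tuned = TunedModel(model=tree, tuning=RandomSearch(), range=r,
                   measure=rms, n=20,
                   logger=nothing)  # the default added in this commit
```

Making `L = typeof(logger)` a type parameter (rather than an untyped field) keeps field access on the wrapper type-stable for whatever logger type the user supplies.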
