Adjust docs & Flux.@functor for Functors.jl v0.5, plus misc. depwarns #2509

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Merged · 5 commits · Dec 3, 2024
51 changes: 44 additions & 7 deletions src/deprecations.jl
@@ -83,16 +83,48 @@
end
end

"""
params(model)

Returns a `Zygote.Params` object containing all parameter arrays from the model.
This is deprecated!

This function was the cornerstone of how Flux used Zygote's implicit mode gradients,
but since Flux 0.13 we use explicit mode `gradient(m -> loss(m, x, y), model)` instead.

To collect all the parameter arrays for other purposes, use `Flux.trainables(model)`.
"""
function params(m...)
Base.depwarn("""
Flux.params(m...) is deprecated. Use `Flux.trainable(model)` for parameter collection
and the explicit `gradient(m -> loss(m, x, y), model)` for gradient computation.
""", :params)
@warn """`Flux.params(m...)` is deprecated. Use `Flux.trainable(model)` for parameter collection,

Check warning on line 98 in src/deprecations.jl

View check run for this annotation

Codecov / codecov/patch

src/deprecations.jl#L98

Added line #L98 was not covered by tests
and the explicit `gradient(m -> loss(m, x, y), model)` for gradient computation.""" maxlog=1
Member Author commented:
Base.depwarn is silent except in tests. IMO that's ideal if you are marking something deprecated before a breaking change, when the replacement is available.

However, my impression is that we really want people to change this, not to silently live with old code during 0.15. So I'd like something to be printed in interactive use.

Maybe the same goes for more of the depwarns in this file?

ps = Params()
params!(ps, m)
return ps
end
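
For context, the migration these messages describe looks roughly like this (a minimal sketch; the model, data, and loss names below are illustrative, not part of this file):

using Flux

model = Chain(Dense(2 => 3, relu), Dense(3 => 1))
x, y = rand(Float32, 2, 8), rand(Float32, 1, 8)
loss(m, x, y) = Flux.mse(m(x), y)

# Old, implicit style (deprecated):
#   ps = Flux.params(model)
#   gs = gradient(() -> loss(model, x, y), ps)

# New, explicit style: the gradient has the same nested structure as the model.
grads = Flux.gradient(m -> loss(m, x, y), model)

# To collect all parameter arrays for other purposes:
arrays = Flux.trainables(model)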


"""
@functor MyLayer

Flux used to require the use of `Functors.@functor` to mark any new layer-like struct.
This allowed it to explore inside the struct, and update any trainable parameters within.
Flux@0.15 removes this requirement. This is because Functors@0.5 changed its behaviour
to be opt-out instead of opt-in. Arbitrary structs will now be explored without special marking.
Hence calling `@functor` is no longer required.

Calling `Flux.@layer MyLayer` is, however, still recommended. This adds various convenience methods
for your layer type, such as pretty printing, and use with Adapt.jl.
"""
macro functor(ex)
@warn """The use of `Flux.@functor` is deprecated.
Most likely, you should write `Flux.@layer MyLayer` which will add various convenience methods for your type,
such as pretty-printing, and use with Adapt.jl.
However, this is not required. Flux.jl v0.15 uses Functors.jl v0.5, which makes exploration of most nested `struct`s
opt-out instead of opt-in... so Flux will automatically see inside any custom struct definitions.
""" maxlog=1
_layer_macro(ex)
end
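
A quick sketch of the recommended replacement, assuming a hypothetical custom layer (`MyLayer` below is illustrative only):

using Flux

struct MyLayer
    weight
    bias
end

MyLayer(in::Int, out::Int) = MyLayer(randn(Float32, out, in), zeros(Float32, out))
(m::MyLayer)(x) = m.weight * x .+ m.bias

# Previously required:
#   Flux.@functor MyLayer
# With Functors.jl v0.5 this is no longer needed, but @layer still adds pretty printing etc.:
Flux.@layer MyLayer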

# Allows caching of the parameters when params is called within gradient() to fix #2040.
# @non_differentiable params(m...) # https://github.com/FluxML/Flux.jl/pull/2054
# That speeds up implicit use, and silently breaks explicit use.
@@ -101,6 +133,14 @@

include("optimise/Optimise.jl") ## deprecated Module

function Optimiser(rules...)
@warn "`Flux.Optimiser(...)` has been removed, please call `OptimiserChain(...)`, exported by Flux from Optimisers.jl" maxlog=1
OptimiserChain(rules...)
end
function ClipValue(val)
@warn "`Flux.ClipValue(...)` has been removed, please call `ClipGrad(...)`, exported by Flux from Optimisers.jl" maxlog=1
ClipGrad(val)
end
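
For reference, the before/after these messages point to is roughly the following (a sketch; the rule values and model are arbitrary):

using Flux

# Old (removed):
#   opt = Flux.Optimiser(Flux.ClipValue(1f-3), Adam(1f-3))

# New, using the Optimisers.jl names that Flux re-exports:
opt = OptimiserChain(ClipGrad(1f-3), Adam(1f-3))

model = Dense(4 => 2)
opt_state = Flux.setup(opt, model)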

# TODO this friendly error should go in Optimisers.jl.
# remove after https://github.com/FluxML/Optimisers.jl/pull/181
@@ -119,9 +159,6 @@
### v0.16 deprecations ####################


# Enable these when 0.16 is released, and delete const ClipGrad = Optimise.ClipValue etc:
# Base.@deprecate_binding Optimiser OptimiserChain
# Base.@deprecate_binding ClipValue ClipGrad
Comment on lines -122 to -124 (Member Author):
These const definitions have already been deleted:

julia> Flux.ClipValue
ERROR: UndefVarError: `ClipValue` not defined
Stacktrace:
 [1] getproperty(x::Module, f::Symbol)
   @ Base ./Base.jl:31
 [2] top-level scope
   @ REPL[6]:1

julia> Flux.Optimiser
ERROR: UndefVarError: `Optimiser` not defined
Stacktrace:
 [1] getproperty(x::Module, f::Symbol)
   @ Base ./Base.jl:31
 [2] top-level scope
   @ REPL[7]:1


# train!(loss::Function, ps::Zygote.Params, data, opt) = throw(ArgumentError(
# """On Flux 0.16, `train!` no longer accepts implicit `Zygote.Params`.
8 changes: 8 additions & 0 deletions src/optimise/train.jl
@@ -5,6 +5,10 @@
end

function update!(opt::AbstractOptimiser, xs::Params, gs)
@warn """The method `Flux.update!(optimiser, ps::Params, grads)` is deprecated,

Check warning on line 8 in src/optimise/train.jl

View check run for this annotation

Codecov / codecov/patch

src/optimise/train.jl#L8

Added line #L8 was not covered by tests
as part of Flux's move away from Zyote's implicit mode.
Please use explicit-style `update!(opt_state, model, grad)` instead,
where `grad = Flux.gradient(m -> loss(m,x,y), model)` and `opt_state = Flux.setup(rule, model)`.""" maxlog=1
for x in xs
isnothing(gs[x]) && continue
update!(opt, x, gs[x])
@@ -21,6 +25,10 @@
batchmemaybe(x::Tuple) = x

function train!(loss, ps::Params, data, opt::AbstractOptimiser; cb = () -> ())
@warn """The method `Flux.train!(loss2, ps::Params, data, optimiser)` is deprecated,

Check warning on line 28 in src/optimise/train.jl

View check run for this annotation

Codecov / codecov/patch

src/optimise/train.jl#L28

Added line #L28 was not covered by tests
as part of Flux's move away from Zyote's implicit parameters.
Please use explicit-style `train!(loss, model, data, opt_state)` instead,
where `loss(m, xy...)` accepts the model, and `opt_state = Flux.setup(rule, model)`.""" maxlog=1
cb = runall(cb)
itrsz = Base.IteratorSize(typeof(data))
n = (itrsz == Base.HasLength()) || (itrsz == Base.HasShape{1}()) ? length(data) : 0
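
Put together, the explicit style that these warnings recommend looks roughly like this (a sketch; the model, data, and loss below are made up for illustration):

using Flux

model = Dense(4 => 2)
data = [(rand(Float32, 4, 16), rand(Float32, 2, 16)) for _ in 1:10]
loss(m, x, y) = Flux.mse(m(x), y)

opt_state = Flux.setup(Adam(1f-3), model)

# Manual loop with explicit gradient and update!:
for (x, y) in data
    grad = Flux.gradient(m -> loss(m, x, y), model)
    Flux.update!(opt_state, model, grad[1])
end

# Or equivalently, the explicit-method train!:
Flux.train!(loss, model, data, opt_state)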