Commit 179dfe2

Merge branch 'FluxML:master' into master

2 parents 34f787d + 7b56813
34 files changed: +336, -867 lines

.github/workflows/ci.yml
Lines changed: 7 additions & 2 deletions

````diff
@@ -22,10 +22,15 @@ jobs:
           - 'nightly'
         os:
           - ubuntu-latest
-          - macOS-latest
-          - windows-latest
         arch:
           - x64
+        include:
+          - os: windows-latest
+            version: '1'
+            arch: x64
+          - os: macOS-latest
+            version: '1'
+            arch: x64
     steps:
       - uses: actions/checkout@v2
       - uses: julia-actions/setup-julia@v1
````

NEWS.md
Lines changed: 7 additions & 0 deletions

````diff
@@ -1,5 +1,12 @@
 # Flux Release Notes
 
+## v0.13
+* After a deprecation cycle, the datasets in `Flux.Data` have
+  been removed in favour of MLDatasets.jl.
+* `params` is no longer exported, since it is a common name that is also exported by Distributions.jl.
+* `flatten` is no longer exported, due to a clash with `Iterators.flatten`.
+* Juno.jl progress-bar support has been removed, as it is now obsolete.
+
 ## v0.12.10
 * `Dropout`/`AlphaDropout` now supports [user-specified RNGs](https://github.com/FluxML/Flux.jl/pull/1838)
````
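In user code these removals amount to qualifying the affected names. A minimal migration sketch, assuming an existing model `m` and array `x` (both names illustrative):

```julia
using Flux

# `params` is no longer exported; call it through the module:
ps = Flux.params(m)      # was: ps = params(m)

# `flatten` is no longer exported either, but remains available qualified:
h = Flux.flatten(x)      # was: h = flatten(x)

# Datasets have left Flux.Data; MLDatasets.jl is the replacement:
# using MLDatasets       # e.g. MLDatasets.MNIST instead of Flux.Data.MNIST
```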

Project.toml
Lines changed: 1 addition & 13 deletions

````diff
@@ -1,48 +1,36 @@
 name = "Flux"
 uuid = "587475ba-b771-5e3f-ad9e-33799f191a9c"
-version = "0.12.9"
+version = "0.13.0-DEV"
 
 [deps]
-AbstractTrees = "1520ce14-60c1-5f80-bbc7-55ef81b5835c"
 Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
 ArrayInterface = "4fba245c-0d91-5ea0-9b3e-6abc04ee57a9"
 CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
-CodecZlib = "944b1d66-785c-5afd-91f1-9de20f533193"
-Colors = "5ae59095-9a9b-59fe-a467-6f913c188581"
-DelimitedFiles = "8bb1440f-4735-579b-a4ab-409b98df4dab"
 Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
 MacroTools = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
 NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
 NNlibCUDA = "a00861dc-f156-4864-bf3c-e6376f28a68d"
-Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
-Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
 ProgressLogging = "33c8b6b6-d38a-422a-b730-caa89a2f386c"
 Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
 Reexport = "189a3867-3050-52da-a836-e630ba90ab69"
-SHA = "ea8e919c-243c-51af-8825-aaa63cd721ce"
 SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
 Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
 StatsBase = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
 Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
-ZipFile = "a5390f91-8eb1-5f08-bee0-b1d1ffed6cea"
 Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
 
 [compat]
-AbstractTrees = "0.3"
 Adapt = "3.0"
 ArrayInterface = "3.1, 4"
 CUDA = "3"
-CodecZlib = "0.7"
-Colors = "0.12"
 Functors = "0.2.1"
 MacroTools = "0.5"
 NNlib = "0.8"
 NNlibCUDA = "0.2"
 ProgressLogging = "0.1"
 Reexport = "0.2, 1.0"
 StatsBase = "0.33"
-ZipFile = "0.9"
 Zygote = "0.6"
 julia = "1.6"
````

docs/src/models/advanced.md
Lines changed: 2 additions & 2 deletions

````diff
@@ -97,8 +97,8 @@ We can freeze a specific parameter of a specific layer which already entered a `
 by simply deleting it from `ps`:
 
 ```julia
-ps = params(m)
-delete!(ps, m[2].bias)
+ps = Flux.params(m)
+delete!(ps, m[2].bias)
 ```
 
 ## Custom multiple input or output layer
````
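For orientation, a short sketch of the freezing pattern this snippet belongs to; the two-layer model here is illustrative, not part of the docs page:

```julia
using Flux

m = Chain(Dense(3, 4, relu), Dense(4, 2))   # illustrative model
ps = Flux.params(m)                          # all trainable parameters
delete!(ps, m[2].bias)                       # the second layer's bias is now frozen

# Training with `ps`, e.g. Flux.train!(loss, ps, data, opt),
# leaves m[2].bias unchanged while updating everything else.
```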

docs/src/models/basics.md
Lines changed: 2 additions & 2 deletions

````diff
@@ -39,7 +39,7 @@ julia> x = [2, 1];
 
 julia> y = [2, 0];
 
-julia> gs = gradient(params(x, y)) do
+julia> gs = gradient(Flux.params(x, y)) do
          f(x, y)
        end
 Grads(...)
@@ -83,7 +83,7 @@ To improve the prediction we can take the gradients of the loss with respect to
 ```julia
 using Flux
 
-gs = gradient(() -> loss(x, y), params(W, b))
+gs = gradient(() -> loss(x, y), Flux.params(W, b))
 ```
 
 Now that we have gradients, we can pull them out and update `W` to train the model.
````
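For readers following along, a runnable version of this gradient computation; the definitions of `W`, `b`, `predict`, and `loss` are assumed from earlier on the docs page:

```julia
using Flux

W = rand(2, 5); b = rand(2)
predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = rand(5), rand(2)
gs = gradient(() -> loss(x, y), Flux.params(W, b))
gs[W]   # gradient of the loss with respect to W, same shape as W
```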

docs/src/models/recurrence.md
Lines changed: 1 addition & 1 deletion

````diff
@@ -160,7 +160,7 @@ data = zip(X,Y)
 Flux.reset!(m)
 [m(x) for x in seq_init]
 
-ps = params(m)
+ps = Flux.params(m)
 opt = ADAM(1e-3)
 Flux.train!(loss, ps, data, opt)
 ```
````

docs/src/saving.md
Lines changed: 1 addition & 1 deletion

````diff
@@ -62,7 +62,7 @@ julia> using Flux
 julia> model = Chain(Dense(10,5,relu),Dense(5,2),softmax)
 Chain(Dense(10, 5, NNlib.relu), Dense(5, 2), NNlib.softmax)
 
-julia> weights = params(model);
+julia> weights = Flux.params(model);
 
 julia> using BSON: @save
 
````
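The excerpt stops just before the save itself; a hedged sketch of the step that typically follows, using BSON.jl's `@save` (the file name is illustrative):

```julia
using Flux
using BSON: @save

model = Chain(Dense(10, 5, relu), Dense(5, 2), softmax)
weights = Flux.params(model)
@save "mymodel.bson" weights    # writes the Params collection to disk
```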

docs/src/training/optimisers.md
Lines changed: 1 addition & 1 deletion

````diff
@@ -14,7 +14,7 @@ loss(x, y) = sum((predict(x) .- y).^2)
 x, y = rand(5), rand(2) # Dummy data
 l = loss(x, y) # ~ 3
 
-θ = params(W, b)
+θ = Flux.params(W, b)
 grads = gradient(() -> loss(x, y), θ)
 ```
 
````
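Not part of the diff, but for orientation: a minimal sketch of how such gradients are typically applied, continuing from the `θ` and `grads` above. `Descent` and `update!` are Flux's built-in SGD rule and update function; the 0.1 learning rate is illustrative:

```julia
using Flux
using Flux.Optimise: Descent, update!

opt = Descent(0.1)              # plain gradient descent, learning rate 0.1
for p in θ
    update!(opt, p, grads[p])   # in effect: p .-= 0.1 .* grads[p]
end
```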

docs/src/training/training.md
Lines changed: 1 addition & 1 deletion

````diff
@@ -64,7 +64,7 @@ At first glance it may seem strange that the model that we want to train is not
 
 ## Model parameters
 
-The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../models/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `params(m)`.
+The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../models/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.
 
 Such an object contains a reference to the model's parameters, not a copy, such that after their training, the model behaves according to their updated values.
 
````
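For orientation, a self-contained sketch of the four arguments `Flux.train!` expects; the model, loss, and one-sample `data` below are illustrative, not from the docs page:

```julia
using Flux

m = Chain(Dense(10, 5, relu), Dense(5, 2))
loss(x, y) = Flux.Losses.mse(m(x), y)
data = [(rand(Float32, 10), rand(Float32, 2))]   # one (input, target) pair
opt = ADAM()

# The second argument is the tracked-parameter collection described above.
Flux.train!(loss, Flux.params(m), data, opt)
```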

src/Flux.jl
Lines changed: 2 additions & 2 deletions

````diff
@@ -10,13 +10,13 @@ using MacroTools: @forward
 using Zygote: Params, @adjoint, gradient, pullback, @nograd
 export gradient
 
-export Chain, Dense, Maxout, SkipConnection, Parallel, flatten,
+export Chain, Dense, Maxout, SkipConnection, Parallel,
   RNN, LSTM, GRU, GRUv3,
   SamePad, Conv, CrossCor, ConvTranspose, DepthwiseConv,
   AdaptiveMaxPool, AdaptiveMeanPool, GlobalMaxPool, GlobalMeanPool, MaxPool, MeanPool,
   Dropout, AlphaDropout, LayerNorm, BatchNorm, InstanceNorm, GroupNorm,
   Upsample, PixelShuffle,
-  params, fmap, cpu, gpu, f32, f64,
+  fmap, cpu, gpu, f32, f64,
   testmode!, trainmode!
 
 include("optimise/Optimise.jl")
````
