
Commit 6b697c3

Merge branch 'master' into chainrules

2 parents 0599968 + 1f3915d, commit 6b697c3

30 files changed, +145 -755 lines

.github/workflows/Downstream.yml

Lines changed: 1 addition & 1 deletion

@@ -26,7 +26,7 @@ jobs:
           - {user: Chemellia, repo: AtomicGraphNets.jl, group: All}
           - {user: SciML, repo: DiffEqFlux.jl, group: Layers}
           - {user: SciML, repo: NeuralPDE.jl, group: NNPDE}
-
+          - {user: SciML, repo: OperatorLearning.jl, group: All}
     if: contains(github.event.pull_request.labels.*.name, 'run downstream test')
     steps:
       - uses: actions/checkout@v2

.github/workflows/ci.yml

Lines changed: 7 additions & 2 deletions

@@ -22,10 +22,15 @@ jobs:
           - 'nightly'
         os:
           - ubuntu-latest
-          - macOS-latest
-          - windows-latest
         arch:
           - x64
+        include:
+          - os: windows-latest
+            version: '1'
+            arch: x64
+          - os: macOS-latest
+            version: '1'
+            arch: x64
     steps:
       - uses: actions/checkout@v2
       - uses: julia-actions/setup-julia@v1

NEWS.md

Lines changed: 8 additions & 0 deletions

@@ -1,5 +1,13 @@
 # Flux Release Notes
 
+## v0.13
+* After a deprecations cycle, the datasets in `Flux.Data` have
+  been removed in favour of MLDatasets.jl.
+* `params` is not exported anymore since it is a common name and is also exported by Distributions.jl
+* `flatten` is not exported anymore due to clash with Iterators.flatten.
+* Remove Juno.jl progress bar support as it is now obsolete.
+* `Dropout` gained improved compatibility with Int and Complex arrays and is now twice-differentiable.
+
 ## v0.12.10
 * `Dropout`/`AlphaDropout` now supports [user-specified RNGs](https://github.com/FluxML/Flux.jl/pull/1838)
 
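
The export changes listed in this hunk mean downstream code now reaches these functions through the `Flux` module. A minimal sketch of what that looks like in practice (the model and array shapes below are invented for illustration):

```julia
using Flux

m = Chain(Dense(4, 2), Dense(2, 1))

ps = Flux.params(m)            # was just `params(m)` before v0.13

x = rand(Float32, 2, 2, 3, 5)  # a WHCN-style 4-d array
Flux.flatten(x)                # was just `flatten(x)`; returns a 12×5 matrix
```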

Project.toml

Lines changed: 2 additions & 14 deletions

@@ -1,50 +1,38 @@
 name = "Flux"
 uuid = "587475ba-b771-5e3f-ad9e-33799f191a9c"
-version = "0.12.9"
+version = "0.13.0-DEV"
 
 [deps]
-AbstractTrees = "1520ce14-60c1-5f80-bbc7-55ef81b5835c"
 Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
 ArrayInterface = "4fba245c-0d91-5ea0-9b3e-6abc04ee57a9"
 CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
 ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
-CodecZlib = "944b1d66-785c-5afd-91f1-9de20f533193"
-Colors = "5ae59095-9a9b-59fe-a467-6f913c188581"
-DelimitedFiles = "8bb1440f-4735-579b-a4ab-409b98df4dab"
 Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
 MacroTools = "1914dd2f-81c6-5fcd-8719-6d5c9610ff09"
 NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
 NNlibCUDA = "a00861dc-f156-4864-bf3c-e6376f28a68d"
-Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
-Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
 ProgressLogging = "33c8b6b6-d38a-422a-b730-caa89a2f386c"
 Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
 Reexport = "189a3867-3050-52da-a836-e630ba90ab69"
-SHA = "ea8e919c-243c-51af-8825-aaa63cd721ce"
 SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
 Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
 StatsBase = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
 Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
-ZipFile = "a5390f91-8eb1-5f08-bee0-b1d1ffed6cea"
 Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
 
 [compat]
-AbstractTrees = "0.3"
 Adapt = "3.0"
 ArrayInterface = "3.1, 4"
 CUDA = "3"
 ChainRulesCore = "1.12"
-CodecZlib = "0.7"
-Colors = "0.12"
 Functors = "0.2.1"
 MacroTools = "0.5"
-NNlib = "0.8"
+NNlib = "0.8.2"
 NNlibCUDA = "0.2"
 ProgressLogging = "0.1"
 Reexport = "0.2, 1.0"
 StatsBase = "0.33"
-ZipFile = "0.9"
 Zygote = "0.6.34"
 julia = "1.6"
 

docs/src/models/advanced.md

Lines changed: 2 additions & 2 deletions

@@ -97,8 +97,8 @@ We can freeze a specific parameter of a specific layer which already entered a `
 by simply deleting it from `ps`:
 
 ```julia
-ps = params(m)
-delete!(ps, m[2].bias)
+ps = Flux.params(m)
+delete!(ps, m[2].bias)
 ```
 
 ## Custom multiple input or output layer
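
For context, a self-contained sketch of the freezing pattern this hunk documents; the model, data, and loss below are placeholders, not part of the Flux docs:

```julia
using Flux

m = Chain(Dense(3, 2), Dense(2, 1))
ps = Flux.params(m)
delete!(ps, m[2].bias)                  # m[2].bias no longer receives gradients

x, y = rand(Float32, 3, 8), rand(Float32, 1, 8)
loss(x, y) = Flux.Losses.mse(m(x), y)

gs = gradient(() -> loss(x, y), ps)     # gs holds no entry for m[2].bias
```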

docs/src/models/basics.md

Lines changed: 2 additions & 2 deletions

@@ -39,7 +39,7 @@ julia> x = [2, 1];
 
 julia> y = [2, 0];
 
-julia> gs = gradient(params(x, y)) do
+julia> gs = gradient(Flux.params(x, y)) do
          f(x, y)
        end
 Grads(...)
@@ -83,7 +83,7 @@ To improve the prediction we can take the gradients of the loss with respect to
 ```julia
 using Flux
 
-gs = gradient(() -> loss(x, y), params(W, b))
+gs = gradient(() -> loss(x, y), Flux.params(W, b))
 ```
 
 Now that we have gradients, we can pull them out and update `W` to train the model.
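
To make the updated snippet runnable on its own, here is an end-to-end version following the surrounding basics guide; the data and step size are arbitrary:

```julia
using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = rand(5), rand(2)                    # dummy data

gs = gradient(() -> loss(x, y), Flux.params(W, b))

W̄ = gs[W]                                  # pull the gradient for W out of gs
W .-= 0.1 .* W̄                             # a single gradient-descent step
```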

docs/src/models/recurrence.md

Lines changed: 1 addition & 1 deletion

@@ -160,7 +160,7 @@ data = zip(X,Y)
 Flux.reset!(m)
 [m(x) for x in seq_init]
 
-ps = params(m)
+ps = Flux.params(m)
 opt= ADAM(1e-3)
 Flux.train!(loss, ps, data, opt)
 ```
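
A compact sketch of the recurrent training pattern this snippet comes from; the layer sizes, sequence data, and loss here are invented and simpler than the full recurrence guide:

```julia
using Flux

m = Chain(RNN(2, 3), Dense(3, 1))
loss(x, y) = Flux.Losses.mse(m(x), y)

X = [rand(Float32, 2, 8) for _ in 1:5]     # 5 steps of a batch of 8 sequences
Y = [rand(Float32, 1, 8) for _ in 1:5]
data = zip(X, Y)

Flux.reset!(m)                             # reset the hidden state first
ps = Flux.params(m)
opt = ADAM(1e-3)
Flux.train!(loss, ps, data, opt)
```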

docs/src/saving.md

Lines changed: 1 addition & 1 deletion

@@ -62,7 +62,7 @@ julia> using Flux
 julia> model = Chain(Dense(10,5,relu),Dense(5,2),softmax)
 Chain(Dense(10, 5, NNlib.relu), Dense(5, 2), NNlib.softmax)
 
-julia> weights = params(model);
+julia> weights = Flux.params(model);
 
 julia> using BSON: @save
 
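Continuing the snippet above with the save step from the same guide, as a script rather than a REPL session; the file name is just an example:

```julia
using Flux
using BSON: @save

model = Chain(Dense(10, 5, relu), Dense(5, 2), softmax)
weights = Flux.params(model)

@save "mymodel.bson" weights               # write the parameters to disk
```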

docs/src/training/optimisers.md

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@ loss(x, y) = sum((predict(x) .- y).^2)
 x, y = rand(5), rand(2) # Dummy data
 l = loss(x, y) # ~ 3
 
-θ = params(W, b)
+θ = Flux.params(W, b)
 grads = gradient(() -> loss(x, y), θ)
 ```
 
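To show where those gradients go next, a hedged sketch of applying them with an optimiser, in the spirit of the rest of the optimisers page; `W`, `b`, `predict`, and the data are redefined here so the block stands alone:

```julia
using Flux
using Flux.Optimise: Descent, update!

W = rand(2, 5)
b = rand(2)
predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = rand(5), rand(2)                    # dummy data

θ = Flux.params(W, b)
grads = gradient(() -> loss(x, y), θ)

opt = Descent(0.1)                         # plain gradient descent, η = 0.1
for p in θ
  update!(opt, p, grads[p])                # in-place: p .-= 0.1 .* grads[p]
end
```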

docs/src/training/training.md

Lines changed: 1 addition & 1 deletion

@@ -64,7 +64,7 @@ At first glance it may seem strange that the model that we want to train is not
 
 ## Model parameters
 
-The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../models/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `params(m)`.
+The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../models/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.
 
 Such an object contains a reference to the model's parameters, not a copy, such that after their training, the model behaves according to their updated values.
 
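A minimal sketch of the full `Flux.train!` call this paragraph describes; the model, loss, and data below are placeholders:

```julia
using Flux

m = Chain(Dense(4, 2), Dense(2, 1))
loss(x, y) = Flux.Losses.mse(m(x), y)

data = [(rand(Float32, 4, 16), rand(Float32, 1, 16))]   # a single (x, y) batch
opt = ADAM()

Flux.train!(loss, Flux.params(m), data, opt)            # one pass over data
```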
