Commit 13e4b83

Minor fixes
1 parent 1d79acf commit 13e4b83

10 files changed (+19 −20 lines changed)

docs/Project.toml (−2)

@@ -5,7 +5,6 @@ ComponentArrays = "b0b7db55-cfe3-40fc-9ded-d10e2dbeff66"
 DataDeps = "124859b0-ceae-595e-8997-d05f6a7a8dfe"
 DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 DiffEqFlux = "aae7a2af-3d4f-5e19-a356-7da93b79d9d0"
-DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
 Distances = "b4f34e82-e78d-54a5-968a-f98e89d6e8f7"
 Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
@@ -43,7 +42,6 @@ ComponentArrays = "0.13, 0.14, 0.15"
 DataDeps = "0.7"
 DataFrames = "1"
 DiffEqFlux = "3"
-DifferentialEquations = "7.6.0"
 Distances = "0.10.7"
 Distributions = "0.25.78"
 Documenter = "1"

docs/src/examples/GPUs.md (+1 −1)

@@ -8,7 +8,7 @@ For a detailed discussion on how GPUs need to be setup refer to
 [Lux Docs](https://lux.csail.mit.edu/stable/manual/gpu_management).

 ```julia
-using DifferentialEquations, Lux, LuxCUDA, SciMLSensitivity, ComponentArrays
+using OrdinaryDiffEq, Lux, LuxCUDA, SciMLSensitivity, ComponentArrays
 using Random
 rng = Random.default_rng()


docs/src/examples/augmented_neural_ode.md (+2 −2)

@@ -3,7 +3,7 @@
 ## Copy-Pasteable Code

 ```@example augneuralode_cp
-using DiffEqFlux, DifferentialEquations, Statistics, LinearAlgebra, Plots, LuxCUDA, Random
+using DiffEqFlux, OrdinaryDiffEq, Statistics, LinearAlgebra, Plots, LuxCUDA, Random
 using MLUtils, ComponentArrays
 using Optimization, OptimizationOptimisers, IterTools

@@ -114,7 +114,7 @@ plt_node = plot_contour(model, res.u, st)
 ### Loading required packages

 ```@example augneuralode
-using DiffEqFlux, DifferentialEquations, Statistics, LinearAlgebra, Plots, LuxCUDA, Random
+using DiffEqFlux, OrdinaryDiffEq, Statistics, LinearAlgebra, Plots, LuxCUDA, Random
 using MLUtils, ComponentArrays
 using Optimization, OptimizationOptimisers, IterTools


docs/src/examples/collocation.md (+1 −1)

@@ -96,7 +96,7 @@ us to get an estimate of the approximate noiseless dynamics:

 ```@example collocation
 using ComponentArrays,
-    Lux, DiffEqFlux, Optimization, OptimizationOptimisers, DifferentialEquations, Plots
+    Lux, DiffEqFlux, Optimization, OptimizationOptimisers, OrdinaryDiffEq, Plots

 using Random
 rng = Random.default_rng()

docs/src/examples/hamiltonian_nn.md (+1 −1)

@@ -92,7 +92,7 @@ dataloader = ncycle(((selectdim(data, 2, ((i - 1) * B + 1):(min(i * B, size(data
 We parameterize the HamiltonianNN with a small MultiLayered Perceptron. HNNs are trained by optimizing the gradients of the Neural Network. Zygote currently doesn't support nesting itself, so we will be using ForwardDiff in the training loop to compute the gradients of the HNN Layer for Optimization.

 ```@example hamiltonian
-hnn = HamiltonianNN(Chain(Dense(2 => 64, relu), Dense(64 => 1)); ad - AutoZygote())
+hnn = HamiltonianNN(Chain(Dense(2 => 64, relu), Dense(64 => 1)); ad = AutoZygote())
 ps, st = Lux.setup(Random.default_rng(), hnn)
 ps_c = ps |> ComponentArray


docs/src/examples/multiple_shooting.md (+1 −1)

@@ -24,7 +24,7 @@ The following is a working demo, using Multiple Shooting:

 ```julia
 using ComponentArrays,
-    Lux, DiffEqFlux, Optimization, OptimizationPolyalgorithms, DifferentialEquations, Plots
+    Lux, DiffEqFlux, Optimization, OptimizationPolyalgorithms, OrdinaryDiffEq, Plots
 using DiffEqFlux: group_ranges

 using Random

docs/src/examples/neural_ode.md (+3 −3)

@@ -12,8 +12,8 @@ Before getting to the explanation, here's some code to start with. We will
 follow a full explanation of the definition and training process:

 ```@example neuralode_cp
-using ComponentArrays, Lux, DiffEqFlux, DifferentialEquations, Optimization,
-    OptimizationOptimJL, OptimizationOptimisers, Random, Plots
+using ComponentArrays, Lux, DiffEqFlux, OrdinaryDiffEq, Optimization, OptimizationOptimJL,
+    OptimizationOptimisers, Random, Plots

 rng = Random.default_rng()
 u0 = Float32[2.0; 0.0]
@@ -83,7 +83,7 @@ callback(result_neuralode2.u, loss_neuralode(result_neuralode2.u)...; doplot = t
 Let's get a time series array from a spiral ODE to train against.

 ```@example neuralode
-using ComponentArrays, Lux, DiffEqFlux, DifferentialEquations, Optimization,
+using ComponentArrays, Lux, DiffEqFlux, OrdinaryDiffEq, Optimization,
     OptimizationOptimJL, OptimizationOptimisers, Random, Plots

 rng = Random.default_rng()

docs/src/examples/neural_ode_weather_forecast.md (+1 −1)

@@ -9,7 +9,7 @@ The data is a four-dimensional dataset of daily temperature, humidity, wind spee

 ```julia
 using Random, Dates, Optimization, ComponentArrays, Lux, OptimizationOptimisers, DiffEqFlux,
-    DifferentialEquations, CSV, DataFrames, Dates, Statistics, Plots, DataDeps
+    OrdinaryDiffEq, CSV, DataFrames, Dates, Statistics, Plots, DataDeps

 function download_data(data_url = "https://raw.githubusercontent.com/SebastianCallh/neural-ode-weather-forecast/master/data/",
     data_local_path = "./delhi")

docs/src/examples/normalizing_flows.md (+6 −6)

@@ -8,8 +8,8 @@ Before getting to the explanation, here's some code to start with. We will
 follow a full explanation of the definition and training process:

 ```@example cnf
-using ComponentArrays, DiffEqFlux, DifferentialEquations, Optimization,
-    OptimizationOptimisers, OptimizationOptimJL, Distributions, Random
+using ComponentArrays, DiffEqFlux, OrdinaryDiffEq, Optimization, Distributions,
+    Random, OptimizationOptimisers, OptimizationOptimJL

 nn = Chain(Dense(1, 3, tanh), Dense(3, 1, tanh))
 tspan = (0.0f0, 10.0f0)
@@ -37,7 +37,7 @@ adtype = Optimization.AutoForwardDiff()
 optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
 optprob = Optimization.OptimizationProblem(optf, ps)

-res1 = Optimization.solve(optprob, Adam(0.1); maxiters = 20, callback = cb)
+res1 = Optimization.solve(optprob, Adam(0.01); maxiters = 20, callback = cb)

 optprob2 = Optimization.OptimizationProblem(optf, res1.u)
 res2 = Optimization.solve(optprob2, Optim.LBFGS(); allow_f_increases = false,
@@ -62,8 +62,8 @@ new_data = rand(ffjord_dist, 100)
 We can use DiffEqFlux.jl to define, train and output the densities computed by CNF layers. In the same way as a neural ODE, the layer takes a neural network that defines its derivative function (see [1] for a reference). A possible way to define a CNF layer, would be:

 ```@example cnf2
-using ComponentArray, DiffEqFlux, DifferentialEquations, Optimization,
-    OptimizationOptimisers, OptimizationOptimJL, Distributions, Random
+using ComponentArrays, DiffEqFlux, OrdinaryDiffEq, Optimization, OptimizationOptimisers,
+    OptimizationOptimJL, Distributions, Random

 nn = Chain(Dense(1, 3, tanh), Dense(3, 1, tanh))
 tspan = (0.0f0, 10.0f0)
@@ -113,7 +113,7 @@ adtype = Optimization.AutoForwardDiff()
 optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
 optprob = Optimization.OptimizationProblem(optf, ps)

-res1 = Optimization.solve(optprob, Adam(0.1); maxiters = 20, callback = cb)
+res1 = Optimization.solve(optprob, Adam(0.01); maxiters = 20, callback = cb)
 ```

 We then complete the training using a different optimizer, starting from where `Adam` stopped.

docs/src/examples/tensor_layer.md (+3 −2)

@@ -13,8 +13,9 @@ To obtain the training data, we solve the equation of motion using one of the
 solvers in `DifferentialEquations`:

 ```@example tensor
-using ComponentArrays, DiffEqFlux, Optimization, OptimizationOptimisers,
-    DifferentialEquations, LinearAlgebra, Random
+using ComponentArrays,
+    DiffEqFlux, Optimization, OptimizationOptimisers,
+    OrdinaryDiffEq, LinearAlgebra, Random
 k, α, β, γ = 1, 0.1, 0.2, 0.3
 tspan = (0.0, 10.0)

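Across these files the commit follows a single pattern: examples that only solve ODEs drop the heavy `DifferentialEquations` umbrella package in favor of the lighter `OrdinaryDiffEq`, which already exports `ODEProblem`, `solve`, and the explicit solvers the docs use. A minimal sketch of what a post-migration example reduces to (hypothetical code, not part of this commit; it assumes only the standard OrdinaryDiffEq API):

```julia
# Hypothetical post-migration sketch: OrdinaryDiffEq alone is enough to
# define and solve an ODE; no `using DifferentialEquations` is required.
using OrdinaryDiffEq

f(u, p, t) = -0.5 * u                  # linear decay du/dt = -u/2
prob = ODEProblem(f, 1.0, (0.0, 1.0))  # u(0) = 1 on t ∈ [0, 1]
sol = solve(prob, Tsit5())             # Tsit5 is exported by OrdinaryDiffEq
sol.u[end]                             # ≈ exp(-0.5)
```

Examples that need SDEs, DDEs, or the automatic solver selection of `solve(prob)` would still want the umbrella package; the examples touched here all pass an explicit ODE solver, so the lighter dependency suffices.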

0 commit comments
