[Build Status](https://travis-ci.org/YingboMa/ForwardDiff2.jl)
[Coverage](https://codecov.io/gh/YingboMa/ForwardDiff2.jl)
`ForwardDiff2` = `ForwardDiff.jl` + `ChainRules.jl` + Struct of arrays

### Warning: this package is still a work in progress

User API:
```julia
julia> using ForwardDiff2: D

julia> v = rand(2)
2-element Array{Float64,1}:
 0.22260830987887537
 0.6397089507287486

julia> D(prod)(v) # gradient
1×2 LinearAlgebra.Adjoint{Float64,Array{Float64,1}}:
 0.639709  0.222608

julia> D(cumsum)(v) # Jacobian
2×2 Array{Float64,2}:
 1.0  0.0
 1.0  1.0

julia> D(D(prod))(v) # Hessian
2×2 LinearAlgebra.Adjoint{Float64,Array{Float64,2}}:
 0.0  1.0
 1.0  0.0
```
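To give a feel for what `D` is doing under the hood, here is a minimal, self-contained sketch of forward-mode AD with dual numbers. This is an illustration of the idea only, not `ForwardDiff2`'s actual implementation; the `Dual` type and `f` below are made up for this example:

```julia
# Illustrative sketch only -- NOT ForwardDiff2's implementation.
# A dual number carries a primal value together with a tangent (derivative)
# component; arithmetic on duals propagates derivatives via the chain rule.
struct Dual{T}
    val::T  # primal value
    eps::T  # tangent (derivative) component
end

Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.eps + b.eps)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.eps * b.val + a.val * b.eps)

f(x) = x * x + x  # f'(x) = 2x + 1, f''(x) = 2

# First derivative: seed the tangent component with 1.
d = f(Dual(3.0, 1.0))
d.eps  # f'(3.0) == 7.0

# Nesting duals yields second derivatives -- the same trick that makes
# `D(D(prod))` in the example above produce a Hessian.
dd = f(Dual(Dual(3.0, 1.0), Dual(1.0, 0.0)))
dd.eps.eps  # f''(3.0) == 2.0
```

The "Struct of arrays" part of the package's formula refers to how batches of such tangent components are stored for vectorized evaluation, rather than to this scalar sketch.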

Note that `ForwardDiff2.jl` also works with `ModelingToolkit.jl`:
```julia
julia> using ModelingToolkit

julia> @variables v[1:2]
(Operation[v₁, v₂],)

julia> D(prod)(v) # gradient
1×2 LinearAlgebra.Adjoint{Operation,Array{Operation,1}}:
 conj(1v₂ + v₁ * identity(0))  conj(identity(0) * v₂ + v₁ * 1)

julia> D(cumsum)(v) # Jacobian
2×2 Array{Expression,2}:
      Constant(1)      identity(0)
 identity(0) + 1  1 + identity(0)

julia> D(D(prod))(v) # Hessian
2×2 LinearAlgebra.Adjoint{Operation,Array{Operation,2}}:
 conj((1 * identity(0) + v₁ * 0) + (1 * identity(0) + v₂ * 0))  conj((identity(0) * identity(0) + v₁ * 0) + (1 * 1 + v₂ * 0))
 conj((1 * 1 + v₁ * 0) + (identity(0) * identity(0) + v₂ * 0))  conj((identity(0) * 1 + v₁ * 0) + (identity(0) * 1 + v₂ * 0))
```

Planned features:

- works on both GPU and CPU
- [Dual cache](http://docs.juliadiffeq.org/latest/basics/faq.html#I-get-Dual-number-errors-when-I-solve-my-ODE-with-Rosenbrock-or-SDIRK-methods...?-1)
- user-extensible scalar and tensor derivative definitions
- support for in-place functions
- sparsity exploitation (color vector support)