
autograd

Autograd engine for scalar or n-dimensional tensor values, supporting both reverse-mode (chain-rule backpropagation) and forward-mode automatic differentiation.

Warning

Built for educational purposes.

Dual Numbers & Forward Mode Automatic Differentiation

z = Dual(2.0, 3.0) + Dual(4.0, 5.0) # Dual(6.0, 8.0), value and derivative parts add independently

f(x) = x^2 + 3x + 2
autodiff(f, 2.0) # 7.0, since f'(x) = 2x + 3

As seen on Computerphile.
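For intuition, a dual number carries a value together with a derivative, and arithmetic on duals propagates both at once via the usual calculus rules. A minimal self-contained sketch of the idea (the MyDual type, its field names, and myautodiff are illustrative, not this library's actual API):

struct MyDual
    val::Float64 # function value
    der::Float64 # derivative carried alongside
end

Base.:+(a::MyDual, b::MyDual) = MyDual(a.val + b.val, a.der + b.der)
Base.:*(a::MyDual, b::MyDual) = MyDual(a.val * b.val, a.val * b.der + a.der * b.val) # product rule
Base.:+(a::MyDual, c::Real) = MyDual(a.val + c, a.der)
Base.:*(c::Real, a::MyDual) = MyDual(c * a.val, c * a.der)

# Forward mode: seed the derivative slot with 1.0, evaluate, read it back out.
myautodiff(f, x) = f(MyDual(x, 1.0)).der

poly(x) = x * x + 3x + 2
myautodiff(poly, 2.0) # 7.0, matching autodiff above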

Chain Rule Backpropagation / Reverse Mode Automatic Differentiation

w = Scalar(2.0)
x = Scalar(3.0)
b = Scalar(1.0)

L = w * x + b # Scalar(7.0, grad=0.0)

backward(L) # ~> w = Scalar(2.0, grad=3.0), x = Scalar(3.0, grad=2.0), b = Scalar(1.0, grad=1.0)
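Under the hood, reverse mode typically records each operation's inputs together with a local derivative, then walks the graph from the output backwards in topological order, accumulating gradients via the chain rule. A minimal scalar sketch of that mechanism (Node, its fields, and backprop! are illustrative names, not this library's internals):

mutable struct Node
    val::Float64
    grad::Float64
    parents::Vector{Tuple{Node,Float64}} # (input node, local derivative)
end

Node(v::Float64) = Node(v, 0.0, Tuple{Node,Float64}[])

nmul(a::Node, b::Node) = Node(a.val * b.val, 0.0, [(a, b.val), (b, a.val)])
nadd(a::Node, b::Node) = Node(a.val + b.val, 0.0, [(a, 1.0), (b, 1.0)])

function backprop!(out::Node)
    # Post-order DFS so every node's gradient is complete before it is
    # pushed further towards the leaves.
    order, seen = Node[], Set{Node}()
    function visit(n::Node)
        n in seen && return
        push!(seen, n)
        foreach(p -> visit(p[1]), n.parents)
        push!(order, n)
    end
    visit(out)
    out.grad = 1.0
    for n in reverse(order), (p, d) in n.parents
        p.grad += n.grad * d # chain rule, accumulated across all uses
    end
end

w, x, b = Node(2.0), Node(3.0), Node(1.0)
backprop!(nadd(nmul(w, x), b)) # w.grad == 3.0, x.grad == 2.0, b.grad == 1.0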
The same works on n-dimensional tensors:

w = Tensor([1.0 1.0 1.0; 1.0 1.0 1.0; 1.0 1.0 1.0])
b = Tensor([1.0; 1.0; 1.0])

x = [1.0; 2.0; 3.0]

y_hat = w * x + b # ~> Tensor([7.0; 7.0; 7.0])

backward(y_hat)

relu(y_hat)
mse(y_hat)
# etc.

As seen in PyTorch.
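Elementwise ops like relu fit the same scheme: the forward pass is applied entrywise, and the backward pass scales the upstream gradient by the local derivative. A sketch of just the rule, separate from the library's Tensor type (function names here are hypothetical):

relu_forward(v) = max.(0.0, v)

# drelu/dv is 1 where v > 0 and 0 elsewhere, so the backward pass
# simply masks the upstream gradient.
relu_backward(v, upstream) = upstream .* (v .> 0.0)

v = [-1.0, 2.0, -3.0]
relu_forward(v)                   # [0.0, 2.0, 0.0]
relu_backward(v, [1.0, 1.0, 1.0]) # [0.0, 1.0, 0.0]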

Framework

model = Sequential(
    Linear(256, 128),
    ReLU(),
    Linear(128, 64),
    Linear(64, 10),
    Sigmoid()
)

criterion = CrossEntropy()
optimizer = Adam(model)

zero_grad(optimizer)
output = model(x)
loss = criterion(output, target)
backward(loss)
step(optimizer)
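Put together, a training run simply repeats those five steps over the data; a sketch assuming the API above plus a hypothetical iterable batches of (x, target) pairs:

for epoch in 1:10
    for (x, target) in batches # hypothetical data iterator
        zero_grad(optimizer)   # clear gradients from the previous step
        output = model(x)      # forward pass builds the graph
        loss = criterion(output, target)
        backward(loss)         # reverse-mode backprop
        step(optimizer)        # Adam update of the model parameters
    end
end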

Bonus: Graphs

Additionally, the library includes a thin, generic, functor-style Graphs module that works with arbitrary graph-like abstractions:

result = Tensor(1.0) * Tensor(2.0) + Tensor(0.5)

g = Graph(result, (t::Tensor) -> t.children)
operands = topological_sort(g)
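A functor-style topological sort is usually just a depth-first post-order traversal driven by the supplied children function; a self-contained sketch of the idea (MyGraph and my_toposort are illustrative, not the module's actual implementation):

struct MyGraph{T}
    root::T
    children::Function # maps a node to its children
end

function my_toposort(g::MyGraph)
    order, seen = Any[], Set{Any}()
    function visit(node)
        node in seen && return
        push!(seen, node)
        foreach(visit, g.children(node))
        push!(order, node) # post-order: a node comes after everything it depends on
    end
    visit(g.root)
    order
end

# Works for any graph-like structure, e.g. a plain adjacency Dict:
adj = Dict("a" => ["b", "c"], "b" => ["d"], "c" => ["d"], "d" => String[])
my_toposort(MyGraph("a", n -> adj[n])) # ["d", "b", "c", "a"]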
