# TinyAutoGrad

TinyAutoGrad is an automatic differentiation engine and neural network library inspired by Micrograd, with optional CUDA support.
## Features

- ⚙️ Automatic differentiation (forward & backward)
- 🧱 Simple `Tensor` class with NumPy-style operations (see the sketch below the list)
- 🧠 Neural network support with an MLP example
- 💻 CPU and ⚡ CUDA backend support
- 🧪 Built-in testing with `pytest`
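A minimal usage sketch of the autograd workflow is shown below. The `Tensor` constructor, operator overloads, `sum()`, `backward()`, and `.grad` are assumed names based on the Micrograd-style design, not a documented API; see `tinyautograd/tensor.py` for the real interface.

```python
# Hypothetical usage sketch: Tensor, sum(), backward(), and .grad are
# assumed from the Micrograd-style design, not a documented API.
from tinyautograd.tensor import Tensor

x = Tensor([[1.0, 2.0], [3.0, 4.0]])  # NumPy-style 2D data
w = Tensor([[0.5], [-1.0]])

y = (x @ w).sum()   # forward pass builds the computation graph
y.backward()        # reverse-mode autodiff populates .grad on the leaves

print(w.grad)       # gradient of y with respect to w
```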
## Usage

Run the MNIST example on CPU:

```bash
python -m samples.mnist.mnist
```
## CUDA backend

Ensure the CUDA toolkit (`nvcc`) is installed, then:

```bash
# Compile the CUDA backend
nvcc -shared -o libops.so tinyautograd/ops.cu -Xcompiler -fPIC -lcublas

# Run test.py
python test.py

# Run MNIST with the CUDA backend
python -m samples.mnist.mnist_cuda
```
⚠️ Make sure `libops.so` is in the current directory or on the Python load path. A quick way to verify that the library resolves is shown below.
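The following is a generic sanity check using the standard `ctypes` module; it is not part of TinyAutoGrad's API, and the path is only an example.

```python
# Generic sanity check that libops.so can be found and loaded.
# Uses plain ctypes; not part of TinyAutoGrad's own API.
import ctypes
import os

lib_path = os.path.join(os.getcwd(), "libops.so")
lib = ctypes.CDLL(lib_path)  # raises OSError if the library cannot be loaded
print("CUDA backend loaded:", lib)
```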
## Testing

Run all unit tests:

```bash
pytest tinyautograd/ -s -v --cache-clear
```

Or a specific test module:

```bash
pytest tinyautograd/test_rawtensor.py -s -v --cache-clear
```
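A sketch of what a test in this suite might look like; the `Tensor`, `backward()`, and `.grad` names are assumptions about the Micrograd-style API, so adjust them to the actual definitions in `tensor.py`.

```python
# Hypothetical test sketch: Tensor, backward(), and .grad are assumed
# names (Micrograd-style); adapt to the actual API in tensor.py.
import numpy as np
from tinyautograd.tensor import Tensor

def test_mul_sum_backward():
    a = Tensor([2.0, 3.0])
    b = Tensor([4.0, 5.0])
    (a * b).sum().backward()
    # d(sum(a*b))/da = b and d(sum(a*b))/db = a
    assert np.allclose(a.grad, [4.0, 5.0])
    assert np.allclose(b.grad, [2.0, 3.0])
```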
## Project structure

```text
.
├── README.md
├── libops.so
├── test.py
├── samples
│   └── mnist
│       ├── mnist.py
│       ├── mnist_cuda.py
│       ├── train-images-idx3-ubyte.gz
│       └── train-labels-idx1-ubyte.gz
└── tinyautograd
    ├── __init__.py
    ├── functional.py
    ├── nn.py
    ├── ops.cu
    ├── optim.py
    ├── rawtensor.py
    ├── tensor.py
    ├── test_nn.py
    ├── test_rawtensor.py
    └── test_tensor.py
```
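The `tensor.py` / `nn.py` / `optim.py` / `functional.py` split suggests a Micrograd-style training loop. The sketch below is purely illustrative: `MLP`, `SGD`, `parameters()`, `zero_grad()`, `step()`, and `cross_entropy` are assumed names inferred from the module layout, not the documented API; see `samples/mnist/mnist.py` for the actual training code.

```python
# Illustrative sketch only: MLP, SGD, parameters(), zero_grad(), step(), and
# cross_entropy are assumed names inferred from the module layout, not the
# documented API. See samples/mnist/mnist.py for the real example.
import numpy as np
from tinyautograd.tensor import Tensor
from tinyautograd.nn import MLP
from tinyautograd.optim import SGD
from tinyautograd import functional as F

model = MLP(784, [128, 10])                # assumed: input size, layer sizes
opt = SGD(model.parameters(), lr=0.01)

x = Tensor(np.random.rand(32, 784))        # dummy batch of 32 flattened images
y = np.random.randint(0, 10, size=32)      # dummy integer labels

logits = model(x)
loss = F.cross_entropy(logits, y)          # assumed helper in functional.py
opt.zero_grad()
loss.backward()
opt.step()
```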
## License

MIT