"What I cannot create, I do not understand." - Richard Feynman
Driven by this philosophy, this high-performance tensor library was built as a hands-on learning project. It leverages Metal for GPU acceleration, implements dynamic compute graphs for automatic differentiation (autograd), and provides Python bindings for ease of use in machine learning and scientific computing.
Note: While capable of basic tensor operations and gradient computation, this project is primarily educational and not intended for production-level model building.
- GPU Acceleration: Utilizes Metal for efficient tensor computation on macOS devices.
- Dynamic Compute Graphs: Implements dynamic computation graphs for automatic differentiation, similar to autograd, enabling gradient computation for machine learning tasks (see the sketch after this list).
- Python Bindings: Provides Python bindings for seamless integration with Python-based workflows.
- High Performance: Optimized for both CPU and GPU execution on Metal-enabled devices.
- Educational Focus: Aimed at helping users understand the underlying concepts of tensor operations, autograd, and GPU acceleration.
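
To make the dynamic-graph idea concrete, here is a minimal scalar-valued sketch of reverse-mode autodiff: each operation records its inputs and a closure that applies the chain rule, and `backward()` replays those closures in reverse topological order. This illustrates the general technique only; it is not ACTx's actual implementation.

```python
class Scalar:
    """Toy autograd node: records how it was computed so gradients can flow back."""

    def __init__(self, value, parents=(), backward_fn=None):
        self.value = value              # forward result
        self.grad = 0.0                 # accumulated gradient
        self.parents = parents          # nodes this one was computed from
        self.backward_fn = backward_fn  # closure that pushes grad to parents

    def __mul__(self, other):
        out = Scalar(self.value * other.value, parents=(self, other))

        def backward_fn():
            # d(a*b)/da = b and d(a*b)/db = a (chain rule)
            self.grad += other.value * out.grad
            other.grad += self.value * out.grad

        out.backward_fn = backward_fn
        return out

    def backward(self):
        # Topologically order the recorded graph, then run closures in reverse.
        order, seen = [], set()

        def visit(node):
            if node not in seen:
                seen.add(node)
                for p in node.parents:
                    visit(p)
                order.append(node)

        visit(self)
        self.grad = 1.0  # seed: d(out)/d(out) = 1
        for node in reversed(order):
            if node.backward_fn:
                node.backward_fn()

a, b = Scalar(3.0), Scalar(4.0)
c = a * b
c.backward()
print(a.grad, b.grad)  # 4.0 3.0
```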
- macOS 10.15+ or iOS 13+ with Metal support
- Xcode 12+ with Command Line Tools
- Python 3.x (for Python bindings)
- CMake 3.x or higher (for building the project)
- Clone the repository:

  ```bash
  git clone https://github.com/arjunmnath/ACTx.git
  cd ACTx
  ```
- Build the C++/Objective-C++ library:

  ```bash
  # Note: $(nproc) assumes GNU coreutils; on stock macOS use $(sysctl -n hw.ncpu) instead.

  # Debug build
  cmake --preset debug
  cmake --build build -- -j$(nproc)

  # Release build
  cmake --preset release
  cmake --build build -- -j$(nproc)

  # Test build
  cmake --preset test
  cmake --build build -- -j$(nproc)

  # Run tests
  ctest --parallel $(nproc) --progress --test-dir build --output-on-failure
  ```
- Install the Python bindings:

  From source:

  ```bash
  pip install .
  ```

  Or from PyPI:

  ```bash
  pip install actx
  ```

  🚧 macOS only at present.
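
Once installed, a quick smoke test (this reuses `actx.random` and `requires_grad` exactly as they appear in the Python usage example below):

```python
import actx

# Any import or Metal setup problem should surface here.
t = actx.random((2, 2), requires_grad=True)
print(t)
```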
#include "actx.h"
int main() {
Tensor tensor1 = Tensor::random({3, 3});
Tensor tensor2 = Tensor::random({3, 3});
// Define a simple computation
Tensor result = tensor1 * tensor2;
// Compute gradients
result.backward();
// Access the gradients
Tensor grad = tensor1.grad();
grad.print();
return 0;
}
Python example:

```python
import actx

# Create tensors
tensor1 = actx.random((3, 3), requires_grad=True)
tensor2 = actx.random((3, 3), requires_grad=True)

# Define a simple computation
result = tensor1 * tensor2

# Compute gradients
result.backward()

# Access the gradients
grad_tensor1 = tensor1.grad
grad_tensor2 = tensor2.grad
print("Gradient of tensor1:\n", grad_tensor1)
print("Gradient of tensor2:\n", grad_tensor2)
```
No documentation yet; understand it yourself 🤷🏻‍♂️
Contributions are welcome! Please read our Contributions Guide before submitting a pull request. If you encounter any issues, feel free to open an issue in the repository.
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.
- Metal framework for GPU acceleration
- Python bindings via the CPython C API
- Inspired by various tensor libraries such as NumPy and PyTorch, and automatic differentiation systems like autograd.