A comprehensive deep learning framework built from scratch in Python, featuring neural networks, multiple activation functions, optimizers, and loss functions.
- Neural Network Architecture: Flexible network construction with easy layer addition
- Layer Types: Dense, Dropout, Batch Normalization
- Activation Functions: Sigmoid, Tanh, ReLU
- Optimizers: SGD, Adam, AdaGrad, RMSprop, AdamW
- Loss Functions: MSE, MAE, Binary/Categorical Cross-entropy, Huber, Hinge, and more
- Training history tracking
- Model saving/loading
- Batch normalization
- Dropout regularization
- Validation during training
- Comprehensive plotting utilities
```bash
git clone https://github.com/aturs3001/deep-learning-framework.git
cd deep-learning-framework
pip install -e .
pip install -r requirements.txt
```
```python
import numpy as np

from deep_learning import NeuralNetwork

# Create sample data
X = np.random.randn(2, 1000)  # 2 features, 1000 samples
y = (X[0] + X[1] > 0).astype(int).reshape(1, -1)  # Binary labels

# Create and configure model
model = NeuralNetwork(loss='binary_crossentropy', optimizer='adam')
model.add_dense(units=10, activation='relu', input_size=2)
model.add_dense(units=5, activation='relu')
model.add_dense(units=1, activation='sigmoid')

# Train the model
history = model.fit(X, y, epochs=100, batch_size=32, verbose=True)

# Make predictions
predictions = model.predict(X)
```
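For a quick sanity check on the fitted model (a minimal sketch, assuming `predict` returns sigmoid probabilities with shape `(1, n_samples)`, matching the data layout above):

```python
# Threshold the probabilities and compare against the true labels
accuracy = np.mean((predictions > 0.5).astype(int) == y)
print(f"Training accuracy: {accuracy:.2%}")
```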
```python
# Generate sample data for 3 classes (2 features, samples-last layout as above)
X = np.random.randn(2, 900)
classes = (X[0] > 0).astype(int) + (X[1] > 0).astype(int)  # Classes 0, 1, 2
y = np.eye(3)[classes].T  # One-hot labels, shape (3, 900)

# Hold out the last 200 samples for validation
X_train, X_val = X[:, :700], X[:, 700:]
y_train, y_val = y[:, :700], y[:, 700:]

# Create model for 3-class classification
model = NeuralNetwork(loss='categorical_crossentropy', optimizer='adam')
model.add_dense(units=15, activation='relu', input_size=2)
model.add_dropout(rate=0.3)
model.add_dense(units=10, activation='relu')
model.add_dense(units=3, activation='softmax')  # 3 classes

# Train with validation data
history = model.fit(
    X_train, y_train,
    epochs=150,
    validation_data=(X_val, y_val),
    verbose=True
)
```
```python
# Generate noisy sinusoidal data (1 feature, 500 samples), last 100 held out
X = np.random.uniform(-3, 3, (1, 500))
y = np.sin(X) + 0.1 * np.random.randn(1, 500)
X_train, X_test, y_train, y_test = X[:, :400], X[:, 400:], y[:, :400], y[:, 400:]

# Create regression model
model = NeuralNetwork(loss='mse', optimizer='adam')
model.add_dense(units=20, activation='relu', input_size=1)
model.add_dense(units=10, activation='relu')
model.add_dense(units=1, activation='linear')

# Train the model
history = model.fit(X_train, y_train, epochs=200, batch_size=32)

# Evaluate performance on held-out data
loss, accuracy = model.evaluate(X_test, y_test)
```
```python
# Model combining batch normalization and dropout for regularization
model = NeuralNetwork(loss='categorical_crossentropy', optimizer='adam')
model.add_dense(units=64, activation='relu', input_size=10)
model.add_batch_norm()       # Batch normalization
model.add_dropout(rate=0.5)  # Dropout regularization
model.add_dense(units=32, activation='relu')
model.add_dropout(rate=0.3)
model.add_dense(units=5, activation='softmax')
```
```python
# SGD with momentum
model = NeuralNetwork(loss='mse', optimizer='sgd')

# Adam optimizer (default parameters)
model = NeuralNetwork(loss='mse', optimizer='adam')

# Custom optimizer
from deep_learning.optimizers import Adam

custom_optimizer = Adam(learning_rate=0.001, beta1=0.9, beta2=0.999)
model = NeuralNetwork(loss='mse', optimizer=custom_optimizer)
```
```python
# Save trained model
model.save('my_model.pkl')

# Load model
model = NeuralNetwork(loss='mse', optimizer='adam')
model.load('my_model.pkl')

# Plot training history
model.plot_history()

# Access training metrics
print("Training loss:", history['loss'])
print("Validation accuracy:", history['val_accuracy'])
```
The framework includes comprehensive examples:
```bash
python examples/classification_demo.py
```
- Binary classification
- Multi-class classification
- Non-linear classification (circular data)
```bash
python examples/regression_demo.py
```
- Linear regression
- Polynomial regression
- Sinusoidal regression
- Multi-feature regression
- Optimizer comparison
```python
class NeuralNetwork:
    def __init__(self, loss, optimizer='adam')
    def add_dense(self, units, activation='relu', input_size=None)
    def add_dropout(self, rate=0.5)
    def add_batch_norm(self, input_size=None)
    def fit(self, X, y, epochs=100, batch_size=32, validation_data=None)
    def predict(self, X)
    def evaluate(self, X, y)
    def save(self, filepath)
    def load(self, filepath)
```
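In practice these methods compose into a simple lifecycle (a brief sketch reusing the regression data and model from above):

```python
# Train, evaluate, persist, reload, and predict end to end
model.fit(X_train, y_train, epochs=50, batch_size=32)
loss, accuracy = model.evaluate(X_test, y_test)
model.save('model.pkl')

restored = NeuralNetwork(loss='mse', optimizer='adam')
restored.load('model.pkl')
predictions = restored.predict(X_test)
```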
Supported activation functions:
- `'sigmoid'` - Sigmoid activation
- `'tanh'` - Hyperbolic tangent
- `'relu'` - Rectified Linear Unit
- `'leaky_relu'` - Leaky ReLU
- `'elu'` - Exponential Linear Unit
- `'swish'` - Swish activation
- `'softmax'` - Softmax (for classification)
- `'linear'` - Linear activation
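All of these are selected by string name in `add_dense`; the less common ones plug in the same way (a minimal sketch):

```python
# Any supported activation string can be passed to add_dense
model = NeuralNetwork(loss='mse', optimizer='adam')
model.add_dense(units=16, activation='leaky_relu', input_size=4)
model.add_dense(units=8, activation='swish')
model.add_dense(units=1, activation='linear')
```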
Supported loss functions:
- `'mse'` - Mean Squared Error
- `'mae'` - Mean Absolute Error
- `'binary_crossentropy'` - Binary Cross-entropy
- `'categorical_crossentropy'` - Categorical Cross-entropy
- `'sparse_categorical_crossentropy'` - Sparse Categorical Cross-entropy
- `'huber'` - Huber loss
- `'hinge'` - Hinge loss
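The loss is chosen by name in the `NeuralNetwork` constructor. Huber, for instance, is a robust alternative to `'mse'` when regression targets contain outliers (a minimal sketch):

```python
# Huber behaves like MSE near zero error and like MAE for large errors,
# so outliers pull the fit less than with plain MSE
model = NeuralNetwork(loss='huber', optimizer='adam')
model.add_dense(units=10, activation='relu', input_size=1)
model.add_dense(units=1, activation='linear')
```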
Supported optimizers:
- `'sgd'` - Stochastic Gradient Descent
- `'adam'` - Adam optimizer
- `'rmsprop'` - RMSprop
- `'adagrad'` - AdaGrad
- `'adamw'` - AdamW (Adam with weight decay)
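Each optimizer can be passed by string name, or constructed from `deep_learning.optimizers` when you need non-default hyperparameters. A hypothetical sketch follows; the `AdamW` constructor's parameter names (e.g. `weight_decay`) are assumptions, so check `optimizers.py` for the actual signature:

```python
from deep_learning.optimizers import AdamW

# Parameter names here are assumptions, not the confirmed signature
optimizer = AdamW(learning_rate=0.001, weight_decay=0.01)
model = NeuralNetwork(loss='categorical_crossentropy', optimizer=optimizer)
```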
Run the test suite:
```bash
# Run all tests
python -m pytest tests/

# Run specific test files
python -m pytest tests/test_neural_network.py
python -m pytest tests/test_layers.py -v
```
```
deep-learning-framework/
├── deep_learning/                    # Main package
│   ├── __init__.py
│   ├── neural_network.py             # Main NeuralNetwork class
│   ├── layers.py                     # Layer implementations
│   ├── activation.py                 # Activation functions
│   ├── optimizers.py                 # Optimization algorithms
│   └── loss.py                       # Loss functions
├── tests/                            # Test suite
│   ├── test_neural_network.py
│   ├── test_layers.py
│   ├── test_activation.py
│   ├── test_optimizers.py
│   └── test_loss.py
├── examples/                         # Example scripts
│   ├── classification_demo_simple.py
│   ├── classification_demo.py
│   └── regression_demo.py
├── setup.py                          # Package setup
├── requirements.txt                  # Dependencies
└── README.md                         # This file
```
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests for new functionality
- Run the test suite (`python -m pytest`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This framework is built for educational purposes. For production use, consider:
- Using GPU acceleration (e.g., with CuPy; see the sketch after this list)
- Implementing more efficient matrix operations
- Adding support for convolutional and recurrent layers
- Using established frameworks like TensorFlow or PyTorch for large-scale applications
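As a rough illustration of the CuPy suggestion: CuPy mirrors much of the NumPy API, so array-heavy code can often switch backends with an aliased import. This is a minimal sketch, not part of the framework, and it assumes CuPy and a CUDA GPU are available:

```python
import numpy as np

try:
    import cupy as xp  # GPU-backed arrays when CuPy is installed
except ImportError:
    xp = np  # Fall back to NumPy on CPU

# A dense layer's forward pass written against the shared API
W = xp.random.randn(64, 32).astype(xp.float32)
x = xp.random.randn(32, 128).astype(xp.float32)
h = xp.maximum(0.0, W @ x)  # ReLU(Wx), runs on the GPU under CuPy
```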
All rights reserved by the author, Aric Hurkman. All files herein exist to showcase the abilities of the author and are not for any other use without permission from the author.
- Built as an educational tool to understand deep learning fundamentals
- Inspired by popular frameworks like Keras and PyTorch
- Thanks to the NumPy and Matplotlib communities
Aric Hurkman
Email: arichurkman@gmail.com
GitHub: https://github.com/aturs3001
This framework demonstrates core deep learning concepts and provides a foundation for understanding how neural networks work under the hood.