The aim was to improve the classification accuracy of a CNN model on the CIFAR-10 dataset through architecture tuning, data augmentation, and dropout regularization.
CIFAR-10 – a dataset of 60,000 32×32 colour images across 10 classes such as airplane, bird, cat, deer, and dog.
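As a quick sanity check, the dataset can be loaded directly from `torchvision`; the 50,000/10,000 train/test split and the class names printed below are standard CIFAR-10 properties rather than anything specific to this repository:

```python
from torchvision import datasets

# Download CIFAR-10: 50,000 training + 10,000 test images = 60,000 total
train_set = datasets.CIFAR10(root="./data", train=True, download=True)
test_set = datasets.CIFAR10(root="./data", train=False, download=True)

print(len(train_set), len(test_set))  # 50000 10000
print(train_set.classes)              # ['airplane', 'automobile', ..., 'truck']
```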
Data augmentation applied to the training images:

- `RandomHorizontalFlip()`
- `RandomCrop(32, padding=4)`
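A minimal sketch of how these transforms are typically combined into a single training pipeline; the `ToTensor` step and the normalisation statistics are common defaults assumed here, not values taken from the notebook:

```python
from torchvision import transforms

# Training-time augmentation: random flips plus padded random crops
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(32, padding=4),
    transforms.ToTensor(),
    # Commonly used CIFAR-10 channel statistics (assumed, not from the notebook)
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
```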
- Intermediate block improvements:
  - Dropout for regularization
  - Adapted fully connected layers
- Output block:
  - Multiple FC layers with ReLU activation
  - Final FC layer outputs raw logits
- Xavier (Glorot) initialization for weights
- Adam Optimizer with CrossEntropy Loss
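The sketch below shows what such a model and training setup might look like in PyTorch. Channel counts, layer sizes, the dropout rate, and the learning rate are illustrative assumptions, not the exact values used in `Final_Score.ipynb`:

```python
import torch
import torch.nn as nn

class SimpleCIFARNet(nn.Module):
    def __init__(self, num_classes=10, dropout=0.5):
        super().__init__()
        # Convolutional feature extractor (channel counts are illustrative)
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        # Multiple FC layers with ReLU, dropout for regularization,
        # and a final layer that outputs raw logits (no softmax)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(256, num_classes),
        )
        # Xavier (Glorot) initialization for conv and linear weights
        self.apply(self._init_weights)

    @staticmethod
    def _init_weights(m):
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.xavier_uniform_(m.weight)
            nn.init.zeros_(m.bias)

    def forward(self, x):
        return self.classifier(self.features(x))

model = SimpleCIFARNet()
criterion = nn.CrossEntropyLoss()                # expects raw logits
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```

The training loop then follows the usual PyTorch pattern: `optimizer.zero_grad()`, forward pass, `criterion(logits, labels)`, `loss.backward()`, `optimizer.step()`.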
- Accuracy increased gradually across epochs
- Final Test Accuracy: 62%
- Visualization of loss and accuracy over epochs
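The curves can be reproduced with a few lines of `matplotlib`; `train_losses` and `test_accuracies` are hypothetical names for the per-epoch metrics recorded during training:

```python
import matplotlib.pyplot as plt

def plot_history(train_losses, test_accuracies):
    """Plot per-epoch training loss and test accuracy side by side."""
    epochs = range(1, len(train_losses) + 1)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

    ax1.plot(epochs, train_losses)
    ax1.set_xlabel("Epoch")
    ax1.set_ylabel("Training loss")

    ax2.plot(epochs, test_accuracies)
    ax2.set_xlabel("Epoch")
    ax2.set_ylabel("Test accuracy (%)")

    fig.tight_layout()
    plt.show()
```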
- `Final_Score.ipynb` – full notebook including architecture, training, and evaluation
- Clone the repository
- Run `Final_Score.ipynb` in Jupyter Notebook
- Required libraries: `torch`, `torchvision`, `numpy`, `matplotlib`
- Year: 2023/24
- University: Queen Mary University of London
- Author: Vickshan Vicknakumaran
For educational and research purposes only.