This repository hosts multiple projects focused on building, compressing, and optimizing deep learning models for better speed, memory efficiency, and deployability, without sacrificing too much performance.
A complete pipeline demonstrating knowledge distillation with a custom Vision Eagle Attention (VEA)-based teacher and a lightweight CNN student. Includes a performance comparison covering accuracy, latency, model size, and parameter count; a sketch of a typical distillation objective follows.
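The snippet below is a minimal PyTorch sketch of a standard distillation loss that blends the teacher's softened predictions with the ground-truth labels. The function name, temperature, and alpha weighting are illustrative assumptions, not the repository's actual training code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Hypothetical KD objective: soft-target KL term + hard-label cross-entropy."""
    # Soften both distributions with the temperature before comparing them.
    soft_targets = F.log_softmax(teacher_logits / temperature, dim=1)
    soft_preds = F.log_softmax(student_logits / temperature, dim=1)
    # KL divergence between softened distributions, scaled by T^2 (Hinton et al.).
    kd_loss = F.kl_div(soft_preds, soft_targets,
                       reduction="batchmean", log_target=True) * temperature ** 2
    # Standard cross-entropy against the ground-truth labels.
    ce_loss = F.cross_entropy(student_logits, labels)
    # Weighted combination: alpha balances imitation vs. supervised signal.
    return alpha * kd_loss + (1.0 - alpha) * ce_loss
```

In a training loop, the teacher's logits would be computed under `torch.no_grad()` and only the student's parameters would be updated with this loss.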