Developing efficient deep learning models for real-world use. Covers knowledge distillation, quantization, pruning, and more. Focused on reducing size and latency while preserving accuracy. Includes training pipelines, visualizations, and performance reports.

🚀 Making Models Efficient

This repository hosts multiple projects focused on building, compressing, and optimizing deep learning models for better speed, memory efficiency, and deployability, without sacrificing much accuracy.
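
For context, the sketch below illustrates two of the compression techniques mentioned above, post-training dynamic quantization and global magnitude pruning, on a toy PyTorch model. It is a generic illustration with placeholder layers and settings, not code taken from this repository.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy placeholder model; the actual projects use their own architectures.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# 1) Post-training dynamic quantization: Linear weights are stored as int8 and
#    activations are quantized on the fly at inference time (returns a new model).
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# 2) Global magnitude pruning: zero out the 50% smallest-magnitude Linear weights.
#    (Independent demo on the original float model.)
parameters_to_prune = [
    (m, "weight") for m in model.modules() if isinstance(m, nn.Linear)
]
prune.global_unstructured(
    parameters_to_prune, pruning_method=prune.L1Unstructured, amount=0.5
)
for module, name in parameters_to_prune:
    prune.remove(module, name)  # bake the zeroed weights in permanently
```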


πŸ“ Projects

A complete pipeline demonstrating knowledge distillation from a custom Vision Eagle Attention (VEA)-based teacher into a lightweight CNN student. Includes a performance comparison covering accuracy, latency, model size, and parameter count.
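
For readers new to the technique, here is a minimal, generic sketch of a Hinton-style distillation loss in PyTorch: the student matches softened teacher probabilities at temperature T while also fitting the hard labels. The function name, temperature, and weighting below are illustrative assumptions, not the repository's actual training code, and the VEA teacher itself is not shown.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL divergence (scaled by T^2) with hard-label cross-entropy."""
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),  # student log-probabilities
        F.softmax(teacher_logits / T, dim=1),      # softened teacher targets
        reduction="batchmean",
    ) * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Typical training step (teacher frozen, student trainable):
# with torch.no_grad():
#     teacher_logits = teacher(images)
# student_logits = student(images)
# loss = distillation_loss(student_logits, teacher_logits, labels)
```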
