
FreePalestine.Dev

Hady Walied

Software Engineer

I build production-grade intelligent systems that combine numerical optimization with domain physics, developing models that are both data-efficient and physically consistent. My work also spans performance optimization, robust modeling, inference acceleration, and deploying AI solutions in resource-constrained environments.

πŸ“ Cairo, Egypt | πŸ“§ hadywaliedkamel@gmail.com

LinkedIn • Coursera • Dev.to • Pluralsight • CodeWars (hadywalied)


🎯 Featured Work

DistillPegasus - Transformer Model Compression

The Problem: Large transformer models are prohibitively expensive to deploy in production environments with latency and resource constraints.

The Solution: End-to-end compression pipeline combining knowledge distillation and quantization. Implemented custom training loop with PyTorch Lightning, integrated W&B for experiment tracking, and built production-ready inference engine.

Results: 75% model size reduction • 3x faster inference • <2% accuracy loss • Deployed with Qt-based demo application

Stack: PyTorch Lightning, Transformers, W&B, Quantization, Model Distillation
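
A minimal sketch of the kind of distillation training step described above, using PyTorch Lightning. The student/teacher pair, temperature, and loss weighting are illustrative placeholders rather than the project's actual training code; post-training dynamic quantization is indicated only in a closing comment.

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class DistillationModule(pl.LightningModule):
    """Student/teacher wrapper; models are passed in, nothing here is PEGASUS-specific."""

    def __init__(self, teacher, student, temperature=2.0, alpha=0.5):
        super().__init__()
        self.teacher = teacher.eval()
        for p in self.teacher.parameters():      # freeze the teacher
            p.requires_grad_(False)
        self.student = student
        self.temperature = temperature           # softens both logit distributions
        self.alpha = alpha                       # mix between soft and hard losses

    def training_step(self, batch, batch_idx):
        inputs, labels = batch
        with torch.no_grad():
            teacher_logits = self.teacher(inputs)
        student_logits = self.student(inputs)

        # Soft-target loss: KL divergence between temperature-scaled distributions.
        t = self.temperature
        kd_loss = F.kl_div(
            F.log_softmax(student_logits / t, dim=-1),
            F.softmax(teacher_logits / t, dim=-1),
            reduction="batchmean",
        ) * (t * t)

        # Hard-target loss against the ground-truth labels.
        ce_loss = F.cross_entropy(student_logits, labels)

        loss = self.alpha * kd_loss + (1.0 - self.alpha) * ce_loss
        self.log("train_loss", loss)             # picked up by the W&B logger
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.student.parameters(), lr=3e-5)


# Post-training dynamic quantization of the student's linear layers:
# quantized = torch.quantization.quantize_dynamic(
#     module.student, {torch.nn.Linear}, dtype=torch.qint8)
```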


The Problem: Traditional EDA optimization approaches ignore underlying physical constraints, leading to unrealistic solutions and poor generalization.

The Solution: Hybrid optimization system combining gm/ID-methodology-based sizing with physics-informed constraints. Built a custom computational engine leveraging NumPy/Numba/SymPy for performance-critical operations and integrated semiconductor-specific physical models.

Impact: Multiple-fold reduction in solution iteration time • Deployed to production serving industrial applications

Stack: Python, C++, NumPy, Numba, Optimization Algorithms, AWS
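
As a rough illustration of the gm/ID workflow, the sketch below sizes a single transistor from a characterization lookup table with NumPy. The table values and the size_transistor helper are made up for the example; they are not the production engine's data or API.

```python
import numpy as np

# Hypothetical characterization sweep: gm/ID [1/V] vs. current density ID/W [uA/um].
# Real tables come from SPICE sweeps of the target process, per length and bias point.
gm_id_grid = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
id_w_grid = np.array([20.0, 5.0, 1.0, 0.2, 0.05])


def size_transistor(gm_target_us, gm_id_target):
    """Return (drain current [uA], width [um]) for a required gm at a chosen gm/ID."""
    # Drain current follows directly from the definition gm/ID = gm / ID.
    i_d = gm_target_us / gm_id_target

    # Interpolate the current density at the chosen inversion level,
    # then back out the width from ID = (ID/W) * W.
    id_per_w = np.interp(gm_id_target, gm_id_grid, id_w_grid)
    return i_d, i_d / id_per_w


i_d, w = size_transistor(gm_target_us=1000.0, gm_id_target=15.0)  # gm = 1 mS
print(f"ID = {i_d:.1f} uA, W = {w:.1f} um")
```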


cGrad - ML Fundamentals from Scratch

The Challenge: Understanding automatic differentiation and backpropagation at the implementation level.

The Solution: Lightweight autograd engine and neural network library built in modern C++17 with no framework dependencies. Implements core ML primitives: computational graphs, reverse-mode autodiff, gradient descent optimizers, and basic neural network layers.

Purpose: Deep dive into ML fundamentals, performance-critical C++ design, and educational resource for learning autodiff mechanics.

Stack: Modern C++17, CMake, Template Metaprogramming
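
Since cGrad itself is C++17, the following is only a Python, micrograd-style sketch of the reverse-mode mechanics it implements (a computational graph, local backward closures, and a topological-order chain rule). Class and method names here are illustrative, not cGrad's API.

```python
class Value:
    """Scalar node in a computational graph with reverse-mode autodiff."""

    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # edges of the computational graph
        self._backward = lambda: None    # local chain-rule contribution

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad        # d(a+b)/da = 1
            other.grad += out.grad       # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()


a, b = Value(2.0), Value(3.0)
y = a * b + a
y.backward()
print(a.grad, b.grad)   # 4.0 2.0
```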


✍️ Technical Writing


πŸ› οΈ Technical Stack

ML/AI Core: PyTorch (Lightning) • TensorFlow • scikit-learn • Model Compression • Knowledge Distillation
Performance Engineering: C++ • Rust • Python optimization • Numba • CUDA basics • Memory profiling
MLOps & Deployment: Docker • AWS (EC2, S3, CI/CD) • Model serving • Experiment tracking (W&B) • MLflow • ETL (Pandas)
Scientific Computing: NumPy • SciPy • Statistical modeling • Optimization algorithms • DSP


🚀 Current Research Focus

  • Physics-Informed Neural Networks (PINNs) for solving differential equations and inverse problems (a minimal sketch follows this list)
  • Model optimization techniques: Pruning, quantization, distillation for edge deployment
  • High-performance ML inference: Exploring Rust and C++ for production ML systems
  • Hybrid approaches: Combining classical optimization with deep learning
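
As referenced in the first bullet, here is a minimal PINN sketch in PyTorch for the toy ODE u'(x) = -u(x) with u(0) = 1 (solution e^{-x}). The network size, collocation sampling, and training schedule are arbitrary choices for illustration, not results from this research.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)   # collocation points in [0, 1]
    u = net(x)

    # Physics residual: du/dx + u should vanish everywhere in the domain.
    du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual_loss = ((du_dx + u) ** 2).mean()

    # Boundary condition u(0) = 1.
    bc_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()

    loss = residual_loss + bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```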

📚 Education & Credentials

B.S.E. Electronics & Electrical Communications Engineering • Cairo University • 2021
Relevant Coursework: Linear Algebra, Calculus (ODE/PDE), Classical & Deep ML, DSP, Statistical Methods

Professional Certifications:


💼 Open to Opportunities

I'm actively seeking roles in:

  • ML Engineering: Model development, optimization, and production deployment
  • Applied AI Research: Physics-Informed ML, model compression, efficient inference
  • ML Systems Engineering: High-performance inference engines, C++/Python integration
  • Research Scientist positions: PIML, hybrid physics-ML approaches, scientific ML

Best way to reach me: LinkedIn or email


🔥 GitHub Activity

GitHub Streak

Stats

Top Langs

Pinned

  1. cgrad (C++) - A lightweight C++ automatic differentiation engine and neural network library, inspired by karpathy's micrograd.

  2. AskAttentionAI (Python) - RAG-based technical document Q&A.

  3. DistillPegasus (Jupyter Notebook) - Knowledge distillation with teacher assistants (TAs) for the seq2seq text summarization model PEGASUS.

  4. Zakker (Kotlin) - JetBrains IDEs plugin for Azkar.

  5. Zakker-vscode (TypeScript) - Azkar for VS Code.

  6. Pegasus_Qt_demo (Jupyter Notebook)