A comprehensive course project on Universal Domain Adaptation (UDA) that proposes a new framework — AEG-UDA — to enhance the adaptability and robustness of models in non-overlapping and partially overlapping label spaces. This method integrates dynamic thresholding, entropy-guided pseudo-labeling, and a novel dynamic rejection loss to outperform existing approaches like DANCE across multiple adaptation scenarios.
🧠 Course: Advanced Topics in Machine Learning (ATML - CS)
📅 Semester: Fall 2024 🎓 Institution: LUMS
👨💻 Contributors: Muhammad Saad Haroon, Jawad Saeed, Daanish Uddin Khan
- **Dynamic Adaptive Threshold**: Automatically adjusts entropy thresholds during training to better separate confident and ambiguous target samples.
- **Entropy-Guided Pseudo-Labeling (EGPL)**: Assigns soft pseudo-labels with entropy-aware weighting to confident target predictions.
- **Dynamic Rejection Loss (DRL)**: Penalizes uncertain (high-entropy) predictions dynamically to reduce noise and confusion.
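A minimal PyTorch-style sketch of how these three pieces can fit together on a batch of target logits is shown below. It is only an illustration: the threshold schedule, the entropy weighting, and the exact form of the rejection term (`dynamic_threshold`, `lambda_rej`, the linear decay) are assumptions here, not the project's actual implementation.

```python
import torch
import torch.nn.functional as F

def entropy(probs, eps=1e-8):
    # Shannon entropy per sample, in nats
    return -(probs * torch.log(probs + eps)).sum(dim=1)

def dynamic_threshold(step, total_steps, t_start=0.9, t_end=0.5, num_classes=31):
    # Illustrative schedule: shrink the entropy threshold (as a fraction of log C)
    # linearly over training; the schedule actually used by AEG-UDA may differ.
    frac = t_start + (t_end - t_start) * (step / max(1, total_steps))
    return frac * torch.log(torch.tensor(float(num_classes)))

def aeg_uda_target_loss(logits_t, step, total_steps, lambda_rej=0.1, num_classes=31):
    log_c = torch.log(torch.tensor(float(num_classes)))   # maximum possible entropy
    probs = F.softmax(logits_t, dim=1)
    ent = entropy(probs)
    thr = dynamic_threshold(step, total_steps, num_classes=num_classes)
    confident = ent < thr                                  # dynamic confident/ambiguous split

    # Entropy-Guided Pseudo-Labeling: cross-entropy against argmax pseudo-labels,
    # down-weighted for samples that are still relatively uncertain.
    if confident.any():
        pseudo = probs[confident].argmax(dim=1)
        weights = (1.0 - ent[confident] / log_c).detach()
        ce = F.cross_entropy(logits_t[confident], pseudo, reduction="none")
        egpl = (weights * ce).mean()
    else:
        egpl = logits_t.new_zeros(())

    # Dynamic Rejection Loss: penalize predictions whose entropy exceeds the current
    # threshold (one plausible reading of "penalizes high-entropy predictions").
    drl = F.relu(ent - thr).mean()

    return egpl + lambda_rej * drl

# Dummy usage on a random batch of 31-class logits
logits = torch.randn(16, 31)
print(aeg_uda_target_loss(logits, step=100, total_steps=1000).item())
```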
To begin, download the Office-31 dataset, which is required for all experiments. The dataset can be obtained from the following link:
📥 Office-31 Dataset
Prepare the dataset in the following directory structure:
data/
├── amazon/Images/
├── dslr/Images/
└── webcam/Images/
Once downloaded:
- Place the zipped dataset in your Google Drive
- Ensure it is accessible during training by mounting your Google Drive in Google Colab
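If the zipped dataset sits in your Drive, a minimal Colab extraction sketch is shown below; the archive name and location (`office31.zip` under `MyDrive`) are placeholders, so adjust them to match where you actually stored the file.

```python
from google.colab import drive
import os, zipfile

drive.mount('/content/drive')

# Hypothetical archive name/location -- change to match your Drive layout.
archive_path = '/content/drive/MyDrive/office31.zip'
extract_to = '/content/data'

os.makedirs(extract_to, exist_ok=True)
with zipfile.ZipFile(archive_path) as zf:
    zf.extractall(extract_to)

# After extraction you should see amazon/Images, dslr/Images and webcam/Images here.
print(os.listdir(extract_to))
```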
To run the experiments:
- Open the corresponding file in Google Colab: either the emulated DANCE script or the AEG-UDA script.
- Mount your Google Drive to access the dataset:

  from google.colab import drive
  drive.mount('/content/drive')
- Connect to a GPU runtime and run the cells sequentially:
# ODA
!sh script/run_office_obda.sh 0 /content/DANCE/configs/office-train-config_ODA.yaml
# OPDA
!sh script/run_office_opda.sh 0 /content/DANCE/configs/office-train-config_OPDA.yaml
# CDA
!sh script/run_office_cls.sh 0 /content/DANCE/configs/office-train-config_CDA.yaml
# PDA
!sh script/run_office_cls.sh 0 /content/DANCE/configs/office-train-config_PDA.yaml
All experimental results, performance comparisons, and visualizations are detailed in the final report:
Key insights:
- AEG-UDA achieves competitive performance across CDA, ODA, PDA, and OPDA settings
- Improved generalization via entropy-aware pseudo-labeling
- Faster inference than DANCE in several scenarios
- Stronger cluster compactness in t-SNE visualizations
Metrics reported:
- Test Accuracy
- Mean Per-Class Accuracy (MPCA), sketched after this list
- Inference Time
- Loss Curves
- t-SNE Feature Visualization
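Mean Per-Class Accuracy averages the per-class recall, so each class counts equally regardless of how many samples it has. A small NumPy sketch, independent of the project code:

```python
import numpy as np

def mean_per_class_accuracy(y_true, y_pred, num_classes):
    # Average of per-class recall: each class contributes equally,
    # no matter how many samples it has.
    accs = []
    for c in range(num_classes):
        mask = (y_true == c)
        if mask.any():
            accs.append((y_pred[mask] == c).mean())
    return float(np.mean(accs))

# Example with three imbalanced classes
y_true = np.array([0, 0, 0, 0, 1, 2])
y_pred = np.array([0, 0, 0, 1, 1, 2])
print(mean_per_class_accuracy(y_true, y_pred, 3))  # (0.75 + 1.0 + 1.0) / 3 ≈ 0.9167
```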
Future directions:
- Add mixed-precision training for faster convergence (see the sketch after this list)
- Extend DRL with uncertainty-based dynamic weighting
- Adapt for streaming/real-time domain adaptation
- Scale to larger, multi-modal datasets
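As a starting point for the mixed-precision item, here is a hedged sketch of the standard `torch.cuda.amp` training-loop pattern on a GPU runtime; the toy model, data, and 31-class head are placeholders, and none of this is part of the current codebase.

```python
import torch
import torch.nn as nn
from torch.cuda.amp import autocast, GradScaler

# Toy stand-ins so the pattern runs end to end on a Colab GPU;
# swap in the real model, optimizer, and data loader.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 31)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
scaler = GradScaler()

images = torch.randn(8, 3, 224, 224, device="cuda")
labels = torch.randint(0, 31, (8,), device="cuda")

for step in range(3):
    optimizer.zero_grad()
    with autocast():                       # forward pass in float16 where safe
        loss = criterion(model(images), labels)
    scaler.scale(loss).backward()          # scale loss to avoid fp16 gradient underflow
    scaler.step(optimizer)                 # unscales gradients, then steps the optimizer
    scaler.update()
    print(f"step {step}: loss {loss.item():.4f}")
```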
If you use or reference this work, please cite:
@misc{AEG-UDA2025,
  title  = {Adaptive Entropy-Guided Universal Domain Adaptation (AEG-UDA)},
  author = {Haroon, Muhammad Saad and Saeed, Jawad and Khan, Daanish Uddin},
  year   = {2025},
  note   = {Developed as part of the CS-6304: Advanced Topics in Machine Learning course at LUMS, 2025.}
}
“In a shifting world, the best models don’t memorize — they adapt.”