This is the official implementation of the ICIP 2025 paper "Shuffle PatchMix Augmentation with Confidence-Margin Weighted Pseudo-Labels for Enhanced Source-Free Domain Adaptation", by Prasanna Reddy Pulakurthi, Majid Rabbani, Jamison Heard, Sohail A. Dianat, Celso M. de Melo, and Raghuveer Rao.
Demo - [Hugging Face Spaces]: an interactive demo for generating SPM augmentations.
- Clone this repository.
  ```bash
  git clone https://github.com/PrasannaPulakurthi/SPM.git
  cd SPM
  ```
- Create and activate a conda environment with Python 3.9.
  ```bash
  conda create -n spm-env python=3.9
  conda activate spm-env
  ```
- Install PyTorch. The code is tested with PyTorch 1.7.1 and CUDA 11.0.
  ```bash
  pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 -f https://download.pytorch.org/whl/torch_stable.html
  ```
- Install the remaining packages.
  ```bash
  pip install -r requirements.txt
  ```
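As an optional sanity check (a minimal sketch; CUDA availability depends on your local driver), you can confirm that PyTorch and torchvision are visible from the new environment:
```bash
# Print the installed PyTorch/torchvision versions and whether CUDA is usable
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import torchvision; print(torchvision.__version__)"
```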
Download PACS from Kaggle or the Official Website, and put it under `${DATA_ROOT}`. The `.txt` files for the image labels are provided under `./datasets/PACS/`. The prepared directory would look like this:
```
${DATA_ROOT}
├── PACS
│   ├── photo
│   ├── art_painting
│   ├── cartoon
│   ├── sketch
│   ├── photo_list.txt
│   ├── art_painting_list.txt
│   ├── cartoon_list.txt
│   ├── sketch_list.txt
│   ├── acs_list.txt
│   ├── pcs_list.txt
```
`${DATA_ROOT}` is set to `./datasets/` by default, which can be modified in `configs/data/basic.yaml` or via the Hydra command-line interface with `data.data_root=${DATA_ROOT}`.
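A quick way to confirm the layout (a hypothetical check; adjust `DATA_ROOT` to your setup) is to list the dataset folder and the label lists:
```bash
# Confirm the PACS domain folders and *_list.txt label files are in place
DATA_ROOT=./datasets
ls "${DATA_ROOT}/PACS"
wc -l "${DATA_ROOT}/PACS/"*_list.txt   # each list file should be non-empty (one image entry per line)
```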
Hydra is used as the configuration system. By default, the working directory is `./output`, which can be changed directly in `configs/root.yaml` or via the Hydra command-line interface with `workdir=${WORK_DIR}`.
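For illustration, both settings can be overridden at launch time. The sketch below assumes a `main.py` Hydra entry point (a hypothetical name; in practice the provided `scripts/*.sh` wrap the actual entry point):
```bash
# Hypothetical direct invocation with Hydra overrides; normally the scripts under scripts/ handle this
python main.py data.data_root=./my_datasets workdir=./my_output
```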
PACS experiments are done for 6 domain shifts constructed from combinations of `Photo`, `Art_Painting`, `Cartoon`, and `Sketch`. Before adaptation, a source model is required. You may train the source model with the script `scripts/train_PACS_source.sh`, as shown below. The pre-trained source models for seed 2022 can be found on Hugging Face. After obtaining the source models, put them under `${SRC_MODEL_DIR}=./output/PACS/source` and run `scripts/train_PACS_target.sh` to execute the adaptation.
```bash
# train source model
bash scripts/train_PACS_source.sh <SOURCE_DOMAIN>
# example: bash scripts/train_PACS_source.sh photo

# train SPM SFDA
bash scripts/train_PACS_target.sh <SOURCE_DOMAIN> <TARGET_DOMAIN> <SRC_MODEL_DIR>
# example: bash scripts/train_PACS_target.sh photo art_painting "output/PACS/source"
```
This will reproduce Tables 1 and 2 from the main paper. For Windows users, the equivalent commands can be found in `scripts_win/`.
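To run several adaptations in sequence, a simple loop works. The sketch below adapts from `photo` to each remaining domain (an illustrative subset; the full set of six shifts evaluated in the paper spans multiple source domains):
```bash
# Illustrative loop: adapt a photo-trained source model to the other three PACS domains
SRC=photo
for TGT in art_painting cartoon sketch; do
    bash scripts/train_PACS_target.sh "$SRC" "$TGT" "output/PACS/source"
done
```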
Please download the VisDA-C dataset and put it under `${DATA_ROOT}`. The `.txt` files for the image labels are provided under `./datasets/VISDA-C/`. The prepared directory would look like this:
```
${DATA_ROOT}
├── VISDA-C
│   ├── train
│   ├── validation
│   ├── train_list.txt
│   ├── validation_list.txt
```
VisDA-C experiments are done for the `train` to `validation` adaptation. Before adaptation, a source model is required. You may train the source model with the script `scripts/train_VISDA-C_source.sh`, as shown below. The pre-trained source models for seed 2022 can be found on Hugging Face. After obtaining the source models, put them under `${SRC_MODEL_DIR}=./output/VISDA-C/source` and run `scripts/train_VISDA-C_target.sh` to execute the adaptation.
```bash
# train source model
bash scripts/train_VISDA-C_source.sh

# train SPM SFDA
bash scripts/train_VISDA-C_target.sh <SRC_MODEL_DIR>
# example: bash scripts/train_VISDA-C_target.sh "output/VISDA-C/source"
```
This will reproduce Table 3 from the main paper. For Windows users, the equivalent commands can be found in `scripts_win/`.
Please download the DomainNet dataset (cleaned version) and put it under `${DATA_ROOT}`. Note that we follow MME in using a subset that contains 126 classes from 4 domains. The `.txt` files for the image labels are provided under `./datasets/domainnet-126/`. The prepared directory would look like this:
```
${DATA_ROOT}
├── domainnet-126
│   ├── real
│   ├── sketch
│   ├── clipart
│   ├── painting
│   ├── real_list.txt
│   ├── sketch_list.txt
│   ├── clipart_list.txt
│   ├── painting_list.txt
```
DomainNet-126 experiments are done for 7 domain shifts constructed from combinations of `Real`, `Sketch`, `Clipart`, and `Painting`. Before adaptation, a source model is required. You may train the source model with the script `scripts/train_domainnet-126_source.sh`, as shown below. The pre-trained source models for seed 2022 can be found on Hugging Face. After obtaining the source models, put them under `${SRC_MODEL_DIR}=./output/domainnet-126/source` and run `scripts/train_domainnet-126_target.sh` to execute the adaptation.
```bash
# train source model
bash scripts/train_domainnet-126_source.sh <SOURCE_DOMAIN>
# example: bash scripts/train_domainnet-126_source.sh real

# train SPM SFDA
bash scripts/train_domainnet-126_target.sh <SOURCE_DOMAIN> <TARGET_DOMAIN> <SRC_MODEL_DIR>
# example: bash scripts/train_domainnet-126_target.sh real sketch "output/domainnet-126/source"
```
This will reproduce Table 4 from the main paper. For Windows users, the equivalent commands can be found in `scripts_win/`.
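To run all adaptations for this benchmark in one go, a loop over source/target pairs can help. The sketch below assumes the seven shifts follow the MME/AdaContrast protocol (an assumption; verify the exact pairs against the paper before relying on it):
```bash
# Assumed seven DomainNet-126 shifts (MME/AdaContrast protocol); verify against the paper
for PAIR in real:clipart real:painting painting:clipart clipart:sketch sketch:painting real:sketch painting:real; do
    SRC=${PAIR%%:*}   # text before the colon
    TGT=${PAIR##*:}   # text after the colon
    bash scripts/train_domainnet-126_target.sh "$SRC" "$TGT" "output/domainnet-126/source"
done
```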
If you find this work helpful, please consider citing us:
```bibtex
@INPROCEEDINGS{11084606,
  author={Pulakurthi, Prasanna Reddy and Rabbani, Majid and Heard, Jamison and Dianat, Sohail and de Melo, Celso M. and Rao, Raghuveer},
  booktitle={2025 IEEE International Conference on Image Processing (ICIP)},
  title={Shuffle PatchMix Augmentation with Confidence-Margin Weighted Pseudo-Labels for Enhanced Source-Free Domain Adaptation},
  year={2025},
  pages={1702-1707},
  doi={10.1109/ICIP55913.2025.11084606}}
```
This repository builds on the codebases of AdaContrast and DRA.