Official PyTorch implementation of
DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning
IEEE Access, 2023
📚 Built upon solo-learn: A library of self-supervised learning methods powered by PyTorch Lightning.
DimCL introduces a novel regularization strategy that applies contrastive learning across embedding dimensions, rather than across instances. This dimensional contrastive loss improves feature decorrelation and representation diversity, enhancing performance across self-supervised learning (SSL) methods.
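To make the idea concrete, here is a minimal sketch of a dimensional contrastive loss. It treats each of the D embedding dimensions (columns of the batch embedding matrix) as a contrastive element: dimension d of one view is the positive for dimension d of the other view, and the remaining dimensions act as negatives. This is an illustrative simplification (plain InfoNCE over dimensions); the paper's full loss adds a hardness-aware weighting, and the function name and `tau` argument are placeholders, not this repo's API.

```python
import torch
import torch.nn.functional as F

def dimensional_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                                 tau: float = 0.1) -> torch.Tensor:
    """Contrast embedding *dimensions* instead of instances.

    z1, z2: (N, D) embeddings of two augmented views of the same batch.
    """
    # Transpose so rows index dimensions, (D, N), and L2-normalize each row.
    d1 = F.normalize(z1.t(), dim=1)
    d2 = F.normalize(z2.t(), dim=1)
    # (D, D) similarities between dimensions of the two views.
    logits = d1 @ d2.t() / tau
    # Positives lie on the diagonal -> standard InfoNCE / cross-entropy.
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)
```

In the training scripts, this term is added to the host SSL objective with a weight (the `--lam` flag described below).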
This repository provides:
- A modular and extensible PyTorch implementation.
- Integration with multiple popular SSL methods and backbone architectures.
- Evaluation tools and logging using PyTorch Lightning.
Supported methods:

| Method | Paper Link |
|---|---|
| DimCL (Ours) | IEEE Access, 2023 |
| Barlow Twins | arXiv:2103.03230 |
| BYOL | arXiv:2006.07733 |
| DeepCluster V2 | arXiv:2006.09882 |
| DINO | arXiv:2104.14294 |
| MoCo V2+ | arXiv:2003.04297 |
| NNBYOL / NNCLR / NNSiam | arXiv:2104.14548 |
| ReSSL | arXiv:2107.09282 |
| SimCLR | arXiv:2002.05709 |
| SimSiam | arXiv:2011.10566 |
| SupCon | arXiv:2004.11362 |
| SwAV | arXiv:2006.09882 |
| VICReg | arXiv:2105.04906 |
| VIbCReg | arXiv:2109.00783 |
| W-MSE | arXiv:2007.06346 |
Additional features:
- Fast data loading with NVIDIA DALI (up to 2× faster).
- Configurable and flexible data augmentations.
- Online/offline linear and K-NN evaluation.
- Feature visualization with UMAP (online & offline).
- Built-in support for PyTorch Lightning features:
- Mixed precision
- Gradient accumulation & clipping
- Automatic logging
- Lightweight modular code for easy prototyping.
- Multi-crop support (e.g., SwAV-style, currently SimCLR only).
- LARS optimizer improvements (e.g., excluding BatchNorm/bias parameters; see the sketch after this list).
- Optional LR scheduling tweaks for SimSiam.
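On the LARS point above, a common recipe is to place BatchNorm and bias parameters in a separate group that skips weight decay (and, in many LARS implementations, the adaptive scaling). The sketch below is a generic version of that trick, not the repo's exact implementation; its option names may differ.

```python
import torch.nn as nn

def lars_param_groups(model: nn.Module, weight_decay: float = 1e-6):
    """Split parameters so BatchNorm/bias are excluded from weight decay."""
    decay, no_decay = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        # 1-D tensors cover biases and BatchNorm affine weights.
        if param.ndim == 1 or name.endswith(".bias"):
            no_decay.append(param)
        else:
            decay.append(param)
    return [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_decay, "weight_decay": 0.0},
    ]
```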
Install the required dependencies:
pip install torch torchvision pytorch-lightning lightning-bolts wandb einops tqdm torchmetrics timm scipy
Optional:
pip install nvidia-dali matplotlib seaborn pandas umap-learn
Then clone the repo and install the package:
git clone https://github.com/your-username/dimcl.git
cd dimcl
# Full installation with DALI and UMAP
pip install .[dali,umap]
# Or basic installation
pip install .
💡 Trouble with DALI? Try:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist --upgrade nvidia-dali-cuda110
Replace cuda110 with your specific CUDA version.
Pretrain the backbone using one of the scripts in:
bash_files/pretrain/
Then run offline linear evaluation:
bash_files/linear/
To enable DimCL, set the following flags in your bash config:
--our_loss True \
--lam 0.1 \
--tau_decor 0.1 \

- `--our_loss True` enables DimCL.
- `--lam` sets the weight of the DimCL loss term.
- `--tau_decor` sets the hardness-aware contrast ratio.

Keep the flag lines free of trailing comments: a `#` after the backslash continuation breaks the command.
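Conceptually, the flags combine as in this hedged sketch (`ssl_loss`, `z1`, `z2`, and `dimensional_contrastive_loss` refer to the illustrative sketch earlier in this README, not to the repo's internals; variable names mirror the CLI flags):

```python
# DimCL acts as an add-on regularizer for the host SSL objective,
# e.g. SimCLR's NT-Xent or BYOL's prediction loss.
lam, tau_decor = 0.1, 0.1
total_loss = ssl_loss + lam * dimensional_contrastive_loss(z1, z2, tau=tau_decor)
```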
📝 Most bash files follow the recommended hyperparameters from the original papers; check and tune them as needed.
If you use this repository, please cite the DimCL paper:
@article{nguyen2023dimcl,
title={DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning},
author={Nguyen, Thanh and Pham, Trung Xuan and Zhang, Chaoning and Luu, Tung M and Vu, Thang and Yoo, Chang D},
journal={IEEE Access},
volume={11},
pages={21534--21545},
year={2023},
publisher={IEEE}
}
And the solo-learn preprint:
@misc{turrisi2021sololearn,
title={Solo-learn: A Library of Self-supervised Methods for Visual Representation Learning},
author={Victor G. Turrisi da Costa and Enrico Fini and Moin Nabi and Nicu Sebe and Elisa Ricci},
year={2021},
eprint={2108.01775},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://github.com/vturrisi/solo-learn},
}