SSL-Backdoor is an academic research library dedicated to exploring poisoning attacks in self-supervised learning (SSL). Our goal is to provide a comprehensive and unified platform for researchers to implement, evaluate, and compare various attack and defense mechanisms in the context of SSL.
This library originated as a rewrite of the SSLBKD implementation, ensuring consistent training protocols and hyperparameter fidelity for fair comparisons. We've since expanded its capabilities significantly.
Key Features:
- Unified Poisoning & Training Framework: Streamlined pipeline for applying diverse poisoning strategies and training SSL models.
- Decoupled Design: We strive to keep the design decoupled, allowing each method to be modified independently, while unifying the implementation of essential tools where necessary (see the conceptual sketch below).
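To make the decoupling concrete, here is a minimal conceptual sketch. The class and function names are hypothetical and are **not** SSL-Backdoor's actual API; the point is only that the poisoning strategy wraps the data while the SSL trainer stays unchanged.

```python
# Conceptual sketch only -- these names are NOT SSL-Backdoor's actual API.
# It illustrates how a poisoning strategy and an SSL training pipeline can
# stay decoupled: the attack only wraps the data, the trainer never changes.
import random
from typing import Callable

from torch.utils.data import Dataset


class PoisonedDataset(Dataset):
    """Wraps a clean dataset and applies a trigger to a chosen subset of images."""

    def __init__(self, clean_dataset: Dataset, apply_trigger: Callable, poison_rate: float = 0.01):
        self.clean = clean_dataset
        self.apply_trigger = apply_trigger  # e.g. a patch-, frequency-, or optimization-based trigger
        n = len(clean_dataset)
        self.poison_idx = set(random.sample(range(n), int(poison_rate * n)))

    def __len__(self):
        return len(self.clean)

    def __getitem__(self, i):
        img, label = self.clean[i]
        if i in self.poison_idx:
            img = self.apply_trigger(img)  # the attack only touches the data
        return img, label


# Any SSL trainer (MoCo, SimCLR, SimSiam, BYOL, ...) can then consume the
# wrapped dataset without knowing which attack produced it.
```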
Future plans include support for multimodal contrastive learning models.
✅ 2025-08-11 Update:
- DRUPE attack is now implemented and available! See paper: Distribution Preserving Backdoor Attack in Self-supervised Learning
✅ 2025-05-19 Update:
- DEDE defense is now implemented and available!
✅ 2025-04-18 Update:
- PatchSearch defense is now implemented and available!
- BadEncoder attack is now implemented and available!
🔄 Active Refactoring Underway! We are currently refactoring the codebase to improve code quality, maintainability, and ease of use. Expect ongoing improvements!
✅ Current Support:
- Attack Algorithms: SSLBKD, CTRL, CorruptEncoder, BLTO (inference only), BadEncoder, DRUPE
- SSL Methods: MoCo, SimCLR, SimSiam, BYOL
🛡️ Current Defenses:
- PatchSearch, DEDE
Stay tuned for more updates!
This library currently supports the following poisoning attack algorithms against SSL models:
| Alias | Paper | Conference | Config |
|---|---|---|---|
| SSLBKD | Backdoor Attacks on Self-Supervised Learning | CVPR 2022 | config |
| CTRL | An Embarrassingly Simple Backdoor Attack on Self-supervised Learning | ICCV 2023 | |
| CorruptEncoder | Data Poisoning based Backdoor Attacks to Contrastive Learning | CVPR 2024 | |
| BLTO (inference only) | Backdoor Contrastive Learning via Bi-level Trigger Optimization | ICLR 2024 | |
| BadEncoder | BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning | S&P 2022 | config |
| DRUPE | Distribution Preserving Backdoor Attack in Self-supervised Learning | S&P 2024 | config, train, test |
We are actively developing and integrating defense mechanisms. Currently, the following defenses are implemented:
| Alias | Paper | Conference | Config |
|---|---|---|---|
| PatchSearch | Defending Against Patch-Based Backdoor Attacks on Self-Supervised Learning | CVPR 2023 | doc, config |
| DEDE | DeDe: Detecting Backdoor Samples for SSL Encoders via Decoders | CVPR 2025 | config |
Get started with SSL-Backdoor quickly:
- Clone the repository:

  ```bash
  git clone https://github.com/jsrdcht/SSL-Backdoor.git
  cd SSL-Backdoor
  ```

- Set up the environment (Pixi, CUDA only):

  ```bash
  # resolve/create the CUDA environment defined in pixi.toml
  pixi install -e cuda
  # quick check of core deps and CUDA availability
  pixi run -e cuda check
  # open an interactive shell in the CUDA env (optional)
  pixi shell -e cuda
  ```
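If you want to verify GPU visibility from inside the environment yourself, here is a minimal sketch, assuming PyTorch is installed in the `cuda` environment:

```python
# Quick sanity check of the PyTorch / CUDA setup inside the Pixi env.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```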
To train an SSL model (e.g., MoCo v2) with a chosen poisoning attack, use the provided scripts. Example for Distributed Data Parallel (DDP) training:

```bash
# Configure your desired attack, SSL method, dataset, etc. in the relevant config file
# (e.g., configs/ssl/moco_config.yaml, configs/poisoning/...)
bash tools/train.sh <path_to_your_config.yaml>
```

Please refer to the configs directory and the specific training scripts for detailed usage and parameter options.
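Before launching a run, it can help to sanity-check the config you are passing in. A minimal sketch, assuming PyYAML is available; the path is the example above and the printed keys are whatever your config defines, not a fixed schema:

```python
# Load and inspect a training config before passing it to tools/train.sh.
import yaml

with open("configs/ssl/moco_config.yaml") as f:
    cfg = yaml.safe_load(f)

# Print top-level options (SSL method, dataset, poisoning settings, ...).
for key, value in cfg.items():
    print(f"{key}: {value}")
```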
@misc{jsrdcht_ssl_backdoor_2025,
  title        = {SSL-Backdoor: A PyTorch library for SSL backdoor research},
  author       = {jsrdcht},
  year         = {2025},
  howpublished = {\url{https://github.com/jsrdcht/SSL-Backdoor/}},
  note         = {MIT License, accessed 2025-08-11}
}