Large language model (LLM) unlearning has become a critical mechanism for removing undesired data, knowledge, or behaviors from pre-trained models while retaining their general utility. Yet, with the rise of open-weight LLMs, we ask: can the unlearning process itself be backdoored, appearing successful under normal conditions yet reverting to pre-unlearned behavior when a hidden trigger is activated? Drawing inspiration from classical backdoor attacks that embed triggers into training data to enforce specific behaviors, we investigate backdoor unlearning, where models forget as intended in the clean setting but recover forgotten knowledge when the trigger appears. We show that designing such attacks presents unique challenges, hinging on where triggers are placed and how backdoor training is reinforced. We uncover a strong link between backdoor efficacy and the attention sink phenomenon, i.e., the tendency of shallow input tokens to consistently attract disproportionate attention in LLMs. Our analysis reveals that these attention sinks serve as gateways for backdoor unlearning: placing triggers at sink positions and aligning their attention values markedly enhances backdoor persistence. Extensive experiments validate these findings, showing that attention-sink-guided backdoor unlearning reliably restores forgotten knowledge in the presence of backdoor triggers, while behaving indistinguishably from a normally unlearned model when triggers are absent.
Paper: arXiv:2510.17021
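
The sketch below is a minimal, illustrative probe of the attention-sink idea described above, not the paper's released code. It loads a causal LM with Hugging Face Transformers, runs a forward pass with attention outputs enabled, and ranks input positions by how much attention they receive on average across layers, heads, and queries; the positions that dominate this ranking (typically the first few tokens) are candidate sink positions where a trigger would be placed. The model name and prompt are placeholders.

```python
# Minimal sketch (assumption: any HF causal LM; not the paper's implementation):
# rank input positions by the average attention they receive, as a simple
# heuristic for locating "attention sink" positions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; substitute the (un)learned checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "Example prompt from the forget set."  # placeholder prompt
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions: one tensor per layer, shaped (batch, heads, query_len, key_len).
attn = torch.stack(out.attentions)          # (layers, batch, heads, q, k)
received = attn.mean(dim=(0, 1, 2, 3))      # average attention each key position receives
sink_positions = torch.topk(received, k=3).indices.tolist()

tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
print("candidate sink positions:", sink_positions)
print("sink tokens:", [tokens[i] for i in sink_positions])
```

Averaging over layers, heads, and queries is just one simple way to surface sink positions; the attack described in the paper additionally aligns the trigger's attention values at these positions during backdoor training.
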
Please refer to the MUSE directory for detailed installation instructions, usage examples, and framework documentation.
# Create conda environment
conda env create -f MUSE/environment.yml
conda activate muse
pip install -r MUSE/requirements.txt
# Download data and models
cd MUSE
python load_data.py

For detailed usage, training scenarios, and evaluation procedures, see the MUSE README.
If you find this work useful, please cite:
@article{shang2025forgetting,
title={Forgetting to Forget: Attention Sink as A Gateway for Backdooring LLM Unlearning},
author={Shang, Bingqi and Chen, Yiwei and Zhang, Yihua and Shen, Bingquan and Liu, Sijia},
journal={arXiv preprint arXiv:2510.17021},
year={2025}
}

This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.
