ADRec: Unlocking the Power of Diffusion Models in Sequential Recommendation
The official implementation of the KDD 2025 paper 'Unlocking the Power of Diffusion Models in Sequential Recommendation: A Simple and Effective Approach'.
Jialei Chen, Yuanbo Xu✉ and Yiheng Jiang
The following packages are required to set up the environment (a one-step install option is shown after the list):
auto_mix_prep==0.2.0
einops==0.8.0
matplotlib==3.10.0
numpy==2.2.2
PyYAML==6.0.2
scipy==1.15.1
seaborn==0.13.2
torch==2.4.0
torchtune==0.4.0
tqdm==4.66.5
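If you save the list above to a requirements.txt file (the filename is only a suggestion; the repository may ship its own), all dependencies can be installed in one step:
pip install -r requirements.txt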
Our code has been tested on a Linux server with an NVIDIA GeForce RTX 4090 GPU.
We provide pre-trained embedding weights that can be used directly for the subsequent backbone warm-up and full-parameter fine-tuning. Run the command below for model training and evaluation:
python main.py --dataset baby --model adrec
If you want to reproduce the pre-trained weights yourself, run the following command:
python main.py --dataset baby --model pretrain
The ADRec variant with PCGrad and the baseline models can be run as follows, either individually or via the provided script:
python main.py --dataset baby --model adrec --pcgrad true
python main.py --dataset baby --model diffurec
python main.py --dataset baby --model dreamrec
python main.py --dataset baby --model sasrec
bash baseline.bash
The t-SNE visualization experiment can be conducted via /src/t-SNE.ipynb.
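For reference, below is a minimal sketch of the kind of projection the notebook produces. It assumes the item embeddings are saved as a 2-D tensor in a .pt file; the file path is hypothetical, and it uses scikit-learn's TSNE, which is not in the dependency list above.

```python
import torch
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE  # extra dependency, not in the list above

# Hypothetical path to pre-trained item embeddings (shape: [num_items, dim]).
emb = torch.load("pretrained/baby_item_emb.pt", map_location="cpu").detach().numpy()

# Project the embeddings to 2-D with t-SNE and plot a scatter.
coords = TSNE(n_components=2, perplexity=30, init="pca", random_state=42).fit_transform(emb)
plt.figure(figsize=(6, 6))
plt.scatter(coords[:, 0], coords[:, 1], s=2, alpha=0.5)
plt.title("t-SNE of item embeddings (illustrative)")
plt.tight_layout()
plt.savefig("tsne_items.png", dpi=200)
```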
A comprehensive evaluation of embedding representations in the original embedding space can be performed using /src/embedding_metrics.ipynb.
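As a rough illustration of what such an evaluation can measure, the sketch below computes a few simple embedding-space statistics (vector norms, mean pairwise cosine similarity, and the singular-value spectrum). The file path is hypothetical; the actual metrics are defined in the notebook.

```python
import numpy as np
import torch

# Hypothetical path to pre-trained item embeddings (shape: [num_items, dim]).
emb = torch.load("pretrained/baby_item_emb.pt", map_location="cpu").detach().numpy()

# L2 norms of the embedding vectors.
norms = np.linalg.norm(emb, axis=1)
print(f"norm: mean={norms.mean():.4f}, std={norms.std():.4f}")

# Mean pairwise cosine similarity on a random subsample (full pairwise is quadratic).
rng = np.random.default_rng(0)
idx = rng.choice(len(emb), size=min(2000, len(emb)), replace=False)
unit = emb[idx] / np.linalg.norm(emb[idx], axis=1, keepdims=True)
cos = unit @ unit.T
print(f"mean off-diagonal cosine similarity: {cos[~np.eye(len(cos), dtype=bool)].mean():.4f}")

# Singular-value spectrum of the mean-centered embeddings as a rough
# proxy for how many directions the embedding space actually uses.
sv = np.linalg.svd(emb - emb.mean(axis=0), compute_uv=False)
print("top-5 singular values:", np.round(sv[:5], 3))
```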
Our implementation refers to the following open-source projects: RecBole, DiffuRec, DreamRec, and SASRec+.
If you find this work useful, please consider citing our paper:
@inproceedings{JLchen2025ADRec,
  title={Unlocking the Power of Diffusion Models in Sequential Recommendation: A Simple and Effective Approach},
  author={Jialei Chen and Yuanbo Xu and Yiheng Jiang},
  booktitle={Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)},
  year={2025},
  organization={ACM},
  doi={10.1145/3711896.3737172}
}