Yu Cao*, Zengqun Zhao, Ioannis Patras, Shaogang Gong†
Queen Mary University of London
ASCED provides a method for detecting and correcting artifacts in diffusion-generated images through temporal score analysis. The repository includes two main demonstration notebooks:
- `notebooks/detection_demo.ipynb`: demonstrates artifact detection in generated images
- `notebooks/correction_demo.ipynb`: demonstrates artifact correction in diffusion models
Before running the notebooks, you need to download:
- Model weights: download the pre-trained diffusion model weights from yandex-research/ddpm-segmentation and place them in the `checkpoints/ddpm/` directory
- Pickle files and seed data: download the following from HERE:
  - `normalized_score_dict.pkl` and place it in `experiments/`
  - Seed files (`noise_*.npy`) and place them in `datasets/noise/`
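Assuming the paths listed above, the expected directory layout can be prepared ahead of time. This is only a convenience sketch; the weights, pickle file, and seed files themselves must still be downloaded from the links above:

```shell
# Create the directories the notebooks expect (paths as listed above).
mkdir -p checkpoints/ddpm experiments datasets/noise

# After downloading, the tree should contain:
#   checkpoints/ddpm/   <- pre-trained diffusion model weights
#   experiments/normalized_score_dict.pkl
#   datasets/noise/noise_*.npy
ls -d checkpoints/ddpm experiments datasets/noise
```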
- Clone the repository:

  ```shell
  git clone https://github.com/YuCao16/ASCED.git
  cd ASCED
  ```

- Install the required dependencies:

  ```shell
  pip install -r requirements.txt
  ```
Open and run `notebooks/detection_demo.ipynb` to see demonstrations of:
- Temporal difference analysis
- Artifact mask generation
- Acceleration comparison between artifact and non-artifact regions
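The core idea behind these demos can be sketched as follows. This is a minimal illustration only, not the repository's implementation: the function name `artifact_mask`, the use of a second temporal difference as an "acceleration" proxy, and the z-score threshold are all assumptions made for this example.

```python
import numpy as np

def artifact_mask(scores, threshold=1.0):
    """Flag spatial locations whose score trajectory changes abnormally fast.

    scores: array of shape (T, H, W) holding per-timestep score maps for one
    image (an illustrative stand-in for the quantities ASCED tracks).
    """
    # First temporal difference: how much the score changes between steps.
    diff = np.abs(np.diff(scores, axis=0))      # (T-1, H, W)
    # Second difference approximates the "acceleration" of the trajectory.
    accel = np.abs(np.diff(diff, axis=0))       # (T-2, H, W)
    accel_map = accel.mean(axis=0)              # (H, W)
    # Regions with unusually high acceleration become candidate artifacts.
    z = (accel_map - accel_map.mean()) / (accel_map.std() + 1e-8)
    return z > threshold

# Smooth trajectories everywhere except one erratic "artifact" patch.
T, H, W = 10, 16, 16
rng = np.random.default_rng(0)
scores = np.linspace(0, 1, T)[:, None, None] * np.ones((T, H, W))
scores[:, 4:8, 4:8] += rng.normal(0, 0.5, (T, 4, 4))
mask = artifact_mask(scores)
print(mask.shape)  # (16, 16); True only inside the noisy patch
```

The smooth background has near-zero acceleration, so only the erratic patch crosses the threshold; the real notebooks compare these acceleration statistics between artifact and non-artifact regions.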
Open and run `notebooks/correction_demo.ipynb` to see demonstrations of:
- DDIM sampling with artifact correction
- Visual comparison of corrected outputs
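Schematically, the correction demo hooks into a DDIM sampling loop. The sketch below shows the general shape of such a hook; `eps_model`, `correct`, and `sample_with_correction` are illustrative names, not the repository's API, and the correction callback is left abstract.

```python
import numpy as np

def ddim_step(x, eps, alpha_t, alpha_prev):
    """One deterministic DDIM update from a predicted noise eps."""
    x0_pred = (x - np.sqrt(1 - alpha_t) * eps) / np.sqrt(alpha_t)
    return np.sqrt(alpha_prev) * x0_pred + np.sqrt(1 - alpha_prev) * eps

def sample_with_correction(eps_model, alphas, x, correct=None):
    """DDIM loop; `correct` (if given) patches flagged regions each step.

    eps_model(x, t) -> predicted noise; correct(x, t) -> corrected x.
    Both are stand-ins for the actual model and correction components.
    """
    for t in range(len(alphas) - 1, 0, -1):
        eps = eps_model(x, t)
        x = ddim_step(x, eps, alphas[t], alphas[t - 1])
        if correct is not None:
            x = correct(x, t)   # e.g. resample or blend artifact regions
    return x

# Toy run with a dummy noise model, just to exercise the loop.
rng = np.random.default_rng(0)
alphas = np.linspace(0.999, 0.01, 50)  # cumulative alphas, index = timestep
x = rng.normal(size=(16, 16))
out = sample_with_correction(lambda x, t: np.zeros_like(x), alphas, x)
print(out.shape)  # (16, 16)
```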
If you find this work useful, please cite:
```bibtex
@inproceedings{cao2025temporal,
  title={Temporal Score Analysis for Understanding and Correcting Diffusion Artifacts},
  author={Cao, Yu and Zhao, Zengqun and Patras, Ioannis and Gong, Shaogang},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={7707--7716},
  year={2025}
}
```
Please feel free to open an issue on GitHub if you encounter problems or have suggestions.
This project is licensed under the MIT License - see the LICENSE file for details.
Parts of this project page were adapted from the Nerfies page.