
Removing Structured Noise with Diffusion Models

Tristan Stevens, Hans van Gorp, Can Meral, Junseob Shin, Jason Yu, Jean-Luc Robert, Ruud van Sloun

Tip: Weights are now hosted on Hugging Face 🤗.

Note: Our paper got accepted to TMLR 🎉!

Official repository of the Removing Structured Noise with Diffusion Models paper. The joint posterior sampling functions for diffusion models proposed in the paper can be found in sampling.py and guidance.py. For the interested reader, a more in-depth explanation of the method and its underlying principles can be found here. Information on how to set up the code and run inference can be found in the getting started section.

If you find the code useful for your research, please cite the paper:

@article{
  stevens2025removing,
  title={Removing Structured Noise using Diffusion Models},
  author={Stevens, Tristan SW and van Gorp, Hans and Meral, Faik C and Shin, Junseob and Yu, Jason and Robert, Jean-Luc and van Sloun, Ruud JG},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=BvKYsaOVEn},
}

Figure: Overview of the proposed joint posterior sampling method for removing structured noise using diffusion models.
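As a loose illustration of the joint data-consistency idea, and not the repository's actual implementation (which lives in sampling.py and guidance.py and additionally uses diffusion priors for both components), the following NumPy sketch shows how a shared residual of the measurement model y = x + n can guide a signal estimate and a structured-noise estimate jointly. All names here are hypothetical:

```python
import numpy as np

def joint_guidance_step(y, x0_hat, n0_hat, step_size=0.25):
    """One toy guidance update on a (signal, noise) estimate pair.

    Both estimates share the residual of the measurement model y = x + n,
    so the same data-consistency gradient nudges each of them.
    """
    residual = y - x0_hat - n0_hat  # measurement-model mismatch
    grad = -residual                # d/dx of 0.5 * ||y - x - n||^2
    return x0_hat - step_size * grad, n0_hat - step_size * grad

# Toy example: starting from zero estimates, repeated guidance steps pull
# the pair toward explaining the measurement, i.e. x + n approaches y.
y = np.array([1.0, 2.0])
x, n = np.zeros(2), np.zeros(2)
for _ in range(50):
    x, n = joint_guidance_step(y, x, n)
```

In the paper's setting, this data-consistency term is combined with the learned diffusion priors at every reverse-diffusion step, so that the split of y into signal and structured noise stays consistent with both priors.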


Structured denoising

Run the following command, with keep_track set to true in the config, to run structured denoising and generate the animation.

python inference.py -e paper/celeba_mnist_pigdm -t denoise -m sgm

Figure: Structured denoising on CelebA with MNIST corruption, using the joint diffusion method.

Figure: Structured denoising on the CelebA out-of-distribution dataset, comparing Projection, DPS, PiGDM and Flow.

Getting started

Install environment

Although manual installation is possible, we recommend using the provided Dockerfile to build the environment. First, clone the repository and build the Docker image:

git clone git@github.com:tristan-deep/joint-diffusion.git
cd joint-diffusion
docker build . -t joint-diffusion:latest

This will build the image joint-diffusion:latest with all the necessary dependencies. To run the image, use the following command:

docker run -it --gpus all --user "$(id -u):$(id -g)" -v "$(pwd)":/joint-diffusion --name joint-diffusion joint-diffusion:latest

For manual installation, check the requirements.txt file for dependencies and install CUDA-enabled TensorFlow (2.9) and PyTorch (1.12); the latter is only needed for the GLOW baseline.

Download weights

Pretrained weights are automatically downloaded via the Hugging Face API; please create your access token here. They can also be downloaded manually here. The run_id entry in the model config either points to the Hugging Face repo using the hf:// prefix (default) or to a local folder with manually saved checkpoints. Besides the checkpoint, each of those folders contains a training config .yaml file for the trained model, which is needed at inference time to rebuild the model.
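As a purely illustrative sketch (the actual config schema may differ), a model entry could point at either checkpoint source via run_id:

```yaml
# Hypothetical config fragment; names and layout are illustrative only.
run_id: hf://...                  # default: pull the checkpoint from the Hugging Face Hub
# run_id: ./checkpoints/my_model  # alternative: local folder containing the
#                                 # checkpoint and its training config .yaml
```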

Datasets

Make sure to set the data_root parameter in the inference config (for instance, this config); it defaults to the working directory. All datasets (for instance, CelebA and MNIST) are automatically downloaded into subdirectories of the specified data_root. More information can be found in the datasets.py docstrings.
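As a hypothetical sketch of what this looks like in an inference config (the actual key placement may differ; see the datasets.py docstrings for the real conventions):

```yaml
# Hypothetical inference-config fragment.
data_root: ./data   # datasets such as celeba/ and mnist/ are downloaded
                    # into subdirectories of this folder
```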

Run inference

Use the inference.py script for inference.

usage: inference.py [-h]
                    [-e EXPERIMENT]
                    [-t {denoise,sequence_denoise,sample,evaluate,show_dataset,plot_results,run_metrics}]
                    [-n NUM_IMG]
                    [-m [MODELS ...]]
                    [-s SWEEP]
                    [-ef EVAL_FOLDER]

options:
  -h, --help            show this help message and exit
  -e EXPERIMENT, --experiment EXPERIMENT
                        experiment name, located at ./configs/inference/<experiment>
  -t {denoise,sequence_denoise,sample,evaluate,show_dataset,plot_results,run_metrics}, --task
                        which task to run
  -n NUM_IMG, --num_img NUM_IMG
                        number of images
  -m [MODELS ...], --models [MODELS ...]
                        list of models to run
  -s SWEEP, --sweep SWEEP
                        sweep config, located at ./configs/sweeps/<sweep>
  -ef EVAL_FOLDER, --eval_folder EVAL_FOLDER
                        eval folder, located at ./results/<eval_folder>

Example: Main experiment with CelebA data and MNIST corruption:

python inference.py -e paper/celeba_mnist_pigdm -t denoise -m bm3d nlm gan sgm

Denoising comparison with multiple models:

python inference.py -e paper/celeba_denoising -t denoise -m bm3d nlm gan sgm

Or, to run a sweep:

python inference.py -e paper/celeba_denoising -t denoise -m sgm -s sgm_sweep

Inference configs

All working inference configs are found in the ./configs/inference/paper folder. The path to an inference config (or just its name) should be provided to the --experiment flag when calling the inference.py script.

