
DiET-GS 🫨
Diffusion Prior and Event Stream-Assisted
Motion Deblurring 3D Gaussian Splatting

Seungjun Lee · Gim Hee Lee
Department of Computer Science, National University of Singapore

CVPR 2025


Our DiET-GS++ enables high-quality novel-view synthesis, recovering precise color and well-defined details from blurry multi-view images.

Table of Contents
  1. TODO
  2. Installation
  3. Data Preparation
  4. Per-scene Optimization of DiET-GS (Stage 1)
  5. Per-scene Optimization of DiET-GS++ (Stage 2)
  6. Render DiET-GS (Stage 1)
  7. Render DiET-GS++ (Stage 2)
  8. Acknowledgement
  9. Citation

News:

  • [2025/02/27] DiET-GS is accepted to CVPR 2025 🔥. The code will be released in early June.
  • [2025/06/27] The code of DiET-GS 🫨 is released 👊🏻! Now you can train DiET-GS and render clean images.
  • [2025/07/01] The code of DiET-GS++ 🫨 is released 👊🏻! Check this repository to try DiET-GS++.

TODO

  • Release the code of DiET-GS
  • Release the code of DiET-GS++

Installation

Dependencies 📝

The main dependencies of the project are the following:

python: 3.9
cuda: 11.3

You can set up a conda environment as follows:

conda create -n dietgs python=3.9
conda activate dietgs

pip3 install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113

pip install -r requirements.txt

# If you encounter errors, run:
pip install libigl==2.5.1
pip install huggingface_hub==0.11.1
pip install "Pillow<10"
pip install numpy==1.23.4
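
To confirm that the CUDA 11.3 build of PyTorch is the one actually active in the environment, a quick hedged check (not part of this repository) is:

# Hypothetical check, not part of the repo: confirm the CUDA 11.3 PyTorch build is active.
import torch

print(torch.__version__)          # expected: 1.12.1+cu113
print(torch.cuda.is_available())  # expected: True on a CUDA-capable machine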

Data Preparation

We provide two pre-processed datasets:

The datasets above were originally proposed by this work. In our work, we discard the provided ground-truth camera poses for the multi-view images, as we cannot assume such information is readily available in real-world scenarios, especially when the images exhibit severe motion blur.

To calibrate the camera poses of blurry multi-view images and construct the initial point clouds for 3D Gaussian Splatting (3DGS), we follow a two-step process:

  1. Deblur the blurry images with EDI processing.
  2. Feed the EDI-deblurred multi-view images from step 1 to COLMAP to estimate camera poses and construct the initial point cloud for 3DGS.

We also provide a toy example of EDI in deblur_w_edi.ipynb.
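
For intuition, the snippet below sketches the core EDI relation in NumPy: the blurry frame equals the latent sharp frame modulated by the time-averaged exponential of accumulated event polarities, so dividing by that average recovers an approximate sharp image. This is a simplified, self-contained illustration; the event format (t, x, y, polarity) and the contrast threshold c are assumptions, and the actual preprocessing lives in deblur_w_edi.ipynb.

# edi_sketch.py — minimal illustration of the EDI (Event-based Double Integral) model:
# B = L(t_ref) * mean_t exp(c * E(t)), hence L(t_ref) = B / mean_t exp(c * E(t)).
import numpy as np

def edi_deblur(blurry, events, t_start, t_end, t_ref, c=0.2, n_bins=50):
    """Approximate the sharp image at time t_ref from a blurry frame and its events."""
    H, W = blurry.shape
    bins = np.linspace(t_start, t_end, n_bins + 1)

    # Sum event polarities per pixel into coarse time bins over the exposure window.
    polarity_sum = np.zeros((n_bins, H, W))
    for t, x, y, p in events:
        b = np.clip(np.searchsorted(bins, t) - 1, 0, n_bins - 1)
        polarity_sum[b, int(y), int(x)] += p

    # c * E(t): accumulated log-intensity change from the reference time to each bin.
    log_change = np.cumsum(polarity_sum, axis=0) * c
    ref_bin = min(int((t_ref - t_start) / (t_end - t_start) * n_bins), n_bins - 1)
    log_change -= log_change[ref_bin]

    # The blurry frame is the sharp frame scaled by the mean of exp(c * E(t)); invert it.
    blur_weight = np.exp(log_change).mean(axis=0)
    return np.clip(blurry / np.clip(blur_weight, 1e-6, None), 0.0, 1.0)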

You can download the calibrated camera poses and initial point clouds for all scenes in the dataset by running the code below.

python download_data.py

Note that the script above may also download additional files required for processing event streams during scene optimization.

Once you run the above command, the downloaded files will be placed in their designated paths. Refer to the file structure below:

data
├── ev-deblurnerf_cdavis   <- Real-world dataset
│   ├── blurbatteries      <- Scene name
│   │   ├── events         <- Event files
│   │   ├── images         <- N blurry multi-view images + 5 ground-truth clean images
│   │   ├── images_edi     <- EDI-deblurred images (Just for COLMAP to initialize 3DGS)
│   │   └── sparse         <- Initial point clouds and camera poses
│   ├── blurfigures
│   └── ...
│
├── ev-deblurnerf_blender   <- Synthetic dataset
│   ├── blurfactory
│   │   ├── events
│   │   ├── images
│   │   ├── images_edi
│   │   └── sparse
│   ├── bluroutdoorpool
│   └── ...
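
If you want to sanity-check a downloaded scene against this layout before training, a small hypothetical helper (not part of the repository) could look like:

# check_layout.py (hypothetical helper, not part of the repo):
# verify that a downloaded scene contains the expected sub-directories.
from pathlib import Path

EXPECTED = ["events", "images", "images_edi", "sparse"]

def check_scene(root="data/ev-deblurnerf_cdavis", scene="blurbatteries"):
    scene_dir = Path(root) / scene
    missing = [d for d in EXPECTED if not (scene_dir / d).is_dir()]
    print(f"{scene_dir}: " + ("missing " + ", ".join(missing) if missing else "layout looks good"))

check_scene()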

Per-scene Optimization of DiET-GS (Stage 1)

Training DiET-GS without RSD loss on the real-world dataset (fast training):

SCENE_NAME=blurbatteries

python train_dietgs_wo_rsd_real.py -s data/ev-deblurnerf_cdavis/${SCENE_NAME} --eval -m output/${SCENE_NAME} --event True --event_cmap color --edi_cmap gray --intensity True --edi_simul True --port 6035

Training DiET-GS without RSD loss on the Blender dataset (fast training):

SCENE_NAME=blurfactory

python train_dietgs_wo_blender.py -s data/ev-deblurnerf_blender/${SCENE_NAME} --eval -m output/${SCENE_NAME} --event True --event_cmap gray --edi_cmap gray --intensity True --edi_simul True --port 6035

Training DiET-GS with RSD loss on the real-world dataset:

SCENE_NAME=blurbatteries

python train_dietgs_real.py -s data/ev-deblurnerf_cdavis/${SCENE_NAME} --eval -m output/${SCENE_NAME} --event True --event_cmap color --edi_cmap gray --intensity True --edi_simul True --port 6035

Training DiET-GS with RSD loss on the Blender dataset:

SCENE_NAME=blurfactory

python train_dietgs_blender.py -s data/ev-deblurnerf_blender/${SCENE_NAME} --eval -m output/${SCENE_NAME} --event True --event_cmap gray --edi_cmap gray --intensity True --edi_simul True --port 6042

📌 Note that we set the total number of iterations to 150,000. However, DiET-GS usually converges to its optimal performance between 40,000 and 50,000 iterations.

Per-scene Optimization of DiET-GS++ (Stage 2)

Check this repository to optimize DiET-GS++!

Render DiET-GS (Stage 1)

After scene optimization, you can render the clean images. Specify the iteration number of the trained 3DGS model you wish to use.

SCENE_NAME=blurbatteries

python render.py -m output/${SCENE_NAME} -s data/ev-deblurnerf_cdavis/${SCENE_NAME} --iteration 50000

We also provide pretrained 3DGS models in the pretrained/ folder. You can use them as follows:

SCENE_NAME=blurbatteries

python render.py -m pretrained/ev-deblurnerf_cdavis/${SCENE_NAME} -s data/ev-deblurnerf_cdavis/${SCENE_NAME} --iteration 50000

Render DiET-GS++ (Stage 2)

Check this repository to render novel views with DiET-GS++!

Acknowledgement

Our work is heavily inspired by the following works. We sincerely appreciate their great contributions!

Citation

If you find our code or paper useful, please cite

@inproceedings{lee2025diet,
  title={DiET-GS: Diffusion Prior and Event Stream-Assisted Motion Deblurring 3D Gaussian Splatting},
  author={Lee, Seungjun and Lee, Gim Hee},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={21739--21749},
  year={2025}
}
