Seungjun Lee ·
Gim Hee Lee
Department of Computer Science, National University of Singapore
Code | Paper | Project Page
Our DiET-GS++ enables high-quality novel-view synthesis, recovering precise color and well-defined details from blurry multi-view images.
Table of Contents
- [2025/02/27] DiET-GS is accepted to CVPR 2025. The code will be released in early June.
- [2025/06/27] The code of DiET-GS is released! Now you can train DiET-GS and render clean images.
- [2025/07/01] The code of DiET-GS++ is released! Check this repository to try DiET-GS++.
- Release the code of DiET-GS
- Release the code of DiET-GS++
The main dependencies of the project are the following:
python: 3.9
cuda: 11.3
You can set up a conda environment as follows:
conda create -n dietgs python=3.9
conda activate dietgs
pip3 install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt
# If the installation fails, pin the following packages:
pip install libigl==2.5.1
pip install huggingface_hub==0.11.1
pip install "Pillow<10"
pip install numpy==1.23.4
We provide two pre-processed datasets:
The dataset above was originally proposed by this work. In our work, we discard the provided ground-truth camera poses for the multi-view images, since such information is rarely available in real-world scenarios, especially when the images exhibit severe motion blur.
To calibrate the camera poses of blurry multi-view images and construct the initial point clouds for 3D Gaussian Splatting (3DGS), we follow a two-step process:
- Deblur the blurry images with EDI processing.
- Feed the EDI-deblurred multi-view images from step 1 to COLMAP to initialize the 3DGS.
We also provide a toy example of EDI in deblur_w_edi.ipynb.
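For reference, the Event-based Double Integral (EDI) model behind step 1 can be sketched as below. This is a minimal illustration, not the repository's implementation (see deblur_w_edi.ipynb for that); the function name and the per-timestamp event-frame representation are assumptions. The blurry frame is modeled as the latent sharp image scaled by the exposure-averaged brightness ratio `exp(c * E(t))`, so the latent image is recovered by dividing that average out:

```python
import numpy as np

def edi_deblur(blurry, event_frames, c=0.2):
    """Recover a latent sharp image with the EDI model:
        B = L(f) * mean_t exp(c * E(t))   =>   L(f) = B / mean_t exp(c * E(t))

    blurry:       (H, W) blurry intensity image, values in (0, 1].
    event_frames: (T, H, W) signed event counts E(t) accumulated between the
                  reference time f and each timestamp t inside the exposure.
    c:            event contrast threshold (sensor-dependent).
    """
    # Per-timestamp brightness ratio exp(c * E(t))
    ratios = np.exp(c * event_frames)           # (T, H, W)
    # Average the ratios over the exposure window (the "double integral")
    mean_ratio = ratios.mean(axis=0)            # (H, W)
    # Divide the blur model out to get the latent sharp image at time f
    latent = blurry / np.clip(mean_ratio, 1e-6, None)
    return np.clip(latent, 0.0, 1.0)
```

Plugging a blurry frame and its exposure-window events into this relation yields the EDI-deblurred images that COLMAP consumes in step 2.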
You can download the calibrated camera poses and initial point clouds for all scenes in the dataset by running the code below.
python download_data.py
Note that the script above may also download additional files required for processing event streams during scene optimization.
Once you run the command above, the downloaded files are placed in the designated paths. Refer to the file structure below:
data
├── ev-deblurnerf_cdavis      <- Real-world dataset
│   ├── blurbatteries         <- Scene name
│   │   ├── events            <- Event files
│   │   ├── images            <- N blurry multi-view images + 5 ground-truth clean images
│   │   ├── images_edi        <- EDI-deblurred images (only used by COLMAP to initialize 3DGS)
│   │   └── sparse            <- Initial point clouds and camera poses
│   ├── blurfligures
│   └── ...
│
└── ev-deblurnerf_blender     <- Synthetic dataset
    ├── blurfactory
    │   ├── events
    │   ├── images
    │   ├── images_edi
    │   └── sparse
    ├── bluroutdoorpool
    └── ...
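As a quick sanity check before training, you can verify that each downloaded scene contains the expected sub-directories. This helper is a small sketch and not part of the repository:

```python
from pathlib import Path

# Sub-directories every scene folder should contain (per the tree above)
REQUIRED_SUBDIRS = ("events", "images", "images_edi", "sparse")

def check_scene(scene_dir):
    """Return the required sub-directories missing from a scene folder,
    e.g. data/ev-deblurnerf_cdavis/blurbatteries."""
    scene = Path(scene_dir)
    return [d for d in REQUIRED_SUBDIRS if not (scene / d).is_dir()]

if __name__ == "__main__":
    # Report incomplete scenes under both datasets
    for dataset in ("ev-deblurnerf_cdavis", "ev-deblurnerf_blender"):
        for scene in sorted(Path("data", dataset).glob("*")):
            missing = check_scene(scene)
            if missing:
                print(f"{scene}: missing {missing}")
```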
Training DiET-GS without RSD loss on the real-world dataset (fast training):
SCENE_NAME=blurbatteries
python train_dietgs_wo_rsd_real.py -s data/ev-deblurnerf_cdavis/${SCENE_NAME} --eval -m output/${SCENE_NAME} --event True --event_cmap color --edi_cmap gray --intensity True --edi_simul True --port 6035
Training DiET-GS without RSD loss on the Blender dataset (fast training):
SCENE_NAME=blurfactory
python train_dietgs_wo_blender.py -s data/ev-deblurnerf_blender/${SCENE_NAME} --eval -m output/${SCENE_NAME} --event True --event_cmap gray --edi_cmap gray --intensity True --edi_simul True --port 6035
Training DiET-GS with RSD loss on the real-world dataset:
SCENE_NAME=blurbatteries
python train_dietgs_real.py -s data/ev-deblurnerf_cdavis/${SCENE_NAME} --eval -m output/${SCENE_NAME} --event True --event_cmap color --edi_cmap gray --intensity True --edi_simul True --port 6035
Training DiET-GS with RSD loss on the Blender dataset:
SCENE_NAME=blurfactory
python train_dietgs_blender.py -s data/ev-deblurnerf_blender/${SCENE_NAME} --eval -m output/${SCENE_NAME} --event True --event_cmap gray --edi_cmap gray --intensity True --edi_simul True --port 6042
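To train several scenes sequentially, the commands above can be assembled with a small driver script. This is a sketch: the scene list is a placeholder, and only the real-world command from above is reproduced here; launching is left commented out so the snippet is safe to run as-is:

```python
import shlex
import subprocess  # only needed if you uncomment the run() call below

# Placeholder scene list -- replace with the scenes you downloaded.
REAL_SCENES = ["blurbatteries"]  # under data/ev-deblurnerf_cdavis
COMMON_FLAGS = ("--eval --event True --event_cmap color --edi_cmap gray "
                "--intensity True --edi_simul True --port 6035")

def build_command(scene):
    """Assemble the real-world DiET-GS training command for one scene."""
    return (f"python train_dietgs_real.py "
            f"-s data/ev-deblurnerf_cdavis/{scene} "
            f"-m output/{scene} {COMMON_FLAGS}")

for scene in REAL_SCENES:
    cmd = build_command(scene)
    print(cmd)
    # subprocess.run(shlex.split(cmd), check=True)  # launch sequentially
```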
Note that we set the total iterations to 150000. However, DiET-GS usually converges to optimal performance between 40000 and 50000 iterations.
Check this repository to optimize DiET-GS++!
After the scene optimization, you can render the clean images. Specify the iteration number of the pretrained 3DGS model you wish to use.
SCENE_NAME=blurbatteries
python render.py -m output/${SCENE_NAME} -s data/ev-deblurnerf_cdavis/${SCENE_NAME} --iteration 50000
We also provide pretrained 3DGS models in the pretrained/ folder. You can use them as follows:
SCENE_NAME=blurbatteries
python render.py -m pretrained/ev-deblurnerf_cdavis/${SCENE_NAME} -s data/ev-deblurnerf_cdavis/${SCENE_NAME} --iteration 50000
Check this repository to render novel views with DiET-GS++!
Our work draws heavy inspiration from the following works. We sincerely appreciate their great contributions!
If you find our code or paper useful, please cite:
@inproceedings{lee2025diet,
title={DiET-GS: Diffusion Prior and Event Stream-Assisted Motion Deblurring 3D Gaussian Splatting},
author={Lee, Seungjun and Lee, Gim Hee},
booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
pages={21739--21749},
year={2025}
}