MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting

Ruijie Zhu*, Yanzhe Liang*, Hanzhi Chang, Jiacheng Deng, Jiahao Lu,
Wenfei Yang, Tianzhu Zhang, Yongdong Zhang
*Equal Contribution.
University of Science and Technology of China
NeurIPS 2024


The overall architecture of MotionGS. It can be viewed as two data streams: (1) The 2D data stream utilizes the optical flow decoupling module to obtain the motion flow as the 2D motion prior; (2) The 3D data stream involves the deformation and transformation of Gaussians to render the image for the next frame. During training, we alternately optimize 3DGS and camera poses through the camera pose refinement module.

🚀 Quick Start

🔧 Dataset Preparation

To train MotionGS, you should download the following datasets: NeRF-DS, HyperNeRF, and DyNeRF.

We organize the datasets as follows:

├── data
│   ├── NeRF-DS
│   │   ├── as
│   │   ├── basin
│   │   ├── ...
│   ├── HyperNeRF
│   │   ├── interp
│   │   ├── misc
│   │   ├── vrig
│   ├── DyNeRF
│   │   ├── coffee_martini
│   │   ├── cook_spinach
│   │   ├── ...
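Before training, you can sanity-check that the scenes are in place. Below is a minimal sketch (not part of the MotionGS codebase); the scene names are only the examples shown in the tree above — extend the table with whichever scenes you actually downloaded:

```python
from pathlib import Path

# Example scenes from the directory layout above; add your own scenes here.
EXPECTED_DATASETS = {
    "NeRF-DS": ["as", "basin"],
    "HyperNeRF": ["interp", "misc", "vrig"],
    "DyNeRF": ["coffee_martini", "cook_spinach"],
}

def check_data_layout(root="data"):
    """Return a list of missing scene directories; empty means the layout looks complete."""
    root = Path(root)
    missing = []
    for dataset, scenes in EXPECTED_DATASETS.items():
        for scene in scenes:
            if not (root / dataset / scene).is_dir():
                missing.append(f"{dataset}/{scene}")
    return missing

if __name__ == "__main__":
    missing = check_data_layout()
    if missing:
        print("Missing scene folders:", ", ".join(missing))
    else:
        print("Dataset layout looks complete.")
```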

🛠️ Installation

  1. Clone this repo:
git clone git@github.com:RuijieZhu94/MotionGS.git --recursive
  2. Install dependencies:
cd MotionGS

conda create -n motiongs python=3.7
conda activate motiongs

# install pytorch
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116

# install dependencies
pip install -r requirements.txt

🌟 Training

NeRF-DS:

expname=NeRF-DS
scenename=as_novel_view
mkdir -p output/$expname/$scenename

python train.py \
    -s data/NeRF-DS/$scenename \
    -m output/$expname/$scenename \
    --eval \
    --use_depth_and_flow \
    --optimize_pose

HyperNeRF:

expname=HyperNerf
scenename=broom2
mkdir -p output/$expname/$scenename

python train.py \
    -s data/hypernerf/vrig/$scenename \
    -m output/$expname/$scenename \
    --scene_format nerfies \
    --eval \
    --use_depth_and_flow \
    --optimize_pose

DyNeRF:

expname=dynerf
scenename=flame_steak
mkdir -p output/$expname/$scenename

python train.py \
    -s data/dynerf/$scenename \
    -m output/$expname/$scenename \
    --scene_format plenopticVideo \
    --resolution 4 \
    --dataloader \
    --eval \
    --use_depth_and_flow

🎇 Evaluation

python render.py -m output/exp-name --mode render
python metrics.py -m output/exp-name

We provide several modes for rendering:

  • render: render all the test images
  • time: time interpolation tasks for D-NeRF dataset
  • all: time and view synthesis tasks for D-NeRF dataset
  • view: view synthesis tasks for D-NeRF dataset
  • original: time and view synthesis tasks for real-world dataset
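The mode table above can be expressed as a small lookup for scripting batch evaluations. This helper is purely illustrative (it is not part of the MotionGS codebase); only the mode names and descriptions come from the list above:

```python
# Illustrative helper, not part of MotionGS: validates a --mode value
# against the five modes documented above.
RENDER_MODES = {
    "render": "render all the test images",
    "time": "time interpolation tasks for D-NeRF dataset",
    "all": "time and view synthesis tasks for D-NeRF dataset",
    "view": "view synthesis tasks for D-NeRF dataset",
    "original": "time and view synthesis tasks for real-world dataset",
}

def validate_mode(mode: str) -> str:
    """Raise ValueError for an unknown render mode, otherwise return it unchanged."""
    if mode not in RENDER_MODES:
        raise ValueError(
            f"Unknown render mode {mode!r}; choose one of {sorted(RENDER_MODES)}"
        )
    return mode
```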

📜 Citation

If you find our work useful, please cite:

@article{zhu2024motiongs,
  title={MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting},
  author={Zhu, Ruijie and Liang, Yanzhe and Chang, Hanzhi and Deng, Jiacheng and Lu, Jiahao and Yang, Wenfei and Zhang, Tianzhu and Zhang, Yongdong},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={101790--101817},
  year={2024}
}

🤝 Acknowledgements

Our code is based on Deformable3DGS, GaussianFlow, MonoGS, CF-3DGS, DynPoint, MiDaS, GMFlow and MDFlow. We thank the authors for their excellent work!
