Ruijie Zhu*,
Yanzhe Liang*,
Hanzhi Chang,
Jiacheng Deng,
Jiahao Lu,
Wenfei Yang,
Tianzhu Zhang,
Yongdong Zhang
*Equal Contribution.
University of Science and Technology of China
NeurIPS 2024
The overall architecture of MotionGS. It can be viewed as two data streams: (1) the 2D data stream uses the optical flow decoupling module to extract the motion flow, which serves as the 2D motion prior; (2) the 3D data stream deforms and transforms the Gaussians to render the image for the next frame. During training, we alternately optimize the 3DGS and the camera poses through the camera pose refinement module.
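For readers who prefer code, below is a minimal, self-contained sketch of one training iteration under this two-stream view. Everything here is an illustrative stand-in, not the repo's actual API: a toy `render` replaces the Gaussian rasterizer, a single linear layer stands in for the deformation network, and the losses are simplified.

```python
"""Toy sketch of MotionGS's training logic (illustrative only, not the repo's API)."""
import torch
import torch.nn.functional as F

# Stand-ins for the real components
means = torch.randn(1000, 3, requires_grad=True)   # 3D Gaussian centers
deform = torch.nn.Linear(4, 3)                     # toy deformation network
pose = torch.zeros(6, requires_grad=True)          # toy camera pose parameters

opt_scene = torch.optim.Adam([means, *deform.parameters()], lr=1e-3)
opt_pose = torch.optim.Adam([pose], lr=1e-4)

def render(points, pose):
    """Toy 'renderer': returns a fake image and a fake screen-space Gaussian flow."""
    image = points.mean(0).view(3, 1, 1).expand(3, 8, 8) + pose[:3].sum()
    gaussian_flow = points[:, :2]  # pretend 2D motion of each Gaussian
    return image, gaussian_flow

def training_step(t, gt_image, optical_flow, camera_flow, optimize_pose):
    # 2D stream: optical flow decoupling -> motion flow (the 2D motion prior)
    motion_flow = optical_flow - camera_flow

    # 3D stream: deform the Gaussians to time t and render the next frame
    time_feat = torch.cat([means, torch.full((len(means), 1), t)], dim=1)
    deformed = means + deform(time_feat)
    image, gaussian_flow = render(deformed, pose)

    # photometric loss + motion flow supervision on the Gaussians' 2D motion
    loss = F.l1_loss(image, gt_image) + F.l1_loss(gaussian_flow, motion_flow)

    opt_scene.zero_grad(); opt_pose.zero_grad()
    loss.backward()
    # alternate between optimizing the scene (3DGS + deformation) and the pose
    (opt_pose if optimize_pose else opt_scene).step()
    return loss.item()

gt = torch.rand(3, 8, 8)
flow = torch.randn(1000, 2)
for step in range(4):
    training_step(0.5, gt, flow, torch.zeros_like(flow), optimize_pose=(step % 2 == 1))
```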
To train MotionGS, you should download the following datasets: NeRF-DS, HyperNeRF, and DyNeRF.
We organize the datasets as follows:
```
├── data
│   ├── NeRF-DS
│   │   ├── as
│   │   ├── basin
│   │   ├── ...
│   ├── HyperNeRF
│   │   ├── interp
│   │   ├── misc
│   │   ├── vrig
│   ├── DyNeRF
│   │   ├── coffee_martini
│   │   ├── cook_spinach
│   │   ├── ...
```
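Assuming this layout, a quick sanity check (scene names taken from the tree above; extend it with whatever scenes you actually downloaded) can verify the folders exist before training:

```python
from pathlib import Path

# Scene folders from the layout above; adjust to your download.
expected = {
    "NeRF-DS": ["as", "basin"],
    "HyperNeRF": ["interp", "misc", "vrig"],
    "DyNeRF": ["coffee_martini", "cook_spinach"],
}
for dataset, scenes in expected.items():
    for scene in scenes:
        path = Path("data") / dataset / scene
        print(f"{path}: {'ok' if path.is_dir() else 'MISSING'}")
```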
- Clone this repo:

```bash
git clone git@github.com:RuijieZhu94/MotionGS.git --recursive
```
- Install dependencies:

```bash
cd MotionGS
conda create -n motiongs python=3.7
conda activate motiongs

# install pytorch (CUDA 11.6 build)
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116

# install the remaining dependencies
pip install -r requirements.txt
```
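Before training, it can be worth confirming that the CUDA build of PyTorch is active (a generic check, nothing repo-specific):

```python
import torch

print(torch.__version__)          # expect 1.13.1+cu116
print(torch.cuda.is_available())  # should print True on a GPU machine
```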
NeRF-DS:

```bash
expname=NeRF-DS
scenename=as_novel_view

mkdir -p output/$expname/$scenename
python train.py \
    -s data/NeRF-DS/$scenename \
    -m output/$expname/$scenename \
    --eval \
    --use_depth_and_flow \
    --optimize_pose
```
HyperNeRF:

```bash
expname=HyperNeRF
scenename=broom2

mkdir -p output/$expname/$scenename
python train.py \
    -s data/hypernerf/vrig/$scenename \
    -m output/$expname/$scenename \
    --scene_format nerfies \
    --eval \
    --use_depth_and_flow \
    --optimize_pose
```
DyNeRF:

```bash
expname=dynerf
scenename=flame_steak

mkdir -p output/$expname/$scenename
python train.py \
    -s data/dynerf/$scenename \
    -m output/$expname/$scenename \
    --scene_format plenopticVideo \
    --resolution 4 \
    --dataloader \
    --eval \
    --use_depth_and_flow
```
After training, render the test views and compute the evaluation metrics:

```bash
python render.py -m output/exp-name --mode render
python metrics.py -m output/exp-name
```
We provide several modes for rendering:

- `render`: render all the test images
- `time`: time interpolation tasks for the D-NeRF dataset
- `all`: time and view synthesis tasks for the D-NeRF dataset
- `view`: view synthesis tasks for the D-NeRF dataset
- `original`: time and view synthesis tasks for real-world datasets
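`metrics.py` reports standard image-quality metrics on the rendered test views. As a rough illustration of what such a metric computes (a generic PSNR implementation, not the repo's exact code):

```python
import torch

def psnr(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Peak signal-to-noise ratio between two images with values in [0, 1]."""
    mse = torch.mean((pred - gt) ** 2)
    return 10.0 * torch.log10(1.0 / mse)

# Higher is better; identical images give +inf.
print(psnr(torch.rand(3, 64, 64), torch.rand(3, 64, 64)))
```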
If you find our work useful, please cite:
```bibtex
@article{zhu2024motiongs,
  title={{MotionGS}: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting},
  author={Zhu, Ruijie and Liang, Yanzhe and Chang, Hanzhi and Deng, Jiacheng and Lu, Jiahao and Yang, Wenfei and Zhang, Tianzhu and Zhang, Yongdong},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={101790--101817},
  year={2024}
}
```
Our code is based on Deformable3DGS, GaussianFlow, MonoGS, CF-3DGS, DynPoint, MiDaS, GMFlow, and MDFlow. We thank the authors for their excellent work!