
OmniSplat: Taming Feed-Forward 3D Gaussian Splatting for
Omnidirectional Images with Editable Capabilities

Suyoung Lee1*  ·  Jaeyoung Chung1*  ·  Kihoon Kim1  ·  Jaeyoo Huh1
Gunhee Lee2  ·  Minsoo Lee2  ·  Kyoung Mu Lee1
1: Seoul National University    2: LG AI Research Center
(* denotes equal contribution)

CVPR 2025, Highlight


This is an official implementation of "OmniSplat: Taming Feed-Forward 3D Gaussian Splatting for Omnidirectional Images with Editable Capabilities."

Update Log

25.06.06: First code upload

Installation

git clone https://github.com/esw0116/OmniSplat.git --recursive
cd OmniSplat

# Set Environment
conda env create --file environment.yaml
conda activate omnisplat
pip install submodules/simple-knn
pip install submodules/diff-gaussian-yin-rasterization
pip install submodules/diff-gaussian-yang-rasterization

Benchmark Dataset

We evaluate on six datasets, adjusting their resolutions and performing Structure-from-Motion with OpenMVG.
For your convenience, we provide ⭐links to the converted datasets⭐ used in our paper. The reference and target indices for each dataset are described in the supplementary material of the paper.

For reference, we provide the links to the original datasets.
OmniBlender & Ricoh360 / OmniPhotos / 360Roam / OmniScenes / 360VO

Running OmniSplat

  • OmniSplat builds on MVSplat and runs without fine-tuning any parameters.

Preparation

  • Download the pretrained model (re10k.ckpt) from the MVSplat repo and save it in the ./checkpoints folder
  • Put the downloaded datasets in the ./datasets folder
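
For reference, a minimal sketch of the expected layout after preparation (the dataset subfolder names are placeholders; use the names expected by the configs in ./config/experiment):

```
OmniSplat/
├── checkpoints/
│   └── re10k.ckpt
└── datasets/
    └── ...
```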

Evaluation Scripts

python -m src.main +experiment=[dataset_name]

  • The config files are listed in ./config/experiment
  • The results will be saved in ./outputs/test

Note

  • There may be a pixel misalignment during omnidirectional image warping.
  • As a workaround, open equi2equi/torch.py in the pyequilib library and comment out the two lines (L33-34): ui += 0.5 and uj += 0.5
  • We will update our code to resolve the issue without modifying the library.
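
Since the install location of pyequilib varies by environment, the following is a small helper sketch (ours, not part of pyequilib; it only assumes the package's import name is equilib) to locate the file to edit:

```python
# Print the path of equi2equi/torch.py inside the installed pyequilib
# package (import name: equilib), or a notice if it is not installed.
import importlib.util
import os

spec = importlib.util.find_spec("equilib")
if spec is not None and spec.origin is not None:
    # spec.origin points at the package's __init__.py; its directory
    # is the package root containing the equi2equi submodule.
    path = os.path.join(os.path.dirname(spec.origin), "equi2equi", "torch.py")
    print(path)
else:
    print("pyequilib (equilib) is not installed in this environment")
```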

Citation

@InProceedings{Lee2025OmniSplat,
    author    = {Lee, Suyoung and Chung, Jaeyoung and Kim, Kihoon and Huh, Jaeyoo and Lee, Gunhee and Lee, Minsoo and Lee, Kyoung Mu},
    title     = {OmniSplat: Taming Feed-Forward 3D Gaussian Splatting for Omnidirectional Images with Editable Capabilities},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {16356-16365}
}
