Fangfu Liu¹, Hao Li², Jiawei Chi¹, Hanyang Wang¹,³, Minghui Yang³, Fudong Wang³, Yueqi Duan¹
¹Tsinghua University, ²NTU, ³Ant Group
LangScene-X: We propose LangScene-X, a unified model that generates RGB images, segmentation maps, and normal maps, enabling 3D field reconstruction from sparse-view inputs.
- 🔥 [04/07/2025] We release "LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with TriMap Video Diffusion". Check our project page and arXiv paper.
Pipeline of LangScene-X. Our model consists of a TriMap Video Diffusion model that generates RGB, segmentation-map, and normal-map videos, an Auto Encoder that compresses language features, and a field constructor that reconstructs a 3DGS field from the generated videos.
Demo videos: para1.mp4, para2.mp4, para3.mp4
git clone https://github.com/liuff19/LangScene-X.git
cd LangScene-X
- Create conda environment
conda create -n langscenex python=3.10 -y
conda activate langscenex
- Install dependencies
conda install pytorch torchvision -c pytorch -y
pip install -e field_construction/submodules/simple-knn
pip install -e field_construction/submodules/diff-langsurf-rasterizer
pip install -e auto-seg/submodules/segment-anything-1
pip install -e auto-seg/submodules/segment-anything-2
pip install -r requirements.txt
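Optionally, you can sanity-check the environment before moving on. This is a minimal check rather than part of the official setup, and it assumes you intend to run on a CUDA-capable GPU:

# Verify that PyTorch imports and that a CUDA device is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"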
The checkpoints of SAM, SAM2, and the fine-tuned CogVideoX can be downloaded from our Hugging Face repository.
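If you prefer the command line, the checkpoints can also be fetched with huggingface-cli (installed together with huggingface_hub). The repository id and target directory below are placeholders; substitute the actual values listed on our Hugging Face page:

# Placeholder repository id and local directory -- replace with the real values
pip install -U "huggingface_hub[cli]"
huggingface-cli download <huggingface_repo_id> --local-dir checkpoints/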
You can start quickly by running the following script:
chmod +x quick_start.sh
./quick_start.sh <first_rgb_image_path> <last_rgb_image_path>
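For example, assuming your scene's first and last RGB frames live under a local examples/ folder (these paths are placeholders, not files shipped with the repo):

# Example invocation with placeholder frame paths
./quick_start.sh examples/scene/frame_first.png examples/scene/frame_last.png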
Run the following command to render from the reconstructed 3DGS field:
python entry_point.py \
pipeline.rgb_video_path="does/not/matter" \
pipeline.normal_video_path="does/not/matter" \
pipeline.seg_video_path="does/not/matter" \
pipeline.data_path="does/not/matter" \
gaussian.dataset.source_path="does/not/matter" \
gaussian.dataset.model_path="output/path" \
pipeline.selection=False \
gaussian.opt.max_geo_iter=1500 \
gaussian.opt.normal_optim=True \
gaussian.opt.optim_pose=True \
pipeline.skip_video_process=True \
pipeline.skip_lang_feature_extraction=True \
pipeline.mode="render"
You can also configure the pipeline by editing configs/field_construction.yaml.
- Per-scene Auto Encoder released
- Fine-tuned CogVideoX checkpoints released
- Generalizable Auto Encoder (LQC)
- Improved TriMap Video Diffusion model
We are grateful to the following great works that we referred to when implementing LangScene-X:
@misc{liu2025langscenexreconstructgeneralizable3d,
title={LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with TriMap Video Diffusion},
author={Fangfu Liu and Hao Li and Jiawei Chi and Hanyang Wang and Minghui Yang and Fudong Wang and Yueqi Duan},
year={2025},
eprint={2507.02813},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2507.02813},
}