Source code for [ECCV2024] O2V-Mapping: Online Open-Vocabulary Mapping with Neural Implicit Representation
git clone --recursive https://github.com/Fudan-MAGIC-Lab/O2Vmapping.git
sudo apt-get install libopenexr-dev
cd O2Vmapping
conda env create -f environment.yml
conda activate O2V
Our project relies on SAM and CLIP. Please make sure both modules work properly before running the code; for setup instructions, refer to the official SAM and CLIP repositories. We also highly recommend using MobileSAM, which significantly improves runtime efficiency.
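As a quick sanity check that both dependencies work, the sketch below loads CLIP and MobileSAM and runs each once on dummy input. The model variants ("ViT-B/32", "vit_t") and the checkpoint path are assumptions; substitute whatever you actually installed and downloaded.

```python
# Quick sanity check that CLIP and MobileSAM are installed and can run.
# The checkpoint path below is an assumption -- point it at wherever you
# placed the MobileSAM weights.
import numpy as np
import torch
import clip
from mobile_sam import sam_model_registry, SamAutomaticMaskGenerator

device = "cuda" if torch.cuda.is_available() else "cpu"

# CLIP: encode a dummy text query.
clip_model, _ = clip.load("ViT-B/32", device=device)
with torch.no_grad():
    feat = clip_model.encode_text(clip.tokenize(["a chair"]).to(device))
print("CLIP text feature shape:", tuple(feat.shape))

# MobileSAM: generate masks for a dummy image.
sam = sam_model_registry["vit_t"](checkpoint="./weights/mobile_sam.pt").to(device)
masks = SamAutomaticMaskGenerator(sam).generate(
    np.zeros((480, 640, 3), dtype=np.uint8))
print("MobileSAM returned", len(masks), "masks")
```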
We recommend preparing the Replica dataset according to its official guidelines. The data should be organized as follows:
├──config
├──Datasets
│  └──Replica
│     ├──office0
│     │  ├──pose
│     │  ├──results
│     │  ├──traj.txt
│     │  └──transforms.json
│     ├──...
│     ├──office1
│     │  └──...
│     └──YOURDATA
├──run.py
└──...
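If you need to produce a transforms.json for your own Replica-style sequence, a minimal sketch is shown below. It assumes traj.txt follows the common Replica SLAM convention of one camera-to-world pose per line as 16 row-major floats, and that images live under results/ with a frameXXXXXX.jpg naming scheme; the exact JSON fields O2V-Mapping reads may differ, so treat this only as a starting point.

```python
# Hypothetical helper: build a NeRF-style transforms.json from a Replica
# traj.txt. Assumes one camera-to-world pose per line (16 row-major floats)
# and images named results/frame000000.jpg, ... -- both assumptions.
import json
import numpy as np

poses = np.loadtxt("Datasets/Replica/office0/traj.txt").reshape(-1, 4, 4)

frames = [
    {
        "file_path": f"results/frame{i:06d}.jpg",   # assumed naming scheme
        "transform_matrix": pose.tolist(),
    }
    for i, pose in enumerate(poses)
]

with open("Datasets/Replica/office0/transforms.json", "w") as f:
    json.dump({"frames": frames}, f, indent=2)
```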
We recommend downloading the ScanNet dataset following the official guidelines. The data should be organized as follows:
├──config
├──Datasets
│  └──scannet
│     ├──scannet0707_00
│     │  ├──color
│     │  ├──depth
│     │  ├──intrinsic
│     │  └──pose
│     ├──...
│     ├──scannet0000_00
│     │  └──...
│     └──YOURDATA
├──run.py
└──...
For data you have collected yourself, one additional preprocessing step is required. Run the following command to estimate the scene bounds; this reduces unnecessary spatial overhead and ensures the scene boundaries are set correctly.
python bound.py ./config/YOURDATA.yaml --input_folder './YOURDATA/'
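For intuition, here is a minimal sketch of the kind of computation this step performs: back-project every valid depth pixel into world coordinates using the camera intrinsics and poses, then take the axis-aligned min/max with a small margin. The file layout, millimeter depth units, and padding value are illustrative assumptions; bound.py is the authoritative implementation.

```python
# A minimal sketch of scene-bound estimation, assuming 16-bit depth PNGs in
# millimeters, per-frame 4x4 camera-to-world pose files, and a 3x3 intrinsic
# matrix. All file names here are illustrative, not what bound.py requires.
import glob
import os
import cv2
import numpy as np

K = np.loadtxt("YOURDATA/intrinsic.txt")           # assumed 3x3 intrinsics
fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

pts = []
for depth_file in sorted(glob.glob("YOURDATA/depth/*.png")):
    idx = os.path.splitext(os.path.basename(depth_file))[0]
    depth = cv2.imread(depth_file, cv2.IMREAD_UNCHANGED) / 1000.0  # mm -> m
    pose = np.loadtxt(f"YOURDATA/pose/{idx}.txt")  # assumed camera-to-world
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    # Back-project valid pixels to camera coordinates, then map to world.
    cam = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z, np.ones_like(z)])
    pts.append((pose @ cam)[:3].T)

pts = np.concatenate(pts)
pad = 0.1  # extra margin around the scene, in meters
print("bound min:", pts.min(axis=0) - pad)
print("bound max:", pts.max(axis=0) + pad)
```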
Once the above steps have completed successfully, you can run:
python run.py ./config/office0_door.yaml
This work builds upon other excellent projects, and we are grateful for their contributions.