REGRACE: A Robust and Efficient Graph-based Re-localization Algorithm using Consistency Evaluation

REGRACE is a novel approach that addresses the challenges of scalability and perspective difference in re-localization by using LiDAR-based submaps. Accepted to IROS 2025.


🪛 Installation

  1. Create a virtual environment with Python 3.11. We tested REGRACE with CUDA 11.7.
python3.11 -m venv .venv
source .venv/bin/activate
  2. Install the dependencies using pip or pdm:
pip install -r requirements.txt
# or (choose one)
pdm install
  3. Compile and install the pointnet2 package. Follow the instructions in the pointnet2-wheel folder to compile a wheel and install it.

  4. As good practice, add the following CUDA environment variables to your ~/.bashrc:

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
export CUBLAS_WORKSPACE_CONFIG=:4096:8
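As a quick sanity check before launching any run, you can confirm that both variables are visible from Python. This is just a minimal sketch (the `check_env` helper is ours, not part of REGRACE); the variable names and values come from the exports above:

```python
import os

# The two CUDA-related variables exported above; they must be set in the
# shell that launches run.py, otherwise PyTorch/cuBLAS will not see them.
EXPECTED = {
    "PYTORCH_CUDA_ALLOC_CONF": "max_split_size_mb:512",
    "CUBLAS_WORKSPACE_CONFIG": ":4096:8",
}

def check_env() -> list[str]:
    """Return the names of expected variables that are missing or mismatched."""
    return [name for name, value in EXPECTED.items()
            if os.environ.get(name) != value]

if __name__ == "__main__":
    missing = check_env()
    if missing:
        print("Missing or mismatched:", ", ".join(missing))
    else:
        print("CUDA environment variables look good.")
```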

🔗 Download data and weights

  1. Download the SemanticKITTI dataset.

  2. Download the Cylinder3D weights from here and save them to ./config/cyl3d_weights.pt

  3. If you want to use the pretrained model, download the weights trained on KITTI sequences 00 to 10 from our latest release. Further instructions on how to use the weights are provided in the Testing section.

🗂️ Submap generation

1️⃣ Generating clustered submaps

First, adjust the parameters in the configuration YAML data-generation.yaml:

  1. sequence to the desired KITTI sequence
  2. kitti_dir to the root path of the SemanticKITTI dataset
  3. output_folder to the desired output folder

Then, run the following command:

python run.py --config_file config/data-generation.yaml --generate_submaps

This will create a folder in the following structure:

preprocessed_data_folder
├── seq00
│   ├── single-scan
│   |   ├── label-prediction
|   |   └── probability-labels
│   └── submap
│       ├── all-points
|       └── cluster
├── seq01
...

The single-scan folder contains the Cylinder3D predictions for each scan in the sequence. The submap folder contains the submaps generated by accumulating scans; these are already voxelized (all-points) and clustered (cluster). The total size for KITTI sequences 00 to 10 is around 1.5TB. If you don't have enough disk space, you can follow the instructions in the "Saving memory while generating submaps" section below.
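The per-sequence layout above can be summarized with a small helper. This is purely illustrative (the `submap_layout` function is ours); only the folder names are taken from the tree printed above:

```python
from pathlib import Path

# Mirrors the output tree above: each sequence gets a single-scan folder
# (per-scan Cylinder3D predictions) and a submap folder (accumulated scans).
def submap_layout(root: str, sequence: int) -> dict[str, Path]:
    seq = Path(root) / f"seq{sequence:02d}"
    return {
        "label-prediction": seq / "single-scan" / "label-prediction",
        "probability-labels": seq / "single-scan" / "probability-labels",
        "all-points": seq / "submap" / "all-points",   # voxelized submaps
        "cluster": seq / "submap" / "cluster",         # clustered submaps
    }

layout = submap_layout("preprocessed_data_folder", 0)
print(layout["cluster"])
```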

2️⃣ Generating compacted pickle and parquet files

We then compact each submap into a parquet file containing the $P$ points of the valid clusters and their respective normal vectors. Each parquet file is accompanied by a pickle file containing the metadata of the submap, such as its position and positive maps. To generate the parquet and pickle files, adjust the parameters in the configuration YAML default.yaml:

  1. dataset/train_folders and dataset/test_folders to the folders of the preprocessed data for each KITTI sequence (preprocessed_data_folder/seqXX/submap/cluster). You can add multiple folders as a list.
  2. dataset/preprocessing_folder to the folder where the compressed preprocessed data should be stored
  3. flag/generate_triplets to True
  4. flag/train and flag/test to False

Then, run the following command:

python run.py --config_file <YOUR_CONFIG>.yaml

This will create a folder in the following structure:

preprocessing_folder
├── 00
│   ├── pickle
│   └── parquet
├── 01
│   ├── pickle
│   └── parquet
...

and a folder <repo path>/data/pickle_list/eval_seqXX containing the compacted dataset for faster loading during training and testing.
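To give a feel for the pickle side of this format, here is a minimal round-trip sketch using only the standard library. The field names (`submap_id`, `position`, `positive_ids`) are illustrative, not REGRACE's actual schema:

```python
import pickle
import tempfile
from pathlib import Path

# Illustrative metadata record; the real pickle schema in REGRACE may differ.
metadata = {
    "sequence": "00",
    "submap_id": 42,
    "position": (12.3, -4.5, 0.8),   # submap position in the world frame
    "positive_ids": [40, 41, 43],    # submaps considered positive matches
}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "submap_42.pickle"
    with path.open("wb") as f:
        pickle.dump(metadata, f)
    with path.open("rb") as f:
        loaded = pickle.load(f)
    print(loaded["positive_ids"])  # [40, 41, 43]
```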

💡 Saving memory while generating submaps

  1. Uncomment L28-29 in generate_cluster.py. This deletes each item in the all-points folder once it has been clustered into the cluster folder.
  2. Generate the submaps following the instructions in "Generating clustered submaps".
  3. Generate the triplets following "Generating compacted pickle and parquet files".

This reduces the total submap folder size to 250GB. You may delete the folder after generating the triplets. Note that if you change the test_folders or train_folders parameters in default.yaml, you have to generate the triplets again, and for that you need the submap folder.

📊 Testing

To test the model, you need trained weights; these are available in the latest release. Adjust the configuration YAML default.yaml as follows:

  1. flag/train to False.
  2. flag/test to True.
  3. training/checkpoint_path to the path of the downloaded weights.
  4. flags/initialize_from_checkpoint to True.
  5. dataset/preprocessing_folder to the compressed preprocessed data folder.
  6. flag/generate_triplets to False.

Then, run the following command:

python run.py --config_file <YOUR_CONFIG>.yaml

Your output will be a table with the metrics for the test set.
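The reported numbers are standard place-recognition metrics. As a minimal illustration of what such a metric computes (this is our sketch, not the repo's evaluation code), Recall@1 counts a query as correct when its nearest database descriptor is a true match:

```python
import math

def recall_at_1(queries, database, positives):
    """Fraction of queries whose nearest database descriptor is a true match.

    queries/database: lists of descriptor vectors (tuples of floats);
    positives[i]: set of database indices that are true matches for query i.
    """
    hits = 0
    for i, q in enumerate(queries):
        dists = [math.dist(q, d) for d in database]   # Euclidean distances
        nearest = dists.index(min(dists))             # index of closest match
        if nearest in positives[i]:
            hits += 1
    return hits / len(queries)

# Toy 2-D descriptors: both queries retrieve their true match.
db = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
qs = [(0.1, 0.0), (4.9, 5.1)]
print(recall_at_1(qs, db, [{0}, {2}]))  # 1.0
```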

🚀 Training

1️⃣ Training from scratch

To train the model, adjust the configuration YAML default.yaml as follows:

  1. dataset/preprocessing_folder to the compressed preprocessed data folder.
  2. flag/generate_triplets to False.
  3. flag/train to True
  4. flag/test to False.
  5. flag/initialize_from_checkpoint to False.

Then, run the following command:

python run.py --config_file <YOUR_CONFIG>.yaml

If you want to log the training with wandb, set the wandb_logging flag in the configuration YAML to True and set the project and entity in utils.py to your desired project and entity (usually your username). Don't forget to log in first:

wandb login

2️⃣ Fine-tuning

For the final refinement step, set the configuration YAML default.yaml as:

training:
  batch_size: 90
  checkpoint_path: <path_to_checkpoint>
  epochs: 50
  loss:
    margin: 1.0
    p: 2
    type: both
  num_workers: 12
  optimizer:
    lr: 1.0e-05
  scheduler:
    decay_rate: 0.1
    milestones:
    - 25

Also set flags/initialize_from_checkpoint to True. Then, run the following command:

python run.py --config_file <YOUR_CONFIG>.yaml
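The loss settings above (margin 1.0, p = 2) correspond to a standard triplet margin loss. As a plain-Python sketch of what those two parameters control (not the repo's implementation, which lives in the training code):

```python
def lp_distance(a, b, p=2):
    """Minkowski distance of order p between two equal-length vectors."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2):
    """max(0, d(a, pos) - d(a, neg) + margin), with d the L_p distance.

    margin and p match the loss section of the fine-tuning config above.
    """
    return max(0.0, lp_distance(anchor, positive, p)
                    - lp_distance(anchor, negative, p) + margin)

# A negative much farther away than the positive drives the loss to zero.
print(triplet_margin_loss((0.0, 0.0), (0.5, 0.0), (3.0, 0.0)))  # 0.0
```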
