PandaSet Semantic Segmentation using 2DPASS model

2DPASS Model

2D Priors Assisted Semantic Segmentation on LiDAR Point Clouds


2DPASS was introduced in the following paper:

Xu Yan*, Jiantao Gao*, Chaoda Zheng*, Chao Zheng, Ruimao Zhang, Shuguang Cui, Zhen Li*, "2DPASS: 2D Priors Assisted Semantic Segmentation on LiDAR Point Clouds", ECCV 2022 [arXiv].

2DPASS Architecture

(figure: 2DPASS architecture)

2DPASS Feature Generation

(figure: 2DPASS feature generation)


PandaSet Dataset

(figure: PandaSet sample scene)

Download

To download the dataset, visit the official PandaSet webpage and sign up through the form. You will then be forwarded to a page with download links for the raw data and annotations.

Structure

.
├── LICENSE.txt
├── annotations
│   ├── cuboids
│   │   ├── 00.pkl.gz
│   │   .
│   │   .
│   │   .
│   │   └── 79.pkl.gz
│   └── semseg  // Semantic Segmentation is available for specific scenes
│       ├── 00.pkl.gz
│       .
│       .
│       .
│       ├── 79.pkl.gz
│       └── classes.json
├── camera
│   ├── front_camera
│   │   ├── 00.jpg
│   │   .
│   │   .
│   │   .
│   │   ├── 79.jpg
│   │   ├── intrinsics.json
│   │   ├── poses.json
│   │   └── timestamps.json
│   ├── back_camera
│   │   └── ...
│   ├── front_left_camera
│   │   └── ...
│   ├── front_right_camera
│   │   └── ...
│   ├── left_camera
│   │   └── ...
│   └── right_camera
│       └── ...
├── lidar
│   ├── 00.pkl.gz
│   .
│   .
│   .
│   ├── 79.pkl.gz
│   ├── poses.json
│   └── timestamps.json
└── meta
    ├── gps.json
    └── timestamps.json

Data Format

Point Cloud (LiDAR) Annotation

(figure: lidar point-cloud annotation)

Camera Image Semantic Classes

(figure: camera image semantic classes)

Data Preparation

  • For training 2DPASS on PandaSet, only front_camera images are used.
  • Some sequences have no semseg folder inside the annotations folder, since semantic segmentation labels are only available for specific scenes.
  • To skip those sequences, run the pandaset_nosemseg.py file.
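The skip logic can be sketched as a simple directory scan over the structure shown earlier; this is a minimal sketch, and the repo's pandaset_nosemseg.py may differ in detail:

```python
import tempfile
from pathlib import Path

def sequences_with_semseg(dataset_root):
    """Return sorted sequence ids that contain an annotations/semseg folder."""
    root = Path(dataset_root)
    return sorted(
        seq.name for seq in root.iterdir()
        if (seq / "annotations" / "semseg").is_dir()
    )

# Demo on a throwaway tree: sequence 001 has semseg labels, 002 does not.
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "001" / "annotations" / "semseg").mkdir(parents=True)
    (Path(tmp) / "002" / "annotations" / "cuboids").mkdir(parents=True)
    kept = sequences_with_semseg(tmp)

print(kept)  # ['001']
```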

Installation

Requirements

Training from scratch

You can run training from scratch, without a pretrained model:

cd <root dir of this repo>
python main.py --log_dir pandaset --gpu 0

Continue Training from a Pretrained Model

You can resume the last training run by loading the pretrained model:

  • Download the pretrained model checkpoint from this Google Drive folder, or
  • run the checkpoint/download_weight.py file to download the pretrained model.

cd <root dir of this repo>
python main.py --log_dir pandaset --gpu 0 --checkpoint ./checkpoint/pretrained_model.ckpt

Testing / Fine-Tuning

You can run testing with:

cd <root dir of this repo>
python main.py --gpu 0 --test --num_vote 12 --checkpoint ./checkpoint/pretrained_model.ckpt

Here, num_vote is the number of views for test-time augmentation (TTA). We set this value to 12 by default (on a Tesla V100 GPU); if you use a GPU with less memory, you can choose a smaller value. num_vote=1 means no TTA is used and causes roughly a 2% performance drop.
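Rotation-based voting is a common form of point-cloud TTA: the cloud is transformed into several views, the model is run on each, and the per-point class probabilities are averaged. The sketch below uses a toy model and plain NumPy to show the averaging idea — it is not the actual 2DPASS augmentation pipeline:

```python
import numpy as np

def rotate_z(points, angle):
    """Rotate an N x 3 point cloud around the z axis."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T

def tta_predict(model, points, num_vote=12):
    """Average per-point class probabilities over num_vote rotated views."""
    votes = [
        model(rotate_z(points, 2.0 * np.pi * k / num_vote))
        for k in range(num_vote)
    ]
    return np.mean(votes, axis=0).argmax(axis=1)

# Toy "model": class 0 for near points, class 1 for far points.
def toy_model(points):
    d = np.linalg.norm(points, axis=1)
    return np.stack([1.0 / (1.0 + d), d / (1.0 + d)], axis=1)

pts = np.array([[0.1, 0.0, 0.0], [5.0, 0.0, 0.0]])
print(tta_predict(toy_model, pts, num_vote=4))  # [0 1]
```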

TensorBoard

Run the following command to open TensorBoard:

cd <root dir of this repo>
tensorboard --logdir ./logs/pandaset/ --host localhost --port 8888

Charts

The following charts are logged: train_acc_epoch, val_acc, val_best_mIoU, epoch, lr, train_acc_step, train_loss_ce, and train_loss_lovasz.

Pretrained Model

You can download the models with the scores below from this Google Drive folder.

Test Result

After training the model for 12 epochs, here are the test results:

| Class | IoU |
| --- | --- |
| Road | 94.07% |
| Car | 90.62% |
| Bus | 75.99% |
| Ground | 74.08% |
| Building | 71.49% |
| Pickup Truck | 53.68% |
| Motorcycle | 32.03% |
| Other Static Object | 44.20% |

| Accuracy | mIoU |
| --- | --- |
| 45.06% | 53.62% |
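Per-class IoU and mIoU follow the standard intersection-over-union definition: for each class, true positives divided by the union of predictions and ground truth for that class, with mIoU the mean over classes. A minimal sketch from a confusion matrix (toy numbers, not the scores above):

```python
import numpy as np

def per_class_iou(conf):
    """IoU_c = TP_c / (TP_c + FP_c + FN_c) from a confusion matrix.

    Rows are ground-truth classes, columns are predicted classes.
    """
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp   # predicted as c but actually another class
    fn = conf.sum(axis=1) - tp   # actually c but predicted as another class
    return tp / np.maximum(tp + fp + fn, 1.0)

# Toy 2-class confusion matrix.
conf = np.array([[8, 2],
                 [1, 9]])
iou = per_class_iou(conf)
print(iou)         # [8/11, 9/12]
print(iou.mean())  # mIoU
```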

Training the model for more epochs (around 60) should further improve the per-class IoU and mIoU.

Acknowledgements
