# Low Latency Instance Segmentation by Continuous Clustering for Rotating LiDAR Sensors

[![Basic Build Workflow](https://github.com/UniBwTAS/continuous_clustering/actions/workflows/basic-build-ci.yaml/badge.svg?branch=master)](https://github.com/UniBwTAS/continuous_clustering/actions/workflows/basic-build-ci.yaml)
[![Publish Docker image](https://github.com/UniBwTAS/continuous_clustering/actions/workflows/publish-docker-image.yaml/badge.svg)](https://github.com/UniBwTAS/continuous_clustering/actions/workflows/publish-docker-image.yaml)
[![arXiv](https://img.shields.io/badge/arXiv-2311.13976-b31b1b.svg)](https://arxiv.org/abs/2311.13976)

![Continuous Clustering Demo](https://github.com/UniBwTAS/continuous_clustering/blob/master/assets/demo.gif)

## Abstract:
Low-latency instance segmentation of LiDAR point clouds is crucial in real-world applications because it serves as an
initial and frequently-used building block in a robot's perception pipeline, where every task adds further delay.
[...] incoming data in real time. We explain the importance of a large perceptive field and
evaluate important architectural design choices, which could be relevant for designing an architecture for deep-learning-based
low-latency instance segmentation.
## If you find our work useful in your research please consider citing our paper:

```
@misc{reich2023low,
  [...]
}
```

Get the PDF [here](https://arxiv.org/abs/2311.13976).
## Acknowledgement

The authors gratefully acknowledge funding by the Federal Office of Bundeswehr Equipment, Information Technology and In-Service Support (BAAINBw).
# Examples:

## Works with uncommon mounting positions

We mounted two Ouster OS 32 sensors at a tilted angle in order to get rid of the blind spots of our main LiDAR sensor. Our
clustering also works with these mounting positions. The main challenge here is the ground point segmentation, not the
clustering. It works okay, but we hope to improve it in the future.

![Clustering with Ouster sensor](https://github.com/UniBwTAS/continuous_clustering/blob/master/assets/demo_ouster.gif)
## Works with Fog

There are many clutter points, and the camera image is almost useless. But the clustering still works quite well after
filtering potential fog points.

![Clustering with clutter points from fog](https://github.com/UniBwTAS/continuous_clustering/blob/master/assets/demo_fog.gif)
## Works on German Highway

There are often no speed limits on the German highway, so it is not uncommon to see cars driving at 180 km/h or
much faster. A latency of e.g. 200 ms leads to a positional error of `(180 / 3.6) m/s * 0.2 s = 10 m`. This shows the need
to keep latencies at a minimum.

[![Video of clustering on the German highway](https://user-images.githubusercontent.com/74038190/235294007-de441046-823e-4eff-89bf-d4df52858b65.gif)](https://www.youtube.com/watch?v=DZKuAQBngNE&t=98s)
# Run it yourself:

## Download Sensor Data

### SemanticKitti

We use the same folder structure as the SemanticKitti dataset:

```bash
# [...]
export KITTI_SEQUENCES_PATH="$(pwd)/kitti_odometry/dataset/sequences"
```
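For reference, each KITTI odometry scan (`velodyne/*.bin`) is a flat binary file of float32 `(x, y, z, intensity)` records; a minimal reader sketch (struct and function names are ours):

```cpp
#include <cstdio>
#include <vector>

// One KITTI point record: float32 x, y, z, intensity (16 bytes total).
struct KittiPoint
{
    float x, y, z, intensity;
};

// Read a whole KITTI odometry scan from a .bin file.
std::vector<KittiPoint> readKittiScan(const char* path)
{
    std::vector<KittiPoint> points;
    if (FILE* f = std::fopen(path, "rb"))
    {
        KittiPoint p;
        while (std::fread(&p, sizeof(KittiPoint), 1, f) == 1)
            points.push_back(p);
        std::fclose(f);
    }
    return points;
}
```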
### Rosbag of our test vehicle VW Touareg

Download the rosbag:

```bash
# [...]
export ROSBAG_PATH=$(pwd)
```

Alternatively download it manually from
our [Google Drive](https://drive.google.com/file/d/1zM4xPRahgxdJXJGHNXYUpM_g4-9UrcwC/view?usp=sharing) and set the
environment variable `ROSBAG_PATH` accordingly: `export ROSBAG_PATH=/parent/folder/of/rosbag`

Available bags:

- `gdown 1zM4xPRahgxdJXJGHNXYUpM_g4-9UrcwC` (3.9GB, [Manual Download](https://drive.google.com/file/d/1zM4xPRahgxdJXJGHNXYUpM_g4-9UrcwC/view?usp=sharing)):
  Long recording in urban scenario (no camera to reduce file size, no Ouster sensors)
- [...]
- `gdown 146IaBdEmkfBWdIgGV5HzrEYDTol84a1H` (0.7GB, [Manual Download](https://drive.google.com/file/d/146IaBdEmkfBWdIgGV5HzrEYDTol84a1H/view?usp=sharing)):
  Short recording of German Highway (blurred camera for privacy reasons)
## Setup Environment

### Option 1: Docker + GUI (VNC):

This option is the fastest to set up. However, due to missing hardware acceleration for RVIZ in the VNC Docker container,
the rosbag is played at 1/10 speed.

[...]

6. Continue with step "Run Continuous Clustering" (see below) in the terminal opened in step 2. (There you can use the
   clipboard feature of noVNC; tiny arrow on the left of the screen.)
### Option 2: Locally on Ubuntu 20.04 (Focal) and ROS Noetic

```bash
# install ROS (if not already installed)
# [...]
bash /tmp/clone_repositories_and_install_dependencies.sh
catkin build
```
## Run Continuous Clustering

```bash
# run on kitti odometry dataset
# [...]
```

[...] between two transforms. The size of a slice depends on the update rate of the transform (e.g., a 50 Hz update rate leads
to batches/slices of 1/5 rotation for a LiDAR rotating with 10 Hz). So for a nice visualization, where the columns are
published one by one like in the GIF at the top of the page, you should disable this flag.
# Evaluation on SemanticKITTI Dataset

We evaluate our clustering algorithm with the same metrics as described in the paper _TRAVEL: Traversable Ground and
Above-Ground Object Segmentation Using Graph Representation of 3D LiDAR
Scans_ ([arXiv](https://arxiv.org/abs/2206.03190), [GitHub](https://github.com/url-kaist/TRAVEL)): Over-Segmentation Entropy (OSE) /
Under-Segmentation Entropy (USE) for clustering performance and precision / recall / accuracy / F1-Score for ground
point segmentation.
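The core of both entropy metrics can be illustrated with a sketch (a simplified illustration of the idea, not TRAVEL's exact implementation): measure, via Shannon entropy, how points that should share one label are spread over the other segmentation — merged objects raise one metric, split objects the other.

```cpp
#include <cmath>
#include <map>
#include <vector>

// Shannon entropy of a label multiset, e.g. the cluster ids assigned to the
// points of one instance. Zero means all points share a single label
// (no splitting/merging); higher values mean more fragmentation.
double labelEntropy(const std::vector<int>& labels)
{
    std::map<int, int> counts;
    for (int label : labels)
        counts[label]++;
    const double n = static_cast<double>(labels.size());
    double entropy = 0.0;
    for (const auto& entry : counts)
    {
        const double p = entry.second / n;
        entropy -= p * std::log(p);
    }
    return entropy;
}
```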
## Results

The following results were obtained at Commit
SHA [fa3c53b](https://github.com/UniBwTAS/continuous_clustering/commit/fa3c53bab51975b06ae5ec3a9e56567729149e4f).
### Clustering

| Sequence | USE &mu;&darr; / &sigma;&darr; | OSE &mu;&darr; / &sigma;&darr; |
| :---: | :---: | :---: |
| ... | ... | ... |
| 9 | 18.45 / 6.25 | 39.62 / 11.86 |
| 10 | 20.10 / 8.70 | 34.33 / 12.37 |
### Ground Point Segmentation:

| Sequence | Recall &mu;&uarr; / &sigma;&darr; | Precision &mu;&uarr; / &sigma;&darr; | F1-Score &mu;&uarr; / &sigma;&darr; | Accuracy &mu;&uarr; / &sigma;&darr; |
| :---: | :---: | :---: | :---: | :---: |
| ... | ... | ... | ... | ... |
| 9 | 95.31 / 4.03 | 88.22 / 5.70 | 91.45 / 3.37 | 91.74 / 3.20 |
| 10 | 91.62 / 6.79 | 85.76 / 7.22 | 88.33 / 5.45 | 91.83 / 3.63 |
## Download/Generate Ground Truth Data

In order to evaluate OSE and USE for clustering performance, additional labels are required. They are generated from the
SemanticKITTI labels using a euclidean distance-based clustering.
See this [issue](https://github.com/url-kaist/TRAVEL/issues/6) in the TRAVEL GitHub repository
and [src/evaluation/kitti_evaluation.cpp](src/evaluation/kitti_evaluation.cpp) for more details.
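The euclidean distance-based clustering used for label generation can be sketched naively as follows (our illustration, O(n²), not the repository's implementation): points within `radius` of each other transitively end up in one cluster.

```cpp
#include <vector>

struct Point3 { float x, y, z; };

// Naive euclidean distance-based clustering: returns one cluster id per
// point; points closer than `radius` (transitively) share a cluster.
// Grows each cluster via an explicit DFS stack. Fine for small examples.
std::vector<int> euclideanCluster(const std::vector<Point3>& pts, float radius)
{
    const float r2 = radius * radius;
    std::vector<int> cluster(pts.size(), -1);
    int next_id = 0;
    for (std::size_t i = 0; i < pts.size(); ++i)
    {
        if (cluster[i] != -1)
            continue;
        cluster[i] = next_id;
        std::vector<std::size_t> stack{i};
        while (!stack.empty())
        {
            const std::size_t cur = stack.back();
            stack.pop_back();
            for (std::size_t j = 0; j < pts.size(); ++j)
            {
                if (cluster[j] != -1)
                    continue;
                const float dx = pts[cur].x - pts[j].x;
                const float dy = pts[cur].y - pts[j].y;
                const float dz = pts[cur].z - pts[j].z;
                if (dx * dx + dy * dy + dz * dz <= r2)
                {
                    cluster[j] = next_id;
                    stack.push_back(j);
                }
            }
        }
        ++next_id;
    }
    return cluster;
}
```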
### Option 1: Download pre-generated labels

```bash
cd /tmp
# [...]
```

Alternatively download it manually from
our [Google Drive](https://drive.google.com/file/d/1MOfLbUQcwRMLhRca0bxJMLVriU3G8Tg3/view?usp=sharing) and unzip it to
the correct location (in the parent directory of the `dataset` folder).
### Option 2: Generate with GUI & ROS setup (assumes prepared ROS setup, see above; useful for debugging etc.)

Generate labels, which are saved to `${KITTI_SEQUENCES_PATH}/<sequence>/labels_euclidean_clustering/`.
If you want to visualize the generated ground truth labels in ROS, then remove the `--no-ros` flag and use just one
thread (default).

```bash
rosrun continuous_clustering gt_label_generator_tool ${KITTI_SEQUENCES_PATH} --no-ros --num-threads 8
```
### Option 3: Generate without GUI or ROS within Minimal Docker Container

```bash
# build docker image
# [...]
docker stop build_no_ros
```
## Run Evaluation

### Option 1: Evaluate with GUI & ROS setup (assumes prepared ROS setup, see above; useful for debugging)

```bash
# run evaluation slowly with visual output
# [...]

# run evaluation fast without visual output
roslaunch continuous_clustering demo_kitti_folder.launch path:=${KITTI_SEQUENCES_PATH} evaluate-fast:=true
```
### Option 2: Evaluate without GUI or ROS within Minimal Docker Container

```bash
# build docker image (if not already done)
# [...]
docker stop build_no_ros
```
# Tips for Rviz Visualization:

TODO

# Info about our LiDAR Drivers

Our clustering algorithm is able to process the UDP packets from the LiDAR sensor, so the firings can be processed
immediately. We directly process the raw UDP packets from the corresponding sensor, which makes the input
manufacturer/sensor specific. Luckily, we can use external libraries, so it is not necessary to reimplement the decoding
part (UDP packet -> euclidean point positions). Currently, we support the following sensor manufacturers:
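Conceptually, every driver fulfills the same small contract; a hypothetical sketch of such an interface (names are illustrative, not the actual API of this repository):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One decoded firing: the euclidean points of a single laser column.
struct Firing
{
    std::vector<float> x, y, z;
    double azimuth_rad = 0.0;
};

// Hypothetical manufacturer-agnostic decoder contract: each driver turns a
// raw UDP packet into zero or more firings, which can then be clustered
// immediately instead of waiting for a full rotation.
class PacketDecoder
{
  public:
    virtual ~PacketDecoder() = default;
    virtual std::vector<Firing> decode(const std::uint8_t* packet, std::size_t length) = 0;
};
```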
## Velodyne

- Tested Sensors: VLS 128
- All other rotating Velodyne LiDARs should work, too
- [...] from [ros-drivers/velodyne](https://github.com/ros-drivers/velodyne) to decode packets to euclidean points
- See source code at: [velodyne_input.hpp](include/continuous_clustering/ros/velodyne_input.hpp)

## Ouster

- Tested Sensors: OS 32
- All other rotating Ouster LiDARs should work, too