@@ -39,33 +39,35 @@ Get PDF [here](https://arxiv.org/abs/2311.13976).

## Acknowledgement

- The authors gratefully acknowledge funding by the Federal Office of Bundeswehr Equipment, Information Technology and In-Service Support (BAAINBw).
+ The authors gratefully acknowledge funding by the Federal Office of Bundeswehr Equipment, Information Technology and
+ In-Service Support (BAAINBw).

## Overview
+
- [Examples](#examples)
- [Run it yourself](#run-it-yourself)
- [1. Download Sensor Data](#1-download-sensor-data)
- - [1.1. SemanticKitti](#11-semantickitti)
- - [1.2. Rosbag of our test vehicle VW Touareg](#12-rosbag-of-our-test-vehicle-vw-touareg)
+ - [1.1. SemanticKitti](#11-semantickitti)
+ - [1.2. Rosbag of our test vehicle VW Touareg](#12-rosbag-of-our-test-vehicle-vw-touareg)
- [2. Setup Environment](#2-setup-environment)
- - [2.1. Option: Docker + GUI (VNC)](#21-option-docker--gui-vnc)
- - [2.2. Option: Locally on Ubuntu 20.04 (Focal) and ROS Noetic](#22-option-locally-on-ubuntu-2004-focal-and-ros-noetic)
+ - [2.1. Option: Docker + GUI (VNC)](#21-option-docker--gui-vnc)
+ - [2.2. Option: Locally on Ubuntu 20.04 (Focal) and ROS Noetic](#22-option-locally-on-ubuntu-2004-focal-and-ros-noetic)
- [3. Run Continuous Clustering](#3-run-continuous-clustering)
- [Evaluation on SemanticKITTI Dataset](#evaluation-on-semantickitti-dataset)
- - [1. Evaluation Results](#1-evaluation-results)
- - [1.1. Clustering](#11-clustering)
- - [1.2. Ground Point Segmentation](#12-ground-point-segmentation)
- - [2. Get Ground Truth Labels](#2-get-ground-truth-labels)
- - [2.1. Option: Download pre-generated labels](#21-option-download-pre-generated-labels-fastest)
- - [2.2. Option: Generate with GUI & ROS setup](#22-option-generate-with-gui--ros-setup-assumes-prepared-ros-setup-see-above-useful-for-debugging-etc)
- - [2.3. Option: Generate without GUI or ROS within Minimal Docker Container](#23-option-generate-without-gui-or-ros-within-minimal-docker-container)
- - [3. Run Evaluation](#3-run-evaluation)
- - [3.1. Option: Evaluate with GUI & ROS setup](#31-option-evaluate-with-gui--ros-setup-assumes-prepared-ros-setup-useful-for-debugging)
- - [3.2. Option: Evaluate without GUI or ROS within Minimal Docker Container](#32-option-evaluate-without-gui-or-ros-within-minimal-docker-container)
+ - [1. Evaluation Results](#1-evaluation-results)
+ - [1.1. Clustering](#11-clustering)
+ - [1.2. Ground Point Segmentation](#12-ground-point-segmentation)
+ - [2. Get Ground Truth Labels](#2-get-ground-truth-labels)
+ - [2.1. Option: Download pre-generated labels](#21-option-download-pre-generated-labels-fastest)
+ - [2.2. Option: Generate with GUI & ROS setup](#22-option-generate-with-gui--ros-setup-assumes-prepared-ros-setup-see-above-useful-for-debugging-etc)
+ - [2.3. Option: Generate without GUI or ROS within Minimal Docker Container](#23-option-generate-without-gui-or-ros-within-minimal-docker-container)
+ - [3. Run Evaluation](#3-run-evaluation)
+ - [3.1. Option: Evaluate with GUI & ROS setup](#31-option-evaluate-with-gui--ros-setup-assumes-prepared-ros-setup-useful-for-debugging)
+ - [3.2. Option: Evaluate without GUI or ROS within Minimal Docker Container](#32-option-evaluate-without-gui-or-ros-within-minimal-docker-container)
- [Info about our LiDAR Drivers](#info-about-our-lidar-drivers)
- - [Velodyne](#velodyne)
- - [Ouster](#ouster)
- - [Tips for Rviz Visualization](#tips-for-rviz-visualization)
+ - [1. Velodyne](#1-velodyne)
+ - [2. Ouster](#2-ouster)
+ - [3. Generic Points](#3-generic-points)
- [TODOs](#todos)

# Examples:
@@ -88,7 +90,7 @@ filtering potential fog points.

## Works on German Highway

There are often no speed limits on the German Highway. So it is not uncommon to see cars with velocities of 180 km/h or
- much higher. A latency of e.g. 200ms leads to positional errors of `(180 / 3.6) m/s * 0.2s = 10m`. This shows the need
+ much higher. A latency of e.g. 200ms leads to positional errors of `(180 / 3.6) m/s * 0.2s = 10m`. This shows the need
to keep latencies at a minimum.

[![Video GIF](https://user-images.githubusercontent.com/74038190/235294007-de441046-823e-4eff-89bf-d4df52858b65.gif)](https://www.youtube.com/watch?v=DZKuAQBngNE&t=98s)
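
The positional error in that example is simply speed multiplied by latency. A minimal standalone C++ snippet of the same arithmetic (the speed and latency values are the ones from the sentence above):

```cpp
#include <iostream>

int main()
{
    const double speed_kmh = 180.0; // vehicle speed from the example above
    const double latency_s = 0.2;   // assumed processing latency (200 ms)
    const double positional_error_m = (speed_kmh / 3.6) * latency_s;
    std::cout << "positional error: " << positional_error_m << " m" << std::endl; // prints 10 m
    return 0;
}
```
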
@@ -134,12 +136,16 @@ our [Google Drive](https://drive.google.com/file/d/1zM4xPRahgxdJXJGHNXYUpM_g4-9U
environment variable `ROSBAG_PATH` accordingly: `export ROSBAG_PATH=/parent/folder/of/rosbag`

Available bags:

- `gdown 1zM4xPRahgxdJXJGHNXYUpM_g4-9UrcwC` (3.9GB, [Manual Download](https://drive.google.com/file/d/1zM4xPRahgxdJXJGHNXYUpM_g4-9UrcwC/view?usp=sharing))
- - Long recording in urban scenario (no camera to reduce file size, no Ouster sensors)
- `gdown 1qjCG6-nWBZ_2wJwoP80jj0gGopBT2c23` (2.4GB, [Manual Download](https://drive.google.com/file/d/1qjCG6-nWBZ_2wJwoP80jj0gGopBT2c23/view?usp=sharing))
- - Recording including Ouster 32 sensor data (blurred camera for privacy reasons)
- `gdown 146IaBdEmkfBWdIgGV5HzrEYDTol84a1H` (0.7GB, [Manual Download](https://drive.google.com/file/d/146IaBdEmkfBWdIgGV5HzrEYDTol84a1H/view?usp=sharing))
- - Short recording of German Highway (blurred camera for privacy reasons)
+
+ - `gdown 1zM4xPRahgxdJXJGHNXYUpM_g4-9UrcwC` (
+ 3.9GB, [Manual Download](https://drive.google.com/file/d/1zM4xPRahgxdJXJGHNXYUpM_g4-9UrcwC/view?usp=sharing))
+ - Long recording in urban scenario (no camera to reduce file size, no Ouster sensors)
+ - `gdown 1qjCG6-nWBZ_2wJwoP80jj0gGopBT2c23` (
+ 2.4GB, [Manual Download](https://drive.google.com/file/d/1qjCG6-nWBZ_2wJwoP80jj0gGopBT2c23/view?usp=sharing))
+ - Recording including Ouster 32 sensor data (blurred camera for privacy reasons)
+ - `gdown 146IaBdEmkfBWdIgGV5HzrEYDTol84a1H` (
+ 0.7GB, [Manual Download](https://drive.google.com/file/d/146IaBdEmkfBWdIgGV5HzrEYDTol84a1H/view?usp=sharing))
+ - Short recording of German Highway (blurred camera for privacy reasons)

## 2. Setup Environment
@@ -193,12 +199,12 @@ roslaunch continuous_clustering demo_touareg.launch bag_file:=${ROSBAG_PATH}/vw_
```

> [!TIP]
- > For the latter launch file, you can use `--wait_for_tf:=false` (default: `true`) argument. It controls whether to wait
- > for the transform from velodyne to fixed frame (e.g. odometry frame) with a timestamp larger than the one of the
- > firing or whether to use the latest available (probably incorrect) transform. The former is the accurate approach
- > (that's why it is the default) but the columns are published in larger batches/slices because they are accumulated
- > between two transforms. The size of a slice depends on the update rate of the transform (i.e. transforms with 50Hz
- > lead to batches/slices of 1/5 rotation for a LiDAR rotating with 10Hz). So for a nice visualization where the columns
+ > For the latter launch file, you can use the `--wait_for_tf:=false` argument (default: `true`). It controls whether to wait
+ > for the transform from the velodyne frame to the fixed frame (e.g. odometry frame) with a timestamp larger than the one of the
+ > firing, or whether to use the latest available (probably incorrect) transform. The former is the accurate approach
+ > (that's why it is the default), but the columns are published in larger batches/slices because they are accumulated
+ > between two transforms. The size of a slice depends on the update rate of the transform (e.g. transforms at 50 Hz
+ > lead to batches/slices of 1/5 of a rotation for a LiDAR rotating at 10 Hz). So for a nice visualization where the columns
> are published one by one like in the GIF at the top of the page, you should disable this flag.
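
The slice size mentioned in the tip follows from a small back-of-the-envelope calculation. Here is an illustrative standalone C++ snippet (not part of the package) that reproduces the 1/5-rotation figure:

```cpp
#include <iostream>

int main()
{
    const double lidar_rotation_rate_hz = 10.0; // LiDAR rotating at 10 Hz
    const double transform_rate_hz = 50.0;      // fixed-frame transforms arriving at 50 Hz
    // Columns are accumulated between two consecutive transforms, so one published
    // slice covers this fraction of a full rotation:
    const double slice_fraction = lidar_rotation_rate_hz / transform_rate_hz;
    std::cout << "slice size: " << slice_fraction << " of a rotation" << std::endl; // prints 0.2 (= 1/5)
    return 0;
}
```
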
# Evaluation on SemanticKITTI Dataset
@@ -280,8 +286,8 @@ rosrun continuous_clustering gt_label_generator_tool ${KITTI_SEQUENCES_PATH} --n
```

> [!TIP]
- > If you want to visualize the generated ground truth labels in an online fashion then remove the `--no-ros` flag and
- > use just one thread (default). You can then subscribe to the corresponding topic in Rviz and visualize the point
+ > If you want to visualize the generated ground truth labels in an online fashion, then remove the `--no-ros` flag and
+ > use just one thread (default). You can then subscribe to the corresponding topic in Rviz and visualize the point
> labels.

### 2.3. Option: Generate without GUI or ROS within Minimal Docker Container
@@ -326,18 +332,14 @@ docker run --rm -v ${KITTI_SEQUENCES_PATH}:/mnt/kitti_sequences --name build_no_
docker stop build_no_ros
```

- # Tips for Rviz Visualization
-
- TODO
-

# Info about our LiDAR Drivers

Our clustering algorithm is able to process the UDP packets from the LiDAR sensor, so the firings can be processed
immediately. We directly process the raw UDP packets from the corresponding sensor, which makes the input
manufacturer/sensor specific. Luckily, we can use external libraries, so it is not necessary to reimplement the decoding
part (UDP Packet -> Euclidean Point Positions). Currently, we support the following sensor manufacturers:

- ## Velodyne
+ ## 1. Velodyne

- Tested Sensors: VLS 128
- All other rotating Velodyne LiDARs should work, too
@@ -367,7 +369,7 @@ part (UDP Packet -> Euclidean Point Positions). Currently, we support following
from [ros-drivers/velodyne](https://github.com/ros-drivers/velodyne) to decode packets to euclidean points
- See source code at: [velodyne_input.hpp](include/continuous_clustering/ros/velodyne_input.hpp)

- ## Ouster
+ ## 2. Ouster

- Tested Sensors: OS 32
- All other rotating Ouster LiDARs should work, too
@@ -379,7 +381,15 @@ part (UDP Packet -> Euclidean Point Positions). Currently, we support following
from [ouster-lidar/ouster-ros](https://github.com/ouster-lidar/ouster-ros) to decode packets to euclidean points
- See source code at: [ouster_input.hpp](include/continuous_clustering/ros/ouster_input.hpp)

- ## TODOs
+ ## 3. Generic Points
+
+ Alternatively, you can also launch the clustering with the parameter `sensor_manufacturer:="generic_points"`. Then it
+ subscribes to the `<namespace>/raw_data` topic and expects `sensor_msgs::PointCloud2` messages, each of which should
+ represent one firing. It has to be an organized point cloud with `width=1` and `height=<number-of-lasers>`. Required
+ point fields are `x`, `y`, `z` (w.r.t. sensor origin), and `intensity`. Missing returns should be represented by `NaN`.
+ See [launch/sensor_kitti.launch](launch/sensor_kitti.launch) for such an example.
+
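
To illustrate the expected message layout, here is a minimal sketch of a publisher that assembles one firing as an organized `sensor_msgs::PointCloud2` with `width=1`, `height=<number-of-lasers>`, the fields `x`, `y`, `z`, `intensity`, and `NaN` for missing returns, and publishes it on the relative topic `raw_data`. Only these properties follow from the description above; the node name, frame id, laser count, and the dummy `NaN` measurements are illustrative assumptions.

```cpp
#include <limits>

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud2_iterator.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "generic_points_publisher"); // illustrative node name
    ros::NodeHandle nh;

    // A relative topic name resolves to <namespace>/raw_data when the node runs in the clustering namespace.
    ros::Publisher pub = nh.advertise<sensor_msgs::PointCloud2>("raw_data", 10);

    const uint32_t num_lasers = 128; // assumption: number of rows/lasers of your sensor

    sensor_msgs::PointCloud2 firing;
    firing.header.frame_id = "sensor"; // assumption: frame located at the sensor origin
    firing.height = num_lasers;        // one row per laser
    firing.width = 1;                  // exactly one firing (column)
    firing.is_dense = false;           // missing returns are encoded as NaN

    sensor_msgs::PointCloud2Modifier modifier(firing);
    modifier.setPointCloud2Fields(4,
                                  "x", 1, sensor_msgs::PointField::FLOAT32,
                                  "y", 1, sensor_msgs::PointField::FLOAT32,
                                  "z", 1, sensor_msgs::PointField::FLOAT32,
                                  "intensity", 1, sensor_msgs::PointField::FLOAT32);
    firing.row_step = firing.point_step;                  // one point per row
    firing.data.resize(num_lasers * firing.point_step);   // room for all lasers

    sensor_msgs::PointCloud2Iterator<float> it_x(firing, "x");
    sensor_msgs::PointCloud2Iterator<float> it_y(firing, "y");
    sensor_msgs::PointCloud2Iterator<float> it_z(firing, "z");
    sensor_msgs::PointCloud2Iterator<float> it_i(firing, "intensity");
    for (uint32_t row = 0; row < num_lasers; ++row, ++it_x, ++it_y, ++it_z, ++it_i)
    {
        // Fill in the real measurement of this laser here (x/y/z w.r.t. the sensor origin);
        // a laser without a return keeps NaN coordinates.
        *it_x = std::numeric_limits<float>::quiet_NaN();
        *it_y = std::numeric_limits<float>::quiet_NaN();
        *it_z = std::numeric_limits<float>::quiet_NaN();
        *it_i = 0.0f;
    }

    firing.header.stamp = ros::Time::now();
    ros::Duration(0.5).sleep(); // give subscribers a moment to connect before publishing
    pub.publish(firing);
    ros::spinOnce();
    return 0;
}
```
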
+ # TODOs
- Port it to ROS2: planned soon
- Add features: