
Commit 761df64

Merge pull request #564 from isl-org/dev_to_master_0.16
Dev to master 0.16
2 parents 5148228 + 7c692d2 commit 761df64

26 files changed (+461, −196 lines)

.github/workflows/ubuntu.yml

Lines changed: 1 addition & 3 deletions
@@ -17,8 +17,6 @@ jobs:
     steps:
       - name: Checkout source code
         uses: actions/checkout@v2
-        with:
-          submodules: true
       - name: Setup cache
         uses: actions/cache@v2
         with:
@@ -35,7 +33,7 @@ jobs:
       - name: Set up Python version
         uses: actions/setup-python@v2
         with:
-          python-version: 3.6
+          python-version: "3.10"
       # Pre-installed 18.04 packages: https://git.io/JfHmW
       - name: Install ccache
         run: |
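Note that the new Python version is quoted: in YAML an unquoted `3.10` parses as the float `3.1`, which would silently select the wrong interpreter. A minimal check of that behavior, assuming PyYAML is installed:

```python
# Sketch: why "3.10" must be quoted in the workflow file.
# Assumes PyYAML is available (pip install pyyaml).
import yaml

print(yaml.safe_load("python-version: 3.10"))    # {'python-version': 3.1}   <- float, truncated
print(yaml.safe_load('python-version: "3.10"'))  # {'python-version': '3.10'} <- string, as intended
```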

ci/run_ci.sh

Lines changed: 30 additions & 22 deletions
@@ -3,10 +3,14 @@
 # The following environment variables are required:
 # - NPROC
 #
-TENSORFLOW_VER="2.5.2"
-TORCH_GLNX_VER="1.8.2+cpu"
+TENSORFLOW_VER="2.8.2"
+TORCH_GLNX_VER="1.12.0+cpu"
+# OPENVINO_DEV_VER="2021.4.2" # Numpy version conflict with TF 2.8.2
+PIP_VER="21.1.1"
+WHEEL_VER="0.37.1"
+STOOLS_VER="50.3.2"
 YAPF_VER="0.30.0"
-PYTEST_VER="6.0.1"
+PYTEST_VER="7.1.2"
 PYTEST_RANDOMLY_VER="3.8.0"
 
 set -euo pipefail
@@ -16,7 +20,14 @@ echo
 export PATH_TO_OPEN3D_ML=$(pwd)
 # the build system of the main repo expects a master branch. make sure master exists
 git checkout -b master || true
-pip install -r requirements.txt
+python -m pip install -U pip==$PIP_VER \
+    wheel=="$WHEEL_VER" \
+    setuptools=="$STOOLS_VER" \
+    yapf=="$YAPF_VER" \
+    pytest=="$PYTEST_VER" \
+    pytest-randomly=="$PYTEST_RANDOMLY_VER"
+
+python -m pip install -r requirements.txt
 echo $PATH_TO_OPEN3D_ML
 cd ..
 python -m pip install -U Cython
@@ -26,28 +37,25 @@ echo
 git clone --recursive --branch master https://github.com/isl-org/Open3D.git
 
 ./Open3D/util/install_deps_ubuntu.sh assume-yes
-python -m pip install -U tensorflow-cpu==$TENSORFLOW_VER
-python -m pip install -U torch==${TORCH_GLNX_VER} -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
-python -m pip install -U pytest=="$PYTEST_VER" \
-    pytest-randomly=="$PYTEST_RANDOMLY_VER"
-python -m pip install -U yapf=="$YAPF_VER"
-python -m pip install -U openvino-dev==2021.4.2
+python -m pip install -U tensorflow-cpu==$TENSORFLOW_VER \
+    torch==${TORCH_GLNX_VER} --extra-index-url https://download.pytorch.org/whl/cpu/
+# openvino-dev=="$OPENVINO_DEV_VER"
 
 echo 3. Configure for bundling the Open3D-ML part
 echo
 mkdir Open3D/build
 pushd Open3D/build
 cmake -DBUNDLE_OPEN3D_ML=ON \
-    -DOPEN3D_ML_ROOT=$PATH_TO_OPEN3D_ML \
-    -DGLIBCXX_USE_CXX11_ABI=OFF \
-    -DBUILD_TENSORFLOW_OPS=ON \
-    -DBUILD_PYTORCH_OPS=ON \
-    -DBUILD_GUI=OFF \
-    -DBUILD_RPC_INTERFACE=OFF \
-    -DBUILD_UNIT_TESTS=OFF \
-    -DBUILD_BENCHMARKS=OFF \
-    -DBUILD_EXAMPLES=OFF \
-    ..
+      -DOPEN3D_ML_ROOT=$PATH_TO_OPEN3D_ML \
+      -DGLIBCXX_USE_CXX11_ABI=OFF \
+      -DBUILD_TENSORFLOW_OPS=ON \
+      -DBUILD_PYTORCH_OPS=ON \
+      -DBUILD_GUI=OFF \
+      -DBUILD_RPC_INTERFACE=OFF \
+      -DBUILD_UNIT_TESTS=OFF \
+      -DBUILD_BENCHMARKS=OFF \
+      -DBUILD_EXAMPLES=OFF \
+      ..
 
 echo 4. Build and install wheel
 echo
@@ -60,12 +68,12 @@ popd
 mkdir test_workdir
 pushd test_workdir
 mv $PATH_TO_OPEN3D_ML/tests .
-echo Add --rondomly-seed=SEED to the test command to reproduce test order.
+echo Add --randomly-seed=SEED to the test command to reproduce test order.
 python -m pytest tests
 
 echo ... now do the same but in dev mode by setting OPEN3D_ML_ROOT
 export OPEN3D_ML_ROOT=$PATH_TO_OPEN3D_ML
-echo Add --rondomly-seed=SEED to the test command to reproduce test order.
+echo Add --randomly-seed=SEED to the test command to reproduce test order.
 python -m pytest tests
 unset OPEN3D_ML_ROOT
 
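The script now pins TensorFlow 2.8.2 and the CPU-only PyTorch 1.12.0 wheel from the PyTorch CPU index. A quick sanity check of the resulting environment can catch resolver surprises early; a minimal sketch, with the expected versions taken from the pins above:

```python
# Sketch: verify that the CPU builds pinned in run_ci.sh were installed.
import tensorflow as tf
import torch

assert tf.__version__.startswith("2.8"), tf.__version__
assert torch.__version__ == "1.12.0+cpu", torch.__version__
assert not torch.cuda.is_available()  # CPU-only wheel from the cpu index
print("TensorFlow", tf.__version__, "| PyTorch", torch.__version__)
```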

docs/tensorboard.md

Lines changed: 12 additions & 1 deletion
@@ -252,7 +252,7 @@ order.
 
 Now you can visualize the data in TensorBoard as before. The web interface
 allows showing and hiding points with different classes, changing their colors,
-and exploring predictions and intermediate network features. Scalar network
+and exploring predictions and intermediate network features. Scalar network
 features can be visualized with custom user editable colormaps, and 3D features
 can be visualized as RGB colors. Here is a video showing the different ways in
 which semantic segmentation summary data can be visualized in TensorBoard.
@@ -376,3 +376,14 @@ for step in range(len(val_split)):  # one pointcloud per step
                   step,
                   label_to_names=dset.get_label_to_names())
 ```
+
+Troubleshooting
+---------------
+
+If you cannot interact with the 3D model, or use controls in the WebRTC widget,
+make sure that Allow Autoplay is enabled for the Tensorboard web site and reload.
+
+<img src=https://user-images.githubusercontent.com/41028320/180485249-5233b65e-11b1-44ff-bfc4-35f390ef51f2.png
+title="Allow Autoplay for correct behavior."
+alt="Allow Autoplay for correct behavior."
+style="width:80%;display:block;margin:auto"></img>

ml3d/configs/pointpillars_waymo.yml

Lines changed: 10 additions & 9 deletions
@@ -2,7 +2,7 @@ dataset:
   name: Waymo
   dataset_path: # path/to/your/dataset
   cache_dir: ./logs/cache
-  steps_per_epoch_train: 5000
+  steps_per_epoch_train: 4000
 
 model:
   name: PointPillars
@@ -31,7 +31,7 @@ model:
     max_voxels: [32000, 32000]
 
   voxel_encoder:
-    in_channels: 5
+    in_channels: 4
     feat_channels: [64]
     voxel_size: *vsize
 
@@ -43,7 +43,7 @@ model:
     in_channels: 64
     out_channels: [64, 128, 256]
     layer_nums: [3, 5, 5]
-    layer_strides: [2, 2, 2]
+    layer_strides: [1, 2, 2]
 
   neck:
     in_channels: [64, 128, 256]
@@ -62,17 +62,18 @@ model:
       [-74.88, -74.88, 0, 74.88, 74.88, 0],
     ]
     sizes: [
-      [2.08, 4.73, 1.77], # car
-      [0.84, 1.81, 1.77], # cyclist
-      [0.84, 0.91, 1.74]  # pedestrian
+      [2.08, 4.73, 1.77], # VEHICLE
+      [0.84, 1.81, 1.77], # CYCLIST
+      [0.84, 0.91, 1.74]  # PEDESTRIAN
     ]
     dir_offset: 0.7854
     rotations: [0, 1.57]
     iou_thr: [[0.4, 0.55], [0.3, 0.5], [0.3, 0.5]]
 
   augment:
     PointShuffle: True
-    ObjectRangeFilter: True
+    ObjectRangeFilter:
+      point_cloud_range: [-74.88, -74.88, -2, 74.88, 74.88, 4]
     ObjectSample:
       min_points_dict:
         VEHICLE: 5
@@ -88,7 +89,7 @@ pipeline:
   name: ObjectDetection
   test_compute_metric: true
   batch_size: 6
-  val_batch_size: 1
+  val_batch_size: 6
   test_batch_size: 1
   save_ckpt_freq: 5
   max_epoch: 200
@@ -102,7 +103,7 @@ pipeline:
   weight_decay: 0.01
 
   # evaluation properties
-  overlaps: [0.5, 0.5, 0.7]
+  overlaps: [0.5, 0.5, 0.5]
   difficulties: [0, 1, 2]
   summary:
     record_for: []
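For reference, a hedged sketch of how this config is typically consumed through Open3D-ML's config loader; the dataset path is a placeholder and the exact pipeline wiring may differ in your setup:

```python
# Sketch: build the Waymo PointPillars pipeline from the updated YAML.
# Assumes Open3D was built with ML ops; the dataset path is a placeholder.
import open3d.ml as _ml3d
import open3d.ml.torch as ml3d  # or open3d.ml.tf

cfg = _ml3d.utils.Config.load_from_file("ml3d/configs/pointpillars_waymo.yml")
cfg.dataset["dataset_path"] = "/path/to/waymo"  # placeholder

dataset = ml3d.datasets.Waymo(**cfg.dataset)
model = ml3d.models.PointPillars(**cfg.model)
pipeline = ml3d.pipelines.ObjectDetection(model=model, dataset=dataset, **cfg.pipeline)
# pipeline.run_train()
```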

ml3d/datasets/augment/augmentation.py

Lines changed: 1 addition & 1 deletion
@@ -493,7 +493,7 @@ def ObjectSample(self, data, db_boxes_dict, sample_dict):
         sampled_points = np.concatenate(
             [box.points_inside_box for box in sampled], axis=0)
         points = remove_points_in_boxes(points, sampled)
-        points = np.concatenate([sampled_points, points], axis=0)
+        points = np.concatenate([sampled_points[:, :4], points], axis=0)
 
         return {
             'point': points,
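The slice guards against a column mismatch: the sampled database points can carry more per-point attributes than the scene points at this stage, and `np.concatenate` along axis 0 requires identical trailing dimensions. A small illustration with hypothetical shapes:

```python
# Sketch: np.concatenate along axis 0 needs matching column counts.
# Shapes are hypothetical, chosen only to mirror the fix above.
import numpy as np

sampled_points = np.random.rand(100, 6).astype(np.float32)  # x, y, z + extra attributes
points = np.random.rand(5000, 4).astype(np.float32)         # x, y, z, intensity

# np.concatenate([sampled_points, points], axis=0)  # would raise: dimensions mismatch
merged = np.concatenate([sampled_points[:, :4], points], axis=0)
print(merged.shape)  # (5100, 4)
```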

ml3d/datasets/utils/operations.py

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 import math
 from scipy.spatial import ConvexHull
 
-from ...metrics import iou_bev
+from open3d.ml.contrib import iou_bev_cpu as iou_bev
 
 
 def create_3D_rotations(axis, angle):
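The BEV IoU helper is now taken from Open3D's compiled contrib module instead of the in-repo metrics package. A hedged usage sketch; the `[x, y, w, l, yaw]` box layout and float32 requirement are assumptions, so check the contrib documentation for the exact convention:

```python
# Sketch only: pairwise BEV IoU via Open3D's contrib op.
# The box layout ([x_center, y_center, width, length, yaw]) is an assumption.
import numpy as np
from open3d.ml.contrib import iou_bev_cpu as iou_bev

boxes_a = np.array([[0.0, 0.0, 2.0, 4.0, 0.0]], dtype=np.float32)
boxes_b = np.array([[0.5, 0.0, 2.0, 4.0, 0.0]], dtype=np.float32)

print(iou_bev(boxes_a, boxes_b))  # pairwise IoU values between boxes_a and boxes_b
```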

ml3d/datasets/waymo.py

Lines changed: 30 additions & 29 deletions
@@ -25,7 +25,6 @@ def __init__(self,
                  name='Waymo',
                  cache_dir='./logs/cache',
                  use_cache=False,
-                 val_split=3,
                  **kwargs):
         """Initialize the function by passing the dataset and other details.
 
@@ -34,7 +33,6 @@ def __init__(self,
             name: The name of the dataset (Waymo in this case).
             cache_dir: The directory where the cache is stored.
             use_cache: Indicates if the dataset should be cached.
-            val_split: The split value to get a set of images for training, validation, for testing.
 
         Returns:
             class: The corresponding class.
@@ -43,7 +41,6 @@ def __init__(self,
                          name=name,
                          cache_dir=cache_dir,
                          use_cache=use_cache,
-                         val_split=val_split,
                          **kwargs)
 
         cfg = self.cfg
@@ -52,22 +49,27 @@ def __init__(self,
         self.dataset_path = cfg.dataset_path
         self.num_classes = 4
         self.label_to_names = self.get_label_to_names()
+        self.shuffle = kwargs.get('shuffle', False)
 
         self.all_files = sorted(
             glob(join(cfg.dataset_path, 'velodyne', '*.bin')))
         self.train_files = []
         self.val_files = []
+        self.test_files = []
 
         for f in self.all_files:
-            idx = Path(f).name.replace('.bin', '')[:3]
-            idx = int(idx)
-            if idx < cfg.val_split:
+            if 'train' in f:
                 self.train_files.append(f)
-            else:
+            elif 'val' in f:
                 self.val_files.append(f)
-
-        self.test_files = glob(
-            join(cfg.dataset_path, 'testing', 'velodyne', '*.bin'))
+            elif 'test' in f:
+                self.test_files.append(f)
+            else:
+                log.warning(
+                    f"Skipping {f}, prefix must be one of train, test or val.")
+        if self.shuffle:
+            log.info("Shuffling training files...")
+            self.rng.shuffle(self.train_files)
 
     @staticmethod
     def get_label_to_names():
@@ -90,18 +92,21 @@ def read_lidar(path):
         """Reads lidar data from the path provided.
 
         Returns:
-            A data object with lidar information.
+            pc: pointcloud data with shape [N, 6], where
+            the format is xyzRGB.
         """
-        assert Path(path).exists()
-
         return np.fromfile(path, dtype=np.float32).reshape(-1, 6)
 
     @staticmethod
     def read_label(path, calib):
-        """Reads labels of bound boxes.
+        """Reads labels of bounding boxes.
+
+        Args:
+            path: The path to the label file.
+            calib: Calibration as returned by read_calib().
 
         Returns:
-            The data objects with bound boxes information.
+            The data objects with bounding boxes information.
         """
         if not Path(path).exists():
             return None
@@ -131,24 +136,22 @@ def read_calib(path):
         Returns:
             The camera and the camera image used in calibration.
         """
-        assert Path(path).exists()
-
         with open(path, 'r') as f:
             lines = f.readlines()
         obj = lines[0].strip().split(' ')[1:]
-        P0 = np.array(obj, dtype=np.float32)
+        unused_P0 = np.array(obj, dtype=np.float32)
 
         obj = lines[1].strip().split(' ')[1:]
-        P1 = np.array(obj, dtype=np.float32)
+        unused_P1 = np.array(obj, dtype=np.float32)
 
         obj = lines[2].strip().split(' ')[1:]
         P2 = np.array(obj, dtype=np.float32)
 
         obj = lines[3].strip().split(' ')[1:]
-        P3 = np.array(obj, dtype=np.float32)
+        unused_P3 = np.array(obj, dtype=np.float32)
 
         obj = lines[4].strip().split(' ')[1:]
-        P4 = np.array(obj, dtype=np.float32)
+        unused_P4 = np.array(obj, dtype=np.float32)
 
         obj = lines[5].strip().split(' ')[1:]
         R0 = np.array(obj, dtype=np.float32).reshape(3, 3)
@@ -162,7 +165,7 @@ def read_calib(path):
         Tr_velo_to_cam = Waymo._extend_matrix(Tr_velo_to_cam)
 
         world_cam = np.transpose(rect_4x4 @ Tr_velo_to_cam)
-        cam_img = np.transpose(P2)
+        cam_img = np.transpose(np.vstack((P2.reshape(3, 4), [0, 0, 0, 1])))
 
         return {'world_cam': world_cam, 'cam_img': cam_img}
 
@@ -209,7 +212,7 @@ def get_split_list(self, split):
         else:
             raise ValueError("Invalid split {}".format(split))
 
-    def is_tested():
+    def is_tested(attr):
         """Checks if a datum in the dataset has been tested.
 
         Args:
@@ -219,16 +222,16 @@ def is_tested():
             If the datum attribute is tested, then return the path where the
             attribute is stored; else, returns false.
         """
-        pass
+        raise NotImplementedError()
 
-    def save_test_result():
+    def save_test_result(results, attr):
         """Saves the output of a model.
 
         Args:
             results: The output of a model for the datum associated with the attribute passed.
            attr: The attributes that correspond to the outputs passed in results.
         """
-        pass
+        raise NotImplementedError()
 
 
 class WaymoSplit():
@@ -273,11 +276,9 @@ def get_attr(self, idx):
 
 
 class Object3d(BEVBox3D):
-    """The class stores details that are object-specific, such as bounding box
-    coordinates, occlusion and so on.
-    """
 
     def __init__(self, center, size, label, calib):
+        # ground truth files doesn't have confidence value.
         confidence = float(label[15]) if label.__len__() == 16 else -1.0
 
         world_cam = calib['world_cam']
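The split is now derived from the file name rather than a numeric `val_split` threshold, so the preprocessed `velodyne/*.bin` files are expected to carry a `train`, `val`, or `test` prefix. A standalone sketch of the same partitioning logic, with hypothetical file names:

```python
# Sketch of the prefix-based split used above; file names are hypothetical.
from pathlib import Path

def split_by_prefix(files):
    splits = {"train": [], "val": [], "test": []}
    for f in files:
        name = Path(f).name
        for prefix in splits:
            if prefix in name:
                splits[prefix].append(f)
                break
        else:
            print(f"Skipping {f}, prefix must be one of train, test or val.")
    return splits

files = ["velodyne/train_000.bin", "velodyne/val_000.bin", "velodyne/test_000.bin"]
print({k: len(v) for k, v in split_by_prefix(files).items()})  # {'train': 1, 'val': 1, 'test': 1}
```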

ml3d/tf/pipelines/object_detection.py

Lines changed: 1 addition & 1 deletion
@@ -290,7 +290,7 @@ def run_train(self):
 
             self.save_logs(writer, epoch)
 
-            if epoch % cfg.save_ckpt_freq == 0:
+            if epoch % cfg.save_ckpt_freq == 0 or epoch == cfg.max_epoch:
                 self.save_ckpt(epoch)
 
     def get_3d_summary(self,
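Without the added `or epoch == cfg.max_epoch` clause, the final epoch's weights were only saved when `max_epoch` happened to be a multiple of `save_ckpt_freq`. A tiny check of the condition, using illustrative values:

```python
# Sketch: which epochs trigger a checkpoint save under the old and new rules.
def should_save(epoch, save_ckpt_freq, max_epoch):
    return epoch % save_ckpt_freq == 0 or epoch == max_epoch

print([e for e in range(1, 13) if should_save(e, 5, 12)])  # [5, 10, 12] -> last epoch is kept
print([e for e in range(1, 13) if e % 5 == 0])             # [5, 10]     -> old rule drops epoch 12
```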
