
Commit 3dc05b4

Merge branch 'release/1.4.0'
2 parents: 1d26b5a + b72d48a

9 files changed: 143 additions, 195 deletions

.github/workflows/python-package.yml (31 additions, 32 deletions)
````diff
@@ -4,38 +4,37 @@
 name: Python package
 
 on:
-  push:
-    branches: [ develop ]
-  pull_request:
-    branches: [ develop ]
+  push:
+    branches: [develop]
+  pull_request:
+    branches: [develop]
 
 jobs:
-  build:
+  build:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: ["3.8", "3.9", "3.10"]
 
-    runs-on: ubuntu-latest
-    strategy:
-      fail-fast: false
-      matrix:
-        python-version: ["3.8", "3.9", "3.10"]
-
-    steps:
-    - uses: actions/checkout@v3
-    - name: Set up Python ${{ matrix.python-version }}
-      uses: actions/setup-python@v3
-      with:
-        python-version: ${{ matrix.python-version }}
-    - name: Install dependencies
-      run: |
-        python -m pip install --upgrade pip
-        python -m pip install flake8 pytest pytest-benchmark
-        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
-        pip install lap scipy ortools lapsolver munkres
-    - name: Lint with flake8
-      run: |
-        # stop the build if there are Python syntax errors or undefined names
-        flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
-        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
-        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
-    - name: Test with pytest
-      run: |
-        pytest
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v3
+        with:
+          python-version: ${{ matrix.python-version }}
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          python -m pip install flake8 pytest pytest-benchmark
+          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
+          pip install lap scipy "ortools<9.4" lapsolver munkres
+      - name: Lint with flake8
+        run: |
+          # stop the build if there are Python syntax errors or undefined names
+          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
+          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
+          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
+      - name: Test with pytest
+        run: |
+          pytest
````
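The only functional change in this workflow is the new `"ortools<9.4"` pin in the install step; everything else is reformatting. As a hedged aside (not part of the commit), a quick way to check locally which of these optional solver backends are importable before running `pytest`; this uses only the standard library, not py-motmetrics API:

```python
# Minimal local check (illustrative only): report which optional solver
# backends installed by the CI job can actually be imported in this environment.
import importlib

for backend in ("lap", "scipy", "ortools", "lapsolver", "munkres"):
    try:
        importlib.import_module(backend)
        print(f"{backend}: available")
    except ImportError:
        print(f"{backend}: not installed")
```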

.gitignore (1 addition, 0 deletions)

````diff
@@ -50,3 +50,4 @@ Temporary Items
 *.egg-info/
 build/
 dist/
+.venv/
````

Readme.md (76 additions, 58 deletions)
````diff
@@ -10,27 +10,30 @@ While benchmarking single object trackers is rather straightforward, measuring t
 
 ![](./motmetrics/etc/mot.png)<br/>
 
-*Pictures courtesy of Bernardin, Keni, and Rainer Stiefelhagen [[1]](#References)*
+_Pictures courtesy of Bernardin, Keni, and Rainer Stiefelhagen [[1]](#References)_
+
 </div>
 
 In particular **py-motmetrics** supports `CLEAR-MOT`[[1,2]](#References) metrics and `ID`[[4]](#References) metrics. Both metrics attempt to find a minimum cost assignment between ground truth objects and predictions. However, while CLEAR-MOT solves the assignment problem on a local per-frame basis, `ID-MEASURE` solves the bipartite graph matching by finding the minimum cost of objects and predictions over all frames. This [blog-post](https://web.archive.org/web/20190413133409/http://vision.cs.duke.edu:80/DukeMTMC/IDmeasures.html) by Ergys illustrates the differences in more detail.
 
 ## Features at a glance
-- *Variety of metrics* <br/>
-  Provides MOTA, MOTP, track quality measures, global ID measures and more. The results are [comparable](#MOTChallengeCompatibility) with the popular [MOTChallenge][MOTChallenge] benchmarks [(*1)](#asterixcompare).
-- *Distance agnostic* <br/>
-  Supports Euclidean, Intersection over Union and other distances measures.
-- *Complete event history* <br/>
-  Tracks all relevant per-frame events suchs as correspondences, misses, false alarms and switches.
-- *Flexible solver backend* <br/>
-  Support for switching minimum assignment cost solvers. Supports `scipy`, `ortools`, `munkres` out of the box. Auto-tunes solver selection based on [availability and problem size](#SolverBackends).
-- *Easy to extend* <br/>
-  Events and summaries are utilizing [pandas][pandas] for data structures and analysis. New metrics can reuse already computed values from depending metrics.
+
+- _Variety of metrics_ <br/>
+  Provides MOTA, MOTP, track quality measures, global ID measures and more. The results are [comparable](#MOTChallengeCompatibility) with the popular [MOTChallenge][motchallenge] benchmarks [(\*1)](#asterixcompare).
+- _Distance agnostic_ <br/>
+  Supports Euclidean, Intersection over Union and other distances measures.
+- _Complete event history_ <br/>
+  Tracks all relevant per-frame events suchs as correspondences, misses, false alarms and switches.
+- _Flexible solver backend_ <br/>
+  Support for switching minimum assignment cost solvers. Supports `scipy`, `ortools`, `munkres` out of the box. Auto-tunes solver selection based on [availability and problem size](#SolverBackends).
+- _Easy to extend_ <br/>
+  Events and summaries are utilizing [pandas][pandas] for data structures and analysis. New metrics can reuse already computed values from depending metrics.
 
 <a name="Metrics"></a>
+
 ## Metrics
 
-**py-motmetrics** implements the following metrics. The metrics have been aligned with what is reported by [MOTChallenge][MOTChallenge] benchmarks.
+**py-motmetrics** implements the following metrics. The metrics have been aligned with what is reported by [MOTChallenge][motchallenge] benchmarks.
 
 ```python
 import motmetrics as mm
````

````diff
@@ -39,42 +42,41 @@ mh = mm.metrics.create()
 print(mh.list_metrics_markdown())
 ```
 
-Name|Description
-:---|:---
-num_frames|Total number of frames.
-num_matches|Total number matches.
-num_switches|Total number of track switches.
-num_false_positives|Total number of false positives (false-alarms).
-num_misses|Total number of misses.
-num_detections|Total number of detected objects including matches and switches.
-num_objects|Total number of unique object appearances over all frames.
-num_predictions|Total number of unique prediction appearances over all frames.
-num_unique_objects|Total number of unique object ids encountered.
-mostly_tracked|Number of objects tracked for at least 80 percent of lifespan.
-partially_tracked|Number of objects tracked between 20 and 80 percent of lifespan.
-mostly_lost|Number of objects tracked less than 20 percent of lifespan.
-num_fragmentations|Total number of switches from tracked to not tracked.
-motp|Multiple object tracker precision.
-mota|Multiple object tracker accuracy.
-precision|Number of detected objects over sum of detected and false positives.
-recall|Number of detections over number of objects.
-idfp|ID measures: Number of false positive matches after global min-cost matching.
-idfn|ID measures: Number of false negatives matches after global min-cost matching.
-idtp|ID measures: Number of true positives matches after global min-cost matching.
-idp|ID measures: global min-cost precision.
-idr|ID measures: global min-cost recall.
-idf1|ID measures: global min-cost F1 score.
-obj_frequencies|`pd.Series` Total number of occurrences of individual objects over all frames.
-pred_frequencies|`pd.Series` Total number of occurrences of individual predictions over all frames.
-track_ratios|`pd.Series` Ratio of assigned to total appearance count per unique object id.
-id_global_assignment| `dict` ID measures: Global min-cost assignment for ID measures.
-
-
+| Name | Description |
+| :------------------- | :--------------------------------------------------------------------------------- |
+| num_frames | Total number of frames. |
+| num_matches | Total number matches. |
+| num_switches | Total number of track switches. |
+| num_false_positives | Total number of false positives (false-alarms). |
+| num_misses | Total number of misses. |
+| num_detections | Total number of detected objects including matches and switches. |
+| num_objects | Total number of unique object appearances over all frames. |
+| num_predictions | Total number of unique prediction appearances over all frames. |
+| num_unique_objects | Total number of unique object ids encountered. |
+| mostly_tracked | Number of objects tracked for at least 80 percent of lifespan. |
+| partially_tracked | Number of objects tracked between 20 and 80 percent of lifespan. |
+| mostly_lost | Number of objects tracked less than 20 percent of lifespan. |
+| num_fragmentations | Total number of switches from tracked to not tracked. |
+| motp | Multiple object tracker precision. |
+| mota | Multiple object tracker accuracy. |
+| precision | Number of detected objects over sum of detected and false positives. |
+| recall | Number of detections over number of objects. |
+| idfp | ID measures: Number of false positive matches after global min-cost matching. |
+| idfn | ID measures: Number of false negatives matches after global min-cost matching. |
+| idtp | ID measures: Number of true positives matches after global min-cost matching. |
+| idp | ID measures: global min-cost precision. |
+| idr | ID measures: global min-cost recall. |
+| idf1 | ID measures: global min-cost F1 score. |
+| obj_frequencies | `pd.Series` Total number of occurrences of individual objects over all frames. |
+| pred_frequencies | `pd.Series` Total number of occurrences of individual predictions over all frames. |
+| track_ratios | `pd.Series` Ratio of assigned to total appearance count per unique object id. |
+| id_global_assignment | `dict` ID measures: Global min-cost assignment for ID measures. |
 
 <a name="MOTChallengeCompatibility"></a>
+
 ## MOTChallenge compatibility
 
-**py-motmetrics** produces results compatible with popular [MOTChallenge][MOTChallenge] benchmarks [(*1)](#asterixcompare). Below are two results taken from MOTChallenge [Matlab devkit][devkit] corresponding to the results of the CEM tracker on the training set of the 2015 MOT 2DMark.
+**py-motmetrics** produces results compatible with popular [MOTChallenge][motchallenge] benchmarks [(\*1)](#asterixcompare). Below are two results taken from MOTChallenge [Matlab devkit][devkit] corresponding to the results of the CEM tracker on the training set of the 2015 MOT 2DMark.
 
 ```
````

````diff
@@ -96,15 +98,19 @@ TUD-Campus 55.8% 73.0% 45.1% 58.2% 94.1% 8 1 6 1 13 150 7 7 52.6% 0.
 TUD-Stadtmitte 64.5% 82.0% 53.1% 60.9% 94.0% 10 5 4 1 45 452 7 6 56.4% 0.346
 ```
 
-<a name="asterixcompare"></a>(*1) Besides naming conventions, the only obvious differences are
-- Metric `FAR` is missing. This metric is given implicitly and can be recovered by `FalsePos / Frames * 100`.
-- Metric `MOTP` seems to be off. To convert compute `(1 - MOTP) * 100`. [MOTChallenge][MOTChallenge] benchmarks compute `MOTP` as percentage, while **py-motmetrics** sticks to the original definition of average distance over number of assigned objects [[1]](#References).
+<a name="asterixcompare"></a>(\*1) Besides naming conventions, the only obvious differences are
+
+- Metric `FAR` is missing. This metric is given implicitly and can be recovered by `FalsePos / Frames * 100`.
+- Metric `MOTP` seems to be off. To convert compute `(1 - MOTP) * 100`. [MOTChallenge][motchallenge] benchmarks compute `MOTP` as percentage, while **py-motmetrics** sticks to the original definition of average distance over number of assigned objects [[1]](#References).
 
 You can compare tracker results to ground truth in MOTChallenge format by
+
 ```
 python -m motmetrics.apps.eval_motchallenge --help
 ```
+
 For MOT16/17, you can run
+
 ```
 python -m motmetrics.apps.evaluateTracking --help
 ```
````

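The two commands above are the CLI entry points for MOTChallenge-format evaluation. For orientation only, a hedged sketch of driving the same comparison from Python instead of the CLI; the file paths are placeholders, and the `mot15-2D` format flag and the 0.5 IoU threshold are assumptions on my part, not something this commit prescribes:

```python
# Hedged sketch: load MOTChallenge-format ground truth and tracker output,
# build an event accumulator, and report a few summary metrics.
import motmetrics as mm

gt = mm.io.loadtxt("data/train/TUD-Campus/gt/gt.txt", fmt="mot15-2D")  # placeholder path
ts = mm.io.loadtxt("results/TUD-Campus.txt", fmt="mot15-2D")           # placeholder path

# Match hypotheses to ground truth per frame using an IoU-based distance.
acc = mm.utils.compare_to_groundtruth(gt, ts, "iou", distth=0.5)

mh = mm.metrics.create()
print(mh.compute(acc, metrics=["mota", "motp", "idf1"], name="TUD-Campus"))
```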
````diff
@@ -117,8 +123,8 @@ To install latest development version of **py-motmetrics** (usually a bit more r
 pip install git+https://github.com/cheind/py-motmetrics.git
 ```
 
-
 ### Install via PyPi
+
 To install **py-motmetrics** use `pip`
 
 ```
````

````diff
@@ -134,6 +140,7 @@ pip install -e <path/to/setup.py>
 ```
 
 ### Install via Conda
+
 In case you are using Conda, a simple way to run **py-motmetrics** is to create a virtual environment with all the necessary dependencies
 
 ```
````

````diff
@@ -261,6 +268,7 @@ Event
 Object `2` is now tracked by hypothesis `3` leading to a track switch. Note, although a pairing `(1, 3)` with cost less than 0.6 is possible, the algorithm prefers prefers to continue track assignments from past frames which is a property of MOT metrics.
 
 ### Computing metrics
+
 Once the accumulator has been populated you can compute and display metrics. Continuing the example from above
 
 ```python
````

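The context above introduces the README's "Computing metrics" step, whose code block is elided from this diff. A minimal hedged sketch of that step; the two-frame accumulator below is made up for illustration and is not the README's example:

```python
# Hedged sketch: populate a tiny accumulator, then compute metrics from it.
import numpy as np
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

# Frame 1: ground-truth objects 1 and 2 vs. hypotheses 'a' and 'b';
# np.nan marks a pairing that is not allowed.
acc.update([1, 2], ["a", "b"], [[0.1, np.nan], [0.5, 0.2]])
# Frame 2: only hypothesis 'a' remains, so one object goes unmatched.
acc.update([1, 2], ["a"], [[0.2], [0.4]])

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=["num_frames", "mota", "motp"], name="acc")
print(summary)
```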
````diff
@@ -355,6 +363,7 @@ OVERALL 80.0% 80.0% 80.0% 80.0% 80.0% 4 2 2 0 2 2 1 1 50.0% 0.275
 ```
 
 ### Computing distances
+
 Up until this point we assumed the pairwise object/hypothesis distances to be known. Usually this is not the case. You are mostly given either rectangles or points (centroids) of related objects. To compute a distance matrix from them you can use `motmetrics.distance` module as shown below.
 
 #### Euclidean norm squared on points
````

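The README block under this heading is elided from the diff; the next hunk's context shows it calls `mm.distances.norm2squared_matrix(o, h, max_d2=5.)`. A hedged stand-in with made-up points:

```python
# Hedged sketch: squared Euclidean distances between object and hypothesis
# points; pairs farther apart than max_d2 become NaN and are never matched.
import numpy as np
import motmetrics as mm

o = np.array([[1.0, 2.0], [2.0, 2.0]])   # object centroids
h = np.array([[1.0, 2.0], [4.0, 2.0]])   # hypothesis centroids

C = mm.distances.norm2squared_matrix(o, h, max_d2=5.0)
print(C)
```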
````diff
@@ -383,6 +392,7 @@ C = mm.distances.norm2squared_matrix(o, h, max_d2=5.)
 ```
 
 #### Intersection over union norm for 2D rectangles
+
 ```python
 a = np.array([
     [0, 0, 1, 2],    # Format X, Y, Width, Height
````

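The rectangle example above is truncated by the diff view; the next hunk's context shows it ends in `mm.distances.iou_matrix(a, b, max_iou=0.5)`. A hedged, self-contained version of the same idea with made-up boxes:

```python
# Hedged sketch: IoU-based distance (1 - IoU) for rectangles in X, Y, Width,
# Height format; pairs whose distance exceeds max_iou are marked NaN.
import numpy as np
import motmetrics as mm

a = np.array([[0, 0, 1, 2]])                    # one ground-truth box
b = np.array([[0, 0, 1, 1], [5, 5, 1, 2]])      # two hypothesis boxes

D = mm.distances.iou_matrix(a, b, max_iou=0.5)
print(D)
```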
````diff
@@ -403,13 +413,16 @@ mm.distances.iou_matrix(a, b, max_iou=0.5)
 ```
 
 <a name="SolverBackends"></a>
+
 ### Solver backends
+
 For large datasets solving the minimum cost assignment becomes the dominant runtime part. **py-motmetrics** therefore supports these solvers out of the box
-- `lapsolver` - https://github.com/cheind/py-lapsolver
-- `lapjv` - https://github.com/gatagat/lap
-- `scipy` - https://github.com/scipy/scipy/tree/master/scipy
-- `ortools` - https://github.com/google/or-tools
-- `munkres` - http://software.clapper.org/munkres/
+
+- `lapsolver` - https://github.com/cheind/py-lapsolver
+- `lapjv` - https://github.com/gatagat/lap
+- `scipy` - https://github.com/scipy/scipy/tree/master/scipy
+- `ortools<9.4` - https://github.com/google/or-tools
+- `munkres` - http://software.clapper.org/munkres/
 
 A comparison for different sized matrices is shown below (taken from [here](https://github.com/cheind/py-lapsolver#benchmarks))
````
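The next hunk's context line (`with lap.set_default_solver(mysolver):`) shows the README's own solver-switching idiom. A hedged sketch of that idiom with the `scipy` backend and a made-up cost matrix; using `linear_sum_assignment` from `motmetrics.lap` to exercise the chosen solver is my assumption, not part of this commit:

```python
# Hedged sketch: temporarily select an assignment solver, then solve a small
# min-cost assignment problem with it.
import numpy as np
import motmetrics as mm

costs = np.array([[6.0, 9.0, 1.0],
                  [10.0, 3.0, 2.0],
                  [8.0, 7.0, 4.0]])

with mm.lap.set_default_solver("scipy"):
    rids, cids = mm.lap.linear_sum_assignment(costs)
    print(list(zip(rids, cids)))   # matched (row, column) index pairs
```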

````diff
@@ -427,30 +440,36 @@ with lap.set_default_solver(mysolver):
 ```
 
 ## Running tests
+
 **py-motmetrics** uses the pytest framework. To run the tests, simply `cd` into the source directly and run `pytest`.
 
 <a name="References"></a>
+
 ### References
+
 1. Bernardin, Keni, and Rainer Stiefelhagen. "Evaluating multiple object tracking performance: the CLEAR MOT metrics."
-EURASIP Journal on Image and Video Processing 2008.1 (2008): 1-10.
+   EURASIP Journal on Image and Video Processing 2008.1 (2008): 1-10.
 2. Milan, Anton, et al. "Mot16: A benchmark for multi-object tracking." arXiv preprint arXiv:1603.00831 (2016).
 3. Li, Yuan, Chang Huang, and Ram Nevatia. "Learning to associate: Hybridboosted multi-target tracker for crowded scene."
-Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.
+   Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.
 4. Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking. E. Ristani, F. Solera, R. S. Zou, R. Cucchiara and C. Tomasi. ECCV 2016 Workshop on Benchmarking Multi-Target Tracking.
 
 ## Docker
 
 ### Update ground truth and test data:
+
 /data/train directory should contain MOT 2D 2015 Ground Truth files.
 /data/test directory should contain your results.
 
 You can check usage and directory listing at
 https://github.com/cheind/py-motmetrics/blob/master/motmetrics/apps/eval_motchallenge.py
 
 ### Build Image
+
 docker build -t desired-image-name -f Dockerfile .
 
 ### Run Image
+
 docker run desired-image-name
 
 (credits to [christosavg](https://github.com/christosavg))
````

````diff
@@ -483,7 +502,6 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 SOFTWARE.
 ```
 
-
-[Pandas]: http://pandas.pydata.org/
-[MOTChallenge]: https://motchallenge.net/
+[pandas]: http://pandas.pydata.org/
+[motchallenge]: https://motchallenge.net/
 [devkit]: https://motchallenge.net/devkit/
````
