<br/>

_Pictures courtesy of Bernardin, Keni, and Rainer Stiefelhagen [[1]](#References)_

</div>

In particular, **py-motmetrics** supports `CLEAR-MOT`[[1,2]](#References) metrics and `ID`[[4]](#References) metrics. Both families of metrics attempt to find a minimum-cost assignment between ground-truth objects and predictions. However, while CLEAR-MOT solves the assignment problem on a local, per-frame basis, `ID-MEASURE` solves the bipartite graph matching globally, finding the minimum-cost correspondence between objects and predictions over all frames. This [blog post](https://web.archive.org/web/20190413133409/http://vision.cs.duke.edu:80/DukeMTMC/IDmeasures.html) by Ergys illustrates the differences in more detail.
## Features at a glance
- _Variety of metrics_ <br/>
  Provides MOTA, MOTP, track quality measures, global ID measures and more. The results are [comparable](#MOTChallengeCompatibility) with the popular [MOTChallenge][motchallenge] benchmarks [(\*1)](#asterixcompare).
- _Distance agnostic_ <br/>
  Supports Euclidean, Intersection over Union and other distance measures.
- _Complete event history_ <br/>
  Tracks all relevant per-frame events such as correspondences, misses, false alarms and switches.
- _Flexible solver backend_ <br/>
  Supports switching between minimum-cost assignment solvers. `scipy`, `ortools` and `munkres` work out of the box. Solver selection is auto-tuned based on [availability and problem size](#SolverBackends).
- _Easy to extend_ <br/>
  Events and summaries use [pandas][pandas] data structures, which makes further analysis straightforward. New metrics can reuse values already computed by the metrics they depend on.

<a name="Metrics"></a>

## Metrics
**py-motmetrics** implements the following metrics. The metrics have been aligned with what is reported by [MOTChallenge][motchallenge] benchmarks.

```python
import motmetrics as mm

# Create a metrics host and list all registered metrics.
mh = mm.metrics.create()
print(mh.list_metrics_markdown())
```

| Name | Description |
| :--- | :--- |
| num_frames | Total number of frames. |
| num_matches | Total number of matches. |
| num_switches | Total number of track switches. |
| num_false_positives | Total number of false positives (false-alarms). |
| num_misses | Total number of misses. |
| num_detections | Total number of detected objects including matches and switches. |
| num_objects | Total number of unique object appearances over all frames. |
| num_predictions | Total number of unique prediction appearances over all frames. |
| num_unique_objects | Total number of unique object ids encountered. |
| mostly_tracked | Number of objects tracked for at least 80 percent of lifespan. |
| partially_tracked | Number of objects tracked between 20 and 80 percent of lifespan. |
| mostly_lost | Number of objects tracked less than 20 percent of lifespan. |
| num_fragmentations | Total number of switches from tracked to not tracked. |
| motp | Multiple object tracker precision. |
| mota | Multiple object tracker accuracy. |
| precision | Number of detected objects over sum of detected and false positives. |
| recall | Number of detections over number of objects. |
| idfp | ID measures: Number of false positive matches after global min-cost matching. |
| idfn | ID measures: Number of false negative matches after global min-cost matching. |
| idtp | ID measures: Number of true positive matches after global min-cost matching. |
| idp | ID measures: global min-cost precision. |
| idr | ID measures: global min-cost recall. |
| idf1 | ID measures: global min-cost F1 score. |
| obj_frequencies | `pd.Series` Total number of occurrences of individual objects over all frames. |
| pred_frequencies | `pd.Series` Total number of occurrences of individual predictions over all frames. |
| track_ratios | `pd.Series` Ratio of assigned to total appearance count per unique object id. |
| id_global_assignment | `dict` ID measures: Global min-cost assignment for ID measures. |

<a name="MOTChallengeCompatibility"></a>

## MOTChallenge compatibility
**py-motmetrics** produces results compatible with popular [MOTChallenge][motchallenge] benchmarks [(\*1)](#asterixcompare). Below are two results taken from the MOTChallenge [Matlab devkit][devkit], corresponding to the results of the CEM tracker on the training set of the 2015 MOT 2D benchmark.

<aname="asterixcompare"></a>(*1) Besides naming conventions, the only obvious differences are
100
-
- Metric `FAR` is missing. This metric is given implicitly and can be recovered by `FalsePos / Frames * 100`.
101
-
- Metric `MOTP` seems to be off. To convert compute `(1 - MOTP) * 100`. [MOTChallenge][MOTChallenge] benchmarks compute `MOTP` as percentage, while **py-motmetrics** sticks to the original definition of average distance over number of assigned objects [[1]](#References).
101
+
<aname="asterixcompare"></a>(\*1) Besides naming conventions, the only obvious differences are
102
+
103
+
- Metric `FAR` is missing. This metric is given implicitly and can be recovered by `FalsePos / Frames * 100`.
104
+
- Metric `MOTP` seems to be off. To convert compute `(1 - MOTP) * 100`. [MOTChallenge][motchallenge] benchmarks compute `MOTP` as percentage, while **py-motmetrics** sticks to the original definition of average distance over number of assigned objects [[1]](#References).
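
As a small illustration of both conversions (a sketch only; `summary` is assumed to be a metrics DataFrame produced by **py-motmetrics** that contains the columns used below):

```python
# summary = mh.compute(acc, metrics=['num_frames', 'num_false_positives', 'motp'], name='acc')
far = summary['num_false_positives'] / summary['num_frames'] * 100   # MOTChallenge-style FAR
motp_challenge = (1.0 - summary['motp']) * 100                       # MOTChallenge-style MOTP in percent
```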
You can compare tracker results to ground truth in MOTChallenge format as sketched below.
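
The exact command is not reproduced here; as a rough sketch, the comparison can also be driven from Python. The helper names, format flag and file paths below (`mm.io.loadtxt`, `mm.utils.compare_to_groundtruth`, `mot15-2D`) are assumptions for illustration:

```python
import motmetrics as mm

# Load ground truth and tracker output in MOTChallenge CSV format
# (paths are illustrative).
gt = mm.io.loadtxt('gt/gt.txt', fmt='mot15-2D')
ts = mm.io.loadtxt('results/my_tracker.txt', fmt='mot15-2D')

# Match hypotheses to ground-truth objects frame by frame using IoU distance.
acc = mm.utils.compare_to_groundtruth(gt, ts, 'iou', distth=0.5)

mh = mm.metrics.create()
print(mh.compute(acc, metrics=mm.metrics.motchallenge_metrics, name='my_tracker'))
```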
In case you are using Conda, a simple way to run **py-motmetrics** is to create a virtual environment with all the necessary dependencies.
Object `2` is now tracked by hypothesis `3`, leading to a track switch. Note that although a pairing `(1, 3)` with a cost below 0.6 is possible, the algorithm prefers to continue track assignments from past frames, which is a property of MOT metrics.
### Computing metrics
Once the accumulator has been populated you can compute and display metrics, continuing the example from above.
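
A minimal sketch of this step (assuming `acc` is the accumulator populated above):

```python
mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['num_frames', 'mota', 'motp'], name='acc')
print(summary)
```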
Up until this point we assumed the pairwise object/hypothesis distances to be known. Usually this is not the case. You are mostly given either rectangles or points (centroids) of the objects involved. To compute a distance matrix from them you can use the `motmetrics.distances` module as shown below.
#### Euclidean norm squared on points

```python
C = mm.distances.norm2squared_matrix(o, h, max_d2=5.)
```
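
The definitions of `o` and `h` are omitted above; a self-contained sketch with illustrative coordinates could look like this:

```python
import numpy as np
import motmetrics as mm

# One row per object/hypothesis, columns are point coordinates (values are illustrative).
o = np.array([[1.0, 2.0], [2.0, 2.0], [3.0, 2.0]])  # ground-truth points
h = np.array([[0.0, 0.0], [1.0, 1.0]])              # hypothesis points

# Pairwise squared Euclidean distances; pairs farther apart than max_d2
# are set to NaN and treated as impossible assignments.
C = mm.distances.norm2squared_matrix(o, h, max_d2=5.0)
print(C)  # a 3x2 cost matrix
```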
#### Intersection over union norm for 2D rectangles
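
The original rectangle example is not shown here; a short sketch, assuming the `mm.distances.iou_matrix` helper and rectangles given as `(x, y, width, height)`:

```python
import numpy as np
import motmetrics as mm

# Rectangles are (x, y, width, height); values are illustrative.
objs = np.array([[0.0, 0.0, 1.0, 2.0],
                 [0.0, 0.0, 0.8, 1.5]])
hyps = np.array([[0.0, 0.0, 1.0, 2.0],
                 [0.5, 0.0, 1.0, 1.0]])

# Entries are 1 - IoU; pairs whose distance exceeds max_iou are set to NaN.
C = mm.distances.iou_matrix(objs, hyps, max_iou=0.5)
```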
For large datasets, solving the minimum cost assignment becomes the dominant part of the runtime. **py-motmetrics** therefore supports several solvers out of the box.

A comparison for differently sized matrices is shown below (taken from [here](https://github.com/cheind/py-lapsolver#benchmarks)).
```python
with lap.set_default_solver(mysolver):
    ...
```
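
As a hedged sketch of plugging in a custom backend, expanding the fragment above (the `motmetrics.lap` import path and the solver's return convention are assumptions):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

from motmetrics import lap

def mysolver(costs):
    # Illustrative backend: delegate to scipy and return the matched
    # (row, column) index arrays; the exact convention expected by
    # py-motmetrics is assumed here.
    return linear_sum_assignment(np.asarray(costs))

# Metric computations inside this block use the custom solver.
with lap.set_default_solver(mysolver):
    ...
```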
## Running tests
**py-motmetrics** uses the pytest framework. To run the tests, simply `cd` into the source directory and run `pytest`.
<a name="References"></a>

### References
1. Bernardin, Keni, and Rainer Stiefelhagen. "Evaluating multiple object tracking performance: the CLEAR MOT metrics." EURASIP Journal on Image and Video Processing 2008.1 (2008): 1-10.
2. Milan, Anton, et al. "MOT16: A benchmark for multi-object tracking." arXiv preprint arXiv:1603.00831 (2016).
3. Li, Yuan, Chang Huang, and Ram Nevatia. "Learning to associate: HybridBoosted multi-target tracker for crowded scene." Computer Vision and Pattern Recognition (CVPR), 2009. IEEE, 2009.
4. Ristani, E., F. Solera, R. S. Zou, R. Cucchiara, and C. Tomasi. "Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking." ECCV 2016 Workshop on Benchmarking Multi-Target Tracking.
## Docker
### Update ground truth and test data:
The `/data/train` directory should contain the MOT 2D 2015 ground truth files.