Why do I see such a large difference between the metrics reported by YOLOv7 and by pycocotools (COCO), especially mAP@.5? Here is the evaluation output:
test: Scanning 'D:\YOLO7_project\Yolo_Person\Person_Train_Val_Test\labels\test.cache' images and labels... 842 found, 0 missing, 0 empty, 0 corrupted: 100%|██████████| 842/842 [00:00<?, ?it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95 mAP@.75 F1: 100%|██████████| 106/106 [00:11<00:00, 9.57it/s]
all 842 13833 0.82 0.758 0.792 0.376 0.31 0.788
Speed: 6.4/0.8/7.2 ms inference/NMS/total per 640x640 image at batch-size 8
Evaluating pycocotools mAP... saving runs\test\exp25\best_Pconv_predictions.json...
loading annotations into memory...
Done (t=0.14s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.20s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=10.19s).
Accumulating evaluation results...
DONE (t=0.24s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.343
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.679
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.300
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.232
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.466
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.706
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.045
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.195
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.401
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.302
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.541
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.770
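For reference, this is roughly how the pycocotools numbers above can be re-run directly from the saved predictions JSON (a minimal sketch; the ground-truth annotation path is an assumption, and it assumes the JSON written by test.py with --save-json uses image IDs and category IDs that match the annotation file):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth COCO-format annotations for the test split
# (this path is a placeholder for whatever the dataset actually uses).
anno = COCO('Person_Train_Val_Test/annotations/instances_test.json')

# Detections saved during evaluation (path taken from the log above).
pred = anno.loadRes('runs/test/exp25/best_Pconv_predictions.json')

coco_eval = COCOeval(anno, pred, 'bbox')
coco_eval.evaluate()    # per-image matching over IoU 0.50:0.95
coco_eval.accumulate()
coco_eval.summarize()   # prints the AP/AR table shown above
```

In general, if the two evaluations use different settings (confidence threshold, maximum detections per image, interpolation of the precision-recall curve), some gap between the internal mAP@.5 and COCOeval's AP@IoU=0.50 is expected, so the comparison above may not be apples to apples.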
I'd love to hear your thoughts on this. Thank you.