From test.py (line 219):

    if len(stats) and stats[0].any():
        p, r, ap, f1, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)
        ap50, ap = ap[:, 0], ap.mean(1)  # AP@0.5, AP@0.5:0.95
        mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean()
        nt = np.bincount(stats[3].astype(np.int64), minlength=nc)  # number of targets per class
    else:
        nt = torch.zeros(1)
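For context, the first branch uses np.bincount over the target classes (stats[3]) to get per-class label counts, while the else branch collapses nt to a single zero. A minimal sketch of the two outcomes (the class indices and nc below are made up):

```python
import numpy as np
import torch

nc = 3  # hypothetical number of classes
target_classes = np.array([0, 0, 2, 2, 2])  # stand-in for stats[3]

# Branch with correct predictions: per-class label counts.
nt = np.bincount(target_classes.astype(np.int64), minlength=nc)
print(nt)             # [2 0 3]

# Branch without correct predictions: a single zero.
nt = torch.zeros(1)
print(int(nt.sum()))  # 0 -> the training output reports zero labels
```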
What is the purpose of setting nt (number of targets) to zero when no predictions were correct on the test set? As far as I can see, nt is only used for logging. It is confusing for training to suddenly report that the test set contains zero labels, and having to dive into the test logging code to confirm that this is just a display artifact wastes time.
If there's a good reason for this conditional nt calculation being the way it is, could someone please explain?
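For what it's worth, the behaviour I would have expected is to count labels whenever target classes were collected, regardless of whether any prediction matched. A rough sketch of that idea (the helper name is hypothetical, and it assumes stats[3] holds the target class indices as in the snippet above):

```python
import numpy as np
import torch

def count_targets_per_class(stats, nc):
    # Hypothetical helper: per-class label counts whenever statistics exist,
    # falling back to a single zero only when nothing was collected at all.
    if len(stats):
        return np.bincount(stats[3].astype(np.int64), minlength=nc)
    return torch.zeros(1)

# Targets present but no correct predictions: counts are still reported.
stats = [np.zeros(5, dtype=bool), np.zeros(5), np.zeros(5), np.array([0, 0, 2, 2, 2])]
print(count_targets_per_class(stats, nc=3))  # [2 0 3] rather than tensor([0.])
```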