Drone-View-Vehicle-Image-Set contains 5000 bounding-box annotations of vehicles in drone-view images.
This dataset was collected by the research team on Park Street in Madison in 2022. The vehicle trajectories can be found at the following GitHub repository: Filed-data-Corridor-Vehicle-Trajectory.
The annotations are provided in the following formats:
- XML: Each XML file contains the bounding box coordinates for vehicles in the corresponding image.
- TXT: Each TXT file contains the bounding box coordinates for vehicles in a plain-text format (see the parsing sketch below).
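As a starting point, the sketch below shows one way the XML annotations could be read in Python. It assumes a Pascal VOC-style layout (`<object>`/`<bndbox>` elements with `xmin`, `ymin`, `xmax`, `ymax`); the actual schema is not documented here, so tag names may need to be adjusted.

```python
# Minimal sketch for reading one XML annotation file.
# Assumes a Pascal VOC-style layout; adjust tag names if the schema differs.
import xml.etree.ElementTree as ET

def load_boxes(xml_path):
    """Return a list of (label, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        label = obj.findtext("name", default="vehicle")
        bb = obj.find("bndbox")
        coords = tuple(int(float(bb.findtext(tag)))
                       for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, *coords))
    return boxes

# Example usage (hypothetical file name):
# print(load_boxes("annotations/frame_0001.xml"))
```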
The dataset was previously used to train a YOLOv8 model. The model attained a precision of 0.9825, a recall of 0.9956, an mAP@0.5 of 0.9948, and an mAP@0.5:0.95 of 0.8333. These results indicate low false positive and false negative rates and robust detection across Intersection over Union (IoU) thresholds.
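For reference, a minimal sketch of how such a YOLOv8 model could be trained and evaluated with the Ultralytics Python API is shown below. The dataset YAML file name (`drone_vehicles.yaml`) and the hyperparameters are illustrative assumptions, not the configuration used to produce the metrics reported above.

```python
# Minimal training/evaluation sketch using the Ultralytics YOLOv8 API.
# The dataset YAML (image paths, class names) and hyperparameters are
# illustrative assumptions, not the settings behind the reported metrics.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # pretrained YOLOv8-nano weights
model.train(data="drone_vehicles.yaml",     # hypothetical dataset config file
            epochs=100, imgsz=640)
metrics = model.val()                       # evaluate on the validation split
print(f"precision={metrics.box.mp:.4f}  recall={metrics.box.mr:.4f}")
print(f"mAP@0.5={metrics.box.map50:.4f}  mAP@0.5:0.95={metrics.box.map:.4f}")
```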
Developer: Keke Long (klong23@wisc.edu).