A comprehensive framework for training and evaluating YOLO models on the TinyPerson dataset for small object detection. This repository provides tools to benchmark different YOLO model versions (v8, v9, v10, v11, v12) on detecting small persons in images.
- Complete Pipeline: Setup environment, train models, evaluate performance, and visualize results
- Multiple YOLO Models: Support for YOLOv8x, YOLOv9e, YOLOv10x, YOLO11x, and YOLO12x
- Automated Setup: Downloads required datasets, models, and dependencies
- Comprehensive Evaluation: Measures precision, recall, F1 score, mAP, and inference speed
- Rich Visualizations: Generates performance comparison plots across models
- Python 3.6+
- CUDA-compatible GPU recommended (CPU training is supported but slow)
- Windows or Linux operating system
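A quick way to confirm a GPU is visible, assuming PyTorch (which Ultralytics depends on) is installed:

```python
# Quick GPU check; Ultralytics depends on PyTorch, so this works once
# dependencies are installed.
import torch

print(torch.cuda.is_available())  # True if a CUDA GPU is usable
print(torch.cuda.device_count())  # number of visible GPUs
```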
- Clone the repository:

  ```bash
  git clone https://github.com/xixu-me/YOLO-TinyPerson.git
  cd YOLO-TinyPerson
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set `dataset_dir` to the dataset path in `setup.py`:

  ```python
  dataset_dir = "path/to/dataset"
  ```
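How `setup.py` consumes `dataset_dir` is not shown in this README; a minimal sketch of one plausible step, generating the dataset YAML that Ultralytics training expects, looks like this (the split paths and the single `person` class are assumptions about the TinyPerson layout):

```python
# Minimal sketch (not the actual setup.py logic): build the dataset YAML
# that Ultralytics training expects from dataset_dir. The split paths and
# the single "person" class are assumptions about the TinyPerson layout.
from pathlib import Path

dataset_dir = "path/to/dataset"

yaml_text = (
    f"path: {Path(dataset_dir).resolve()}\n"
    "train: images/train\n"
    "val: images/val\n"
    "names:\n"
    "  0: person\n"
)
Path("tinyperson.yaml").write_text(yaml_text)
```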
The framework provides a simple command-line interface for running different parts of the pipeline:
Run the entire process (setup, train, evaluate, visualize):

```bash
python main.py all
```

or simply:

```bash
python main.py
```

This executes the default command, which is `all`.
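How `main.py` dispatches these subcommands is not spelled out here; a hypothetical sketch of the dispatch, assuming `argparse` and a `run()` entry point in each module, might look like:

```python
# Hypothetical sketch of main.py's dispatch; the actual implementation may
# differ. Assumes each pipeline module exposes a run() entry point.
import argparse

import setup, train, evaluate, visualize

COMMANDS = {
    "setup": setup.run,
    "train": train.run,
    "evaluate": evaluate.run,
    "visualize": visualize.run,
}

def main():
    parser = argparse.ArgumentParser(description="YOLO-TinyPerson pipeline")
    parser.add_argument("command", nargs="?", default="all",
                        choices=["all", *COMMANDS])
    args = parser.parse_args()
    stages = COMMANDS.values() if args.command == "all" else [COMMANDS[args.command]]
    for stage in stages:
        stage()

if __name__ == "__main__":
    main()
```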
Set up the environment and download required files:

```bash
python main.py setup
```
Train YOLO models on the TinyPerson dataset:

```bash
python main.py train
```
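Under the hood, `train.py` presumably drives the Ultralytics training API; a hedged sketch of a single model's run (the hyperparameters and paths are illustrative, not the repository's actual settings):

```python
# Illustrative sketch using the Ultralytics API; the actual hyperparameters
# and weight paths in train.py may differ.
from ultralytics import YOLO

model = YOLO("weights/yolov8x.pt")  # weights downloaded by setup.py
model.train(
    data="tinyperson.yaml",  # hypothetical dataset config
    epochs=100,              # illustrative value
    imgsz=640,               # illustrative value
    project="results",       # matches the results/ directory
    name="yolov8x",
)
```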
Evaluate trained models:

```bash
python main.py evaluate
```
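For one model, evaluation through the Ultralytics API amounts to a validation pass; a sketch (the checkpoint path and dataset config name are assumptions about where `train.py` stores its output):

```python
# Illustrative sketch of evaluating one trained model with Ultralytics;
# the paths are assumptions about where train.py stores its best checkpoint.
from ultralytics import YOLO

model = YOLO("results/yolov8x/weights/best.pt")
metrics = model.val(data="tinyperson.yaml")  # hypothetical dataset config
print(f"mAP@0.5:      {metrics.box.map50:.3f}")
print(f"mAP@0.5-0.95: {metrics.box.map:.3f}")
```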
Generate visualizations of training and evaluation results:

```bash
python main.py visualize
```
```text
YOLO-TinyPerson/
├── main.py            # Main entry point with command-line interface
├── setup.py           # Environment setup, downloads datasets and models
├── train.py           # Model training functionality
├── evaluate.py        # Model evaluation and metrics calculation
├── visualize.py       # Results visualization and plot generation
├── requirements.txt   # Required Python packages
├── dataset/           # TinyPerson dataset (downloaded by setup.py)
├── weights/           # Pre-trained model weights (downloaded by setup.py)
├── results/           # Training results for each model
├── evaluation/        # Evaluation metrics and results
└── visualizations/    # Generated plots and visualizations
```
The framework evaluates models using the following metrics:
- Precision: Fraction of predicted detections that are correct
- Recall: Fraction of ground-truth instances that are detected
- F1-score: Harmonic mean of precision and recall
- mAP@0.5: Mean Average Precision at an IoU threshold of 0.5
- mAP@0.5-0.95: Mean Average Precision averaged over IoU thresholds from 0.5 to 0.95
- Inference Speed: Frames processed per second (FPS)
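For reference, a minimal sketch of how precision, recall, and F1 follow from true-positive, false-positive, and false-negative counts (not the repository's evaluation code, which comes from Ultralytics):

```python
# Minimal reference sketch, not the repository's evaluation code.
# tp/fp/fn would come from matching detections to ground truth at a
# given IoU threshold (e.g. 0.5).
def prf1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: 80 correct detections, 20 spurious, 40 missed
print(prf1(tp=80, fp=20, fn=40))  # (0.8, ~0.667, ~0.727)
```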
Visualizations are generated in the `visualizations/` directory, including:
- Precision/Recall/F1 comparison across models
- mAP comparison
- Training loss curves
- Inference speed comparison
- Radar charts for overall performance comparison
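As an illustration of the kind of comparison plot produced (the actual charts come from `visualize.py` and may differ; the values below are placeholders, not real results):

```python
# Illustrative sketch of one comparison plot; the actual charts are
# produced by visualize.py and may differ. Values are placeholders,
# not measured results.
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np

models = ["YOLOv8x", "YOLOv9e", "YOLOv10x", "YOLO11x", "YOLO12x"]
map50 = [0.0] * len(models)  # replace with values read from evaluation/

Path("visualizations").mkdir(exist_ok=True)
x = np.arange(len(models))
plt.bar(x, map50)
plt.xticks(x, models, rotation=45, ha="right")
plt.ylabel("mAP@0.5")
plt.title("mAP@0.5 comparison across models")
plt.tight_layout()
plt.savefig("visualizations/map_comparison.png")
```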
The following YOLO models are supported:

| Model    | Description               |
| -------- | ------------------------- |
| YOLOv8x  | YOLOv8 extra large model  |
| YOLOv9e  | YOLOv9 extended model     |
| YOLOv10x | YOLOv10 extra large model |
| YOLO11x  | YOLO11 extra large model  |
| YOLO12x  | YOLO12 extra large model  |
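All five load through the same Ultralytics entry point, so comparing them, for example on the inference-speed metric, reduces to a loop. A hedged sketch (the weight filenames follow Ultralytics' released naming and are assumed to sit in `weights/`; the image path is hypothetical):

```python
# Hedged sketch of an inference-speed comparison; not the repository's
# evaluate.py. Assumes the weights downloaded by setup.py use the
# standard Ultralytics filenames.
import time
from ultralytics import YOLO

WEIGHTS = ["yolov8x.pt", "yolov9e.pt", "yolov10x.pt", "yolo11x.pt", "yolo12x.pt"]

for name in WEIGHTS:
    model = YOLO(f"weights/{name}")
    start = time.perf_counter()
    results = model.predict("dataset/images/val", verbose=False)  # hypothetical path
    fps = len(results) / (time.perf_counter() - start)
    print(f"{name}: {fps:.1f} FPS")
```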
- The TinyPerson dataset is used for training and evaluation
- YOLO models from the Ultralytics framework
Copyright © Xi Xu. All rights reserved.
Licensed under the GPL-3.0 license.