
A YOLO11 Computer Vision Model for Foreign Debris Detection

Project Overview 📋

This project focuses on building a YOLO11 computer vision model for detecting and classifying Foreign Object Debris (FOD) on road surfaces. The goal is to use YOLO11 for object detection and classification of debris such as plastic bottles and metal pieces, as seen in the provided FOD_images dataset. The process is split into five main phases, from data annotation and training to evaluation, testing, and running the model.


Project Structure 📂

  1. FOD Images Dataset - FOD_images

    • This folder contains images of debris on road surfaces that are used for annotation and training the model.
  2. Data Annotation & Dataset - dataset

    • Annotated using Roboflow, generating a YOLOv4 compatible dataset. The following steps were performed:

      • Preprocessing

        • Auto-Orient: Applied.
        • Resize: Stretch to 640x640.
      • Augmentations

        • Rotation: Between -15° and +15°.
        • Grayscale: Applied to 15% of images.
    • Dataset is divided into:

      • Training Set (70% = 204): train
      • Validation Set (20% = 19): valid
      • Test Set (10% = 10): test
    • data.yaml: Contains the dataset configuration used for training (a minimal sketch of its expected structure follows below).

Roboflow Dataset

Roboflow Model
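
For reference, a Roboflow YOLO export typically ships a data.yaml listing the train/valid/test paths and class names. The snippet below is a minimal sketch of how that configuration can be inspected in Python before training; the key names (train, val, test, nc, names) follow the usual Ultralytics/Roboflow layout and are assumptions, not a dump of this repository's actual file.

```python
# Minimal sketch: inspect the Roboflow-exported dataset configuration.
# Key names are assumed from the standard Ultralytics/Roboflow data.yaml layout.
import yaml

with open("dataset/data.yaml", "r") as f:
    data_cfg = yaml.safe_load(f)

print(data_cfg.get("train"))   # path to the training images
print(data_cfg.get("val"))     # path to the validation images
print(data_cfg.get("test"))    # path to the test images
print(data_cfg.get("nc"))      # number of classes (2 here: piece_of_metal, plastic_bottle)
print(data_cfg.get("names"))   # class names
```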

  3. Training the YOLO11 Small Pre-Trained Model - training_model.py
    • Fine-tuning a pre-trained yolo11s.pt (contained in the models directory) on the annotated dataset; a minimal training sketch follows below.

    • Hyperparameters for training:

      • Epochs: 100
      • Image Size: 640
      • Batch Size: 16
    • Model weights are saved in TrainingModelResults/TrainingRun/weights with the following fine-tuned models:

      • best.pt
      • best.onnx
    • Training metrics and graphs are stored in the TrainingModelResults/TrainingRun/ directory.

    • Training Performance on Roboflow:

Roboflow Performance
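
The training step in training_model.py presumably uses the Ultralytics Python API; the sketch below reproduces the listed hyperparameters (epochs, image size, batch size) under that assumption, with the model path and output directory names taken from the folder structure above.

```python
# Minimal training sketch (assumed to mirror training_model.py) using the Ultralytics API.
from ultralytics import YOLO

# Load the pre-trained small model; the path is assumed from the project layout.
model = YOLO("models/yolo11s.pt")

# Fine-tune on the Roboflow-annotated dataset with the hyperparameters listed above.
model.train(
    data="dataset/data.yaml",
    epochs=100,
    imgsz=640,
    batch=16,
    project="TrainingModelResults",
    name="TrainingRun",
)
```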

  4. Model Evaluation - evaluating_model.py

    • Evaluation of the fine-tuned model on the validation and test sets (a minimal evaluation sketch follows this list).

    • Metrics calculated:

      • mAP50: Mean Average Precision at an IoU threshold of 0.50.
      • mAP50-95: Mean Average Precision averaged over IoU thresholds from 0.50 to 0.95.
      • Precision: Ratio of correct positive predictions to total positive predictions.
      • Recall: Ratio of correct positive predictions to total actual positives.
      • F1-Score: Harmonic mean of precision and recall.
    • Evaluation results are saved in:

      • EvaluatingModelResults/ValidationRun
      • EvaluatingModelResults/TestRun
      • GraphsMetricsResults: A comparison bar plot (metrics_comparison_val_test.png) comparing validation and test set performance metrics.
  5. Model Testing - Streamlit Apps 🖥️

    • Three ways to test the fine-tuned model:
      1. Using Roboflow API - streamlit run app_roboflow_api.py
        • Interacts with the model hosted on Roboflow via API.
      2. Using Local PT Model - streamlit run app_roboflow_local_pt.py
        • Runs inference locally using the fine-tuned .pt model (a minimal sketch follows this list).
      3. Using Local ONNX Model - streamlit run app_roboflow_local_onnx.py
        • Tests the ONNX format model for inference (though results are suboptimal).
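
As referenced in the evaluation step above, the following sketch shows how evaluating_model.py could compute these metrics with the Ultralytics API; the split names and output directories are assumptions based on the repository layout.

```python
# Minimal evaluation sketch (assumed to mirror evaluating_model.py).
from ultralytics import YOLO

# Load the fine-tuned weights produced by the training run.
model = YOLO("TrainingModelResults/TrainingRun/weights/best.pt")

for split, run_name in [("val", "ValidationRun"), ("test", "TestRun")]:
    metrics = model.val(
        data="dataset/data.yaml",
        split=split,
        project="EvaluatingModelResults",
        name=run_name,
    )
    precision = metrics.box.mp                     # mean precision over classes
    recall = metrics.box.mr                        # mean recall over classes
    f1 = 2 * precision * recall / (precision + recall)
    print(split, metrics.box.map50, metrics.box.map, precision, recall, f1)
```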

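For the local .pt option mentioned above, a stripped-down version of the Streamlit app might look like the following; the widget layout is illustrative, and only the file name app_roboflow_local_pt.py and the weights path are taken from the repository structure.

```python
# Minimal sketch of a local-inference Streamlit app using the fine-tuned .pt model.
# Run with: streamlit run app_roboflow_local_pt.py
import streamlit as st
from PIL import Image
from ultralytics import YOLO

st.title("FOD Detection - Local YOLO11 Model")

model = YOLO("TrainingModelResults/TrainingRun/weights/best.pt")

uploaded = st.file_uploader("Upload a road-surface image", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    image = Image.open(uploaded)
    results = model.predict(image)      # run detection on the uploaded image
    annotated = results[0].plot()       # BGR numpy array with boxes drawn
    st.image(annotated, channels="BGR", caption="Detections")
```
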
Setup Instructions 🛠️

  1. Install Dependencies:

    pip install -r requirements.txt
  2. Roboflow API Setup:

    • To use the Roboflow API (app_roboflow_api.py), sign up at Roboflow and get your API key. You can create and upload the dataset to Roboflow and interact with the model using their API.
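
As a rough illustration of the API route, the Roboflow Python SDK can query a hosted model as sketched below; the workspace, project, version, and image path are placeholders you would replace with your own.

```python
# Minimal sketch of hosted inference via the Roboflow Python SDK.
# Workspace/project/version names and the image path are placeholders.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-fod-project")
model = project.version(1).model

# Run inference on a local image and print the JSON predictions.
prediction = model.predict("FOD_images/example.jpg", confidence=40, overlap=30)
print(prediction.json())
```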

Results 📊

The following image presents a comparison of the .pt model's performance metrics on the validation and test sets.

Metrics Comparison

Validation Results:

Class            Images    Instances    Box Precision    Recall    mAP50    mAP50-95
all                  19           30            0.803     0.812    0.788       0.348
piece_of_metal        6            6            0.833     0.833    0.771       0.377
plastic_bottle       14           24            0.773     0.792    0.806       0.319

Validation Metrics:

  • mAP50: 0.79
  • mAP50-95: 0.35
  • Precision: 0.80
  • Recall: 0.81
  • F1-score: 0.81

Test Results:

Class            Images    Instances    Box Precision    Recall    mAP50    mAP50-95
all                  10           15            0.950     0.875    0.965       0.395
piece_of_metal        3            3            0.949     1.000    0.995       0.406
plastic_bottle        7           12            0.952     0.750    0.934       0.384

Test Metrics:

  • mAP50: 0.96
  • mAP50-95: 0.40
  • Precision: 0.95
  • Recall: 0.88
  • F1-score: 0.91
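
As a quick sanity check, the reported test F1-score follows from the listed precision and recall via the harmonic mean:

$$F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.95 \times 0.88}{0.95 + 0.88} \approx 0.91$$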

Inference on the test set:

  • confusion_matrix_normalized.png:

Confusion Matrix Normalized

  • F1_curve.png:

F1 Curve

  • P_curve.png:

P Curve

  • PR_curve.png:

PR Curve

  • R_curve.png:

R Curve

  • Batch test set images:

Prediction

  • Inference on a test set image:

Inference Test

Conclusions ⚙️

Considering the use of edge devices (like Raspberry Pi and NVIDIA Jetson), the model was exported to multiple formats:

  • .pt Format: Local PyTorch model for inference.
  • .onnx Format: Exported from the fine-tuned model for cross-platform compatibility.

Although the ONNX format did not provide optimal results, it was included for testing purposes. The .pt format works better for inference on local machines.
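
The ONNX export mentioned above can be reproduced with the Ultralytics export API, sketched here under the assumption that best.pt is the source checkpoint:

```python
# Minimal export sketch: convert the fine-tuned .pt checkpoint to ONNX for edge deployment.
from ultralytics import YOLO

model = YOLO("TrainingModelResults/TrainingRun/weights/best.pt")
model.export(format="onnx", imgsz=640)   # writes the .onnx file next to the .pt weights
```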

License 📄

This project is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share, copy, distribute, and adapt the work, even for commercial purposes, as long as you give appropriate credit to the original author and indicate if changes were made.
