🤖 Autonomous Lane-Following Robot

A 4-wheeled autonomous robot powered by a Raspberry Pi that uses classical computer vision techniques for lane detection and navigation. This project demonstrates how fundamental image-processing algorithms can achieve reliable lane detection without deep learning.

📋 Features

  • Real-time lane detection using classical computer vision
  • Autonomous navigation with dynamic steering control
  • Bird's-eye view transformation for precise lane analysis
  • Adaptive histogram-based lane position detection
  • Interactive calibration interface with trackbars
  • Comprehensive visualization options for debugging

🛠️ Hardware Requirements

  • Raspberry Pi (3B+ or 4 recommended)
  • Pi Camera module
  • 4-wheel robot chassis
  • L298N motor driver or equivalent
  • Battery power supply
  • Jumper wires and breadboard

📦 Dependencies

  • OpenCV (cv2)
  • NumPy
  • RPi.GPIO

🚀 Installation

  1. Clone this repository:

    git clone https://github.com/roboticistjoseph/lane_detection_robot.git
    cd lane_detection_robot
    
  2. Install required packages:

    pip install opencv-python numpy RPi.GPIO
    
  3. Connect hardware according to pin definitions in MotorModule.py

💻 Usage

  1. Run calibration to set up the perspective transformation:

    python LaneDetection.py
    

    Use the trackbars to adjust the transformation points until the lane is detected reliably; a minimal calibration sketch follows these steps.

  2. Start autonomous navigation:

    python MainRobot.py
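
The calibration interface itself lives in LaneDetection.py. As a rough illustration of how an OpenCV trackbar UI for the warp points can be wired up, here is a minimal sketch; the window name, trackbar names and ranges, and the 480x240 frame size are assumptions for illustration, not the project's actual values.

    import cv2
    import numpy as np

    def nothing(_):
        pass  # trackbar callback placeholder

    # Hypothetical window with four trackbars that shape the trapezoid
    # used for the bird's-eye warp; ranges assume a 480x240 frame.
    cv2.namedWindow("Warp Calibration")
    cv2.createTrackbar("Top Width", "Warp Calibration", 100, 240, nothing)
    cv2.createTrackbar("Top Height", "Warp Calibration", 80, 240, nothing)
    cv2.createTrackbar("Bottom Width", "Warp Calibration", 20, 240, nothing)
    cv2.createTrackbar("Bottom Height", "Warp Calibration", 210, 240, nothing)

    def read_warp_points(w=480, h=240):
        """Read trackbar positions and return four source points for the warp."""
        wt = cv2.getTrackbarPos("Top Width", "Warp Calibration")
        ht = cv2.getTrackbarPos("Top Height", "Warp Calibration")
        wb = cv2.getTrackbarPos("Bottom Width", "Warp Calibration")
        hb = cv2.getTrackbarPos("Bottom Height", "Warp Calibration")
        return np.float32([(wt, ht), (w - wt, ht), (wb, hb), (w - wb, hb)])

Once the four points frame the lane correctly, they can be passed to cv2.getPerspectiveTransform and reused by the main pipeline.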
    

⚙️ How It Works

The system uses a multi-stage pipeline to process camera images and control the robot (a condensed sketch of these stages follows the list):

  1. Image Pre-processing: Converts to HSV color space and applies thresholding to isolate lane markings
  2. Perspective Transformation: Applies bird's-eye view transformation for accurate lane analysis
  3. Histogram Analysis: Uses dual-region histogram analysis to detect lane position
  4. Lane Curvature Calculation: Computes the required steering angle based on lane position
  5. Motor Control: Translates steering commands into differential wheel speeds
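
The real implementation is split across LaneDetection.py and Utilities.py; the snippet below is only a condensed sketch of stages 1–4 under assumed HSV thresholds, frame size, and normalization, so the exact values and the helper names (get_lane_curve, hist_center) are illustrative rather than the project's own.

    import cv2
    import numpy as np

    def get_lane_curve(frame, w=480, h=240):
        # 1. Pre-processing: HSV threshold to isolate lane markings
        #    (lower/upper bounds here are placeholders)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array([0, 0, 150]), np.array([179, 60, 255]))

        # 2. Perspective transformation to a bird's-eye view
        src = np.float32([(100, 80), (w - 100, 80), (20, h - 30), (w - 20, h - 30)])
        dst = np.float32([(0, 0), (w, 0), (0, h), (w, h)])
        matrix = cv2.getPerspectiveTransform(src, dst)
        warped = cv2.warpPerspective(mask, matrix, (w, h))

        # 3. Histogram analysis: column sums over two regions give the
        #    lane's base position and the overall lane center
        def hist_center(img, region):
            hist = np.sum(img[int(img.shape[0] * (1 - region)):, :], axis=0)
            cols = np.where(hist >= np.max(hist) * 0.5)[0]
            return int(np.mean(cols)) if cols.size else img.shape[1] // 2

        base_point = hist_center(warped, region=1 / 4)    # bottom quarter
        center_point = hist_center(warped, region=1.0)    # whole frame

        # 4. Curve/steering value: offset of the lane base from the lane
        #    center, normalized by half the frame width
        return (base_point - center_point) / (w / 2)

The returned curve value can then be handed to the motor layer (stage 5), which turns it into differential wheel speeds.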

📁 Project Structure

  • MainRobot.py - Main execution file that integrates vision and motor control
  • LaneDetection.py - Implements the lane detection pipeline
  • Utilities.py - Helper functions for image processing and visualization
  • MotorModule.py - Motor control interface for the robot
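
MotorModule.py defines the project's actual pins and motor interface. The class below is only a hedged sketch of how differential steering with RPi.GPIO PWM can look for an L298N-style driver; the pin arguments, the 100 Hz PWM frequency, and the move(speed, turn) signature are assumptions, not the module's real API.

    import RPi.GPIO as GPIO

    class Motor:
        """Hypothetical two-channel driver: one PWM enable pin plus two
        direction pins per side (pin numbers are placeholders)."""

        def __init__(self, ena, in1, in2, enb, in3, in4):
            GPIO.setmode(GPIO.BCM)
            self.pins = (in1, in2, in3, in4)
            GPIO.setup([ena, in1, in2, enb, in3, in4], GPIO.OUT)
            self.pwm_a = GPIO.PWM(ena, 100)   # left side, 100 Hz
            self.pwm_b = GPIO.PWM(enb, 100)   # right side
            self.pwm_a.start(0)
            self.pwm_b.start(0)

        def move(self, speed=0.5, turn=0.0):
            # Differential drive: the steering value speeds up one side and
            # slows the other; both sides are clamped to [-1, 1].
            left = max(min(speed + turn, 1), -1)
            right = max(min(speed - turn, 1), -1)
            for pwm, duty, fwd_pin, rev_pin in (
                (self.pwm_a, left, self.pins[0], self.pins[1]),
                (self.pwm_b, right, self.pins[2], self.pins[3]),
            ):
                GPIO.output(fwd_pin, duty > 0)
                GPIO.output(rev_pin, duty < 0)
                pwm.ChangeDutyCycle(abs(duty) * 100)

        def stop(self):
            self.move(0, 0)

A caller along the lines of MainRobot.py could then map the smoothed curve value onto the turn argument, e.g. motor.move(speed=0.4, turn=curve).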

📝 Technical Approach

The project implements lane detection using classical computer vision techniques:

  • HSV color thresholding to isolate lane markings
  • Perspective transformation for bird's-eye view
  • Histogram analysis for lane position detection
  • Temporal filtering for stable steering control
  • Adaptive sensitivity for improved turning performance (the smoothing and gain logic are sketched after this list)
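
As a small illustration of the last two points, steering can be stabilized by averaging recent curve values and then scaled more aggressively once a turn becomes sharp. The window length, gains, and threshold below are assumptions, not the repository's tuned values.

    from collections import deque

    # Temporal filtering: keep the last few raw curve values and steer with
    # their average, which damps single-frame noise.
    curve_history = deque(maxlen=10)

    def smooth_curve(raw_curve):
        curve_history.append(raw_curve)
        return sum(curve_history) / len(curve_history)

    def steering_command(raw_curve, gain=1.5, sharp_gain=2.5, threshold=0.3):
        curve = smooth_curve(raw_curve)
        # Adaptive sensitivity: apply a larger gain once the curve exceeds
        # the threshold, so the robot commits to sharp turns.
        return curve * (sharp_gain if abs(curve) > threshold else gain)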

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

This project demonstrates that classical computer vision techniques, implemented carefully, can support effective autonomous navigation without deep learning.
