This application is an automated traffic monitoring system that detects helmet violations. The system for detecting wrong side driving violations is implemented with YOLOv4 in this repo. The project automatically generates traffic violation tickets (challans) based on registration details added to the database.
This project is based on three currently existing git repos:

- qqwweee/keras-yolo3
- BlcaKHat/yolov3-Helmet-Detection
- the deep license plate recognition (ANPR) repo
Each of the above repos has been modified to suit this application; the modified versions can be found in their respective directories along with their original README and LICENSE files.
Watch this video to see what this project does:
helmet_detection_sriramcu_demo.mp4
BlcaKHat/yolov3-Helmet-Detection already takes care of detecting and counting the number of people wearing helmets in an image. qqwweee/keras-yolo3 was trained on a dataset containing people and bikes, but not helmets.
- Our project aims to combine the two functionalities to avoid detecting violations for pedestrians not wearing helmets or scooters that are parked with no helmet.
- It also detects violations where a helmet is present but not worn, for instance hanging on the side of a bike.
- The program detects a violation only if a helmet's bounding box is inside a person's and the person's bounding box is inside a bike's (a minimal sketch of this nested containment check follows this list).
- These violations are cropped out of the overall video frame to the combined dimensions of the bike's and the person's bounding boxes, with some buffer, and stored in the `cropped_images` folder inside the `keras_yolo3` folder.
- The ANPR program is run on these cropped images, which are then moved into the `violations` folder inside the `keras_yolo3` folder, where the new file names are the computed license plates of the vehicles.
- In case the license plate cannot be determined by the ANPR module, a suffix of "_unknown" is appended to the filename inside the cropped images folder, so that ANPR can be skipped the next time around.
- A challan is generated for each violation by looking up the vehicle's license plate in the `vehicles.db` sqlite3 database stored in the root directory of the project, and is stored in the `challans` folder in the root directory of the project.
- Before running the helmet detection program, vehicle data is assumed to have been entered via the GUI, i.e. the vehicle license plate and the name and address of the owner (3 columns).
- The `deep_license_plate_recognition` module has minimal changes.
- The `yolov3_Helmet_Detection` folder contains moderate changes, such as minor tweaks to the `Helmet_detection_YOLOV3.py` program and some more input images to test the helmet detection module separately.
- Major changes are made to the `keras_yolo3` module, including converting the hyphen to an underscore in the folder name and adding an `__init__.py` file so it can be used as a Python module in the main GUI code. Significant changes were made to the `detect_image()` and `detect_video()` functions in `yolo.py`.
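The nested containment rule above boils down to a simple box-inside-box test. The sketch below is illustrative only; the `(x1, y1, x2, y2)` box format, the `tolerance` parameter, and the function names are assumptions rather than the actual code in `keras_yolo3/yolo.py`.

```python
# Illustrative sketch of the nested bounding-box containment test described above.
# Box format (x1, y1, x2, y2) and the tolerance parameter are assumptions; the
# actual implementation lives in keras_yolo3/yolo.py.

def box_inside(inner, outer, tolerance=0):
    """Return True if `inner` lies entirely within `outer`, allowing some slack."""
    ix1, iy1, ix2, iy2 = inner
    ox1, oy1, ox2, oy2 = outer
    return (ix1 >= ox1 - tolerance and iy1 >= oy1 - tolerance
            and ix2 <= ox2 + tolerance and iy2 <= oy2 + tolerance)

def nested_boxes(helmet_box, person_box, bike_box, tolerance=0):
    """True if the helmet box is inside the person box and the person box is inside the bike box."""
    return (box_inside(helmet_box, person_box, tolerance)
            and box_inside(person_box, bike_box, tolerance))

# Example: a helmet nested inside a rider, who is in turn nested inside a bike's box.
print(nested_boxes((120, 50, 160, 90), (100, 40, 200, 300), (80, 30, 260, 340)))  # True
```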
Follow these instructions:
I) Initial installation
Clone this repo and then install the requirements:
pip install -r requirements.txt
If you are using Linux, you may need to install the following packages:
$ sudo apt update
$ sudo apt install libqt5x11extras5
$ sudo apt install libgl1-mesa-glx
II) Downloading Large Files
- `yolov3.weights`: put this file in the `keras_yolo3/` subdirectory.
- `core`: put this file in the `yolov3_Helmet_Detection/` subdirectory.
- `yolov3-obj_2400.weights`: put this file in the `yolov3_Helmet_Detection/` subdirectory.
- `yolo.h5`: put this file in the `keras_yolo3/model_data/` subdirectory.
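As an optional sanity check (not part of the project), you can confirm the files above landed in the expected locations before proceeding:

```python
# Optional sanity check that the large files listed above are in place.
from pathlib import Path

required = [
    "keras_yolo3/yolov3.weights",
    "yolov3_Helmet_Detection/core",
    "yolov3_Helmet_Detection/yolov3-obj_2400.weights",
    "keras_yolo3/model_data/yolo.h5",
]

missing = [path for path in required if not Path(path).exists()]
print("All large files in place." if not missing else f"Missing: {missing}")
```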
III) Set up API key for ANPR
Get your ANPR API key from here. Save the key in a file named `api_key.txt` and place it in the root directory of the project.
(All the commands below are run with the root directory of the project as the current working directory.)
I) To run the main GUI program,
python helmet_violation_monitoring_gui.py
Refer to the demo video and the high-level overview to understand the features of the GUI: run helmet detection (selecting the input video file with the file picker and the timestamped output video file's location with the folder picker), add vehicle entries to the database, read the database, and generate challans.
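For reference, the "add vehicle entry" and "generate challans" features boil down to sqlite3 inserts and lookups on `vehicles.db`. The snippet below is a hypothetical illustration: the table name, column names, and sample plate are assumptions, not necessarily the schema actually used by the GUI code.

```python
# Hypothetical illustration of a vehicles.db lookup; the real table and
# column names used by the project may differ.
import sqlite3

def lookup_owner(db_path, license_plate):
    """Return (name, address) for a registered plate, or None if absent."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(
            "SELECT name, address FROM vehicles WHERE license_plate = ?",
            (license_plate,),
        ).fetchone()

owner = lookup_owner("vehicles.db", "KA01AB1234")  # sample plate, assumption
print(f"Issue challan to {owner[0]}, {owner[1]}" if owner else "Plate not registered")
```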
The `constants.py` file in the `keras_yolo3/` submodule contains `COMPUTATION_FPS`, an assumed value for the speed at which your system processes a given video. Before trying real-time applications, check whether this value is accurate by inspecting the output printed by the above program, which reports the assumed computation FPS, the actual computation FPS measured on a test input video, and the FPS of the input video file. Then, with minimal changes to the `yolo.py` program, you can try real-time applications.

The lower your computation FPS, the shorter your output video will be, since fewer frames are saved into the output video in exchange for faster processing.
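To make the trade-off concrete, here is a rough sketch of how an assumed computation FPS maps to frame skipping and output length. This is an illustration of the relationship described above, not the actual frame-handling logic in `yolo.py`.

```python
# Rough illustration of the COMPUTATION_FPS trade-off; not the actual
# frame-handling logic in keras_yolo3/yolo.py.

def frame_stride(input_fps, computation_fps):
    """Process every Nth input frame so the workload matches the assumed computation FPS."""
    return max(1, round(input_fps / computation_fps))

input_fps = 30.0         # FPS of the input video file
computation_fps = 2.5    # assumed processing speed (COMPUTATION_FPS in constants.py)
total_frames = 30 * 60   # e.g. a 60-second clip at 30 FPS

stride = frame_stride(input_fps, computation_fps)  # 12: keep every 12th frame
frames_saved = total_frames // stride              # 150 frames written to the output video
print(f"stride={stride}, frames saved={frames_saved}")
```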
II) To run the helmet violations tracking module separately on the command line,
python -m keras_yolo3.yolo_video --input keras_yolo3/input_videos/demo_input.mp4 --output keras_yolo3/output_videos/demo_output_cmd.mp4
III) To run just the helmet detection on a batch of images without any overlap logic,
python yolov3_Helmet_Detection/Helmet_detection_YOLOV3.py
By default, input images are stored in the `yolov3_Helmet_Detection/images` folder and output images are stored in the `yolov3_Helmet_Detection/test_out` folder.
IV) To run just the ANPR on a batch of images stored in the `keras_yolo3/cropped_images` folder,

python run_lpr.py

Some additional images are stored in the `keras_yolo3/input_frames` folder for additional testing; you can manually move them to the `keras_yolo3/cropped_images` folder before running the program. As mentioned in the high-level overview section, the violations will be stored in the `keras_yolo3/violations` folder. Challans can then be generated using the "Generate Challans" button in the main GUI, without having to run the helmet violations tracking via the GUI.
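As a rough picture of the file-naming convention described above and in the overview, the sketch below walks `keras_yolo3/cropped_images`, calls a stub in place of the real ANPR, and either moves each crop into `keras_yolo3/violations` named after the recognized plate or tags it with the "_unknown" suffix. The `recognize_plate` stub and the `.jpg` glob are assumptions; the actual behaviour lives in `run_lpr.py`.

```python
# Hypothetical sketch of the cropped-image handling convention; the actual
# ANPR call and file handling live in run_lpr.py and may differ.
import shutil
from pathlib import Path

CROPPED = Path("keras_yolo3/cropped_images")
VIOLATIONS = Path("keras_yolo3/violations")

def recognize_plate(image_path):
    """Stub standing in for the real ANPR call (Plate Recognizer API via api_key.txt).

    Returns the plate string, or None when the plate cannot be determined.
    """
    return None

VIOLATIONS.mkdir(parents=True, exist_ok=True)
for image in CROPPED.glob("*.jpg"):
    if image.stem.endswith("_unknown"):
        continue  # ANPR already failed on this crop; skip it this time around
    plate = recognize_plate(image)
    if plate:
        # Move the crop into violations/, renamed to the recognized plate.
        shutil.move(str(image), str(VIOLATIONS / f"{plate}{image.suffix}"))
    else:
        # Mark the crop so ANPR is skipped on the next run.
        image.rename(image.with_name(f"{image.stem}_unknown{image.suffix}"))
```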
Challan (ticket) generated by the program.
Open source contributions or PRs for the project are welcome, especially in the following areas:
- Easier use of real-time applications configurable in the GUI or via the command line, such as via a webcam or a Raspberry Pi device.
- Automatic computation of FPS, without the assumptions mentioned in the usage section.
- Ways to speed up the algorithm; I had to set my computation FPS to 2.5 even on a fairly capable NVIDIA GeForce RTX 3060.
- GUI enhancements.
- Future proofing to easily port it to the latest YOLO algorithm. Right now many aspects of the program are hardcoded for the YOLOv3 algorithm.