Add C++ ONNX Demo #76


Open · wants to merge 4 commits into develop

Conversation


@sctrueew commented Mar 28, 2025

This PR introduces a C++ demo for the RF-DETR model, allowing users to perform real-time object detection using an ONNX model. The demo supports various input sources, including images, videos, and live camera streams, with optional CUDA acceleration.

Key Features:
✅ Loads an RF-DETR model in ONNX format
✅ Supports image, video, and live camera inference
✅ Enables CPU and CUDA (GPU) execution
✅ Configurable confidence threshold for detections
✅ Outputs annotated images/videos with detected objects
✅ Uses COCO class labels for object recognition

Run Examples:
🔹 Image Inference:
Detect objects in a static image and save the output:

./main --model path/to/model.onnx --source_type image \
  --input path/to/image.jpg --output path/to/output.jpg \
  --conf 0.6 --labels path/to/coco.names

🔹 Video Inference:
Process a video file and save the annotated output:

./main --model path/to/model.onnx --source_type video \
  --input path/to/video.mp4 --output path/to/output.mp4 \
  --conf 0.5 --use_cuda

🔹 Live Camera Inference (Default ID 0):
Run inference on the default webcam (ID 0) with GPU acceleration:

./main --model path/to/model.onnx --source_type camera \
  --input 0 --conf 0.55 --use_cuda

🔹 Live Camera Inference (Specific Camera ID 1):
Run inference on a specific camera (ID 1):

./main --model path/to/model.onnx --source_type camera \
  --input 1 --conf 0.55

🔹 Get Help & Available Options:

./main --help

Dependencies:

  • OpenCV (for image and video processing)
  • ONNX Runtime (for model inference)
  • CUDA (optional, for GPU acceleration)

@CLAassistant

CLAassistant commented Mar 28, 2025

CLA assistant check
All committers have signed the CLA.

@probicheaux
Collaborator

Hey @sctrueew, really cool work! Would love to merge. Can you sign the CLA?

@SkalskiP changed the base branch from main to develop April 3, 2025 08:53
@sctrueew
Author

sctrueew commented Apr 3, 2025

> Hey @sctrueew, really cool work! Would love to merge. Can you sign the CLA?

Yes, I have agreed to the CLA.

@SkalskiP
Collaborator

SkalskiP commented Apr 3, 2025

Hi @sctrueew, thanks for accepting the CLA! Would it be possible for you to add a README in inference/cpp that explains how to set up the project, install the necessary dependencies, and run the model in C++? This would make it much easier for new users to get started with your demo. Thanks again for your contribution!

@sctrueew
Author

sctrueew commented Apr 3, 2025

> Hi @sctrueew, thanks for accepting the CLA! Would it be possible for you to add a README in inference/cpp that explains how to set up the project, install the necessary dependencies, and run the model in C++? This would make it much easier for new users to get started with your demo. Thanks again for your contribution!

Hi @SkalskiP, I have added a README file to the project. Please let me know if you need any further changes or additional information.

@mohamedsamirx mentioned this pull request Apr 4, 2025