This repository provides Dockerfiles and supporting resources to create containerized ROS 2 Humble environments specifically tailored for running ADAS (Advanced Driver Assistance Systems) applications on NVIDIA Jetson Orin Nano.
It includes containerized ROS 2 drivers for:
- Socket CAN
- Continental ARS404/408 Radar
- USB Cameras
Additionally, it provides a sample implementation of a containerized YOLO object detection model that performs real-time inference using a USB camera connected to the Orin Nano, currently optimized for CPU inference.
Prerequisites:

- NVIDIA Jetson Orin Nano: Tested on Jetson Orin Nano 8GB Developer Kit with 120 GB SSD (Tanna Techbiz)
- Operating System: Ubuntu 22.04 (LTS) with NVIDIA JetPack 6.2 (L4T 36.4.3)
- NVIDIA Drivers: Installed with JetPack 6.2. Verify the installation with the `nvidia-smi` command.
- CUDA 12.6: Installed with JetPack 6.2. Verify the installation with the `nvcc --version` command.
- ROS 2 Humble (Optional): Installed on the host system. Follow the ROS 2 installation guide for instructions.
- Docker: Ideally pre-installed with JetPack 6.2. If Docker daemon errors occur, refer to the Docker setup guide.
- CAN Device Drivers: Tested with PEAK USB CAN adapter based on SocketCAN (User Manual)
- USB Camera Drivers: Tested with Logitech C525 (USB Camera Setup Guide)
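Before pulling or building the image, it can help to confirm the host tooling from the list above is actually in place. A minimal check sketch (the tool names are taken from the prerequisites; nothing here is specific to this repo):

```shell
# Sketch: check that each prerequisite tool is on PATH.
missing=0
for cmd in nvidia-smi nvcc docker; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "OK: $cmd"
  else
    echo "MISSING: $cmd"
    missing=$((missing + 1))
  fi
done
echo "$missing prerequisite(s) missing"
```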
Pull the prebuilt Docker image from Docker Hub:

```shell
docker pull nikhil8490/ros2-humble-docker-orin
```

Alternatively, build your own Docker image from the Dockerfile:

```shell
git clone https://github.com/nikhilnair8490/ros2-orin-adas-docker.git
cd ros2-orin-adas-docker
docker image build -t <image-name> -f Dockerfile .
```
Before starting the container, enable the CAN port and configure the radar:

- Connect the USB camera and the USB CAN device (connected to the radar).
- Enable the CAN port (adjust `can0` or `can1` as necessary). The bitrate must be set while the interface is down, then the interface is brought up:

```shell
sudo ip link set can0 type can bitrate 500000
sudo ip link set can0 up
```

- Set up the radar hardware (one-time configuration; enables object detection with all extended properties):

```shell
cansend can0 200#F8000000089C0000
```
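Before starting the containers, it is worth confirming the CAN bring-up actually worked. A quick check sketch, assuming `can0` and the `can-utils` package; it degrades to a message when the hardware is absent:

```shell
# Sketch: verify the CAN interface is UP at the expected bitrate (assumes can0).
if command -v ip >/dev/null 2>&1 && ip -details link show can0 >/dev/null 2>&1; then
  ip -details link show can0       # look for "state UP" and "bitrate 500000"
  timeout 3 candump can0 || true   # radar frames should stream if configured
else
  echo "can0 not present; connect the USB CAN adapter first"
fi
```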
Once the Docker image is built, use the provided Docker Compose files to manage containers.
Use `docker-compose.jetson.sensors.yaml` to run the radar and camera sensor nodes:

```shell
# Start containers in the background
docker compose -f docker-compose.jetson.sensors.yaml up -d

# Enter the sensors container shell
docker compose -f docker-compose.jetson.sensors.yaml exec ros_dev_radar bash

# Stop and remove containers
docker compose -f docker-compose.jetson.sensors.yaml down
```
Use `docker-compose.jetson.ObjDetection.yaml` to run YOLO object detection:

```shell
# Start the YOLO container
docker compose -f docker-compose.jetson.ObjDetection.yaml up -d

# Enter the container shell
docker compose -f docker-compose.jetson.ObjDetection.yaml exec ros_dev_yolo bash

# Stop and remove the container
docker compose -f docker-compose.jetson.ObjDetection.yaml down
```
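After `up -d`, a quick smoke test is to list the running services and the ROS 2 topics from inside a container. This is a sketch, not part of the repo's tooling; the service name `ros_dev_radar` and the standard Humble setup path are taken as read, and the commands skip gracefully when Docker is unavailable:

```shell
# Sketch: confirm the sensor containers are running and ROS 2 topics exist.
if command -v docker >/dev/null 2>&1; then
  docker compose -f docker-compose.jetson.sensors.yaml ps || true
  docker compose -f docker-compose.jetson.sensors.yaml exec ros_dev_radar \
    bash -lc "source /opt/ros/humble/setup.bash && ros2 topic list" || true
else
  echo "docker not available on this host"
fi
```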
- ros_dev_radar: Launches the radar sensor node (`continental_ars408_socket_can.launch.xml`). Reads from the CAN interface and publishes detected objects.
  - Example parameters:
    - `interface`: CAN device name (`can0`)
    - `receiver_interval_sec`: CAN message timeout interval
    - `visualize_radar_markers`: Enable RViz visualization topics
- ros_dev_cam: Launches the USB camera node using the `usb_cam` package with configurable resolution, FPS, etc. (see `config/params_1.yaml`).
- ros_dev_yolo: Runs YOLOv11 object detection using CPU inference. Configurable settings include:
  - `model`: Path to the YOLO `.pt` model file (also available in the repo at `jetson_ros_ws/src/yolo_ros/models`)
  - `input_image_topic`: Source camera image topic
  - `device`: CPU or GPU (`cpu` or `cuda:0`); currently only `cpu` is supported
  - `threshold`, `iou`: Detection confidence and overlap settings
  - `imgsz_width`, `imgsz_height`: Input image dimensions
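Settings like these are typically exposed as launch arguments. A hypothetical override from inside the `ros_dev_yolo` container might look like the following; the launch package and file names (`yolo_bringup`, `yolo.launch.py`) are assumptions not confirmed by this repo, while the parameter names come from the list above:

```shell
# Hypothetical: start YOLO with overridden parameters (launch names assumed).
if command -v ros2 >/dev/null 2>&1; then
  ros2 launch yolo_bringup yolo.launch.py \
    device:=cpu threshold:=0.6 iou:=0.5 \
    input_image_topic:=/camera1/image_raw || true
else
  echo "ros2 not found; run this inside the ros_dev_yolo container"
fi
```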
Camera topics:

- `/camera1/camera_info`
- `/camera1/compressedDepth` (depth camera required)
- `/camera1/image_compressed`
- `/camera1/image_raw`

Radar topics:

- `/from_can_bus`
- `/radar_objects_raw`
- `/radar_scan_raw`
- `/radar_state`
- `/radar_markers` (RViz visualization)

YOLO topics:

- `/yolo/dbg_image`: Image with bounding boxes
- `/yolo/detections`: Object detection results
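To sanity-check the pipeline end to end, the topics above can be inspected from any container shell with a sourced ROS 2 environment. A sketch using the standard `ros2 topic` CLI (topic names from the lists above):

```shell
# Sketch: pull one message from the radar and YOLO output topics.
if command -v ros2 >/dev/null 2>&1; then
  ros2 topic echo /radar_objects_raw --once || true   # one radar object message
  ros2 topic echo /yolo/detections --once || true     # one detection result
else
  echo "ros2 not found; run this inside a container shell"
fi
```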