Autoware Mini is a minimalistic autonomy software stack inspired by Autoware. It is built on Python and ROS 1 to make it easy to get started and tinker with. Autoware Mini currently works on ROS Noetic (Ubuntu 20.04). The software is open source with a friendly MIT license.
Our goals with Autoware Mini were:
- easy to get started with --> minimal amount of dependencies
- simple and pedagogical --> simple Python nodes and ROS 1
- easy to implement machine learning based approaches --> Python
It is not production-level software, but is aimed at teaching and research. At the same time, we have validated the software with a real car in real traffic in the city of Tartu, Estonia.
The key modules of Autoware Mini are:
- Localization - determines vehicle position and speed. Can be implemented using GNSS, lidar positioning, visual positioning, etc.
- Obstacle detection - produces detected objects based on lidar, radar or camera readings. Includes tracking and prediction.
- Traffic light detection - produces the status of stop lines, i.e. whether they are green or red. A red stop line acts like an obstacle for the local planner.
- Global planner - given the current position and destination, determines the global path to the destination. Makes use of the Lanelet2 map.
- Local planner - given the global path and obstacles, plans a local path that avoids obstacles and respects traffic lights.
- Controller - follows the local path given by the local planner, matching target speeds at different points of the trajectory.
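All of these modules are implemented as plain Python ROS 1 nodes. As a flavor of what that looks like, below is a minimal sketch of a controller-style node; the topic names and the constant-speed logic are purely illustrative assumptions, not the actual Autoware Mini interfaces.

```python
#!/usr/bin/env python3
# Minimal sketch of a Python ROS 1 node in the Autoware Mini style.
# Topic names and the trivial control logic are illustrative only.
import rospy
from geometry_msgs.msg import PoseStamped, TwistStamped

class DummyController:
    def __init__(self):
        # Publisher for velocity commands (hypothetical topic name).
        self.cmd_pub = rospy.Publisher('/control/cmd_vel', TwistStamped, queue_size=1)
        # Subscribe to the current vehicle pose (hypothetical topic name).
        rospy.Subscriber('/localization/current_pose', PoseStamped, self.pose_callback)

    def pose_callback(self, msg):
        # A real controller would compute steering and speed from the local
        # path here; this sketch just publishes a constant speed command.
        cmd = TwistStamped()
        cmd.header.stamp = msg.header.stamp
        cmd.twist.linear.x = 1.0  # m/s
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('dummy_controller')
    DummyController()
    rospy.spin()
```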
Here are a couple of (slightly outdated) short videos introducing the Autoware Mini features.
- You should have ROS Noetic installed. Follow the official instructions for Ubuntu 20.04.
- Some of the nodes need an NVIDIA GPU, CUDA and cuDNN. At this point we suggest installing CUDA 11.8 for the best compatibility. Notice that the default setup also runs without a GPU.

  ```
  wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.1-1_all.deb
  sudo dpkg -i cuda-keyring_1.1-1_all.deb
  sudo apt-get update
  sudo apt-get -y install cuda=11.8.0-1 libcudnn8=8.9.7.29-1+cuda11.8
  sudo apt-mark hold cuda cuda-drivers
  ```
  If the above instructions installed or upgraded the NVIDIA drivers, please reboot your system before proceeding. If you have a newer CUDA installed, but are happy to have it downgraded to 11.8, add `--allow-downgrades` to the install command. Or you can choose to install `cuda-11.8` instead, which keeps the existing newer CUDA. In case the above instructions are out of date, follow the official CUDA and cuDNN installation instructions.
- Create the workspace:

  ```
  mkdir -p autoware_mini_ws/src
  cd autoware_mini_ws/src
  ```
- Clone the repo:

  ```
  git clone https://github.com/UT-ADL/autoware_mini.git
  ```
- Install system dependencies (ignore the errors for missing Carla packages if not using Carla):

  ```
  rosdep update --include-eol-distros
  rosdep install --include-eol-distros --from-paths . --ignore-src -r -y
  ```
- Install Python dependencies:

  ```
  pip install -r autoware_mini/requirements.txt
  # only when planning to use GPU based clustering, long download
  pip install -r autoware_mini/requirements_cuml.txt
  ```
- Build the workspace:

  ```
  cd ..
  catkin build
  ```
- Source the workspace environment:

  ```
  source devel/setup.bash
  ```

  As this needs to be run every time before launching the software, you might want to add something similar to the following line to your `~/.bashrc`:

  ```
  source ~/autoware_mini_ws/devel/setup.bash
  ```
Planner simulation is very lightweight and has the fewest dependencies. It should be possible to run it on any modern laptop without a GPU.
```
roslaunch autoware_mini start_sim.launch
```
You should see an RViz window with the default map. To start driving, you need to give the vehicle an initial position with the 2D Pose Estimate button and a destination using the 2D Nav Goal button. Static obstacles can be placed or removed with the Publish Point button. The initial position can be changed during movement.
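If you want to script tests, the same actions can be done programmatically. The sketch below assumes the default RViz tool topics (`/initialpose` for 2D Pose Estimate, `/move_base_simple/goal` for 2D Nav Goal); verify with `rostopic list` that your setup has not remapped them. The coordinates are made up.

```python
#!/usr/bin/env python3
# Set the initial pose and the goal programmatically instead of clicking
# the RViz buttons. Topics are the RViz tool defaults (an assumption).
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped, PoseStamped

rospy.init_node('set_pose_and_goal')
init_pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped, queue_size=1, latch=True)
goal_pub = rospy.Publisher('/move_base_simple/goal', PoseStamped, queue_size=1, latch=True)
rospy.sleep(1.0)  # give the publishers time to connect

init = PoseWithCovarianceStamped()
init.header.frame_id = 'map'
init.pose.pose.position.x = 0.0   # illustrative map coordinates
init.pose.pose.orientation.w = 1.0
init_pub.publish(init)

goal = PoseStamped()
goal.header.frame_id = 'map'
goal.pose.position.x = 100.0      # illustrative destination
goal.pose.orientation.w = 1.0
goal_pub.publish(goal)
rospy.sleep(1.0)  # let the latched messages go out before exiting
```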
To test planner simulation with real-time traffic light status from Tartu:
```
roslaunch autoware_mini start_sim.launch tfl_detector:=mqtt
```
Running the autonomy stack against recorded sensor readings is a convenient way to test the detection nodes. An example bag file can be downloaded from here and should be saved to the `data/bags` directory.

```
roslaunch autoware_mini start_bag.launch
```
The example bag file is launched by default. To launch the stack against any other bag file, include `bag_file:=<name of the bag file in data/bags directory>` in the command line.
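When picking a bag to replay, it can be handy to inspect its topics first. Here is a small sketch using the standard `rosbag` Python API; the bag file name is just an example.

```python
# List the topics in a bag before replaying it, e.g. to check sensor topic names.
import rosbag

with rosbag.Bag('data/bags/example.bag') as bag:
    info = bag.get_type_and_topic_info()
    for topic, details in info.topics.items():
        # details holds the message type and count for each topic
        print(f'{topic}: {details.msg_type} ({details.message_count} msgs)')
```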
The detection topics in the bag are remapped to dummy topic names and new detections are generated by the autonomy stack. By default the `lidar_cluster` detection algorithm is used, which works both on CPU and GPU. To use the GPU-only neural network based SFA detector, include `detector:=lidar_sfa` in the command line.

```
roslaunch autoware_mini start_bag.launch detector:=lidar_sfa
```
Other possible `detector` argument values worth trying are `radar`, `lidar_cluster_radar_fusion` and `lidar_sfa_radar_fusion`. Notice that blue dots represent lidar detections, red dots represent radar detections and green dots represent fused detections.
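To inspect detections beyond the colored dots, you can subscribe to the detection output and print a summary while the bag plays. The sketch below assumes the Autoware.AI `DetectedObjectArray` message format that Autoware Mini borrows, and an illustrative topic name; check `rostopic list` for the actual one in your setup.

```python
#!/usr/bin/env python3
# Print a running summary of detected objects while the bag is playing.
# Message format and topic name are assumptions; verify against your setup.
import rospy
from autoware_msgs.msg import DetectedObjectArray

def callback(msg):
    # Collect the distinct class labels in this detection frame.
    labels = sorted({obj.label for obj in msg.objects})
    rospy.loginfo('%d objects: %s', len(msg.objects), ', '.join(labels))

rospy.init_node('detection_monitor')
rospy.Subscriber('detected_objects', DetectedObjectArray, callback)
rospy.spin()
```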
Another possible test is to run camera-based traffic light detection against a bag:

```
roslaunch autoware_mini start_bag.launch tfl_detector:=camera
```
To see the camera traffic light detections, enable Detections > Traffic lights > Left ROI image and Right ROI image in RViz. Other possible `tfl_detector` argument values are `yolo`, `camera_mqtt_fusion` and `yolo_mqtt_fusion`. Note that there is no point in using `mqtt` with bag files, as real-time traffic light status is not appropriate for recorded data.
- Download Carla 0.9.15.
- Extract the file to a new folder with `tar xzvf CARLA_0.9.15.tar.gz`. We will call this extracted folder `<CARLA ROOT>`.
- Download tartu_demo_v0.9.15.2.tar.gz.
- Copy `tartu_demo_v0.9.15.2.tar.gz` inside the `Import` folder under the `<CARLA ROOT>` directory.
- Run `./ImportAssets.sh` from the `<CARLA ROOT>` directory. This will install the `tartu_demo` map.
- Delete the `tartu_demo_v0.9.15.2.tar.gz` file from the `Import` folder.
- Download utlexus.tar.gz.
- Copy `carla_lexus-0.9.15.tar.gz` inside the `Import` folder under the `<CARLA ROOT>` directory.
- Run `./ImportAssets.sh` from the `<CARLA ROOT>` directory. This will install the UT Lexus vehicle model.
- Delete the `carla_lexus-0.9.15.tar.gz` file from the `Import` folder.
- Since we will be referring to `<CARLA ROOT>` a lot, let's export it as an environment variable. Make sure to replace the path with the one where Carla is extracted.

  ```
  export CARLA_ROOT=$HOME/path/to/carla
  ```

- Now, enter the following command. (NOTE: Here we assume that `CARLA_ROOT` was set by the previous command.)

  ```
  export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.15-py3.7-linux-x86_64.egg:${CARLA_ROOT}/PythonAPI/carla/agents:${CARLA_ROOT}/PythonAPI/carla
  ```

  Note: It will be convenient if the above variables are exported automatically whenever you open a terminal. Putting the above exports in `~/.bashrc` will reduce the hassle of exporting them every time.
- Install CARLA dependencies:

  ```
  sudo apt install libomp5
  ```
- Clone the CARLA ROS bridge repo:

  ```
  cd ~/autoware_mini_ws/src
  git clone --recurse-submodules https://github.com/UT-ADL/ros-bridge carla_ros_bridge
  ```
- Install CARLA ROS bridge dependencies:

  ```
  cd carla_ros_bridge
  ./install_dependencies.sh
  ```
- Build the workspace:

  ```
  cd ../..
  catkin build
  ```
- In a new terminal (assuming the environment variables are exported), run the Carla simulator by entering the following command.

  ```
  $CARLA_ROOT/CarlaUE4.sh
  ```

  To force using the NVIDIA GPU for rendering, add `-prefernvidia` to the command line. To hide the default CARLA window, add `-RenderOffScreen`. To improve the frame rate, you can try `-quality-level=Low`.
. -
In a new terminal, (assuming enviornment variables are exported) run the following command. This runs Tartu environment of Carla with minimal sensors and our autonomy stack. The detected objects and traffic light statuses come from Carla directly.
roslaunch autoware_mini start_carla.launch
In RViz, enable Simulation > Carla image view or Carla camera view to see the third-person view behind the vehicle. Set the destination as usual with the 2D Nav Goal button.
You can also run the full Carla sensor simulation and use the actual detection nodes. For example, to launch Carla with the cluster-based detector:

```
roslaunch autoware_mini start_carla.launch detector:=lidar_cluster
```
Or to launch Carla with camera-based traffic light detection:

```
roslaunch autoware_mini start_carla.launch tfl_detector:=camera
```
NB! Enabling both can make the simulation unbearably slow.
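If the ROS bridge cannot reach the simulator, it helps to first verify that the Carla Python API is on `PYTHONPATH` and the server is responding. A minimal check, assuming Carla's default host and port (`localhost:2000`):

```python
# Minimal connectivity check for the Carla Python API.
# Run while CarlaUE4.sh is running; localhost:2000 are Carla's defaults.
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)  # seconds to wait for the server
# Prints e.g. "0.9.15" if both the import path and the simulator are OK.
print(client.get_server_version())
```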
- Clone Scenario Runner to a directory of your choice:

  ```
  git clone https://github.com/UT-ADL/scenario_runner.git
  ```
- Install requirements:

  ```
  pip install -r scenario_runner/requirements.txt
  ```
- We need to make sure that the different modules find each other. The following environment variable should be set in `.bashrc`:

  ```
  SCENARIO_RUNNER_ROOT=<path_to>/scenario_runner
  ```
- In a new terminal (assuming the environment variables are exported), run the Carla simulator by entering the following command.

  ```
  $CARLA_ROOT/CarlaUE4.sh
  ```
- Launch the autonomy stack:

  a) OpenScenario: In a new terminal (assuming the environment variables are exported), launch a scenario with:

  ```
  roslaunch autoware_mini start_carla.launch use_scenario_runner:=true
  ```

  You can now execute scenarios by choosing them from the RViz Carla plugin dropdown and pressing the Execute button. You need to manually set the destination for the ego car when a scenario is launched. The predefined scenarios are available under `data/scenarios/MAP_NAME/SCENARIO_NAME.xosc`.

  OR
  b) Route Scenario: In a new terminal (assuming the environment variables are exported), launch a route scenario with:

  ```
  roslaunch autoware_mini start_carla.launch use_scenario_runner:=true route_id:=0
  ```

  This will launch route scenarios using `route_id = 0` in the default `tartu_demo` routes definition file tartu_demo.xml.
- Go to the autoware_mini src directory:

  ```
  cd ~/autoware_mini_ws/src
  ```
- Clone the repo containing car driver dependencies and launch files:

  ```
  git clone https://github.com/UT-ADL/lexus_platform.git
  ```
- Clone the latest Ouster driver repository:

  ```
  git clone --recurse-submodules https://github.com/ouster-lidar/ouster-ros.git
  ```
- Install system dependencies:

  ```
  rosdep install --include-eol-distros --from-paths . --ignore-src -r -y
  ```
- Build the workspace:

  ```
  catkin build --cmake-args -DCMAKE_BUILD_TYPE=Release
  ```

To launch the autonomy stack on the car:

```
roslaunch autoware_mini start_lexus.launch
```
We are standing on the shoulders of giants. These are the key libraries we are using:
- Autoware and especially Autoware.AI - original inspiration and message format.
- Lanelet2 - map format and global planning.
- Shapely - collision detection and general geometry calculations (see the sketch below).
- Numpy - efficient vectorized computations.
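As an example of the role Shapely plays, here is an illustrative sketch of the kind of corridor-versus-obstacle check a local planner can do. This is not Autoware Mini's actual code, and all coordinates are made up.

```python
# Illustrative Shapely-based collision check; not Autoware Mini's actual code.
from shapely.geometry import LineString, Point

# Local path as a polyline of (x, y) waypoints.
path = LineString([(0, 0), (10, 0), (20, 5)])
# Inflate the path by half the vehicle width to get a drivable corridor.
corridor = path.buffer(1.5)

# A detected obstacle approximated by a circle of radius 0.5 m.
obstacle = Point(12.0, 0.5).buffer(0.5)

if corridor.intersects(obstacle):
    # Distance along the path to the point closest to the obstacle.
    d = path.project(obstacle.centroid)
    print(f'obstacle on path, {d:.1f} m ahead')
```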