ROS1 to ROS2 transition #120

@sidtalia

Description

Is your feature request related to a problem? Please describe.
The current (noetic-hound) stack runs on ROS1.
This is not a pressing issue, since the existing stack is not broken; the motivation is future-proofing.

Describe the solution you'd like
For the class code, the workflow would be approximately:

  • Update the hardware drivers to ROS2 (ydlidar, vesc, realsense, joy-teleop).
  • Update the low-level code (up to teleop) to use ROS2 (mushr_base, mushr_sim, mushr_description, mushr_hardware, ...).
  • Update the ROS wrappers for the localization, planning, and control code.
  • Update the launch systems as well as the visualization tools (see the launch-file sketch after this list).
  • Figure out the ROS middleware: cyclone_dds works well on an isolated device, but it can clog the network when streaming data to another device (unclear whether that has since been fixed). FastRTPS addresses this issue, but has performance limitations for single-device operation. The sketch after this list also shows how to pin the RMW per launch.
  • Update the tests for the class homework code.
  • Confirm that localization/planning/control work individually in simulation.
  • Confirm that localization/planning/control work individually on the real robot.
  • Confirm that localization + planning + control work together on the real robot (final project).
  • Update the Docker composition code for the new ROS2 image.
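
As a rough sketch of what the launch-system port and middleware selection could look like, here is a minimal ROS2 Python launch file. The package, executable, and parameter names are hypothetical placeholders rather than the actual MuSHR names; the RMW environment variable is the standard ROS2 mechanism for picking the middleware per launch.

```python
# launch/teleop.launch.py -- minimal ROS2 launch sketch.
# Package/executable/parameter names are illustrative placeholders.
from launch import LaunchDescription
from launch.actions import SetEnvironmentVariable
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        # Pin the middleware for every node started by this launch file;
        # swap in 'rmw_fastrtps_cpp' if cyclone_dds clogs the network.
        SetEnvironmentVariable('RMW_IMPLEMENTATION', 'rmw_cyclonedds_cpp'),
        Node(
            package='mushr_base',         # hypothetical ROS2 port of mushr_base
            executable='vesc_to_odom',    # placeholder executable name
            name='vesc_to_odom',
            parameters=[{'wheelbase': 0.33}],  # example parameter
        ),
        Node(
            package='joy_teleop',
            executable='joy_teleop',
            name='joy_teleop',
        ),
    ])
```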

For the general-purpose code, updating the ROS backend for MuSHR would entail a similar set of steps.
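
To give a sense of the per-node effort, here is a minimal sketch of the rospy-to-rclpy port pattern. The topic names and message type are illustrative only, not taken from the MuSHR codebase.

```python
# Minimal rclpy node sketch illustrating the rospy -> rclpy port pattern.
import rclpy
from rclpy.node import Node
from ackermann_msgs.msg import AckermannDriveStamped


class TeleopRelay(Node):
    def __init__(self):
        super().__init__('teleop_relay')
        # rospy.Publisher(...) becomes create_publisher(type, topic, qos_depth).
        self.pub = self.create_publisher(AckermannDriveStamped, 'mux/input', 10)
        # rospy.Subscriber(...) becomes create_subscription(...).
        self.sub = self.create_subscription(
            AckermannDriveStamped, 'teleop/cmd', self.on_cmd, 10)

    def on_cmd(self, msg):
        # Pass-through; a real port would carry over the ROS1 node's logic.
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = TeleopRelay()
    rclpy.spin(node)  # replaces rospy.spin()
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

The same pattern (node class, explicit init/shutdown, QoS depth on every pub/sub) applies to the localization, planning, and control wrappers.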

To transition the HOUND stack to ROS2, in addition to the changes for MuSHR, we would need to do the following:

  • Update mavros to ROS2 (we have a custom version that allows control passthrough).
  • Update hound_core to ROS2 (primarily, the low_level_controller, the high_level_controller, and the HAL (hardware abstraction layer)).
  • Update elevation_mapping_cupy to ROS2.

If we want to converge HOUND and MuSHR toward a common stack, we can do so by removing the need for GPS (using visual odometry instead) and updating the mapping system to use nvblox. This would involve:

  • Integrating NVIDIA's cuVSLAM as the primary odometry source (and feeding this odometry to ArduPilot as well, to refine its localization estimates).
  • Converting nvblox's output to an elevation map so that we can keep using the existing planner and MPC setup (a conversion sketch follows this list), or figuring out how to simulate the robot's physics directly on the nvblox output.
  • In this new framework we could keep the map-based localization (making it the equivalent of an "indoor" GPS), do "high level" planning in that map as needed (as we do now in MuSHR), and then ask the navigation stack to move through those high-level goals. This relaxes the requirement on the map-level plan: it only needs to be kinematically feasible, not kinodynamically feasible.
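
For the nvblox-to-elevation-map conversion, one plausible approach is to take whatever surface representation nvblox exposes (e.g. mesh vertices or surface points; the exact output format is an assumption here, not a statement about the nvblox API) and reduce it to a 2D height grid by keeping the maximum z per cell:

```python
# Sketch: collapse 3D surface points (e.g. exported from nvblox's mesh)
# into a 2D elevation grid by taking the max z per cell.
# The input format is an assumption; nvblox's actual API may differ.
import numpy as np


def points_to_elevation(points: np.ndarray, resolution: float,
                        origin: np.ndarray, shape: tuple) -> np.ndarray:
    """points: (N, 3) xyz in meters; returns a (H, W) height map."""
    grid = np.full(shape, -np.inf, dtype=np.float32)
    # Map xy coordinates to integer grid indices.
    ij = np.floor((points[:, :2] - origin) / resolution).astype(int)
    valid = ((ij[:, 0] >= 0) & (ij[:, 0] < shape[0]) &
             (ij[:, 1] >= 0) & (ij[:, 1] < shape[1]))
    ij, z = ij[valid], points[valid, 2]
    # Keep the highest surface point per cell; np.maximum.at handles
    # repeated indices correctly (plain fancy indexing would not).
    np.maximum.at(grid, (ij[:, 0], ij[:, 1]), z)
    return grid
```

A grid like this could then feed the existing planner/MPC in place of the elevation_mapping_cupy layer, assuming matched resolution and frame conventions.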

Describe alternatives you've considered
Do nothing. The existing stack is not "broken", just "stale".
