robotis_lab is a research-oriented repository based on Isaac Lab, designed to enable reinforcement learning (RL) and imitation learning (IL) experiments using Robotis robots in simulation. This project provides simulation environments, configuration tools, and task definitions tailored for Robotis hardware, leveraging NVIDIA Isaac Sim’s powerful GPU-accelerated physics engine and Isaac Lab’s modular RL pipeline.
> [!IMPORTANT]
> This repository currently depends on Isaac Lab v2.0.0 or higher.
1. Install Isaac Lab by following the installation guide. We recommend the conda installation, as it simplifies calling Python scripts from the terminal.

2. Clone the robotis_lab repository (i.e., outside the `IsaacLab` directory):

   ```bash
   git clone https://github.com/ROBOTIS-GIT/robotis_lab.git
   ```

3. Install the robotis_lab package:

   ```bash
   cd robotis_lab && python -m pip install -e source/robotis_lab
   ```

4. Verify that the extension is correctly installed by printing all the environments available in the extension:

   ```bash
   python scripts/tools/list_envs.py
   ```
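Beyond listing the registered tasks, you can instantiate one directly from Python. The sketch below assumes the Isaac Lab v2.x package layout (`isaaclab`, `isaaclab_tasks`) and that importing `robotis_lab.tasks` registers the `RobotisLab-*` gym IDs; the exact module path is an assumption, not taken from the repo.

```python
# Minimal sketch: create a registered environment from Python.
# Assumes Isaac Lab v2.x package names; robotis_lab.tasks is an assumed import path.
import gymnasium as gym
import torch
from isaaclab.app import AppLauncher

# Isaac Sim must be launched before importing any simulation-dependent modules.
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app

import robotis_lab.tasks  # noqa: F401  (assumed module that registers the envs)
from isaaclab_tasks.utils import parse_env_cfg

env_cfg = parse_env_cfg("RobotisLab-Reach-OMY-v0", num_envs=16)
env = gym.make("RobotisLab-Reach-OMY-v0", cfg=env_cfg)

obs, _ = env.reset()
for _ in range(10):
    # Zero actions of the batched shape; Isaac Lab envs expect torch tensors.
    actions = torch.zeros(env.action_space.shape, device=env.unwrapped.device)
    obs, rew, terminated, truncated, info = env.step(actions)

env.close()
simulation_app.close()
```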
> [!NOTE]
> To control a **single robot** with the keyboard during playback, append the `--keyboard` flag to the play command.
Key bindings:

| Command                     | Key   |
| --------------------------- | ----- |
| Toggle gripper (open/close) | K     |
| Move arm along x-axis       | W / S |
| Move arm along y-axis       | A / D |
| Move arm along z-axis       | Q / E |
| Rotate arm along x-axis     | Z / X |
| Rotate arm along y-axis     | T / G |
| Rotate arm along z-axis     | C / V |
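These bindings correspond to Isaac Lab's built-in `Se3Keyboard` teleoperation device. As a rough sketch of how such a device is polled (the sensitivity values are illustrative, and a running Isaac Sim application is required):

```python
# Sketch: polling Isaac Lab's Se3Keyboard device (the source of the bindings above).
# Must run inside a launched Isaac Sim app; sensitivities are illustrative values.
from isaaclab.devices import Se3Keyboard

teleop = Se3Keyboard(pos_sensitivity=0.05, rot_sensitivity=0.05)
teleop.reset()

# Each call returns a 6-DoF delta pose (dx, dy, dz, droll, dpitch, dyaw)
# and a boolean gripper open/close command.
delta_pose, gripper_command = teleop.advance()
```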
### OMY Reach task

```bash
# Train
python scripts/reinforcement_learning/rsl_rl/train.py --task RobotisLab-Reach-OMY-v0 --num_envs=512 --headless

# Play
python scripts/reinforcement_learning/rsl_rl/play.py --task RobotisLab-Reach-OMY-v0 --num_envs=16
```
### OMY Lift task

```bash
# Train
python scripts/reinforcement_learning/rsl_rl/train.py --task RobotisLab-Lift-Cube-OMY-v0 --num_envs=512 --headless

# Play
python scripts/reinforcement_learning/rsl_rl/play.py --task RobotisLab-Lift-Cube-OMY-v0 --num_envs=16
```
### OMY Open drawer task

```bash
# Train
python scripts/reinforcement_learning/rsl_rl/train.py --task RobotisLab-Open-Drawer-OMY-v0 --num_envs=512 --headless

# Play
python scripts/reinforcement_learning/rsl_rl/play.py --task RobotisLab-Open-Drawer-OMY-v0 --num_envs=16
```
### FFW-BG2 Reach task

```bash
# Train
python scripts/reinforcement_learning/rsl_rl/train.py --task RobotisLab-Reach-FFW-BG2-v0 --num_envs=512 --headless

# Play
python scripts/reinforcement_learning/rsl_rl/play.py --task RobotisLab-Reach-FFW-BG2-v0 --num_envs=16
```
### OMY Stack task

Stack the blocks in the following order: blue → red → green.

```bash
# Teleop
python scripts/tools/record_demos.py --task RobotisLab-Stack-Cube-OMY-IK-Rel-v0 --teleop_device keyboard --dataset_file ./datasets/dataset.hdf5 --num_demos 10

# Annotate
python scripts/imitation_learning/isaaclab_mimic/annotate_demos.py --device cuda --task RobotisLab-Stack-Cube-OMY-IK-Rel-Mimic-v0 --auto --input_file ./datasets/dataset.hdf5 --output_file ./datasets/annotated_dataset.hdf5 --headless

# Mimic data
python scripts/imitation_learning/isaaclab_mimic/generate_dataset.py \
    --device cuda --num_envs 100 --generation_num_trials 1000 \
    --input_file ./datasets/annotated_dataset.hdf5 --output_file ./datasets/generated_dataset.hdf5 --headless

# Train
python scripts/imitation_learning/robomimic/train.py \
    --task RobotisLab-Stack-Cube-OMY-IK-Rel-v0 --algo bc \
    --dataset ./datasets/generated_dataset.hdf5

# Play
python scripts/imitation_learning/robomimic/play.py \
    --device cuda --task RobotisLab-Stack-Cube-OMY-IK-Rel-v0 --num_rollouts 50 \
    --checkpoint /PATH/TO/desired_model_checkpoint.pth
```
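The recording, annotation, and generation steps all read and write HDF5 files. A quick way to sanity-check a dataset before training, assuming the standard robomimic layout (demos under `data/`, each with `obs` and `actions`; the layout is an assumption, not verified against the repo):

```python
# Sketch: inspect a recorded/generated dataset, assuming the robomimic HDF5 layout.
import h5py

with h5py.File("./datasets/generated_dataset.hdf5", "r") as f:
    demos = sorted(f["data"].keys())
    print(f"{len(demos)} demonstrations")
    first = f["data"][demos[0]]
    print("actions:", first["actions"].shape)
    for name, dset in first["obs"].items():
        print(f"obs/{name}:", dset.shape)
```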
We provide a Sim2Real pipeline to deploy policies trained in Isaac Lab simulation directly onto the real OMY robot.
🎥 Demo video: `sim2real.mp4`
> [!IMPORTANT]
> **More on OMY hardware setup:** For details on how to set up and operate the OMY robot, please refer to the [open_manipulator](https://github.com/ROBOTIS-GIT/open_manipulator) repository.
In this pipeline:

- The trained policy (exported as a TorchScript `.pt` file) is executed on the real robot using ROS 2.
- The controller receives joint state feedback from the robot and sends joint trajectory commands through a ROS 2 control interface.
- A TF frame for the sampled target pose is broadcast for visualization and debugging.
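The following is a minimal sketch of that control loop, not the repository's actual node: topic names, joint names, the observation layout, and the control rate are all assumptions to adapt to the real setup.

```python
# Sketch of the Sim2Real loop described above. Topic names, joint names,
# observation layout, and rate are assumptions -- adapt them to your setup.
import rclpy
import torch
from builtin_interfaces.msg import Duration
from rclpy.node import Node
from sensor_msgs.msg import JointState
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint


class ReachPolicyNode(Node):
    def __init__(self) -> None:
        super().__init__("omy_reach_policy")
        # TorchScript policy exported from the Isaac Lab training run.
        self.policy = torch.jit.load("policy.pt")
        self.joint_pos: list[float] | None = None
        self.create_subscription(JointState, "/joint_states", self._on_joint_state, 10)
        self._cmd_pub = self.create_publisher(
            JointTrajectory, "/arm_controller/joint_trajectory", 10  # assumed topic
        )
        self.create_timer(0.02, self._step)  # 50 Hz control loop (assumed)

    def _on_joint_state(self, msg: JointState) -> None:
        self.joint_pos = list(msg.position)

    def _step(self) -> None:
        if self.joint_pos is None:
            return  # no joint feedback received yet
        # Placeholder observation: the real policy also consumes the target pose etc.
        obs = torch.tensor([self.joint_pos], dtype=torch.float32)
        with torch.no_grad():
            action = self.policy(obs).squeeze(0).tolist()
        traj = JointTrajectory()
        traj.joint_names = [f"joint{i}" for i in range(1, 7)]  # assumed names
        point = JointTrajectoryPoint()
        point.positions = [float(a) for a in action]
        point.time_from_start = Duration(nanosec=20_000_000)
        traj.points.append(point)
        self._cmd_pub.publish(traj)


def main() -> None:
    rclpy.init()
    rclpy.spin(ReachPolicyNode())


if __name__ == "__main__":
    main()
```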
### Prerequisites

- A trained policy (under `logs/rsl_rl/reach_omy/`).
- ROS 2 Jazzy installed and sourced.
- Robot hardware ready and controllable via the joint trajectory interface.
### Run Sim2Real Reach Policy on OMY

```bash
# Train
python scripts/reinforcement_learning/rsl_rl/train.py --task RobotisLab-Reach-OMY-v0 --num_envs=512 --headless

# Sim2Real
python scripts/sim2real/OMY/reach/run_omy_reach.py --model_dir=<2025-07-10_08-47-09>
```

Replace `<2025-07-10_08-47-09>` with the actual timestamp folder name under `logs/rsl_rl/reach_omy/`.
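To sanity-check an export before deploying, you can load the TorchScript file directly. The `exported/policy.pt` location follows Isaac Lab's usual rsl_rl play-script convention and is an assumption here, as is the observation dimension:

```python
# Sketch: verify the exported TorchScript policy loads and produces actions.
# The exported/policy.pt path and the observation dimension are assumptions.
import torch

policy = torch.jit.load("logs/rsl_rl/reach_omy/2025-07-10_08-47-09/exported/policy.pt")
policy.eval()

obs_dim = 24  # replace with the task's actual observation dimension
obs = torch.zeros(1, obs_dim)
with torch.no_grad():
    action = policy(obs)
print("action shape:", action.shape)
```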