For detailed information about this project, please refer to the paper included in this repository:
Oliani_two_arms_one_goal.pdf.
| Code Directory | Description |
|---|---|
| `robot_controllers` | Impedance controller for the UR5 robot arm |
| `dual_ur5_env` | Environment setup for the dual UR5 environment |
| `vision` | Point-cloud-based encoders |
| `utils` | Point-cloud fusion and voxelization |
| `her` | Hindsight Experience Replay for online learning |
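
As a rough illustration of the point-cloud fusion and voxelization that `utils` covers, here is a minimal numpy sketch; the function names, signatures, and the default voxel size are assumptions for illustration, not the repo's actual API.

```python
import numpy as np

def fuse_point_clouds(clouds, extrinsics):
    """Transform each camera's cloud into a common base frame and concatenate.

    clouds:     list of (N_i, 3) arrays in the respective camera frame
    extrinsics: list of (4, 4) camera-to-base homogeneous transforms
    """
    fused = []
    for points, T in zip(clouds, extrinsics):
        homog = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
        fused.append((homog @ T.T)[:, :3])                          # back to (N, 3)
    return np.vstack(fused)

def voxelize(points, voxel_size=0.01):
    """Downsample by keeping one averaged point per occupied voxel (1 cm assumed)."""
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    # Group points that fall into the same voxel and average them.
    _, inverse, counts = np.unique(voxel_ids, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.shape[0], 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```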
- Follow the installation instructions in the official SERL repo.
- Check `envs` and either use the provided `box_picking_env` or set up a new environment using it as a template. (New environments have to be registered; see the registration sketch after this list.)
- Use the config file to configure all robot-arm-specific parameters, as well as gripper and camera information (see the config sketch below).
- Go to the box picking folder and modify the bash files `run_learner.py` and `run_actor.py`. If no images are used, set `camera_mode` to `none`. WandB logging can be deactivated if `debug` is set to `True` (see the flag sketch below).
- Record 20 demonstrations using `record_demo.py` in the same folder. Double-check that the `camera_mode` and all environment wrappers are identical to `drq_policy.py` (see the recording sketch below).
- Execute `run_learner.py` and `run_actor.py` simultaneously to start the RL training.
- To evaluate a policy, modify and execute `run_evaluation.py` with the desired checkpoint path and step (see the evaluation sketch below).
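
For the environment-registration step, a minimal sketch assuming the gymnasium-style registry that SERL-based repos typically use; the environment id and entry point below are placeholders, not names from this repo.

```python
from gymnasium.envs.registration import register

register(
    id="DualUR5BoxPicking-v0",  # placeholder id
    entry_point="dual_ur5_env.box_picking_env:BoxPickingEnv",  # placeholder module:class
    max_episode_steps=100,      # assumed episode limit
)
```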
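For the configuration step, a sketch of the kind of robot-arm, gripper, and camera parameters such a config file typically collects; every field name and default below is a placeholder, not this repo's schema.

```python
from dataclasses import dataclass

@dataclass
class EnvConfig:
    robot_ips: tuple = ("192.168.1.10", "192.168.1.11")  # one UR5 per arm (placeholder IPs)
    controller_hz: int = 100             # impedance-controller rate (assumed)
    gripper_port: str = "/dev/ttyUSB0"   # gripper connection (placeholder)
    camera_serials: tuple = ()           # camera identifiers; empty if camera_mode is "none"
    camera_mode: str = "rgb"             # observation mode passed to the env wrappers
```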
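For the `camera_mode` and `debug` options, a sketch of how flag-driven training scripts commonly expose them (shown with `absl.flags`); the defaults and help strings are assumptions.

```python
from absl import flags

FLAGS = flags.FLAGS

# camera_mode and debug are the options referred to in the list above;
# the DEFINE calls sketch how a flag-style script would expose them.
flags.DEFINE_string("camera_mode", "none",
                    "Camera observations; 'none' trains from state only.")
flags.DEFINE_boolean("debug", False,
                     "If True, WandB logging is disabled.")
```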
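For the demo-recording step, a minimal sketch in the spirit of `record_demo.py`; the environment id, action source, and output file name are assumptions. The key point from the list above is to construct the environment with exactly the same wrappers as `drq_policy.py`.

```python
import pickle
import gymnasium as gym

# Placeholder id; apply the same environment wrappers as drq_policy.py here.
env = gym.make("DualUR5BoxPicking-v0")

transitions = []
for _ in range(20):  # 20 demonstrations, as recommended above
    obs, _ = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()  # replace with teleoperated input
        next_obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
        transitions.append(dict(observations=obs, actions=action, rewards=reward,
                                next_observations=next_obs, dones=done))
        obs = next_obs

with open("demos_20.pkl", "wb") as f:  # placeholder file name
    pickle.dump(transitions, f)
```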
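For the evaluation step, a hedged sketch in the spirit of `run_evaluation.py`; the `load_policy` helper, environment id, checkpoint path, and success key are hypothetical stand-ins for whatever the actual script uses.

```python
import gymnasium as gym

env = gym.make("DualUR5BoxPicking-v0")  # placeholder id; use the same wrappers as training

def load_policy(checkpoint_path: str, step: int):
    """Hypothetical stand-in for the repo's checkpoint restore.

    Returns a random policy so the sketch runs end to end; the real script
    would rebuild the agent and restore the weights saved at `step`.
    """
    del checkpoint_path, step
    return lambda obs: env.action_space.sample()

policy = load_policy("/path/to/checkpoints", step=20_000)  # placeholder path and step

episodes, successes = 10, 0
for _ in range(episodes):
    obs, info = env.reset()
    done = False
    while not done:
        obs, reward, terminated, truncated, info = env.step(policy(obs))
        done = terminated or truncated
    successes += int(info.get("succeed", False))  # success key is an assumption
print(f"success rate: {successes / episodes:.0%}")
```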