Reproducible Robot Learning of Shifting Objects for Grasping in Cluttered Environments with UR and Intel RealSense
This repository contains a replication of the paper *Robot Learning of Shifting Objects for Grasping in Cluttered Environments*, presented at IROS 2019 in Macau. To allow for easy reproduction, all dependencies are contained in a Docker image available from Docker Hub: `dtiresearch/ur-learning-shifting-for-grasping`.
In contrast to the original approach, we implement the program flow inside Universal Robots' PolyScope. This improves maintainability and makes the system easier to use for people who are not experts in Python/C++ or reinforcement learning (RL). In addition to the robot, we use a compute box (running Ubuntu 20.04 with Docker 19.03.14, build 5eb3275d40) which is connected to (1) the UR5 (e-series) via TCP/IP and (2) the Intel RealSense D435 camera via USB 3.1 Gen 2.
- Universal Robots™ UR5 e-series (SW 5.9.1.1031110)
- Schunk® EGI Gripper
- Intel® RealSense™ D435 Depth Camera
- CAD models of the gripper fingers, camera mount, and box mounts are located in the `cad-models` directory
- Install the Schunk EGI URCap (v.1.0.1) on the UR5
- Install the DTI UR Libraries URCap (v.0.3.0)
- Copy the contents of the `ur/` folder in this repo onto an empty USB drive. Insert the USB drive into the robot's teach pendant, and the files will automatically be copied to the controller, ready to be run.
The only prerequisite is that you have Docker installed.
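Our compute box runs Docker 19.03.14 (build 5eb3275d40); you can check your installed version with:

```
docker --version
```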
- Pull down our Docker image

  ```
  docker pull dtiresearch/ur-learning-shifting-for-grasping
  ```
- Start the container on the compute box
docker run -it --rm --net=host --privileged \
dtiresearch/ur-learning-shifting-for-grasping
NOTE: The Docker container requires privileged rights in order to communicate with the camera over USB, and access to the host's network in order to communicate with the robot.
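To sanity-check the setup from inside the container, you can list the connected RealSense devices and check that the robot is reachable. This sketch assumes the librealsense command-line tools are present in the image; `<robot-ip>` is a placeholder for the UR5's address:

```
# Assumes the librealsense tools are available inside the container
rs-enumerate-devices   # the D435 should show up in the output

# Replace <robot-ip> with the IP address configured on the UR5
ping <robot-ip>
```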
- Start the XML-RPC servers
TO DO
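The exact startup command is still to be documented. As a minimal sketch of the general shape of such a server, the example below uses Python's standard `xmlrpc` module; the port number, function name, and return format are illustrative assumptions, not this repo's actual API:

```python
# Minimal sketch of an XML-RPC server that a PolyScope program could query.
# All names, the port, and the return format are assumptions for illustration.
from xmlrpc.server import SimpleXMLRPCServer

def get_action():
    # Placeholder: the real server would run inference on a depth image
    # and return a grasp or shift pose for the robot to execute.
    return [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(get_action, "get_action")
print("XML-RPC server listening on port 8000")
server.serve_forever()
```

On the robot side, the DTI UR Libraries URCap would call such a server over the host network; since the container runs with `--net=host`, the server is reachable on the compute box's own IP address.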
- The trained models are available in the `models` directory