
Commit e05911a

Merge pull request #261 from personalrobotics/feature/perception/relative_transform

Perception README + Feature/perception/relative transform

2 parents: 5abd92f + 9de5362

File tree

2 files changed: +83 −4 lines

README.md

Lines changed: 75 additions & 0 deletions
@@ -177,6 +177,81 @@ path1 = planner.PlanToConfiguration(robot, goal)
path2 = planner.PlanToBasePose(robot, goal_pose)
```

## Perception Pipeline

Recently, support has been added for a few perception routines. The general structure is intended to mirror that of the planning pipeline, but from the user's perspective it is somewhat less encapsulated than planning.

There is a `prpy.perception.base.PerceptionModule` class which is extended by every perception routine, and the common perception methods of each routine are annotated with `@PerceptionMethod`. Here is an example call (as it would run in a typical herbpy console):

```python
from prpy.perception.apriltags import ApriltagsModule
from prpy.util import FindCatkinResource

adetector = ApriltagsModule(marker_topic='/apriltags_kinect2/marker_array',
                            marker_data_path=FindCatkinResource('pr_ordata',
                                                                'data/objects/tag_data.json'),
                            kinbody_path=FindCatkinResource('pr_ordata', 'data/objects'),
                            destination_frame='/map',
                            detection_frame='/head/kinect2_rgb_optical_frame')
detected_objects = adetector.DetectObjects(robot)
```

IMPORTANT: Most of these methods require an underlying C++ server to be running before calls can be made to the PrPy detector.
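
Since these calls depend on the corresponding server, it can help to fail fast when it is down. A minimal sketch for the AprilTags case, assuming a ROS node has already been initialized and assuming the marker topic carries `visualization_msgs/MarkerArray` messages:

```python
import rospy
from visualization_msgs.msg import MarkerArray

# Raises rospy.ROSException if the marker topic stays silent for 5 seconds,
# which usually means the apriltags server is not running.
rospy.wait_for_message('/apriltags_kinect2/marker_array', MarkerArray, timeout=5.0)
```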
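
New routines plug into this structure by subclassing `PerceptionModule` and annotating their public entry points with `@PerceptionMethod`. Here is a minimal sketch of that pattern; the class name, constructor argument, and detection logic are hypothetical placeholders:

```python
from prpy.perception.base import PerceptionModule, PerceptionMethod


class ExampleDetectorModule(PerceptionModule):
    """Hypothetical perception routine illustrating the module pattern."""

    def __init__(self, service_namespace):
        super(ExampleDetectorModule, self).__init__()
        self.service_namespace = service_namespace  # assumed constructor argument

    @PerceptionMethod
    def DetectObject(self, robot, obj_name, **kw_args):
        # A real module would query its underlying server for obj_name,
        # build the matching kinbody, and add it to robot.GetEnv().
        raise NotImplementedError('detection logic elided in this sketch')
```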
### Perception Modules

Currently, the following perception routines are supported:

- `AprilTags`
- `VNCC`: Vectorized Normalized Cross Correlation
- `SimTrack`
- `BlockDetector`
- `ROCK`: Robust Object Constellation and Kinematic Pose

### Underlying Servers

- `AprilTags`: Started via `apriltags.launch` in [herb_launch](https://github.com/personalrobotics/herb_launch). Publishes to `/apriltags_kinect2/detections` and `/apriltags_kinect2/marker_array`.
- `VNCC`: Have [vncc_msgs](https://github.com/personalrobotics/vncc_msgs) and [vncc](https://github.com/personalrobotics/vncc) in your workspace. Run `roslaunch vncc vncc_estimator.launch`. This provides the `/vncc/get_vncc_detections` service.
- `SimTrack`: See the Caveats section below.
- `BlockDetector`: Have [tabletop_perception_tools](https://github.com/personalrobotics/tabletop_perception_tools) in your workspace. Run `rosrun tabletop_perception_tools tools_server`. This provides the `/tools_server/find_blocks` service.
- `ROCK`: To be updated later.

### Common Perception Methods

At this point, two methods are common to all perception routines, though some routine-specific knowledge may be required to make them work. This is particularly reflected in the constructor of each perception module.

- `DetectObjects(self, robot, **kw_args)`: Runs the perception method for all objects that the particular routine knows about. Typically, this set of objects is specified either in a config file (in the case of AprilTags) or in the constructor of the respective module.
- `DetectObject(self, robot, obj_name)`: Runs the perception routine to detect a particular object, based on the known names in the database.

Both typically return one or more OpenRAVE kinbodies, with the correct transformation relative to the current environment, provided the input `tf` frames have been specified correctly.

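For example, a single-object query might look like the following sketch, reusing the `adetector` constructed above (the object name `fuze_bottle` is hypothetical and must exist in the tag database):

```python
# Detect a single named object; returns a kinbody posed in the environment.
fuze = adetector.DetectObject(robot, 'fuze_bottle')
```
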
### Caveats

As mentioned above, running the perception routines requires a bit of routine-specific knowledge, because of differences in the way some of them operate. Those caveats are listed here for each routine.

- `AprilTags`: This method detects visual fiducial markers. A database in `pr_ordata/data/objects/tag_data.json` maps AprilTag IDs to the objects they are attached to, along with the relative transform of the tag with respect to the object kinbody.
- `VNCC`: This is a single-query method, so it currently supports only `DetectObject` (not `DetectObjects`), where the object names are obtained from the map in the module's constructor.
- `SimTrack`: See https://github.com/personalrobotics/simtrack for more details. You will need the `personalrobotics` fork. This can track/detect any kind of textured object stored as an `.obj` file. The perception module only calls the detector, but the tracker can also be integrated fairly easily. It supports `DetectObjects`, and requires the simtrack `multi_rigid_node` to be running on the robot. Inside the module, there is a map of `simtrack` objects to kinbodies.
- `BlockDetector`: This is specifically for detecting blocks on a table in front of the camera, so it only has a `DetectBlocks` method.
- `ROCK`: This is still under development and so does not exactly conform to the API above.

## Environment Cloning

src/prpy/perception/apriltags.py

Lines changed: 8 additions & 4 deletions
```diff
@@ -37,7 +37,7 @@
 class ApriltagsModule(PerceptionModule):
 
     def __init__(self, marker_topic, marker_data_path, kinbody_path,
-                 detection_frame, destination_frame):
+                 detection_frame, destination_frame, reference_link):
         """
         This initializes an April Tags detector.
@@ -55,6 +55,7 @@ def __init__(self, marker_topic, marker_data_path, kinbody_path,
         self.kinbody_path = kinbody_path
         self.detection_frame = detection_frame
         self.destination_frame = destination_frame
+        self.reference_link = reference_link
 
 
     def __str__(self):
@@ -63,7 +64,7 @@ def __str__(self):
 
     def _DetectObjects(self, env, marker_topic=None, marker_data_path=None,
                        kinbody_path=None, detection_frame=None,
-                       destination_frame=None, **kw_args):
+                       destination_frame=None, reference_link=None, **kw_args):
         """
         Use the apriltags service to detect objects and add them to the
         environment. Params are as in __init__.
@@ -88,14 +89,16 @@ def _DetectObjects(self, env, marker_topic=None, marker_data_path=None,
 
         if kinbody_path is None:
             kinbody_path = self.kinbody_path
-
 
         if detection_frame is None:
             detection_frame = self.detection_frame
 
         if destination_frame is None:
             destination_frame = self.destination_frame
 
+        if reference_link is None:
+            reference_link = self.reference_link
+
         # TODO: Creating detector is not instant...might want
         # to just do this once in the constructor
         import kinbody_detector.kinbody_detector as kd
@@ -104,7 +107,8 @@ def _DetectObjects(self, env, marker_topic=None, marker_data_path=None,
                                            kinbody_path,
                                            marker_topic,
                                            detection_frame,
-                                           destination_frame)
+                                           destination_frame,
+                                           reference_link)
 
         logger.warn('Waiting to detect objects...')
         return detector.Update()
```
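
Judging by the branch name (`feature/perception/relative_transform`), the new `reference_link` argument appears to supply the OpenRAVE link corresponding to `destination_frame`, so detections can be posed relative to that link. A construction sketch under that assumption; the link name `base_link` is hypothetical:

```python
from prpy.perception.apriltags import ApriltagsModule
from prpy.util import FindCatkinResource

adetector = ApriltagsModule(marker_topic='/apriltags_kinect2/marker_array',
                            marker_data_path=FindCatkinResource('pr_ordata',
                                                                'data/objects/tag_data.json'),
                            kinbody_path=FindCatkinResource('pr_ordata', 'data/objects'),
                            detection_frame='/head/kinect2_rgb_optical_frame',
                            destination_frame='/map',
                            reference_link=robot.GetLink('base_link'))  # hypothetical link name
detected_objects = adetector.DetectObjects(robot)
```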
