Commit 8f5700e

Shushman Choudhury committed: README for perception pipeline

1 parent 6d7bf0d commit 8f5700e


README.md

Lines changed: 25 additions & 1 deletion
@@ -197,6 +197,8 @@ adetector = ApriltagsModule(marker_topic='/apriltags_kinect2/marker_array',
                            detection_frame='/head/kinect2_rgb_optical_frame')
detected_objects = adetector.DetectObjects(robot)
```
IMPORTANT: Most of these methods require an underlying C++ server to be running before calls can be
made to the PrPy detector.

### Perception Modules

@@ -208,11 +210,16 @@ Currently, the following perception routines are supported:
- `BlockDetector`
- `ROCK`: Robust Object Constellation and Kinematic Pose

### Underlying Servers
To be filled in once we have come to a consensus on whether to launch all servers at startup, etc.

### Common Perception Methods

At this point, two methods are common to all perception routines. However, some
routine-specific knowledge may be required to make them work. This is particularly reflected
in the constructor for the perception module.

- `DetectObjects(self, robot, **kw_args)`: This runs the perception method for all
objects that the particular routine knows about. Typically, this information is specified
@@ -225,6 +232,23 @@ The return type for both is typically one or more OpenRAVE kinbodies, with the c
transformation relative to the current environment, if the input `tf`s have been
correctly provided.
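For concreteness, here is a minimal sketch of consuming the result of `DetectObjects`, using the
`adetector` instance from the AprilTags example above. Only standard OpenRAVE kinbody accessors
are used; nothing beyond the calls already shown in this README is assumed.

```
# Minimal sketch: detect everything the module knows about and inspect the
# returned OpenRAVE kinbodies, which are posed relative to the current
# environment when the input tfs are correct.
detected_objects = adetector.DetectObjects(robot)
for body in detected_objects:
    print(body.GetName())
    print(body.GetTransform())   # 4x4 pose in the environment frame
```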

### Caveats
As mentioned above, running the perception routines requires a bit of routine-specific knowledge,
because of differences in the way some of them operate. Some of those caveats are listed here for
each routine.
- `AprilTags`: This method involves detection of visual fiducial markers. There is a database that maps
AprilTag IDs to the objects to which they are attached, along with the relative transform
of the tag with respect to the object kinbody, in `pr_ordata/data/objects/tag_data.json`.
- `VNCC`: This is a single-query method, so it currently does not support `DetectObjects`, only
`DetectObject`, where the object names are obtained from the map in the module's constructor
(see the sketch after this list).
- `SimTrack`: Ask Matt to fill in or do later
- `BlockDetector`: This is specifically for detecting blocks on a table in front of the camera. Therefore,
it only has a `DetectBlocks` method.
- `ROCK`: This is still under development and so does not exactly conform to the underlying API.
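The single-query pattern mentioned for `VNCC` above might look roughly like the following sketch.
It assumes an already-constructed VNCC perception module instance, here called `vncc_detector`,
and the object name `'fuze_bottle'` is purely illustrative; consult the module's constructor for
the actual object-name map.

```
# Hedged sketch of the single-query form. `vncc_detector` is assumed to be a
# VNCC perception module instance whose constructor defined the object-name map;
# 'fuze_bottle' is an illustrative name, not necessarily in that map.
bottle = vncc_detector.DetectObject(robot, 'fuze_bottle')

# As with DetectObjects, the result is an OpenRAVE kinbody posed in the environment.
print(bottle.GetName())
print(bottle.GetTransform())
```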
## Environment Cloning

Cloning environments is critical to enable planning with multiple planners in
