IMPORTANT - Most of these methods require an underlying C++ server to be running before calls can be made to the PrPy detector.
### Perception Modules
Currently, the following perception routines are supported:
- `AprilTags`
- `VNCC`: Vectorized Normalized Cross Correlation
- `SimTrack`
- `BlockDetector`
- `ROCK`: Robust Object Constellation and Kinematic Pose
### Underlying Servers
- `AprilTags`: Started via `apriltags.launch` in [herb_launch](https://github.com/personalrobotics/herb_launch). Publishes to `/apriltags_kinect2/detections` and `/apriltags_kinect2/marker_array`.
- `VNCC`: Have [vncc_msgs](https://github.com/personalrobotics/vncc_msgs) and [vncc](https://github.com/personalrobotics/vncc) in your workspace. Run `roslaunch vncc vncc_estimator.launch`. This provides the `/vncc/get_vncc_detections` service.
- `SimTrack`: See the Caveats section below.
- `BlockDetector`: Have [tabletop_perception_tools](https://github.com/personalrobotics/tabletop_perception_tools) in your workspace. Run `rosrun tabletop_perception_tools tools_server`. This provides the `/tools_server/find_blocks` service.
- `ROCK`: To be updated later.
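For convenience, the startup commands above can be collected as below. This is a sketch that assumes a sourced catkin workspace containing the listed packages; in particular, the `herb_launch` package name for the AprilTags launch file is an assumption based on the repository linked above. Each command blocks, so run each in its own terminal:

```shell
# AprilTags detector (publishes /apriltags_kinect2/detections and
# /apriltags_kinect2/marker_array); package name assumed from herb_launch:
roslaunch herb_launch apriltags.launch

# VNCC estimator (provides the /vncc/get_vncc_detections service):
roslaunch vncc vncc_estimator.launch

# Block detector (provides the /tools_server/find_blocks service):
rosrun tabletop_perception_tools tools_server
```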
### Common Perception Methods
At this point, two methods are common to all perception routines. However, some routine-specific knowledge may be required to make them work. This is particularly reflected in the constructor of the perception module.
- `DetectObjects(self, robot, **kw_args)`: Runs the perception method for all objects that the particular routine knows about. Typically, this information is specified either in a config file (in the case of AprilTags) or in the constructor of the respective module.
- `DetectObject(self, robot, obj_name)`: Runs the perception routine to detect a particular object, based on the known names in the database.
The return type of both is typically one or more OpenRAVE KinBodies, with the correct transform relative to the current environment, provided the input `tf`s have been specified correctly.
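To illustrate the shape of this common interface, here is a minimal, self-contained sketch. It is a stand-in, not real PrPy code: the class name, the object names, and the dict-based return value are all hypothetical (real modules wrap ROS services and return OpenRAVE KinBodies), but the `DetectObjects`/`DetectObject` signatures mirror the ones described above.

```python
class FakePerceptionModule(object):
    """Hypothetical module mirroring the DetectObjects/DetectObject API."""

    def __init__(self, object_names):
        # Routine-specific knowledge lives in the constructor,
        # e.g. the map of object names the routine knows about.
        self.object_names = list(object_names)

    def DetectObject(self, robot, obj_name):
        # A real module would query the underlying server and add a
        # KinBody to the robot's environment; a dict stands in here.
        if obj_name not in self.object_names:
            raise ValueError('Unknown object: %s' % obj_name)
        return {'name': obj_name, 'pose': 'identity'}

    def DetectObjects(self, robot, **kw_args):
        # Runs detection for every object the routine knows about.
        return [self.DetectObject(robot, name) for name in self.object_names]

# Hypothetical object names, loosely styled after pr_ordata objects.
module = FakePerceptionModule(['fuze_bottle', 'pop_tarts'])
detections = module.DetectObjects(robot=None)
print([d['name'] for d in detections])  # ['fuze_bottle', 'pop_tarts']
```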
### Caveats
As mentioned above, running the perception routines requires a bit of routine-specific knowledge, because of differences in the way some of them operate. Some of those caveats, for each routine, are mentioned here.
- `AprilTags`: This method detects visual fiducial markers. A database in `pr_ordata/data/objects/tag_data.json` maps AprilTag IDs to the objects to which they are attached, along with the relative transform of the tag with respect to the object KinBody.
- `VNCC`: This is a single-query method, so it currently supports only `DetectObject`, not `DetectObjects`; the object names are obtained from the map in the module's constructor.
- `SimTrack`: See https://github.com/personalrobotics/simtrack for more details. You will need the `personalrobotics` fork. This can track/detect any kind of textured object stored as an `.obj` file. The perception module only calls the detector, but the tracker can also be integrated fairly easily. It supports `DetectObjects`, and requires the SimTrack `multi_rigid_node` to be running on the robot. Inside the module, there is a map of `simtrack` objects to KinBodies.
- `BlockDetector`: This is specifically for detecting blocks on a table in front of the camera. Therefore, it only has a `DetectBlocks` method.
- `ROCK`: This is still under development and so does not exactly conform to the underlying API.
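The AprilTags database pairs each tag ID with an object and a tag-to-object transform. The sketch below shows one plausible layout and a lookup; the field names and the entry shown are assumptions for illustration, not the real schema of `tag_data.json`.

```python
import json

# Hypothetical layout for pr_ordata/data/objects/tag_data.json; the real
# schema may differ. Each entry maps an AprilTag ID to the KinBody it is
# attached to and a 4x4 tag-to-object transform (row-major).
tag_data = json.loads("""
{
    "tag124": {
        "kinbody": "fuze_bottle",
        "tag_to_object": [[1, 0, 0, 0],
                          [0, 1, 0, 0],
                          [0, 0, 1, 0.05],
                          [0, 0, 0, 1]]
    }
}
""")

def lookup(tag_id):
    # Given a detected tag ID, return the object name and the relative
    # transform used to place the KinBody in the environment.
    entry = tag_data['tag%d' % tag_id]
    return entry['kinbody'], entry['tag_to_object']

name, transform = lookup(124)
print(name)  # fuze_bottle
```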