* Update dummy SegmentFromPoint to publish new Mask.msg
* Update AcquireFood goal request pickles to have the new Mask.msg
* Update README with instructions on simulating bite acquisition
Changed file: `feeding_web_app_ros2_test/README.md` (15 additions, 0 deletions)
You can also toggle off certain combinations of dummy nodes with arguments:

- **Don't run the RealSense node**: `ros2 launch feeding_web_app_ros2_test feeding_web_app_dummy_nodes_launch.xml run_real_sense:=false`

You can also combine any of the above arguments.
## Simulating an AcquireFood Goal
If you launch the code with `--sim mock` (see [here](https://github.com/personalrobotics/ada_feeding/blob/ros2-devel/README.md)), using bite selection through the app should properly call AcquireFood, and you should be able to see the acquisition action in RVIZ (we recommend adding an `Axes` visualization in RVIZ for the `food` frame to see the perceived top-center and orientation of the detected food item). However, this approach has two downsides:

1. The detected food mask is a dummy mask, not a real output of SegmentAnything.
2. Each time you use the app to invoke AcquireFood, the mask and depth will be slightly different, which is a challenge for repeatability.
To address these issues, we have pickled goal requests from the app to the AcquireFood action. These goal requests were segmented by the actual `SegmentFromPoint` node using the dummy RGB and depth images (found in `./data`). To invoke bite acquisition with a static goal request, do the following:

1. `cd ~/colcon_ws`
2. `python3 src/ada_feeding/start.py --sim mock`. See [here](https://github.com/personalrobotics/ada_feeding/blob/ros2-devel/README.md) for more info.
3. `ros2 launch feeding_web_app_ros2_test feeding_dummy_acquirefood_launch.py`. This will have the robot move above the plate and then invoke `AcquireFood` with the stored goal request.
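The pickled goal requests are ordinary Python pickles, so they can be listed and inspected before replaying them. The sketch below is a hypothetical illustration of that loading step, not the repo's API: a stand-in dict plays the role of the real AcquireFood goal (with its Mask.msg), and a temporary directory stands in for `./data`.

```python
import pickle
import tempfile
from pathlib import Path

# Stand-in for ./data; in practice, point this at the real directory.
data_dir = Path(tempfile.mkdtemp())

# Stand-in for a real pickled AcquireFood goal request (fields are invented
# for illustration; the real object is a ROS 2 action goal message).
stand_in_goal = {"mask_width": 2, "mask_height": 2, "point": (320, 240)}
(data_dir / "goal_1.pkl").write_bytes(pickle.dumps(stand_in_goal))

# List and load every pickled goal request, printing its fields.
for path in sorted(data_dir.glob("*.pkl")):
    with path.open("rb") as f:
        goal = pickle.load(f)
    print(path.name, "->", sorted(goal))
```

Note that unpickling a real goal request requires the ROS 2 message definitions (e.g. the new `Mask.msg`) to be importable, so run this from a sourced workspace.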
There are 6 pickled goal requests in `./data`. You can modify which one gets run through a launch argument to `feeding_dummy_acquirefood_launch.py`. Be sure to also change the images published by the DummyRealSense node to correspond to the new pickled goal request; these can be changed by a launch argument to `feeding_web_app_dummy_nodes_launch.xml`.
To pickle more goal requests, modify `ada_feeding/config/` to [pass the `pickle_goal_path` parameter to the AcquireFood tree](https://github.com/personalrobotics/ada_feeding/blob/f889fe44351ec552e945ba028d4928826ee03710/ada_feeding/config/ada_feeding_action_servers_default.yaml#L54). That parameter should be a full path to where you want to store the pickle. Then run the **dummy RealSense node** and the **real** perception nodes, run the web app, perform bite selection for a bite, and select a mask. That mask will be stored in a pickle.
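Mechanically, the assumed effect of `pickle_goal_path` is that the incoming goal request gets dumped to that path, which a later replay can read back. The sketch below shows just that round trip with a stand-in dict (field names are invented for illustration) and a temp-file path instead of a real configured path:

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for what pickle_goal_path would point at.
pickle_goal_path = os.path.join(tempfile.gettempdir(), "acquirefood_goal.pkl")

# Stand-in for the real AcquireFood goal request message.
goal_request = {"food_mask": [[1, 0], [0, 1]], "seed_point": (100, 200)}

# What the tree would do when pickle_goal_path is set: dump the goal.
with open(pickle_goal_path, "wb") as f:
    pickle.dump(goal_request, f)

# What a later replay does: read the stored goal back.
with open(pickle_goal_path, "rb") as f:
    restored = pickle.load(f)
assert restored == goal_request
```

Because pickling captures the exact mask and seed point, replaying the same file gives the repeatability that live app-driven segmentation lacks.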