Replies: 11 comments
-
Oh, and for what it's worth, the Bullet benchmarks execute in less than 10 minutes, but the FCL one has been running for about 15 with no sign of stopping. I will see if I can let it complete the run.
-
I will clean up the benchmark package and make it public. I will also take a look at these benchmarks to see if it is a configuration difference.
-
I was looking at the benchmarks and we may want to make some modifications. Currently it appears that each one also includes adding the collision objects, setting the active collision objects and collision margin data, and then performing the operation in the description. It may be better to expand these to provide finer resolution in addition to what is here. For example, in motion planning the adding of the links, setting of active collision links, and setting of collision margins only happen once; after that it just sets transforms and performs a contact test. In the case of FCL there may be larger overhead in adding the links, setting active links, and setting collision margin data, which could explain the significant difference between the results I presented and the ones here. @mpowelson Do you have any thoughts on how to create additional benchmarks where the previously mentioned operations only happen once, and then operations like the contact test, set transform, etc. are the only things being benchmarked? We would also want to capture what is currently there, because it does capture the overhead cost of initial setup, but maybe that is just a benchmark on its own?
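For illustration, a minimal sketch of this split using Google Benchmark fixtures: `SetUp` runs before the timed loop, so the one-time setup is excluded from the measurement. The commented-out contact-manager calls are placeholders, not exact Tesseract signatures.

```cpp
#include <benchmark/benchmark.h>

class ContactTestFixture : public benchmark::Fixture {
 public:
  void SetUp(const ::benchmark::State&) override {
    // One-time setup, executed outside the timed loop:
    // add collision objects, set active collision objects,
    // and set collision margin data here, e.g.
    //   manager_.addCollisionObject(...);
    //   manager_.setActiveCollisionObjects(...);
  }
  // DiscreteContactManager manager_;  // placeholder member
};

BENCHMARK_F(ContactTestFixture, ContactTestOnly)(benchmark::State& state)
{
  for (auto _ : state) {
    // Only the per-query work is timed:
    //   manager_.setCollisionObjectsTransform(...);
    //   manager_.contactTest(...);
  }
}

BENCHMARK_MAIN();
```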
-
I thought that we were doing this somewhere, but I don't see it. I'm not sure if this is recommended, but we could use manual timing of only the parts we want.
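For reference, a minimal sketch of Google Benchmark's manual-timing mechanism (`UseManualTime()` plus `state.SetIterationTime()`); the timed operation here is just a stand-in:

```cpp
#include <benchmark/benchmark.h>
#include <chrono>

static void BM_manual(benchmark::State& state) {
  for (auto _ : state) {
    // Per-iteration setup placed here is excluded from the
    // manually reported time.
    auto start = std::chrono::high_resolution_clock::now();
    // ... the operation to measure (e.g. a contact test) ...
    auto end = std::chrono::high_resolution_clock::now();
    state.SetIterationTime(std::chrono::duration<double>(end - start).count());
  }
}
// UseManualTime() makes the library report the times passed to
// SetIterationTime() instead of its own wall-clock measurement.
BENCHMARK(BM_manual)->UseManualTime()->Unit(benchmark::kMillisecond);
BENCHMARK_MAIN();
```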
-
Are you sure the benchmark measures setup time? As far as I can see, this does the setup outside of the for loop, so that time shouldn't be measured. The only things measured here are the contact clear (should be negligible) and the contact test. Example:

```cpp
#include <benchmark/benchmark.h>
#include <chrono>
#include <thread>

static void BM_sleep(benchmark::State& state) {
  // Setup before the state loop runs once but is not timed.
  std::this_thread::sleep_for(std::chrono::seconds(1));
  for (auto _ : state) {
    // Only the body of this loop is measured.
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
  }
}
// Register the function as a benchmark
BENCHMARK(BM_sleep)->Unit(benchmark::kMillisecond);
// Run the benchmark
BENCHMARK_MAIN();
```

Yields: [benchmark output not preserved in this export; the reported per-iteration time reflects only the ~10 ms sleep inside the loop, not the 1 s setup.]
-
You are correct. Looking at the README here, everything before the for loop is not timed.
-
The only difference between the tests shown above and the ones I presented is that mine involved 50 random shapes, as opposed to pair-to-pair checks. Also, mine included all convex hulls and no primitives. How did the benchmark results compare for convex hull versus convex hull, and for the tests that involved several shapes?
-
You mean the "LARGE_DATASET_CONVEX_MULTILINK" tests? FCL is very slow on these; I can try to pull out the numbers.
-
Yea, that should be a good comparison.
-
In those instances, you can't even see the Bullet result in the comparison, as FCL is several orders of magnitude (up to 4) slower. Those results are so bad that I suspect something else is afoot, but it definitely matches my experience of running the FCL benchmarks for hours without them even finishing.
-
Huh, well, I will need to look through the example code I used for comparing collision checking with MoveIt and see why FCL performed better there.
-
Hello,
As a follow-up to the ROS-Industrial developer meeting on March 9th, I was excited to try out FCL. However, in my early tests it seemed to perform rather worse than Bullet (i.e. every check seemed to take longer, sometimes by an order of magnitude). Thanks to my colleague @deh0512, we fixed the benchmarks in Tesseract itself and tried to run them, but again the results were a bit disappointing.
I don't have the original slides (they don't seem to be online), but I quite clearly recall FCL with BVH being much faster than Bullet at discrete collision checking.
Is it expected that FCL would be that much slower on every benchmark? If so, are there any plans to bring the recent improvements into master?
Thanks!