Parallelising/speeding up flexcomp/finite element simulations of large tetrahedral meshes? #2601
### Intro
Hi, I use MuJoCo for finite-element simulations of soft bodies via its `flexcomp` functionality.

### My setup
I am using the Python bindings and MuJoCo version 3.3.0.

### My question
I am interested in simulating the mesh seen below as a `flexcomp`.

I have performed some basic tests/simulations of tetrahedral meshes with 91 elements (51 nodes), 595 elements (225 nodes), 3770 elements (927 nodes), and 37,815 elements (12,817 nodes) (my target mesh above). Each simulation ran for 3 seconds of simulated time with a timestep of 0.00001, and simply involved dropping a soft body. These were my results:

- 91 elements: wall-time = 12.1 s; avg

When I tried simulating the 37,815-element tet-mesh, I had to decrease my timestep to 0.000001 in order to not diverge. As expected, I couldn't even simulate this mesh for 3 seconds within a wall-time of 30 minutes (I gave up after 30 minutes).

Here is my question: is MuJoCo planning on parallelising, or otherwise speeding up, finite-element simulations of large tetrahedral meshes?

NOTE: I should emphasise that I cannot use the fast

### Minimal model and/or code that explain my question
I've included my 4 tet-meshes in the following zip folder, as well as a relevant XML file, for anyone interested: test_large_meshes.zip

### Confirmations
Replies: 1 comment
Hi, it's likely that they will be added at some point for MJWarp but not MJX. However, MJWarp will be integrated into MJX, so you will be able to use the MJX API to run parallelized flexcomp via the MJWarp backend.