Replies: 2 comments 5 replies
-
Thanks both for tracking this and informing us about it! These are some nice improvements indeed. It's also cool that the gap shrinks for larger simulations, where you generally need performance the most. If I may, I would challenge you to add a Game of Life comparison; I think we have a good chance of being the absolute fastest ;)
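For reference, the core update such a benchmark would time is just the classic rule. A minimal NumPy sketch of that rule (purely illustrative, not taken from the ABMFrameworksComparison repo and not a Mesa model) could look like this:

```python
# Illustrative only: the synchronous Game of Life update a benchmark would time.
import numpy as np


def life_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous Game of Life update on a 2D array of 0/1 cells."""
    # Count the eight neighbours of every cell with wraparound (toroidal grid).
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(grid.dtype)


rng = np.random.default_rng(42)
grid = rng.integers(0, 2, size=(100, 100), dtype=np.uint8)
for _ in range(100):
    grid = life_step(grid)
```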
-
Thanks a lot for this update. It's great to see the improvements relative to 2.14 (thanks @EwoutH). It also highlights some places where further improvements would be nice.

I had a quick look at the Python code, and I think there are a few places where minor changes would result in small improvements (e.g., …). Another thing I am curious about is dropping the weakref machinery in AgentSet to see how big its overhead is. When it was introduced, it gave about a 20% overhead on our benchmarks; with all the improvements made elsewhere, the relative overhead might be larger now. To be clear: I am not arguing at all for removing the weakrefs. There are good memory-management reasons for using them. However, it is also worthwhile to track their performance impact.
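As a rough way to get a feel for that cost, one could time iteration over plain strong references versus iteration that dereferences weak references. The sketch below is hypothetical and deliberately simplified; it is not Mesa's actual AgentSet, and the agent class and collection sizes are placeholders:

```python
# Minimal sketch (not Mesa's AgentSet): rough cost of dereferencing weakrefs
# during iteration, compared with iterating a plain list of strong references.
import timeit
import weakref


class Agent:
    """Stand-in agent with a single attribute to touch during iteration."""

    def __init__(self, unique_id):
        self.unique_id = unique_id


agents = [Agent(i) for i in range(100_000)]

# Strong references: a plain list of agents.
strong_refs = list(agents)

# Weak references: weakref.ref objects that must be dereferenced on access.
weak_refs = [weakref.ref(a) for a in agents]


def iterate_strong():
    return sum(a.unique_id for a in strong_refs)


def iterate_weak():
    # ref() returns the agent, or None if it has been garbage collected.
    return sum(a.unique_id for ref in weak_refs if (a := ref()) is not None)


print("strong:", timeit.timeit(iterate_strong, number=100))
print("weak:  ", timeit.timeit(iterate_weak, number=100))
```

This only measures iteration; a fuller comparison would also need to cover the memory behaviour that motivated weakrefs in the first place.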
-
Dear all,
I would like to inform you that I updated the ABM Frameworks Comparison at https://github.com/JuliaDynamics/ABMFrameworksComparison to Mesa 3, and your work on the third version shows: the benchmarks have improved quite a bit! I'm opening this discussion to let you know about this and to let you improve the Mesa versions further, in case you find some performance opportunities I missed during the update.