Enhancing our Native Performance Monitoring #46804
-
Our current Mandrel testing is hooked into GHA, where we run the Quarkus native integration tests. So removing openshift-client and main from the tracking and adding jpa-postgresql and jpa-postgresql-withxml seems easy enough to do. For point (2) I'm not sure how easy it is to set up in GHA and track the various Mandrel for JDK X release trains (all using quarkus main); see the sketch after this comment for one possible shape. Currently we test Mandrel for JDK 21 (internal version 23.1), Mandrel for JDK 24 (internal version 24.2), and Mandrel for JDK 25 (internal version 25). We'll have to take a look. To me it sounds like we should tackle this in stages:
FWIW, the reason we picked the openshift-client extension for testing was that it was notoriously reflection-heavy in the past (which I think is still the case). My $0.02 :)
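For illustration, here is roughly what a GHA matrix over those release trains could look like. This is a sketch, not our actual CI config: the workflow name, schedule, and overall layout are made up, and the module paths are assumed from the names mentioned in this thread.

```yaml
# Hypothetical nightly workflow tracking several Mandrel release trains
# against quarkus main. Not the real Mandrel CI config.
name: mandrel-native-perf

on:
  schedule:
    - cron: '0 2 * * *'  # nightly against quarkus main

jobs:
  native-it:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # one entry per Mandrel for JDK X release train we track
        java-version: ['21', '24', '25']
    steps:
      - uses: actions/checkout@v4
        with:
          repository: quarkusio/quarkus
          ref: main
      - uses: graalvm/setup-graalvm@v1
        with:
          distribution: 'mandrel'
          java-version: ${{ matrix.java-version }}
      - name: Build and verify integration tests in native mode
        # module paths assumed from the names mentioned in this thread
        run: >
          ./mvnw -B verify -Dnative
          -pl integration-tests/jpa-postgresql,integration-tests/jpa-postgresql-withxml
```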
-
Please note that the "performance tests" we run for these tests/benchmarks cover build-time metrics only (a sketch of how these could be captured follows after this comment), i.e.:
For run-time performance monitoring we use:
from https://github.com/quarkusio/quarkus-quickstarts and monitor the following metrics:
while we collect data for
but not
@franz1981 can you please confirm?
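On the build-time side, one option for capturing these in the same GHA jobs is native-image's machine-readable build report via `-H:BuildOutputJSONFile`, which exists in recent GraalVM/Mandrel releases. A minimal sketch of extra steps for the job above; the JSON field names are assumptions and should be checked against the build-output schema of the Mandrel version in use:

```yaml
# Sketch of extra steps for the matrix job above. Newer releases may also
# need -H:+UnlockExperimentalVMOptions for the -H:BuildOutputJSONFile flag.
- name: Build with machine-readable build output
  run: >
    ./mvnw -B verify -Dnative -pl integration-tests/jpa-postgresql
    -Dquarkus.native.additional-build-args=-H:BuildOutputJSONFile=/tmp/build-output.json
- name: Extract build-time metrics
  # field names are assumptions; verify against the build-output schema
  run: |
    jq '{image_bytes: .image_details.total_bytes,
         peak_rss_bytes: .resource_usage.memory.peak_rss_bytes}' \
      /tmp/build-output.json | tee build-metrics.json
- uses: actions/upload-artifact@v4
  with:
    name: build-metrics-${{ matrix.java-version }}
    path: build-metrics.json
```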
I find
+1
+1, as @jerboaa says, the switch to (or just the addition of) these tests should be trivial. I will take a look at the build-time metrics.
+1
I agree with @jerboaa's steps and will start working on step 1 (but only for the build-time metrics; the run-time part requires more effort at this time).
-
What would be the next natural step to get this in motion?
-
Hello,
Last week, @jerboaa and some team members (including myself) discussed native testing and how we can improve it. I’d like to continue that discussion and define the set of “tests” (repositories, test cases, etc.) we use to continuously monitor our performance in native mode.
For each test, the measurements should include:
These measurements aim to catch regressions in Quarkus and anticipate changes in GraalVM/Mandrel.
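To make the run-time side a bit more concrete, here is a rough, Linux-only sketch of one such measurement step. The binary path and HTTP endpoint are assumed from a typical quickstart layout, and time-to-first-response is only a crude proxy for startup time:

```yaml
# Hypothetical step: start the native executable, time the first successful
# HTTP response, and read peak RSS from /proc (Linux-only).
- name: Measure startup and peak RSS
  run: |
    APP=./getting-started/target/getting-started-1.0.0-SNAPSHOT-runner  # assumed path
    START=$(date +%s%N)
    $APP &
    APP_PID=$!
    # poll until the app answers; endpoint assumed from the quickstart
    until curl -sf http://localhost:8080/hello > /dev/null; do sleep 0.01; done
    END=$(date +%s%N)
    echo "time-to-first-response: $(( (END - START) / 1000000 )) ms"
    grep VmHWM /proc/$APP_PID/status  # VmHWM = peak resident set size
    kill $APP_PID
```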
Currently, we are running performance tests for:
I believe this set needs to be updated:
Point (4) might be too complex to start with, but including (2) — a typical CRUD application — would be very useful.
WDYT? How would you like to go forward?
/CC: @jerboaa @galderz @zakkak @franz1981 @radcortez @Sanne