Memory consumption of runners in Kubernetes mode #3374
DenisPalnitsky started this conversation in General
Replies: 1 comment
- Is this still relevant? I am seeing similar resource usage in our system as well.
-
I'm load testing Kubernetes mode with 240-360 jobs running simultaneously, with a target of supporting ~1000 jobs.
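For context, a minimal load-generation sketch: it fires parallel workflow_dispatch events against a test workflow so the scale set spins up many jobs at once. OWNER, REPO, the workflow file name, and the job count are placeholders for illustration, not details from this test.

```go
// Hypothetical load generator: dispatches N workflow runs in parallel via
// the GitHub REST API. OWNER, REPO, load-test.yml, and GITHUB_TOKEN are
// placeholders; substitute your own repository and credentials.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
	"sync"
)

func main() {
	const jobs = 300 // mid-range of the 240-360 concurrent jobs tested
	url := "https://api.github.com/repos/OWNER/REPO/actions/workflows/load-test.yml/dispatches"
	token := os.Getenv("GITHUB_TOKEN")

	var wg sync.WaitGroup
	for i := 0; i < jobs; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			req, _ := http.NewRequest("POST", url, bytes.NewBufferString(`{"ref":"main"}`))
			req.Header.Set("Authorization", "Bearer "+token)
			req.Header.Set("Accept", "application/vnd.github+json")
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				fmt.Println("dispatch error:", err)
				return
			}
			resp.Body.Close()
			// The dispatches endpoint returns 204 No Content on success.
			if resp.StatusCode != http.StatusNoContent {
				fmt.Println("dispatch failed:", resp.Status)
			}
		}()
	}
	wg.Wait()
}
```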

Currently, two pods are created for each job: "runner" and "runner-workflow". The workflow pod does all the actual work, while the runner pod is only responsible for managing it. I was surprised to learn that the runner pod's memory usage can reach up to 1 GB, which seems a bit excessive considering its role (see the measurement sketch below).
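To put numbers on this, here is a minimal sketch that reads per-pod memory from the Kubernetes metrics API. It assumes in-cluster config; the namespace ("arc-runners") and the label selector are guesses and will likely differ in a real ARC installation.

```go
// Minimal sketch: print memory usage of runner pods via metrics.k8s.io.
// The namespace and label selector below are assumptions; adjust them to
// match the labels your runner scale set actually applies.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
	metrics "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := metrics.NewForConfigOrDie(cfg)

	pods, err := client.MetricsV1beta1().
		PodMetricses("arc-runners").
		List(context.Background(), metav1.ListOptions{LabelSelector: "app=runner"})
	if err != nil {
		panic(err)
	}

	var total int64
	for _, p := range pods.Items {
		var podBytes int64
		for _, c := range p.Containers {
			podBytes += c.Usage.Memory().Value() // bytes
		}
		total += podBytes
		fmt.Printf("%s: %d MiB\n", p.Name, podBytes/(1024*1024))
	}
	fmt.Printf("total across %d pods: %d MiB\n", len(pods.Items), total/(1024*1024))
}
```

Running this periodically during the test makes it easy to see whether the runner pods or the workflow pods dominate overall memory.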
I can only speculate that this is due to the Node.js runtime used to run the container hooks, so I wonder whether rewriting the hooks in Go would make them more efficient.