I'm running multiple Webots simulations in parallel inside Kubernetes pods, each in a headless X session with Fluxbox as the window manager, and I launch Webots with --batch --mode=fast. All pods share the same NVIDIA GPU(s) through the Kubernetes NVIDIA device plugin. Despite this, GPU memory usage keeps climbing over time and is never released, even after the simulations reset, after calling gc.collect(), or after clearing the CUDA cache at the end of every learning run. A sketch of that cleanup is shown below.
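For context, the per-run cleanup looks roughly like this (a minimal sketch; it assumes the learning code uses PyTorch, which is where the CUDA cache call comes from, and the function name is just for illustration):

```python
import gc

import torch


def cleanup_after_learning():
    """Try to release GPU memory between learning runs."""
    # Drop lingering Python references so CUDA tensors become collectable.
    gc.collect()
    # Return PyTorch's cached blocks to the driver. Note this only frees
    # memory held by PyTorch's caching allocator in this process, not
    # memory allocated by other processes such as the Webots renderer.
    torch.cuda.empty_cache()
```

Even with this called after every run, nvidia-smi still shows the pods' GPU memory footprint growing, which is what makes me suspect the leak is outside the training process itself.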