Memory Leak in Prowler AWS Provider #7454
Labels
bug
severity/low
Bug won't result in any noticeable breakdown of the execution.
status/waiting-for-revision
Waiting for maintainer's revision
Steps to Reproduce
I am running Prowler as a CronJob on Kubernetes and recently noticed the job was not running to completion. We have had Prowler running as a CronJob for approximately 9 months without issues.
The pod memory limit was set to 2 GB; with that limit, the pod would crash partway through the run. After I increased the limit to 6 GB, jobs complete successfully.
After investigating, I believe there is a memory leak in the Prowler AWS scanner that causes the job to be OOM-killed. The Kubernetes memory profile shows memory utilization slowly climbing before peaking at around 5.5 GB.
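For reference, the relevant part of the CronJob spec looks roughly like this (a hedged sketch, not our actual manifest; the name, schedule, and image are placeholders):

```yaml
# Hypothetical excerpt of the CronJob manifest running Prowler.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: prowler-scan
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: prowler
              image: python:3.12-slim
              resources:
                requests:
                  memory: "1Gi"
                limits:
                  memory: "6Gi"   # raised from 2Gi; at 2Gi the pod was OOM-killed mid-scan
          restartPolicy: Never
```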
I am running Prowler 4.6.2.
prowler aws --region us-east-1 us-west-1 --output-directory dirname --output-filename 2025-04-05T03:00:36.183444 --ignore-exit-code-3 --only-logs --log-level INFO --mutelist-file mutelist.yaml --role arn:aws:iam::12345678912:role/prowler-role --output-bucket-no-assume bucket-name
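In case it helps triage, here is a minimal sketch (generic Python, not Prowler code) of the pattern I would use to confirm a slow in-process leak with the standard library's `tracemalloc`, assuming the scan loop can be wrapped:

```python
import tracemalloc

def top_growth(before, after, limit=5):
    """Return the top allocation-growth entries between two tracemalloc snapshots."""
    return after.compare_to(before, "lineno")[:limit]

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Stand-in for repeated check execution that accidentally retains results;
# in the real case this would be one pass of the AWS scanner.
retained = []
for _ in range(1000):
    retained.append(bytearray(4096))  # ~4 MB that is never released

after = tracemalloc.take_snapshot()
for stat in top_growth(before, after):
    print(stat)
```

Comparing snapshots taken between scan passes should point at the file/line where retained allocations accumulate.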
Expected behavior
I don't believe it should take around 6 GB of memory to run Prowler against a single AWS account.
Actual Result with Screenshots or Logs
How did you install Prowler?
From pip package (pip install prowler)
Environment Resource
EKS
OS used
Debian 12
python:3.12-slim docker image
Prowler version
Prowler 4.6.2 (latest is 5.4.3, upgrade for the latest features)
Pip version
pip 24.3.1 from /usr/local/lib/python3.12/site-packages/pip (python 3.12)
Context
No response