Constraints do not block you from upgrading to newer versions of dependencies. Constraints are only used to install Airflow reproducibly. If open-telemetry is not upgraded to a newer version, it is probably because some other dependency holds it back and we cannot produce constraints that are free of conflicts - but nothing stops you from upgrading it yourself.

Please read the documentation here very carefully: https://airflow.apache.org/docs/apache-airflow/stable/installation/installing-from-pypi.html - constraints exist only and exclusively to install Airflow in a reproducible way with all dependencies and without conflicts. We also explain there why we do it and what the "deployment manager's" (your) responsibility is when managing your dependencies.

Also, in case you need to use the Airflow images, our documentation about the Airflow reference image explains that our images are "fixed" at release time and that we extremely rarely (only in very, very rare cases that make it impossible to use the images) update and regenerate constraints: https://airflow.apache.org/docs/docker-stack/index.html

Airflow's REQUIREMENTS (not constraints) say "opentelemetry-api>=1.27.0", which means that Airflow does not limit you from upgrading to a newer version. You can also watch my talk explaining the different ways you can manage your particular dependencies in a way that works for you: https://www.youtube.com/watch?v=zPjIQjjjyHI

I am converting this into a discussion - if more is needed - but it is entirely up to you to attempt to upgrade to a later version of opentelemetry, and the Airflow constraints are not blocking you from that.
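For illustration, here is a minimal sketch of what the comment above describes: install Airflow reproducibly with constraints, then bump the OpenTelemetry packages on top of it. The variable names, the package list, and the `>=1.30.0` floor (where the fix mentioned below landed) are assumptions for the sketch, not an officially supported combination - verify it resolves without conflicts in your own deployment.

```bash
# Sketch only: reproducible Airflow install per installing-from-pypi.html,
# followed by a separate OpenTelemetry upgrade (deployment manager's choice).
AIRFLOW_VERSION=3.0.3
PYTHON_VERSION=3.12
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"

# Reproducible base install using the published constraints.
pip install "apache-airflow==${AIRFLOW_VERSION}" --constraint "${CONSTRAINT_URL}"

# Airflow only requires opentelemetry-api>=1.27.0, so a newer release is allowed.
# The exact package set and version floor below are assumptions, not project pins.
pip install --upgrade \
  "opentelemetry-api>=1.30.0" \
  "opentelemetry-sdk>=1.30.0" \
  "opentelemetry-exporter-otlp>=1.30.0"
```

If you extend the official Docker image instead, the same two pip steps can go into a `RUN` layer of your own Dockerfile.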
Apache Airflow version
3.0.3
If "Other Airflow 2 version" selected, which one?
No response
What happened?
When OpenTelemetry trace is enabled in Airflow 3.0.2, we are experiencing memory leaks in schedulers and triggerers.

We tested this by deploying three schedulers:
- one with metrics enabled (AIRFLOW__METRICS__OTEL_ON=true, etc.)
- one with traces enabled (AIRFLOW__TRACES__OTEL_ON=true, etc.)
- one with neither metrics nor traces

These schedulers have identical DAG files and run in the same k8s namespace.
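For context, a sketch of how such a comparison could be wired up via environment variables. Only the `AIRFLOW__METRICS__OTEL_ON` and `AIRFLOW__TRACES__OTEL_ON` flags come from the report above; the collector host/port values stand in for the "etc." and are assumptions.

```bash
# Sketch of the per-deployment environment; each block belongs to a separate
# scheduler deployment, not one shell.

# scheduler with metrics only
export AIRFLOW__METRICS__OTEL_ON=true
export AIRFLOW__METRICS__OTEL_HOST=otel-collector   # assumed placeholder
export AIRFLOW__METRICS__OTEL_PORT=4318             # assumed placeholder

# scheduler-2 with traces only (the one showing the leak)
export AIRFLOW__TRACES__OTEL_ON=true
export AIRFLOW__TRACES__OTEL_HOST=otel-collector    # assumed placeholder
export AIRFLOW__TRACES__OTEL_PORT=4318              # assumed placeholder

# control scheduler: neither flag set (both default to false)
```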
And we found that scheduler-2 (the one with the otel-traces setup) shows the typical memory leak trend in the Grafana dashboard (I can't post the screenshot here, sorry :( ).

After some digging, we found that the issue is already reported and fixed in the opentelemetry-python repo:
- Unable to release memory: open-telemetry/opentelemetry-python#4220 (otel 1.27.0)
- Fix memory leak in exporter - BatchLogRecordProcessor objects are not able to be garbage collected: open-telemetry/opentelemetry-python#4422 (otel 1.30.0)

The latest release is opentelemetry-api==1.35.0: https://github.com/open-telemetry/opentelemetry-python/releases/tag/v1.35.0

However, the Airflow constraints still pin 1.27.0:
- https://raw.githubusercontent.com/apache/airflow/constraints-3.0.2/constraints-3.12.txt (we are using Airflow 3.0.2)
- https://raw.githubusercontent.com/apache/airflow/constraints-3.0.3/constraints-3.12.txt
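A quick way to compare what is actually installed in an image against what the constraints file pins - a sketch, assuming pip and curl are available in the container:

```bash
# Installed OpenTelemetry packages in the running image
pip freeze | grep -i '^opentelemetry'

# Versions pinned by the Airflow 3.0.3 / Python 3.12 constraints file
curl -s https://raw.githubusercontent.com/apache/airflow/constraints-3.0.3/constraints-3.12.txt \
  | grep -i '^opentelemetry'
```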
Could you please consider upgrading to the latest version of the OpenTelemetry packages in a future Airflow version to prevent the memory leak.
What you think should happen instead?
No response
How to reproduce
Airflow OTEL trace setup
https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/logging-monitoring/traces.html
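A sketch for observing the leak without Grafana, assuming traces are enabled as in the linked docs: start a scheduler and poll its resident memory. The `pgrep` pattern is an assumption about how the scheduler process is named in your deployment.

```bash
# Enable OTel traces (flag taken from the report above) and start a scheduler.
export AIRFLOW__TRACES__OTEL_ON=true
airflow scheduler &

# Poll the scheduler's RSS (in KB) once a minute; a steady upward trend
# with no plateau indicates the leak described in this report.
while true; do
  ps -o rss= -p "$(pgrep -f 'airflow scheduler' | head -n1)"
  sleep 60
done
```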
Operating System
debian 12
Versions of Apache Airflow Providers
No response
Deployment
Other Docker-based deployment
Deployment details
No response
Anything else?
No response
Are you willing to submit PR?
Code of Conduct