Replies: 2 comments 2 replies
-
Thanks for opening your first issue here! Be sure to follow the issue template! If you are willing to raise a PR to address this issue, please do so; there is no need to wait for approval.
-
That's an interesting discussion, but not an Airflow issue (at least not yet). There are plenty of reasons why k8s might kill your pods, for example because you lack resources. The only way to investigate it is in your k8s logs (including being able to see liveness probe results and the k8s events showing why pods are killed). There are a lot of tutorials out there that describe how to get those logs depending on your k8s choice, and as a deployment manager you need to learn how to get them in your k8s. Once you know the reason, by looking at your k8s (not Airflow!) logs, you can post your findings here and then the discussion can continue.
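For anyone landing here, a minimal sketch of that kind of k8s-side investigation, assuming the chart is installed in a namespace called `airflow` and the api-server pods carry a `component=api-server` label (both names are assumptions; adjust to your deployment):

```sh
# Why was the container restarted? Look at "Last State", "Exit Code",
# and the Events section (liveness probe failures, OOMKilled, evictions).
kubectl -n airflow describe pod -l component=api-server

# Recent events in chronological order; probe failures and kill
# decisions show up here, not in the Airflow logs.
kubectl -n airflow get events --sort-by=.metadata.creationTimestamp

# Logs from the previous (crashed) container instance of a given pod.
kubectl -n airflow logs <api-server-pod-name> --previous
```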
-
Official Helm Chart version
1.17.0 (latest released)
Apache Airflow version
3.0.2
Kubernetes Version
1.32.3
Helm Chart configuration
Docker Image customizations
What happened
When deployed, the api-server terminates with exit code 0. The logs do not provide any error message, as seen below. I suspect the api-server gets killed because the liveness probe seems to be unresponsive, leading to a CrashLoopBackOff.
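One way to verify the liveness-probe hypothesis is to check what Kubernetes recorded as the last termination state, for example (pod name is a placeholder):

```sh
# Reason and exit code of the last terminated container instance.
# A kill by kubelet (probe failure) or OOM typically shows a non-zero
# exit code; a clean exit code 0 points elsewhere.
kubectl -n airflow get pod <api-server-pod-name> \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'

# Hit the health endpoint from inside the pod to see whether the
# server answers at all (port and path are assumptions based on the
# default api-server setup; adjust to your chart values).
kubectl -n airflow exec <api-server-pod-name> -- \
  curl -s http://localhost:8080/api/v2/monitor/health
```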
What you think should happen instead
The api-server should not be killed.
How to reproduce
Use the provided helm chart
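For reference, a minimal install along these lines (release name and namespace are placeholders). If probe timing turns out to be the problem, the probe settings can be relaxed the same way, assuming the chart exposes `apiServer.livenessProbe.*` keys as it does for other components:

```sh
# Plain install of the chart version from this report.
helm repo add apache-airflow https://airflow.apache.org
helm install airflow apache-airflow/airflow \
  --version 1.17.0 --namespace airflow --create-namespace

# Hypothetical workaround sketch: give the api-server more headroom
# before the liveness probe starts killing it (key names are an
# assumption; check `helm show values` for your chart version).
helm upgrade airflow apache-airflow/airflow \
  --version 1.17.0 --namespace airflow \
  --set apiServer.livenessProbe.initialDelaySeconds=60 \
  --set apiServer.livenessProbe.timeoutSeconds=30 \
  --set apiServer.livenessProbe.failureThreshold=10
```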
Anything else
No response
Are you willing to submit PR?
Code of Conduct