The load on each worker is significantly different with the multi-process workers feature #3910
ankit21491 started this conversation in General
Describe the bug
The "multi process workers" feature is not working as expected. I have defined 6 workers in the system directive of the Fluentd config. However, when I check Fluentd's performance in New Relic (and also from the cluster itself), the fluentd_output_status_buffer_available_space_ratio metric of each worker is significantly different. For example, worker0 is at 98% while worker1 is at 0%.
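For context, multi-process workers are enabled via Fluentd's `<system>` directive in fluent.conf. A minimal sketch of the kind of setup described above (6 workers; the input plugin and port here are placeholders, not the actual config) might look like:

```
# fluent.conf -- sketch of a 6-worker setup (plugin/port are placeholders)
<system>
  workers 6
</system>

# Without <worker> directives, every worker runs every <source>/<match>,
# which can result in uneven load and buffer usage across workers.
<source>
  @type forward
  port 24224
</source>
```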
I have applied the resolution mentioned in #3346, but it didn't help.
I am sending logs to New Relic using Fluentd. For most of the servers/clusters it works fine, but a few of them show lag starting from 2 hours and going beyond 48 hours.
Surprisingly, the logs for one of the namespaces in my K8s cluster stream live into New Relic, while for another namespace I am facing this issue. I have tried using the directive as well as the solution provided above; that reduced the latency from hours to roughly 10-15 minutes, but I am still not getting the logs without lag.
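Assuming the directive referred to above is Fluentd's `<worker>` directive (the directive name appears to have been stripped from the post), pinning a source to a specific worker looks roughly like this (the path and tag below are hypothetical placeholders):

```
# Run this <source> on worker 0 only; the other workers will not start it.
<worker 0>
  <source>
    @type tail
    path /var/log/containers/example-namespace-*.log   # placeholder path
    tag example.logs                                   # placeholder tag
  </source>
</worker>
```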
Any troubleshooting step would be appreciated.