fluentd -> kafka timeouts "Connection has been unused for too long" #7774
Replies: 2 comments
-
Some questions:
Any chance you could turn up the fluentd log level? I can see there are plenty of debug logs in ruby-kafka (assuming that's what you're running).
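For reference, one way to raise the log level is in the fluentd config file's `<system>` section (you can also pass `-v`/`-vv` on the command line); this is a minimal fragment, not your full config:

```
<system>
  log_level debug
</system>
```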
-
Just an FYI: if the plugin uses the ruby-kafka gem (which has this exact phrase), the warnings aren't anything to worry about on their own; it's just a warning log that a new connection is needed. The idle setting is hard-coded to 30 seconds, so any time a connection has gone unused for 30s, the message will appear the next time it is used. It doesn't necessarily mean anything changed in the code; maybe your average usage changed. Your log does start with a 60-second gap in usage. Also, the gem didn't always log that it was reconnecting; it used to do it silently until the warning was added.
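To make the behavior concrete, here is a minimal sketch of that kind of idle-connection check. This is not the actual ruby-kafka source; the class, method names, and `IDLE_TIMEOUT` constant are illustrative assumptions, only the 30-second threshold and the warning text come from the thread:

```ruby
# Illustrative sketch (NOT ruby-kafka source): a connection that logs a
# warning and reconnects whenever it has sat unused past a hard-coded
# idle threshold, mirroring the behavior described above.
IDLE_TIMEOUT = 30 # seconds; hard-coded, not user-configurable

class IdleAwareConnection
  def initialize(logger)
    @logger = logger
    @last_used = Time.now
  end

  def write(data)
    reconnect if idle_too_long?
    @last_used = Time.now
    # ... send data over the socket ...
  end

  private

  def idle_too_long?
    Time.now - @last_used > IDLE_TIMEOUT
  end

  def reconnect
    # The warning fires only here, right before the connection is reused;
    # traffic keeps flowing afterwards, which matches the observed pattern
    # of a warning followed by a successful send.
    @logger.warn("Connection has been unused for too long, re-connecting...")
    # ... close and reopen the socket ...
  end
end
```

So a 60-second gap in traffic is enough to produce the warning on the next write, with no configuration change anywhere.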
-
I am getting random timeouts from fluentd pods to a Kafka cluster provisioned by the Strimzi operator 0.31. Nothing changed before the timeouts surfaced.
The pattern is always the same: a timeout followed by a successful connection. Most of the failing source IPs are on the same subnet as Kafka.
I'd appreciate help figuring out the source of the issue, and whether any Kafka config could be tuned to alleviate it on the source side.
This is what I see on the fluentd clients: