FluentD continues reading and advancing .pos file even when buffer is full and Elasticsearch is down #5024
davidbelhamou started this conversation in General
Replies: 1 comment
-
Can you let us know which fluent-plugin-elasticsearch version you used?
-
I'm using FluentD with the in_tail plugin and the elasticsearch_data_stream output. My setup writes logs from many pods to a central NFS. FluentD tails the log files from that shared NFS and forwards them to Elasticsearch.
The problem: when Elasticsearch (ECK) is down and FluentD's buffer fills up (I use @type file, overflow_action block, and retry_forever true), I expect FluentD to stop reading logs and freeze the .pos file. Instead, FluentD continues reading and advancing the .pos offsets even though nothing is being flushed. This causes permanent data loss if FluentD is restarted or crashes. A sketch of the relevant configuration is below.
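For reference, a minimal sketch of the kind of configuration I mean (paths, tags, hostnames, and the data stream name are placeholders, not my exact config):

```
<source>
  @type tail
  path /mnt/nfs/logs/*.log            # shared NFS mount (placeholder path)
  pos_file /fluentd/pos/nfs-logs.pos  # placeholder pos file location
  tag app.logs
  read_from_head true
  <parse>
    @type none
  </parse>
</source>

<match app.logs>
  @type elasticsearch_data_stream
  data_stream_name logs-app-default   # placeholder data stream name
  host elasticsearch-es-http          # placeholder ECK service name
  port 9200
  <buffer>
    @type file
    path /fluentd/buffer              # placeholder buffer path
    overflow_action block
    retry_forever true
  </buffer>
</match>
```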
Questions:
How can I make FluentD stop reading (and stop advancing the .pos file) when the buffer is full, so it doesn't read and then discard logs that were never flushed?
How do other people handle log file cleanup in a central NFS scenario?
My current solution: I run a cronjob that deletes log files after FluentD finishes reading them, based on the .pos state (a rough sketch of that logic is below).
But since the .pos file is updated before the flush, this can delete logs before they're successfully delivered.
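A hedged sketch of that cleanup idea (not my exact cronjob). It assumes the common in_tail pos-file layout of tab-separated "path, hex offset, hex inode" lines; the path constant is hypothetical, and as noted above this approach is unsafe because the offset can be ahead of what was actually flushed:

```python
#!/usr/bin/env python3
# Sketch: delete log files that Fluentd appears to have read to the end,
# judged only from the in_tail .pos file. Adjust if your pos format differs.
import os

POS_FILE = "/fluentd/pos/nfs-logs.pos"  # hypothetical pos file path

def fully_read_files(pos_path):
    """Yield paths whose recorded offset has reached the current file size."""
    with open(pos_path) as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) != 3:
                continue
            path, hex_offset, _hex_inode = parts
            offset = int(hex_offset, 16)
            # An all-ones offset marks an unwatched entry; skip it.
            if offset == 0xFFFFFFFFFFFFFFFF:
                continue
            try:
                if offset >= os.path.getsize(path):
                    yield path
            except FileNotFoundError:
                continue

if __name__ == "__main__":
    for path in fully_read_files(POS_FILE):
        # Unsafe exactly as described above: the offset advances before the
        # buffered chunks are flushed, so this can delete undelivered logs.
        os.remove(path)
        print(f"removed {path}")
```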
My environment:
Fluentd version: 1.16.9
Elasticsearch: 8.15