Replies: 8 comments 20 replies
-
Please follow the contributing guidelines.
With this information, we are unable to reproduce this issue and cannot judge whether it is a bug or not. Please report bugs only after you have confirmed some degree of reproducibility.
-
Another issue is that my program uses a file buffer. The file buffer accumulates data, but when I restart the application, the buffer is cleared, so all the accumulated data is lost. Is this normal behavior, or did I misconfigure something?
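A minimal sketch of a persistent file-buffer section, for reference (the output type, path, and limits are illustrative placeholders, not taken from my actual configuration):

<match **>
  @type http
  <buffer>
    @type file
    # placeholder path; chunks are written here and are expected to survive restarts
    path /var/log/fluent/buffer
    chunk_limit_size 8MB
    total_limit_size 512MB
    flush_interval 10s
  </buffer>
</match>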
-
I have several points of confusion at the moment:
-
Please follow the rules: code of conduct.
Fluentd is open-source software. We all need to work together to make it better.
Even if you use a file buffer, memory is consumed in proportion to the size of the data whenever a batch of data that has been read is written to the buffer, or data in the buffer is sent to the output destination. All I can recommend is this: if you want to solve this problem without adding more memory, you have to reduce the amount of data collected or adjust the settings.
It may be possible that parsing with regular expressions is consuming a large amount of memory. Example:

<parse>
  @type none
</parse>

In addition, you can adjust the following settings for lower memory consumption.
When Fluentd restarts, it first tries to send all the remaining buffers.
You should slow down the sending process for lower memory consumption.
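For example, a sketch of buffer settings that throttle flushing to keep memory usage down (the values below are illustrative assumptions, not tuned recommendations):

<buffer>
  @type file
  # use a single flush thread so fewer chunks are loaded into memory at once
  flush_thread_count 1
  # flush less frequently
  flush_interval 60s
  # keep individual chunks small
  chunk_limit_size 4MB
  # cap the total size of the buffer
  total_limit_size 256MB
</buffer>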
Yes. There are two points about memory consumption.
-
What I want to ask about this image is: why does buffer_stage_byte_size show over 100 MiB, while my buffer directory only seems to contain 364K?
-
Now the OOM problem has been largely resolved by increasing the memory to 1 GiB. The only remaining issue is that the threads seem to stop running after a while, as indicated in the title. The following charts show that this Fluentd instance stopped running after a while, with no retries, no error logs, and no updates to the files in the buffer directory.
[chart: no retries; flushing stopped at 03.26 05:22]
[chart: buffer directory stopped updating at 03.26 05:21; the fluentd process is still running]
I don't know much about gdb, but I tried to look at the threads and found some differences compared to normal operation.
[screenshot: threads of the abnormal process; the stack of flush_thread is not visible]
[screenshot: threads of the normal process]
[screenshot: flush_thread of the normal process]
I am still at the scene. Do you think I can provide any more information to help locate the problem? @daipom
-
Possibly this is the same phenomenon as [linked issue].
-
Describe the bug
I am running Fluentd version 1.14.6 with the Prometheus plugin enabled to collect data. However, I noticed that the data suddenly stopped being sent. Upon running 'top -H -p', I discovered that the flush thread appears to have crashed.
Here are my dashboard panels and their expressions:
Fluentd output buffer queue: fluentd_output_status_buffer_queue_length{type="http"}
Fluentd output retry count: fluentd_output_status_retry_count{type="http"}
Current total size of stage and queue buffers: fluentd_output_status_buffer_total_bytes{type="http"}
Output rate: rate(fluentd_output_status_write_count{type="http"}[1m])
The output rate chart shows that the output rate has dropped to zero.
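For reference, the metrics above are exposed by fluent-plugin-prometheus sources along these lines (a rough sketch; the bind address, port, and interval are assumptions, not my exact configuration):

<source>
  @type prometheus
  bind 0.0.0.0
  port 24231
  metrics_path /metrics
</source>

<source>
  # exposes the fluentd_output_status_* metrics for the output plugins
  @type prometheus_output_monitor
  interval 10
</source>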
To Reproduce
Run it based on the Docker image: fluent/fluentd-kubernetes-daemonset:v1.14-debian-kafka2-1
Expected behavior
normal operation
Your Environment
Your Configuration
Your Error Log
Additional context
No response