Hi Team,

We are sending logs from fluent-bit to fluentd via the forward output with an upstream configuration, load balanced across multiple fluentd instances.
We frequently see the errors below in the logs, and a large number of log entries are never delivered to fluentd:
[error] [output:forward:forward.0] no upstream connections available
[error] [output:forward:forward.0] could not write forward entries
We are deploying fluent-bit with the Kubernetes ConfigMap below. We also tried the net.keepalive and net.connect_timeout settings, but they did not solve the issue. If these settings are not valid when an Upstream is used, what options are available for upstream connections?
apiVersion: v1
data:
  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
  fluent-bit.conf: |
    [SERVICE]
        Flush                     5
        Log_Level                 info
        Daemon                    off
        Parsers_File              parsers.conf
        HTTP_Server               On
        HTTP_Listen               0.0.0.0
        HTTP_Port                 2020
        storage.path              /var/fluentbit-s3/state/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.backlog.mem_limit 5M

    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-forward.conf
  forward-balancing: |
    [UPSTREAM]
        name forward-balancing

    [NODE]
        name clparser1
        host 2.2.2.2
        port 24224

    [NODE]
        name clparser2
        host 1.1.1.1
        port 24224
  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        Parser            docker
        DB                /var/log/fs_flb_kube.db
        Mem_Buf_Limit     1024MB
        Buffer_Chunk_Size 1MB
        Buffer_Max_Size   5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
        Rotate_Wait       60
  output-forward.conf: |
    [OUTPUT]
        Name                 forward
        Match                *
        Upstream             forward-balancing
        Require_ack_response True
        Compress             gzip
        Retry_Limit          False
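For reference, this is roughly how we added the net.* settings mentioned above. We placed them in the [OUTPUT] section of output-forward.conf, since that is where the general networking options are documented; the values shown here are illustrative, not our exact ones:

    [OUTPUT]
        Name                 forward
        Match                *
        Upstream             forward-balancing
        Require_ack_response True
        Compress             gzip
        Retry_Limit          False
        # Networking settings we tried (example values, not our production ones):
        net.keepalive        on
        net.connect_timeout  30

Our assumption was that these net.* settings would also apply to the connections opened toward the [NODE] entries of the upstream file, but the errors persisted, so we are not sure they take effect when Upstream is configured.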