Replies: 1 comment
-
What are your buffer settings for the sinks? If a sink is memory buffered (the default) it only has 500 events of storage before it backpressures upstream, so you wouldn't see it in host performance metrics, only in Vector's internal metrics. And if both sinks are fed by the same source, that backpressure propagates upstream and stops data from flowing to either sink. If one sink is less important, I'd recommend setting its buffer-full action to drop_newest, as this will let the other sink keep sending data.
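A minimal sketch of that mitigation, assuming a sink named `quickwit` as in the question below; only the buffer table is shown, the rest of the sink config is unchanged:

```toml
[sinks.quickwit.buffer]
type       = "memory"
max_events = 500             # default size of Vector's in-memory buffer
when_full  = "drop_newest"   # shed events for this sink instead of blocking upstream
```

An alternative, if you'd rather not drop anything, is a disk buffer (`type = "disk"` with a `max_size`) large enough to absorb the outage, at the cost of disk I/O on each node.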
-
Hey all, I had a strange issue over the weekend that caused me to lose logs for a 24hr period:
I have two sinks configured for Vector: axiom and quickwit (using http). My quickwit deployment is on-premise and had an outage that caused Vector to fail sending logs there (Vector was throwing a WARN saying logs could not be sent to the sink).
This wouldn't have been an issue if the logs were still going to axiom, but for some reason the problem with one sink caused logs to stop being sent to my other sink as well. My understanding was that Vector treats sinks independently and errors with one shouldn't impact logs being sent to the other, but once I removed the problematic sink from my Vector config, logs started flowing properly again.
There was no noticeable backpressure on the Vector pods (they run as a DaemonSet in my cluster, and there were no restarts or high memory usage on them during the errors).
Any ideas? I wish I had logs to share for more context but 🙃
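For reference, a hypothetical reconstruction of the setup described above, with one source fanned out to both sinks; sink names, dataset, token, and URI are placeholders, not the actual config:

```toml
[sources.k8s_logs]
type = "kubernetes_logs"

[sinks.axiom]
type    = "axiom"
inputs  = ["k8s_logs"]          # same source feeds both sinks
dataset = "logs"                # placeholder
token   = "${AXIOM_TOKEN}"

[sinks.quickwit]
type           = "http"
inputs         = ["k8s_logs"]   # same source feeds both sinks
uri            = "http://quickwit.internal:7280/api/v1/logs/ingest"  # placeholder
encoding.codec = "json"
```

With both sinks on the default memory buffer (500 events, `when_full = "block"`), a stalled quickwit sink fills its buffer and blocks the shared source, so the axiom sink starves too, which matches the behaviour described in the reply above.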