-
We've got a legacy log processing system that uses Docker's Splunk log driver on containers, with a long chain of Splunk Heavy Forwarders ultimately pushing to an on-prem Splunk instance. Due to the complexity and legacy involved, we're considering switching to Vector, since from prior experience it performs well and is far more flexible.

We have some very strict compliance requirements, in particular that certain logs can't be lost, so we need a way to handle the situation where the final Splunk endpoint is down. Having read up on the memory and disk buffers and the recovery/back-pressure handling, that all looks good, but if Splunk is down for an extended period we need to fall back to something else. With AWS Kinesis Firehose you can configure undeliverable events to be dumped to S3, which is helpful. However, there doesn't seem to be any logic like this in Vector, i.e. "if you can't deliver to X, send events to Y".

The nearest solution I could come up with is two sinks, one for Splunk and one for S3, with a lifecycle rule to purge logs from S3 after X days unless we need to grab them for re-ingestion. Not ideal, and it adds cost.

Is there any kind of solution for a fallback sink in case of prolonged issues? Or would this be a viable feature, i.e. a third option for buffers to send to an alternate sink or target if a given sink is having issues? Would love any input on how we might achieve this.
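For reference, a rough sketch of that two-sink workaround in Vector's TOML config might look like the following. The sink names, endpoint, token variable, bucket, and the `logs` input are all placeholders, and some option names vary between Vector versions, so treat this as an illustration rather than a drop-in config:

```toml
# Primary sink: Splunk HEC, with a disk buffer so events survive
# restarts and short outages. "logs" is an assumed upstream source/transform.
[sinks.splunk]
type = "splunk_hec_logs"
inputs = ["logs"]
endpoint = "https://splunk.example.com:8088"
default_token = "${SPLUNK_HEC_TOKEN}"
encoding.codec = "json"
buffer.type = "disk"
buffer.max_size = 268435488    # bytes; Vector enforces a ~256 MiB minimum
buffer.when_full = "block"     # apply back pressure instead of dropping

# Parallel sink: everything also lands in S3, purged by a bucket
# lifecycle rule after X days unless we need it for re-ingestion.
[sinks.s3_fallback]
type = "aws_s3"
inputs = ["logs"]
bucket = "log-fallback-bucket"
key_prefix = "vector/%F/"
encoding.codec = "json"
compression = "gzip"
```

The downside is exactly as described above: every event is written to S3 whether or not Splunk is healthy, so you pay for storage and PUTs even in the happy path, and re-ingestion from S3 back into Splunk is a separate manual pipeline.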
Replies: 1 comment
-
This is not supported yet. There's a lot of context in the following: