-
Alright, after further debugging and updating this post, I realise this is probably an Istio problem: istio-proxy still holds the old endpoints. I'm not going to delete this post, as it might help somebody else.
-
Okay, this seems to come down to the fact that Hyper in the ClickHouse sink does connection pooling and Envoy in Istio does as well. Is there any way to deactivate connection pooling in the ClickHouse sink and let Istio handle it? See the discussion in istio/istio#54539.
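In case it helps, Envoy's pooling towards ClickHouse can at least be limited from the Istio side with a DestinationRule. A minimal sketch, assuming ClickHouse is exposed as a Service named `clickhouse` in the `clickhouse` namespace; the names, host and timeout values are placeholders, not our exact config:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: clickhouse-pooling            # hypothetical name
  namespace: clickhouse               # hypothetical namespace
spec:
  host: clickhouse.clickhouse.svc.cluster.local   # hypothetical Service FQDN
  trafficPolicy:
    connectionPool:
      http:
        # Envoy closes an upstream connection after a single request,
        # which effectively disables keep-alive reuse by the sidecar.
        maxRequestsPerConnection: 1
        # Drop pooled connections once they have been idle for a while,
        # so connections to Pods from before a rollout don't linger.
        idleTimeout: 30s
```

This doesn't answer whether pooling in the sink itself can be turned off, but it should at least stop the sidecar from reusing connections to Pods that no longer exist.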
-
Not sure if this should be a discussion; something seems to be broken. Apart from istio/istio#54539, there are other issues mentioning this, e.g. VictoriaMetrics/helm-charts#1938.
-
Problem
We are running ClickHouse and Vector in Kubernetes. Vector receives events from Kafka and sinks them to ClickHouse. When updating ClickHouse, and thereby rolling out a new Pod, data flows again without any dropped events, but Vector keeps emitting WARN messages, and this doesn't go away on its own. The 503s look to me as if Vector is still trying to reach the old ClickHouse instances; could that be possible? Failed requests are retried and written successfully. Requests seem to time out after 10 or 30 seconds, as can be seen in the istio-proxy logs below. Restarting Vector helps, but that can't be the right solution.
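For context, the relevant part of the Vector config looks roughly like this (a minimal sketch; the source name, endpoint, table and values are placeholders rather than our exact config):

```yaml
sinks:
  clickhouse:
    type: clickhouse
    inputs:
      - kafka_events               # hypothetical Kafka source name
    endpoint: http://clickhouse.clickhouse.svc.cluster.local:8123   # placeholder Service address
    database: default
    table: events
    request:
      timeout_secs: 30             # in the range of the ~10-30s timeouts seen in the logs
      retry_attempts: 5            # failed requests are retried and eventually succeed
```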
Version
0.42.0-distroless-libc
Debug Output
Debug logs of one request that fails after rolling the ClickHouse Pods:
Outgoing istio-proxy logs from the Vector pod:
This doesn't seem to be a DNS issue; the IPs are correct:
Example Data
No response
Additional Context
No response
References
No response