-
I guess you should explain how you actually exposed the internal load balancers and how you configured the Connect cluster. For the Connect cluster, the full log and the full custom resources would definitely be useful.
-
Here are the templates for both Kafka (in GCP) and KafkaConnect (in AWS).
Kafka:
KafkaConnect (in AWS):
However, it is worth mentioning that a similar KafkaConnect in GCP works fine with a similar config. This is because the GCP KafkaConnect can resolve the internal DNS names of the GKE nodes. Logs from KafkaConnect (AWS):
-
Bug Description
I have a `kafka` cluster (version 3.3.2) and a `KafkaConnect` deployed via the Strimzi operator (v0.33), and they work fine within the same cloud provider (GCP). However, I want to access the cluster from the AWS side, so I exposed it externally with `N (no. of brokers) + 1` internal load balancers as below. All the broker DNS endpoints are reachable from the AWS side on port `9094`. However, when I deploy a `KafkaConnect` in AWS with that external `bootstrapServers` endpoint, Kafka Connect tries to connect to the internal DNS name of the GKE node where the broker is located, instead of the broker DNS endpoint (e.g. `kafka-broker-1.mydomain.com.`) configured in the above configuration, so I get the following error.
Steps to reproduce
Expected behavior
The bootstrap endpoint configured in the Kafka object should return the DNS names configured for the brokers, instead of the hostnames of the nodes where the broker pods are located.
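For reference, the usual way to make Strimzi brokers advertise stable DNS names rather than node hostnames is the per-broker `advertisedHost` override on the external listener. A minimal sketch of a Kafka CR excerpt, assuming two brokers; the listener name, the `mydomain.com` names, and the GKE annotation are illustrative placeholders, not the reporter's actual template:

```yaml
# Excerpt of spec.kafka in a Strimzi Kafka custom resource:
# an internal-load-balancer listener whose brokers advertise
# fixed DNS names instead of the GKE node hostnames.
listeners:
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    configuration:
      bootstrap:
        annotations:
          networking.gke.io/load-balancer-type: "Internal"  # GCP internal LB
      brokers:
        - broker: 0
          advertisedHost: kafka-broker-0.mydomain.com  # hypothetical DNS name
        - broker: 1
          advertisedHost: kafka-broker-1.mydomain.com
```

With `advertisedHost` set, the metadata returned after the bootstrap connection points clients (including the AWS-side Kafka Connect) at the load-balancer DNS names, which is the behavior the report expects.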
Strimzi version
0.33.0
Kubernetes version
GKE, EKS
Installation method
Helm chart
Infrastructure
GCP, AWS
Configuration files and logs
No response
Additional context
No response