Deploy Multiple Kafka Clusters on One Kubernetes Cluster #8602
Replies: 2 comments 3 replies
-
You can have multiple Kafka clusters on the same Kube cluster (we always recommend using a different namespace for each Kafka cluster). You have two options for doing it:
The error you got suggests that you are missing these modified ClusterRoleBindings. You will need to fix those.
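As a sketch of what such a modified binding looks like: for a second operator, the ClusterRoleBinding typically differs only in its name (which must be unique cluster-wide) and the namespace of the ServiceAccount it points to. The binding name and namespace below are assumptions based on the error reported later in this thread, not the exact manifests from the Strimzi install files:

```yaml
# Hypothetical example: the binding name and namespace are assumptions.
# Check the ClusterRoleBindings shipped with your Strimzi install files
# and adjust the names to match.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: strimzi-cluster-operator-kafka-test   # must be unique per install
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: strimzi-cluster-operator-global
subjects:
  - kind: ServiceAccount
    name: strimzi-cluster-operator
    namespace: kafka-test                     # the second operator's namespace
```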
This is a bit more complicated. Some of the resources such as CRDs or ClusterRoles can exist only once in the Kubernetes cluster. You can in general do the following:
Assuming these versions are close enough (for example 0.34 and 0.35), this will work fine. But we cannot guarantee this for any two versions, and we cannot guarantee it for every future version. Some versions, for example, introduce new API versions and deprecate old ones, which would break the shared CRDs. So Strimzi 0.24 and 0.26 would work fine together like this, but Strimzi 0.22 and 0.24 would not, because of the API move from …
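One way to check whether two Strimzi versions can share CRDs is to look at which API versions the installed CRDs actually serve. A sketch, assuming `kubectl` access to the cluster:

```shell
# List the API versions served by the shared Kafka CRD.
# If both Strimzi versions you want to run support one of the versions
# listed here, sharing the CRDs can work; if one Strimzi version has
# dropped an API version the other still needs, it cannot.
kubectl get crd kafkas.kafka.strimzi.io \
  -o jsonpath='{range .spec.versions[*]}{.name}{"\t"}served={.served}{"\n"}{end}'
```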
-
Hi Scholzj, thanks for your reply. Since we already have two installations in separate namespaces, I think the easiest change is to manually modify the ClusterRoleBinding. We installed the Strimzi operator using Helm. How can I manually change the ClusterRoleBinding? This is the values file we used: Is there any suggestion on the best way to modify the ClusterRoleBinding? Thanks.
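Not an official answer, but as a sketch: since cluster-scoped objects created by one Helm release clash with a second release, a common workaround is to create the extra ClusterRoleBinding by hand with `kubectl`. The binding and ClusterRole names below are assumptions based on the error in this thread; verify them against your own install first:

```shell
# Hypothetical sketch: names are assumptions, check them against your install.
# First, inspect the bindings created by the existing (kafka-dev) install:
kubectl get clusterrolebindings | grep strimzi

# Then create an equivalent binding for the second operator's ServiceAccount,
# under a unique name, pointing at the same ClusterRole:
kubectl create clusterrolebinding strimzi-cluster-operator-kafka-test \
  --clusterrole=strimzi-cluster-operator-global \
  --serviceaccount=kafka-test:strimzi-cluster-operator
```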
-
Hi all,
We are trying to run dev and test environments on the same K8s cluster, using the same Strimzi version (0.30.0).
We have installed each operator and its Kafka cluster in its own namespace (kafka-dev and kafka-test).
As far as I can remember (I didn't do the installation), during the installation of the second operator (the kafka-test operator), the global resource creation was commented out.
So far both Kafka clusters have only internal access and they work, but now I need to open them to external access using NodePort.
The Kafka manifest change was applied fine on the kafka-dev cluster, but I get this error on the kafka-test cluster:
Failed to execute: GET on: https://10.96.0.1/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/strimzi-kafka-test-kafka-test-cluster-kafka-init. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. clusterrolebindings.rbac.authorization.k8s.io "strimzi-kafka-test-kafka-test-cluster-kafka-init" is forbidden: User "system:serviceaccount:kafka-test:strimzi-cluster-operator" cannot get resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" on cluster scope.
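A quick way to confirm this RBAC gap from outside the operator is `kubectl auth can-i`, impersonating the operator's ServiceAccount (names taken from the error message above):

```shell
# Prints "no" while the RBAC gap exists, and "yes" once a
# ClusterRoleBinding for the kafka-test operator is in place.
kubectl auth can-i get clusterrolebindings \
  --as=system:serviceaccount:kafka-test:strimzi-cluster-operator
```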
Reading some posts, I've seen that running different Strimzi versions on the same Kubernetes cluster is not supported.
Now I am wondering whether we can fix the issue we have (even by running a different installation) or not.
Is it possible to have two Strimzi Kafka clusters on the same Kubernetes cluster?
Do we need to install operator + managed cluster on different namespaces?
Do we need to install an operator on its own namespace and the two Kafka clusters on different namespaces?
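For the last layout (one operator watching several Kafka namespaces), the Strimzi Helm chart exposes values controlling which namespaces the operator watches. A sketch of a values fragment, assuming the chart's value names `watchNamespaces` and `watchAnyNamespace` — verify them against `helm show values` for your chart version:

```yaml
# Hypothetical values.yaml fragment for a single operator that manages
# Kafka clusters in both the kafka-dev and kafka-test namespaces.
# Value names are assumptions; verify with:
#   helm show values strimzi/strimzi-kafka-operator
watchAnyNamespace: false
watchNamespaces:
  - kafka-dev
  - kafka-test
```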
Thanks.