Kubernetes traefik ingress controller not creating Public IP #11000
-
ACS 4.20.0. Running CloudStack in a homelab where the "main" network is 192.168.50.0/24. This is where the ACS management server and hosts reside, but it's also shared with everyday devices like laptops, phones, etc. I'm planning a move to a new location soon and will change this network layout, but for now it is what it is. A subset of this range is used as the "Public IP" range for my CloudStack deployment. An Isolated Network has been deployed in CloudStack, and a Kubernetes cluster within this isolated network. Connectivity is available via kubectl, pods can be provisioned, etc. I am attempting to deploy the traefik ingress controller, but it is timing out attempting to reach the management host API.
I noted similar errors when I deployed a test pod to the same kube-system namespace and used kubectl exec. Attempts from this pod to access the internet, to perform apt updates and curl sites, are successful. However, attempts to access any resources in the "home" network of 192.168.50.0/24 are unsuccessful. I noted that the pods deployed in the kube-system namespace have a 192.168.x.x address. I connected to the virtual router for this network, and running tcpdump shows traffic from cloud-controller-manager and the test pod ingressing on eth0, but from there it seems to be dropped. I've tried looking at the iptables rules on the virtual router, but I'm running into the limits of my understanding of how iptables works. I also noticed that internet-bound traffic seemed to come from the 10.100.0.x address of the k8s node on which the pod was provisioned, suggesting some kind of NAT is being applied to that traffic; traffic intended for 192.168.50.x, however, shows the IP address of the pod in the tcpdump. This may not be a supported or recommended network configuration, but any guidance on how to resolve it would be welcome.
Replies: 3 comments 1 reply
-
@mark-duggan could you please try with the annotation service.beta.kubernetes.io/cloudstack-load-balancer-proxy-protocol: 'true'
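For reference, the annotation goes on the Service's metadata. A minimal sketch, assuming a traefik LoadBalancer Service in kube-system (the service name, selector, and ports are illustrative, not taken from this thread):

```yaml
# Illustrative only: name, selector, and ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
  annotations:
    # Annotation suggested above, read by the CloudStack cloud-controller-manager
    service.beta.kubernetes.io/cloudstack-load-balancer-proxy-protocol: 'true'
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - name: web
      port: 80
      targetPort: 8000
```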
-
@kiranchavala, thanks for the suggestion. I tried this but it didn't fix my issue. However, I did some more digging into the network configuration and figured it out myself. I used an article on Medium to reconfigure the IP Pool used by Calico from 192.168.0.0/16 to 172.16.0.0/16, a range which doesn't overlap with anything in my internal network. I think the main problem was that Calico will not NAT anything targeted at a Calico IP Pool address, so the virtual router was eventually dropping the packets, due to routing confusion more than anything, although this is still quite in the weeds of networking and NAT. Once I created a new IP Pool, restarted all the pods in the 192.168.0.0/16 range, and deleted the original IP Pool, the Cloud Controller Manager pod was able to communicate with the CloudStack API in my configuration, and the Public IP/Load Balancer was provisioned.
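For anyone hitting the same thing, the replacement pool looked roughly like this. A sketch only, assuming the projectcalico.org/v3 API applied via calicoctl; the pool name, blockSize, and encapsulation mode are assumptions, not copied from my cluster:

```yaml
# Sketch of a replacement Calico IPPool (apply with calicoctl).
# Name, blockSize, and ipipMode are illustrative assumptions.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: new-pool
spec:
  cidr: 172.16.0.0/16    # non-overlapping with the 192.168.50.0/24 home network
  blockSize: 26
  ipipMode: Always
  natOutgoing: true      # Calico SNATs traffic leaving the pool; pool-internal traffic is not NATed
```

After applying this, the steps were: disable or delete the old 192.168.0.0/16 pool and restart the pods so they pick up addresses from the new pool.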
-
@mark-duggan, glad that it worked out. There is an upcoming feature targeted for the 4.21 release which introduces support for Calico in CloudStack.