Replies: 3 comments 2 replies
-
There is nothing that we customize in this project that would alter this behavior, and I have no reason to suspect this is a defect in K3s. You've not shared any information about the actual deployment you're working with here, but you should read https://kubernetes.io/docs/concepts/services-networking/dns-pod-service and ensure that what you're deploying meets the requirements for creation of the DNS records you're expecting. Note that it specifically requires the presence of a Service whose name matches the pod's subdomain value. Also, you've disabled coredns, so I have no idea whether or not whatever you've replaced it with (assuming you've replaced it with something) is behaving properly. Given that you appear to have disabled pretty much everything bundled with K3s (all the packaged components, several Kubernetes components, and the CNI), I'm not really sure what you still expect us to be responsible for.
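For reference, the requirement in those docs boils down to something like the following (a minimal sketch with hypothetical names, not the reporter's actual deployment): a headless Service whose name matches the StatefulSet's serviceName (or a bare pod's subdomain), in the same namespace as the pods.

```yaml
# Hypothetical example: headless governing Service and StatefulSet that reference each other.
apiVersion: v1
kind: Service
metadata:
  name: example-peers          # must match the StatefulSet's serviceName / the pods' subdomain
  namespace: example-ns
spec:
  clusterIP: None              # headless: per-pod A/AAAA records are created
  selector:
    app: example
  ports:
    - name: mesh
      port: 9094
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
  namespace: example-ns
spec:
  serviceName: example-peers   # governs pod DNS: example-0.example-peers.example-ns.svc.cluster.local
  replicas: 2
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: registry.example/app:latest   # placeholder image
```

With that pairing in place, each pod gets a stable record of the form pod-name.service-name.namespace.svc.cluster.local.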
-
@brandond thank you for looking into this. The cluster runs error-free with the ArgoCD, cert-manager, Cilium, CoreDNS, external-dns, kured, Longhorn, metrics-server, VictoriaLogs and VictoriaMetrics Helm charts deployed independently. The only hiccup is the issue detailed above. CoreDNS follows the K3s standards; here are the custom Helm chart settings:

```yaml
hpa:
  enabled: true
  maxReplicas: 3
  minReplicas: 1
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
podDisruptionBudget:
  maxUnavailable: 1
resources:
  limits:
    memory: 128Mi
  requests:
    cpu: 10m
    memory: 128Mi
rollingUpdate:
  maxSurge: 1
  maxUnavailable: 0
servers:
  - zones:
      - zone: .
    port: 53
    plugins:
      - name: errors
      - name: health
        configBlock: lameduck 5s
      - name: ready
      - name: kubernetes
        parameters: cluster.local in-addr.arpa ip6.arpa
        configBlock: |-
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
          ttl 30
      - name: prometheus
        parameters: 0.0.0.0:9153
      - name: forward
        parameters: . /etc/resolv.conf
      - name: cache
        parameters: 30
      - name: loop
      - name: reload
      - name: loadbalance
service:
  clusterIP: 10.43.0.10
serviceAccount:
  create: true
```
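For context, those servers values should render to a Corefile roughly like the one below (a reconstruction from the values above, not output captured from the running cluster):

```
.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus 0.0.0.0:9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```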
There is an old issue, #1834, that was never resolved, which is why I allowed myself to open another one. Please let me know your thoughts; you have a lot of experience with K3s. Thank you again!

Edit: DNS records work properly, example:
-
As demonstrated at VictoriaMetrics/helm-charts#2283 (comment), this does not reproduce on a stock K3s cluster. You've misconfigured something in your extensive customization.
-
Environmental Info:
K3s Version:
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
Server configuration:
Agent configuration:
Describe the bug:
StatefulSet pods cannot resolve each other's FQDNs, which breaks clustering applications like AlertManager. I installed VictoriaMetrics and DNS resolution fails:
Expected behavior:
AlertManager pods should resolve peer FQDNs like vmalertmanager-vmks-1.vmalertmanager-vmks.kube-system.svc.cluster.local for cluster communication. The subdomain field doesn't appear in the AlertManager pods automatically, as per K8s design. See VictoriaMetrics/helm-charts#2283 for troubleshooting details.
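One generic way to check whether that record resolves from inside the cluster is a throwaway debug pod; the FQDN and namespace below come from the report, while the pod name and image are assumptions for illustration:

```yaml
# Disposable debug pod: runs a single nslookup against the peer FQDN, then exits.
apiVersion: v1
kind: Pod
metadata:
  name: dns-check
  namespace: kube-system
spec:
  restartPolicy: Never
  containers:
    - name: dns-check
      image: busybox:1.36
      command:
        - nslookup
        - vmalertmanager-vmks-1.vmalertmanager-vmks.kube-system.svc.cluster.local
```

After applying it, kubectl logs -n kube-system dns-check shows whether the lookup succeeds. Note that per-pod records like this are only created when vmalertmanager-vmks is a headless (clusterIP: None) Service governing the StatefulSet.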