
Ingress-nginx pod crashes with "invalid memory address or nil pointer dereference" after node restart #13605

@alexmorbo

Description

What happened:

After restarting a node in the interruptible node group, one of the ingress-nginx pods starts, but crashes with the following error: panic: runtime error: invalid memory address or nil pointer dereference.

Container logs:

NGINX Ingress controller
  Release:       v1.13.0
  Build:         4cbb78a9dc4f1888af802b70ddf980272e01268b
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.27.1

W0710 19:20:59.966578       7 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0710 19:20:59.966718       7 main.go:205] "Creating API client" host="https://10.96.128.1:443"
I0710 19:20:59.985454       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.2"
I0710 19:21:00.436297       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0710 19:21:00.481037       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x192ba36]
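
The trace above stops at the signal line; the full goroutine stack from the crashed attempt can be retrieved from the previous container instance, for example:

kubectl -n ingress-nginx logs ingress-nginx-controller-5594579557-t85sc --previous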

What you expected to happen:

The ingress-nginx pod should either start normally or report a meaningful error instead of crashing with a SIGSEGV.

NGINX Ingress controller version:

Release:       v1.13.0
Image:         custom.nexus.registry/ingress-nginx/controller:v1.13.0
Build:         4cbb78a9dc4f1888af802b70ddf980272e01268b
Chart version: ingress-nginx-4.13.0

Kubernetes version:

Client Version: v1.31.2
Server Version: v1.31.2

Environment:

  • Cloud provider or hardware configuration: Yandex Cloud, interruptible instances
  • OS: Ubuntu 22.04
  • Kernel: Linux cl1ars52ue6mltedf96h-ovel 5.15.0-105-generic x86_64
  • Install tools: Terraform + Helm
  • Basic cluster related info:
kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.31.2

kubectl get nodes -o wide | grep cl1ars52ue6mltedf96h 
cl1ars52ue6mltedf96h-afut   Ready    <none>   29d    v1.31.2   10.12.0.14    <none>        Ubuntu 20.04.6 LTS   5.4.0-216-generic   containerd://1.7.25
cl1ars52ue6mltedf96h-ovel   Ready    <none>   29d    v1.31.2   10.8.0.21     <none>        Ubuntu 20.04.6 LTS   5.4.0-216-generic   containerd://1.7.25
cl1ars52ue6mltedf96h-usim   Ready    <none>   29d    v1.31.2   10.7.0.24     <none>        Ubuntu 20.04.6 LTS   5.4.0-216-generic   containerd://1.7.25
  • How was the ingress-nginx-controller installed:
    • Helm
    • helm ls -A | grep ingress:
helm ls -A | grep ingress
 
ingress-nginx                 	ingress-nginx            	182     	2025-07-09 15:47:31.432502 +0300 MSK   	deployed	ingress-nginx-4.13.0                   	1.13.0     
    • helm -n ingress-nginx get values ingress-nginx:
USER-SUPPLIED VALUES:
controller:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/instance
            operator: In
            values:
            - ingress-nginx
          - key: app.kubernetes.io/component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  allowSnippetAnnotations: true
  config:
    allow-snippet-annotations: "true"
    annotations-risk-level: Critical
    geoip2-autoreload-in-minutes: 60
    http2-max-field-size: 8k
    http2-max-header-size: 32k
    max-worker-connections: "16384"
    max-worker-open-files: "0"
    proxy-body-size: 5m
    proxy-set-headers: ingress-nginx/ingress-headers
    server-tokens: "false"
    use-geoip: "false"
    use-geoip2: "true"
    use-gzip: "true"
  extraArgs:
    maxmind-edition-ids: GeoLite2-Country
  extraVolumeMounts:
  - mountPath: /etc/ingress-controller/geoip
    name: maxmind-db
  extraVolumes:
  - name: maxmind-db
    persistentVolumeClaim:
      claimName: maxmind-database
  ingressClass: nginx
  ingressClassByName: false
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx
    default: true
    enabled: true
    name: nginx
  metrics:
    enabled: true
    service:
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    serviceMonitor:
      enabled: true
  minAvailable: 2
  nodeSelector:
    my.custom.selector/node-group: ingress
  replicaCount: 3
  resources:
    limits:
      memory: 700Mi
    requests:
      memory: 700Mi
  service:
    externalTrafficPolicy: Local
    loadBalancerIP: ip.ip.ip.ip
  tolerations:
  - effect: NoExecute
    key: my.custom.selector/dedicated
    value: ingress
  updateStrategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  watchIngressWithoutClass: true
defaultBackend:
  enabled: false
global:
  image:
    registry: custom.nexus.registry
  • Another two instances of ingress-nginx are running in the cluster
  • Current State of the controller:
kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.13.0
              helm.sh/chart=ingress-nginx-4.13.0
Annotations:  ingressclass.kubernetes.io/is-default-class: true
              meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: ingress-nginx
Controller:   k8s.io/ingress-nginx
Events:       <none>


kubectl -n ingress-nginx get all -o wide

NAME                                            READY   STATUS                 RESTARTS      AGE     IP              NODE                        NOMINATED NODE   READINESS GATES
pod/ingress-nginx-controller-5594579557-7xdl2   1/1     Running                0             12h     10.112.135.61   cl1ars52ue6mltedf96h-usim   <none>           <none>
pod/ingress-nginx-controller-5594579557-n65ln   1/1     Running                0             3h25m   10.112.134.38   cl1ars52ue6mltedf96h-afut   <none>           <none>
pod/ingress-nginx-controller-5594579557-t85sc   0/1     CreateContainerError   3 (95m ago)   99m     10.112.141.23   cl1ars52ue6mltedf96h-ovel   <none>           <none>
pod/set-node-sysctls-2gblr                      1/1     Running                0             3h24m   10.112.134.33   cl1ars52ue6mltedf96h-afut   <none>           <none>
pod/set-node-sysctls-cl754                      1/1     Running                0             99m     10.112.141.22   cl1ars52ue6mltedf96h-ovel   <none>           <none>
pod/set-node-sysctls-hznkp                      1/1     Running                0             12h     10.112.135.59   cl1ars52ue6mltedf96h-usim   <none>           <none>

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE      SELECTOR
service/api                                  ClusterIP      10.96.188.195   <none>            80/TCP,443/TCP               3y250d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller             LoadBalancer   10.96.188.135   ip.ip.ip.ip       80:30086/TCP,443:32172/TCP   4y205d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission   ClusterIP      10.96.221.85    <none>            443/TCP                      4y205d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-metrics     ClusterIP      10.96.246.31    <none>            10254/TCP                    4y205d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE      CONTAINERS   IMAGES                  SELECTOR
daemonset.apps/set-node-sysctls   3         3         3       3            3           <none>          2y243d   command      custom.nexus.registry/busybox   app=set-node-sysctls

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE      CONTAINERS   IMAGES                                                                                                                   SELECTOR
deployment.apps/ingress-nginx-controller   2/3     3            2           4y205d   controller   custom.nexus.registry/ingress-nginx/controller:v1.13.0@sha256:dc75a7baec7a3b827a5d7ab0acd10ab507904c7dad692365b3e3b596eca1afd2   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                                  DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES                                                                                                                     SELECTOR
replicaset.apps/ingress-nginx-controller-54fdcf9b5b   0         0         0       174d   controller   registry.k8s.io/ingress-nginx/controller:v1.11.4@sha256:981a97d78bee3109c0b149946c07989f8f1478a9265031d2d23dea839ba05b52   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=54fdcf9b5b
replicaset.apps/ingress-nginx-controller-5594579557   3         3         2       35h    controller   custom.nexus.registry/ingress-nginx/controller:v1.13.0@sha256:dc75a7baec7a3b827a5d7ab0acd10ab507904c7dad692365b3e3b596eca1afd2     app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5594579557
replicaset.apps/ingress-nginx-controller-57668f8c66   0         0         0       94d    controller   custom.nexus.registry/ingress-nginx/controller:v1.12.0@sha256:e6b8de175acda6ca913891f0f727bca4527e797d52688cbe9fec9040d6f6b6fa     app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=57668f8c66
replicaset.apps/ingress-nginx-controller-5b98f4bb6c   0         0         0       20d    controller   custom.nexus.registry/ingress-nginx/controller:v1.12.0@sha256:e6b8de175acda6ca913891f0f727bca4527e797d52688cbe9fec9040d6f6b6fa     app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5b98f4bb6c
replicaset.apps/ingress-nginx-controller-64bfd474db   0         0         0       114d   controller   custom.nexus.registry/ingress-nginx/controller:v1.12.0@sha256:e6b8de175acda6ca913891f0f727bca4527e797d52688cbe9fec9040d6f6b6fa     app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=64bfd474db
replicaset.apps/ingress-nginx-controller-654fddc7c9   0         0         0       174d   controller   registry.k8s.io/ingress-nginx/controller:v1.8.5@sha256:5831fa630e691c0c8c93ead1b57b37a6a8e5416d3d2364afeb8fe36fe0fef680    app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=654fddc7c9
replicaset.apps/ingress-nginx-controller-68bd9b74f    0         0         0       20d    controller   custom.nexus.registry/ingress-nginx/controller:v1.12.0@sha256:e6b8de175acda6ca913891f0f727bca4527e797d52688cbe9fec9040d6f6b6fa     app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=68bd9b74f
replicaset.apps/ingress-nginx-controller-78d79c6bd5   0         0         0       174d   controller   custom.nexus.registry/ingress-nginx/controller:v1.12.0@sha256:e6b8de175acda6ca913891f0f727bca4527e797d52688cbe9fec9040d6f6b6fa     app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=78d79c6bd5
replicaset.apps/ingress-nginx-controller-85cb7cd494   0         0         0       174d   controller   registry.k8s.io/ingress-nginx/controller:v1.10.2@sha256:e3311b3d9671bc52d90572bcbfb7ee5b71c985d6d6cffd445c241f1e2703363c   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=85cb7cd494
replicaset.apps/ingress-nginx-controller-864c489ddd   0         0         0       111d   controller   custom.nexus.registry/ingress-nginx/controller:v1.12.0@sha256:e6b8de175acda6ca913891f0f727bca4527e797d52688cbe9fec9040d6f6b6fa     app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=864c489ddd
replicaset.apps/ingress-nginx-controller-876f68469    0         0         0       174d   controller   custom.nexus.registry/ingress-nginx/controller:v1.12.0@sha256:e6b8de175acda6ca913891f0f727bca4527e797d52688cbe9fec9040d6f6b6fa     app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=876f68469

NAME                                    SCHEDULE       TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE    CONTAINERS   IMAGES                        SELECTOR
cronjob.batch/update-maxmind-database   8 21 * * 0,4   <none>     False     0        3d23h           174d   job          maxmindinc/geoipupdate:v7.1   <none>

NAME                                         STATUS     COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES                        SELECTOR
job.batch/update-maxmind-database-29197268   Complete   1/1           8s         3d23h   job          maxmindinc/geoipupdate:v7.1   batch.kubernetes.io/controller-uid=d7fd4738-1ed8-4a3a-9581-195eef334a86


kubectl -n ingress-nginx describe po ingress-nginx-controller-5594579557-t85sc

Name:             ingress-nginx-controller-5594579557-t85sc
Namespace:        ingress-nginx
Priority:         0
Service Account:  ingress-nginx
Node:             cl1ars52ue6mltedf96h-ovel/10.8.0.21
Start Time:       Thu, 10 Jul 2025 22:19:41 +0300
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=ingress-nginx
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=ingress-nginx
                  app.kubernetes.io/part-of=ingress-nginx
                  app.kubernetes.io/version=1.13.0
                  helm.sh/chart=ingress-nginx-4.13.0
                  pod-template-hash=5594579557
Annotations:      kubectl.kubernetes.io/restartedAt: 2025-04-07T16:26:52Z
Status:           Running
IP:               10.112.141.23
IPs:
  IP:           10.112.141.23
Controlled By:  ReplicaSet/ingress-nginx-controller-5594579557
Containers:
  controller:
    Container ID:    containerd://37c4c3bfd1f33b4290a8bd30aca6df8067a5ef63298afe5de057307914b4daee
    Image:           custom.nexus.registry/ingress-nginx/controller:v1.13.0@sha256:dc75a7baec7a3b827a5d7ab0acd10ab507904c7dad692365b3e3b596eca1afd2
    Image ID:        custom.nexus.registry/ingress-nginx/controller@sha256:dc75a7baec7a3b827a5d7ab0acd10ab507904c7dad692365b3e3b596eca1afd2
    Ports:           80/TCP, 443/TCP, 10254/TCP, 8443/TCP
    Host Ports:      0/TCP, 0/TCP, 0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --watch-ingress-without-class=true
      --enable-metrics=true
      --maxmind-edition-ids=GeoLite2-Country
    State:          Waiting
      Reason:       CreateContainerError
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 10 Jul 2025 22:20:59 +0300
      Finished:     Thu, 10 Jul 2025 22:21:00 +0300
    Ready:          False
    Restart Count:  3
    Limits:
      memory:  700Mi
    Requests:
      cpu:      100m
      memory:   700Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-5594579557-t85sc (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /etc/ingress-controller/geoip from maxmind-db (rw)
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ttzld (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  maxmind-db:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  maxmind-database
    ReadOnly:   false
  kube-api-access-ttzld:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
                             my.custom.selector/node-group=ingress
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
                             my.custom.selector/dedicated=ingress:NoExecute
Events:
  Type    Reason  Age                    From     Message
  ----    ------  ----                   ----     -------
  Normal  Pulled  3m13s (x417 over 98m)  kubelet  Container image "custom.nexus.registry/ingress-nginx/controller:v1.13.0@sha256:dc75a7baec7a3b827a5d7ab0acd10ab507904c7dad692365b3e3b596eca1afd2" already present on machine

kubectl -n ingress-nginx describe svc ingress-nginx-controller

Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.13.0
                          helm.sh/chart=ingress-nginx-4.13.0
Annotations:              meta.helm.sh/release-name: ingress-nginx
                          meta.helm.sh/release-namespace: ingress-nginx
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.188.135
IPs:                      10.96.188.135
Desired LoadBalancer IP:  ip.ip.ip.ip
LoadBalancer Ingress:     ip.ip.ip.ip (VIP)
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30086/TCP
Endpoints:                10.112.135.61:80,10.112.134.38:80,10.112.141.23:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32172/TCP
Endpoints:                10.112.135.61:443,10.112.134.38:443,10.112.141.23:443
Session Affinity:         None
External Traffic Policy:  Local
Internal Traffic Policy:  Cluster
HealthCheck NodePort:     32071
Events:                   <none>
  • Current state of ingress object, if applicable:
    While the error persists, Ingress objects are not processed because the pod fails to start.

  • Others:
    There is a Secret mounted as:

/usr/local/certificates/ from webhook-cert (ro)

With the following args:

--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key

It's possible the issue is related to a missing or invalid Secret after the restart. However, this should not lead to a SIGSEGV.
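
A quick way to check whether the admission webhook Secret survived the restart and whether the certificate in it still parses (the Secret name comes from the webhook-cert volume above; this assumes its data keys are named cert and key, matching the mounted file names):

kubectl -n ingress-nginx get secret ingress-nginx-admission
kubectl -n ingress-nginx get secret ingress-nginx-admission -o jsonpath='{.data.cert}' | base64 -d | openssl x509 -noout -dates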

How to reproduce this issue:

  • Create a Kubernetes cluster with interruptible nodes (e.g., we use Yandex Cloud).
  • Install ingress-nginx using Helm with the admission webhook enabled:
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace \
  --set controller.admissionWebhooks.enabled=true
  • Drain the node running the ingress-nginx pod (e.g., via kubectl drain; an example command is shown after this list).
  • After the node restarts, the pod may crash with a SIGSEGV.
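
For reference, a typical drain command for the affected node would look like this (node name taken from the output above; --ignore-daemonsets is needed because of the set-node-sysctls DaemonSet):

kubectl drain cl1ars52ue6mltedf96h-ovel --ignore-daemonsets --delete-emptydir-data
kubectl uncordon cl1ars52ue6mltedf96h-ovel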

Anything else we need to know:

The issue likely occurs because the Secret for the webhook certificates is missing or invalid after the pod restarts. However, this should result in a handled error, not a nil pointer panic.
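
If the webhook certificate is indeed the trigger, one way to confirm (assuming the release was installed from the same chart repository as in the reproduce steps) is to temporarily redeploy with the admission webhook disabled and check whether the pod on the restarted node starts cleanly:

helm upgrade ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --reuse-values \
  --set controller.admissionWebhooks.enabled=false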

Labels: kind/bug, needs-priority, needs-triage
