Describe the bug
A panic occurs because of a nil pointer dereference in `processPersistentVolumeClaim`. This happens because the provisioner on a claim can differ from the provisioner on its volume. I've found this on a GKE cluster where the volume was "migrated" from GCE to CSI.
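
For context, here is a minimal sketch of what I believe the failure mode looks like, assuming the lookup dispatches on the claim's `volume.kubernetes.io/storage-provisioner` annotation (the function below is a hypothetical reconstruction, not the actual `processPersistentVolumeClaim` code):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// Hypothetical reconstruction of the panicking path: the claim's annotation
// says the CSI driver provisioned it, so the CSI volume handle is read, but a
// PV migrated from the in-tree GCE driver still has spec.gcePersistentDisk
// set and spec.csi == nil.
func volumeIDFromClaimProvisioner(pvc *corev1.PersistentVolumeClaim, pv *corev1.PersistentVolume) string {
	if pvc.Annotations["volume.kubernetes.io/storage-provisioner"] == "pd.csi.storage.gke.io" {
		// On a migrated volume pv.Spec.CSI is nil, so this dereference panics.
		return pv.Spec.CSI.VolumeHandle
	}
	return pv.Spec.GCEPersistentDisk.PDName
}
```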
To Reproduce
Steps to reproduce the behavior:
- Create a `PersistentVolumeClaim` on GKE using your cluster's "default" provisioner
- A `PersistentVolume` is provisioned by GCE
- The `PersistentVolume` is migrated to CSI
- The `PersistentVolumeClaim` is provisioned by CSI
- `processPersistentVolumeClaim` panics because it looks for the `volumeID` on a nil field
Expected behavior
The tagger finds the `volumeID` on the `PersistentVolume` regardless of the provisioner annotation on the `PersistentVolumeClaim`.
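
One nil-safe shape for this (a sketch under the same assumption as above, not necessarily what the linked PR does) is to pick the volume ID from whichever volume source the `PersistentVolume` actually carries, instead of trusting the claim's annotation:

```go
package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Sketch: resolve the volume ID from the PV itself, checking which volume
// source is actually populated rather than which provisioner the claim names.
func volumeIDFromPV(pv *corev1.PersistentVolume) (string, error) {
	switch {
	case pv.Spec.CSI != nil:
		return pv.Spec.CSI.VolumeHandle, nil
	case pv.Spec.GCEPersistentDisk != nil:
		return pv.Spec.GCEPersistentDisk.PDName, nil
	default:
		return "", fmt.Errorf("no supported volume source on PV %s", pv.Name)
	}
}
```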
Additional context
I think this could fix the issue: https://github.com/afharvey/k8s-pvc-tagger/pull/1/files
I tried to keep the changes limited to GCP.
I'm happy to try and fix this or take another approach.
I've only seen this on GCP. Azure and AWS work great.
Here are the K8s resources that cause the panic (and the resulting crash loop).
`PersistentVolumeClaim` - `GCP_PD_CSI` provisioner (`volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io`):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    k8s-pvc-tagger/ignore: "true"
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
    volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
  creationTimestamp: "2025-03-04T13:54:47Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    component: my-component
  name: my-component
  namespace: default
  resourceVersion: "809902313"
  uid: 7490859c-9e2f-4c71-b157-8d42205e4325
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  storageClassName: standard
  volumeMode: Filesystem
  volumeName: pvc-7490859c-9e2f-4c71-b157-8d42205e4325
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound
```
`PersistentVolume` - `GCP_PD_LEGACY` provisioner (`pv.kubernetes.io/migrated-to: pd.csi.storage.gke.io`, `pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd`):
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/migrated-to: pd.csi.storage.gke.io
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
    volume.kubernetes.io/provisioner-deletion-secret-name: ""
    volume.kubernetes.io/provisioner-deletion-secret-namespace: ""
  creationTimestamp: "2025-03-04T13:54:51Z"
  finalizers:
  - kubernetes.io/pv-protection
  - external-attacher/pd-csi-storage-gke-io
  labels:
    topology.kubernetes.io/region: europe-west2
    topology.kubernetes.io/zone: europe-west2-c
  name: pvc-7490859c-9e2f-4c71-b157-8d42205e4325
  resourceVersion: "807636430"
  uid: e709c9c4-abc7-4529-9207-c52faef0c966
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: my-component
    namespace: default
    resourceVersion: "807636314"
    uid: 7490859c-9e2f-4c71-b157-8d42205e4325
  gcePersistentDisk:
    fsType: ext4
    pdName: pvc-7490859c-9e2f-4c71-b157-8d42205e4325
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - europe-west2-c
        - key: topology.kubernetes.io/region
          operator: In
          values:
          - europe-west2
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  volumeMode: Filesystem
status:
  lastPhaseTransitionTime: "2025-03-04T13:54:51Z"
  phase: Bound
```
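
Note the `pv.kubernetes.io/migrated-to` annotation on the `PersistentVolume` above: it marks a volume that was provisioned by the in-tree `kubernetes.io/gce-pd` driver and later handed over to the CSI driver, i.e. exactly the state that triggers the panic. If useful, a guard along these lines (a hypothetical helper, not existing k8s-pvc-tagger code) could detect it:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// Hypothetical helper: a PV migrated from the in-tree GCE PD driver keeps its
// gcePersistentDisk source but carries the migrated-to annotation.
func isMigratedToGKECSI(pv *corev1.PersistentVolume) bool {
	return pv.Annotations["pv.kubernetes.io/migrated-to"] == "pd.csi.storage.gke.io"
}
```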