
FUSE sidecar container is stuck in PodInitialization for pods mounting two volumes backed by a GCS bucket #487

@rkhir

Description


Describe the issue

We are using GCS buckets with the FUSE CSI driver to mount file systems on k8s workloads. I have had issues getting two workloads (Deployment/StatefulSet) to initialize while mounting two PVs backed by one GCS bucket: the pods are simply stuck in the PodInitialization state, and none of the sidecar containers start, including the FUSE sidecar.
This has worked for a pod that mounts a single PV backed by a GCS bucket.

As mentioned in the GCP documentation, I tried manually injecting the sidecar container, but that didn't help start the pod or any of its init containers.
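For context, with the managed driver the sidecar is normally injected automatically when the pod carries the gke-gcsfuse/volumes annotation. A minimal single-volume sanity-check pod, roughly like the one that works for us, might look like the sketch below (everything except the annotation key, the service account, and the existing claim name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-single-volume-test      # hypothetical name
  namespace: yyyyyy
  annotations:
    gke-gcsfuse/volumes: "true"         # requests gcsfuse sidecar injection on GKE
spec:
  serviceAccountName: xxx-airflow-59aed630
  containers:
  - name: test
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: dags
      mountPath: /data/dags             # hypothetical mount path
  volumes:
  - name: dags
    persistentVolumeClaim:
      claimName: xxxx-claim             # claim bound to the PV shown under "Steps to reproduce"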

System & Version (please complete the following information):

  • OS: Container-Optimized OS from Google (k8s node)
  • Platform: GKE, GCS
  • Versions:
    -- fuse driver version: v1.5.0
    -- GKE version: v1.30.5-gke.1713000
  • Node:
    -- containerRuntimeVersion: containerd://1.7.23
    -- kernelVersion: 6.1.112+
    -- kubeProxyVersion: v1.30.5-gke.1713000
    -- kubeletVersion: v1.30.5-gke.1713000
    -- operatingSystem: linux
    -- osImage: Container-Optimized OS from Google
  • sidecar image: gcs-fuse-csi-driver-sidecar-mounter:v1.5.0-gke.3@sha256:ce8d3905b165220299dffda672cae32eb94cdb5d8872314bc40aeffdba5ecd76

Steps to reproduce the behavior with the following information:

  • PV YAML with all mount options:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  finalizers:
  - kubernetes.io/pv-protection
  name: xxxx
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: xxxx-claim
    namespace: yyyyyy
  csi:
    driver: gcsfuse.csi.storage.gke.io
    volumeAttributes:
      disableMetrics: "true"
    volumeHandle: gcsbucket-929a14e
  mountOptions:
  - only-dir=/dags
  - implicit-dirs
  - gid=0
  - uid=50000
  - file-mode=777
  - dir-mode=777
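
For completeness, the second PV and its claim would look roughly like the sketch below: same bucket handle, different only-dir prefix (names are placeholders; the /logs prefix matches the mount_flags visible in the daemonset logs further down):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: xxxx-logs                      # hypothetical second PV
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: xxxx-logs-claim              # hypothetical second claim
    namespace: yyyyyy
  csi:
    driver: gcsfuse.csi.storage.gke.io
    volumeAttributes:
      disableMetrics: "true"
    volumeHandle: gcsbucket-929a14e    # same bucket as the first PV
  mountOptions:
  - only-dir=/logs                     # different prefix than the first PV
  - implicit-dirs
  - gid=0
  - uid=50000
  - file-mode=777
  - dir-mode=777
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xxxx-logs-claim
  namespace: yyyyyy
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""                 # static binding to the PV above
  volumeName: xxxx-logs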

**The FUSE driver daemonset pod logs**

These logs were collected from the CSI driver daemonset pod running on the k8s node, which is responsible for attaching the FUSE sidecar container. I thought they could be helpful:


bernetes.io~csi/xxx-airflow-logs/mount" volume_capability:{mount:{mount_flags:"only-dir=/logs" mount_flags:"implicit-dirs" mount_flags:"gid=0" mount_flags:"uid=50000" mount_flags:"file-mode=777" mount_flags:"dir-mode=777" volume_mount_group:"0"} access_mode:{mode:MULTI_NODE_MULTI_WRITER}} volume_context:{key:"csi.storage.k8s.io/ephemeral" value:"false"} volume_context:{key:"csi.storage.k8s.io/pod.name" value:"airflow-webserver-cfd8b5d58-7rjhc"} volume_context:{key:"csi.storage.k8s.io/pod.namespace" value:"xxxxx"} volume_context:{key:"csi.storage.k8s.io/pod.uid" value:"8716a2a6-6e7b-4c8c-868d-642e80f16b1e"} volume_context:{key:"csi.storage.k8s.io/serviceAccount.name" value:"xxx-airflow-59aed630"} volume_context:{key:"csi.storage.k8s.io/serviceAccount.tokens" value:"***stripped***"} volume_context:{key:"disableMetrics" value:"true"}
gcs-fuse-csi-driver I0131 16:30:10.977207       1 node.go:190] NodePublishVolume succeeded on volume "gcsbucket-929a14e" to target path "/var/lib/kubelet/pods/8716a2a6-6e7b-4c8c-868d-642e80f16b1e/volumes/kubernetes.io~csi/xxxx/mount", mount already exists.


What I have Tried

  • When I modify one of the mounted volumes to point to an emptyDir instead of the PVC, the FUSE sidecar container works, but we need both volumes to be mounted.
  • Creating two different buckets, one per volume, each with its own PVC, and mounting them on the pod: that didn't work.
  • Creating two PVs for the same bucket with two different PVCs and mounting them: that also didn't work.
  • the daemon s

Additional context
FWIW, the stuck pods are Airflow pods (Triggerer and Scheduler). However, I tried mounting the two volumes on another test deployment and got the same results; a sketch of such a test Deployment is below.
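
A minimal sketch of a test Deployment that reproduces the two-volume case (workload name, labels, and mount paths are hypothetical; the claims are the two PVCs backed by the same bucket):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gcsfuse-two-volumes-test       # hypothetical test workload
  namespace: yyyyyy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gcsfuse-two-volumes-test
  template:
    metadata:
      labels:
        app: gcsfuse-two-volumes-test
      annotations:
        gke-gcsfuse/volumes: "true"    # requests gcsfuse sidecar injection
    spec:
      serviceAccountName: xxx-airflow-59aed630
      containers:
      - name: test
        image: busybox
        command: ["sleep", "infinity"]
        volumeMounts:
        - name: dags
          mountPath: /data/dags        # hypothetical mount path
        - name: logs
          mountPath: /data/logs        # hypothetical mount path
      volumes:
      - name: dags
        persistentVolumeClaim:
          claimName: xxxx-claim        # first PVC (only-dir=/dags)
      - name: logs
        persistentVolumeClaim:
          claimName: xxxx-logs-claim   # second PVC (only-dir=/logs)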

SLO:
We strive to respond to all bug reports within 24 business hours provided the information mentioned above is included.
