
Helm Install of cert-manager-csi-driver Fails on Minikube with /dev/bus/usb Errors #385


roberts-github-name opened this issue Apr 6, 2025 · 1 comment

@roberts-github-name

Hi all, I ran into an odd installation issue where cert-manager-csi-driver apparently tries, and fails, to mount a USB device.

I'm running a minikube cluster on Xubuntu with rootless Docker. Here's how I installed Docker:

# rooted and rootless docker installation
curl -o install.sh -fsSL https://get.docker.com # uses apt-get under the hood
sudo sh install.sh
dockerd-rootless-setuptool.sh install
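To double-check that the rootless daemon is the one in use (the "rootless" context is what dockerd-rootless-setuptool.sh sets up, as far as I know):

# rootless dockerd runs as a systemd user service
systemctl --user status docker
# the "rootless" context should be the selected one
docker context ls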

Then I created the minikube cluster (minikube v1.35.0) with:

minikube start \
  --addons=dashboard,metrics-server,registry \
  --insecure-registry=192.168.49.2:5000 \
  --driver=docker \
  --container-runtime=containerd \
  --nodes=3 \
  --memory=no-limit \
  --cpus=no-limit
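The cluster itself comes up fine as far as I can tell:

# quick check that all three nodes are Ready
minikube status
kubectl get nodes -o wide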

Then I try to install cert-manager and cert-manager-csi-driver with Helm:

helm repo add jetstack https://charts.jetstack.io --force-update

helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.17.0 \
  --set crds.enabled=true

helm install cert-manager-csi-driver jetstack/cert-manager-csi-driver \
  --namespace cert-manager \
  --wait
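While helm install --wait blocks, watching from a second terminal shows where things stand:

kubectl -n cert-manager get pods -w
kubectl -n cert-manager get ds cert-manager-csi-driver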

Installing cert-manager succeeds after waiting a minute or two.

But installing cert-manager-csi-driver times out. In my cluster's dashboard I can see that the DaemonSet has errors: the pods it tries to start keep failing, and the dashboard reports this error:

Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error creating device nodes: mount src=/dev/bus/usb/001/021, dst=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cert-manager-csi-driver/rootfs/dev/bus/usb/001/021, dstFd=/proc/thread-self/fd/8, flags=0x1000: no such file or directory: unknown

Back-off restarting failed container cert-manager-csi-driver in pod cert-manager-csi-driver-jm4ln_cert-manager(b4b5cdcd-bf53-4d49-8808-b22e8e364b58)
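Some diagnostics that may help pin this down (pod and node names will differ; as far as I can tell the driver container runs privileged so it can create mounts, which would explain why runc tries to recreate the host's /dev device nodes, /dev/bus/usb included, inside the container):

# pod-level detail and recent events
kubectl -n cert-manager describe pod cert-manager-csi-driver-jm4ln
kubectl -n cert-manager get events --sort-by=.lastTimestamp | tail -n 20

# with the docker driver each minikube node is itself a container on the host
# (minikube, minikube-m02, minikube-m03); check what it sees under /dev/bus/usb
docker exec minikube ls -R /dev/bus/usb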

I briefly searched online for cert-manager errors related to /dev/bus/usb, and also searched the source code for "usb", but didn't find anything that looked relevant.

Any ideas? From the docs I understand the CSI driver's goal is to mount dynamically issued certificates in memory, which is awesome, but I'm surprised to see /dev/bus/usb involved!
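For context, the way I plan to consume the driver is roughly this (just a sketch; it assumes an Issuer named selfsigned-issuer already exists in the default namespace, like the one from the self-signed guide):

# pod that mounts a dynamically issued, in-memory certificate via the CSI driver
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: csi-demo
  namespace: default
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      volumeMounts:
        - name: tls
          mountPath: /tls
          readOnly: true
  volumes:
    - name: tls
      csi:
        driver: csi.cert-manager.io
        readOnly: true
        volumeAttributes:
          csi.cert-manager.io/issuer-name: selfsigned-issuer
          csi.cert-manager.io/dns-names: csi-demo.default.svc.cluster.local
EOF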

I admit my minikube cluster is not in a pristine state: I have some (supposedly) unrelated pods and StatefulSets running, and I had earlier followed some of the cert-manager guides on creating self-signed (Cluster)Issuers and Certificates. Conceivably those contribute to the error. Tomorrow I'll try burning down my whole k8s cluster and repeating the above steps on a fresh minikube cluster to see if that helps. I thought I'd post anyway, though, because I assume most people install the CSI driver on non-pristine clusters too.

@roberts-github-name (Author)

UPDATE: I deleted my minikube cluster, reran the above minikube start and helm commands, and did NOT get the errors. So installing on a pristine minikube cluster works fine.

Running kubectl logs <cert-manager-csi-driver DaemonSet pod name> (in my case the pod is cert-manager-csi-driver-5xt5d) now reports:

...
I0406 08:05:23.824909       1 filesystem.go:91] "Mounted new tmpfs" logger="storage" path="csi-data-dir/inmemfs"
...
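As far as I know CSI node drivers use Bidirectional mount propagation, so the tmpfs should also be visible from the node itself, which is a handy way to confirm the in-memory mount is really there (with the docker driver the node is a container named minikube):

docker exec minikube sh -c 'mount | grep inmemfs'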

Anyway, a workaround for this issue seems to be installing cert-manager and cert-manager-csi-driver together, as soon as possible after cluster creation; see the condensed sequence below.
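Condensed, the fresh-cluster sequence that worked for me end to end (same commands as above; I've dropped the addon and resource flags here for brevity, I don't think they matter):

minikube delete
minikube start --driver=docker --container-runtime=containerd --nodes=3
helm repo add jetstack https://charts.jetstack.io --force-update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --version v1.17.0 --set crds.enabled=true
helm install cert-manager-csi-driver jetstack/cert-manager-csi-driver \
  --namespace cert-manager --wait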

At this point my issue is fixed, but I'll keep it open for others' sake, in the hope of getting some insight into what can go wrong with the CSI driver's pod mounts and how to fix it. Recreating my k8s cluster from scratch was an option for me, but it wouldn't be for others who have long-lived production clusters.

(I could be persuaded to just close the issue, but in my experience closed issues get no feedback and get buried!)
