Summary
When deploying k8s-snapshots on an AWS EKS Kubernetes cluster, it cannot create snapshots because of missing permissions in AWS.
I know you suggest running the controller on the master nodes, but since AWS EKS is a managed Kubernetes service, I don't have access to the master nodes for custom workloads.
Therefore I have some questions:
- How does k8s-snapshots authenticate against the AWS API? (Where does it get the credentials?)
- Can I override the credentials somehow, as you can on Google Cloud? (See the sketch below for what I have in mind.)
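This is only a guess at how it could work, assuming k8s-snapshots picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY through boto's default credential chain; the Secret name and keys are made up for illustration:

# Excerpt of the container spec in the k8s-snapshots Deployment, env section only
env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: k8s-snapshots-aws        # hypothetical Secret holding the credentials
        key: access-key-id
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: k8s-snapshots-aws
        key: secret-access-key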
Steps to reproduce
- Deploy a PVC with a k8s-snapshots configuration annotation:
cat << EOF | kubectl apply -f -
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jrebel
  annotations:
    "backup.kubernetes.io/deltas": "PT1M PT5M PT1H"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ssd-general
EOF
- Deploy the k8s-snapshots Deployment and RBAC resources as stated in the README.
- Wait for the k8s-snapshots pod to be created.
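To confirm the controller actually starts, I watch the pod and follow its logs; the label selector and deployment name below are guesses based on the README manifests:

kubectl get pods -w -l app=k8s-snapshots
kubectl logs -f deploy/k8s-snapshots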
Expected result
After one minute, a new snapshot of the given EBS volume appears in the AWS console.
Actual result
No EBS snapshot is created. The k8s-snapshots pod status is first Error, then CrashLoopBackOff. Checking the pod's logs shows EC2ResponseError: 403 Forbidden, see:
https://gist.github.com/moepot/09ece52f86fe6724c63f2e17779ded2a
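If the controller relies on the node's instance profile, the worker node role on this EKS cluster simply has no EC2 snapshot permissions, which would explain the 403. As a workaround I could attach an inline policy like the one below to the node instance role; the exact set of actions k8s-snapshots needs is not documented, so this list is a guess, and the role name is a placeholder for my cluster's node role:

# Guessed EC2 permissions for k8s-snapshots, attached to the EKS worker node role
cat << EOF > k8s-snapshots-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:DescribeSnapshots",
        "ec2:DescribeVolumes",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}
EOF
aws iam put-role-policy \
  --role-name my-eks-worker-node-role \
  --policy-name k8s-snapshots-ec2 \
  --policy-document file://k8s-snapshots-policy.json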