Description
Is there an existing issue for this?
- I have searched the existing issues
Affected Resource(s)
- ec2.aws.upbound.io/v1beta1 - VPCPeeringConnection
- ec2.aws.upbound.io/v1beta1 - VPCPeeringConnectionAccepter
Resource MRs required to reproduce the bug
apiVersion: ec2.aws.upbound.io/v1beta1
kind: VPCPeeringConnection
metadata:
  name: test
spec:
  forProvider:
    autoAccept: false
    peerOwnerId: {redacted}
    peerRegion: us-east-1
    peerVpcId: {redacted}
    region: us-east-1
    vpcId: {redacted}
  providerConfigRef:
    name: test-a
---
apiVersion: ec2.aws.upbound.io/v1beta1
kind: VPCPeeringConnectionAccepter
metadata:
  name: test
spec:
  forProvider:
    autoAccept: true
    region: us-east-1
    vpcPeeringConnectionIdRef:
      name: test
  providerConfigRef:
    name: test-b
---
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: test-a
spec:
  credentials:
    source: IRSA
    assumeRoleChain:
      - roleARN: arn:aws:iam::{redacted}:role/crossplane-role-a
---
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: test-b
spec:
  credentials:
    source: IRSA
    assumeRoleChain:
      - roleARN: arn:aws:iam::{redacted}:role/crossplane-role-b
Steps to Reproduce
- Install v1.10.0 of the upbound/provider-aws-ec2 provider.
- Apply both managed resources and their corresponding ProviderConfigs.
- Wait for the MRs to be processed, then check their statuses (see the sketch below).
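For concreteness, a minimal reproduction sequence; the manifest file name is arbitrary, and the resource names come from the YAML above:

```bash
# Apply the MRs and ProviderConfigs shown above (saved here as peering.yaml)
kubectl apply -f peering.yaml

# Watch the managed resources reconcile
kubectl get vpcpeeringconnection.ec2.aws.upbound.io test
kubectl get vpcpeeringconnectionaccepter.ec2.aws.upbound.io test
```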
What happened?
Both MRs fail to reconcile, reporting the error message below in their status conditions.
Relevant Error Output Snippet
connect failed: cannot initialize the Terraform plugin SDK async external
client: cannot get terraform setup: cache manager failure: cannot calculate
the hash for the credentials file: token file name cannot be empty
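The message surfaces in the MRs' status conditions; this is roughly how I pull it out (the jsonpath expression is mine, not something prescribed by the provider):

```bash
kubectl get vpcpeeringconnection.ec2.aws.upbound.io test \
  -o jsonpath='{.status.conditions[?(@.type=="Synced")].message}'
```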
Crossplane Version
v1.15.0
Provider Version
v1.10.0
Kubernetes Version
v1.27.14
Kubernetes Distribution
Home Rolled (kubeadm)
Additional Info
- We use kube2iam to allow Pods to assume AWS roles via the iam.amazonaws.com/role annotation.
- We set the kube2iam annotation on the upbound/provider-aws-ec2 Pod via the following Provider + ControllerConfig objects (a sketch for verifying the annotation is at the end of this section):

apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: upbound-aws-ec2
spec:
  package: xpkg.upbound.io/upbound/provider-aws-ec2:v1.1.0
  controllerConfigRef:
    name: upbound-aws-ec2
---
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: upbound-aws-ec2
spec:
  metadata:
    annotations:
      iam.amazonaws.com/role: arn:aws:iam::{redacted}:role/crossplane-base
  env:
    # AWS region required to resolve service endpoints
    - name: AWS_REGION
      value: "us-east-1"
  args:
    - --debug
- The above config works in v1.1.0: we downgraded, kept all other configuration the same, and the resources reconciled successfully. So I suspect some behavior change between v1.1.0 and v1.10.0 is the cause.
- I'm not sure if this is related, but the EC2 instance hosting the Kubernetes node on which the provider Pod runs uses IMDSv2 (a sketch for checking this is also below).
- This is potentially related to #1252, but we are not using EKS IRSA credentials; we are using kube2iam-provided credentials.
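To verify the kube2iam annotation mentioned above actually lands on the provider Pod, something like this works; the crossplane-system namespace and the label selector are assumptions based on a default Crossplane install:

```bash
# Print the kube2iam role annotation on the provider Pod, if present
# (namespace and label selector are assumptions; adjust for your cluster)
kubectl -n crossplane-system get pods \
  -l pkg.crossplane.io/provider=provider-aws-ec2 \
  -o jsonpath='{.items[*].metadata.annotations.iam\.amazonaws\.com/role}'
```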
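As for the IMDSv2 point, the instance metadata mode can be confirmed with a describe-instances call; the instance ID below is a placeholder:

```bash
# HttpTokens "required" means IMDSv2-only; "optional" still allows IMDSv1
# (i-0123456789abcdef0 is a placeholder instance ID)
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].MetadataOptions.HttpTokens'
```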