Releases: SPHTech-Platform/terraform-aws-eks
v0.12.8
v0.12.7
What's Changed
Full Changelog: v0.12.6...v0.12.7
v0.13.0-alpha4
Testing an update to the extra config variable name
v0.13.0-alpha3
Testing fluent-bit custom config merge
v0.12.6
What's Changed
- Add k8s registry pull through cache by @niroz89 in #59
- Add AMI family option for setting nodetemplate by @thepoppingone in #60
Full Changelog: v0.12.5...v0.12.6
v0.12.5
What's Changed
- Fix index on wrong level of module by @thepoppingone in #58
Tested upgrading the Essentials submodule without the state migration: the Helm chart errors out due to the change in the IRSA role and its service account. Pods must be deleted manually so they reload the new service account role ARN (see the sketch below).
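A minimal sketch of the manual pod deletion, assuming the affected release runs in `kube-system` and carries the label `app.kubernetes.io/name=aws-cluster-autoscaler` (both the namespace and the label are hypothetical and depend on your Helm values):

```sh
# Delete the pods that mount the old service account so the
# replacements pick up the updated IRSA role ARN.
kubectl delete pod -n kube-system -l app.kubernetes.io/name=aws-cluster-autoscaler
```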
Full Changelog: v0.12.4...v0.12.5
v0.12.4
What's Changed
- Fix autoscaler ARN issue by @thepoppingone in #57
Bug fix release for the Helm chart error
Full Changelog: v0.12.3...v0.12.4
v0.12.3
What's Changed
- Karpenter submodule by @thepoppingone in #52
- Cluster Autoscaler is part of the Essentials submodule, while Karpenter exists as its own submodule
State Migration (Optional but recommended)
- As the autoscaler resources in the EKS Essentials submodule are now optionally created (but still created by default), upgrading to v0.12.3 changes the state address `helm_release.cluster_autoscaler` to `helm_release.cluster_autoscaler[0]`; likewise, `module.cluster_autoscaler_irsa_role` becomes `module.cluster_autoscaler_irsa_role[0]`.
- If you do not do the migration, the resources will be recreated, which might cause some errors, but on re-apply they should go away (untested). A sketch of the migration commands follows this list.
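A minimal sketch of the state moves, assuming the Essentials submodule is instantiated at the address `module.essentials` (adjust both paths to match your configuration):

```sh
# Move the existing resources to their new indexed addresses so
# Terraform does not destroy and recreate them on the next apply.
terraform state mv 'module.essentials.helm_release.cluster_autoscaler' 'module.essentials.helm_release.cluster_autoscaler[0]'
terraform state mv 'module.essentials.module.cluster_autoscaler_irsa_role' 'module.essentials.module.cluster_autoscaler_irsa_role[0]'
```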
Additional fields if using Karpenter submodule
- As the Karpenter submodule uses the fargate-profile submodule to deploy Karpenter on Fargate, the Fargate pod execution role ARNs must be added to the aws-auth ConfigMap for the pods to start properly
Add the following to the `locals` section when installing the EKS main module:

```hcl
locals {
  autoscaling_mode = "karpenter"

  aws_auth_fargate_profile_pod_execution_role_arns = local.autoscaling_mode == "karpenter" ? concat(values(module.karpenter.fargate_profile_pod_execution_role_arn)) : []

  additional_role_mapping = local.autoscaling_mode == "karpenter" ? [
    {
      rolearn = module.eks.worker_iam_role_arn
      groups = [
        "system:bootstrappers",
        "system:nodes",
      ]
      username = "system:node:{{EC2PrivateDNSName}}"
    }
  ] : []
}
```
And update the `role_mapping` attribute to the following:

```hcl
role_mapping = concat([
  for role in local.eks_master_roles :
  {
    rolearn  = role.arn
    groups   = ["system:masters"]
    username = role.user
  }
], local.additional_role_mapping)
```
Also set `aws_auth_fargate_profile_pod_execution_role_arns`; note that `values()` should be given the map of fargate-profile ARNs if any fargate-profiles are used in the existing cluster (the empty map `{}` below is a placeholder):

```hcl
aws_auth_fargate_profile_pod_execution_role_arns = concat(values({}), local.aws_auth_fargate_profile_pod_execution_role_arns)
```
Karpenter Submodule
Lastly, install the Karpenter submodule:

```hcl
module "karpenter" {
  source  = "SPHTech-Platform/eks/aws//modules/karpenter"
  version = "~> 0.12.0"

  karpenter_chart_version = "v0.27.5"

  cluster_name      = local.cluster_name
  cluster_endpoint  = data.aws_eks_cluster.this.endpoint
  oidc_provider_arn = module.eks.oidc_provider_arn

  worker_iam_role_arn = module.eks.worker_iam_role_arn
  autoscaling_mode    = local.autoscaling_mode

  # Required for Fargate profile
  subnet_ids = local.app_subnets

  # Add the provisioners and nodetemplates after the CRDs are installed
  # karpenter_provisioners  = local.karpenter_provisioners
  # karpenter_nodetemplates = local.karpenter_nodetemplates
}
```
More examples will be added in the next release.
Full Changelog: v0.12.2...v0.12.3
v0.12.2
What's Changed
- Test fluentbit config for fargate-logging by @thepoppingone in #53
Adding fluentbit config for fargate profiles
Full Changelog: v0.12.1...v0.12.2
v0.13.0-alpha2
Testing fluent-bit parser changes