[Bug]: Updating an LKE-E cluster's version results in replacing node pools #1937

@endocrimes

Description

Terraform Version

v1.12.1

Linode Provider Version

v2.41.1

Affected Terraform Resources

linode_lke_node_pool

Terraform Config Files

resource "linode_lke_node_pool" "additional" {
  cluster_id = linode_lke_cluster.lke.id
  k8s_version = linode_lke_cluster.lke.k8s_version
  update_strategy = "rolling_update"

  tags = concat(
    [local.external_pool_tag],
    [local.is_production ? "cluster_profile:production" : "cluster_profile:development"],
    [each.value.size.min == each.value.size.max ? "autoscaling:disabled" : "autoscaling:enabled"],
    var.tags
  )

  type = "g6-dedicated-32"
  node_count = 3
}
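
As a possible interim workaround until the provider handles this in place (an untested sketch; behavior may vary by provider version), ignoring drift on `k8s_version` in the pool resource should suppress the forced replacement, at the cost of leaving pool version upgrades to be triggered outside Terraform:

```hcl
resource "linode_lke_node_pool" "additional" {
  cluster_id      = linode_lke_cluster.lke.id
  k8s_version     = linode_lke_cluster.lke.k8s_version
  update_strategy = "rolling_update"

  # Workaround sketch: stop Terraform from planning a replacement when
  # the cluster's k8s_version changes. The pool would then be upgraded
  # by the LKE-E control plane per update_strategy instead.
  lifecycle {
    ignore_changes = [k8s_version]
  }

  type       = "g6-dedicated-32"
  node_count = 3
}
```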

Debug Output

No response

Panic Output

No response

Expected Behavior

When updating an LKE-E cluster's `k8s_version`, the node pool should be updated in place, using the configured `update_strategy` (`rolling_update`).

Actual Behavior

The resource was planned as a replacement, causing Terraform to delete the existing node pool and recreate it from scratch.

Steps to Reproduce

  1. Create an LKE-E cluster and node pool with version 1.38.1+lke3 using Terraform
  2. Update the cluster to version 1.38.1+lke5 with Terraform
  3. Observe that the change to `k8s_version` forces a replacement of the node pool
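
The version bump in step 2 amounts to a one-line change in the cluster resource (a sketch; the `label`, `region`, and `tier` values here are hypothetical placeholders, not from the original report):

```hcl
resource "linode_lke_cluster" "lke" {
  label  = "lke-e-example" # hypothetical
  region = "us-ord"        # hypothetical
  tier   = "enterprise"

  # Was "1.38.1+lke3"; this bump flows into linode_lke_node_pool via its
  # k8s_version reference and currently forces pool replacement.
  k8s_version = "1.38.1+lke5"
}
```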

Metadata

Assignees

No one assigned

    Labels

    bug (issues that report a bug), keep (prevent GitHub from closing issue)
