This repository was archived by the owner on Sep 18, 2020. It is now read-only.

Container Linux update operator and autoscaling groups generate update loop #188

@mhardege

Description

I have a question about the container-linux-update-operator when using Kubernetes on AWS (EKS). In our environment we provision the EKS components with Terraform and the Kops module, which, among other things, assigns the current AMI images to the launch configurations. We use the latest CoreOS release for this.

Our problem: when a new AMI for a CoreOS release comes out but has not yet been applied via Terraform (which does not happen automatically in our setup), an update loop arises. Our autoscaler removes and adds nodes as needed, but always adds them with the old AMI. CoreOS detects the outdated release on each new node, updates the passive partition, and the reboot controller then drains and reboots the node. If the autoscaler is very active, this turns into a permanent, useless update cycle.
In this context, is there a proven best practice for CoreOS updates when using the Container Linux Update Operator together with autoscaling groups?
It is also not clear to us: if we were to automate `terraform apply`, would the container-linux-update-operator still have any reason to exist?
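For illustration, one way to keep newly scaled nodes from booting an outdated image is to resolve the AMI at plan time instead of hardcoding it. The following is a minimal Terraform sketch, not a recommendation from this project; the owner ID, name filter, and resource names are assumptions that should be verified for your region and channel:

```hcl
# Hypothetical sketch: look up the newest CoreOS Container Linux AMI
# so that each terraform apply pins the launch configuration to it,
# and freshly scaled nodes need no immediate in-place update.
data "aws_ami" "coreos" {
  most_recent = true
  owners      = ["595879546273"] # assumed CoreOS AWS account ID; verify

  filter {
    name   = "name"
    values = ["CoreOS-stable-*"] # assumed naming scheme for the stable channel
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

resource "aws_launch_configuration" "nodes" {
  # ... other settings elided ...
  image_id = data.aws_ami.coreos.id
}
```

Note that this only helps if `terraform apply` runs often enough; between the AMI release and the next apply, the operator would still update and reboot newly added nodes.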
