Cluster Status remains in 'waiting' state for long periods of time after adding a fresh gitRepo #2848

@slickwarren

Description

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

When adding a new gitRepo targeting a downstream cluster, the Fleet -> Clusters page takes up to 15 minutes to reflect the status of the resources on that cluster.

Expected Behavior

I would expect the cluster resources shown on the Fleet page to be in sync with the actual resources within roughly one minute.

Steps To Reproduce

  1. deploy a standard downstream cluster (e.g. k3s using the Linode provider)
  2. wait for the cluster to reach an active state
  3. deploy a new Fleet gitRepo (I used Fleet's example repo in the linked video)
  4. check the fleet -> clusters tab, note that the resources aren't ready
  5. check the rancher -> cluster page and note that the resources are actually active
  6. double-check in fleet -> clusters that the resources are still not ready
  7. force an update on fleet -> clusters; this forces a check-in, which resolves the issue
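For step 3, a minimal GitRepo manifest of the kind used here might look like the following sketch. The name, namespace, branch, and paths are illustrative assumptions; the repo URL points at Fleet's public examples repository mentioned above:

```yaml
# Hypothetical GitRepo manifest for reproducing the issue.
# Name, targets, and paths are illustrative, not from the report.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-repo
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  branch: master
  paths:
    - simple
  targets:
    - clusterSelector: {}   # match all downstream clusters
```

The forced update in step 7 appears to correspond to incrementing `spec.forceSyncGeneration` on the GitRepo, which triggers an immediate re-sync and check-in.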

Environment

- Architecture: amd64
- Fleet Version: v0.10.2-rc.4
- Cluster:
  - Provider: any (tested with linode, aws, and custom clusters with k3s or rke2)
  - Options:
  - Kubernetes Version: any (tested with 1.30.3, 1.29.x, 1.28.x)

Logs

No response

Anything else?

fleet-cluster-not-updating.webm
