
Conversation

@alexandreLamarre (Contributor) commented May 16, 2025

Used to scope management of promfed HelmChart CRs to a specific helm-controller deployment.

Related Issue: Part of #148

Care was taken to avoid breaking changes to the default behaviour or the current values.yaml structure.

Testing Considerations

To enable the feature:

  • Using values.yaml:

```yaml
helmProjectOperator:
  helmController:
    enabled: false
helmController:
  deployment:
    enabled: true
    replicas: 1
    image:
      repository: rancher/helm-controller
      tag: v0.16.10
      pullPolicy: IfNotPresent
```
  • or via the CLI: `--set helmController.deployment.enabled=true --set helmProjectOperator.helmController.enabled=false`

Test Scenarios

TODO: Test Setup & Expected behaviour: 🔴 🔴🔴

@mallardduck (Member) left a comment:

Overall I'm digging the direction this is going, just a few comments with some ideas. Excited to give this a test when that's ready.

```yaml
replicas: 1
image:
  repository: rancher/helm-controller
  tag: v0.16.10
```
@mallardduck (Member):

👍 - I think it makes sense for our chart default rule to essentially be the newest one. That makes sense for main IMO for sure, but release branches might get a slightly tweaked rule to pick a max version?

Whatever we pick as the default version here, RKE2/k3s users need to either: a) always disable it and use the distro-provided one, or b) match the deployed version to their exact k8s version based on the RKE2/k3s release page.

With that in mind, I wonder if it makes sense for us to follow the versions that RKE2/k3s use when targeting Rancher minor versions? I.e. let the version that RKE2/k3s ship for the supported k8s ranges inform our general defaults (for all k8s distros)?


So for example, Rancher 2.9 (even if we don't port this there, it's a good example):

  • k8s range: 1.27-1.30
  • Highest helm-controller:

Technically, if we go with 0.16 like k8s 1.30, that's the most up to date we can be, but the flip side is that for older k8s versions the 0.16 branch is untested and may not work well. 🤔

I could be overthinking this aspect, but maybe it's worth looking over the helm-controller changelog to see if this is relevant. I'll follow up on that later.

@alexandreLamarre (Contributor, Author) commented May 20, 2025:

I think, to resolve which tag to use by default (rough sketch after this list):

  1. we could tweak build.yaml to inject specific tags of helm-controller when we construct the chart values.yaml
  2. generally keep the helm-controller tags in line with what RKE2/k3s ship, for generic k8s distros / Rancher versions
  3. define a Renovate rule to update that yaml tag automatically if one doesn't already exist (perhaps cross-team, so that when it updates k3s/rke2 with the matching helm-controller tag, we also get that tag in our build.yaml)
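
For illustration only, such a build.yaml entry with a Renovate comment hint might look something like this; the key layout is an assumption, not the actual build config:

```yaml
# Hypothetical build.yaml fragment; key names are illustrative.
# A Renovate custom regex manager could match the comment below and bump
# the tag whenever k3s-io/helm-controller publishes a matching release.
helmController:
  # renovate: datasource=github-releases depName=k3s-io/helm-controller
  tag: v0.16.10
```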

@mallardduck (Member):

I guess it looks like the divide between 0.16 and 0.15 is mainly about the Wrangler V2 bump. 🤔 Maybe it's cool to just use that on all versions; I'm not sure of the implications of using a different Wrangler version here versus in the Rancher minor.

https://github.com/k3s-io/helm-controller/releases/tag/v0.16.0


```yaml
helmController:
  deployment:
    enabled: false
```
@alexandreLamarre (Contributor, Author):

I decided to keep the enabled flag nested under helmController.deployment to make a clear distinction between helmProjectOperator.helmController.enabled and helmController.deployment.enabled, but I can always change it to helmController.enabled.
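
For illustration, the two flags as I understand them from this thread (the annotations are my reading, not documented behaviour):

```yaml
helmProjectOperator:
  helmController:
    enabled: true    # the helm-controller embedded in the operator binary (PromFed-compiled)
helmController:
  deployment:
    enabled: false   # a separately vendored helm-controller Deployment managed by this chart
```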

Commits:

  • … in order to scope management of promfed HelmChart CRs
    Signed-off-by: Alexandre Lamarre <alexandre.lamarre@suse.com>
  • …vel value
    to control the deployment of a vendored helm-controller
    Signed-off-by: Alexandre Lamarre <alexandre.lamarre@suse.com>
  • …rtupDurationSeconds
    Signed-off-by: Alexandre Lamarre <alexandre.lamarre@suse.com>
@mallardduck force-pushed the feature/vendor-helm-controller branch from 8774910 to 50becd9 on October 21, 2025 15:20
```yaml
- --debug
- --debug-level={{ .Values.helmProjectOperator.debugLevel }}
{{- end }}
{{- if and (not .Values.helmProjectOperator.helmController.enabled) (.Values.helmController.deployment.enabled) }}
```
@mallardduck (Member):

I think that for a full fix we want this line, but with or instead of and.

The PromFed helm-controller should be disabled if .Values.helmController.deployment.enabled is set to true, since it will not need the PromFed-compiled version of helm-controller.
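
A minimal sketch of that suggestion, assuming the surrounding template stays as quoted above:

```yaml
{{- if or (not .Values.helmProjectOperator.helmController.enabled) (.Values.helmController.deployment.enabled) }}
```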

@mallardduck force-pushed the feature/vendor-helm-controller branch from f5f865e to 8da3706 on October 21, 2025 21:06
@mallardduck (Member) commented:

For testing (a values sketch for the last scenario follows this list):

  • Default (binary-embedded Helm Controller): change nothing.
  • Global Helm Controller: disable both PromFed Helm Controller options.
    • This is the default for RKE2/k3s, or clusters with a default Helm Controller already deployed.
  • PromFed chart-based external Helm Controller: set helmProjectOperator.helmController.enabled: false and helmController.deployment.enabled: true.
  • Fully external custom Helm Controller: set helmProjectOperator.helmController.enabled: false, helmController.deployment.enabled: false, and externalHelmControllerName to the name of your desired helm-controller instance.
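
For example, the fully external scenario might look like this in values.yaml; the top-level placement of externalHelmControllerName and the instance name are assumptions:

```yaml
helmProjectOperator:
  helmController:
    enabled: false   # don't run the operator's embedded helm-controller
helmController:
  deployment:
    enabled: false   # don't deploy the chart-vendored helm-controller
# Assumed placement; the name of the externally managed helm-controller instance.
externalHelmControllerName: my-helm-controller
```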
