14 changes: 10 additions & 4 deletions .github/workflows/e2e/scripts/install-monitoring.sh
@@ -21,16 +21,19 @@ helm repo add ${HELM_REPO} $HELM_REPO_URL
helm repo update

echo "Create required \`cattle-fleet-system\` namespace"
kubectl create namespace cattle-fleet-system
kubectl create namespace cattle-fleet-system 2>/dev/null || true

echo "Installing rancher monitoring crd with :\n"
echo "Installing rancher monitoring crd with :"

helm search repo ${HELM_REPO}/rancher-monitoring-crd --versions --max-col-width=0 | head -n 2

helm upgrade --install --create-namespace -n cattle-monitoring-system ${RANCHER_MONITORING_VERSION_HELM_ARGS} rancher-monitoring-crd ${HELM_REPO}/rancher-monitoring-crd

echo "Checking installed crd version info:"
helm list -n cattle-monitoring-system

if [[ "${E2E_CI}" == "true" ]]; then
e2e_args="--set grafana.resources=null --set prometheus.prometheusSpec.resources=null --set alertmanager.alertmanagerSpec.resources=null"
e2e_args="--set grafana.resources=null --set prometheus.prometheusSpec.resources=null --set alertmanager.alertmanagerSpec.resources=null --set prometheus.prometheusSpec.maximumStartupDurationSeconds=3600"
fi

case "${KUBERNETES_DISTRIBUTION_TYPE}" in
@@ -48,9 +51,12 @@ case "${KUBERNETES_DISTRIBUTION_TYPE}" in
exit 1
esac

echo "Installing rancher monitoring with :\n"
echo "Installing rancher monitoring with :"

helm search repo ${HELM_REPO}/rancher-monitoring --versions --max-col-width=0 | head -n 2
helm upgrade --install --create-namespace -n cattle-monitoring-system rancher-monitoring ${cluster_args} ${e2e_args} ${RANCHER_HELM_ARGS} ${HELM_REPO}/rancher-monitoring

echo "Checking installed rancher monitoring versions :"
helm list -n cattle-monitoring-system

echo "PASS: Rancher Monitoring has been installed"
21 changes: 20 additions & 1 deletion charts/prometheus-federator/README.md
@@ -116,5 +116,24 @@ By default, the `rancher-project-monitoring` (the underlying chart deployed by P
|`helmProjectOperator.releaseRoleBindings.clusterRoleRefs.<admin\|edit\|view>`| ClusterRoles to reference to discover subjects to create RoleBindings for in the Project Release Namespace for all corresponding Project Release Roles. See RBAC above for more information |
|`helmProjectOperator.hardenedNamespaces.enabled`| Whether to automatically patch the default ServiceAccount with `automountServiceAccountToken: false` and create a default NetworkPolicy in all managed namespaces in the cluster; the default values ensure that the creation of the namespace does not break a CIS 1.16 hardened scan |
|`helmProjectOperator.hardenedNamespaces.configuration`| The configuration to be supplied to the default ServiceAccount or auto-generated NetworkPolicy on managing a namespace |
|`helmProjectOperator.helmController.enabled`| Whether to enable an embedded k3s-io/helm-controller instance within the Helm Project Operator. Should be disabled for RKE2/K3s clusters before v1.23.14 / v1.24.8 / v1.25.4 since RKE2/K3s clusters already run Helm Controller at a cluster-wide level to manage internal Kubernetes components |
|`helmProjectOperator.helmController.enabled`| Whether to enable an embedded in-process k3s-io/helm-controller instance within the Helm Project Operator. Should be disabled for RKE2/K3s clusters before v1.23.14 / v1.24.8 / v1.25.4 since RKE2/K3s clusters already run Helm Controller at a cluster-wide level to manage internal Kubernetes components |
|`helmProjectOperator.helmLocker.enabled`| Whether to enable an embedded rancher/helm-locker instance within the Helm Project Operator. |

### Advanced Vendored Helm Controller Configuration

Prometheus Federator's underlying Helm Project Operator allows for running an embedded helm-controller in environments where it may not be readily available (usually Kubernetes distributions that are not `k3s` or `RKE2`).

Another use case for deploying a vendored helm-controller is to scope the management of the `HelmChart` CRs owned by Prometheus Federator to a dedicated helm-controller, rather than, for example, the cluster-wide one provided by `k3s` or `RKE2`.

In previous versions, the vendored helm-controller was only available at a single compile-time version, run in-process via `helmProjectOperator.helmController.enabled`, which could cause CRD conflicts.

Setting `helmController.deployment.enabled=true` instead runs the vendored helm-controller as a Deployment. The Deployment lets users pin a specific controller version (and hence pin the specific CRD versions the operator relies on) to prevent conflicts, and scale it independently of prometheus-federator; see the sketch after the table below. This vendored helm-controller only targets the `HelmChart` CRs managed by Prometheus Federator.

|Value|Configuration|
|---|---------------------------|
| `helmController.deployment.enabled` | When `helmProjectOperator.helmController.enabled` is `false` and this flag is `true`, runs the vendored helm-controller as a Deployment instead of in-process alongside prometheus-federator |
| `helmController.deployment.replicas` | Scales the number of replicas for the helm-controller Deployment |
| `helmController.deployment.image.registry` | Overrides the registry from which the helm-controller container image is pulled |
| `helmController.deployment.image.repository` | Overrides the repository from which the helm-controller container image is pulled |
| `helmController.deployment.image.tag` | Overrides the tag of the helm-controller container image |
| `helmController.deployment.image.pullPolicy` | Overrides the image pull policy for the helm-controller container |
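As referenced above, a minimal sketch of enabling the vendored helm-controller Deployment at install time (the release name, namespace, and chart path are placeholder assumptions, not values mandated by the chart):

```bash
# Hypothetical install that disables the in-process controller and
# enables the vendored helm-controller Deployment instead.
helm upgrade --install prometheus-federator ./charts/prometheus-federator \
  --namespace cattle-monitoring-system --create-namespace \
  --set helmProjectOperator.helmController.enabled=false \
  --set helmController.deployment.enabled=true
```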
7 changes: 7 additions & 0 deletions charts/prometheus-federator/templates/_helpers.tpl
@@ -5,6 +5,13 @@
{{- end -}}
{{- end -}}

{{- define "helm-controller.imageRegistry" -}}
{{- if and .Values.image .Values.image.registry }}{{- printf "%s/" .Values.image.registry -}}
{{- else if .Values.helmController.deployment.image.registry }}{{- printf "%s/" .Values.helmController.deployment.image.registry -}}
{{- else }}{{ template "system_default_registry" . }}
{{- end }}
{{- end }}
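To sanity-check which registry prefix this helper resolves, the chart can be rendered locally; a sketch, assuming a local checkout of the chart (the release name and registry override are illustrative):

```bash
# Render the chart and inspect the resulting helm-controller image reference;
# registry.example.com is a placeholder override.
helm template prometheus-federator ./charts/prometheus-federator \
  --set helmProjectOperator.helmController.enabled=false \
  --set helmController.deployment.enabled=true \
  --set helmController.deployment.image.registry=registry.example.com \
  | grep -A1 'name: helm-controller'
```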

{{/* Define the image registry to use; either values, or systemdefault if set, or nothing */}}
{{- define "prometheus-federator.imageRegistry" -}}
{{- if and .Values.image .Values.image.registry }}{{- printf "%s/" .Values.image.registry -}}
4 changes: 2 additions & 2 deletions charts/prometheus-federator/templates/deployment.yaml
@@ -132,11 +132,11 @@ spec:
{{- if .Values.helmProjectOperator.securityContext }}
securityContext: {{ toYaml .Values.helmProjectOperator.securityContext | nindent 8 }}
{{- end }}
nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
nodeSelector: {{ include "linux-node-selector" . | nindent 8 }}
{{- if .Values.helmProjectOperator.nodeSelector }}
{{- toYaml .Values.helmProjectOperator.nodeSelector | nindent 8 }}
{{- end }}
tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
tolerations: {{ include "linux-node-tolerations" . | nindent 8 }}
{{- if .Values.helmProjectOperator.tolerations }}
{{- toYaml .Values.helmProjectOperator.tolerations | nindent 8 }}
{{- end }}
38 changes: 38 additions & 0 deletions charts/prometheus-federator/templates/helmcontroller.yaml
@@ -0,0 +1,38 @@
{{- if and (not .Values.helmProjectOperator.helmController.enabled) (.Values.helmController.deployment.enabled) }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-federator-helm-controller
  namespace: {{ template "prometheus-federator.namespace" . }}
  labels:
    app: prometheus-federator-helm-controller
spec:
  replicas: {{ .Values.helmController.deployment.replicas }}
  selector:
    matchLabels:
      app: prometheus-federator-helm-controller
  template:
    metadata:
      labels:
        app: prometheus-federator-helm-controller
    spec:
      # must match the ServiceAccount granted cluster-admin-level permissions in rbac.yaml
      serviceAccountName: prometheus-federator-helm-controller
      containers:
        - name: helm-controller
          image: {{ template "helm-controller.imageRegistry" . }}{{ .Values.helmController.deployment.image.repository }}:{{ .Values.helmController.deployment.image.tag }}
          # wires up the documented image.pullPolicy value from values.yaml
          imagePullPolicy: {{ .Values.helmController.deployment.image.pullPolicy }}
          command: ["helm-controller"]
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: JOB_CLUSTER_ROLE
              value: prometheus-federator-helm-controller
            # this sets the `managedBy` annotation; must match the one used by prometheus-federator
            - name: CONTROLLER_NAME
              value: {{ template "prometheus-federator.name" . }}
            # this sets the `systemNamespace` the helm-controller watches; should match prometheus-federator's
            - name: NAMESPACE
              value: {{ template "prometheus-federator.namespace" . }}
{{- end }}
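Once applied, the Deployment can be checked with kubectl; a sketch, assuming the chart was released into `cattle-monitoring-system` (substitute whatever `prometheus-federator.namespace` resolves to):

```bash
# Verify the vendored helm-controller came up; the namespace is an assumption.
kubectl -n cattle-monitoring-system rollout status deployment/prometheus-federator-helm-controller
kubectl -n cattle-monitoring-system get pods -l app=prometheus-federator-helm-controller
```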
34 changes: 34 additions & 0 deletions charts/prometheus-federator/templates/rbac.yaml
@@ -30,3 +30,37 @@ imagePullSecrets: {{ toYaml .Values.global.imagePullSecrets | nindent 2 }}
#
# As a result, this ClusterRoleBinding will be left as a work-in-progress until changes are made in k3s-io/helm-controller to allow us to grant
# only scoped down permissions to the Job that is deployed.
{{- if and (not .Values.helmProjectOperator.helmController.enabled) (.Values.helmController.deployment.enabled) }}
# these authorizations have to match the cluster-admin service account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-federator-helm-controller
rules:
  - apiGroups:
      - "*"
    resources:
      - "*"
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-federator-helm-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-federator-helm-controller
subjects:
  - kind: ServiceAccount
    name: prometheus-federator-helm-controller
    namespace: {{ template "prometheus-federator.namespace" . }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-federator-helm-controller
  namespace: {{ template "prometheus-federator.namespace" . }}
{{- end }}
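Since the binding intentionally grants cluster-admin-equivalent access, it can be spot-checked via impersonation; a sketch, with the namespace as a placeholder:

```bash
# Confirm the ServiceAccount holds the broad permissions described above;
# replace the namespace with the actual release namespace.
kubectl auth can-i '*' '*' \
  --as=system:serviceaccount:cattle-monitoring-system:prometheus-federator-helm-controller
```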
9 changes: 9 additions & 0 deletions charts/prometheus-federator/values.yaml
@@ -261,3 +261,12 @@ namespaceRegistration:
## namespaceRegistration.retryWaitMilliseconds sets the time between each retry performed during the Namespace Controller initialization to make sure all Project Registration Namespaces are tracked
## If the pod is failing to initialize due to a timeout in registering namespaces, tweaking this setting and NamespaceRegistrationRetryMax should fix it
retryWaitMilliseconds: 5000

helmController:
  deployment:
    enabled: false
> **Contributor (author) comment:** I decided to keep the enabled flag nested in `helmController.deployment` to make a clear distinction between `helmProjectOperator.helmController.enabled` and `helmController.deployment.enabled`, but can always change it to be `helmController.enabled`.

    replicas: 1
    image:
      repository: rancher/helm-controller
      tag: v0.16.10
      pullPolicy: IfNotPresent
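Per the README's note on pinning, the controller image (and therefore the CRD versions it ships) can be pinned or scaled independently at upgrade time; a sketch with illustrative values (the tag shown is simply the chart default):

```bash
# Hypothetical upgrade pinning the vendored helm-controller and scaling it out.
helm upgrade prometheus-federator ./charts/prometheus-federator --reuse-values \
  --set helmController.deployment.image.tag=v0.16.10 \
  --set helmController.deployment.replicas=2
```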
1 change: 1 addition & 0 deletions internal/helm-project-operator/controllers/controllers.go
@@ -190,6 +190,7 @@ func Register(ctx context.Context, systemNamespace string, cfg clientcmd.ClientC
	logrus.Infof("Registering embedded Helm Controller...")
	chart.Register(ctx,
		systemNamespace,
		// this corresponds to the managedBy annotation for helm charts
		opts.ControllerName,
		// this has to be cluster-admin for k3s reasons
		"cluster-admin",