Can't use multiple targets on helm chart #248

Open
johnitvn opened this issue Mar 7, 2025 · 4 comments

@johnitvn

johnitvn commented Mar 7, 2025

I installed it using the command below:

helm upgrade --install kube-system-autoscaling cluster-proportional-autoscaler/cluster-proportional-autoscaler  \
--labels=catalog.cattle.io/cluster-repo-name=cluster-proportional-autoscaler  \
--namespace kube-system \
--create-namespace \
--wait \
-f - <<EOF
image:
  tag: v1.9.0
config:
  ladder:
    nodesToReplicas:
      - [ 1, 1 ]
      - [ 2, 2 ]
      - [ 3, 2 ]
      - [ 7, 3 ]
      - [ 9, 5 ]
    includeUnschedulableNodes: false
options:
  target: "deployment/coredns,deployment/metrics-server"
resources:
  requests:
    cpu: "50m"
    memory: "12Mi"
  limits:
    cpu: "50m"
    memory: "24Mi"
serviceAccount:
  name: kube-system-autoscaling
EOF

The deployment output (kubectl get deploy -n kube-system kube-system-autoscaling-cluster-proportional-autoscaler -o yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: kube-system-autoscaling
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2025-03-07T21:27:55Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: kube-system-autoscaling
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: cluster-proportional-autoscaler
    app.kubernetes.io/version: 1.8.6
    helm.sh/chart: cluster-proportional-autoscaler-1.1.0
  name: kube-system-autoscaling-cluster-proportional-autoscaler
  namespace: kube-system
  resourceVersion: "31834"
  uid: 209725cd-0345-4000-b44a-688eb91d3c27
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: kube-system-autoscaling
      app.kubernetes.io/name: cluster-proportional-autoscaler
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: kube-system-autoscaling
        app.kubernetes.io/name: cluster-proportional-autoscaler
    spec:
      containers:
      - args:
        - --configmap=kube-system-autoscaling-cluster-proportional-autoscaler
        - --logtostderr=true
        - --namespace=kube-system
        - --target=deployment/coredns,deployment/metrics-server
        - --v=0
        - --max-sync-failures=0
        image: registry.k8s.io/cpa/cluster-proportional-autoscaler:v1.9.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: cluster-proportional-autoscaler
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 50m
            memory: 24Mi
          requests:
            cpu: 50m
            memory: 12Mi
        securityContext: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kube-system-autoscaling
      serviceAccountName: kube-system-autoscaling
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2025-03-07T21:27:55Z"
    lastUpdateTime: "2025-03-07T21:27:55Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2025-03-07T21:27:55Z"
    lastUpdateTime: "2025-03-07T21:27:55Z"
    message: ReplicaSet "kube-system-autoscaling-cluster-proportional-autoscaler-7848688747"
      is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 1
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1

And the pod logs:

I0307 21:28:30.752826       1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/coredns,deployment/metrics-server
E0307 21:28:31.155552       1 autoscaler.go:52] target format error: deployment/coredns,deployment/metrics-server

I have already tried versions 1.9.0, 1.8.9, and 1.8.6 (the current default in the chart); the result is the same.
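
Until a CPA image and chart release with multi-target support is available, one possible workaround (a sketch only; the release names and the ladder-values.yaml file are placeholders, reusing the values from the command above) is to run a separate autoscaler release per target:

# one release per target, each with a single --target value
helm upgrade --install coredns-autoscaling cluster-proportional-autoscaler/cluster-proportional-autoscaler \
  --namespace kube-system \
  --set options.target="deployment/coredns" \
  -f ladder-values.yaml

helm upgrade --install metrics-server-autoscaling cluster-proportional-autoscaler/cluster-proportional-autoscaler \
  --namespace kube-system \
  --set options.target="deployment/metrics-server" \
  -f ladder-values.yaml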

@MrHohn
Member

MrHohn commented Mar 18, 2025

How come we never made a new release with multi-target support? #211

We will need new images for CPA as well as a new version of the helm chart.

I can get started on a new CPA release.

/assign

@MrHohn
Member

MrHohn commented Mar 18, 2025

/cc gchaviaras-NS1

@johnitvn
Author

I'm still monitoring this. In my case, core-dns and cert-manager are components that don't have spiky load, but to maximize HA I scale them based on cluster size. For example, with 3 nodes I deploy 2 instances, and with 9 nodes I deploy 3. This lets me adjust the HA level of the DEPIN system.

And supporting multiple targets will let me group services with similar scaling needs together for easier management, instead of running many separate autoscaler deployments.
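
For reference, a ladder block expressing that mapping might look like the sketch below (the replica counts are illustrative; CPA picks the entry with the largest node count that does not exceed the current cluster size):

config:
  ladder:
    nodesToReplicas:
      - [ 1, 1 ]   # 1-2 nodes: 1 replica
      - [ 3, 2 ]   # 3-8 nodes: 2 replicas
      - [ 9, 3 ]   # 9+ nodes: 3 replicas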

@MrHohn
Member

MrHohn commented Mar 20, 2025

Release tag cut on https://github.com/kubernetes-sigs/cluster-proportional-autoscaler/releases/tag/v1.10.0 and staging images being promoted on kubernetes/k8s.io#7907.
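
Once that tag is promoted, bumping only the image tag in the original values should be enough to pick up the comma-separated target parsing (a sketch, assuming the chart keeps passing image.tag and options.target straight through to the container args, as in the deployment output above):

image:
  tag: v1.10.0
options:
  target: "deployment/coredns,deployment/metrics-server"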
