Commit 812e476

Merge pull request #91921 from prozehna/prozehna-nits-vpa
Nits on the VPA documentation
2 parents b8a0aca + c03872d commit 812e476

5 files changed: +7 −7 lines

modules/nodes-pods-vertical-autoscaler-about.adoc

Lines changed: 2 additions & 2 deletions
@@ -23,7 +23,7 @@ You can use the default recommender or use your own alternative recommender to a
 
 The default recommender automatically computes historic and current CPU and memory usage for the containers in those pods and uses this data to determine optimized resource limits and requests to ensure that these pods are operating efficiently at all times. For example, the default recommender suggests reduced resources for pods that are requesting more resources than they are using and increased resources for pods that are not requesting enough.
 
-The VPA then automatically deletes any pods that are out of alignment with these recommendations one at a time, so that your applications can continue to serve requests with no downtime. The workload objects then re-deploy the pods with the original resource limits and requests. The VPA uses a mutating admission webhook to update the pods with optimized resource limits and requests before the pods are admitted to a node. If you do not want the VPA to delete pods, you can view the VPA resource limits and requests and manually update the pods as needed.
+The VPA then automatically deletes any pods that are out of alignment with these recommendations one at a time, so that your applications can continue to serve requests with no downtime. The workload objects then redeploy the pods with the original resource limits and requests. The VPA uses a mutating admission webhook to update the pods with optimized resource limits and requests before the pods are admitted to a node. If you do not want the VPA to delete pods, you can view the VPA resource limits and requests and manually update the pods as needed.
 
 [NOTE]
 ====
@@ -38,5 +38,5 @@ Administrators can use the VPA to better utilize cluster resources, such as prev
 
 [NOTE]
 ====
-If you stop running the VPA or delete a specific VPA CR in your cluster, the resource requests for the pods already modified by the VPA do not change. Any new pods get the resources defined in the workload object, not the previous recommendations made by the VPA.
+If you stop running the VPA or delete a specific VPA CR in your cluster, the resource requests for the pods already modified by the VPA do not change. However, any new pods get the resources defined in the workload object, not the previous recommendations made by the VPA.
 ====
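
For context, the behavior described in this module is controlled by the `updateMode` field of a `VerticalPodAutoscaler` CR. The following minimal sketch is illustrative only and not taken from this commit; the workload name `frontend` is hypothetical. With `updateMode: "Off"`, the VPA only publishes recommendations, which you can read back and apply manually, as the note above describes.

[source,yaml]
----
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-frontend
spec:
  targetRef:                 # workload whose pods the VPA analyzes
    apiVersion: apps/v1
    kind: Deployment
    name: frontend           # hypothetical workload name
  updatePolicy:
    updateMode: "Off"        # recommend only; do not delete or update pods
----

You can then inspect the recommendations with a command such as `oc get vpa vpa-frontend -o yaml` and update the workload's resource requests yourself.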

modules/nodes-pods-vertical-autoscaler-custom.adoc

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ For example, the default recommender might not accurately predict future resour
 
 [NOTE]
 ====
-Instructions for how to create a recommender are beyond the scope of this documentation,
+Instructions for how to create a recommender are beyond the scope of this documentation.
 ====
 
 .Procedure
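
As an illustrative sketch only (not part of this change): once an alternative recommender is deployed, it is typically referenced from the VPA CR through the `spec.recommenders` field. The recommender name `alt-recommender` and the workload name below are hypothetical.

[source,yaml]
----
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-frontend
spec:
  recommenders:
    - name: alt-recommender   # must match the name the custom recommender registers with
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend            # hypothetical workload name
----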

modules/nodes-pods-vertical-autoscaler-moving-vpa.adoc

Lines changed: 1 addition & 1 deletion
@@ -340,7 +340,7 @@ endif::vpa[]
 $ oc get pods -n openshift-vertical-pod-autoscaler -o wide
 ----
 +
-The pods are no longer deployed to the control plane nodes.
+The pods are no longer deployed to the control plane nodes. In the example output below, the node is now an infra node, not a control plane node.
 +
 .Example output
 [source,terminal]
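
A quick way to confirm node placement, offered here as a sketch rather than part of this change, is to print only each pod's name and the node it landed on:

[source,terminal]
----
$ oc get pods -n openshift-vertical-pod-autoscaler \
    -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
----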

modules/nodes-pods-vertical-autoscaler-tuning.adoc

Lines changed: 2 additions & 2 deletions
@@ -8,13 +8,13 @@
 
 As a cluster administrator, you can tune the performance of your Vertical Pod Autoscaler Operator (VPA) to limit the rate at which the VPA makes requests of the Kubernetes API server and to specify the CPU and memory resources for the VPA recommender, updater, and admission controller component pods.
 
-Additionally, you can configure the VPA Operator to monitor only those workloads that are being managed by a VPA custom resource (CR). By default, the VPA Operator monitors every workload in the cluster. This allows the VPA Operator to accrue and store 8 days of historical data for all workloads, which the Operator can use if a new VPA CR is created for a workload. However, this causes the VPA Operator to use significant CPU and memory, which could cause the Operator to fail, particularly on larger clusters. By configuring the VPA Operator to monitor only workloads with a VPA CR, you can save on CPU and memory resources. One trade-off is that if you have a workload that has been running, and you create a VPA CR to manage that workload, the VPA Operator does not have any historical data for that workload. As a result, the initial recommendations are not as useful as those after the workload had been running for some time.
+Additionally, you can configure the VPA Operator to monitor only those workloads that are being managed by a VPA custom resource (CR). By default, the VPA Operator monitors every workload in the cluster. This allows the VPA Operator to accrue and store 8 days of historical data for all workloads, which the Operator can use if a new VPA CR is created for a workload. However, this causes the VPA Operator to use significant CPU and memory, which could cause the Operator to fail, particularly on larger clusters. By configuring the VPA Operator to monitor only workloads with a VPA CR, you can save on CPU and memory resources. One trade-off is that if you have a workload that has been running, and you create a VPA CR to manage that workload, the VPA Operator does not have any historical data for that workload. As a result, the initial recommendations are not as useful as those after the workload has been running for some time.
 
 These tunings allow you to ensure the VPA has sufficient resources to operate at peak efficiency and to prevent throttling and a possible delay in pod admissions.
 
 You can perform the following tunings on the VPA components by editing the `VerticalPodAutoscalerController` custom resource (CR):
 
-* To prevent throttling and pod admission delays, set the queries-per-second (QPS) and burst rates for VPA requests of the Kubernetes API server by using the `kube-api-qps` and `kube-api-burst` parameters.
+* To prevent throttling and pod admission delays, set the queries per second (QPS) and burst rates for VPA requests of the Kubernetes API server by using the `kube-api-qps` and `kube-api-burst` parameters.
 
 * To ensure sufficient CPU and memory, set the CPU and memory requests for VPA component pods by using the standard `cpu` and `memory` resource requests.
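
For reference, the tunings listed above are set in the `VerticalPodAutoscalerController` CR. The following sketch assumes the `deploymentOverrides` layout used by the VPA Operator and shows illustrative values only; confirm the exact fields against the product documentation for your version.

[source,yaml]
----
apiVersion: autoscaling.openshift.io/v1
kind: VerticalPodAutoscalerController
metadata:
  name: default
  namespace: openshift-vertical-pod-autoscaler
spec:
  deploymentOverrides:
    recommender:
      container:
        args:                        # rate limits for requests to the Kubernetes API server
          - '--kube-api-qps=20.0'
          - '--kube-api-burst=60.0'
        resources:
          requests:                  # standard cpu and memory requests for the component pod
            cpu: 60m
            memory: 60Mi
----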

modules/nodes-pods-vertical-autoscaler-using-about.adoc

Lines changed: 1 addition & 1 deletion
@@ -267,7 +267,7 @@ spec:
 <1> The type of workload object you want this VPA CR to manage.
 <2> The name of the workload object you want this VPA CR to manage.
 <3> Set the mode to `Auto`, `Recreate`, `Initial`, or `Off`. The `Recreate` mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes.
-<4> Specify the containers you want to opt-out and set `mode` to `Off`.
+<4> Specify the containers that you do not want updated by the VPA and set the `mode` to `Off`.
 
 For example, a pod has two containers, the same resource requests and limits: