
Commit 812185e

Merge pull request #90046 from xenolinux/topology-spread-hcp
OSDOCS#12034: HCP Kubevirt topology spread constraint
2 parents 6e59529 + 94a5c81 commit 812185e

2 files changed: +59 -0 lines changed

hosted_control_planes/hcp-manage/hcp-manage-virt.adoc

Lines changed: 7 additions & 0 deletions
@@ -53,3 +53,10 @@ include::modules/hcp-virt-etcd-storage.adoc[leveloffset=+2]
 include::modules/hcp-virt-attach-nvidia-gpus.adoc[leveloffset=+1]
 
 include::modules/hcp-virt-attach-nvidia-gpus-np-api.adoc[leveloffset=+1]
+
+include::modules/hcp-topology-spread-constraint.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../nodes/scheduling/descheduler/nodes-descheduler-configuring.adoc#nodes-descheduler-installing_virt-enabling-descheduler-evictions[Installing the descheduler]

modules/hcp-topology-spread-constraint.adoc

Lines changed: 52 additions & 0 deletions
@@ -0,0 +1,52 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-virt.adoc

:_mod-docs-content-type: PROCEDURE
[id="hcp-topology-spread-constraint_{context}"]
= Spreading node pool VMs by using topologySpreadConstraint

By default, KubeVirt virtual machines (VMs) created by a node pool are scheduled on any available node that has the capacity to run them. The `topologySpreadConstraint` constraint is set by default to schedule the VMs on multiple nodes.

In some scenarios, node pool VMs might run on the same node, which can cause availability issues. To avoid placing VMs on a single node, use the descheduler to continuously honor the `topologySpreadConstraint` constraint so that the VMs are spread across multiple nodes.
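
The following sketch shows the general shape of a soft topology spread constraint in a pod spec; the `maxSkew` value and the label selector are illustrative assumptions, not the exact constraint that the node pool controller applies:

[source,yaml]
----
# Illustrative sketch only: the values are assumptions, not the constraint set by the node pool controller.
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway # soft constraint: spreading is preferred, not required
    labelSelector:
      matchLabels:
        kubevirt.io: virt-launcher
----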

.Prerequisites

* You installed the {descheduler-operator}. For more information, see "Installing the descheduler".

.Procedure

* Open the `KubeDescheduler` custom resource (CR) by entering the following command, and then modify the `KubeDescheduler` CR to use the `SoftTopologyAndDuplicates` profile so that you maintain the `topologySpreadConstraint` constraint settings.
+
The `KubeDescheduler` CR named `cluster` runs in the `openshift-kube-descheduler-operator` namespace.
+
[source,terminal]
----
$ oc edit kubedescheduler cluster -n openshift-kube-descheduler-operator
----
+
.Example `KubeDescheduler` configuration
[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  mode: Automatic
  managementState: Managed
  deschedulingIntervalSeconds: 60 # <1>
  profiles:
  - SoftTopologyAndDuplicates # <2>
  - EvictPodsWithPVC # <3>
  - EvictPodsWithLocalStorage # <4>
  profileCustomizations:
    devEnableEvictionsInBackground: true # <5>
# ...
----
<1> Sets the number of seconds between descheduler running cycles.
<2> This profile evicts pods that follow the soft topology constraint: `whenUnsatisfiable: ScheduleAnyway`.
<3> By default, the {descheduler-operator} does not evict pods with persistent volume claims (PVCs). Use this profile to allow eviction of pods with PVCs.
<4> By default, pods with local storage are not eligible for eviction. Use this profile to allow eviction of your VMs that use local storage.
<5> You must use this setting when performing a live migration so that the descheduler runs in the background during the migration process.
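
To informally confirm that the VM pods are spread across nodes after the descheduler runs, you can list the launcher pods along with their node placement. The `kubevirt.io=virt-launcher` label is the standard KubeVirt launcher pod label; `<hosted_cluster_namespace>` is a placeholder for the namespace where the node pool VMs run:

[source,terminal]
----
$ oc get pods -n <hosted_cluster_namespace> -l kubevirt.io=virt-launcher -o wide
----

The `NODE` column in the output shows where each VM pod is scheduled; repeated node names indicate that the spread constraint is not yet satisfied.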
