// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc

:_mod-docs-content-type: PROCEDURE
[id="hcp-labels-taints_{context}"]
= Labeling management cluster nodes

Proper node labeling is a prerequisite to deploying hosted control planes.

As a management cluster administrator, use the following labels and taints on management cluster nodes to schedule control plane workloads:

* `hypershift.openshift.io/control-plane: true`: Use this label and taint to dedicate a node to running hosted control plane workloads. By setting a value of `true`, you avoid sharing the control plane nodes with other components, for example, the infrastructure components of the management cluster or any other mistakenly deployed workload.
* `hypershift.openshift.io/cluster: ${HostedControlPlane Namespace}`: Use this label and taint when you want to dedicate a node to a single hosted cluster. See the example commands after this list.

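As a sketch, the following commands show one way to apply the `control-plane` label and a matching taint. The node name `worker-1a` and the `NoSchedule` effect are assumptions for illustration; substitute values that fit your environment:

[source,terminal]
----
$ oc label node/worker-1a hypershift.openshift.io/control-plane=true
----

[source,terminal]
----
$ oc adm taint nodes worker-1a hypershift.openshift.io/control-plane=true:NoSchedule
----
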
Apply the following labels to the nodes that host control-plane pods:

* `node-role.kubernetes.io/infra`: Use this label to avoid having the control-plane workload count toward your subscription. See the example command after this list.
* `topology.kubernetes.io/zone`: Use this label on the management cluster nodes to deploy highly available clusters across failure domains. The zone might be a location, rack name, or the hostname of the node where the zone is set. For example, a management cluster has the following nodes: `worker-1a`, `worker-1b`, `worker-2a`, and `worker-2b`. The `worker-1a` and `worker-1b` nodes are in `rack1`, and the `worker-2a` and `worker-2b` nodes are in `rack2`. To use each rack as an availability zone, enter the following commands:
+
[source,terminal]
----
$ oc label node/worker-1a node/worker-1b topology.kubernetes.io/zone=rack1
----
+
[source,terminal]
----
$ oc label node/worker-2a node/worker-2b topology.kubernetes.io/zone=rack2
----

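Similarly, assuming the same example node names, a command like the following applies the `infra` role label:

[source,terminal]
----
$ oc label node/worker-1a node/worker-1b node/worker-2a node/worker-2b node-role.kubernetes.io/infra=""
----
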
Pods for a hosted cluster have tolerations, and the scheduler uses affinity rules to schedule them. The pods tolerate the taints for the `control-plane` label and for their own `cluster` label. The scheduler prioritizes scheduling pods onto nodes that are labeled with `hypershift.openshift.io/control-plane` and `hypershift.openshift.io/cluster: ${HostedControlPlane Namespace}`.

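For illustration only, tolerations that match those taints take the standard Kubernetes form shown in the following sketch. Hosted control plane pods carry tolerations like these automatically; the namespace value `clusters-example` and the `NoSchedule` effect are assumptions:

[source,yaml]
----
# Illustrative sketch only: you do not add these tolerations yourself.
tolerations:
- key: hypershift.openshift.io/control-plane
  operator: Equal
  value: "true"
  effect: NoSchedule
- key: hypershift.openshift.io/cluster
  operator: Equal
  value: clusters-example   # assumed hosted control plane namespace
  effect: NoSchedule
----
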
For the `ControllerAvailabilityPolicy` option, use `HighlyAvailable`, which is the default value that the hosted control planes command-line interface, `hcp`, deploys. When you use that option, you can schedule pods for each deployment within a hosted cluster across different failure domains by setting `topology.kubernetes.io/zone` as the topology key. Control planes that are not highly available are not supported.

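As a sketch, and assuming that the `HostedCluster` resource exposes this option as the `spec.controllerAvailabilityPolicy` field, the setting looks like the following:

[source,yaml]
----
# Assumed field name; HighlyAvailable is the default value.
spec:
  controllerAvailabilityPolicy: HighlyAvailable
----
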
.Procedure

To require a hosted cluster to schedule its pods onto infrastructure nodes, set `HostedCluster.spec.nodeSelector`, as shown in the following example:

[source,yaml]
----
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""
----

This way, hosted control planes for each hosted cluster are eligible to run as infrastructure node workloads, and you do not need to entitle the underlying {product-title} nodes.
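
To verify where the control plane pods were scheduled, you can list them with their node assignments. The namespace name is a placeholder:

[source,terminal]
----
$ oc get pods -n <hosted_control_plane_namespace> -o wide
----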