
Commit 9a59202

[OSDOCS-11001]: Moving HCP distributing workloads docs
1 parent e547ecc commit 9a59202

3 files changed: +78 -0 lines changed

hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc

Lines changed: 18 additions & 0 deletions
@@ -5,3 +5,21 @@ include::_attributes/common-attributes.adoc[]
:context: hcp-distribute-workloads

toc::[]

Before you get started with hosted control planes for {product-title}, you must properly label nodes so that the pods of hosted clusters can be scheduled into infrastructure nodes. Node labeling is also important for the following reasons:

* To ensure high availability and proper workload deployment. For example, you can set the `node-role.kubernetes.io/infra` label to avoid having the control-plane workload count toward your {product-title} subscription.
* To ensure that control plane workloads are separate from other workloads in the management cluster.
//lahinson - sept. 2023 - commenting out the following lines until those levels are supported for self-managed hypershift
//* To ensure that control plane workloads are configured at one of the following multi-tenancy distribution levels:
//** Everything shared: Control planes for hosted clusters can run on any node that is designated for control planes.
//** Request serving isolation: Serving pods are requested in their own dedicated nodes.
//** Nothing shared: Every control plane has its own dedicated nodes.

[IMPORTANT]
====
Do not use the management cluster for your workload. Workloads must not run on nodes where control planes run.
====

include::modules/hcp-labels-taints.adoc[leveloffset=+1]
include::modules/hcp-priority-classes.adoc[leveloffset=+1]

modules/hcp-labels-taints.adoc

Lines changed: 46 additions & 0 deletions
@@ -0,0 +1,46 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc

:_mod-docs-content-type: PROCEDURE
[id="hcp-labels-taints_{context}"]
= Labeling management cluster nodes

Proper node labeling is a prerequisite to deploying hosted control planes.

As a management cluster administrator, you use the following labels and taints on management cluster nodes to schedule a control plane workload. An example of applying them follows this list:

* `hypershift.openshift.io/control-plane: true`: Use this label and taint to dedicate a node to running hosted control plane workloads. By setting a value of `true`, you avoid sharing the control plane nodes with other components, for example, the infrastructure components of the management cluster or any other mistakenly deployed workload.
* `hypershift.openshift.io/cluster: ${HostedControlPlane Namespace}`: Use this label and taint when you want to dedicate a node to a single hosted cluster.
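For example, the following commands are a minimal sketch of applying the label and the corresponding taint to a node. The node name `worker-1a` and the `NoSchedule` effect are assumptions for illustration; substitute values that match your environment:

[source,terminal]
----
$ oc label node/worker-1a hypershift.openshift.io/control-plane=true
----

[source,terminal]
----
$ oc adm taint nodes worker-1a hypershift.openshift.io/control-plane=true:NoSchedule
----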
Apply the following labels on the nodes that host control-plane pods:

* `node-role.kubernetes.io/infra`: Use this label to avoid having the control-plane workload count toward your subscription. An example of applying this label follows this list.
* `topology.kubernetes.io/zone`: Use this label on the management cluster nodes to deploy highly available clusters across failure domains. The zone might be a location, rack name, or the hostname of the node where the zone is set. For example, a management cluster has the following nodes: `worker-1a`, `worker-1b`, `worker-2a`, and `worker-2b`. The `worker-1a` and `worker-1b` nodes are in `rack1`, and the `worker-2a` and `worker-2b` nodes are in `rack2`. To use each rack as an availability zone, enter the following commands:
+
[source,terminal]
----
$ oc label node/worker-1a node/worker-1b topology.kubernetes.io/zone=rack1
----
+
[source,terminal]
----
$ oc label node/worker-2a node/worker-2b topology.kubernetes.io/zone=rack2
----
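The following command is a sketch of applying the `node-role.kubernetes.io/infra` label. The node names reuse the sample names from the previous example:

[source,terminal]
----
$ oc label node/worker-1a node/worker-1b node/worker-2a node/worker-2b node-role.kubernetes.io/infra=""
----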
Pods for a hosted cluster have tolerations, and the scheduler uses affinity rules to schedule them. Pods tolerate the `control-plane` and `cluster` taints. The scheduler prioritizes the scheduling of pods into nodes that are labeled with `hypershift.openshift.io/control-plane` and `hypershift.openshift.io/cluster: ${HostedControlPlane Namespace}`.
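The tolerations on hosted control plane pods are conceptually similar to the following sketch. The exact `operator` and `effect` values are assumptions for illustration and might differ from what is set on your pods:

[source,yaml]
----
tolerations:
- key: hypershift.openshift.io/control-plane
  operator: Equal
  value: "true"
  effect: NoSchedule
- key: hypershift.openshift.io/cluster
  operator: Equal
  value: <hosted_control_plane_namespace>
  effect: NoSchedule
----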
For the `ControllerAvailabilityPolicy` option, use `HighlyAvailable`, which is the default value that the hosted control planes command line interface, `hcp`, deploys. When you use that option, you can schedule pods for each deployment within a hosted cluster across different failure domains by setting `topology.kubernetes.io/zone` as the topology key. Control planes that are not highly available are not supported.
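As an illustration of how spreading on that topology key works, the following is a minimal sketch of a pod anti-affinity rule that uses `topology.kubernetes.io/zone`. The `app: kube-apiserver` label selector is an assumption for illustration, not the exact rule that the control plane generates:

[source,yaml]
----
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: kube-apiserver
      topologyKey: topology.kubernetes.io/zone
----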
.Procedure

To enable a hosted cluster to require its pods to be scheduled into infrastructure nodes, set `HostedCluster.spec.nodeSelector`, as shown in the following example:

[source,yaml]
----
spec:
  nodeSelector:
    role.kubernetes.io/infra: ""
----
This way, hosted control planes for each hosted cluster are eligible to run as infrastructure node workloads, and you do not need to entitle the underlying {product-title} nodes.
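As a usage sketch, you might set this field on an existing hosted cluster with a patch similar to the following command. The cluster name `example` and the `clusters` namespace are assumptions for illustration:

[source,terminal]
----
$ oc patch hostedcluster example -n clusters --type merge \
  -p '{"spec":{"nodeSelector":{"role.kubernetes.io/infra":""}}}'
----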

modules/hcp-priority-classes.adoc

Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc

:_mod-docs-content-type: CONCEPT
[id="hcp-priority-classes_{context}"]
= Priority classes

Four built-in priority classes influence the priority and preemption of hosted cluster pods. You can create pods in the management cluster with the following priority classes, which are listed from highest to lowest priority. An example of inspecting these classes follows the list.

* `hypershift-operator`: HyperShift Operator pods.
* `hypershift-etcd`: Pods for etcd.
* `hypershift-api-critical`: Pods that are required for API calls and resource admission to succeed. These pods include pods such as `kube-apiserver`, aggregated API servers, and webhooks.
* `hypershift-control-plane`: Pods in the control plane that are not API-critical but still need elevated priority, such as the cluster version Operator.
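As a quick check, the following command is a sketch of listing these built-in priority classes on the management cluster:

[source,terminal]
----
$ oc get priorityclasses hypershift-operator hypershift-etcd hypershift-api-critical hypershift-control-plane
----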
