// Module included in the following assemblies:
// installing/installing_vsphere/post-install-vsphere-zones-regions-configuration.adoc

:_mod-docs-content-type: PROCEDURE
[id="specifying-host-groups-vsphere_{context}"]
= Specifying multiple host groups for your cluster on vSphere

You can configure the `infrastructures.config.openshift.io` configuration resource to specify multiple host groups for your {product-title} cluster that runs on a {vmw-first} instance. This is necessary if your {vmw-short} instance is in a stretched cluster configuration, with your ESXi hosts and storage distributed across multiple physical data centers. Use this procedure if you did not already configure host groups for your {product-title} cluster during installation, or if you need to update your {product-title} cluster with additional host groups.

:FeatureName: OpenShift zones support for vSphere host groups
include::snippets/technology-preview.adoc[]

.Prerequisites

* ESXi hosts are grouped into host groups, which are linked by VM-host affinity rules to corresponding virtual machine (VM) groups. See the following example `govc` commands for details:
+
[source,terminal]
----
# This example shows the correct configuration for a cluster with two host groups:

# Create host groups:
govc cluster.group.create -name <host_group_1> -host
govc cluster.group.create -name <host_group_2> -host

# Create VM groups:
govc cluster.group.create -name <vm_group_1> -vm
govc cluster.group.create -name <vm_group_2> -vm

# Create VM-host affinity rules:
govc cluster.rule.create -name <rule_1> -enable -vm-host -vm-group <vm_group_1> -host-affine-group <host_group_1>
govc cluster.rule.create -name <rule_2> -enable -vm-host -vm-group <vm_group_2> -host-affine-group <host_group_2>

# Add ESXi hosts to host groups:
govc cluster.group.change -name <host_group_1> <esxi_host_1_ip>
govc cluster.group.change -name <host_group_2> <esxi_host_2_ip>
----
* The `openshift-region` and `openshift-zone` tag categories are created on the vCenter server.
* Compute clusters have tags from the `openshift-region` tag category.
* ESXi hosts within host groups have tags from the `openshift-zone` tag category. See the following example `govc` commands for details:
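+
The following commands are a minimal sketch of one way to satisfy the preceding tag prerequisites. The placeholder values mirror the other examples in this module and are not required names:
+
[source,terminal]
----
# Create the tag categories:
govc tags.category.create -d "OpenShift region" openshift-region
govc tags.category.create -d "OpenShift zone" openshift-zone

# Create a region tag for the compute cluster and a zone tag for each host group:
govc tags.create -c openshift-region <cluster_1_region_tag>
govc tags.create -c openshift-zone <host_group_1_tag>
govc tags.create -c openshift-zone <host_group_2_tag>

# Attach the region tag to the compute cluster and the zone tags to the ESXi hosts:
govc tags.attach -c openshift-region <cluster_1_region_tag> /<data_center_1>/host/<cluster_1>
govc tags.attach -c openshift-zone <host_group_1_tag> /<data_center_1>/host/<cluster_1>/<esxi_host_1_ip>
govc tags.attach -c openshift-zone <host_group_2_tag> /<data_center_1>/host/<cluster_1>/<esxi_host_2_ip>
----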
* The `Host.Inventory.EditCluster` privilege is granted on the {vmw-short} vCenter cluster object.
* The `TechPreviewNoUpgrade` feature set is enabled. For more information, see "Enabling features using feature gates".

.Procedure

. Edit the infrastructure settings of your {product-title} cluster by completing the following steps:

.. To copy your existing infrastructure settings to a file, run the following command:
+
[source,terminal]
----
$ oc get infrastructures.config.openshift.io cluster -o yaml > <name_of_infrastructure_file>.yaml
----
+
.. Edit your infrastructure file to include a failure domain for each host group in your {vmw-short} cluster. Refer to the following YAML file for an example of this configuration. Ensure you replace any values wrapped in angle brackets (`< >`) with your values:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
  name: cluster
spec:
  cloudConfig:
    key: config
    name: cloud-provider-config
  platformSpec:
    type: VSphere
    vsphere:
      apiServerInternalIPs:
      - <internal_ip_of_api_server>
      failureDomains:
      - name: <unique_name_for_failure_domain_1>
        region: <cluster_1_region_tag>
        server: <vcenter_server_ip_address>
        zoneAffinity:
          type: HostGroup
          hostGroup:
            vmGroup: <name_of_vm_group_1>
            hostGroup: <name_of_host_group_1>
            vmHostRule: <name_of_vm_host_affinity_rule_1>
        regionAffinity:
          type: ComputeCluster
        topology:
          computeCluster: /<data_center_1>/host/<cluster_1>
          datacenter: <data_center_1>
          datastore: /<data_center_1>/datastore/<datastore_1>
          networks:
          - VM Network
          resourcePool: /<data_center_1>/host/<cluster_1>/Resources
          template: /<data_center_1>/vm/<vm_template>
        zone: <host_group_1_tag>
      - name: <unique_name_for_failure_domain_2>
        region: <cluster_1_region_tag>
        server: <vcenter_server_ip_address>
        zoneAffinity:
          type: HostGroup
          hostGroup:
            vmGroup: <name_of_vm_group_2>
            hostGroup: <name_of_host_group_2>
            vmHostRule: <name_of_vm_host_affinity_rule_2>
        regionAffinity:
          type: ComputeCluster
        topology:
          computeCluster: /<data_center_1>/host/<cluster_1>
          datacenter: <data_center_1>
          datastore: /<data_center_1>/datastore/<datastore_1>
          networks:
          - VM Network
          resourcePool: /<data_center_1>/host/<cluster_1>/Resources
          template: /<data_center_1>/vm/<vm_template>
        zone: <host_group_2_tag>
# ...
----
+
.. To update your cluster with these changes, run the following command:
+
[source,terminal]
----
$ oc replace -f <name_of_infrastructure_file>.yaml
----
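+
.. Optional: To confirm that the failure domains were applied to the infrastructure resource, you can inspect it with a command similar to the following:
+
[source,terminal]
----
$ oc get infrastructures.config.openshift.io cluster -o jsonpath='{.spec.platformSpec.vsphere.failureDomains[*].name}'
----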

. Update your `ControlPlaneMachineSet` custom resource (CR) with the new failure domains by completing the following steps:
+
.. Edit the `ControlPlaneMachineSet` CR by running the following command:
+
[source,terminal]
----
$ oc edit controlplanemachinesets.machine.openshift.io -n openshift-machine-api cluster
----
+
.. Edit the `failureDomains` parameter as shown in the following example:
+
[source,yaml]
----
spec:
  replicas: 3
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
  state: Active
  strategy:
    type: RollingUpdate
  template:
    machineType: machines_v1beta1_machine_openshift_io
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        platform: VSphere
        vsphere:
        - name: <failure_domain_1_name>
        - name: <failure_domain_2_name>
# ...
----
+
.. Before you proceed, verify that the control plane machines have finished updating by running the following command:
+
[source,terminal]
----
$ oc get controlplanemachinesets.machine.openshift.io -n openshift-machine-api
----
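+
The update is complete when the `UPDATED` count matches the `DESIRED` count and no replicas are listed as unavailable. The following output is illustrative only; your values will differ:
+
[source,terminal]
----
NAME      DESIRED   CURRENT   READY   UPDATED   UNAVAILABLE   STATE    AGE
cluster   3         3         3       3                       Active   72m
----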

. Create new `MachineSet` CRs for your failure domains by completing the following steps:
+
.. To retrieve the configuration of an existing `MachineSet` CR for use as a template, run the following command:
+
[source,terminal]
----
$ oc get machinesets.machine.openshift.io -n openshift-machine-api <existing_machine_set> -o yaml > machineset-<failure_domain_name>.yaml
----
+
.. Copy the template as needed to create a `MachineSet` CR file for each failure domain that you defined in your infrastructure file. Refer to the following example:
+
[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
  name: <machineset_name>
  namespace: openshift-machine-api
spec:
  replicas: 0
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <machineset_name>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: <machineset_name>
    spec:
      lifecycleHooks: {}
      metadata: {}
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          credentialsSecret:
            name: vsphere-cloud-credentials
          diskGiB: <disk_GiB>
          kind: VSphereMachineProviderSpec
          memoryMiB: <memory_in_MiB>
          metadata:
            creationTimestamp: null
          network:
            devices:
            - networkName: VM Network
          numCPUs: <number_of_cpus>
          numCoresPerSocket: <number_of_cores_per_socket>
          snapshot: ""
          template: <template_name>
          userDataSecret:
            name: worker-user-data
          workspace:
            datacenter: <data_center_1>
            datastore: /<data_center_1>/datastore/<datastore_1>
            folder: /<data_center_1>/vm/<folder>
            resourcePool: /<data_center_1>/host/<cluster_1>/Resources
            server: <server_ip_address>
            vmGroup: <name_of_vm_group_1>
# ...
----
+
.. To create the compute machine sets, run the following command for each `MachineSet` CR file:
+
[source,terminal]
----
$ oc create -f <name_of_machine_set_file>.yaml
----
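+
.. Optional: To confirm that the new compute machine sets were created, list the machine sets in the `openshift-machine-api` namespace by running the following command:
+
[source,terminal]
----
$ oc get machinesets.machine.openshift.io -n openshift-machine-api
----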