Commit 80fe674

OSDOCS-7075: Documented support for multiple subnets

5 files changed: +125 -59 lines

_topic_maps/_topic_map.yml

Lines changed: 16 additions & 16 deletions
@@ -333,22 +333,6 @@ Topics:
     File: uninstalling-cluster-nutanix
   - Name: Installation configuration parameters for Nutanix
     File: installation-config-parameters-nutanix
-- Name: Installing on bare metal
-  Dir: installing_bare_metal
-  Distros: openshift-origin,openshift-enterprise
-  Topics:
-  - Name: Preparing to install on bare metal
-    File: preparing-to-install-on-bare-metal
-  - Name: Installing a user-provisioned cluster on bare metal
-    File: installing-bare-metal
-  - Name: Installing a user-provisioned bare metal cluster with network customizations
-    File: installing-bare-metal-network-customizations
-  - Name: Installing a user-provisioned bare metal cluster on a restricted network
-    File: installing-restricted-networks-bare-metal
-  - Name: Scaling a user-provisioned installation with the bare metal operator
-    File: scaling-a-user-provisioned-cluster-with-the-bare-metal-operator
-  - Name: Installation configuration parameters for bare metal
-    File: installation-config-parameters-bare-metal
 - Name: Installing on-premise with Assisted Installer
   Dir: installing_on_prem_assisted
   Distros: openshift-enterprise
@@ -379,6 +363,22 @@ Topics:
     File: install-sno-preparing-to-install-sno
   - Name: Installing OpenShift on a single node
     File: install-sno-installing-sno
+- Name: Installing on bare metal
+  Dir: installing_bare_metal
+  Distros: openshift-origin,openshift-enterprise
+  Topics:
+  - Name: Preparing to install on bare metal
+    File: preparing-to-install-on-bare-metal
+  - Name: Installing a user-provisioned cluster on bare metal
+    File: installing-bare-metal
+  - Name: Installing a user-provisioned bare metal cluster with network customizations
+    File: installing-bare-metal-network-customizations
+  - Name: Installing a user-provisioned bare metal cluster on a restricted network
+    File: installing-restricted-networks-bare-metal
+  - Name: Scaling a user-provisioned installation with the bare metal operator
+    File: scaling-a-user-provisioned-cluster-with-the-bare-metal-operator
+  - Name: Installation configuration parameters for bare metal
+    File: installation-config-parameters-bare-metal
 - Name: Deploying installer-provisioned clusters on bare metal
   Dir: installing_bare_metal_ipi
   Distros: openshift-origin,openshift-enterprise

installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

Lines changed: 1 addition & 0 deletions
@@ -21,6 +21,7 @@ include::modules/ipi-install-checking-ntp-sync.adoc[leveloffset=+1]
 
 include::modules/ipi-install-configuring-networking.adoc[leveloffset=+1]
 
+// Establishing communication between subnets
 include::modules/ipi-install-establishing-communication-between-subnets.adoc[leveloffset=+1]
 
 include::modules/ipi-install-retrieving-the-openshift-installer.adoc[leveloffset=+1]

modules/ipi-install-configuring-host-network-interfaces-for-subnets.adoc

Lines changed: 8 additions & 5 deletions
@@ -6,15 +6,18 @@
 [id="ipi-install-configuring-host-network-interfaces-for-subnets_{context}"]
 = Configuring host network interfaces for subnets
 
-For edge computing scenarios, it can be beneficial to locate worker nodes closer to the edge. To locate remote worker nodes in subnets, you might use different network segments or subnets for the remote worker nodes than you used for the control plane subnet and local worker nodes. You can reduce latency for the edge and allow for enhanced scalability by setting up subnets for edge computing scenarios.
-
-If you have established different network segments or subnets for remote worker nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the `machineNetwork` configuration setting if the workers are using static IP addresses, bonds or other advanced networking. When setting the node IP address in the `networkConfig` parameter for each remote worker node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures the remote worker nodes can reach the subnet containing the control plane nodes and that they can receive network traffic from the control plane.
+For edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. To locate remote nodes in subnets, you might use different network segments or subnets for the remote nodes than you used for the control plane subnet and local compute nodes. You can reduce latency for the edge and allow for enhanced scalability by setting up subnets for edge computing scenarios.
 
 [IMPORTANT]
 ====
-All control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details.
+When using the default load balancer, `OpenShiftManagedDefault`, and adding remote nodes to your {product-title} cluster, all control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details.
+====
 
-Deploying a cluster with multiple subnets requires using virtual media, such as `redfish-virtualmedia` and `idrac-virtualmedia`.
+If you have established different network segments or subnets for remote nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the `machineNetwork` configuration setting if the workers are using static IP addresses, bonds, or other advanced networking. When setting the node IP address in the `networkConfig` parameter for each remote node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures the remote nodes can reach the subnet containing the control plane nodes and that they can receive network traffic from the control plane.
+
+[NOTE]
+====
+Deploying a cluster with multiple subnets requires using virtual media, such as `redfish-virtualmedia` and `idrac-virtualmedia`, because remote nodes cannot access the local provisioning network.
 ====
 
 .Procedure
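
To make the `machineNetwork` and `networkConfig` guidance in the added lines concrete, a remote compute host entry might look like the following sketch. This fragment is illustrative and not part of the commit: the host name, interface name, and all addresses are hypothetical, and `networkConfig` follows the NMState syntax that bare-metal IPI hosts accept.

[source,yaml]
----
networking:
  machineNetwork:
  - cidr: 10.0.0.0/24        # control plane and local compute subnet
  - cidr: 192.168.0.0/24     # remote edge subnet
platform:
  baremetal:
    hosts:
    - name: edge-worker-0
      role: worker
      bmc:
        # Virtual media, as the NOTE above requires for multiple subnets
        address: redfish-virtualmedia://<out_of_band_ip>/redfish/v1/Systems/1
        username: <user>
        password: <password>
      networkConfig:           # NMState: static IP, gateway, and DNS for the remote node
        interfaces:
        - name: enp2s0
          type: ethernet
          state: up
          ipv4:
            enabled: true
            dhcp: false
            address:
            - ip: 192.168.0.20
              prefix-length: 24
        dns-resolver:
          config:
            server:
            - 10.0.0.5         # DNS server reachable in the control plane subnet
        routes:
          config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.0.1   # gateway of the edge subnet
            next-hop-interface: enp2s0
----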

modules/ipi-install-establishing-communication-between-subnets.adoc

Lines changed: 22 additions & 14 deletions
@@ -6,21 +6,29 @@
 [id="ipi-install-establishing-communication-between-subnets_{context}"]
 = Establishing communication between subnets
 
-In a typical {product-title} cluster setup, all nodes, including the control plane and worker nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate worker nodes closer to the edge. This often involves using different network segments or subnets for the remote worker nodes than the subnet used by the control plane and local worker nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability. However, the network must be configured properly before installing {product-title} to ensure that the edge subnets containing the remote worker nodes can reach the subnet containing the control plane nodes and receive traffic from the control plane too.
+In a typical {product-title} cluster setup, all nodes, including the control plane and compute nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. This often involves using different network segments or subnets for the remote nodes than the subnet used by the control plane and local compute nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability.
 
-[IMPORTANT]
-====
-All control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details.
+Before installing {product-title}, you must configure the network properly to ensure that the edge subnets containing the remote nodes can reach the subnet containing the control plane nodes and receive traffic from the control plane too.
+
+You can run control plane nodes in the same subnet or in multiple subnets by configuring a user-managed load balancer in place of the default load balancer. A multiple subnet environment reduces the risk of your {product-title} cluster failing because of a hardware failure or a network outage. For more information, see "Services for a user-managed load balancer" and "Configuring a user-managed load balancer".
+
+Running control plane nodes in a multiple subnet environment requires completion of the following key tasks:
 
-Deploying a cluster with multiple subnets requires using virtual media.
+* Configuring a user-managed load balancer instead of the default load balancer by specifying `UserManaged` in the `loadBalancer.type` parameter of the `install-config.yaml` file.
+* Configuring a user-managed load balancer address in the `ingressVIPs` and `apiVIPs` parameters of the `install-config.yaml` file.
+* Adding the multiple subnet Classless Inter-Domain Routing (CIDR) and the user-managed load balancer IP addresses to the `networking.machineNetworks` parameter in the `install-config.yaml` file.
+
+[NOTE]
+====
+Deploying a cluster with multiple subnets requires using virtual media, such as `redfish-virtualmedia` and `idrac-virtualmedia`.
 ====
 
-This procedure details the network configuration required to allow the remote worker nodes in the second subnet to communicate effectively with the control plane nodes in the first subnet and to allow the control plane nodes in the first subnet to communicate effectively with the remote worker nodes in the second subnet.
+The procedure details the network configuration required to allow the remote nodes in the second subnet to communicate effectively with the control plane nodes in the first subnet and to allow the control plane nodes in the first subnet to communicate effectively with the remote nodes in the second subnet.
 
 In this procedure, the cluster spans two subnets:
 
-- The first subnet (`10.0.0.0`) contains the control plane and local worker nodes.
-- The second subnet (`192.168.0.0`) contains the edge worker nodes.
+- The first subnet (`10.0.0.0`) contains the control plane and local compute nodes.
+- The second subnet (`192.168.0.0`) contains the edge compute nodes.
 
 .Procedure
 
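For orientation, the three `install-config.yaml` changes that the added bullet points describe combine roughly as follows. This sketch is illustrative and not taken from the commit: the VIP address and CIDRs are hypothetical, and the placement of `loadBalancer` under `platform.baremetal` is an assumption based on the bare-metal IPI configuration format (note that the schema field is the `machineNetwork` list).

[source,yaml]
----
apiVIPs:
- 192.168.90.8             # address served by the user-managed load balancer
ingressVIPs:
- 192.168.90.8
networking:
  machineNetwork:          # include every subnet CIDR the cluster spans
  - cidr: 10.0.0.0/24      # control plane subnet
  - cidr: 192.168.0.0/24   # remote subnet
  - cidr: 192.168.90.0/27  # subnet containing the load balancer addresses
platform:
  baremetal:
    loadBalancer:
      type: UserManaged    # replaces the default OpenShiftManagedDefault
----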

@@ -134,22 +142,22 @@ Replace `<interface_name>` with the interface name.
 Adjust the commands to match your actual interface names and gateway.
 ====
 
-. Once you have configured the networks, test the connectivity to ensure the remote worker nodes can reach the control plane nodes and the control plane nodes can reach the remote worker nodes.
+. After you have configured the networks, test the connectivity to ensure the remote nodes can reach the control plane nodes and the control plane nodes can reach the remote nodes.
 
-.. From the control plane nodes in the first subnet, ping a remote worker node in the second subnet by running the following command:
+.. From the control plane nodes in the first subnet, ping a remote node in the second subnet by running the following command:
 +
 [source,terminal]
 ----
-$ ping <remote_worker_node_ip_address>
+$ ping <remote_node_ip_address>
 ----
 +
-If the ping is successful, it means the control plane nodes in the first subnet can reach the remote worker nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.
+If the ping is successful, it means the control plane nodes in the first subnet can reach the remote nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.
 
-.. From the remote worker nodes in the second subnet, ping a control plane node in the first subnet by running the following command:
+.. From the remote nodes in the second subnet, ping a control plane node in the first subnet by running the following command:
 +
 [source,terminal]
 ----
 $ ping <control_plane_node_ip_address>
 ----
 +
-If the ping is successful, it means the remote worker nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.
+If the ping is successful, it means the remote nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.
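
The route commands that the context lines above refer to ("Adjust the commands to match your actual interface names and gateway") are not shown in this diff. For a sense of their shape, a static route from a control plane node toward the edge subnet could be added with `nmcli` along these lines; the interface name and gateway address are placeholders:

[source,terminal]
----
# Append a route to the edge subnet (192.168.0.0/24) via the local
# gateway 10.0.0.1, then re-activate the connection to apply it.
$ sudo nmcli connection modify <interface_name> +ipv4.routes "192.168.0.0/24 10.0.0.1"
$ sudo nmcli connection up <interface_name>
----

A matching route in the opposite direction (to `10.0.0.0/24` via the edge gateway) is needed on the remote nodes before the ping tests above can succeed.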

modules/nw-osp-configuring-external-load-balancer.adoc

Lines changed: 78 additions & 24 deletions
@@ -95,12 +95,12 @@ Interval: 10
 
 .Procedure
 
-. Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80:
+. Configure the HAProxy Ingress Controller so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration.
 +
-.Example HAProxy configuration
-[source,terminal]
+.Example HAProxy configuration with one listed subnet
+[source,terminal,subs="quotes"]
 ----
-#...
+# ...
 listen my-cluster-api-6443
   bind 192.168.1.100:6443
   mode tcp
@@ -126,28 +126,82 @@ listen my-cluster-machine-config-api-22623
   server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2
 
 listen my-cluster-apps-443
-  bind 192.168.1.100:443
-  mode tcp
-  balance roundrobin
-  option httpchk
-  http-check connect
-  http-check send meth GET uri /healthz/ready
-  http-check expect status 200
-  server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2
-  server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2
-  server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2
+  bind 192.168.1.100:443
+  mode tcp
+  balance roundrobin
+  option httpchk
+  http-check connect
+  http-check send meth GET uri /healthz/ready
+  http-check expect status 200
+  server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2
+  server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2
+  server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2
 
 listen my-cluster-apps-80
-  bind 192.168.1.100:80
-  mode tcp
-  balance roundrobin
-  option httpchk
-  http-check connect
-  http-check send meth GET uri /healthz/ready
-  http-check expect status 200
-  server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2
-  server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2
-  server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2
+  bind 192.168.1.100:80
+  mode tcp
+  balance roundrobin
+  option httpchk
+  http-check connect
+  http-check send meth GET uri /healthz/ready
+  http-check expect status 200
+  server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2
+  server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2
+  server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2
+# ...
+----
++
+.Example HAProxy configuration with multiple listed subnets
+[source,terminal,subs="quotes"]
+----
+# ...
+listen api-server-6443
+  bind *:6443
+  mode tcp
+  server master-00 192.168.83.89:6443 check inter 1s
+  server master-01 192.168.84.90:6443 check inter 1s
+  server master-02 192.168.85.99:6443 check inter 1s
+  server bootstrap 192.168.80.89:6443 check inter 1s
+
+listen machine-config-server-22623
+  bind *:22623
+  mode tcp
+  server master-00 192.168.83.89:22623 check inter 1s
+  server master-01 192.168.84.90:22623 check inter 1s
+  server master-02 192.168.85.99:22623 check inter 1s
+  server bootstrap 192.168.80.89:22623 check inter 1s
+
+listen ingress-router-80
+  bind *:80
+  mode tcp
+  balance source
+  server worker-00 192.168.83.100:80 check inter 1s
+  server worker-01 192.168.83.101:80 check inter 1s
+
+listen ingress-router-443
+  bind *:443
+  mode tcp
+  balance source
+  server worker-00 192.168.83.100:443 check inter 1s
+  server worker-01 192.168.83.101:443 check inter 1s
+
+listen ironic-api-6385
+  bind *:6385
+  mode tcp
+  balance source
+  server master-00 192.168.83.89:6385 check inter 1s
+  server master-01 192.168.84.90:6385 check inter 1s
+  server master-02 192.168.85.99:6385 check inter 1s
+  server bootstrap 192.168.80.89:6385 check inter 1s
+
+listen inspector-api-5050
+  bind *:5050
+  mode tcp
+  balance source
+  server master-00 192.168.83.89:5050 check inter 1s
+  server master-01 192.168.84.90:5050 check inter 1s
+  server master-02 192.168.85.99:5050 check inter 1s
+  server bootstrap 192.168.80.89:5050 check inter 1s
 # ...
 ----
 
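When adapting either example configuration, HAProxy itself can validate the file before you reload the service. This is standard HAProxy tooling rather than part of the commit, and the path shown is the conventional default:

[source,terminal]
----
$ haproxy -c -f /etc/haproxy/haproxy.cfg
----

A valid file prints `Configuration file is valid` and the command exits with status `0`.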
