For edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. To locate remote nodes in subnets, you might use different network segments or subnets for the remote nodes than you used for the control plane subnet and local compute nodes. You can reduce latency for the edge and allow for enhanced scalability by setting up subnets for edge computing scenarios.
[IMPORTANT]
====
When using the default load balancer, `OpenShiftManagedDefault`, and adding remote nodes to your {product-title} cluster, all control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details.
====
If you have established different network segments or subnets for remote nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the `machineNetwork` configuration setting if the remote nodes use static IP addresses, bonds, or other advanced networking. When setting the node IP address in the `networkConfig` parameter for each remote node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures the remote nodes can reach the subnet containing the control plane nodes and that they can receive network traffic from the control plane.
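As a sketch only, the relevant `machineNetwork` and per-host `networkConfig` settings might look like the following `install-config.yaml` excerpt. The CIDRs, host name, interface name, and all IP addresses are assumptions for illustration, not values from this document.

[source,yaml]
----
networking:
  machineNetwork:
  - cidr: 10.0.0.0/24      # subnet containing the control plane nodes
  - cidr: 192.168.0.0/24   # subnet containing the remote nodes
platform:
  baremetal:
    hosts:
    - name: remote-worker-0          # hypothetical host name
      role: worker
      networkConfig:                 # NMState-style host network configuration
        interfaces:
        - name: eno1                 # assumed interface name
          type: ethernet
          state: up
          ipv4:
            enabled: true
            dhcp: false
            address:
            - ip: 192.168.0.20
              prefix-length: 24
        dns-resolver:
          config:
            server:
            - 10.0.0.3               # DNS server in the control plane subnet
        routes:
          config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.0.1   # gateway of the remote subnet
            next-hop-interface: eno1
----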
[NOTE]
====
Deploying a cluster with multiple subnets requires using virtual media, such as `redfish-virtualmedia` and `idrac-virtualmedia`, because remote nodes cannot access the local provisioning network.
====
In a typical {product-title} cluster setup, all nodes, including the control plane and compute nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. This often involves using different network segments or subnets for the remote nodes than the subnet used by the control plane and local compute nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability.
Before installing {product-title}, you must configure the network properly to ensure that the edge subnets containing the remote nodes can reach the subnet containing the control plane nodes and receive traffic from the control plane too.
You can run control plane nodes in the same subnet or multiple subnets by configuring a user-managed load balancer in place of the default load balancer. With a multiple subnet environment, you can reduce the risk of your {product-title} cluster failing because of a hardware failure or a network outage. For more information, see "Services for a user-managed load balancer" and "Configuring a user-managed load balancer".
Running control plane nodes in a multiple subnet environment requires completion of the following key tasks:
* Configuring a user-managed load balancer instead of the default load balancer by specifying `UserManaged` in the `loadBalancer.type` parameter of the `install-config.yaml` file.
* Configuring a user-managed load balancer address in the `ingressVIPs` and `apiVIPs` parameters of the `install-config.yaml` file.
* Adding the Classless Inter-Domain Routing (CIDR) range for each subnet and the user-managed load balancer IP addresses to the `networking.machineNetworks` parameter in the `install-config.yaml` file.
[NOTE]
====
Deploying a cluster with multiple subnets requires using virtual media, such as `redfish-virtualmedia` and `idrac-virtualmedia`.
====
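The key tasks above can be sketched in `install-config.yaml` as follows. The VIP addresses and CIDR ranges are illustrative assumptions, not values from this document.

[source,yaml]
----
platform:
  baremetal:
    loadBalancer:
      type: UserManaged        # replaces the default OpenShiftManagedDefault
    apiVIPs:
    - 10.0.0.100               # user-managed load balancer address (assumed)
    ingressVIPs:
    - 10.0.0.100
networking:
  machineNetwork:
  - cidr: 10.0.0.0/24          # subnet containing the control plane nodes
  - cidr: 192.168.0.0/24       # additional subnet
----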
The procedure details the network configuration required to allow the remote nodes in the second subnet to communicate effectively with the control plane nodes in the first subnet and to allow the control plane nodes in the first subnet to communicate effectively with the remote nodes in the second subnet.
In this procedure, the cluster spans two subnets:
- The first subnet (`10.0.0.0`) contains the control plane and local compute nodes.
- The second subnet (`192.168.0.0`) contains the edge compute nodes.
.Procedure

Replace `<interface_name>` with the interface name.
Adjust the commands to match your actual interface names and gateway.
====
. After you have configured the networks, test the connectivity to ensure the remote nodes can reach the control plane nodes and the control plane nodes can reach the remote nodes.
.. From the control plane nodes in the first subnet, ping a remote node in the second subnet by running the following command:
+
[source,terminal]
----
$ ping <remote_node_ip_address>
----
+
If the ping is successful, it means the control plane nodes in the first subnet can reach the remote nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.
.. From the remote nodes in the second subnet, ping a control plane node in the first subnet by running the following command:
+
[source,terminal]
----
$ ping <control_plane_node_ip_address>
----
+
If the ping is successful, it means the remote nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.

// modules/nw-osp-configuring-external-load-balancer.adoc
.Procedure
. Configure the HAProxy Ingress Controller so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration.
+
.Example HAProxy configuration with one listed subnet
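A minimal sketch of such a configuration is shown below. The backend host names and IP addresses are placeholders for illustration, not values from this document.

[source,text]
----
defaults
    mode tcp
    timeout connect 10s
    timeout client  1m
    timeout server  1m

frontend api
    bind :6443
    default_backend api
backend api
    balance roundrobin
    server master-0 10.0.0.10:6443 check
    server master-1 10.0.0.11:6443 check
    server master-2 10.0.0.12:6443 check

frontend machine-config
    bind :22623
    default_backend machine-config
backend machine-config
    balance roundrobin
    server master-0 10.0.0.10:22623 check
    server master-1 10.0.0.11:22623 check
    server master-2 10.0.0.12:22623 check

frontend ingress-https
    bind :443
    default_backend ingress-https
backend ingress-https
    balance roundrobin
    server worker-0 10.0.0.20:443 check
    server worker-1 10.0.0.21:443 check

frontend ingress-http
    bind :80
    default_backend ingress-http
backend ingress-http
    balance roundrobin
    server worker-0 10.0.0.20:80 check
    server worker-1 10.0.0.21:80 check
----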