modules/aws-outposts-machine-set.adoc (0 additions & 4 deletions)
@@ -59,7 +59,6 @@ $ oc get machinesets.machine.openshift.io <original_machine_set_name_1> \
   -n openshift-machine-api -o yaml
 ----
 +
---
 .Example output
 [source,yaml]
 ----
@@ -90,11 +89,9 @@ spec:
 <1> The cluster infrastructure ID.
 <2> A default node label. For AWS Outposts, you use the `outposts` role.
 <3> The omitted `providerSpec` section includes values that must be configured for your Outpost.
---
 
 . Configure the new compute machine set to create edge compute machines in the Outpost by editing the `<new_machine_set_name_1>.yaml` file:
 +
---
 .Example compute machine set for AWS Outposts
 [source,yaml]
 ----
@@ -166,7 +163,6 @@ spec:
 <6> Specifies the AWS region in which the Outpost availability zone exists.
 <7> Specifies the dedicated subnet for your Outpost.
 <8> Specifies a taint to prevent workloads from being scheduled on nodes that have the `node-role.kubernetes.io/outposts` label. To schedule user workloads in the Outpost, you must specify a corresponding toleration in the `Deployment` resource for your application.
modules/machineset-creating.adoc (1 addition & 8 deletions)
@@ -103,7 +103,6 @@ $ oc get machineset <machineset_name> \
   -n openshift-machine-api -o yaml
 ----
 +
---
 .Example output
 [source,yaml]
 ----
@@ -132,14 +131,8 @@ spec:
 ...
 ----
 <1> The cluster infrastructure ID.
-<2> A default node label.
-+
-[NOTE]
-====
-For clusters that have user-provisioned infrastructure, a compute machine set can only create `worker` and `infra` type machines.
-====
+<2> A default node label. For clusters that have user-provisioned infrastructure, a compute machine set can only create `worker` and `infra` type machines.
 <3> The values in the `<providerSpec>` section of the compute machine set CR are platform-specific. For more information about `<providerSpec>` parameters in the CR, see the sample compute machine set CR configuration for your provider.
---
 
 ifdef::vsphere[]
 .. If you are creating a compute machine set for a cluster that has user-provisioned infrastructure, note the following important values:
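For orientation on callout <3>, an abbreviated `providerSpec` stanza for an AWS cluster might look like the following sketch; the field names follow a typical AWS machine set CR and every value is hypothetical:

[source,yaml]
----
providerSpec:
  value:
    apiVersion: machine.openshift.io/v1beta1
    kind: AWSMachineProviderConfig
    instanceType: m5.xlarge # hypothetical instance type
    placement:
      availabilityZone: us-east-1a # hypothetical zone
      region: us-east-1 # hypothetical region
    subnet:
      filters:
      - name: tag:Name
        values:
        - mycluster-private-us-east-1a # hypothetical subnet tag
----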
modules/nw-aws-load-balancer-with-outposts.adoc (2 additions & 5 deletions)
@@ -27,7 +27,6 @@ You must annotate Ingress resources with the Outpost subnet or the VPC subnet, b
 
 * Configure the `Ingress` resource to use a specified subnet:
 +
---
 .Example `Ingress` resource configuration
 [source,yaml]
 ----
@@ -50,7 +49,5 @@ spec:
       port:
         number: 80
 ----
-<1> Specifies the subnet to use.
-* To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID.
-* To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones.
---
+<1> Specifies the subnet to use. To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID. To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones.
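For orientation, the annotated resource might resemble this minimal sketch, assuming the load balancer consumes the upstream `alb.ingress.kubernetes.io/subnets` annotation; the names and subnet ID are hypothetical:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  annotations:
    alb.ingress.kubernetes.io/subnets: subnet-0ab12c3d4e5f67890 # hypothetical Outpost subnet ID
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: example-app # hypothetical backend service
            port:
              number: 80
----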
modules/nw-cluster-mtu-change.adoc (62 additions & 90 deletions)
@@ -23,8 +23,7 @@ ifndef::outposts[= Changing the cluster network MTU]
 ifdef::outposts[= Changing the cluster network MTU to support AWS Outposts]
 
 ifdef::outposts[]
-During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster.
-You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
+During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
 endif::outposts[]
 
 ifndef::outposts[As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster.]
@@ ... @@
-. Prepare your configuration for the hardware MTU:
-
-** If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration:
+. Prepare your configuration for the hardware MTU by selecting one of the following methods:
++
+.. If your hardware MTU is specified with DHCP, update your DHCP configuration similar to the following dnsmasq configuration:
 +
 [source,text]
 ----
-dhcp-option-force=26,<mtu>
+dhcp-option-force=26,<mtu> <1>
 ----
+<1> Where `<mtu>` specifies the hardware MTU for the DHCP server to advertise.
++
+.. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
++
+.. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This method is the default for {product-title} if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
 +
---
-where:
-
-`<mtu>`:: Specifies the hardware MTU for the DHCP server to advertise.
---
-
-** If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
-
-** If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for {product-title} if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
-
 ... Find the primary network interface by entering the following command:
 +
 [source,terminal]
 ----
-$ oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0
+$ oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 <1> <2>
 ----
+<1> Where `<node_name>` specifies the name of a node in your cluster.
+<2> Where `ovs-if-phys0` is the primary network interface. For nodes that use multiple NIC bonds, append `bond-sub0` for the primary NIC bond interface and `bond-sub1` for the secondary NIC bond interface.
 +
---
-where:
-
-`<node_name>`:: Specifies the name of a node in your cluster.
---
-
-... Create the following NetworkManager configuration in the `<interface>-mtu.conf` file:
+... Create the following NetworkManager configuration in the `<interface>-mtu.conf` file.
 +
 .Example NetworkManager connection configuration
 [source,ini]
 ----
 [connection-<interface>-mtu]
-match-device=interface-name:<interface>
-ethernet.mtu=<mtu>
+match-device=interface-name:<interface> <1>
+ethernet.mtu=<mtu> <2>
 ----
+<1> Where `<interface>` specifies the primary network interface name.
+<2> Where `<mtu>` specifies the new hardware MTU value.
 +
---
-where:
+[NOTE]
+====
+For nodes that use a network interface controller (NIC) bond interface, list the bond interface and any sub-interfaces in the `<bond-interface>-mtu.conf` file.
 
-`<mtu>`:: Specifies the new hardware MTU value.
-`<interface>`:: Specifies the primary network interface name.
---
+.Example NetworkManager connection configuration
+[source,ini]
+----
+[bond0-mtu]
+match-device=interface-name:bond0
+ethernet.mtu=9000
 
-... Create two `MachineConfig` objects, one for the control plane nodes and another for the worker nodes in your cluster:
+[connection-eth0-mtu]
+match-device=interface-name:eth0
+ethernet.mtu=9000
 
-.... Create the following Butane config in the `control-plane-interface.bu` file:
-+
-[NOTE]
-====
-include::snippets/butane-version.adoc[]
+[connection-eth1-mtu]
+match-device=interface-name:eth1
+ethernet.mtu=9000
+----
 ====
 +
-[source,yaml, subs="attributes+"]
+... Create the following Butane config in the `control-plane-interface.bu` file, which is the `MachineConfig` object for the control plane nodes:
++
+[source,yaml,subs="attributes+"]
 ----
 variant: openshift
 version: {product-version}.0
@@ -145,16 +144,11 @@ storage:
         mode: 0600
 ----
 <1> Specify the NetworkManager connection name for the primary network interface.
-<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.
-
-.... Create the following Butane config in the `worker-interface.bu` file:
+<2> Specify the local filename for the updated NetworkManager configuration file from the previous step. For NIC bonds, specify the name for the `<bond-interface>-mtu.conf` file.
 +
-[NOTE]
-====
-include::snippets/butane-version.adoc[]
-====
+... Create the following Butane config in the `worker-interface.bu` file, which is the `MachineConfig` object for the compute nodes:
 +
-[source,yaml,subs="attributes+"]
+[source,yaml,subs="attributes+"]
 ----
 variant: openshift
 version: {product-version}.0
@@ -170,9 +164,9 @@ storage:
         mode: 0600
 ----
 <1> Specify the NetworkManager connection name for the primary network interface.
-<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.
-
-.... Create `MachineConfig` objects from the Butane configs by running the following command:
+<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.
++
+... Create `MachineConfig` objects from the Butane configs by running the following command:
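The command itself is truncated in this diff; a typical invocation, assuming the `butane` CLI and its `-o` output flag, is sketched below, mirroring the loop style used later in this module:

[source,terminal]
----
$ for manifest in control-plane-interface worker-interface; do
    butane $manifest.bu -o $manifest.yaml
  done
----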
@@ ... @@
-`<overlay_from>`:: Specifies the current cluster network MTU value.
-`<overlay_to>`:: Specifies the target MTU for the cluster network. This value is set relative to the value of `<machine_to>`. For OVN-Kubernetes, this value must be `100` less than the value of `<machine_to>`.
-`<machine_to>`:: Specifies the MTU for the primary network interface on the underlying host network.
---
+<1> Where `<overlay_from>` specifies the current cluster network MTU value.
+<2> Where `<overlay_to>` specifies the target MTU for the cluster network. For OVN-Kubernetes, this value must be `100` less than the value of `<machine_to>`.
+<3> Where `<machine_to>` specifies the MTU for the primary network interface on the underlying host network.
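As a worked example of the constraint in callout <2>: if the host interface must drop to an MTU of `1300` for an Outpost, the cluster network target can be at most `1200` (`1300 - 100`). Assuming these values feed the `spec.migration.mtu` stanza of the cluster `Network.operator.openshift.io` resource, and with hypothetical MTU values, the patch might look like this sketch:

[source,terminal]
----
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
  '{"spec": {"migration": {"mtu": {"network": {"from": 8901, "to": 1200}, "machine": {"to": 1300}}}}}'
----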
@@ ... @@
 * The value of `machineconfiguration.openshift.io/state` field is `Done`.
 * The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
---
 
 .. To confirm that the machine config is correct, enter the following command:
 +
 [source,terminal]
 ----
-$ oc get machineconfig <config_name> -o yaml | grep ExecStart
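Per the note later in this module, a successful deployment makes this check return the MTU migration unit line:

.Example output
[source,terminal]
----
ExecStart=/usr/local/bin/mtu-migration.sh
----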
@@ ... @@
 . Update the underlying network interface MTU value:
-
++
 ** If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The MachineConfig Operator automatically performs a rolling reboot of the nodes in your cluster.
 +
 [source,terminal]
@@ -287,7 +273,7 @@ $ for manifest in control-plane-interface worker-interface; do
     oc create -f $manifest.yaml
   done
 ----
-
++
 ** If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure.
 
 . As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
@@ ... @@
-* The value of `machineconfiguration.openshift.io/state` field is `Done`.
-* The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
---
+* The value of `machineconfiguration.openshift.io/state` field is `Done`.
+* The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
 
 .. To confirm that the machine config is correct, enter the following command:
 +
 [source,terminal]
 ----
-$ oc get machineconfig <config_name> -o yaml | grep path:
-where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
+<1> Where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
 +
 If the machine config is successfully deployed, the previous output contains the `/etc/NetworkManager/conf.d/99-<interface>-mtu.conf` file path and the `ExecStart=/usr/local/bin/mtu-migration.sh` line.
@@ ... @@
-`<mtu>`:: Specifies the new cluster network MTU that you specified with `<overlay_to>`.
---
+<1> Replace `<mtu>` with the new cluster network MTU that you specified with `<overlay_to>`.
 
 . After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
 +
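For reference, the finalization patch that callout <1> describes might look like the following sketch, assuming the cluster network MTU is set through `spec.defaultNetwork.ovnKubernetesConfig.mtu` on the cluster `Network.operator.openshift.io` resource; the MTU value continues the earlier hypothetical example:

[source,terminal]
----
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
  '{"spec": {"migration": null, "defaultNetwork": {"ovnKubernetesConfig": {"mtu": 1200}}}}'
----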
@@ -398,15 +375,10 @@ $ oc get nodes
 +
 [source,terminal]
 ----
-$ oc debug node/<node> -- chroot /host ip address show <interface>
+$ oc debug node/<node> -- chroot /host ip address show <interface> <1> <2>
 ----
-+
-where:
-+
---
-`<node>`:: Specifies a node from the output from the previous step.
-`<interface>`:: Specifies the primary network interface name for the node.
---
+<1> Where `<node>` specifies a node from the output from the previous step.
+<2> Where `<interface>` specifies the primary network interface name for the node.
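If the migration succeeded, the interface line of the output reports the new value; a hypothetical excerpt:

.Example output
[source,terminal]
----
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc mq state UP group default qlen 1000
----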
networking/changing-cluster-network-mtu.adoc (3 additions & 0 deletions)
@@ -9,7 +9,10 @@ toc::[]
 [role="_abstract"]
 As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change.