OSDOCS-14356: Added bond best practices to the networking docs #92458

Open: wants to merge 1 commit into base: main
@@ -28,6 +28,9 @@ include::modules/ipi-install-configuring-networking.adoc[leveloffset=+1]
// Creating a manifest object that includes a customized `br-ex` bridge
include::modules/creating-manifest-file-customized-br-ex-bridge.adoc[leveloffset=+1]

// Open vSwitch (OVS) bonding
include::modules/nw-ovs-bonding.adoc[leveloffset=+1]

// Scale each machine set to compute nodes
include::modules/creating-scaling-machine-sets-compute-nodes-networking.adoc[leveloffset=+2]

@@ -17,6 +17,9 @@ include::modules/nw-enabling-a-provisioning-network-after-installation.adoc[leve
// Creating a manifest object that includes a customized `br-ex` bridge
include::modules/creating-manifest-file-customized-br-ex-bridge.adoc[leveloffset=+1]

// Open vSwitch (OVS) bonding
include::modules/nw-ovs-bonding.adoc[leveloffset=+1]

// Services for a user-managed load balancer
include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1]

@@ -69,6 +69,9 @@ include::modules/installation-load-balancing-user-infra.adoc[leveloffset=+2]
// Creating a manifest object that includes a customized `br-ex` bridge
include::modules/creating-manifest-file-customized-br-ex-bridge.adoc[leveloffset=+1]

// Open vSwitch (OVS) bonding
include::modules/nw-ovs-bonding.adoc[leveloffset=+1]

// Scale each machine set to compute nodes
include::modules/creating-scaling-machine-sets-compute-nodes-networking.adoc[leveloffset=+2]

@@ -90,6 +90,9 @@ include::modules/installation-load-balancing-user-infra.adoc[leveloffset=+2]
// Creating a manifest object that includes a customized `br-ex` bridge
include::modules/creating-manifest-file-customized-br-ex-bridge.adoc[leveloffset=+1]

// Open vSwitch (OVS) bonding
include::modules/nw-ovs-bonding.adoc[leveloffset=+1]

// Scale each machine set to compute nodes
include::modules/creating-scaling-machine-sets-compute-nodes-networking.adoc[leveloffset=+2]

@@ -84,6 +84,9 @@ include::modules/installation-load-balancing-user-infra.adoc[leveloffset=+2]
// Creating a manifest object that includes a customized `br-ex` bridge
include::modules/creating-manifest-file-customized-br-ex-bridge.adoc[leveloffset=+1]

// Open vSwitch (OVS) bonding
include::modules/nw-ovs-bonding.adoc[leveloffset=+1]

// Scale each machine set to compute nodes
include::modules/creating-scaling-machine-sets-compute-nodes-networking.adoc[leveloffset=+2]

9 changes: 9 additions & 0 deletions modules/installation-network-user-infra.adoc
@@ -87,6 +87,15 @@ endif::ibm-z[]
ifndef::ibm-z[]
During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.

Use a DHCP server for the long-term management of the machines for your cluster. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. As a cluster administrator, ensure that you reserve the following IP addresses for components that interact with the DHCP server:

* Two unique virtual IP (VIP) addresses: one for the API endpoint and one for the wildcard ingress endpoint.
* One IP address for the provisioner node.
* An IP address for each control plane node.
* An IP address for each compute node.

If you have multiple network interfaces that interact with a bonded interface, reserve the same IP addresses for these interfaces to ensure better load balancing, fault tolerance, and bandwidth capabilities for your cluster network infrastructure.
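For example, on an ISC DHCP server, a reservation for a control plane node might use a host declaration similar to the following. The MAC address, IP address, and hostname are placeholder values; replace them with values from your environment.

[source,text]
----
# Example reservation for one control plane node (placeholder values)
host control-plane-0 {
  hardware ethernet 52:54:00:ab:cd:01;
  fixed-address 192.168.10.20;
  option host-name "control-plane-0.example.com";
}
----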

[NOTE]
====
* It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines.
@@ -7,12 +7,20 @@
:_mod-docs-content-type: PROCEDURE
[id="installation-user-infra-machines-advanced-customizing-live-{boot}_network_keyfile_{context}"]
= Modifying a live install {boot-media} with customized network settings

You can embed a NetworkManager keyfile into the live {boot-media} and pass it through to the installed system with the `--network-keyfile` flag of the `customize` subcommand.

[WARNING]
====
When creating a connection profile, you must use a `.nmconnection` filename extension in the filename of the connection profile. If you do not use a `.nmconnection` filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work.
When creating a connection profile, you must use a `.nmconnection` filename extension in the filename of the connection profile. If you do not use a `.nmconnection` filename extension, the cluster applies the connection profile to the live environment, but the cluster does not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work.
====

By creating a customized connection profile for nodes in your cluster, you can apply specific settings to your network to meet your networking needs.

[IMPORTANT]
====
Consider that for a customized connection profile that applies changes to a physical interface and a bonding interface, the `configure-ovs` script might reset settings for these interfaces during a reboot operation. To fix this issue, set the `autoconnect-priority` parameter to `99` so that all interfaces are activated through the custom connection profile and not the default connection profile, which has `autoconnect-priority` set to `0`.
====
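A minimal NetworkManager keyfile that sets this priority might look like the following sketch. The connection name, interface name, and bond mode are example values; adjust them for your environment.

[source,ini]
----
# Example: /etc/NetworkManager/system-connections/bond0.nmconnection
[connection]
id=bond0
type=bond
interface-name=bond0
autoconnect=true
# Ensure this profile wins over the default profile (priority 0).
autoconnect-priority=99

[bond]
mode=active-backup

[ipv4]
method=auto
----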

.Procedure

31 changes: 8 additions & 23 deletions modules/installation-user-infra-machines-static-network.adoc
@@ -287,9 +287,14 @@ ifndef::ibm-z[]
[discrete]
=== Bonding multiple SR-IOV network interfaces to a dual port NIC interface

Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the `bond=` option.
As an optional configuration, you can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the `bond=` option. To apply this configuration to your cluster, complete the procedure steps for each node in your cluster.

On each node, you must perform the following tasks:
[IMPORTANT]
====
If your network configuration includes an Open vSwitch (OVS) interface and you enabled `active-backup` bond mode, you must specify a Media Access Control (MAC) address failover. This configuration prevents node communication issues with the bonded interfaces, such as `eno1f0` and `eno2f0`.
====

.Procedure

ifndef::installing-ibm-power[]
. Create the SR-IOV virtual functions (VFs) following the guidance in link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/managing-virtual-devices_configuring-and-managing-virtualization#managing-sr-iov-devices_managing-virtual-devices[Managing SR-IOV devices]. Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section.
@@ -314,6 +319,7 @@ The following examples illustrate the syntax you must use:
----
bond=bond0:eno1f0,eno2f0:mode=active-backup
ip=bond0:dhcp
fail_over_mac=1
----

** To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example:
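A static-address form of these boot options might look like the following sketch, which uses the standard `ip=<ip>::<gateway>:<netmask>:<hostname>:<interface>:none` syntax. All addresses and the hostname are illustrative values only; substitute values from your environment.

[source,text]
----
bond=bond0:eno1f0,eno2f0:mode=active-backup
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none
fail_over_mac=1
----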
@@ -393,12 +399,6 @@ a|Override the Ignition platform ID for the installed system.
a|`--console <spec>`
a|Set the kernel and bootloader console for the installed system. For more information about the format of `<spec>`, see the link:https://www.kernel.org/doc/html/latest/admin-guide/serial-console.html[Linux kernel serial console] documentation.

a|`--append-karg <arg>...`
a|Append a default kernel argument to the installed system.

a|`--delete-karg <arg>...`
a|Delete a default kernel argument from the installed system.

a|`-n`, `--copy-network`
a|Copy the network configuration from the install environment.

@@ -464,12 +464,6 @@ a|Specify the kernel and bootloader console for the destination system.
a|`--dest-device <path>`
a|Install and overwrite the specified destination device.

a|`--dest-karg-append <arg>`
a|Add a kernel argument to each boot of the destination system.

a|`--dest-karg-delete <arg>`
a|Delete a kernel argument from each boot of the destination system.

a|`--network-keyfile <path>`
a|Configure networking by using the specified NetworkManager keyfile for live and destination systems.

@@ -488,15 +482,6 @@ a|Apply the specified installer configuration file.
a|`--live-ignition <path>`
a|Merge the specified Ignition config file into a new configuration fragment for the live environment.

a|`--live-karg-append <arg>`
a|Add a kernel argument to each boot of the live environment.

a|`--live-karg-delete <arg>`
a|Delete a kernel argument from each boot of the live environment.

a|`--live-karg-replace <k=o=n>`
a|Replace a kernel argument in each boot of the live environment, in the form `key=old=new`.

a|`-f`, `--force`
a|Overwrite an existing Ignition config.

35 changes: 35 additions & 0 deletions modules/nw-ovs-bonding.adoc
@@ -0,0 +1,35 @@
// Module included in the following assemblies:
//
// * networking/configuring-ingress-cluster-traffic-ingress-controller.adoc

:_mod-docs-content-type: CONCEPT
[id="nw-ovs-bonding_{context}"]
= Open vSwitch (OVS) bonding

OVS bonding, also known as _link aggregation_, is a method that combines multiple physical network interfaces into a single logical interface, which is called either the _bond_ or the _link aggregate_. By applying this method to your network, you can increase performance, reliability, and load balancing capabilities for your network.

With an OVS bonding configuration on your network, each physical interface acts as a port and connects to a specific bond. A bond then connects to a virtual switch or an OVS bridge. This connection layout provides increased bandwidth and fault tolerance capabilities for traffic that runs on your network.

Consider the following architectural layout for OVS bridges that interact with OVS interfaces:

* The bridge MAC address is used for local communication.
* The physical MAC addresses of physical interfaces do not handle traffic.
* OVS handles all MAC address management at the OVS bridge level.

This layout simplifies bond interface management because bonds act as data paths and MAC address management is centralized at the OVS bridge level.

You can choose the following OVS bonding modes for your network:

* `active-backup` mode provides link aggregation capabilities for your network, where one physical interface acts as the active port while the other physical interfaces act as standby ports. This mode provides fault-tolerant connections for your network.
* `kernel-bonding` mode is a built-in Linux kernel function where link aggregation can exist among multiple Ethernet interfaces to create a single logical interface. This mode does not provide the same level of customization as supported OVS modes, such as `balance-slb` mode.
* `balance-slb` mode, where an interface provides source load balancing (SLB) capabilities for a cluster that runs virtualization workloads. The interface can act independently without needing to communicate with a network switch.
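For reference, you can create a bond in a given mode directly with the `ovs-vsctl` command. The bridge, bond, and interface names in this sketch are example values:

[source,terminal]
----
$ ovs-vsctl add-bond br-ex bond0 eno1f0 eno2f0 bond_mode=active-backup
----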

For `kernel-bonding` mode, the bond interfaces exist outside the data path of the bridge interface. Network traffic in this mode is not sent or received on the bond interface port but instead requires additional bridging capabilities for MAC address assignment at the kernel level. For `active-backup` and `balance-slb` modes, the bond interfaces exist in the same data path as the OVS bridge interface, so the OVS bridge manages the bonding logic instead of the physical interfaces managing traffic.

Enabling `balance-slb` mode for an OVS bonding configuration provides source Media Access Control (MAC) hash-based load balancing capabilities to your network. With this mode, the source MAC address is processed by a hash function, and the output of the hash determines the physical interface that carries the traffic. Consider enabling this mode for an advanced network configuration that has multiple source IP addresses and ports.
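The link selection can be illustrated with the following conceptual sketch. This is not the actual OVS implementation, which also factors in the VLAN tag and rebalances traffic periodically; the function and interface names are hypothetical.

```python
def select_link(src_mac: str, links: list[str]) -> str:
    """Pick an egress link for a source MAC address by hashing it.

    Conceptual sketch of SLB only: real OVS balance-slb hashes the
    source MAC together with the VLAN and rebalances over time.
    """
    # Derive a stable integer from the MAC address octets.
    digest = sum(int(octet, 16) for octet in src_mac.split(":"))
    # Map the hash onto one of the bonded physical interfaces.
    return links[digest % len(links)]


links = ["eno1f0", "eno2f0"]
# The same source MAC always maps to the same link.
print(select_link("52:54:00:ab:cd:01", links))
```

The key property shown here is determinism: traffic from one source MAC address always leaves through the same physical interface, which spreads distinct sources across the bonded links without any switch-side configuration.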

Consider that an OVS bond with `balance-slb` mode enabled might experience issues if the bond forwards unknown unicast traffic from one physical network interface controller (NIC) into the physical network through another NIC. Such a situation can result in a Layer 2 loop, or _bridge loop_, that in turn causes MAC flapping, where the same MAC address exists in multiple network locations for a period of time, for physical switches that exist in the network infrastructure.

This behavior is expected because a remote switch does not learn the MAC address for the destination of a unicast packet, which causes the packet to exist on all links available in the SLB bond configuration. As a workaround for this issue, you can set the bond to `active-backup` mode during MAC address assignment and then switch the bond to `balance-slb` mode.
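Assuming a bond port named `bond0` (an example name), the workaround can be applied with `ovs-vsctl` commands similar to the following:

[source,terminal]
----
$ ovs-vsctl set port bond0 bond_mode=active-backup  # during MAC address assignment
$ ovs-vsctl set port bond0 bond_mode=balance-slb    # after MAC addresses are learned
----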


2 changes: 1 addition & 1 deletion modules/nw-understanding-networking-service-to-pod.adoc
@@ -28,7 +28,7 @@ Key concepts of service-to-pod communication include:

Services use selectors to identify the pods that should receive the traffic. The selectors match labels on the pods to determine which pods are part of the service. Example: A service with the selector `app: myapp` will route traffic to all pods with the label `app: myapp`.

Endpoints are dynamically updated to reflect the current IP addresses of the pods that match the service selector. {product-name} maintains these endpoints and ensures that the service routes traffic to the correct pods.
Endpoints are dynamically updated to reflect the current IP addresses of the pods that match the service selector. {product-title} maintains these endpoints and ensures that the service routes traffic to the correct pods.

The communication flow refers to the sequence of steps and interactions that occur when a service in Kubernetes routes traffic to the appropriate pods. The typical communication flow for service-to-pod communication is as follows:

16 changes: 7 additions & 9 deletions modules/virt-example-bond-nncp.adoc
@@ -5,22 +5,20 @@
[id="virt-example-bond-nncp_{context}"]
= Example: Bond interface node network configuration policy

Create a bond interface on nodes in the cluster by applying a `NodeNetworkConfigurationPolicy` manifest
to the cluster.
Create a bond interface on nodes in the cluster by applying a `NodeNetworkConfigurationPolicy` manifest to the cluster.

[NOTE]
====
{VirtProductName} only supports the following bond modes:
{VirtProductName} supports only the following bond modes.

* mode=1 active-backup +
* mode=2 balance-xor +
* mode=4 802.3ad +
* `mode=1 active-backup` does not require a switch configuration but might cause loss of connectivity for a guest network.
* `mode=2 balance-xor` or similar typically requires a switch configuration to establish a port grouping and additional load-balancing configurations. If you set `xmit_hash_policy` to `vlan+srcmac` and `balance-slb: 1`, no switch configuration is needed because the network configuration behaves similarly to `balance-slb` mode on an Open vSwitch (OVS) bonding interface.
* `mode=4 802.3ad` or similar does require a switch configuration to establish a port grouping and additional load-balancing configurations.

Other bond modes are not supported.
Other modes are not supported on {VirtProductName}.
====

The following YAML file is an example of a manifest for a bond interface.
It includes samples values that you must replace with your own information.
The following YAML file is an example of a manifest for a bond interface. The file includes sample values that you must replace with your own information.

[source,yaml]
----
5 changes: 5 additions & 0 deletions modules/virt-example-nmstate-multiple-interfaces.adoc
@@ -7,6 +7,11 @@

You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest.

[IMPORTANT]
====
If multiple interfaces use the same default configuration, a single NetworkManager connection profile activates on multiple interfaces simultaneously, and this causes the connections to have the same universally unique identifier (UUID). To avoid this issue, ensure that each interface has a specific configuration that is different from the default configuration.
====

The following example YAML file creates a bond that is named `bond10` across two NICs and a VLAN that is named `bond10.103` that connects to the bond.

[source,yaml]