Because encrypting and decrypting traffic between node hosts uses CPU power, performance is affected both in throughput and CPU usage on the nodes when encryption is enabled, regardless of the IP security system being used.

IPSec encrypts traffic at the IP payload level, before it hits the NIC, protecting fields that would otherwise be used for NIC offloading. This means that some NIC acceleration features might not be usable when IPSec is enabled, which leads to decreased throughput and increased CPU usage.
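
The following is a minimal sketch of where IPsec is toggled, assuming a cluster that uses the OVN-Kubernetes plugin and that pod-to-pod encryption is configured through the `ipsecConfig` stanza of the cluster Network operator configuration. The `mode` field is an assumption based on newer releases; older releases enable IPsec with an empty `ipsecConfig: {}` object.

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      ipsecConfig:
        mode: Full  # Encrypts pod-to-pod traffic; expect higher CPU usage and lower throughput
----
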
There are two important maximum transmission units (MTUs): the network interface controller (NIC) MTU and the cluster network MTU.

The NIC MTU is configured at the time of {product-title} installation, and you can also change the MTU of a cluster as a postinstallation task. For more information, see "Changing cluster network MTU".

For a cluster that uses the OVN-Kubernetes plugin, the MTU must be at least `100` bytes less than the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value, such as `8900`. If you are optimizing for lowest latency, choose a lower value.

[IMPORTANT]
====
If your cluster uses the OVN-Kubernetes plugin and the NIC sends and receives unfragmented jumbo frame packets over the network, you must specify `9000` bytes as the MTU value for the NIC so that pods do not fail.
====
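
As an illustration of the relationship between the two MTUs, the following sketch assumes a NIC MTU of `9000` bytes for jumbo frames. The cluster (overlay) MTU is then set at least `100` bytes lower through the `mtu` field of the OVN-Kubernetes configuration in the Network operator configuration.

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 8900  # Cluster network MTU: 9000-byte NIC MTU minus the 100-byte overlay overhead
----
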
= Recommended practices for installing large scale clusters

When installing large clusters or scaling the cluster to larger node counts, set the cluster network `cidr` accordingly in your `install-config.yaml` file before you install the cluster.

.Example `install-config.yaml` file with a network configuration for a cluster with a large node count
[source,yaml]
----
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/10
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
----

The default cluster network `cidr` `10.128.0.0/14` cannot be used if the cluster size is more than 500 nodes. The `cidr` must be set to `10.128.0.0/12` or `10.128.0.0/10` to get to larger node counts beyond 500 nodes.
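
As a rough sizing sketch, assuming the default `hostPrefix` of `23`, each node is assigned a `/23` subnet from the cluster network, so the maximum node count is bounded by the number of `/23` subnets that fit into the `cidr`:

[source,yaml]
----
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14   # 2^(23-14) = 512 node subnets
    hostPrefix: 23
  # With the same hostPrefix, 10.128.0.0/12 yields 2^(23-12) = 2048 node subnets
  # and 10.128.0.0/10 yields 2^(23-10) = 8192 node subnets.
----
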
Geneve provides benefits over VLANs, such as an increase in networks from 4096 to over 16 million, and layer 2 connectivity across physical networks. This allows for all pods behind a service to communicate with each other, even if they are running on different systems.
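
These figures follow from the width of the network identifier in each encapsulation: a VLAN ID is 12 bits, while a Geneve Virtual Network Identifier (VNI) is 24 bits.

----
VLAN ID:    12 bits -> 2^12 = 4,096 networks
Geneve VNI: 24 bits -> 2^24 = 16,777,216 networks
----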

Geneve encapsulates all tunneled traffic in user datagram protocol (UDP) packets. However, this leads to increased CPU utilization. Both the outer and inner packets are subject to normal checksumming rules to guarantee that data is not corrupted during transit. Depending on CPU performance, this additional processing overhead can cause a reduction in throughput and increased latency when compared to traditional, non-overlay networks.

Cloud, VM, and bare metal CPU performance can be capable of handling much more than one Gbps network throughput. When using higher bandwidth links such as 10 or 40 Gbps, reduced performance can occur. This is a known issue in Geneve-based environments and is not specific to containers or {product-title}. Any network that relies on Geneve or VXLAN tunnels will perform similarly because of the tunnel implementation.

If you are looking to push beyond one Gbps, you can use Geneve-offload capable network adapters. Geneve-offload moves the packet checksum calculation and the associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter.

Geneve-offload does not reduce latency. However, CPU utilization is reduced even in latency tests.