Commit 6e805f8

Merge pull request #13327 from ahardin-rh/4-0-networking-optimization
Added Network optimization content to 4.0 docs
2 parents cef4943 + d853fcd commit 6e805f8

File tree

5 files changed: +174 −2 lines changed


_topic_map.yml

Lines changed: 2 additions & 0 deletions
@@ -188,6 +188,8 @@ Topics:
   File: understanding-networking
 - Name: Using cookies to keep route statefulness
   File: using-cookies-to-keep-route-statefulness
+- Name: Network optimization
+  File: network-optimization
 ---
 Name: Registry
 Dir: registry
modules/configuring-network-subnets.adoc

Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
// Module included in the following assemblies:
//
// networking/network-optimization.adoc

[id='configuring-network-subnets-{context}']
= Configuring network subnets

Every Pod is assigned an IP address. This enables Pod-to-Pod and Pod-to-node
communication without network address translation (NAT). By default, Pods are
assigned IP addresses from the 10.128.0.0/14 CIDR block. To change this range,
you must adjust the `ClusterNetwork`. To customize the IP addresses assigned
to services, you must adjust the `ServiceNetwork`.

.Procedure

To configure a custom IP range, also called a _subnet_, complete the following
steps.

. Generate the installation configuration file:
+
----
$ ./openshift-install --dir=new-install create install-config
----

. Change to the `new-install` directory:
+
----
$ cd new-install
----

. Edit the `install-config.yaml` file, setting the required fields under
`networking`:
+
----
networking:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostSubnetLength: 9
  machineCIDR: 10.0.0.0/16
  serviceCIDR: 172.30.0.0/16
  type: OpenShiftSDN
----
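+
As a quick check of these example values: `hostSubnetLength: 9` gives each node
2^9 = 512 Pod IP addresses, that is, a /23 subnet per node, and a 10.128.0.0/14
block divided into /23 host subnets can serve up to 2^(23-14) = 512 nodes.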

. Consume the customized `install-config.yaml` file to deploy the cluster:
+
----
$ ./openshift-install --dir=new-install create cluster
----
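
After the cluster deploys, you can spot-check the result. For example, assuming
`oc` is logged in to the new cluster, the following command lists Pod IPs, which
should fall within the configured `cidr`:

----
$ oc get pods --all-namespaces -o wide
----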

modules/optimizing-the-MTU-for-your-network.adoc

Lines changed: 77 additions & 0 deletions
@@ -0,0 +1,77 @@
// Module included in the following assemblies:
//
// networking/network-optimization.adoc

[id='optimizing-the-mtu-for-your-network-{context}']
= Optimizing the MTU for your network

There are two important maximum transmission units (MTUs): the network
interface card (NIC) MTU and the SDN overlay's MTU.

MTUs are now autodetected, so you do not normally need to adjust MTU settings.
When the Cluster Network Operator runs, it determines the MTU used by the
default route, then sets the overlay MTU accordingly. If you want a fixed MTU
size for your workloads, you must set the MTU explicitly in the network
configuration before installation.
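
Because VXLAN encapsulation adds 50 bytes to every packet, the overlay MTU must
be at least 50 bytes smaller than the NIC MTU. For example, on a standard
1500-byte Ethernet network, the overlay MTU works out to 1500 - 50 = 1450,
which is the `mtu` value shown in the sample configuration below.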

.Procedure

To set the MTU explicitly, complete the following steps.

. Generate the installation configuration file:
+
----
$ ./openshift-install create install-config
----

. Run the following to generate the required manifests:
+
----
$ ./openshift-install create manifests
----

. Edit the `manifests/cluster-network-02-config.yml` file. The following is the
default state of the file. Adjust the settings as necessary:
+
----
apiVersion: networkoperator.openshift.io/v1
kind: NetworkConfig
metadata:
  creationTimestamp: null
  name: default
spec:
  additionalNetworks: null
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostSubnetLength: 9
  defaultNetwork:
    openshiftSDNConfig:
      mode: NetworkPolicy <1>
    otherConfig: null
    type: OpenShiftSDN
  serviceNetwork: 172.30.0.0/16
status: {}
----
+
<1> `NetworkPolicy` is the default.
+
You can also set the mode to `Multitenant` or `Subnet`, and configure
`vxlanPort` and `mtu`:
+
----
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      vxlanPort: 4789
      mtu: 1450
      useExternalOpenvswitch: false
----

. Run the following to use the manifests to create the cluster:
+
----
$ ./openshift-install create cluster
----
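
The `vxlanPort` value of 4789 shown above is the IANA-assigned VXLAN UDP port.
The `mtu` value follows the 50-byte VXLAN overhead rule: as a further
illustration (not a value from this procedure), a network using 9000-byte jumbo
frames would set `mtu: 8950`.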

networking/PLACEHOLDER

Lines changed: 0 additions & 2 deletions
This file was deleted.

networking/network-optimization.adoc

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
[id='network-optimization']
= Network optimization
include::modules/common-attributes.adoc[]
:context: networking

toc::[]

The OpenShift SDN uses Open vSwitch, Virtual Extensible LAN (VXLAN) tunnels,
OpenFlow rules, and iptables. This network can be tuned by using jumbo frames,
network interface card (NIC) offloads, multi-queue, and ethtool settings.

VXLAN provides benefits over VLANs, such as an increase in networks from 4096 to
over 16 million, and layer 2 connectivity across physical networks. This allows
all Pods behind a service to communicate with each other, even if they are
running on different systems.

VXLAN encapsulates all tunneled traffic in user datagram protocol (UDP) packets.
However, this leads to increased CPU utilization. Both the outer and inner
packets are subject to normal checksumming rules to guarantee that data is not
corrupted during transit. Depending on CPU performance, this additional
processing overhead can cause a reduction in throughput and increased latency
when compared to traditional, non-overlay networks.

Cloud, VM, and bare metal CPUs can handle much more than 1 Gbps of network
throughput, but reduced performance can occur when using higher bandwidth links
such as 10 or 40 Gbps. This is a known issue in VXLAN-based environments and is
not specific to containers or {product-title}. Any network that relies on VXLAN
tunnels performs similarly because of the VXLAN implementation.

If you are looking to push beyond 1 Gbps, you can:

* Evaluate network plug-ins that implement different routing techniques, such as
border gateway protocol (BGP).
* Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet
checksum calculation and associated CPU overhead off of the system CPU and onto
dedicated hardware on the network adapter. This frees up CPU cycles for use by
Pods and applications, and allows users to utilize the full bandwidth of their
network infrastructure.

VXLAN-offload does not reduce latency. However, CPU utilization is reduced even
in latency tests.
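
To find out whether a network adapter offers VXLAN-offload, you can inspect its
offload features with `ethtool`. For example, assuming the interface is named
`eth0` (substitute your own interface name), output similar to
`tx-udp_tnl-segmentation: on` indicates that VXLAN tunnel segmentation is
handled by the adapter hardware:

----
$ ethtool -k eth0 | grep tnl-segmentation
----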

include::modules/configuring-network-subnets.adoc[leveloffset=+1]

include::modules/optimizing-the-MTU-for-your-network.adoc[leveloffset=+1]
