
Commit f798153

Merge pull request #87228 from DCChadwick/osdocs8646b
OSDOCS-8646: Rewriting introduction to networking (ContentX)
2 parents dfbf620 + 4499d7d commit f798153

34 files changed: +926 −46 lines changed
_topic_maps/_topic_map.yml

Lines changed: 0 additions & 2 deletions
@@ -1333,8 +1333,6 @@ Name: Networking
   Dir: networking
   Distros: openshift-enterprise,openshift-origin
   Topics:
-  - Name: About networking
-    File: about-networking
   - Name: Understanding networking
     File: understanding-networking
   - Name: Zero trust networking

modules/nw-load-balancing-about.adoc

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
// Module included in the following assemblies:
//
// * networking/understanding-networking.adoc

:_mod-docs-content-type: CONCEPT
[id="nw-load-balancing-about_{context}"]
= Supported load balancers

Load balancing distributes incoming network traffic across multiple servers to maintain the health and efficiency of your clusters by ensuring that no single server bears too much load. Load balancers are devices that perform load balancing. They act as intermediaries between clients and servers to manage and direct traffic based on predefined rules.

{product-title} supports the following types of load balancers:

* Classic Load Balancer (CLB)
* Elastic Load Balancing (ELB)
* Network Load Balancer (NLB)
* Application Load Balancer (ALB)

ELB is the default load-balancer type for AWS routers. CLB is the default for self-managed environments. NLB is the default for Red Hat OpenShift Service on AWS (ROSA).

[IMPORTANT]
====
Use an ALB in front of an application, but not in front of a router. Using an ALB requires the AWS Load Balancer Operator add-on. This Operator is not supported for all {aws-first} regions or for all {product-title} profiles.
====
Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
// Module included in the following assemblies:
//
// * networking/understanding-networking.adoc

:_mod-docs-content-type: CONCEPT
[id="nw-load-balancing-configure-define-type_{context}"]
= Define the default load balancer type

When installing the cluster, you can specify the type of load balancer that you want to use. The load balancer type that you choose at cluster installation is applied to the entire cluster.

This example shows how to define the default load-balancer type for a cluster deployed on {aws-short}. You can apply the procedure on other supported platforms.

[source,yaml]
----
apiVersion: v1
kind: Network
metadata:
  name: cluster
platform:
  aws: <1>
    lbType: classic <2>
----
<1> The `platform` key represents the platform on which you have deployed your cluster. This example uses `aws`.
<2> The `lbType` key represents the load balancer type. This example uses the Classic Load Balancer, `classic`.
Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
// Module included in the following assemblies:
//
// * networking/understanding-networking.adoc

:_mod-docs-content-type: CONCEPT
[id="nw-load-balancing-configure-specify-behavior_{context}"]
= Specify load balancer behavior for an Ingress Controller

After you install a cluster, you can configure your Ingress Controller to specify how services are exposed to external networks, so that you can better control the settings and behavior of a load balancer.

[NOTE]
====
Changing the load balancer settings on an Ingress Controller might override the load balancer settings that you specified at installation.
====

[source,yaml]
----
apiVersion: v1
kind: Network
metadata:
  name: cluster
endpointPublishingStrategy:
  loadBalancer: <1>
    dnsManagementPolicy: Managed
    providerParameters:
      aws:
        classicLoadBalancer: <2>
          connectionIdleTimeout: 0s
        type: Classic
      type: AWS
    scope: External
  type: LoadBalancerService
----
<1> The `loadBalancer` field specifies the load balancer configuration settings.
<2> The `classicLoadBalancer` field sets the load balancer to `classic` and includes settings specific to the CLB on {aws-short}.
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
// Module included in the following assemblies:
//
// * networking/understanding-networking.adoc

:_mod-docs-content-type: CONCEPT
[id="nw-load-balancing-configure_{context}"]
= Configuring load balancers

You can define your default load-balancer type during cluster installation. After installation, you can configure your Ingress Controller to behave in a specific way that is not covered by the global platform configuration that you defined at cluster installation.
Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
// Module included in the following assemblies:
//
// * networking/understanding-networking.adoc

:_mod-docs-content-type: CONCEPT
[id="nw-understanding-networking-choosing-service-types_{context}"]
= Choosing between service types and API resources

Service types and API resources offer different benefits for exposing applications and securing network connections. By using the appropriate service type or API resource, you can effectively manage how your applications are exposed and ensure secure, reliable access for both internal and external clients.

{product-title} supports the following service types and API resources:

* Service types

** `ClusterIP` is intended for internal-only exposure. It is easy to set up and provides a stable internal IP address for accessing services within the cluster. `ClusterIP` is suitable for communication between services within the cluster.

** `NodePort` allows external access by exposing the service on each node's IP address at a static port. It is straightforward to set up and useful for development and testing. `NodePort` is good for simple external access without the need for a load balancer from the cloud provider.

** `LoadBalancer` automatically provisions an external load balancer to distribute traffic across multiple nodes. It is ideal for production environments where reliable, high-availability access is needed.

** `ExternalName` maps a service to an external DNS name so that services outside the cluster can be accessed by using the service's DNS name. It is good for integrating external services or legacy systems with the cluster.

** A headless service returns the list of individual pod IP addresses through DNS instead of providing a stable `ClusterIP`. This is ideal for stateful applications or scenarios where direct access to individual pod IPs is needed.

* API resources

** `Ingress` provides control over routing HTTP and HTTPS traffic, including support for load balancing, SSL/TLS termination, and name-based virtual hosting. It is more flexible than services alone and supports multiple domains and paths. `Ingress` is ideal when complex routing is required.

** `Route` is similar to `Ingress` but provides additional features, including TLS re-encryption and passthrough. It simplifies the process of exposing services externally. `Route` is best when you need advanced features, such as integrated certificate management.

If you need a simple way to expose a service to external traffic, `Route` or `Ingress` might be the best choice. These resources can be managed by a namespace administrator or developer. The easiest approach is to create a route, check its external DNS name, and configure your DNS to have a CNAME that points to the external DNS name, as shown in the example that follows this section.

For HTTP/HTTPS/TLS, `Route` or `Ingress` should suffice. Anything else is more complex and requires a cluster administrator to ensure that ports are accessible or that MetalLB is configured. `LoadBalancer` services are also an option in cloud environments or appropriately configured bare-metal environments.
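For illustration, the following is a minimal sketch of a route that exposes an internal service externally. The route name, namespace, host name, and the `backend-service` service are hypothetical, and edge TLS termination is only one of several termination options.

[source,yaml]
----
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: backend-route               # hypothetical route name
  namespace: default
spec:
  host: backend.apps.example.com    # omit to let the router generate a host name
  to:
    kind: Service
    name: backend-service           # hypothetical service that the route exposes
  port:
    targetPort: 8080                # target port on the pods that back the service
  tls:
    termination: edge               # terminate TLS at the router
----

After the route is admitted, you can read its external host name from its status and point a CNAME record in your external DNS at the router's canonical host name.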
Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * networking/understanding-networking.adoc

:_mod-docs-content-type: CONCEPT
[id="nw-understanding-networking-common-practices_{context}"]
= Common practices for networking services

In {product-title}, services create a single IP address for clients to use, even if multiple pods are providing that service. This abstraction enables seamless scaling, fault tolerance, and rolling upgrades without affecting clients.

Network security policies manage traffic within the cluster. Network controls empower namespace administrators to define ingress and egress rules for their pods. By using admin network policies, cluster administrators can establish namespace policies, override namespace policies, or set default policies when none are defined.

Egress firewall configurations control outbound traffic from pods. These configuration settings ensure that only authorized communication occurs. The ingress node firewall protects nodes by controlling incoming traffic. Additionally, the Universal Data Network manages data traffic across the cluster.
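As an illustration of the kind of rule a namespace administrator might define, the following is a minimal sketch of a `NetworkPolicy` object that allows ingress to back-end pods only from front-end pods in the same namespace. The namespace, policy name, and pod labels are hypothetical.

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical policy name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                  # pods that this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend             # only these pods may connect
    ports:
    - protocol: TCP
      port: 8080
----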
Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
// Module included in the following assemblies:
//
// * networking/understanding-networking.adoc

:_mod-docs-content-type: CONCEPT
[id="nw-understanding-networking-concepts-components_{context}"]
= Networking concepts and components

Networking in {product-title} uses several key components and concepts.

* Pods are the smallest deployable units in Kubernetes, and services provide stable IP addresses and DNS names for sets of pods. Each pod in a cluster is assigned a unique IP address. Pods use IP addresses to communicate directly with other pods, regardless of which node they are on. Pod IP addresses change when pods are destroyed and created. Services are also assigned unique IP addresses. A service is associated with the pods that can provide the service. When accessed, the service IP address provides a stable way to access pods by sending traffic to one of the pods that backs the service.

* The Route and Ingress APIs define rules that route HTTP, HTTPS, and TLS traffic to services within the cluster. {product-title} provides both the Route and Ingress APIs as part of the default installation, but you can add third-party Ingress Controllers to the cluster.

* The Container Network Interface (CNI) plugin manages the pod network to enable pod-to-pod communication.

* The Cluster Network Operator (CNO) manages the networking plugin components of a cluster. By using the CNO, you can set the network configuration, such as the pod network CIDR and the service network CIDR. A sample cluster network configuration follows this list.

* The DNS Operator manages DNS services within the cluster to ensure that services are reachable by their DNS names.

* Network controls define how pods are allowed to communicate with each other and with other network endpoints. These policies help secure the cluster by controlling traffic flow and enforcing rules for pod communication.

* Load balancing distributes network traffic across multiple servers to ensure reliability and performance.

* Service discovery is a mechanism for services to find and communicate with each other within the cluster.

* The Ingress Operator uses {product-title} routes to manage the router and enable external access to cluster services.
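For example, a minimal sketch of the cluster `Network` configuration that the CNO manages might look like the following. The CIDR values shown are the common installation defaults and appear here only for illustration.

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:               # pod network CIDR and per-node host prefix
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:               # service network CIDR
  - 172.30.0.0/16
  networkType: OVNKubernetes    # the default network plugin
----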
Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
// Module included in the following assemblies:
//
// * networking/understanding-networking.adoc

:_mod-docs-content-type: CONCEPT
[id="nw-understanding-networking-controls_{context}"]
= Network controls

Network controls define rules for how pods are allowed to communicate with each other and with other network endpoints. Network controls are implemented at the network level to ensure that only allowed traffic can flow between pods. This helps secure the cluster by restricting traffic flow and preventing unauthorized access.

* Admin network policies (ANP): ANPs are cluster-scoped custom resource definitions (CRDs). As a cluster administrator, you can use an ANP to define network policies at a cluster level. You cannot override these policies by using regular network policy objects. These policies enforce strict network security rules across the entire cluster. ANPs can specify ingress and egress rules to allow administrators to control the traffic that enters and leaves the cluster.

* Egress firewall: The egress firewall restricts egress traffic leaving the cluster. With this firewall, administrators can limit the external hosts that pods can access from within the cluster. You can configure egress firewall policies to allow or deny traffic to specific IP ranges, DNS names, or external services. This helps prevent unauthorized access to external resources and ensures that only allowed traffic can leave the cluster. A sample egress firewall configuration follows this list.

* Ingress node firewall: The ingress node firewall controls ingress traffic to the nodes in a cluster. With this firewall, administrators define rules that restrict which external hosts can initiate connections to the nodes. This helps protect the nodes from unauthorized access and ensures that only trusted traffic can reach the cluster.
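As a concrete illustration of an egress firewall, the following is a minimal sketch of an `EgressFirewall` object for the OVN-Kubernetes network plugin. The namespace and CIDR ranges are hypothetical.

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default                  # EgressFirewall objects must be named "default"
  namespace: example-namespace   # hypothetical namespace that the rules apply to
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.0.2.0/24 # allow traffic to this external range
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0    # deny all other egress traffic
----

Rules are evaluated in order, so the final `Deny` rule blocks any egress traffic that an earlier rule does not explicitly allow.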
Lines changed: 106 additions & 0 deletions
@@ -0,0 +1,106 @@
// Module included in the following assemblies:
//
// * networking/understanding-networking.adoc

:_mod-docs-content-type: PROCEDURE
[id="nw-understanding-networking-dns-example_{context}"]
= Example: DNS use case

For this example, a front-end application is running in one set of pods and a back-end service is running in another set of pods. The front-end application needs to communicate with the back-end service. You create a service for the back-end pods that gives them a stable IP address and DNS name. The front-end pods use this DNS name to access the back-end service regardless of changes to individual pod IP addresses.

By creating a service for the back-end pods, you provide a stable IP address and DNS name, `backend-service.default.svc.cluster.local`, that the front-end pods can use to communicate with the back-end service. This setup ensures that, even if individual pod IP addresses change, the communication remains consistent and reliable.

The following steps demonstrate an example of how to configure front-end pods to communicate with a back-end service by using DNS.

. Create the back-end service.

.. Deploy the back-end pods.
+
[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend-container
        image: your-backend-image
        ports:
        - containerPort: 8080
----

.. Define a service to expose the back-end pods.
+
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
----

. Create the front-end pods.

.. Define the front-end pods.
+
[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend-container
        image: your-frontend-image
        ports:
        - containerPort: 80
----

.. Apply the pod definition to your cluster.
+
[source,terminal]
----
$ oc apply -f frontend-deployment.yaml
----

. Configure the front-end application to communicate with the back-end service.
+
In your front-end application code, use the DNS name of the back-end service to send requests. For example, if your front-end application needs to fetch data from the back-end pods, your application might include the following code:
+
[source,javascript]
----
fetch('http://backend-service.default.svc.cluster.local/api/data')
  .then(response => response.json())
  .then(data => console.log(data));
----
