Commit 53d7070

Added Planning your environment according to object limits content for 4.0
1 parent f09d774 commit 53d7070

5 files changed: +217 -0 lines changed

_topic_map.yml
Lines changed: 2 additions & 0 deletions

@@ -105,6 +105,8 @@ Topics:
   File: using-node-tuning-operator
 - Name: Scaling the cluster monitoring Operator
   File: scaling-cluster-monitoring-operator
+- Name: Planning your environment according to object limits
+  File: planning-your-environment-according-to-object-limits
 ---
 Name: Operators
 Dir: operators

modules/how-to-plan-your-environment-according-to-application-requirements.adoc
Lines changed: 71 additions & 0 deletions

@@ -0,0 +1,71 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/planning-your-environment-according-to-object-limits.adoc

[id='how-to-plan-according-to-application-requirements_{context}']
= How to plan your environment according to application requirements

Consider an example application environment:

[options="header",cols="5"]
|===
|Pod type |Pod quantity |Max memory |CPU cores |Persistent storage

|apache
|100
|500 MB
|0.5
|1 GB

|node.js
|200
|1 GB
|1
|1 GB

|postgresql
|100
|1 GB
|2
|10 GB

|JBoss EAP
|100
|1 GB
|1
|1 GB
|===

Extrapolated requirements: 550 CPU cores, 450 GB RAM, and 1.4 TB storage.
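
These totals follow directly from the table; multiplying each pod quantity by its per-pod requirement reproduces them:

----
CPU cores: (100 x 0.5) + (200 x 1) + (100 x 2) + (100 x 1) = 550 cores
RAM:       (100 x 0.5 GB) + (200 x 1 GB) + (100 x 1 GB) + (100 x 1 GB) = 450 GB
Storage:   (100 x 1 GB) + (200 x 1 GB) + (100 x 10 GB) + (100 x 1 GB) = 1.4 TB
----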

Instance size for nodes can be modulated up or down, depending on your
preference. Nodes are often resource overcommitted. In this deployment
scenario, you can choose to run additional smaller nodes or fewer larger nodes
to provide the same amount of resources. Consider factors such as operational
agility and cost per instance.

[options="header",cols="4"]
|===
|Node type |Quantity |CPUs |RAM (GB)

|Nodes (option 1)
|100
|4
|16

|Nodes (option 2)
|50
|8
|32

|Nodes (option 3)
|25
|16
|64
|===

Some applications lend themselves well to overcommitted environments, and some
do not. For example, most Java applications and applications that use huge
pages do not allow for overcommitment, because their memory cannot be used by
other applications. In the example above, the environment would be roughly
30 percent overcommitted, a common ratio.
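
One way to arrive at that figure is to compare the requested CPU from the earlier example against the aggregate capacity of any of the node options, each of which provides 400 cores in total. Note that measuring the ratio this way is an assumption for illustration; the table itself does not state how the percentage is calculated:

----
Available CPU: 100 x 4 = 400 cores (likewise 50 x 8 and 25 x 16)
Requested CPU: 550 cores
Overcommitment: (550 - 400) / 550 = ~27 percent, or roughly 30 percent
----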

modules/how-to-plan-your-environment-according-to-cluster-limits.adoc
Lines changed: 39 additions & 0 deletions

@@ -0,0 +1,39 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/planning-your-environment-according-to-object-limits.adoc

[id='how-to-plan-according-to-cluster-limits_{context}']
= How to plan your environment according to cluster limits

[IMPORTANT]
====
Oversubscribing the physical resources on a node affects resource guarantees
that the Kubernetes scheduler makes during pod placement. Learn what measures
you can take to avoid memory swapping.
====

While planning your environment, determine how many pods are expected to fit
per node:

----
Maximum Pods per Cluster / Expected Pods per Node = Total Number of Nodes
----

The number of pods expected to fit on a node is dependent on the application
itself. Consider the application's memory, CPU, and storage requirements.

.Example scenario

If you want to scope your cluster for 2200 pods per cluster, you would need at
least nine nodes, assuming that there are 250 maximum pods per node:

----
2200 / 250 = 8.8
----

If you increase the number of nodes to 20, then the pod distribution changes to
110 pods per node:

----
2200 / 20 = 110
----
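
In both cases the per-node pod count stays below the 250 maximum pods per node, assuming pods are distributed evenly across the nodes:

----
2200 / 9  = ~245 pods per node
2200 / 20 = 110 pods per node
----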

modules/openshift-cluster-limits.adoc
Lines changed: 82 additions & 0 deletions

@@ -0,0 +1,82 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/planning-your-environment-according-to-object-limits.adoc

[id='cluster-limits_{context}']
= {product-title} cluster limits

[options="header",cols="5*"]
|===
| Limit type |3.9 limit |3.10 limit |3.11 limit |4.0 limit

| Number of nodes footnoteref:[numberofnodes,Clusters with more than the stated limit are not supported. Consider splitting into multiple clusters.]
| 2,000
| 2,000
| 2,000
|TBD

| Number of pods footnoteref:[numberofpods,The pod count displayed here is the number of test pods. The actual number of pods depends on the application's memory, CPU, and storage requirements.]
| 120,000
| 150,000
| 150,000
|TBD

| Number of pods per node
| 250
| 250
| 250
|TBD

| Number of pods per core
| 10 is the default value. The maximum supported value is the number of pods per node.
| There is no default value. The maximum supported value is the number of pods per node.
| There is no default value. The maximum supported value is the number of pods per node.
|TBD

| Number of namespaces
| 10,000
| 10,000
| 10,000
|TBD

| Number of builds: Pipeline Strategy
| 10,000 (Default pod RAM 512 Mi)
| 10,000 (Default pod RAM 512 Mi)
| 10,000 (Default pod RAM 512 Mi)
|TBD

| Number of pods per namespace footnoteref:[objectpernamespace,There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down the processing of state changes.]
| 3,000
| 3,000
| 3,000
|TBD

| Number of services footnoteref:[servicesandendpoints,Each service port and each service back-end has a corresponding entry in iptables. The number of back-ends of a given service impacts the size of the endpoints objects, which in turn impacts the size of the data that is sent across the system.]
| 10,000
| 10,000
| 10,000
|TBD

| Number of services per namespace
| N/A
| 5,000
| 5,000
|TBD

| Number of back-ends per service
| 5,000
| 5,000
| 5,000
|TBD

| Number of deployments per namespace footnoteref:[objectpernamespace]
| 2,000
| 2,000
| 2,000
|TBD

|===

scalability_and_performance/planning-your-environment-according-to-object-limits.adoc
Lines changed: 23 additions & 0 deletions

@@ -0,0 +1,23 @@
[id='planning-your-environment-according-to-object-limits']
= Planning your environment according to object limits
include::modules/common-attributes.adoc[]
:context: object-limits

toc::[]

Consider the following object limits when you plan your {product-title} cluster.

These limits are based on the largest possible cluster. For smaller clusters,
the limits are proportionally lower. There are many factors that influence the
stated thresholds, including the etcd version or storage data format.

In most cases, exceeding these limits results in lower overall performance. It
does not necessarily mean that the cluster will fail.

include::modules/openshift-cluster-limits.adoc[leveloffset=+1]

include::modules/how-to-plan-your-environment-according-to-cluster-limits.adoc[leveloffset=+1]

include::modules/how-to-plan-your-environment-according-to-application-requirements.adoc[leveloffset=+1]
