
Commit 0f8eebb

Merge pull request #74822 from eromanova97/OBSDOCS-432
OBSDOCS-432: Documentation updates for changes to topology spread constraints for monitoring
2 parents 8a748fc + 006be9b commit 0f8eebb

6 files changed: +190 −219 lines

Lines changed: 180 additions & 0 deletions
@@ -0,0 +1,180 @@
// Module included in the following assemblies:
//
// * observability/monitoring/configuring-the-monitoring-stack.adoc

:_mod-docs-content-type: PROCEDURE
[id="configuring-pod-topology-spread-constraints_{context}"]
= Configuring pod topology spread constraints

You can configure pod topology spread constraints for
ifndef::openshift-dedicated,openshift-rosa[]
all the pods deployed by the Cluster Monitoring Operator
endif::openshift-dedicated,openshift-rosa[]
ifdef::openshift-dedicated,openshift-rosa[]
all the pods for user-defined monitoring
endif::openshift-dedicated,openshift-rosa[]
to control how pod replicas are scheduled to nodes across zones.
This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.

You can configure pod topology spread constraints for monitoring pods by using
ifndef::openshift-dedicated,openshift-rosa[]
the `cluster-monitoring-config` or
endif::openshift-dedicated,openshift-rosa[]
the `user-workload-monitoring-config` config map.

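For reference, these fields follow the standard Kubernetes `topologySpreadConstraints` schema, so they behave the same way as they do for any other workload. The following minimal sketch, which is not part of the monitoring configuration and assumes that your nodes carry the well-known `topology.kubernetes.io/zone` label, shows how the same schema spreads three replicas of a hypothetical `example` deployment evenly across zones:

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example # hypothetical workload, for illustration only
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      topologySpreadConstraints:
      - maxSkew: 1 # zones can differ by at most one matching pod
        topologyKey: topology.kubernetes.io/zone # well-known node label
        whenUnsatisfiable: DoNotSchedule # keep pods pending rather than increase the skew
        labelSelector:
          matchLabels:
            app: example # pods counted per topology domain
      containers:
      - name: example
        image: registry.access.redhat.com/ubi9/ubi-minimal:latest # placeholder image
        command: ["sleep", "infinity"]
----
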
.Prerequisites

ifndef::openshift-dedicated,openshift-rosa[]
* *If you are configuring pods for core {product-title} monitoring:*
** You have access to the cluster as a user with the `cluster-admin` cluster role.
** You have created the `cluster-monitoring-config` `ConfigMap` object.
* *If you are configuring pods for user-defined monitoring:*
** A cluster administrator has enabled monitoring for user-defined projects.
** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
endif::openshift-dedicated,openshift-rosa[]
ifdef::openshift-dedicated,openshift-rosa[]
* You have access to the cluster as a user with the `dedicated-admin` role.
* The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created.
endif::openshift-dedicated,openshift-rosa[]

* You have installed the OpenShift CLI (`oc`).

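Before you begin, you can optionally confirm that your account has the required permissions. For example, for core platform monitoring:

[source,terminal]
----
$ oc auth can-i edit configmaps -n openshift-monitoring
----
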
.Procedure

ifndef::openshift-dedicated,openshift-rosa[]
* *To configure pod topology spread constraints for core {product-title} monitoring:*

. Edit the `cluster-monitoring-config` config map in the `openshift-monitoring` project:
+
[source,terminal]
----
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
----
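+
If you want to review the current configuration before opening the editor, you can print the config map without modifying it, for example:
+
[source,terminal]
----
$ oc -n openshift-monitoring get configmap cluster-monitoring-config -o yaml
----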

. Add the following settings under the `data/config.yaml` field to configure pod topology spread constraints:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    <component>: # <1>
      topologySpreadConstraints:
      - maxSkew: <n> # <2>
        topologyKey: <key> # <3>
        whenUnsatisfiable: <value> # <4>
        labelSelector: # <5>
          <match_option>
----
<1> Specify the name of the component for which you want to set up pod topology spread constraints.
<2> Specify a numeric value for `maxSkew`, which defines the degree to which pods are allowed to be unevenly distributed.
<3> Specify a key of node labels for `topologyKey`.
Nodes that have a label with this key and identical values are considered to be in the same topology.
The scheduler tries to put a balanced number of pods into each domain.
<4> Specify a value for `whenUnsatisfiable`.
Available options are `DoNotSchedule` and `ScheduleAnyway`.
Specify `DoNotSchedule` if you want the `maxSkew` value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum.
Specify `ScheduleAnyway` if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
<5> Specify `labelSelector` to find matching pods.
Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.
+
.Example configuration for Prometheus
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: monitoring
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: prometheus
----

. Save the file to apply the changes automatically.
+
[WARNING]
====
When you save changes to the `cluster-monitoring-config` config map, the pods and other resources in the `openshift-monitoring` project might be redeployed.
The running monitoring processes in that project might also restart.
====
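+
After the configuration is applied, you can check how the affected pods were scheduled. For example, the following command lists the Prometheus pods from the example above together with their node assignments:
+
[source,terminal]
----
$ oc -n openshift-monitoring get pods -l app.kubernetes.io/name=prometheus -o wide
----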

* *To configure pod topology spread constraints for user-defined monitoring:*
endif::openshift-dedicated,openshift-rosa[]

. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:
+
[source,terminal]
----
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
----
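+
If you want to review the current configuration before opening the editor, you can print the config map without modifying it, for example:
+
[source,terminal]
----
$ oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config -o yaml
----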

. Add the following settings under the `data/config.yaml` field to configure pod topology spread constraints:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    <component>: # <1>
      topologySpreadConstraints:
      - maxSkew: <n> # <2>
        topologyKey: <key> # <3>
        whenUnsatisfiable: <value> # <4>
        labelSelector: # <5>
          <match_option>
----
<1> Specify the name of the component for which you want to set up pod topology spread constraints.
<2> Specify a numeric value for `maxSkew`, which defines the degree to which pods are allowed to be unevenly distributed.
<3> Specify a key of node labels for `topologyKey`.
Nodes that have a label with this key and identical values are considered to be in the same topology.
The scheduler tries to put a balanced number of pods into each domain.
<4> Specify a value for `whenUnsatisfiable`.
Available options are `DoNotSchedule` and `ScheduleAnyway`.
Specify `DoNotSchedule` if you want the `maxSkew` value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum.
Specify `ScheduleAnyway` if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
<5> Specify `labelSelector` to find matching pods.
Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.
+
.Example configuration for Thanos Ruler
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: monitoring
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: thanos-ruler
----

. Save the file to apply the changes automatically.
+
[WARNING]
====
When you save changes to the `user-workload-monitoring-config` config map, the pods and other resources in the `openshift-user-workload-monitoring` project might be redeployed.
The running monitoring processes in that project might also restart.
====
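+
After the configuration is applied, you can check how the affected pods were scheduled. For example, the following command lists the Thanos Ruler pods from the example above together with their node assignments:
+
[source,terminal]
----
$ oc -n openshift-user-workload-monitoring get pods -l app.kubernetes.io/name=thanos-ruler -o wide
----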

modules/monitoring-setting-up-pod-topology-spread-constraints-for-alertmanager.adoc

Lines changed: 0 additions & 69 deletions
This file was deleted.

modules/monitoring-setting-up-pod-topology-spread-constraints-for-prometheus.adoc

Lines changed: 0 additions & 69 deletions
This file was deleted.

modules/monitoring-setting-up-pod-topology-spread-constraints-for-thanos-ruler.adoc

Lines changed: 0 additions & 67 deletions
This file was deleted.
