
Commit e2001e0

Merge pull request #88422 from gabriel-rh/OBSDOCS-1669
OBSDOCS-1669 add details on server-side apply
2 parents 7aeaa79 + 9691295 commit e2001e0

File tree

2 files changed: +304 -0 lines changed

modules/coo-server-side-apply.adoc

Lines changed: 302 additions & 0 deletions
@@ -0,0 +1,302 @@
//Module included in the following assemblies:
//
// * observability/cluster_observability_operator/cluster-observability-operator-overview.adoc

:_mod-docs-content-type: PROCEDURE
[id="server-side-apply_{context}"]
= Using Server-Side Apply to customize Prometheus resources

Server-Side Apply is a feature that enables collaborative management of Kubernetes resources. The control plane tracks how different users and controllers manage fields within a Kubernetes object. It introduces the concept of field managers and tracks ownership of fields. This centralized control provides conflict detection and resolution, and reduces the risk of unintended overwrites.

Compared to Client-Side Apply, it is more declarative, and tracks field management rather than the last applied state.

Server-Side Apply:: Provides declarative configuration management by updating a resource's state without needing to delete and recreate it.

Field management:: Users can specify which fields of a resource they want to update, without affecting the other fields.

Managed fields:: Kubernetes stores metadata about who manages each field of an object in the `managedFields` field within `metadata`.

Conflicts:: If multiple managers try to modify the same field, a conflict occurs. The applier can choose to overwrite the value, relinquish control, or share management.

Merge strategy:: Server-Side Apply merges fields based on the actor who manages them.
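As a minimal illustration of field management, you can apply a manifest server-side under an explicitly named field manager, which then appears as the `manager` entry in the object's `managedFields`. The file name `app.yaml` and the manager name `my-team` in this sketch are placeholder assumptions:

[source,terminal]
----
$ oc apply -f app.yaml --server-side --field-manager=my-team
----
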
.Procedure
. Add a `MonitoringStack` resource using the following configuration:
+
.Example `MonitoringStack` object
+
[source,yaml]
----
apiVersion: monitoring.rhobs/v1alpha1
kind: MonitoringStack
metadata:
  labels:
    coo: example
  name: sample-monitoring-stack
  namespace: coo-demo
spec:
  logLevel: debug
  retention: 1d
  resourceSelector:
    matchLabels:
      app: demo
----
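+
To create the resource, you can save the configuration to a file and apply it server-side. The file name `sample-monitoring-stack.yaml` here is a placeholder assumption:
+
[source,terminal]
----
$ oc apply -f sample-monitoring-stack.yaml --server-side
----
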
. A Prometheus resource named `sample-monitoring-stack` is generated in the `coo-demo` namespace. Retrieve the managed fields of the generated Prometheus resource by running the following command:
+
[source,terminal]
----
$ oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields
----
+
.Example output
[source,yaml]
----
managedFields:
- apiVersion: monitoring.rhobs/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:metadata:
      f:labels:
        f:app.kubernetes.io/managed-by: {}
        f:app.kubernetes.io/name: {}
        f:app.kubernetes.io/part-of: {}
      f:ownerReferences:
        k:{"uid":"81da0d9a-61aa-4df3-affc-71015bcbde5a"}: {}
    f:spec:
      f:additionalScrapeConfigs: {}
      f:affinity:
        f:podAntiAffinity:
          f:requiredDuringSchedulingIgnoredDuringExecution: {}
      f:alerting:
        f:alertmanagers: {}
      f:arbitraryFSAccessThroughSMs: {}
      f:logLevel: {}
      f:podMetadata:
        f:labels:
          f:app.kubernetes.io/component: {}
          f:app.kubernetes.io/part-of: {}
      f:podMonitorSelector: {}
      f:replicas: {}
      f:resources:
        f:limits:
          f:cpu: {}
          f:memory: {}
        f:requests:
          f:cpu: {}
          f:memory: {}
      f:retention: {}
      f:ruleSelector: {}
      f:rules:
        f:alert: {}
      f:securityContext:
        f:fsGroup: {}
        f:runAsNonRoot: {}
        f:runAsUser: {}
      f:serviceAccountName: {}
      f:serviceMonitorSelector: {}
      f:thanos:
        f:baseImage: {}
        f:resources: {}
        f:version: {}
      f:tsdb: {}
  manager: observability-operator
  operation: Apply
- apiVersion: monitoring.rhobs/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:status:
      .: {}
      f:availableReplicas: {}
      f:conditions:
        .: {}
        k:{"type":"Available"}:
          .: {}
          f:lastTransitionTime: {}
          f:observedGeneration: {}
          f:status: {}
          f:type: {}
        k:{"type":"Reconciled"}:
          .: {}
          f:lastTransitionTime: {}
          f:observedGeneration: {}
          f:status: {}
          f:type: {}
      f:paused: {}
      f:replicas: {}
      f:shardStatuses:
        .: {}
        k:{"shardID":"0"}:
          .: {}
          f:availableReplicas: {}
          f:replicas: {}
          f:shardID: {}
          f:unavailableReplicas: {}
          f:updatedReplicas: {}
      f:unavailableReplicas: {}
      f:updatedReplicas: {}
  manager: PrometheusOperator
  operation: Update
  subresource: status
----
. Check the `metadata.managedFields` values, and observe that some fields in `metadata` and `spec` are managed by the `MonitoringStack` resource.
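+
To quickly list which managers own fields on the object, you can also query the `managedFields` metadata directly. The following `jsonpath` query is a suggested convenience check, not part of the original procedure:
+
[source,terminal]
----
$ oc -n coo-demo get Prometheus.monitoring.rhobs sample-monitoring-stack \
  -o jsonpath='{range .metadata.managedFields[*]}{.manager}{": "}{.operation}{"\n"}{end}'
----
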
. Modify a field that is not controlled by the `MonitoringStack` resource:

.. Change `spec.enforcedSampleLimit`, which is a field not set by the `MonitoringStack` resource. Create the file `prom-spec-edited.yaml`:
+
.`prom-spec-edited.yaml`
+
[source,yaml]
----
apiVersion: monitoring.rhobs/v1
kind: Prometheus
metadata:
  name: sample-monitoring-stack
  namespace: coo-demo
spec:
  enforcedSampleLimit: 1000
----

.. Apply the YAML by running the following command:
+
[source,terminal]
----
$ oc apply -f ./prom-spec-edited.yaml --server-side
----
+
[NOTE]
====
You must use the `--server-side` flag.
====
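+
Optionally, you can preview the result of a Server-Side Apply without persisting the change by running a server-side dry run first. This extra verification step is a suggested addition, not part of the original procedure:
+
[source,terminal]
----
$ oc apply -f ./prom-spec-edited.yaml --server-side --dry-run=server
----
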
.. Get the changed Prometheus object, and note that there is one more section in `managedFields` which has `spec.enforcedSampleLimit`:
+
[source,terminal]
----
$ oc get prometheus -n coo-demo -oyaml --show-managed-fields
----
+
.Example output
[source,yaml]
----
managedFields: <1>
- apiVersion: monitoring.rhobs/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:metadata:
      f:labels:
        f:app.kubernetes.io/managed-by: {}
        f:app.kubernetes.io/name: {}
        f:app.kubernetes.io/part-of: {}
    f:spec:
      f:enforcedSampleLimit: {} <2>
  manager: kubectl
  operation: Apply
----
<1> `managedFields` now contains an additional entry for the `kubectl` field manager.
<2> The new entry tracks the `spec.enforcedSampleLimit` field that you applied.
. Modify a field that is managed by the `MonitoringStack` resource:
.. Change `spec.logLevel`, which is a field managed by the `MonitoringStack` resource, using the following YAML configuration:
+
[source,yaml]
----
# changing the logLevel from debug to info
apiVersion: monitoring.rhobs/v1
kind: Prometheus
metadata:
  name: sample-monitoring-stack
  namespace: coo-demo
spec:
  logLevel: info <1>
----
<1> `spec.logLevel` has been added.
.. Apply the YAML by running the following command:
+
[source,terminal]
----
$ oc apply -f ./prom-spec-edited.yaml --server-side
----
+
.Example output
+
[source,terminal]
----
error: Apply failed with 1 conflict: conflict with "observability-operator": .spec.logLevel
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
----
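+
As the error message notes, a third option besides forcing or removing the field is to co-own it by applying a manifest whose value matches the existing one. A minimal sketch, assuming the `logLevel` currently set by the `MonitoringStack` resource is still `debug`; the file name is hypothetical:
+
[source,yaml]
----
# prom-coown.yaml (hypothetical file name): the value matches the existing
# one, so Server-Side Apply succeeds and adds this manager as a co-owner
apiVersion: monitoring.rhobs/v1
kind: Prometheus
metadata:
  name: sample-monitoring-stack
  namespace: coo-demo
spec:
  logLevel: debug
----
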
.. Notice that the field `spec.logLevel` cannot be changed by using Server-Side Apply, because it is already managed by `observability-operator`.

.. Use the `--force-conflicts` flag to force the change:
+
[source,terminal]
----
$ oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts
----
+
.Example output
+
[source,terminal]
----
prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied
----
+
With the `--force-conflicts` flag, the field can be forced to change, but because the same field is also managed by the `MonitoringStack` resource, the Observability Operator detects the change and reverts it to the value set by the `MonitoringStack` resource.
+
[NOTE]
====
Some Prometheus fields generated by the `MonitoringStack` resource are influenced by the fields in the `MonitoringStack` `spec` stanza, for example, `logLevel`. These can be changed by changing the `MonitoringStack` `spec`.
====
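+
You can observe the revert by querying the field again after the Operator has had time to reconcile. At that point the query is expected to return `debug`, the value set in the `MonitoringStack` resource; the exact timing depends on the Operator's reconciliation loop:
+
[source,terminal]
----
$ oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}'
----
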
.. To change the `logLevel` in the Prometheus object, apply the following YAML to change the `MonitoringStack` resource:
+
[source,yaml]
----
apiVersion: monitoring.rhobs/v1alpha1
kind: MonitoringStack
metadata:
  name: sample-monitoring-stack
  labels:
    coo: example
spec:
  logLevel: info
----
.. To confirm that the change has taken place, query for the log level by running the following command:
+
[source,terminal]
----
$ oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}'
----
+
.Example output
+
[source,terminal]
----
info
----
[NOTE]
====
. If a new version of an Operator generates a field that was previously generated and controlled by an actor, the value set by the actor will be overridden.
+
For example, suppose you are managing a field `enforcedSampleLimit`, which is not generated by the `MonitoringStack` resource. If the Observability Operator is upgraded, and the new version of the Operator generates a value for `enforcedSampleLimit`, this will override the value you have previously set.

. The `Prometheus` object generated by the `MonitoringStack` resource may contain some fields that are not explicitly set by the monitoring stack. These fields appear because they have default values.
====

observability/cluster_observability_operator/cluster-observability-operator-overview.adoc

Lines changed: 2 additions & 0 deletions
@@ -22,6 +22,8 @@ Monitoring stacks deployed by the two Operators do not conflict. You can use a {

 include::modules/monitoring-understanding-the-cluster-observability-operator.adoc[leveloffset=+1]

+include::modules/coo-server-side-apply.adoc[leveloffset=+1]
+
 [role="_additional-resources"]
 .Additional resources
