. To enable CPU Manager for all compute nodes, edit the CR by running the following command:
+
[source,terminal]
----
# oc edit machineconfigpool worker
----

. Add the `custom-kubelet: cpumanager-enabled` label to the `metadata.labels` section:
+
[source,yaml]
----
* `static`. This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. When specifying `static`, use a lowercase `s`.
<2> Optional. Specify the CPU Manager reconcile frequency. The default is `5s`.
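
For reference, the `KubeletConfig` CR that these callouts describe has the following general shape. This is a sketch, not the full CR: the `cpumanager-enabled` name and the `custom-kubelet: cpumanager-enabled` selector label are assumptions taken from the label added earlier in this procedure.

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpumanager-enabled
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled
  kubeletConfig:
    cpuManagerPolicy: static
    cpuManagerReconcilePeriod: 5s
----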

. Create the dynamic kubelet config by running the following command:
+
[source,terminal]
----
+
This adds the CPU Manager feature to the kubelet config; if needed, the Machine Config Operator (MCO) reboots the node. A reboot is not needed to enable CPU Manager.

. Check for the merged kubelet config by running the following command:
+
[source,terminal]
----
]
----

. Check the compute node for the updated `kubelet.conf` file by running the following command:
<1> `cpuManagerPolicy` is defined when you create the `KubeletConfig` CR.
<2> `cpuManagerReconcilePeriod` is defined when you create the `KubeletConfig` CR.
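
A sketch of the `kubelet.conf` fragment that these callouts refer to, assuming the values set earlier in this procedure (surrounding fields omitted):

[source,yaml]
----
cpuManagerPolicy: static
cpuManagerReconcilePeriod: 5s
----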
. Create a project by running the following command:
+
[source,terminal]
----
$ oc new-project <project_name>
----
. Create a pod that requests one or more cores. Both limits and requests must set the CPU value to a whole integer, which is the number of cores dedicated to this pod:
+
[source,terminal]
# oc create -f cpumanager-pod.yaml
----
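
The `cpumanager-pod.yaml` file referenced above is not shown here. A minimal sketch of a pod spec that would satisfy the whole-integer requirement, with a hypothetical pod name and image, might look like the following; equal requests and limits place the pod in the `Guaranteed` QoS tier, and the `cpumanager: "true"` node selector matches the node label used later in this procedure:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: cpumanager # hypothetical name
spec:
  containers:
  - name: cpumanager
    image: registry.example.com/ubi8/ubi:latest # hypothetical image
    resources:
      requests:
        cpu: 1 # whole integer; must equal limits for Guaranteed QoS
        memory: "1G"
      limits:
        cpu: 1
        memory: "1G"
  nodeSelector:
    cpumanager: "true"
----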

.Verification

. Verify that the pod is scheduled to the node that you labeled by running the following command:
+
[source,terminal]
----
Node-Selectors: cpumanager=true
----

. Verify that a CPU has been exclusively assigned to the pod by running the following command:

. Verify that pods of quality of service (QoS) tier `Guaranteed` are placed within the `kubepods.slice` subdirectory by running the following commands:
+
[source,terminal]
----
# cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope
----
+
[source,terminal]
----
# for i in `ls cpuset.cpus cgroup.procs` ; do echo -n "$i "; cat $i ; done
----
+
[NOTE]
====
Pods of other QoS tiers end up in child `cgroups` of the parent `kubepods`.
====
+
.Example output
[source,terminal]
----
cpuset.cpus 1
cgroup.procs 32706
----

. Check the allowed CPU list for the task by running the following command:
+
[source,terminal]
----
Cpus_allowed_list: 1
----
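
The allowed-CPU list shown above comes from the task's `status` file under `/proc`. A sketch of the underlying check; `self` is used here so the command can be run anywhere, but on the node you would substitute a real PID such as the `32706` from the earlier example output:

[source,terminal]
----
# Read the kernel's allowed-CPU mask for a process from its /proc entry.
# Replace "self" with the PID of the pause process on the node.
grep Cpus_allowed_list /proc/self/status
----

On a node with CPU Manager pinning in effect, the task of the `Guaranteed` pod reports only its dedicated core, for example `Cpus_allowed_list: 1`.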

. Verify that another pod on the system cannot run on the core allocated for the `Guaranteed` pod. For example, to verify the pod in the `besteffort` QoS tier, run the following commands: