[id='operators-overview-{context}']
= Operators in {product-title}

{product-title} v4 uses different classes of Operators to perform cluster
operations and run services on the cluster for your applications to use.

[id='-platform-operators-overview-{context}']
== Platform Operators in {product-title}

In {product-title} version 4.0, all cluster functions are divided into a series
of platform Operators. Platform Operators manage a particular area of
cluster functionality, such as cluster-wide application logging, management of
the Kubernetes control plane, or the machine provisioning system.

Each Operator provides you with a simple API for determining cluster
functionality. The Operator hides the details of managing the lifecycle of that
component. Operators can manage a single component or tens of components, but
the end goal is always to reduce operational burden by automating common actions.
Operators also offer a more granular configuration experience. You configure each
component by modifying the API that the Operator exposes instead of modifying a
global configuration file.

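This configuration model can be sketched as a reconcile loop. The following is an illustrative Python sketch, not {product-title} code; the `LoggingConfig` and `LoggingOperator` names are hypothetical stand-ins for an Operator and the API object it exposes:

```python
# Illustrative sketch of the Operator configuration model: desired state is
# declared on an API object, and the Operator reconciles the component to
# match it. All names here are hypothetical, not real product APIs.
from dataclasses import dataclass, field


@dataclass
class LoggingConfig:
    """The API object that a hypothetical logging Operator exposes."""
    spec: dict = field(default_factory=lambda: {"retentionDays": 7})
    status: dict = field(default_factory=dict)


class LoggingOperator:
    """Drives the managed component toward the spec on the config object."""

    def __init__(self, config: LoggingConfig):
        self.config = config
        self.component_state: dict = {}  # stands in for the real component

    def reconcile(self) -> None:
        desired = self.config.spec
        if self.component_state != desired:
            # A real Operator would update Deployments, ConfigMaps, and so on.
            self.component_state = dict(desired)
        self.config.status = {"observed": dict(desired), "available": True}


config = LoggingConfig()
operator = LoggingOperator(config)
operator.reconcile()

# You reconfigure by editing the exposed API object, not a global config file:
config.spec["retentionDays"] = 30
operator.reconcile()
print(config.status["observed"])  # -> {'retentionDays': 30}
```

In a real cluster, the analogous step is editing the Operator's API object, for example with `oc edit`, rather than a configuration file on disk.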
In {product-title} v4, all control plane components are run and managed as
applications on the infrastructure to ensure a uniform and consistent management
experience. The control plane services run as static pods so they can
manage normal workloads or processes the same way that they manage disaster
recovery. Aside from the core control plane components, other services run as
normal pods on the cluster, managed by regular Kubernetes constructs. Unlike in
the past, when the `kubelet` could run either as a containerized or a
non-containerized process, the `kubelet` now always runs as a `systemd` process.

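The static pod mechanism can be sketched minimally: the kubelet runs pods directly from manifest files found in a directory on the node, with no API server involved. This sketch simulates that directory with a temporary one; the file names are invented for illustration:

```python
# Sketch of the static pod mechanism: every manifest file found in the
# directory becomes a pod the node runs, independent of the control plane.
# The directory here is a temporary stand-in, not a real node path.
import pathlib
import tempfile


def static_pod_names(manifest_dir: pathlib.Path) -> list[str]:
    """Return the pod names implied by the manifest files in the directory."""
    return sorted(p.stem for p in manifest_dir.glob("*.yaml"))


with tempfile.TemporaryDirectory() as tmp:
    manifests = pathlib.Path(tmp)
    (manifests / "etcd.yaml").write_text("kind: Pod\n")
    (manifests / "kube-apiserver.yaml").write_text("kind: Pod\n")
    print(static_pod_names(manifests))  # -> ['etcd', 'kube-apiserver']
```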
[id='cluster-version-operator-{context}']
== The cluster version Operator

The cluster version Operator orchestrates the deployment and updates of the
platform Operators.

{product-title} 4.0 introduces several new components that support the cluster
version Operator, including Cincinnati and Telemetry.

Cincinnati is the hosted service that provides over-the-air updates to both
{product-title} and RHCOS. It provides a graph, or diagram, of component
Operators that contains _vertices_ and the _edges_ that connect them. The edges
in the graph show which versions you can safely upgrade to. The cluster version
Operator checks with Cincinnati and determines valid upgrades and upgrade paths
based on current component versions and information in the graph. If you
configure it to do so, Cincinnati sends the release artifacts that it needs to
perform the upgrade to your image registry, and the cluster version Operator
upgrades your cluster. By accepting automatic updates, you can automatically
keep your cluster up to date with the most recent compatible components.

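Such an update graph can be modeled as a small directed graph in which a path is a sequence of safe upgrades. The versions and edges below are invented for illustration, not real Cincinnati data:

```python
# Toy model of an update graph like the one Cincinnati serves: vertices are
# release versions and directed edges are safe upgrades. The versions and
# edges are made up for illustration.
from collections import deque
from typing import Optional


EDGES = {  # version -> versions that are safe upgrade targets
    "4.0.1": ["4.0.2", "4.0.3"],
    "4.0.2": ["4.0.3"],
    "4.0.3": ["4.1.0"],
    "4.1.0": [],
}


def upgrade_path(current: str, target: str) -> Optional[list[str]]:
    """Breadth-first search for a safe upgrade path through the graph."""
    queue = deque([[current]])
    seen = {current}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no safe path exists


print(upgrade_path("4.0.1", "4.1.0"))  # -> ['4.0.1', '4.0.3', '4.1.0']
```

Breadth-first search returns a shortest safe path; the real service's policies and payload format are considerably more involved than this toy graph.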
To ensure that Cincinnati provides only compatible updates, a release
verification pipeline drives the automation. Each release artifact is verified
for compatibility with supported cloud platforms and system architectures, as
well as with other component packages. After the pipeline confirms the
suitability of a release, Cincinnati can apply the update to your cluster or
notify you that it is available.

The interaction between the registry and the Cincinnati service is different
during bootstrap and continuous update modes. When you bootstrap the initial
infrastructure, the cluster version Operator resolves the short names of the
images that it needs to apply to the server during installation into fully
qualified image names. It looks at the image stream that it needs to apply and
renders it to disk. It then calls bootkube and waits for a temporary minimal
control plane to come up and load the cluster version Operator.

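The short-name resolution step amounts to a lookup from an image stream. This sketch illustrates the idea; the registry paths and digests below are invented, not real release content:

```python
# Sketch of the bootstrap-time resolution step: image short names from the
# release image stream are mapped to fully qualified names before being
# rendered to disk. Registry paths and digests are invented for illustration.
IMAGE_STREAM = {
    "etcd": "quay.io/openshift/etcd@sha256:1111",
    "kube-apiserver": "quay.io/openshift/kube-apiserver@sha256:2222",
}


def resolve(short_name: str) -> str:
    """Return the fully qualified image name for a short name."""
    try:
        return IMAGE_STREAM[short_name]
    except KeyError:
        raise ValueError(f"unknown image short name: {short_name}") from None


print(resolve("etcd"))  # -> quay.io/openshift/etcd@sha256:1111
```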
During continuous update mode, two controllers run. One continuously updates
the payload manifests, applies them to the cluster, and reports the status of
the controlled rollout of the Operators: whether they are available, upgrading,
or failed. The second controller constantly checks with Cincinnati to determine
if updates are available.

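The status that the first controller reports can be sketched as a simple aggregation over per-Operator states: the rollout is failed if any Operator has failed, upgrading if any is still upgrading, and available otherwise. The state names here are illustrative:

```python
# Sketch of the rollout status aggregation described above.
# State names are illustrative, not real API values.
def rollout_status(operators: dict[str, str]) -> str:
    """operators maps an Operator name to 'available'|'upgrading'|'failed'."""
    states = set(operators.values())
    if "failed" in states:
        return "failed"
    if "upgrading" in states:
        return "upgrading"
    return "available"


print(rollout_status({
    "kube-apiserver": "available",
    "machine-config": "upgrading",
}))  # -> upgrading
```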
In a managed Red Hat environment, Telemetry is the component that provides
metrics about cluster health and the success of updates.

[id='second-level-operators-{context}']
== Second-level Operators in {product-title}
