Commit e704d24

Merge pull request #69809 from amolnar-rh/TELCODOCS-1576
TELCODOCS-1576: Image-based upgrade with Lifecycle Agent
2 parents fd272b9 + e681ca0

13 files changed: +1776 -2 lines
_attributes/common-attributes.adoc

Lines changed: 2 additions & 0 deletions

@@ -278,3 +278,5 @@ endif::[]
 :odf-full: Red Hat OpenShift Data Foundation
 :odf-short: ODF
 :rh-dev-hub: Red Hat Developer Hub
+//IBU
+:lcao: Lifecycle Agent

_topic_maps/_topic_map.yml

Lines changed: 2 additions & 0 deletions

@@ -2966,6 +2966,8 @@ Topics:
     File: ztp-sno-additional-worker-node
   - Name: Pre-caching images for single-node OpenShift deployments
     File: ztp-precaching-tool
+  - Name: Image-based upgrade for single-node OpenShift clusters
+    File: ztp-image-based-upgrade
 ---
 - Name: Reference design specifications
   Dir: telco_ref_design_specs
Lines changed: 217 additions & 0 deletions

@@ -0,0 +1,217 @@
// Module included in the following assemblies:
// * scalability_and_performance/ztp-image-based-upgrade.adoc

:_mod-docs-content-type: PROCEDURE
[id="ztp-image-based-upgrade-seed-generation_{context}"]
= Generating a seed image with the {lcao}

Use the {lcao} to generate the seed image with the `SeedGenerator` CR. The Operator checks for required system configurations, performs any necessary system cleanup before generating the seed image, and launches the image generation. The seed image generation includes the following tasks:

* Stopping cluster operators
* Preparing the seed image configuration
* Generating and pushing the seed image to the image repository specified in the `SeedGenerator` CR
* Restoring cluster operators
* Expiring seed cluster certificates
* Generating new certificates for the seed cluster
* Restoring and updating the `SeedGenerator` CR on the seed cluster

[NOTE]
====
The generated seed image does not include any site-specific data.
====

[IMPORTANT]
====
During the Developer Preview of this feature, any custom trusted certificates configured on the cluster are lost during the upgrade. As a temporary workaround, to preserve these certificates, you must use a seed image from a seed cluster that trusts the certificates.
====

.Prerequisites

* Deploy a {sno} cluster with a DU profile.
* Install the {lcao} on the seed cluster.
* Install the OADP Operator on the seed cluster.
* Log in as a user with `cluster-admin` privileges.
* The seed cluster has the same CPU topology as the target cluster.
* The seed cluster has the same IP version as the target cluster.
+
[NOTE]
====
Dual-stack networking is not supported in this release.
====

* If the target cluster has a proxy configuration, the seed cluster must have a proxy configuration too. The proxy configurations do not have to be the same.
* The seed cluster is registered as a managed cluster.
* The seed cluster has a separate partition for the container images that are shared between stateroots. For more information, see _Additional resources_.
* The {lcao} deployed on the target cluster is compatible with the version in the seed image.

[WARNING]
====
If the target cluster has multiple IPs and one of them belongs to the subnet that was used for creating the seed image, the upgrade fails if the target cluster's node IP does not belong to that subnet.
====

.Procedure

. Detach the seed cluster from the hub cluster, either manually or, if you are using ZTP, by removing the `SiteConfig` CR from the `kustomization.yaml` file.
This deletes any cluster-specific resources from the seed cluster that must not be in the seed image.

.. If you are using {rh-rhacm}, manually detach the seed cluster by running the following command:
+
[source,terminal]
----
$ oc delete managedcluster sno-worker-example
----

.. Wait until the `ManagedCluster` CR is removed. After the CR is removed, create the proper `SeedGenerator` CR. The {lcao} cleans up the {rh-rhacm} artifacts.
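If you automate the detach step, a polling loop such as the following hypothetical helper can block until the `ManagedCluster` CR is gone. This is a sketch, not part of the product; the cluster name and timeout are example values:

```shell
# Sketch only: wait until the ManagedCluster CR is deleted before
# creating the SeedGenerator CR. Timeout and interval are assumptions.
wait_for_detach() {
  cluster="$1"
  timeout="${2:-300}"
  elapsed=0
  # Loop while the CR still exists.
  while oc get managedcluster "$cluster" >/dev/null 2>&1; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out waiting for $cluster to detach" >&2
      return 1
    fi
    sleep 5
    elapsed=$((elapsed + 5))
  done
  echo "ManagedCluster $cluster removed"
}
```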

. If you are using GitOps ZTP, detach your cluster by removing the seed cluster's `SiteConfig` CR from the `kustomization.yaml` file:

.. Remove your seed cluster's `SiteConfig` CR from the `kustomization.yaml` file:
+
[source,yaml]
----
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

generators:
#- example-seed-sno1.yaml
- example-target-sno2.yaml
- example-target-sno3.yaml
----

.. Commit the `kustomization.yaml` changes in your Git repository and push the changes.
+
The ArgoCD pipeline detects the changes and removes the managed cluster.

. Create the `Secret`.

.. Create the authentication file by running the following commands:
+
--
.Authentication file
[source,terminal]
----
$ MY_USER=myuserid
$ AUTHFILE=/tmp/my-auth.json
$ podman login --authfile ${AUTHFILE} -u ${MY_USER} quay.io/${MY_USER}
----

[source,terminal]
----
$ base64 -w 0 ${AUTHFILE} ; echo
----
--

.. Copy the output into the `seedAuth` field of a `Secret` YAML file that creates a `Secret` named `seedgen` in the `openshift-lifecycle-agent` namespace:
+
--
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: seedgen <1>
  namespace: openshift-lifecycle-agent
type: Opaque
data:
  seedAuth: <encoded_AUTHFILE> <2>
----
<1> The `Secret` resource must have the `name: seedgen` and `namespace: openshift-lifecycle-agent` fields.
<2> Specifies a base64-encoded authfile for write access to the registry for pushing the generated seed images.
--
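As an optional shortcut, the encoding and the `Secret` manifest can be produced in one step so that the base64 string is never pasted by hand. This is a sketch using the example paths from this procedure; the placeholder authfile exists only to keep the sketch self-contained:

```shell
# Sketch: write the Secret manifest directly from the authfile.
AUTHFILE=/tmp/my-auth.json
# In practice, podman login created this file; create a placeholder here
# only so the sketch runs on its own.
[ -f "$AUTHFILE" ] || printf '{"auths":{}}' > "$AUTHFILE"
ENCODED=$(base64 -w 0 "$AUTHFILE")
cat > secretseedgenerator.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: seedgen
  namespace: openshift-lifecycle-agent
type: Opaque
data:
  seedAuth: ${ENCODED}
EOF
```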

.. Apply the `Secret` by running the following command:
+
[source,terminal]
----
$ oc apply -f secretseedgenerator.yaml
----

. Create the `SeedGenerator` CR:
+
--
[source,yaml]
----
apiVersion: lca.openshift.io/v1alpha1
kind: SeedGenerator
metadata:
  name: seedimage <1>
spec:
  seedImage: <seed_container_image> <2>
----
<1> The `SeedGenerator` CR must be named `seedimage`.
<2> Specify the container image URL, for example, `quay.io/example/seed-container-image:<tag>`. It is recommended to use the `<seed_cluster_name>:<ocp_version>` format.
--
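As an illustration of the recommended `<seed_cluster_name>:<ocp_version>` format, the image reference might be assembled like this; the registry path and both values are placeholders, not defaults:

```shell
# Example only: build the seedImage value from placeholder inputs.
SEED_CLUSTER_NAME=sno-worker-example
OCP_VERSION=4.15.0
SEED_IMAGE="quay.io/example/${SEED_CLUSTER_NAME}:${OCP_VERSION}"
echo "$SEED_IMAGE"
```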

. Generate the seed image by running the following command:
+
[source,terminal]
----
$ oc apply -f seedgenerator.yaml
----
+
[IMPORTANT]
====
The cluster reboots and loses API capabilities while the {lcao} generates the seed image.
Applying the `SeedGenerator` CR stops the `kubelet` and the CRI-O operations, then it starts the image generation.
====

After the image generation is complete, you can reattach the cluster to the hub cluster and access it through the API.

If you want to generate more seed images, you must provision a new seed cluster with the version that you want to generate a seed image from.

.Verification

. After the cluster recovers and is available, check the status of the `SeedGenerator` CR by running the following command:
+
--
[source,terminal]
----
$ oc get seedgenerator -o yaml
----

.Example output
[source,yaml]
----
status:
  conditions:
  - lastTransitionTime: "2024-02-13T21:24:26Z"
    message: Seed Generation completed
    observedGeneration: 1
    reason: Completed
    status: "False"
    type: SeedGenInProgress
  - lastTransitionTime: "2024-02-13T21:24:26Z"
    message: Seed Generation completed
    observedGeneration: 1
    reason: Completed
    status: "True"
    type: SeedGenCompleted <1>
  observedGeneration: 1
----
<1> The seed image generation is complete.
--
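Instead of re-running the command by hand, the `SeedGenCompleted` condition can be polled. The following is a hedged sketch, not a documented command; it assumes the CR name `seedimage` used in this procedure:

```shell
# Sketch: poll until the SeedGenCompleted condition reports "True".
wait_seedgen() {
  timeout="${1:-1800}"
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    status=$(oc get seedgenerator seedimage \
      -o jsonpath='{.status.conditions[?(@.type=="SeedGenCompleted")].status}' 2>/dev/null)
    if [ "$status" = "True" ]; then
      echo "seed image generation complete"
      return 0
    fi
    sleep 30
    elapsed=$((elapsed + 30))
  done
  echo "timed out waiting for seed image generation" >&2
  return 1
}
```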

. Verify that the {sno} cluster is running and is attached to the {rh-rhacm} hub cluster:
+
--
[source,terminal]
----
$ oc get managedclusters sno-worker-example
----

.Example output
[source,terminal]
----
NAME                 HUB ACCEPTED   MANAGED CLUSTER URLS                                JOINED   AVAILABLE   AGE
sno-worker-example   true           https://api.sno-worker-example.example.redhat.com   True     True        21h <1>
----
<1> The cluster is attached if the value is `True` for both the `JOINED` and `AVAILABLE` columns.

[NOTE]
====
The cluster requires time to recover after the `kubelet` operation restarts.
====
--
Lines changed: 111 additions & 0 deletions

@@ -0,0 +1,111 @@
// Module included in the following assemblies:
// * scalability_and_performance/ztp-image-based-upgrade.adoc

:_mod-docs-content-type: PROCEDURE
[id="installing-lcao-using-cli_{context}"]
= Installing the {lcao} by using the CLI

You can use the OpenShift CLI (`oc`) to install the {lcao} from the 4.15 Operator catalog on both the seed and target clusters.

.Prerequisites

* Install the OpenShift CLI (`oc`).
* Log in as a user with `cluster-admin` privileges.

.Procedure

. Create a `Namespace` CR for the {lcao}:

.. Define the `Namespace` CR and save the YAML file, for example, `lcao-namespace.yaml`:
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-lifecycle-agent
  annotations:
    workload.openshift.io/allowed: management
----

.. Create the `Namespace` CR by running the following command:
+
[source,terminal]
----
$ oc create -f lcao-namespace.yaml
----

. Create an `OperatorGroup` CR for the {lcao}:

.. Define the `OperatorGroup` CR and save the YAML file, for example, `lcao-operatorgroup.yaml`:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-lifecycle-agent
  namespace: openshift-lifecycle-agent
spec:
  targetNamespaces:
  - openshift-lifecycle-agent
----

.. Create the `OperatorGroup` CR by running the following command:
+
[source,terminal]
----
$ oc create -f lcao-operatorgroup.yaml
----

. Create a `Subscription` CR:

.. Define the `Subscription` CR and save the YAML file, for example, `lcao-subscription.yaml`:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-lifecycle-agent-subscription
  namespace: openshift-lifecycle-agent
spec:
  channel: "alpha"
  name: lifecycle-agent
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----

.. Create the `Subscription` CR by running the following command:
+
[source,terminal]
----
$ oc create -f lcao-subscription.yaml
----

.Verification

. Verify that the installation succeeded by inspecting the CSV resource:
+
[source,terminal]
----
$ oc get csv -n openshift-lifecycle-agent
----
+
.Example output
[source,terminal,subs="attributes+"]
----
NAME                                   DISPLAY                     VERSION               REPLACES   PHASE
lifecycle-agent.v{product-version}.0   Openshift Lifecycle Agent   {product-version}.0              Succeeded
----
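If you script this verification, a small helper (a sketch, not a documented command) can gate on the `Succeeded` phase of the CSV:

```shell
# Sketch: succeed only when the Lifecycle Agent CSV reports Succeeded.
lcao_csv_succeeded() {
  oc get csv -n openshift-lifecycle-agent --no-headers 2>/dev/null \
    | grep -q 'lifecycle-agent.*Succeeded'
}
```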

. Verify that the {lcao} is up and running:
+
[source,terminal]
----
$ oc get deploy -n openshift-lifecycle-agent
----
+
.Example output
[source,terminal]
----
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
lifecycle-agent-controller-manager   1/1     1            1           14s
----
Lines changed: 36 additions & 0 deletions

@@ -0,0 +1,36 @@
// Module included in the following assemblies:
// * scalability_and_performance/ztp-image-based-upgrade.adoc

:_mod-docs-content-type: PROCEDURE
[id="installing-lifecycle-agent-using-web-console_{context}"]
= Installing the {lcao} by using the web console

You can use the {product-title} web console to install the {lcao} from the 4.15 Operator catalog on both the seed and target clusters.

.Prerequisites

* Log in as a user with `cluster-admin` privileges.

.Procedure

. In the {product-title} web console, navigate to *Operators* -> *OperatorHub*.
. Search for the *{lcao}* in the list of available Operators, and then click *Install*.
. On the *Install Operator* page, under *A specific namespace on the cluster*, select *openshift-lifecycle-agent*.
. Click *Install*.

.Verification

To confirm that the installation is successful:

. Navigate to the *Operators* -> *Installed Operators* page.
. Ensure that the {lcao} is listed in the *openshift-lifecycle-agent* project with a *Status* of *InstallSucceeded*.

[NOTE]
====
During installation, an Operator might display a *Failed* status. If the installation later succeeds with an *InstallSucceeded* message, you can ignore the *Failed* message.
====

If the Operator is not installed successfully:

. Go to the *Operators* -> *Installed Operators* page and inspect the *Operator Subscriptions* and *Install Plans* tabs for any failures or errors under *Status*.
. Go to the *Workloads* -> *Pods* page and check the logs of the pods in the *openshift-lifecycle-agent* project.
