Commit c1bc50f

OBSDOCS-1701: Upgrading to Logging 6 steps Final
1 parent b72eacf commit c1bc50f

12 files changed: +799 −0 lines changed

_topic_maps/_topic_map.yml

Lines changed: 2 additions & 0 deletions
@@ -3045,6 +3045,8 @@ Topics:
   File: log6x-configuring-lokistack-otlp-6.2
 - Name: Visualization for logging
   File: log6x-visual-6.2
+- Name: Updating Logging
+  File: cluster-logging-upgrading
 - Name: Logging 6.1
   Dir: logging-6.1
   Topics:
Lines changed: 104 additions & 0 deletions
@@ -0,0 +1,104 @@
// Module included in the following assemblies:
//
// * observability/logging/cluster-logging-upgrading.adoc

:_mod-docs-content-type: PROCEDURE
[id="logging-upgrading-clo_{context}"]
= Updating the {clo}

The {clo} does not provide an automated upgrade from Logging 5.x to Logging 6.x because of the different combinations in which Logging can be configured. You must install each of the Operators that manage logging separately. You can upgrade to Logging 6.x from both Logging 5.8 and Logging 5.9.

You can update the {clo} either by changing the subscription channel in the {product-title} web console, or by uninstalling and reinstalling it. The following procedure demonstrates updating the {clo} by changing the subscription channel in the {product-title} web console.

////
[IMPORTANT]
====
The path to the Vector checkpoints in Logging 6 is different from the path in Logging 5. Therefore, on migration, all the logs are reprocessed, which might impact the control plane, network, storage, CPU, and memory.
====
////

//Need to add steps about

.Prerequisites

* You have installed the {clo}.
* You have administrator permissions.
* You have access to the {product-title} web console and are viewing the *Administrator* perspective.

.Procedure

. Create and configure a service account for the log collector.

.. Create a service account to be used by the log collector:
+
[source,terminal]
----
$ oc create sa logging-collector -n openshift-logging
----

.. Bind the `logging-collector-logs-writer` cluster role to the service account so that it can write logs to the Red{nbsp}Hat LokiStack:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging
----

.. Assign permission to collect and forward application logs by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging
----

.. Assign permission to collect and forward audit logs by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z logging-collector -n openshift-logging
----

.. Assign permission to collect and forward infrastructure logs by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging
----

. Transform the current configuration to the new API in Logging 6.
+
For more information, see link:[Changes to Cluster logging and forwarding in Logging 6].

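+
For orientation, the new API lives in a different group and makes the output type and the service account explicit. The following sketch is an illustration only, not taken from this commit: the resource names (`collector`, `logging-loki`, `logging-collector`) are assumptions, and field names should be checked against the Logging 6 API reference.
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: logging-collector
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      target:
        name: logging-loki
        namespace: openshift-logging
      authentication:
        token:
          from: serviceAccount
  pipelines:
  - name: default
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack
----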
. Move Vector checkpoints to the new path.
+
//Need to add steps
+
[IMPORTANT]
====
When you migrate, all the logs that have not been compressed are reprocessed by Vector. The reprocessing might lead to the following issues:

* Duplicated logs during migration.
* Too many requests to the log storage that receives the logs, or hitting its rate limit.
* Disk and performance problems on the log store as a consequence of the collector re-reading and processing all the old logs.
* Increased load on the Kube API.
* A peak in memory and CPU usage in Vector until all the old logs are processed. The old logs can amount to several GB per node.
====

. Update the {clo} by using the {product-title} web console.

.. Navigate to *Operators* -> *Installed Operators*.

.. Select the *openshift-logging* project.

.. Click the *Red Hat OpenShift Logging* Operator.

.. Click *Subscription*. In the *Subscription details* section, click the *Update channel* link.

.. In the *Change Subscription Update Channel* window, select the latest major version update channel, *stable-6.x*, and click *Save*. Note the `cluster-logging.v6.y.z` version.

.. Wait for a few seconds, and then go to *Operators* -> *Installed Operators* to verify that the {clo} version matches the latest `cluster-logging.v6.y.z` version.

.. On the *Operators* -> *Installed Operators* page, wait for the *Status* field to report *Succeeded*.
+
Your existing Logging v5 resources continue to run, but are no longer managed by the Operator. You can remove these unmanaged resources after your new resources are ready to be created.

// check if this is correct
Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * observability/logging/cluster-logging-upgrading.adoc

:_mod-docs-content-type: PROCEDURE
[id="logging-upgrading-loki_{context}"]
= Updating the {loki-op}

To update the {loki-op} to a new major release version, you must modify the update channel for the Operator subscription.

.Prerequisites

* You have installed the {loki-op}.
* You have administrator permissions.
* You have access to the {product-title} web console and are viewing the *Administrator* perspective.

.Procedure

. Navigate to *Operators* -> *Installed Operators*.

. Select the *openshift-operators-redhat* project.

. Click the *{loki-op}*.

. Click *Subscription*. In the *Subscription details* section, click the *Update channel* link. This link text might be *stable* or *stable-5.y*, depending on your current update channel.

. In the *Change Subscription Update Channel* window, select the latest major version update channel, *stable-6.y*, and click *Save*. Note the `loki-operator.v6.y.z` version.

. Wait for a few seconds, then click *Operators* -> *Installed Operators*. Verify that the {loki-op} version matches the latest `loki-operator.v6.y.z` version.

. On the *Operators* -> *Installed Operators* page, wait for the *Status* field to report *Succeeded*.

. Check if the `LokiStack` custom resource contains the `v13` schema version, and add it if it is missing. For correctly adding the `v13` schema version, see "Upgrading the LokiStack storage schema".
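+
A `LokiStack` resource that already includes the `v13` schema might look like the following sketch. The instance name, the existing schema entry, and the dates are illustrative assumptions; when you add `v13`, the `effectiveDate` must be a date in the future.
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  storage:
    schemas:
    - version: v12
      effectiveDate: "2023-10-13"
    - version: v13
      effectiveDate: "2025-06-01"
# other spec fields omitted
----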
Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
// Module included in the following assemblies:
//
// * observability/logging/cluster-logging-uninstall.adoc

:_mod-docs-content-type: PROCEDURE
[id="uninstall-es-operator_{context}"]
= Uninstalling Elasticsearch

You can uninstall Elasticsearch by using the {product-title} web console. Uninstall Elasticsearch only if it is not used by another component, such as Jaeger, Service Mesh, or Kiali.

.Prerequisites

* You have administrator permissions.
* You have access to the *Administrator* perspective of the {product-title} web console.
* If you have not already removed the {clo} and related resources, you have removed references to Elasticsearch from the `ClusterLogging` custom resource.

.Procedure

. Go to the *Administration* -> *Custom Resource Definitions* page, and click *Elasticsearch*.

. On the *Custom Resource Definition Details* page, click *Instances*.

. Click the Options menu {kebab} next to the instance, and then click *Delete Elasticsearch*.

. Go to the *Administration* -> *Custom Resource Definitions* page.

. Click the Options menu {kebab} next to *Elasticsearch*, and select *Delete Custom Resource Definition*.

. Go to the *Operators* -> *Installed Operators* page.

. Click the Options menu {kebab} next to the {es-op}, and then click *Uninstall Operator*.

. Optional: Delete the `openshift-operators-redhat` project.
+
[IMPORTANT]
====
Do not delete the `openshift-operators-redhat` project if other global Operators are installed in this namespace.
====

.. Go to the *Home* -> *Projects* page.

.. Click the Options menu {kebab} next to the *openshift-operators-redhat* project, and then click *Delete Project*.

.. Confirm the deletion by typing `openshift-operators-redhat` in the dialog box, and then click *Delete*.
Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
:_newdoc-version: 2.18.4
:_template-generated: 2025-05-20
:_mod-docs-content-type: PROCEDURE

[id="creating-and-configuring-a-service-account-for-the-log-collector_{context}"]
= Creating and configuring a service account for the log collector

Create a service account for the log collector and assign it the necessary roles and permissions to collect logs.

.Prerequisites

* You have administrator permissions.
* You installed the {oc-first}.

.Procedure

. Create a service account to be used by the log collector:
+
[source,terminal]
----
$ oc create sa logging-collector -n openshift-logging
----

. Bind the `logging-collector-logs-writer` cluster role to the service account so that it can write logs to the Red{nbsp}Hat LokiStack:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging
----

. Assign the necessary permissions to the service account so that the collector can collect and forward logs.

.. Assign permission to collect and forward application logs by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging
----

.. Assign permission to collect and forward audit logs by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z logging-collector -n openshift-logging
----

.. Assign permission to collect and forward infrastructure logs by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging
----

Lines changed: 56 additions & 0 deletions
@@ -0,0 +1,56 @@
:_newdoc-version: 2.18.4
:_template-generated: 2025-05-20
:_mod-docs-content-type: PROCEDURE

[id="deleting-red-hat-log-visualization_{context}"]
= Deleting Red{nbsp}Hat Log Visualization

When updating from Logging 5 to Logging 6, you must delete Red{nbsp}Hat Log Visualization before installing the `UIPlugin` resource.

.Prerequisites
* You have administrator permissions.
* You installed the {oc-first}.

.Procedure

. Remove the logging view plugin from the console configuration by running the following command:
+
[source,terminal]
----
$ oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}' | grep "logging-view-plugin" && oc patch consoles.operator.openshift.io/cluster --type json -p '[{"op": "remove", "path": "/spec/plugins"}]'
----
+
.Example output
[source,terminal]
----
console.operator.openshift.io/cluster patched
----

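+
As an aside, the JSON Patch `remove` operation used in the command above deletes the entire node at the target path, which is why the command first checks with `grep` that `logging-view-plugin` is present before patching. The following minimal Python sketch (illustrative only; the console spec content is hypothetical, not real cluster data) mimics that semantics:
+
```python
# Mimic a JSON Patch "remove" op on a single-segment path, as used above.
# The console spec below is a hypothetical example, not real cluster data.
spec = {"plugins": ["logging-view-plugin"]}

def apply_remove(doc, path):
    """Remove the node at a simple one-segment JSON Patch path, e.g. "/plugins"."""
    key = path.lstrip("/")
    doc = dict(doc)      # work on a copy, leave the input untouched
    doc.pop(key, None)   # "remove" deletes the whole node at the path
    return doc

patched = apply_remove(spec, "/plugins")
print(patched)  # the whole plugins list is removed, leaving {}
```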
. Delete the logging view plugin by running the following command:
+
[source,terminal]
----
$ oc get consoleplugins logging-view-plugin && oc delete consoleplugins logging-view-plugin
----
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
:_newdoc-version: 2.18.4
:_template-generated: 2025-05-20
:_mod-docs-content-type: PROCEDURE

[id="deleting-red-hat-openshift-logging-5-crds_{context}"]
= Deleting Red{nbsp}Hat OpenShift Logging 5 CRDs

You must delete the Red{nbsp}Hat OpenShift Logging 5 custom resource definitions (CRDs) when upgrading to Logging 6.

.Prerequisites
* You have administrator permissions.
* You installed the {oc-first}.

.Procedure
* Delete the `clusterlogforwarders.logging.openshift.io` and `clusterloggings.logging.openshift.io` CRDs by running the following command:
+
[source,terminal]
----
$ oc delete crd clusterloggings.logging.openshift.io clusterlogforwarders.logging.openshift.io
----

Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
:_newdoc-version: 2.18.4
:_template-generated: 2025-05-20
:_mod-docs-content-type: PROCEDURE

[id="deleting-the-clusterlogging-instance_{context}"]
= Deleting the ClusterLogging instance

Delete the `ClusterLogging` instance because it is no longer needed in Logging 6.x.

.Prerequisites
* You have administrator permissions.
* You installed the {oc-first}.

.Procedure
* Delete the `ClusterLogging` instance by running the following command:
+
[source,terminal]
----
$ oc delete clusterlogging <CR name> -n <namespace>
----

.Verification

. Verify that no collector pods are running by running the following command:
+
[source,terminal]
----
$ oc get pods -l component=collector -n <namespace>
----

. Verify that no `clusterLogForwarder.logging.openshift.io` custom resources (CRs) exist by running the following command:
+
[source,terminal]
----
$ oc get clusterlogforwarders.logging.openshift.io -A
----

[IMPORTANT]
====
If any `clusterLogForwarder.logging.openshift.io` CR is listed, it belongs to the old 5.x Logging stack and must be removed. Create a backup of the CRs and delete them before deploying any `clusterLogForwarder.observability.openshift.io` CR with the new API version.
====
