OBSDOCS-1701: Upgrading to Logging 6 steps Final #93577

Draft
wants to merge 1 commit into base: enterprise-4.18
2 changes: 2 additions & 0 deletions _topic_maps/_topic_map.yml
@@ -3045,6 +3045,8 @@ Topics:
File: log6x-configuring-lokistack-otlp-6.2
- Name: Visualization for logging
File: log6x-visual-6.2
- Name: Updating Logging
File: cluster-logging-upgrading
- Name: Logging 6.1
Dir: logging-6.1
Topics:
104 changes: 104 additions & 0 deletions modules/log-upgrade/6x-logging-upgrading-clo.adoc
@@ -0,0 +1,104 @@
// Module included in the following assemblies:
//
// * observability/logging/cluster-logging-upgrading.adoc

:_mod-docs-content-type: PROCEDURE
[id="logging-upgrading-clo_{context}"]
= Updating the {clo}

The {clo} does not provide an automated upgrade from Logging 5.x to Logging 6.x because of the different combinations in which Logging can be configured. You must install each of the Operators that manage logging separately. You can upgrade to Logging 6.x from both Logging 5.8 and Logging 5.9.

You can update the {clo} either by changing the subscription channel in the {product-title} web console or by uninstalling and reinstalling it. The following procedure demonstrates updating the {clo} by changing the subscription channel in the {product-title} web console.
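If you prefer the CLI, the same channel change can be made by patching the Operator subscription. This is a minimal sketch only, assuming the subscription is named `cluster-logging` in the `openshift-logging` namespace and that the target channel is `stable-6.2`; verify both values on your cluster before running it:

[source,terminal]
----
# List the logging subscription to confirm its name and current channel.
$ oc get subscription -n openshift-logging

# Switch the subscription to the new channel; adjust the channel name to the release you are targeting.
$ oc patch subscription cluster-logging -n openshift-logging \
    --type merge -p '{"spec":{"channel":"stable-6.2"}}'
----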

////
[IMPORTANT]
====
The path to the checkpoints in Vector in Logging v6 is different from the path in Logging v5. Therefore, on migration, all the logs are reprocessed, which might impact the control plane, network, storage, CPU, and memory.
====
////

//Need to add steps about

.Prerequisites

* You have installed the {clo}.
* You have administrator permissions.
* You have access to the {product-title} web console and are viewing the *Administrator* perspective.

.Procedure

Currently, upgrading to Logging 6, even if using Vector in Logging 5, implies that the checkpoints used are in a different path, and this has a big impact. The same was reported for the change between Fluentd and Vector in https://issues.redhat.com/browse/OBSDA-540, where it says:

Also, indicate for the migration that all the logs that are not compressed will be reprocessed by Vector, which can lead to:
  - duplicated logs at the moment of the migration
  - 429 Too Many Requests in the log storage receiving the logs, or reaching the rate limit
  - problems with the log store on disk and performance as a consequence of the collector re-reading and processing all the old logs
  - impact on the Kube API
  - a peak of memory and CPU in Vector until all the old logs are processed (these logs can be several GB per node). This could also lead to a big impact.

Then, either the steps for moving the Vector checkpoints to the new path before upgrading should be provided, or this impact should be highlighted here, because issues with this upgrade have been reported.


Hello @theashiot,
In the article https://access.redhat.com/articles/7089860, in the step "Step 4: Delete the ClusterLogging instance and deploy the ClusterLogForwarder observability Custom Resource", the checkpoints migration was incorporated. If those steps are confirmed, then we could probably incorporate them into this PR and close them all.


. Create and configure a service account for the log collector.

.. Create a service account to be used by the log collector:
+
[source,terminal]
----
$ oc create sa logging-collector -n openshift-logging
----

.. Bind the cluster role to the service account so that the collector can write logs to the Red{nbsp}Hat LokiStack by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging
----

.. Assign permission to collect and forward application logs by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging
----

.. Assign permission to collect and forward audit logs by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z logging-collector -n openshift-logging
----

.. Assign permission to collect and forward infrastructure logs by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging
----

. Transform the current configuration to the new API in Logging 6.
+
For more information, see link:[Changes to Cluster logging and forwarding in Logging 6].
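+
The following is a minimal sketch of the new `observability.openshift.io/v1` API, assuming a `LokiStack` named `logging-loki` in the `openshift-logging` namespace and the `logging-collector` service account created earlier; adapt the output and pipeline definitions to match your own Logging 5 configuration:
+
[source,terminal]
----
$ oc apply -f - <<EOF
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: logging-collector        # service account created in the previous step
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki         # assumed LokiStack name; check with 'oc get lokistack -n openshift-logging'
        namespace: openshift-logging
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: default-logstore
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack
EOF
----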

. Move Vector checkpoints to the new path.
+
//Need to add steps
+
[IMPORTANT]
====
When you migrate, all the logs that have not been compressed are reprocessed by Vector. The reprocessing might lead to the following issues:

* Duplicated logs during the migration.
* `429 Too Many Requests` responses from the log storage that receives the logs, or reaching its rate limit.
* On-disk and performance problems in the log store as a consequence of the collector re-reading and processing all the old logs.
* Impact on the Kube API.
* A peak of memory and CPU usage in Vector until all the old logs are processed. These logs can be several GB per node.
====
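+
The exact checkpoint locations depend on your Logging 5 and Logging 6 deployments and are not confirmed here. The following is only a hedged sketch of copying a checkpoint directory on one node, with `<old-checkpoint-path>` and `<new-checkpoint-path>` as placeholders that you must replace with the host paths actually used by your collector pods:
+
[source,terminal]
----
# Inspect a collector pod to find the host paths mounted for checkpoints,
# then copy the checkpoint data on each node before upgrading.
$ oc debug node/<node-name> -- chroot /host \
    cp -a <old-checkpoint-path>/. <new-checkpoint-path>/
----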

. Update the {clo} by using the {product-title} web console.
.. Navigate to *Operators* -> *Installed Operators*.

.. Select the *openshift-logging* project.

.. Click the *Red Hat OpenShift Logging* Operator.

.. Click *Subscription*. In the *Subscription details* section, click the *Update channel* link.

.. In the *Change Subscription Update Channel* window, select the latest major version update channel, *stable-6.x*, and click *Save*. Note the `cluster-logging.v6.y.z` version.

.. Wait for a few seconds, and then go to *Operators* -> *Installed Operators* to verify that the {clo} version matches the latest `cluster-logging.v6.y.z` version.

.. On the *Operators* -> *Installed Operators* page, wait for the *Status* field to report *Succeeded*.
+
Your existing Logging v5 resources continue to run but are no longer managed by the Operator. You can remove these unmanaged resources after your new resources are created and ready.
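+
If you prefer to confirm the installed version from the CLI, the following is a hedged check, assuming the Operator's ClusterServiceVersion is installed in the `openshift-logging` namespace:
+
[source,terminal]
----
# The CSV name and phase show the running Operator version and whether the update succeeded.
$ oc get csv -n openshift-logging | grep cluster-logging
----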

// check if this is correct

33 changes: 33 additions & 0 deletions modules/log-upgrade/6x-logging-upgrading-loki.adoc
@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * observability/logging/cluster-logging-upgrading.adoc

:_mod-docs-content-type: PROCEDURE
[id="logging-upgrading-loki_{context}"]
= Updating the {loki-op}

To update the {loki-op} to a new major release version, you must modify the update channel for the Operator subscription.

.Prerequisites

* You have installed the {loki-op}.
* You have administrator permissions.
* You have access to the {product-title} web console and are viewing the *Administrator* perspective.

.Procedure

. Navigate to *Operators* -> *Installed Operators*.

. Select the *openshift-operators-redhat* project.

. Click the *{loki-op}*.

. Click *Subscription*. In the *Subscription details* section, click the *Update channel* link. This link text might be *stable* or *stable-5.y*, depending on your current update channel.

. In the *Change Subscription Update Channel* window, select the latest major version update channel, *stable-6.y*, and click *Save*. Note the `loki-operator.v6.y.z` version.

. Wait for a few seconds, then click *Operators* -> *Installed Operators*. Verify that the {loki-op} version matches the latest `loki-operator.v6.y.z` version.

. On the *Operators* -> *Installed Operators* page, wait for the *Status* field to report *Succeeded*.

. Check if the `LokiStack` custom resource contains the `v13` schema version and add it if it is missing. For information about correctly adding the `v13` schema version, see "Upgrading the LokiStack storage schema".
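+
The following is a hedged CLI check, assuming the `LokiStack` instance is named `logging-loki` in the `openshift-logging` namespace; adjust the names to match your deployment:
+
[source,terminal]
----
# Confirm the Operator version after the channel change.
$ oc get csv -n openshift-operators-redhat | grep loki-operator

# List the configured storage schemas; the output should include an entry with version v13.
$ oc get lokistack logging-loki -n openshift-logging \
    -o jsonpath='{.spec.storage.schemas}'
----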

The "Upgrading the LokiStack storage schema" misses the link to the section

42 changes: 42 additions & 0 deletions modules/log-upgrade/6x-uninstall-es-operator.adoc
@@ -0,0 +1,42 @@
// Module included in the following assemblies:
//
// * observability/logging/cluster-logging-uninstall.adoc

:_mod-docs-content-type: PROCEDURE
[id="uninstall-es-operator_{context}"]
= Uninstalling Elasticsearch

You can uninstall Elasticsearch by using the {product-title} web console. Uninstall Elasticsearch only if it is not used by other components, such as Jaeger, Service Mesh, or Kiali.

.Prerequisites

* You have administrator permissions.
* You have access to the *Administrator* perspective of the {product-title} web console.
* If you have not already removed the {clo} and related resources, you must remove references to Elasticsearch from the `ClusterLogging` custom resource.

.Procedure

. Go to the *Administration* -> *Custom Resource Definitions* page, and click *Elasticsearch*.

. On the *Custom Resource Definition Details* page, click *Instances*.

. Click the Options menu {kebab} next to the instance, and then click *Delete Elasticsearch*.

. Go to the *Administration* -> *Custom Resource Definitions* page.

. Click the Options menu {kebab} next to *Elasticsearch*, and select *Delete Custom Resource Definition*.
@r2d2rnd r2d2rnd May 21, 2025

This is missing the deletion of the Kibana CRD and also the deletion of the Elasticsearch PVCs.


. Go to the *Operators* -> *Installed Operators* page.

. Click the Options menu {kebab} next to the {es-op}, and then click *Uninstall Operator*.

. Optional: Delete the `openshift-operators-redhat` project.
+
[IMPORTANT]
====
Do not delete the `openshift-operators-redhat` project if other global Operators are installed in this namespace.
====

.. Go to the *Home* -> *Projects* page.
.. Click the Options menu {kebab} next to the *openshift-operators-redhat* project, and then click *Delete Project*.
.. Confirm the deletion by typing `openshift-operators-redhat` in the dialog box, and then click *Delete*.
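The following is a hedged CLI sketch of the same cleanup, including the Kibana CRD and the Elasticsearch PVCs mentioned in the review comment above; the instance and PVC names are assumptions, so list them first and adjust as needed:

[source,terminal]
----
# List and delete the Elasticsearch and Kibana instances and their CRDs (default names assumed).
$ oc get elasticsearch,kibana -n openshift-logging
$ oc delete elasticsearch elasticsearch kibana kibana -n openshift-logging
$ oc delete crd elasticsearches.logging.openshift.io kibanas.logging.openshift.io

# List the remaining PVCs that belonged to the Elasticsearch cluster and delete them explicitly.
$ oc get pvc -n openshift-logging
$ oc delete pvc <elasticsearch-pvc-name> -n openshift-logging
----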
@@ -0,0 +1,58 @@
:_newdoc-version: 2.18.4
:_template-generated: 2025-05-20
:_mod-docs-content-type: PROCEDURE

[id="creating-and-configuring-a-service-account-for-the-log-collector_{context}"]
= Creating and configuring a service account for the log collector

Create a service account for the log collector and assign it the necessary roles and permissions to collect logs.

This should only be needed if you do not want to use the legacy "logcollector" service account. It's explained in https://github.com/openshift/cluster-logging-operator/blob/master/docs/administration/upgrade/v6.0_changes.adoc in the section "Legacy openshift-logging".


.Prerequisites

* You have administrator permissions.
* You installed the {oc-first}.

.Procedure

. Create a service account to be used by the log collector:
+
[source,terminal]
----
$ oc create sa logging-collector -n openshift-logging
----

. Bind the cluster role to the service account so that the collector can write logs to the Red{nbsp}Hat LokiStack by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging
----

. Assign the necessary permissions to the service account so that the collector can collect and forward logs.

.. Assign permission to collect and forward application logs by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging
----

.. Assign permission to collect and forward audit logs by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z logging-collector -n openshift-logging
----

.. Assign permission to collect and forward infrastructure logs by running the following command:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging
----
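The following is a hedged verification sketch, checking that the service account exists and that the cluster role bindings created by the `oc adm policy` commands are present; the grep pattern is an assumption about the generated binding names:

[source,terminal]
----
# The service account should exist in the openshift-logging namespace.
$ oc get sa logging-collector -n openshift-logging

# Each add-cluster-role-to-user call creates a ClusterRoleBinding that references the service account.
$ oc get clusterrolebindings -o wide | grep logging-collector
----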

////

.Verification
[role="_additional-resources"]
.Additional resources
* link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide]
* xref:some-module_{context}[]
////
56 changes: 56 additions & 0 deletions modules/log-upgrade/6x_deleting-red-hat-log-visualization.adoc
@@ -0,0 +1,56 @@
:_newdoc-version: 2.18.4
:_template-generated: 2025-05-20
:_mod-docs-content-type: PROCEDURE

[id="deleting-red-hat-log-visualization_{context}"]
= Deleting Red{nbsp}Hat Log Visualization

When updating from Logging 5 to Logging 6, you must delete Red{nbsp}Hat Log Visualization before installing the UIPlugin.

.Prerequisites
* You have administrator permissions.
* You installed the {oc-first}.

.Procedure

. Remove the logging view plugin from the console Operator configuration by running the following command:
+
[source,terminal]
----
$ oc get consoles.operator.openshift.io -o yaml -o jsonpath='{.spec.plugins}' |grep "logging-view-plugin" && oc patch consoles.operator.openshift.io/cluster --type json -p='[{"op": "remove", "path": "/spec/plugins", "value":{'logging-view-plugin'}}]'
console.operator.openshift.io/cluster patched
----

. Delete the logging view plugin by running the following command:
+
[source,terminal]
----
$ oc get consoleplugins logging-view-plugin && oc delete consoleplugins logging-view-plugin
----
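The following is a hedged verification sketch; both commands should return no matching resources once the plugin has been removed:

[source,terminal]
----
# The console plugin list should no longer contain logging-view-plugin.
$ oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'

# The ConsolePlugin resource itself should be gone.
$ oc get consoleplugins logging-view-plugin
----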
////
.Verification
Delete this section if it does not apply to your module. Provide the user with verification methods for the procedure, such as expected output or commands that confirm success or failure.

* Provide an example of expected command output or a pop-up window that the user receives when the procedure is successful.
* List actions for the user to complete, such as entering a command, to determine the success or failure of the procedure.
* Make each step an instruction.
* Use an unnumbered bullet (*) if the verification includes only one step.

.Troubleshooting
Delete this section if it does not apply to your module. Provide the user with troubleshooting steps.

* Make each step an instruction.
* Use an unnumbered bullet (*) if the troubleshooting includes only one step.

.Next steps
* Delete this section if it does not apply to your module.
* Provide a bulleted list of links that contain instructions that might be useful to the user after they complete this procedure.
* Use an unnumbered bullet (*) if the list includes only one step.

NOTE: Do not use *Next steps* to provide a second list of instructions.

[role="_additional-resources"]
.Additional resources
* link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide]
* xref:some-module_{context}[]
////
@@ -0,0 +1,26 @@
:_newdoc-version: 2.18.4
:_template-generated: 2025-05-20
:_mod-docs-content-type: PROCEDURE

[id="deleting-red-hat-openshift-logging-5-crds_{context}"]
= Deleting Red{nbsp}Hat OpenShift Logging 5 CRDs

You must delete the Red{nbsp}Hat OpenShift Logging 5 custom resource definitions (CRDs) when upgrading to Logging 6.


.Prerequisites
* You have administrator permissions.
* You installed the {oc-first}.

.Procedure
* Delete the `clusterlogforwarders.logging.openshift.io` and `clusterloggings.logging.openshift.io` CRDs by running the following command:
+
[source,terminal]
----
$ oc delete crd clusterloggings.logging.openshift.io clusterlogforwarders.logging.openshift.io
----
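+
The following is a hedged follow-up check; after the deletion, no Logging 5 CRDs from the `logging.openshift.io` API group should remain:
+
[source,terminal]
----
# Any remaining CRDs in the logging.openshift.io API group would indicate leftover Logging 5 resources.
$ oc get crd -o name | grep logging.openshift.io
----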

////
.Verification
////

41 changes: 41 additions & 0 deletions modules/log-upgrade/6x_deleting-the-clusterlogging-instance.adoc
@@ -0,0 +1,41 @@
:_newdoc-version: 2.18.4
:_template-generated: 2025-05-20
:_mod-docs-content-type: PROCEDURE

[id="deleting-the-clusterlogging-instance_{context}"]
= Deleting the ClusterLogging instance

Delete the `ClusterLogging` instance because it is no longer needed in Logging 6.x.

.Prerequisites
* You have administrator permissions.
* You installed the {oc-first}.

.Procedure
* Delete the `ClusterLogging` instance by running the following command:
+
[source,terminal]
----
$ oc delete clusterlogging <CR name> -n <namespace>
----

Probably, it would be good to provide the command to list all instances before. It's:

$ oc get clusterloggings.logging.openshift.io -A

.Verification

. Verify that no collector pods are running by running the following command:
+
[source,terminal]
----
$ oc get pods -l component=collector -n <namespace>
----

@r2d2rnd r2d2rnd May 21, 2025

We can do it this way, or we can list in all the namespaces, which sounds better because nobody will verify namespace by namespace:

$ oc get pods -l component=collector -A

Contributor Author

done

. Verify that no `clusterLogForwarder.logging.openshift.io` custom resource (CR) exists by running the following command:
+
[source,terminal]
----
$ oc get clusterlogforwarders.logging.openshift.io -A
----

[IMPORTANT]
====
If any `clusterLogForwarder.logging.openshift.io` CR is listed, it belongs to the old Logging 5.x stack and must be removed. Create a backup of the CRs and delete them before deploying any `clusterLogForwarder.observability.openshift.io` CR with the new API version.
====
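The following is a hedged sketch of backing up and removing an old Logging 5.x `clusterLogForwarder.logging.openshift.io` CR; `<name>` and `<namespace>` are placeholders for the values returned by the listing command above:

[source,terminal]
----
# Save a copy of the old CR before removing it.
$ oc get clusterlogforwarder.logging.openshift.io <name> -n <namespace> -o yaml > clf-5x-backup.yaml

# Delete the old CR so that it cannot conflict with the new observability.openshift.io resources.
$ oc delete clusterlogforwarder.logging.openshift.io <name> -n <namespace>
----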