From c1bc50fc858f6eedd2a2e6d15965d1854b887794 Mon Sep 17 00:00:00 2001
From: Ashwin Mehendale
Date: Tue, 20 May 2025 18:34:39 +0530
Subject: [PATCH] OBSDOCS-1701: Upgrading to Logging 6 steps Final

---
 _topic_maps/_topic_map.yml                    |   2 +
 .../log-upgrade/6x-logging-upgrading-clo.adoc | 104 +++++++
 .../6x-logging-upgrading-loki.adoc            |  33 +++
 .../log-upgrade/6x-uninstall-es-operator.adoc |  42 +++
 ...service-account-for-the-log-collector.adoc |  58 ++++
 ...6x_deleting-red-hat-log-visualization.adoc |  56 ++++
 ...ting-red-hat-openshift-logging-5-crds.adoc |  26 ++
 ..._deleting-the-clusterlogging-instance.adoc |  41 +++
 ...rwarder-observability-custom-resource.adoc |  68 +++++
 ...r-logging-and-forwarding-in-logging-6.adoc | 268 ++++++++++++++++++
 .../proc_migrating-logging-resources.adoc     |  49 ++++
 .../cluster-logging-upgrading.adoc            |  52 ++++
 12 files changed, 799 insertions(+)
 create mode 100644 modules/log-upgrade/6x-logging-upgrading-clo.adoc
 create mode 100644 modules/log-upgrade/6x-logging-upgrading-loki.adoc
 create mode 100644 modules/log-upgrade/6x-uninstall-es-operator.adoc
 create mode 100644 modules/log-upgrade/6x_creating-and-configuring-a-service-account-for-the-log-collector.adoc
 create mode 100644 modules/log-upgrade/6x_deleting-red-hat-log-visualization.adoc
 create mode 100644 modules/log-upgrade/6x_deleting-red-hat-openshift-logging-5-crds.adoc
 create mode 100644 modules/log-upgrade/6x_deleting-the-clusterlogging-instance.adoc
 create mode 100644 modules/log-upgrade/6x_deploying-a-clusterlogforwarder-observability-custom-resource.adoc
 create mode 100644 modules/log-upgrade/con_changes-to-cluster-logging-and-forwarding-in-logging-6.adoc
 create mode 100644 modules/log-upgrade/proc_migrating-logging-resources.adoc
 create mode 100644 observability/logging/logging-6.2/cluster-logging-upgrading.adoc

diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index c84012ab1b3f..633f454f00a1 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -3045,6 +3045,8 @@ Topics:
     File: log6x-configuring-lokistack-otlp-6.2
   - Name: Visualization for logging
     File: log6x-visual-6.2
+  - Name: Updating Logging
+    File: cluster-logging-upgrading
 - Name: Logging 6.1
   Dir: logging-6.1
   Topics:
diff --git a/modules/log-upgrade/6x-logging-upgrading-clo.adoc b/modules/log-upgrade/6x-logging-upgrading-clo.adoc
new file mode 100644
index 000000000000..b09e2916f686
--- /dev/null
+++ b/modules/log-upgrade/6x-logging-upgrading-clo.adoc
@@ -0,0 +1,104 @@
+// Module included in the following assemblies:
+//
+// * observability/logging/cluster-logging-upgrading.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="logging-upgrading-clo_{context}"]
+= Updating the {clo}
+
+The {clo} does not provide an automated upgrade from Logging 5.x to Logging 6.x because of the many different combinations in which logging can be configured. You must install and update each of the Operators that manage logging separately. You can upgrade to Logging 6.x from either Logging 5.8 or Logging 5.9.
+
+You can update the {clo} either by changing the subscription channel in the {product-title} web console, or by uninstalling it and installing the new version. The following procedure demonstrates updating the {clo} by changing the subscription channel in the {product-title} web console.
+
+////
+[IMPORTANT]
+====
+The path to the Vector checkpoints in Logging 6 is different from the path in Logging 5. Therefore, on migration, all the logs are reprocessed, which might impact the control plane, network, storage, CPU, and memory.
+====
+////
+
+//Need to add steps about
+
+.Prerequisites
+
+* You have installed the {clo}.
+* You have administrator permissions.
+* You have access to the {product-title} web console and are viewing the *Administrator* perspective.
+
+.Procedure
+
+. Create and configure a service account for the log collector.
+
+.. Create a service account to be used by the log collector:
++
+[source,terminal]
+----
+$ oc create sa logging-collector -n openshift-logging
+----
+
+.. Bind the `logging-collector-logs-writer` cluster role to the service account so that the collector can write logs to the Red{nbsp}Hat LokiStack:
++
+[source,terminal]
+----
+$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging
+----
+
+.. Assign permission to collect and forward application logs by running the following command:
++
+[source,terminal]
+----
+$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging
+----
+
+.. Assign permission to collect and forward audit logs by running the following command:
++
+[source,terminal]
+----
+$ oc adm policy add-cluster-role-to-user collect-audit-logs -z logging-collector -n openshift-logging
+----
+
+.. Assign permission to collect and forward infrastructure logs by running the following command:
++
+[source,terminal]
+----
+$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging
+----
+
+. Transform the current configuration to the new API in Logging 6.
++
+For more information, see link:[Changes to cluster logging and forwarding in Logging 6].
+
+. Move the Vector checkpoints to the new path.
++
+//Need to add steps
++
+[IMPORTANT]
+====
+When you migrate, Vector reprocesses all logs that have not yet been compressed. The reprocessing might lead to the following issues:
+
+* Duplicate logs during the migration.
+* Excessive requests to the log storage that receives the logs, which might hit rate limits.
+* Disk and performance problems on the log store as a consequence of the collector re-reading and processing all the old logs.
+* Increased load on the Kubernetes API.
+* A spike in memory and CPU usage in Vector until all the old logs are processed. The logs can be several GB per node.
+====
+
+. Update the {clo} by using the {product-title} web console.
+.. Navigate to *Operators* -> *Installed Operators*.
+
+.. Select the *openshift-logging* project.
+
+.. Click the *Red Hat OpenShift Logging* Operator.
+
+.. Click *Subscription*. In the *Subscription details* section, click the *Update channel* link.
+
+.. In the *Change Subscription Update Channel* window, select the latest major version update channel, *stable-6.x*, and click *Save*. Note the `cluster-logging.v6.y.z` version.
+
+.. Wait for a few seconds, and then go to *Operators* -> *Installed Operators* to verify that the {clo} version matches the latest `cluster-logging.v6.y.z` version.
+
+.. On the *Operators* -> *Installed Operators* page, wait for the *Status* field to report *Succeeded*.
++
+Your existing Logging 5 resources continue to run, but are no longer managed by the Operator. You can remove these unmanaged resources after you have created the new resources.
+
+// check if this is correct
+
diff --git a/modules/log-upgrade/6x-logging-upgrading-loki.adoc b/modules/log-upgrade/6x-logging-upgrading-loki.adoc
new file mode 100644
index 000000000000..c546b6d2e0c8
--- /dev/null
+++ b/modules/log-upgrade/6x-logging-upgrading-loki.adoc
@@ -0,0 +1,33 @@
+// Module included in the following assemblies:
+//
+// * observability/logging/cluster-logging-upgrading.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="logging-upgrading-loki_{context}"]
+= Updating the {loki-op}
+
+To update the {loki-op} to a new major release version, you must modify the update channel for the Operator subscription.
+
+.Prerequisites
+
+* You have installed the {loki-op}.
+* You have administrator permissions.
+* You have access to the {product-title} web console and are viewing the *Administrator* perspective.
+
+.Procedure
+
+. Navigate to *Operators* -> *Installed Operators*.
+
+. Select the *openshift-operators-redhat* project.
+
+. Click the *{loki-op}*.
+
+. Click *Subscription*. In the *Subscription details* section, click the *Update channel* link. This link text might be *stable* or *stable-5.y*, depending on your current update channel.
+
+. In the *Change Subscription Update Channel* window, select the latest major version update channel, *stable-6.y*, and click *Save*. Note the `loki-operator.v6.y.z` version.
+
+. Wait for a few seconds, and then go to *Operators* -> *Installed Operators*. Verify that the {loki-op} version matches the latest `loki-operator.v6.y.z` version.
+
+. On the *Operators* -> *Installed Operators* page, wait for the *Status* field to report *Succeeded*.
+
+. Check if the `LokiStack` custom resource contains the `v13` schema version and add it if it is missing. For correctly adding the `v13` schema version, see "Upgrading the LokiStack storage schema".
diff --git a/modules/log-upgrade/6x-uninstall-es-operator.adoc b/modules/log-upgrade/6x-uninstall-es-operator.adoc
new file mode 100644
index 000000000000..87c71ee54e79
--- /dev/null
+++ b/modules/log-upgrade/6x-uninstall-es-operator.adoc
@@ -0,0 +1,42 @@
+// Module included in the following assemblies:
+//
+// * observability/logging/cluster-logging-uninstall.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="uninstall-es-operator_{context}"]
+= Uninstalling Elasticsearch
+
+You can uninstall Elasticsearch by using the {product-title} web console. Uninstall Elasticsearch only if it is not used by other components, such as Jaeger, Service Mesh, or Kiali.
+
+.Prerequisites
+
+* You have administrator permissions.
+* You have access to the *Administrator* perspective of the {product-title} web console.
+* If you have not already removed the {clo} and related resources, you have removed references to Elasticsearch from the `ClusterLogging` custom resource.
+
+.Procedure
+
+. Go to the *Administration* -> *Custom Resource Definitions* page, and click *Elasticsearch*.
+
+. On the *Custom Resource Definition Details* page, click *Instances*.
+
+. Click the Options menu {kebab} next to the instance, and then click *Delete Elasticsearch*.
+
+. Go to the *Administration* -> *Custom Resource Definitions* page.
+
+. Click the Options menu {kebab} next to *Elasticsearch*, and select *Delete Custom Resource Definition*.
+
+. Go to the *Operators* -> *Installed Operators* page.
+
+. Click the Options menu {kebab} next to the {es-op}, and then click *Uninstall Operator*.
+
+. Optional: Delete the `openshift-operators-redhat` project.
+
+[IMPORTANT]
+====
+Do not delete the `openshift-operators-redhat` project if other global Operators are installed in this namespace.
+====
+
+.. Go to the *Home* -> *Projects* page.
+.. Click the Options menu {kebab} next to the *openshift-operators-redhat* project, and then click *Delete Project*.
+.. Confirm the deletion by typing `openshift-operators-redhat` in the dialog box, and then click *Delete*.
diff --git a/modules/log-upgrade/6x_creating-and-configuring-a-service-account-for-the-log-collector.adoc b/modules/log-upgrade/6x_creating-and-configuring-a-service-account-for-the-log-collector.adoc
new file mode 100644
index 000000000000..7ca7f3dbe861
--- /dev/null
+++ b/modules/log-upgrade/6x_creating-and-configuring-a-service-account-for-the-log-collector.adoc
@@ -0,0 +1,58 @@
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-05-20
+:_mod-docs-content-type: PROCEDURE
+
+[id="creating-and-configuring-a-service-account-for-the-log-collector_{context}"]
+= Creating and configuring a service account for the log collector
+
+Create a service account for the log collector and assign it the necessary roles and permissions to collect logs.
+
+.Prerequisites
+
+* You have administrator permissions.
+* You installed the {oc-first}.
+
+.Procedure
+
+. Create a service account to be used by the log collector:
++
+[source,terminal]
+----
+$ oc create sa logging-collector -n openshift-logging
+----
+
+. Bind the `logging-collector-logs-writer` cluster role to the service account so that the collector can write logs to the Red{nbsp}Hat LokiStack:
++
+[source,terminal]
+----
+$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging
+----
+
+. Assign the necessary permissions to the service account so that the collector can collect and forward logs.
+
+.. Assign permission to collect and forward application logs by running the following command:
++
+[source,terminal]
+----
+$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging
+----
+
+.. Assign permission to collect and forward audit logs by running the following command:
++
+[source,terminal]
+----
+$ oc adm policy add-cluster-role-to-user collect-audit-logs -z logging-collector -n openshift-logging
+----
+
+.. Assign permission to collect and forward infrastructure logs by running the following command:
++
+[source,terminal]
+----
+$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging
+----
diff --git a/modules/log-upgrade/6x_deleting-red-hat-log-visualization.adoc b/modules/log-upgrade/6x_deleting-red-hat-log-visualization.adoc
new file mode 100644
index 000000000000..4acb736bcd2f
--- /dev/null
+++ b/modules/log-upgrade/6x_deleting-red-hat-log-visualization.adoc
@@ -0,0 +1,56 @@
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-05-20
+:_mod-docs-content-type: PROCEDURE
+
+[id="deleting-red-hat-log-visualization_{context}"]
+= Deleting Red{nbsp}Hat Log Visualization
+
+When updating from Logging 5 to Logging 6, you must delete the existing Red{nbsp}Hat log visualization resources before installing the `UIPlugin` resource.
+
+.Prerequisites
+* You have administrator permissions.
+* You installed the {oc-first}.
+
+.Procedure
+
+. Remove the logging view plugin from the list of enabled console plugins, if it is present, by running the following command:
++
+[source,terminal]
+----
+$ oc get consoles.operator.openshift.io -o yaml -o jsonpath='{.spec.plugins}' | grep "logging-view-plugin" && oc patch consoles.operator.openshift.io/cluster --type json -p='[{"op": "remove", "path": "/spec/plugins", "value":{'logging-view-plugin'}}]'
+----
++
+.Example output
+[source,terminal]
+----
+console.operator.openshift.io/cluster patched
+----
+
+. Delete the logging view plugin by running the following command:
++
+[source,terminal]
+----
+$ oc get consoleplugins logging-view-plugin && oc delete consoleplugins logging-view-plugin
+----
diff --git a/modules/log-upgrade/6x_deleting-red-hat-openshift-logging-5-crds.adoc b/modules/log-upgrade/6x_deleting-red-hat-openshift-logging-5-crds.adoc
new file mode 100644
index 000000000000..8378349f2352
--- /dev/null
+++ b/modules/log-upgrade/6x_deleting-red-hat-openshift-logging-5-crds.adoc
@@ -0,0 +1,26 @@
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-05-20
+:_mod-docs-content-type: PROCEDURE
+
+[id="deleting-red-hat-openshift-logging-5-crds_{context}"]
+= Deleting the Red{nbsp}Hat OpenShift Logging 5 CRDs
+
+You must delete the Red{nbsp}Hat OpenShift Logging 5 custom resource definitions (CRDs) when upgrading to Logging 6.
+
+.Prerequisites
+* You have administrator permissions.
+* You installed the {oc-first}.
+
+.Procedure
+* Delete the `clusterlogforwarders.logging.openshift.io` and `clusterloggings.logging.openshift.io` CRDs by running the following command:
++
+[source,terminal]
+----
+$ oc delete crd clusterloggings.logging.openshift.io clusterlogforwarders.logging.openshift.io
+----
diff --git a/modules/log-upgrade/6x_deleting-the-clusterlogging-instance.adoc b/modules/log-upgrade/6x_deleting-the-clusterlogging-instance.adoc
new file mode 100644
index 000000000000..94a925a4aa02
--- /dev/null
+++ b/modules/log-upgrade/6x_deleting-the-clusterlogging-instance.adoc
@@ -0,0 +1,41 @@
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-05-20
+:_mod-docs-content-type: PROCEDURE
+
+[id="deleting-the-clusterlogging-instance_{context}"]
+= Deleting the ClusterLogging instance
+
+Delete the `ClusterLogging` instance because it is no longer needed in Logging 6.x.
+
+.Prerequisites
+* You have administrator permissions.
+* You installed the {oc-first}.
+
+.Procedure
+* Delete the `ClusterLogging` instance by running the following command:
++
+[source,terminal]
+----
+$ oc delete clusterlogging <name> -n <namespace>
+----
+
+.Verification
+
+. Verify that no collector pods are running by running the following command:
++
+[source,terminal]
+----
+$ oc get pods -l component=collector -n <namespace>
+----
+
+. Check that no `ClusterLogForwarder.logging.openshift.io` custom resource (CR) exists by running the following command:
++
+[source,terminal]
+----
+$ oc get clusterlogforwarders.logging.openshift.io -A
+----
+
+[IMPORTANT]
+====
+If any `ClusterLogForwarder.logging.openshift.io` CR is listed, it belongs to the old Logging 5.x stack and must be removed. Back up the CRs and delete them before deploying any `ClusterLogForwarder.observability.openshift.io` CR with the new API version.
+====
\ No newline at end of file
diff --git a/modules/log-upgrade/6x_deploying-a-clusterlogforwarder-observability-custom-resource.adoc b/modules/log-upgrade/6x_deploying-a-clusterlogforwarder-observability-custom-resource.adoc
new file mode 100644
index 000000000000..4b81d5d5e3db
--- /dev/null
+++ b/modules/log-upgrade/6x_deploying-a-clusterlogforwarder-observability-custom-resource.adoc
@@ -0,0 +1,68 @@
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-05-20
+:_mod-docs-content-type: PROCEDURE
+
+[id="deploying-a-clusterlogforwarder-observability-custom-resource_{context}"]
+= Deploying a ClusterLogForwarder observability custom resource
+
+Deploy a `ClusterLogForwarder` observability custom resource (CR) by using the {oc-first}. The following procedure demonstrates using LokiStack as the log store.
+
+.Prerequisites
+* You have administrator permissions.
+* You installed the {oc-first}.
+* You have created a service account.
+* You have installed LokiStack.
+
+.Procedure
+. Create a `ClusterLogForwarder` observability custom resource (CR):
++
+[source,yaml]
+----
+apiVersion: observability.openshift.io/v1
+kind: ClusterLogForwarder
+metadata:
+  name: collector
+  namespace: openshift-logging
+spec:
+  serviceAccount:
+    name: <service_account_name>
+  outputs:
+  - name: default-lokistack
+    type: lokiStack
+    lokiStack:
+      target:
+        name: <lokistack_name>
+        namespace: openshift-logging
+      authentication:
+        token:
+          from: serviceAccount
+    tls:
+      ca:
+        key: service-ca.crt
+        configMapName: openshift-service-ca.crt
+  pipelines:
+  - name: default-logstore
+    inputRefs:
+    - application
+    - infrastructure
+    outputRefs:
+    - default-lokistack
+----
+
+. Deploy the CR by running the following command:
++
+[source,terminal]
+----
+$ oc create -f <filename>.yaml
+----
\ No newline at end of file
diff --git a/modules/log-upgrade/con_changes-to-cluster-logging-and-forwarding-in-logging-6.adoc b/modules/log-upgrade/con_changes-to-cluster-logging-and-forwarding-in-logging-6.adoc
new file mode 100644
index 000000000000..a73353ea13f2
--- /dev/null
+++ b/modules/log-upgrade/con_changes-to-cluster-logging-and-forwarding-in-logging-6.adoc
@@ -0,0 +1,268 @@
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-05-26
+:_mod-docs-content-type: CONCEPT
+
+[id="changes-to-cluster-logging-and-forwarding-in-logging-6_{context}"]
+= Changes to cluster logging and forwarding in Logging 6
+
+Log collection and forwarding configurations are now specified under the new link:https://github.com/openshift/cluster-logging-operator/blob/master/docs/reference/operator/api_observability_v1.adoc[API], which is part of the `observability.openshift.io` API group. The following sections highlight the differences from the old API resources.
+
+[NOTE]
+====
+Vector is the only supported collector implementation.
+====
+
+== Management, resource allocation, and workload scheduling
+
+Configuration for management state, resource requests and limits, tolerations, and node selection is now part of the new `ClusterLogForwarder` API.
+
+.Logging 5.x configuration
+[source,yaml]
+----
+apiVersion: "logging.openshift.io/v1"
+kind: "ClusterLogging"
+spec:
+  managementState: "Managed"
+  collection:
+    resources:
+      limits: {}
+      requests: {}
+    nodeSelector: {}
+    tolerations: {}
+----
+
+.Logging 6 configuration
+[source,yaml]
+----
+apiVersion: "observability.openshift.io/v1"
+kind: ClusterLogForwarder
+spec:
+  managementState: Managed
+  collector:
+    resources:
+      limits: {}
+      requests: {}
+    nodeSelector: {}
+    tolerations: {}
+----
+
+== Input specifications
+
+The input specification is an optional part of the `ClusterLogForwarder` specification. Administrators can continue to use the predefined values `application`, `infrastructure`, and `audit` to collect these sources.
+
+=== Application inputs
+
+Namespace and container inclusions and exclusions have been consolidated into a single field.
+ +.5.x application input with namespace and container includes and excludes +[source,yaml] +---- +apiVersion: "logging.openshift.io/v1" +kind: ClusterLogForwarder +spec: + inputs: + - name: application-logs + type: application + application: + namespaces: + - foo + - bar + includes: + - namespace: my-important + container: main + excludes: + - container: too-verbose +---- + +.6.x application input with namespace and container includes and excludes +[source,yaml] +---- +apiVersion: "observability.openshift.io/v1" +kind: ClusterLogForwarder +spec: + inputs: + - name: application-logs + type: application + application: + includes: + - namespace: foo + - namespace: bar + - namespace: my-important + container: main + excludes: + - container: too-verbose +---- + +[NOTE] +==== +"application", "infrastructure", and "audit" are reserved words and cannot be used as names when defining an input. +==== + +=== Input receivers + +Changes to input receivers include: + +* Explicit configuration of the type at the receiver level. +* Port settings moved to the receiver level. + +.5.x input receivers +[source,yaml] +---- +apiVersion: "logging.openshift.io/v1" +kind: ClusterLogForwarder +spec: + inputs: + - name: an-http + receiver: + http: + port: 8443 + format: kubeAPIAudit + - name: a-syslog + receiver: + type: syslog + syslog: + port: 9442 +---- + +.6.x input receivers +[source,yaml] +---- +apiVersion: "observability.openshift.io/v1" +kind: ClusterLogForwarder +spec: + inputs: + - name: an-http + type: receiver + receiver: + type: http + port: 8443 + http: + format: kubeAPIAudit + - name: a-syslog + type: receiver + receiver: + type: syslog + port: 9442 +---- + +== Output specifications + +High-level changes to output specifications include: + +* URL settings moved to each output type specification. +* Tuning parameters moved to each output type specification. +* Separation of TLS configuration from authentication. +* Explicit configuration of keys and secret/configmap for TLS and authentication. + +== Secrets and TLS Configuration + +Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification rather than relying on administrators to define secrets with recognized keys. Upgrading TLS and authorization configurations requires administrators to understand previously recognized keys to continue using existing secrets. Examples in the following sections provide details on how to configure *ClusterLogForwarder* secrets to forward to existing Red Hat managed log storage solutions. + +.Logging 6.x output using service accounu token +[source,yaml] +---- +... +spec: + outputs: + - name: my-output + type: http + http: + url: https://my-secure-output:8080 + authentication: + token: + from: serviceAccount + tls: + ca: + key: service-ca.crt + configMapName: openshift-service-ca.crt +... +---- + +.Logging 6.x output authentication and TLS example +[source,yaml] +---- +... +spec: + outputs: + - name: my-output + type: http + http: + url: https://my-secure-output:8080 + authentication: + password: + key: pass + secretName: my-secret + username: + key: user + secretName: my-secret + tls: + ca: + key: ca-bundle.crt + secretName: collector + certificate: + key: tls.crt + secretName: collector + key: + key: tls.key + secretName: collector +... +---- + +== Filters and pipeline configuration + +All attributes of pipelines in previous releases have been converted to filters in this release. 
+Individual filters are defined in the `filters` spec and referenced by a pipeline.
+
+.5.x filters
+[source,yaml]
+----
+...
+spec:
+  pipelines:
+  - name: app-logs
+    detectMultilineErrors: true
+    parse: json
+    labels:
+      foo: bar
+...
+----
+
+.6.x filters and pipelines spec
+[source,yaml]
+----
+...
+spec:
+  filters:
+  - name: my-multiline
+    type: detectMultilineException
+  - name: my-parse
+    type: parse
+  - name: my-labels
+    type: openshiftLabels
+    openshiftLabels:
+      foo: bar
+  pipelines:
+  - name: app-logs
+    filterRefs:
+    - my-multiline
+    - my-parse
+    - my-labels
+...
+----
+
+[NOTE]
+====
+`Drop`, `Prune`, and `KubeAPIAudit` filters remain unchanged.
+====
+
+== Validation and status
+
+Most validations are now enforced when a resource is created or updated, which provides immediate feedback. This is a departure from previous releases, where all validation occurred after creation and required inspecting the resource status. Some validation still occurs after resource creation for cases where it is not possible to validate at creation or update time.
+
+Instances of `ClusterLogForwarder.observability.openshift.io` must satisfy the following conditions before the Operator deploys the log collector:
+
+* Resource status conditions: Authorized, Valid, Ready
+
+* Spec validations: Filters, Inputs, Outputs, Pipelines
+
+All must evaluate to the status value of `True`.
+
diff --git a/modules/log-upgrade/proc_migrating-logging-resources.adoc b/modules/log-upgrade/proc_migrating-logging-resources.adoc
new file mode 100644
index 000000000000..9f103dd5bad4
--- /dev/null
+++ b/modules/log-upgrade/proc_migrating-logging-resources.adoc
@@ -0,0 +1,49 @@
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-05-23
+:_mod-docs-content-type: PROCEDURE
+
+[id="migrating-logging-resources_{context}"]
+= Migrating logging resources
+
+The new `ClusterLogForwarder` resource uses the new `observability.openshift.io` API as `ClusterLogForwarder.observability.openshift.io`. It replaces both the `ClusterLogging.logging.openshift.io` and `ClusterLogForwarder.logging.openshift.io` resources.
+
+To migrate, create a `ClusterLogForwarder.observability.openshift.io` resource that is equivalent to your existing configuration, and then delete the Logging 5 resources.
diff --git a/observability/logging/logging-6.2/cluster-logging-upgrading.adoc b/observability/logging/logging-6.2/cluster-logging-upgrading.adoc
new file mode 100644
index 000000000000..44bca869b298
--- /dev/null
+++ b/observability/logging/logging-6.2/cluster-logging-upgrading.adoc
@@ -0,0 +1,52 @@
+:_mod-docs-content-type: ASSEMBLY
+:context: cluster-logging-upgrading
+include::_attributes/common-attributes.adoc[]
+[id="cluster-logging-upgrading"]
+= Updating Logging
+
+toc::[]
+
+There are two types of {logging} updates: minor release updates (6.y.z) and major release updates (6.y).
+
+[id="cluster-logging-upgrading-minor"]
+== Minor release updates
+
+If you installed the {logging} Operators using the *Automatic* update approval option, your Operators receive minor version updates automatically. You do not need to complete any manual update steps.
+
+If you installed the {logging} Operators using the *Manual* update approval option, you must manually approve minor version updates. For more information, see xref:../../../operators/admin/olm-upgrading-operators.adoc#olm-approving-pending-upgrade_olm-upgrading-operators[Manually approving a pending Operator update].
+
+[id="cluster-logging-upgrading-major"]
+== Updating Logging 5 to Logging 6
+
+Logging 6 is a significant upgrade from previous releases. The notable changes are the following:
+
+* Introduction of distinct Operators to manage the logging components:
+** The {clo} manages log collection and forwarding.
+** The {loki-op} manages log storage.
+** The {coo-first} manages log visualization.
+* Removal of support for managed log storage and visualization based on Elastic products such as Elasticsearch and Kibana.
+* Removal of the Fluentd log collector implementation.
+* Replacement of the following resources with the single `ClusterLogForwarder.observability.openshift.io` resource:
+** `ClusterLogging.logging.openshift.io`
+** `ClusterLogForwarder.logging.openshift.io`
+
+When upgrading to Logging 6.x, follow these steps:
+
+. Update the {clo}.
+. Update the {loki-op}.
+. Delete the `ClusterLogging` instance.
+. Deploy a `ClusterLogForwarder` observability custom resource (CR).
+. Delete the Red{nbsp}Hat OpenShift Logging 5 custom resource definitions (CRDs).
+. Uninstall Elasticsearch and the {es-op} if no other components use them.
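+
+Before you begin, you can list any remaining Logging 5 forwarder resources and back them up so that you can refer to them while you create the new `ClusterLogForwarder.observability.openshift.io` resource. The following commands are a minimal example; the namespace and backup file names are only suggestions and assume a default installation in `openshift-logging`:
+
+[source,terminal]
+----
+$ oc get clusterlogforwarders.logging.openshift.io -A
+$ oc get clusterloggings.logging.openshift.io -n openshift-logging -o yaml > logging5-clusterlogging-backup.yaml
+$ oc get clusterlogforwarders.logging.openshift.io -n openshift-logging -o yaml > logging5-clusterlogforwarder-backup.yaml
+----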
+
+include::modules/log-upgrade/con_changes-to-cluster-logging-and-forwarding-in-logging-6.adoc[leveloffset=+2]
+
+//include::modules/log-upgrade/6x_creating-and-configuring-a-service-account-for-the-log-collector.adoc[leveloffset=+2]
+
+include::modules/log-upgrade/6x-logging-upgrading-clo.adoc[leveloffset=+2]
+
+include::modules/log-upgrade/6x-logging-upgrading-loki.adoc[leveloffset=+2]
+
+include::modules/log-upgrade/6x_deleting-the-clusterlogging-instance.adoc[leveloffset=+2]
+
+include::modules/log-upgrade/6x_deploying-a-clusterlogforwarder-observability-custom-resource.adoc[leveloffset=+2]
+
+include::modules/log-upgrade/6x_deleting-red-hat-openshift-logging-5-crds.adoc[leveloffset=+2]
+
+include::modules/log-upgrade/6x-uninstall-es-operator.adoc[leveloffset=+2]