Commit 5255efc

Merge pull request #94053 from gabriel-rh/migrate-old-content-modules
OBSDOCS-1972 migrate old content - add in modules
2 parents b1db32a + a982896 commit 5255efc

File tree

146 files changed, +12489 -0 lines changed


modules/about-log-collection.adoc

Lines changed: 51 additions & 0 deletions

// Module included in the following assemblies:
//
// * observability/logging/log_collection_forwarding/log-forwarding.adoc

:_mod-docs-content-type: CONCEPT
[id="about-log-collection_{context}"]
= Log collection

The log collector is a daemon set that deploys pods to each {ocp-product-title} node to collect container and node logs.

By default, the log collector uses the following sources:

* System and infrastructure logs generated by journald log messages from the operating system, the container runtime, and {ocp-product-title}.
* `/var/log/containers/*.log` for all container logs.

If you configure the log collector to collect audit logs, it collects them from `/var/log/audit/audit.log`.

The log collector collects the logs from these sources and forwards them internally or externally depending on your {logging} configuration.
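
For example, if you enable audit log collection, one way to send those logs to the internal log store is a `ClusterLogForwarder` pipeline similar to the following sketch. The pipeline name is illustrative, and the `default` output refers to the internal log store:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: audit-logs # illustrative pipeline name
    inputRefs:
    - audit
    outputRefs:
    - default
----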
[id="about-log-collectors-types_{context}"]
== Log collector types

link:https://vector.dev/docs/about/what-is-vector/[Vector] is a log collector offered as an alternative to Fluentd for the {logging}.

You can configure which logging collector type your cluster uses by modifying the `ClusterLogging` custom resource (CR) `collection` spec:

.Example ClusterLogging CR that configures Vector as the collector
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      type: vector
      vector: {}
# ...
----

[id="about-log-collectors-limitations_{context}"]
== Log collection limitations

The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered _best effort_.

[IMPORTANT]
====
The available container runtimes provide minimal information to identify the source of log messages and do not guarantee unique individual log messages or that these messages can be traced to their source.
====
Lines changed: 25 additions & 0 deletions

// Module included in the following assemblies:
//
// * observability/logging/cluster-logging-deploying.adoc

:_mod-docs-content-type: REFERENCE
[id="cluster-logging-about-crd_{context}"]
= About the ClusterLogging custom resource

To make changes to your {logging} environment, create and modify the `ClusterLogging` custom resource (CR).

.Sample `ClusterLogging` custom resource (CR)
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance <1>
  namespace: openshift-logging <2>
spec:
  managementState: Managed <3>
# ...
----
<1> The CR name must be `instance`.
<2> The CR must be installed to the `openshift-logging` namespace.
<3> The {clo} management state. When the state is set to `Unmanaged`, the Operator is in an unsupported state and does not receive updates.
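
For illustration, one way to change the management state on an existing CR is an `oc patch` merge, as in the following sketch:

[source,terminal]
----
$ oc patch clusterlogging instance -n openshift-logging \
    --type merge -p '{"spec":{"managementState":"Unmanaged"}}'
----

Keep in mind that `Unmanaged` is an unsupported state, and the Operator does not apply updates until the state is set back to `Managed`.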
Lines changed: 27 additions & 0 deletions

// Module included in the following assemblies:
//
// * observability/logging/cluster-logging.adoc

:_mod-docs-content-type: CONCEPT
[id="cluster-logging-about-es-logstore_{context}"]
= About the Elasticsearch log store

The {logging} Elasticsearch instance is optimized and tested for short-term storage of approximately seven days. If you want to retain your logs over a longer term, it is recommended that you move the data to a third-party storage system.

Elasticsearch organizes the log data from Fluentd into datastores, or _indices_, then subdivides each index into multiple pieces called _shards_, which it spreads across a set of Elasticsearch nodes in an Elasticsearch cluster. You can configure Elasticsearch to make copies of the shards, called _replicas_, which Elasticsearch also spreads across the Elasticsearch nodes. The `ClusterLogging` custom resource (CR) allows you to specify how the shards are replicated to provide data redundancy and resilience to failure. You can also specify how long the different types of logs are retained by using a retention policy in the `ClusterLogging` CR.
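
For example, shard redundancy and per-log-type retention are both set in the `logStore` section of the `ClusterLogging` CR; the values in this sketch are illustrative:

[source,yaml]
----
spec:
  logStore:
    type: elasticsearch
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
----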
[NOTE]
====
The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
====

The Red Hat OpenShift Logging Operator and companion OpenShift Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume.
You can use a `ClusterLogging` custom resource (CR) to increase the number of Elasticsearch nodes, as needed.
See the link:https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html[Elasticsearch documentation] for considerations involved in configuring storage.

[NOTE]
====
A highly available Elasticsearch environment requires at least three Elasticsearch nodes, each on a different host.
====

Role-based access control (RBAC) applied on the Elasticsearch indices enables controlled access to the logs for developers. Administrators can access all logs, and developers can access only the logs in their projects.

modules/cluster-logging-about.adoc

Lines changed: 39 additions & 0 deletions

// Module included in the following assemblies:
//
// * virt/support/virt-openshift-cluster-monitoring.adoc
// * observability/logging/cluster-logging.adoc
// * serverless/monitor/cluster-logging-serverless.adoc

:_mod-docs-content-type: CONCEPT
[id="cluster-logging-about_{context}"]
= About deploying {logging}

Administrators can deploy the {logging} by using the {ocp-product-title} web console or the {oc-first} to install the {logging} Operators. The Operators are responsible for deploying, upgrading, and maintaining the {logging}.

Administrators and application developers can view the logs of the projects for which they have view access.

[id="cluster-logging-about-custom-resources_{context}"]
== Logging custom resources

You can configure your {logging} deployment with custom resource (CR) YAML files implemented by each Operator.

*{clo}*:

* `ClusterLogging` (CL) - After the Operators are installed, you create a `ClusterLogging` custom resource (CR) to schedule {logging} pods and other resources necessary to support the {logging}. The `ClusterLogging` CR deploys the collector and forwarder, which are currently both implemented by a daemon set running on each node. The {clo} watches the `ClusterLogging` CR and adjusts the logging deployment accordingly.

* `ClusterLogForwarder` (CLF) - Generates collector configuration to forward logs according to user configuration.

*{loki-op}*:

* `LokiStack` - Controls the Loki cluster as the log store and the web proxy with {ocp-product-title} authentication integration to enforce multi-tenancy.

*{es-op}*:

[NOTE]
====
These CRs are generated and managed by the {es-op}. Manual changes are overwritten by the Operator.
====

* `ElasticSearch` - Configures and deploys an Elasticsearch instance as the default log store.

* `Kibana` - Configures and deploys a Kibana instance to search, query, and view logs.
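
As a sketch of the `ClusterLogForwarder` CR described above, the following example forwards application logs to an external Elasticsearch instance; the output name and URL are illustrative:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: remote-es # illustrative output name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200 # illustrative URL
  pipelines:
  - name: app-logs
    inputRefs:
    - application
    outputRefs:
    - remote-es
----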
Lines changed: 96 additions & 0 deletions

// Module included in the following assemblies:
//
// * observability/logging/troubleshooting/cluster-logging-cluster-status.adoc

:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-clo-status-comp_{context}"]
= Viewing the status of {logging} components

You can view the status for a number of {logging} components.

.Prerequisites

* The {clo} and {es-op} are installed.

.Procedure

. Change to the `openshift-logging` project.
+
[source,terminal]
----
$ oc project openshift-logging
----

. View the status of the {logging} environment:
+
[source,terminal]
----
$ oc describe deployment cluster-logging-operator
----
+
.Example output
[source,terminal]
----
Name:                   cluster-logging-operator

....

Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable

....

Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  62m   deployment-controller  Scaled up replica set cluster-logging-operator-574b8987df to 1
----

. View the status of the {logging} replica set:

.. Get the name of a replica set:
+
[source,terminal]
----
$ oc get replicaset
----
+
.Example output
[source,terminal]
----
NAME                                      DESIRED  CURRENT  READY  AGE
cluster-logging-operator-574b8987df       1        1        1      159m
elasticsearch-cdm-uhr537yu-1-6869694fb    1        1        1      157m
elasticsearch-cdm-uhr537yu-2-857b6d676f   1        1        1      156m
elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd   1        1        1      155m
kibana-5bd5544f87                         1        1        1      157m
----

.. Get the status of the replica set:
+
[source,terminal]
----
$ oc describe replicaset cluster-logging-operator-574b8987df
----
+
.Example output
[source,terminal]
----
Name:           cluster-logging-operator-574b8987df

....

Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed

....

Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  66m   replicaset-controller  Created pod: cluster-logging-operator-574b8987df-qjhqv
----
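
As an additional spot check after the steps above, you can list the pods in the project directly; this sketch assumes the default `openshift-logging` namespace shown earlier:

[source,terminal]
----
$ oc get pods -n openshift-logging
----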
