Commit 4b0d3da

committed
OBSDOCS-1972 migrate old content - add in log_collection_forwarding folder
1 parent dd6b27e commit 4b0d3da

11 files changed: +251 −0 lines changed

_topic_maps/_topic_map.yml

Lines changed: 16 additions & 0 deletions
@@ -50,3 +50,19 @@ Topics:
   File: cluster-logging-memory
 - Name: Configuring systemd-journald for Logging
   File: cluster-logging-systemd
+---
+Name: Log collection and forwarding
+Dir: log_collection_forwarding
+Topics:
+- Name: About log collection and forwarding
+  File: log-forwarding
+- Name: Log output types
+  File: logging-output-types
+- Name: Enabling JSON log forwarding
+  File: cluster-logging-enabling-json-logging
+- Name: Configuring log forwarding
+  File: configuring-log-forwarding
+- Name: Configuring the logging collector
+  File: cluster-logging-collector
+- Name: Collecting and storing Kubernetes events
+  File: cluster-logging-eventrouter

log_collection_forwarding/_attributes

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+../_attributes/
log_collection_forwarding/cluster-logging-collector.adoc

Lines changed: 35 additions & 0 deletions

@@ -0,0 +1,35 @@
+:_mod-docs-content-type: ASSEMBLY
+:context: cluster-logging-collector
+[id="cluster-logging-collector"]
+= Configuring the logging collector
+include::_attributes/common-attributes.adoc[]
+
+toc::[]
+
+{logging-title-uc} collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata.
+All supported modifications to the log collector can be performed through the `spec.collection` stanza in the `ClusterLogging` custom resource (CR).
+
+include::modules/configuring-logging-collector.adoc[leveloffset=+1]
+
+include::modules/creating-logfilesmetricexporter.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-limits.adoc[leveloffset=+1]
+
+[id="cluster-logging-collector-input-receivers"]
+== Configuring input receivers
+
+The {clo} deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver.
+The service name is generated based on the following:
+
+* For multi log forwarder `ClusterLogForwarder` CR deployments, the service name is in the format `<ClusterLogForwarder_CR_name>-<input_name>`. For example, `example-http-receiver`.
+* For legacy `ClusterLogForwarder` CR deployments, meaning those named `instance` and located in the `openshift-logging` namespace, the service name is in the format `collector-<input_name>`. For example, `collector-http-receiver`.
+
+include::modules/log-collector-http-server.adoc[leveloffset=+2]
+//include::modules/log-collector-rsyslog-server.adoc[leveloffset=+2]
+// uncomment for 5.9 release
+
+[role="_additional-resources"]
+.Additional resources
+* xref:../log_collection_forwarding/configuring-log-forwarding.adoc#logging-audit-filtering_configuring-log-forwarding[Overview of API audit filter]
+
+include::modules/cluster-logging-collector-tuning.adoc[leveloffset=+1]
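Editor's note: as an illustrative sketch of the input receiver behavior described in this assembly, a multi log forwarder `ClusterLogForwarder` named `example` with an `http` input named `http-receiver` would produce a service named `example-http-receiver`. All names, the namespace, and the output endpoint below are hypothetical, and receiver field names may vary between {logging} versions:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: example                       # service name becomes example-http-receiver
  namespace: example-namespace        # hypothetical; multi log forwarder mode
spec:
  serviceAccountName: log-collector   # hypothetical service account with collection RBAC
  inputs:
  - name: http-receiver
    receiver:
      type: http
      http:
        format: kubeAPIAudit          # the HTTP receiver accepts audit-format payloads
        port: 8443                    # port exposed by the generated service
  outputs:
  - name: audit-store
    type: http
    url: https://example.com/logs     # hypothetical external endpoint
  pipelines:
  - name: http-pipeline
    inputRefs:
    - http-receiver
    outputRefs:
    - audit-store
```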
log_collection_forwarding/cluster-logging-enabling-json-logging.adoc

Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
+:_mod-docs-content-type: ASSEMBLY
+:context: cluster-logging-enabling-json-logging
+[id="cluster-logging-enabling-json-logging"]
+= Enabling JSON log forwarding
+include::_attributes/common-attributes.adoc[]
+
+toc::[]
+
+You can configure the Log Forwarding API to parse JSON strings into a structured object.
+
+include::modules/cluster-logging-json-log-forwarding.adoc[leveloffset=+1]
+include::modules/cluster-logging-configuration-of-json-log-data-for-default-elasticsearch.adoc[leveloffset=+1]
+include::modules/cluster-logging-forwarding-json-logs-to-the-default-elasticsearch.adoc[leveloffset=+1]
+include::modules/cluster-logging-forwarding-separate-indices.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../log_collection_forwarding/log-forwarding.adoc#log-forwarding[About log forwarding]
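Editor's note: the JSON parsing flow covered by the included modules reduces to setting `parse: json` on a pipeline; for the default Elasticsearch store, `structuredTypeKey` or `structuredTypeName` selects the target index. A minimal sketch with illustrative values:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat  # index chosen per logFormat label
      structuredTypeName: nologformat                 # fallback index when the key is absent
  pipelines:
  - name: parse-app-logs
    inputRefs:
    - application
    outputRefs:
    - default
    parse: json          # copies the parsed JSON log into the structured field
```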
log_collection_forwarding/cluster-logging-eventrouter.adoc

Lines changed: 18 additions & 0 deletions

@@ -0,0 +1,18 @@
+:_mod-docs-content-type: ASSEMBLY
+:context: cluster-logging-eventrouter
+[id="cluster-logging-eventrouter"]
+= Collecting and storing Kubernetes events
+include::_attributes/common-attributes.adoc[]
+
+toc::[]
+
+The {ocp-product-title} Event Router is a pod that watches Kubernetes events and logs them for collection by the {logging}. You must manually deploy the Event Router.
+
+The Event Router collects events from all projects and writes them to `STDOUT`. The collector then forwards those events to the store defined in the `ClusterLogForwarder` custom resource (CR).
+
+[IMPORTANT]
+====
+The Event Router adds additional load to Fluentd and can impact the number of other log messages that can be processed.
+====
+
+include::modules/cluster-logging-eventrouter-deploy.adoc[leveloffset=+1]
log_collection_forwarding/configuring-log-forwarding.adoc

Lines changed: 78 additions & 0 deletions

@@ -0,0 +1,78 @@
+:_mod-docs-content-type: ASSEMBLY
+include::_attributes/common-attributes.adoc[]
+[id="configuring-log-forwarding"]
+= Configuring log forwarding
+:context: configuring-log-forwarding
+
+toc::[]
+
+include::snippets/audit-logs-default.adoc[]
+
+include::modules/cluster-logging-collector-log-forwarding-about.adoc[leveloffset=+1]
+
+include::modules/logging-create-clf.adoc[leveloffset=+1]
+
+include::modules/logging-delivery-tuning.adoc[leveloffset=+1]
+
+include::modules/logging-multiline-except.adoc[leveloffset=+1]
+
+ifndef::openshift-rosa[]
+include::modules/cluster-logging-collector-log-forward-gcp.adoc[leveloffset=+1]
+endif::openshift-rosa[]
+
+include::modules/logging-forward-splunk.adoc[leveloffset=+1]
+
+include::modules/logging-http-forward.adoc[leveloffset=+1]
+
+include::modules/logging-forwarding-azure.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-log-forward-project.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc[leveloffset=+1]
+
+include::modules/logging-audit-log-filtering.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+
+
+* link:https://docs.openshift.com/container-platform/latest/networking/network_security/logging-network-security.html#logging-network-security[Logging for egress firewall and network policy rules]
+
+// TODO ROSA
+ifdef::openshift-rosa,openshift-dedicated[]
+* link:https://docs.openshift.com/container-platform/latest/networking/ovn_kubernetes_network_provider/logging-network-security.html#logging-network-security[Logging for egress firewall and network policy rules]
+endif::[]
+
+include::modules/cluster-logging-collector-log-forward-loki.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+* link:https://grafana.com/docs/loki/latest/configuration/[Configuring Loki server]
+
+include::modules/cluster-logging-collector-log-forward-es.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-log-forward-fluentd.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-log-forward-syslog.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-log-forward-kafka.adoc[leveloffset=+1]
+
+// Cloudwatch docs
+include::modules/cluster-logging-collector-log-forward-cloudwatch.adoc[leveloffset=+1]
+include::modules/cluster-logging-collector-log-forward-secret-cloudwatch.adoc[leveloffset=+1]
+
+//TODO ROSA
+ifdef::openshift-rosa[]
+include::modules/rosa-cluster-logging-collector-log-forward-sts-cloudwatch.adoc[leveloffset=+1]
+endif::[]
+
+
+include::modules/cluster-logging-collector-log-forward-sts-cloudwatch.adoc[leveloffset=+1]
+
+
+[role="_additional-resources"]
+.Additional resources
+* link:https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html[AWS STS API Reference]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
+* link:https://docs.openshift.com/container-platform/latest/authentication/managing_cloud_provider_credentials/about-cloud-credential-operator.html#about-cloud-credential-operator[Cloud Credential Operator (CCO)]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
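Editor's note: the forwarding configuration that the included modules build up follows one pattern, named `outputs` plus `pipelines` that route log types to them. A minimal legacy-mode sketch; the endpoints, secret name, and topic are hypothetical placeholders:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                # legacy mode: fixed name and namespace
  namespace: openshift-logging
spec:
  outputs:
  - name: external-es
    type: elasticsearch
    url: https://elasticsearch.example.com:9200   # hypothetical endpoint
    secret:
      name: es-secret           # hypothetical secret holding TLS credentials
  - name: kafka-audit
    type: kafka
    url: tls://kafka.example.com:9093/audit-topic # hypothetical broker and topic
  pipelines:
  - name: application-logs
    inputRefs:
    - application
    outputRefs:
    - external-es
  - name: audit-logs
    inputRefs:
    - audit
    outputRefs:
    - kafka-audit
```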

log_collection_forwarding/images

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+../images/
log_collection_forwarding/log-forwarding.adoc

Lines changed: 48 additions & 0 deletions

@@ -0,0 +1,48 @@
+:_mod-docs-content-type: ASSEMBLY
+include::_attributes/common-attributes.adoc[]
+[id="log-forwarding"]
+= About log collection and forwarding
+:context: log-forwarding
+
+toc::[]
+
+The {clo} deploys a collector based on the `ClusterLogForwarder` resource specification. There are two collector options supported by this Operator: the legacy Fluentd collector, and the Vector collector.
+
+include::snippets/logging-fluentd-dep-snip.adoc[]
+
+include::modules/about-log-collection.adoc[leveloffset=+1]
+
+include::modules/logging-vector-fluentd-feature-comparison.adoc[leveloffset=+2]
+
+include::modules/log-forwarding-collector-outputs.adoc[leveloffset=+2]
+
+[id="log-forwarding-about-clf"]
+== Log forwarding
+
+Administrators can create `ClusterLogForwarder` resources that specify which logs are collected, how they are transformed, and where they are forwarded to.
+
+`ClusterLogForwarder` resources can be used to forward container, infrastructure, and audit logs to specific endpoints within or outside of a cluster. Transport Layer Security (TLS) is supported so that log forwarders can be configured to send logs securely.
+
+Administrators can also authorize RBAC permissions that define which service accounts and users can access and forward which types of logs.
+
+include::modules/log-forwarding-implementations.adoc[leveloffset=+2]
+
+[id="log-forwarding-enabling-multi-clf-feature"]
+=== Enabling the multi log forwarder feature for a cluster
+
+To use the multi log forwarder feature, you must create a service account and cluster role bindings for that service account. You can then reference the service account in the `ClusterLogForwarder` resource to control access permissions.
+
+[IMPORTANT]
+====
+To support multi log forwarding in namespaces other than the `openshift-logging` namespace, you must update the {clo} to watch all namespaces. This functionality is supported by default in new {clo} version 5.8 installations.
+====
+
+include::modules/log-collection-rbac-permissions.adoc[leveloffset=+3]
+
+[role="_additional-resources"]
+.Additional resources
+
+* link:https://docs.openshift.com/container-platform/latest/authentication/using-rbac.html#using-rbac[Using RBAC to define and apply permissions]
+* link:https://docs.openshift.com/container-platform/latest/authentication/using-service-accounts-in-applications.html#using-service-accounts-in-applications[Using service accounts in applications]
+
+* link:https://kubernetes.io/docs/reference/access-authn-authz/rbac/[Using RBAC Authorization Kubernetes documentation]
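Editor's note: the service account and cluster role bindings this assembly calls for can be sketched as plain RBAC resources. The `collect-application-logs` cluster role is provided by the {clo} (sibling roles exist for infrastructure and audit logs), while the service account name and namespace here are hypothetical:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-collector             # hypothetical service account
  namespace: app-namespace        # hypothetical namespace for the custom forwarder
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: log-collector-app-logs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collect-application-logs  # Operator-provided role authorizing application log collection
subjects:
- kind: ServiceAccount
  name: log-collector
  namespace: app-namespace
```

The `ClusterLogForwarder` resource then references the account through `spec.serviceAccountName: log-collector`.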
log_collection_forwarding/logging-output-types.adoc

Lines changed: 33 additions & 0 deletions

@@ -0,0 +1,33 @@
+:_mod-docs-content-type: ASSEMBLY
+include::_attributes/common-attributes.adoc[]
+[id="logging-output-types"]
+= Log output types
+:context: logging-output-types
+
+toc::[]
+
+Outputs define the destination where logs are sent to from a log forwarder. You can configure multiple types of outputs in the `ClusterLogForwarder` custom resource (CR) to send logs to servers that support different protocols.
+
+include::modules/supported-log-outputs.adoc[leveloffset=+1]
+
+[id="logging-output-types-descriptions"]
+== Output type descriptions
+
+`default`:: The on-cluster, Red{nbsp}Hat managed log store. You are not required to configure the default output.
++
+[NOTE]
+====
+If you configure a `default` output, you receive an error message, because the `default` output name is reserved for referencing the on-cluster, Red{nbsp}Hat managed log store.
+====
+`loki`:: Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.
+`kafka`:: A Kafka broker. The `kafka` output can use a TCP or TLS connection.
+`elasticsearch`:: An external Elasticsearch instance. The `elasticsearch` output can use a TLS connection.
+`fluentdForward`:: An external log aggregation solution that supports Fluentd. This option uses the Fluentd `forward` protocols. The `fluentdForward` output can use a TCP or TLS connection and supports shared-key authentication by providing a `shared_key` field in a secret. Shared-key authentication can be used with or without TLS.
++
+[IMPORTANT]
+====
+The `fluentdForward` output is only supported if you are using the Fluentd collector. It is not supported if you are using the Vector collector. If you are using the Vector collector, you can forward logs to Fluentd by using the `http` output.
+====
+`syslog`:: An external log aggregation solution that supports the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocols. The `syslog` output can use a UDP, TCP, or TLS connection.
+`cloudwatch`:: Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS).
+`cloudlogging`:: Google Cloud Logging, a monitoring and log storage service hosted by Google Cloud Platform (GCP).
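Editor's note: to illustrate how these types appear in practice, an `outputs` stanza mixing several of them might look like the following. All names, URLs, and the secret are placeholders; note that `cloudwatch` is configured through a nested block and a credentials secret rather than a `url` field:

```yaml
spec:
  outputs:
  - name: loki-external
    type: loki
    url: https://loki.example.com:3100        # hypothetical Loki endpoint
  - name: kafka-brokers
    type: kafka
    url: tls://kafka.example.com:9093/log-topic
  - name: rsyslog-server
    type: syslog
    url: udp://rsyslog.example.com:514        # syslog also supports tcp:// and tls://
  - name: cw
    type: cloudwatch
    cloudwatch:
      groupBy: logType                        # log group per log type
      region: us-east-1
    secret:
      name: cw-secret                         # hypothetical secret with AWS credentials
```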

log_collection_forwarding/modules

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+../modules/
