Commit 037c9a8

Merge pull request #71434 from abrennan89/OBSDOCS-805

OBSDOCS-805: Add docs for content filtering

2 parents b53af2c + 0451218

File tree

6 files changed: +202 −0 lines changed


_topic_maps/_topic_map.yml

Lines changed: 2 additions & 0 deletions
@@ -2704,6 +2704,8 @@ Topics:
   Topics:
   - Name: Flow control mechanisms
     File: logging-flow-control-mechanisms
+  # - Name: Filtering logs by content
+  #   File: logging-content-filtering
   - Name: Scheduling resources
     Dir: scheduling_resources
     Topics:

_topic_maps/_topic_map_osd.yml

Lines changed: 2 additions & 0 deletions
@@ -1210,6 +1210,8 @@ Topics:
   Topics:
   - Name: Flow control mechanisms
     File: logging-flow-control-mechanisms
+  # - Name: Filtering logs by content
+  #   File: logging-content-filtering
   - Name: Scheduling resources
     Dir: scheduling_resources
     Topics:

_topic_maps/_topic_map_rosa.yml

Lines changed: 2 additions & 0 deletions
@@ -1452,6 +1452,8 @@ Topics:
   Topics:
   - Name: Flow control mechanisms
     File: logging-flow-control-mechanisms
+  # - Name: Filtering logs by content
+  #   File: logging-content-filtering
   - Name: Scheduling resources
     Dir: scheduling_resources
     Topics:
logging/performance_reliability/logging-content-filtering.adoc

Lines changed: 30 additions & 0 deletions

@@ -0,0 +1,30 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="logging-content-filtering"]
= Filtering logs by content
:context: logging-content-filtering

toc::[]

Collecting all logs from a cluster might produce a large amount of data, which can be expensive to transport and store.

You can reduce the volume of your log data by filtering out low-priority data that does not need to be stored. {logging-uc} provides content filters that you can use to reduce the volume of log data.

[NOTE]
====
Content filters are distinct from `input` selectors. `input` selectors select or ignore entire log streams based on source metadata. Content filters edit log streams to remove and modify records based on the record content.
====

You can reduce log data volume by using one of the following methods:

* xref:../../logging/performance_reliability/logging-content-filtering.adoc#logging-content-filter-drop-records_logging-content-filtering[Configuring content filters to drop unwanted log records]
* xref:../../logging/performance_reliability/logging-content-filtering.adoc#logging-content-filter-prune-records_logging-content-filtering[Configuring content filters to prune log records]

include::modules/logging-content-filter-drop-records.adoc[leveloffset=+1]
include::modules/logging-content-filter-prune-records.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_logging-content-filtering"]
== Additional resources
* xref:../../logging/log_collection_forwarding/configuring-log-forwarding.adoc#cluster-logging-collector-log-forwarding-about_configuring-log-forwarding[About forwarding logs to third-party systems]
modules/logging-content-filter-drop-records.adoc

Lines changed: 108 additions & 0 deletions

@@ -0,0 +1,108 @@
// Module included in the following assemblies:
//
// * logging/performance_reliability/logging-content-filtering.adoc

:_mod-docs-content-type: PROCEDURE
[id="logging-content-filter-drop-records_{context}"]
= Configuring content filters to drop unwanted log records

When the `drop` filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration.

.Prerequisites

* You have installed the {clo}.
* You have administrator permissions.
* You have created a `ClusterLogForwarder` custom resource (CR).

.Procedure

. Add a configuration for a filter to the `filters` spec in the `ClusterLogForwarder` CR.
+
The following example shows how to configure the `ClusterLogForwarder` CR to drop log records based on regular expressions:
+
.Example `ClusterLogForwarder` CR
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  filters:
  - name: <filter_name>
    type: drop # <1>
    drop: # <2>
    - test: # <3>
      - field: .kubernetes.labels."foo-bar/baz" # <4>
        matches: .+ # <5>
      - field: .kubernetes.pod_name
        notMatches: "my-pod" # <6>
  pipelines:
  - name: <pipeline_name> # <7>
    filterRefs: ["<filter_name>"]
# ...
----
<1> Specifies the type of filter. The `drop` filter drops log records that match the filter configuration.
<2> Specifies configuration options for applying the `drop` filter.
<3> Specifies the configuration for tests that are used to evaluate whether a log record is dropped.
** If all the conditions specified for a test are true, the test passes and the log record is dropped.
** When multiple tests are specified for the `drop` filter configuration, if any of the tests pass, the record is dropped.
** If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false.
<4> Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alphanumeric characters and underscores (`a-zA-Z0-9_`), for example, `.kubernetes.namespace_name`. If segments contain characters outside of this range, the segment must be in quotes, for example, `.kubernetes.labels."foo.bar-bar/baz"`. You can include multiple field paths in a single `test` configuration, but they must all evaluate to true for the test to pass and the `drop` filter to be applied.
<5> Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the `matches` or `notMatches` condition for a single `field` path, but not both.
<6> Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the `matches` or `notMatches` condition for a single `field` path, but not both.
<7> Specifies the pipeline that the `drop` filter is applied to.

. Apply the `ClusterLogForwarder` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----

.Additional examples

The following additional example shows how you can configure the `drop` filter to keep only higher-priority log records:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  filters:
  - name: important
    type: drop
    drop:
    - test:
      - field: .message
        notMatches: "(?i)critical|error"
      - field: .level
        matches: "info|warning"
# ...
----

In addition to including multiple field paths in a single `test` configuration, you can also include additional tests that are treated as _OR_ checks. In the following example, records are dropped if either `test` configuration evaluates to true. However, for the second `test` configuration, both field specs must be true for it to evaluate to true:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  filters:
  - name: important
    type: drop
    drop:
    - test:
      - field: .kubernetes.namespace_name
        matches: "^open"
    - test:
      - field: .log_type
        matches: "application"
      - field: .kubernetes.pod_name
        notMatches: "my-pod"
# ...
----
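The evaluation rules described above (conditions within a `test` are ANDed, multiple `test` entries are ORed, and any evaluation error counts as false) can be sketched in Python. This is an illustrative model only, not the collector's actual implementation; all function names are hypothetical:

```python
import re

def parse_path(path):
    """Split a dot-delimited field path into segments. Quoted segments,
    for example ."foo-bar/baz", may contain characters outside a-zA-Z0-9_."""
    return [quoted or plain
            for quoted, plain in re.findall(r'\.(?:"([^"]+)"|([A-Za-z0-9_]+))', path)]

def get_field(record, path):
    """Walk a nested dict along the path; raise KeyError if a segment is absent."""
    value = record
    for segment in parse_path(path):
        value = value[segment]
    return value

def condition_passes(record, cond):
    """One field condition. An evaluation error, such as a missing field,
    makes the condition false, as the docs specify."""
    try:
        value = str(get_field(record, cond["field"]))
    except (KeyError, TypeError):
        return False
    if "matches" in cond:
        return re.search(cond["matches"], value) is not None
    return re.search(cond["notMatches"], value) is None

def should_drop(record, tests):
    """Conditions within a test are ANDed; tests are ORed."""
    return any(all(condition_passes(record, cond) for cond in test)
               for test in tests)

# Mirrors the "important" example: drop records that contain no
# critical/error text and whose level is info or warning.
tests = [[
    {"field": ".message", "notMatches": "(?i)critical|error"},
    {"field": ".level", "matches": "info|warning"},
]]
print(should_drop({"message": "routine sync", "level": "info"}, tests))       # True: dropped
print(should_drop({"message": "CRITICAL failure", "level": "error"}, tests))  # False: kept
```

Note how a record with a missing `.message` field is kept, because the failed lookup makes that condition false and the whole test fails.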
modules/logging-content-filter-prune-records.adoc

Lines changed: 58 additions & 0 deletions

@@ -0,0 +1,58 @@
// Module included in the following assemblies:
//
// * logging/performance_reliability/logging-content-filtering.adoc

:_mod-docs-content-type: PROCEDURE
[id="logging-content-filter-prune-records_{context}"]
= Configuring content filters to prune log records

When the `prune` filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low-value fields such as pod annotations.

.Prerequisites

* You have installed the {clo}.
* You have administrator permissions.
* You have created a `ClusterLogForwarder` custom resource (CR).

.Procedure

. Add a configuration for a filter to the `prune` spec in the `ClusterLogForwarder` CR.
+
The following example shows how to configure the `ClusterLogForwarder` CR to prune log records based on field paths:
+
[IMPORTANT]
====
If both are specified, records are pruned based on the `notIn` array first, which takes precedence over the `in` array. After records have been pruned by using the `notIn` array, they are then pruned by using the `in` array.
====
+
.Example `ClusterLogForwarder` CR
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  filters:
  - name: <filter_name>
    type: prune # <1>
    prune: # <2>
      in: [.kubernetes.annotations, .kubernetes.namespace_id] # <3>
      notIn: [.kubernetes, .log_type, .message, ."@timestamp"] # <4>
  pipelines:
  - name: <pipeline_name> # <5>
    filterRefs: ["<filter_name>"]
# ...
----
<1> Specify the type of filter. The `prune` filter prunes log records by configured fields.
<2> Specify configuration options for applying the `prune` filter. The `in` and `notIn` fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alphanumeric characters and underscores (`a-zA-Z0-9_`), for example, `.kubernetes.namespace_name`. If segments contain characters outside of this range, the segment must be in quotes, for example, `.kubernetes.labels."foo.bar-bar/baz"`.
<3> Optional: Any fields that are specified in this array are removed from the log record.
<4> Optional: Any fields that are not specified in this array are removed from the log record.
<5> Specify the pipeline that the `prune` filter is applied to.

. Apply the `ClusterLogForwarder` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----
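The `notIn`-before-`in` precedence described in the IMPORTANT note can be sketched in Python: first keep only the paths listed in `notIn`, then remove the paths listed in `in` from what remains. This is an illustrative model only, not the collector's actual implementation; all function names are hypothetical:

```python
import copy
import re

def parse_path(path):
    """Split a dot-delimited field path; quoted segments such as
    ."@timestamp" may contain characters outside a-zA-Z0-9_."""
    return [quoted or plain
            for quoted, plain in re.findall(r'\.(?:"([^"]+)"|([A-Za-z0-9_]+))', path)]

def get_at(node, segments):
    for segment in segments:
        node = node[segment]
    return node

def set_at(node, segments, value):
    for segment in segments[:-1]:
        node = node.setdefault(segment, {})
    node[segments[-1]] = value

def prune(record, in_paths=(), not_in_paths=()):
    """notIn is applied first: only the listed paths survive.
    Then in: the listed paths are removed from what remains."""
    record = copy.deepcopy(record)  # leave the caller's record untouched
    if not_in_paths:
        kept = {}
        for path in not_in_paths:
            segments = parse_path(path)
            try:
                set_at(kept, segments, get_at(record, segments))
            except (KeyError, TypeError):
                pass  # absent fields are simply not kept
        record = kept
    for path in in_paths:
        segments = parse_path(path)
        try:
            del get_at(record, segments[:-1])[segments[-1]]
        except (KeyError, TypeError):
            pass  # removing an absent field is a no-op
    return record

# Mirrors the example CR: keep only .kubernetes, .log_type, .message and
# ."@timestamp", then strip annotations and namespace_id from .kubernetes.
record = {
    "kubernetes": {"annotations": {"a": "1"}, "namespace_id": "x",
                   "namespace_name": "app"},
    "log_type": "application",
    "message": "hello",
    "@timestamp": "2024-01-01T00:00:00Z",
    "level": "info",
}
print(prune(record,
            in_paths=[".kubernetes.annotations", ".kubernetes.namespace_id"],
            not_in_paths=[".kubernetes", ".log_type", ".message", '."@timestamp"']))
```

In this sketch the `.level` field disappears because it is not in the `notIn` array, and `.kubernetes.annotations` and `.kubernetes.namespace_id` are then removed by the `in` array even though the surviving `.kubernetes` subtree was kept.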
