Commit 8821808

Merge pull request #95467 from eromanova97/OBSDOCS-2053
OBSDOCS-2053: fix 'Configuring content filters to drop unwanted log records' and remove duplicate
2 parents e96f7c2 + 0da0715 commit 8821808

3 files changed (+84 -168 lines)

3 files changed

+84
-168
lines changed

configuring/configuring-log-forwarding.adoc

Lines changed: 1 addition & 1 deletion
@@ -116,7 +116,7 @@ Administrators can configure the following types of filters:
 include::modules/enabling-multi-line-exception-detection.adoc[leveloffset=+2]
 include::modules/logging-http-forward.adoc[leveloffset=+2]
 include::modules/cluster-logging-collector-log-forward-syslog.adoc[leveloffset=+2]
-include::modules/content-filter-drop-records.adoc[leveloffset=+2]
+include::modules/logging-content-filter-drop-records.adoc[leveloffset=+2]
 include::modules/logging-audit-log-filtering.adoc[leveloffset=+2]
 include::modules/input-spec-filter-labels-expressions.adoc[leveloffset=+2]
 include::modules/logging-content-filter-prune-records.adoc[leveloffset=+2]

modules/content-filter-drop-records.adoc

Lines changed: 0 additions & 104 deletions
This file was deleted.

modules/logging-content-filter-drop-records.adoc

Lines changed: 83 additions & 63 deletions
@@ -6,103 +6,123 @@
 [id="logging-content-filter-drop-records_{context}"]
 = Configuring content filters to drop unwanted log records

-When the `drop` filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration.
+Collecting all cluster logs produces a large amount of data, which can be expensive to move and store. To reduce volume, you can configure the `drop` filter to exclude unwanted log records before forwarding. The log collector evaluates log streams against the filter and drops records that match specified conditions.
+
+The `drop` filter uses the `test` field to define one or more conditions for evaluating log records.
+The filter applies the following rules to check whether to drop a record:
+
+* A test passes if all its specified conditions evaluate to true.
+* If a test passes, the filter drops the log record.
+* If you define several tests in the `drop` filter configuration, the filter drops the log record if any of the tests pass.
+* If there is an error evaluating a condition, for example, the referenced field is missing, that condition evaluates to false.

 .Prerequisites

 * You have installed the {clo}.
 * You have administrator permissions.
 * You have created a `ClusterLogForwarder` custom resource (CR).
+* You have installed the {oc-first}.

 .Procedure

-. Add a configuration for a filter to the `filters` spec in the `ClusterLogForwarder` CR.
+. Extract the existing `ClusterLogForwarder` configuration and save it as a local file.
++
+[source,terminal]
+----
+$ oc get clusterlogforwarder <name> -n <namespace> -o yaml > <filename>.yaml
+----
++
+Where:
 +
-The following example shows how to configure the `ClusterLogForwarder` CR to drop log records based on regular expressions:
+* `<name>` is the name of the `ClusterLogForwarder` instance you want to configure.
+* `<namespace>` is the namespace where you created the `ClusterLogForwarder` instance, for example `openshift-logging`.
+* `<filename>` is the name of the local file where you save the configuration.
+
+. Add a configuration to drop unwanted log records to the `filters` spec in the `ClusterLogForwarder` CR.
 +
+--
 .Example `ClusterLogForwarder` CR
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
+apiVersion: observability.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-# ...
+  name: instance
+  namespace: openshift-logging
 spec:
+# ...
   filters:
-  - name: <filter_name>
+  - name: drop-filter
     type: drop # <1>
     drop: # <2>
     - test: # <3>
-      - field: .kubernetes.labels."foo-bar/baz" # <4>
+      - field: .kubernetes.labels."app.version-1.2/beta" # <4>
         matches: .+ # <5>
       - field: .kubernetes.pod_name
         notMatches: "my-pod" # <6>
   pipelines:
-  - name: <pipeline_name> # <7>
-    filterRefs: ["<filter_name>"]
-# ...
+  - name: my-pipeline # <7>
+    filterRefs:
+    - drop-filter
+# ...
 ----
-<1> Specifies the type of filter. The `drop` filter drops log records that match the filter configuration.
-<2> Specifies configuration options for applying the `drop` filter.
-<3> Specifies the configuration for tests that are used to evaluate whether a log record is dropped.
-** If all the conditions specified for a test are true, the test passes and the log record is dropped.
-** When multiple tests are specified for the `drop` filter configuration, if any of the tests pass, the record is dropped.
-** If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false.
-<4> Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (`a-zA-Z0-9_`), for example, `.kubernetes.namespace_name`. If segments contain characters outside of this range, the segment must be in quotes, for example, `.kubernetes.labels."foo.bar-bar/baz"`. You can include multiple field paths in a single `test` configuration, but they must all evaluate to true for the test to pass and the `drop` filter to be applied.
-<5> Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the `matches` or `notMatches` condition for a single `field` path, but not both.
-<6> Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the `matches` or `notMatches` condition for a single `field` path, but not both.
-<7> Specifies the pipeline that the `drop` filter is applied to.
-
-. Apply the `ClusterLogForwarder` CR by running the following command:
+<1> Specify the type of filter. The `drop` filter drops log records that match the filter configuration.
+<2> Specify configuration options for the `drop` filter.
+<3> Specify conditions for tests to evaluate whether the filter drops a log record.
+<4> Specify dot-delimited paths to fields in log records.
+** Each path segment can contain alphanumeric characters and underscores, `a-z`, `A-Z`, `0-9`, `_`, for example, `.kubernetes.namespace_name`.
+** If segments contain different characters, the segment must be in quotes, for example, `.kubernetes.labels."app.version-1.2/beta"`.
+** You can include several field paths in a single `test` configuration, but they must all evaluate to true for the test to pass and the `drop` filter to apply.
+<5> Specify a regular expression. If log records match this regular expression, they are dropped.
+<6> Specify a regular expression. If log records do not match this regular expression, they are dropped.
+<7> Specify the pipeline that uses the `drop` filter.
+--
 +
-[source,terminal]
-----
-$ oc apply -f <filename>.yaml
-----
-
-.Additional examples
-
-The following additional example shows how you can configure the `drop` filter to only keep higher priority log records:
-
+[NOTE]
+====
+You can set either the `matches` or `notMatches` condition for a single `field` path, but not both.
+====
++
+.Example configuration that keeps only high-priority log records
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
-kind: ClusterLogForwarder
-metadata:
 # ...
-spec:
-  filters:
-  - name: important
-    type: drop
-    drop:
-      test:
-      - field: .message
-        notMatches: "(?i)critical|error"
-      - field: .level
-        matches: "info|warning"
+filters:
+- name: important
+  type: drop
+  drop:
+  - test:
+    - field: .message
+      notMatches: "(?i)critical|error"
+    - field: .level
+      matches: "info|warning"
 # ...
 ----
-
-In addition to including multiple field paths in a single `test` configuration, you can also include additional tests that are treated as _OR_ checks. In the following example, records are dropped if either `test` configuration evaluates to true. However, for the second `test` configuration, both field specs must be true for it to be evaluated to true:
-
++
+.Example configuration with several tests
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
-kind: ClusterLogForwarder
-metadata:
 # ...
-spec:
-  filters:
-  - name: important
-    type: drop
-    drop:
-      test:
-      - field: .kubernetes.namespace_name
-        matches: "^open"
-      test:
-      - field: .log_type
-        matches: "application"
-      - field: .kubernetes.pod_name
-        notMatches: "my-pod"
+filters:
+- name: important
+  type: drop
+  drop:
+  - test: # <1>
+    - field: .kubernetes.namespace_name
+      matches: "openshift.*"
+  - test: # <2>
+    - field: .log_type
+      matches: "application"
+    - field: .kubernetes.pod_name
+      notMatches: "my-pod"
 # ...
 ----
+<1> The filter drops logs that contain a namespace that starts with `openshift`.
+<2> The filter drops application logs that do not have `my-pod` in the pod name.
+
+. Apply the `ClusterLogForwarder` CR by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f <filename>.yaml
+----
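
Editor's note: the examples in the updated module elide the surrounding pipeline wiring with `# ...`. The following is a minimal sketch, not part of the commit, of how the `drop-filter` from the example might be wired into a complete `ClusterLogForwarder` CR. The service account name (`logcollector`), the output name and type (`my-http-output`, `http`), and the endpoint URL are assumptions for illustration only; substitute your own values.

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  serviceAccount:
    name: logcollector          # assumed service account with log collection permissions
  filters:
  - name: drop-filter           # filter definition taken from the example in the diff
    type: drop
    drop:
    - test:
      - field: .kubernetes.labels."app.version-1.2/beta"
        matches: .+
      - field: .kubernetes.pod_name
        notMatches: "my-pod"
  outputs:
  - name: my-http-output        # assumed output; replace with your own output configuration
    type: http
    http:
      url: https://example.com/logs
  pipelines:
  - name: my-pipeline
    inputRefs:
    - application               # collect application logs
    outputRefs:
    - my-http-output
    filterRefs:
    - drop-filter               # records that match the drop tests are not forwarded
----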

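As a usage sketch of the extract-edit-apply workflow that the updated module describes, assuming the example values shown in the diff (`instance` in `openshift-logging`) and a local file named `instance.yaml`; `vi` stands in for any editor:

[source,terminal]
----
$ oc get clusterlogforwarder instance -n openshift-logging -o yaml > instance.yaml
$ vi instance.yaml    # add the drop filter under spec.filters and reference it in spec.pipelines.filterRefs
$ oc apply -f instance.yaml
----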