
Commit 63c468d

Merge pull request #13979 from mburke5678/logging-naming-changes
Changes per Rich in PR
2 parents 49a5e33 + bb05e77

11 files changed: +107 −135 lines changed

modules/efk-logging-configuring-image-about.adoc

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@
 
 There are several components in cluster logging, each one implemented with one
 or more images. Each image is specified by an environment variable
-defined in the cluster-logging-operator deployment and should not be changed.
+defined in the *cluster-logging-operator* deployment in the *openshift-logging* project and should not be changed.
 
 You can view the images by running the following command:
 
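
For reference, a hedged way to list those image environment variables on a live cluster (it assumes only the deployment name and project named in the diff above, plus that the variable names contain `IMAGE`):

----
$ oc -n openshift-logging set env deployment/cluster-logging-operator --list | grep IMAGE
----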

modules/efk-logging-curator-configuration.adoc

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@
 [id='configuration-cronjob-{context}']
 = Modifying the Curator configuration
 
-You can configure your ops and non-ops Curator instances using the `logging-curator` configuration map
+You can configure your ops and non-ops Curator instances using the Cluster Logging Custom Resource
 created by the Cluster Logging Operator installation.
 
 You can edit or replace this ConfigMap to reconfigure Curator.
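
As a sketch of what the new flow might look like, editing the Curator schedule through that Custom Resource (the resource name `instance` and the `curation` fields are assumptions, not taken from this commit):

----
$ oc edit ClusterLogging instance -n openshift-logging

spec:
  curation:
    type: curator
    curator:
      schedule: 30 3 * * *  # cron schedule for the Curator job (assumed field)
----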

modules/efk-logging-curator-delete-index.adoc

Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@ To delete indices:
 . Edit the {product-title} custom Curator configuration file:
 +
 ----
-$ oc edit configmap/logging-curator
+$ oc edit configmap/curator
 ----
 
 . Set the following parameters as needed:
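
The parameter list is truncated in this view; for orientation, the Curator ConfigMap typically holds per-project retention settings along these lines (a sketch; `myapp-dev` is a made-up project name):

----
config.yaml: |
  myapp-dev:        # hypothetical project name
    delete:
      days: 1       # delete indices older than one day
  .operations:      # operations logs
    delete:
      weeks: 8
----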

modules/efk-logging-curator-log-level.adoc

Lines changed: 0 additions & 40 deletions
This file was deleted.

modules/efk-logging-deploy-label.adoc

Lines changed: 3 additions & 3 deletions

@@ -11,16 +11,16 @@ After deploying the logging infrastructure pods (Elasticsearch, Kibana, and
 Curator), node labeling should be done in steps of 20 nodes at a time. For
 example:
 
-Using a simple loop:
+Using a simple loop:
 
 ----
-$ while read node; do oc label nodes $node logging-infra-fluentd=true; done < 20_fluentd.lst
+$ while read node; do oc label nodes $node elasticsearch-infra-fluentd=true; done < 20_fluentd.lst
 ----
 
 The following also works:
 
 ----
-$ oc label nodes 10.10.0.{100..119} logging-infra-fluentd=true
+$ oc label nodes 10.10.0.{100..119} elasticsearch-infra-fluentd=true
 ----
 
 Labeling nodes in groups paces the DaemonSets used by OpenShift logging, helping to avoid contention on shared resources such as the image registry.
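
One hedged way to produce those 20-node batch files before running the loop (the `fluentd_batch_` prefix is illustrative):

----
$ oc get nodes --no-headers -o custom-columns=NAME:.metadata.name | split -l 20 - fluentd_batch_
$ while read node; do oc label nodes $node elasticsearch-infra-fluentd=true; done < fluentd_batch_aa
----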

modules/efk-logging-elasticsearch-admin.adoc

Lines changed: 6 additions & 6 deletions

@@ -7,13 +7,13 @@
 
 An administrator certificate, key, and CA that can be used to communicate with and perform
 administrative operations on Elasticsearch are provided within the
-*logging-elasticsearch* secret.
+*elasticsearch* secret in the `openshift-logging` project.
 
 [NOTE]
 ====
 To confirm whether or not your cluster logging installation provides these, run:
 ----
-$ oc describe secret logging-elasticsearch
+$ oc describe secret elasticsearch -n openshift-logging
 ----
 ====
 
@@ -23,8 +23,8 @@ attempting to perform maintenance.
 . To find a pod in a cluster use either:
 +
 ----
-$ oc get pods -l component=es -o name | head -1
-$ oc get pods -l component=es-ops -o name | head -1
+$ oc get pods -l component=elasticsearch -o name | head -1
+$ oc get pods -l component=elasticsearch-infra -o name | head -1
 ----
 
 . Connect to a pod:
@@ -40,13 +40,13 @@ link:https://www.elastic.co/guide/en/elasticsearch/reference/2.3/indices.html[In
 Fluentd sends its logs to Elasticsearch using the index format *project.{project_name}.{project_uuid}.YYYY.MM.DD*
 where YYYY.MM.DD is the date of the log record.
 +
-For example, to delete all logs for the *logging* project with uuid *3b3594fa-2ccd-11e6-acb7-0eb6b35eaee3*
+For example, to delete all logs for the *openshift-logging* project with uid *3b3594fa-2ccd-11e6-acb7-0eb6b35eaee3*
 from June 15, 2016, we can run:
 +
 ----
 $ curl --key /etc/elasticsearch/secret/admin-key \
 --cert /etc/elasticsearch/secret/admin-cert \
 --cacert /etc/elasticsearch/secret/admin-ca -XDELETE \
-"https://localhost:9200/project.logging.3b3594fa-2ccd-11e6-acb7-0eb6b35eaee3.2016.06.15"
+"https://localhost:9200/project.openshift-logging.664360-11e9-92d0-0eb4e1b4a396.2019.03.10"
 ----
 
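
The "connect to a pod" step is truncated here; a hedged sketch that combines the label query above with `oc exec` to issue an administrative request (the `_cat/indices` query is only an illustration):

----
$ pod=$(oc -n openshift-logging get pods -l component=elasticsearch -o name | head -1)
$ oc -n openshift-logging exec $pod -- curl --key /etc/elasticsearch/secret/admin-key \
    --cert /etc/elasticsearch/secret/admin-cert \
    --cacert /etc/elasticsearch/secret/admin-ca \
    "https://localhost:9200/_cat/indices?v"
----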

modules/efk-logging-eventrouter-deploy.adoc

Lines changed: 10 additions & 3 deletions

@@ -11,11 +11,11 @@ The following Template object creates the Service Account, ClusterRole, and ClusterRoleBinding
 
 .Prerequisite
 
-You need proper premissions to make updates within the `openshift-*` namespace.
+You need proper permissions to make updates within the `openshift-*` namespace.
 
 .Procedure
 
-. Create a template for the Event Router:
+. Create a template for the Event Router:
 +
 [source,yaml]
 ----
@@ -31,7 +31,7 @@ objects:
   apiVersion: v1
   metadata:
     name: cluster-logging-eventrouter
-    namespace: ${NAMESPACE}
+    namespace: cluster-logging
 - kind: ClusterRole <2>
   apiVersion: v1
   metadata:
@@ -130,4 +130,11 @@ parameters:
 +
 ----
 $ oc process -f <templatefile> | oc apply -f -
+
+serviceaccount/cluster-logging-eventrouter created
+clusterrole.authorization.openshift.io/event-reader created
+clusterrolebinding.authorization.openshift.io/event-reader-binding created
+configmap/cluster-logging-eventrouter created
+deployment.apps/cluster-logging-eventrouter created
+
 ----
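
A hedged way to verify the Event Router after `oc process | oc apply`, using the namespace the template now hardcodes:

----
$ oc -n cluster-logging get deployment cluster-logging-eventrouter
$ oc -n cluster-logging logs deployment/cluster-logging-eventrouter | head
----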

modules/efk-logging-external-elasticsearch.adoc

Lines changed: 31 additions & 6 deletions

@@ -28,17 +28,42 @@ an instance of Fluentd that you control and that is configured with the
 
 To direct logs to a specific Elasticsearch instance:
 
-. Edit the deployment configuration and replace the value of the above variables with the desired
-instance:
+. Edit the `fluentd` DaemonSet in the *openshift-logging* project:
 +
 ----
-$ oc edit dc/<deployment_configuration>
+$ oc edit ds/fluentd
+
+spec:
+  template:
+    spec:
+      containers:
+        env:
+        - name: ES_HOST
+          value: elasticsearch
+        - name: ES_PORT
+          value: '9200'
+        - name: ES_CLIENT_CERT
+          value: /etc/fluent/keys/app-cert
+        - name: ES_CLIENT_KEY
+          value: /etc/fluent/keys/app-key
+        - name: ES_CA
+          value: /etc/fluent/keys/app-ca
+        - name: OPS_HOST
+          value: elasticsearch
+        - name: OPS_PORT
+          value: '9200'
+        - name: OPS_CLIENT_CERT
+          value: /etc/fluent/keys/infra-cert
+        - name: OPS_CLIENT_KEY
+          value: /etc/fluent/keys/infra-key
+        - name: OPS_CA
+          value: /etc/fluent/keys/infra-ca
 ----
 
 . Set `ES_HOST` and `OPS_HOST` to the same destination,
 while ensuring that `ES_PORT` and `OPS_PORT` also have the same value
 for an external Elasticsearch instance to contain both application and
-operations logs.
+operations logs.
 
 . Configure your externally hosted Elasticsearch instance for TLS:
 
@@ -47,11 +72,11 @@ operations logs.
 
 ** *If your externally hosted Elasticsearch instance uses TLS, but not mutual TLS*,
 update the `_CLIENT_CERT` and `_CLIENT_KEY` variables to be empty. Then patch or
-recreate the *logging-fluentd* secret with the appropriate `_CA` value for
+recreate the *fluentd* secret with the appropriate `_CA` value for
 communicating with your Elasticsearch instance.
 
 ** *If your externally hosted Elasticsearch instance uses Mutual TLS*, patch
-or recreate the *logging-fluentd* secret with your client key, client cert, and CA.
+or recreate the *fluentd* secret with your client key, client cert, and CA.
 The provided Elasticsearch instance uses mutual TLS.
 
 [NOTE]
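
A hedged sketch of recreating the *fluentd* secret for the mutual TLS case (the key names `app-ca`, `app-cert`, and `app-key` are inferred from the mount paths in the DaemonSet above; the file paths are placeholders):

----
$ oc -n openshift-logging create secret generic fluentd \
    --from-file=app-ca=/path/to/ca.pem \
    --from-file=app-cert=/path/to/client-cert.pem \
    --from-file=app-key=/path/to/client-key.pem \
    --dry-run -o yaml | oc replace -f -
----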

modules/efk-logging-external-syslog.adoc

Lines changed: 16 additions & 11 deletions

@@ -6,34 +6,39 @@
 = Configuring Fluentd to send logs to an external syslog server
 
 Use the `fluent-plugin-remote-syslog` plug-in on the host to send logs to an
-external syslog server.
+external syslog server.
 
 .Prerequisite
 
 Set cluster logging to the unmanaged state.
 
 .Procedure
 
-. Set environment variables in the `logging-fluentd` deployment
-configurations:
+. Set environment variables in the `fluentd` daemonset in the `openshift-logging` project:
 +
 [source,yaml]
 ----
-- name: REMOTE_SYSLOG_HOST <1>
-  value: host1
-- name: REMOTE_SYSLOG_HOST_BACKUP
-  value: host2
-- name: REMOTE_SYSLOG_PORT_BACKUP
-  value: 5555
+spec:
+  template:
+    spec:
+      containers:
+      - name: fluentd
+        image: 'quay.io/openshift/origin-logging-fluentd:latest'
+        env:
+        - name: REMOTE_SYSLOG_HOST <1>
+          value: host1
+        - name: REMOTE_SYSLOG_HOST_BACKUP
+          value: host2
+        - name: REMOTE_SYSLOG_PORT_BACKUP
+          value: 5555
 ----
 <1> The desired remote syslog host. Required for each host.
 +
 This will build two destinations. The syslog server on `host1` will be
 receiving messages on the default port of `514`, while `host2` will be receiving
 the same messages on port `5555`.
 
-. Alternatively, you can configure your own custom *_fluent.conf_* in the
-`logging-fluentd` ConfigMaps.
+. Alternatively, you can configure your own custom *_fluent.conf_* in the `fluentd` ConfigMap in the `openshift-logging` project.
 +
 **Fluentd Environment Variables**
 +
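
For quick edits, the same variables could also be set without opening an editor (a sketch; the host and port values are the illustrative ones from the YAML above):

----
$ oc -n openshift-logging set env ds/fluentd \
    REMOTE_SYSLOG_HOST=host1 \
    REMOTE_SYSLOG_HOST_BACKUP=host2 \
    REMOTE_SYSLOG_PORT_BACKUP=5555
----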

modules/efk-logging-fluentd-external.adoc

Lines changed: 30 additions & 39 deletions

@@ -8,7 +8,7 @@
 You can configure Fluentd to send a copy of its logs to an external log
 aggregator, and not the default Elasticsearch, using the `secure-forward`
 plug-in. From there, you can further process log records after the locally
-hosted Fluentd has processed them.
+hosted Fluentd has processed them.
 
 ifdef::openshift-origin[]
 The `secure-forward` plug-in is provided with the Fluentd image as of v1.4.0.
@@ -28,44 +28,35 @@ To send a copy of Fluentd logs to an external log aggregator:
 . Edit the Fluentd configuration map:
 +
 ----
-$ oc edit configmap/fluentd
-----
-+
-----
-https://docs.fluentd.org/v1.0/articles/in_forward
-<store>
-  @type forward
-  <security>
-    self_hostname ${hostname} # ${hostname} is a placeholder.
-    shared_key <shared_key_between_forwarder_and_forwardee>
-  </security>
-  transport tls
-  tls_verify_hostname true # Set false to ignore server cert hostname.
-
-  tls_cert_path /path/for/certificate/ca_cert.pem
-  <buffer>
-    @type file
-    path '/var/lib/fluentd/forward'
-    queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '1024' }"
-    chunk_limit_size "#{ENV['BUFFER_SIZE_LIMIT'] || '1m' }"
-    flush_interval "#{ENV['FORWARD_FLUSH_INTERVAL'] || '5s'}"
-    flush_at_shutdown "#{ENV['FLUSH_AT_SHUTDOWN'] || 'false'}"
-    retry_max_interval "#{ENV['FORWARD_RETRY_WAIT'] || '300'}"
-    retry_forever true
-    # the systemd journald 0.0.8 input plugin will just throw away records if the buffer
-    # queue limit is hit - 'block' will halt further reads and keep retrying to flush the
-    # buffer to the remote - default is 'exception' because in_tail handles that case
-    overflow_action "#{ENV['BUFFER_QUEUE_FULL_ACTION'] || 'exception'}"
-  </buffer>
-  <server>
-    host server.fqdn.example.com # or IP
-    port 24284
-  </server>
-  <server>
-    host 203.0.113.8 # ip address to connect
-    name server.fqdn.example.com # The name of the server. Used for logging and certificate verification in TLS transport (when host is address).
-  </server>
-</store>
+$ oc edit configmap/fluentd -n openshift-logging
+
+secure-forward.conf: |
+  # <store>
+  #   @type secure_forward
+
+  #   self_hostname ${hostname}
+  #   shared_key <SECRET_STRING>
+
+  #   secure yes
+  #   enable_strict_verification yes
+
+  #   ca_cert_path /etc/fluent/keys/your_ca_cert
+  #   ca_private_key_path /etc/fluent/keys/your_private_key
+  #   for private CA secret key
+  #   ca_private_key_passphrase passphrase
+
+  #   <server>
+  #     or IP
+  #     host server.fqdn.example.com
+  #     port 24284
+  #   </server>
+  #   <server>
+  #     ip address to connect
+  #     host 203.0.113.8
+  #     specify hostlabel for FQDN verification if ipaddress is used for host
+  #     hostlabel server.fqdn.example.com
+  #   </server>
+  # </store>
 ----
 
 . Add certificates to be used in `secure-forward.conf` to the existing
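
The certificate step is cut off in this view; a hedged sketch of patching a CA certificate into the `fluentd` secret (the data key `your_ca_cert` mirrors the `ca_cert_path` shown above; `base64 -w0` assumes GNU coreutils):

----
$ oc -n openshift-logging patch secret/fluentd --type=json \
    -p "[{\"op\":\"add\",\"path\":\"/data/your_ca_cert\",\"value\":\"$(base64 -w0 /path/to/ca_cert.pem)\"}]"
----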
