
Commit f4019bf
Added OSDOCS-215 to 4.0 files
1 parent fac0088

8 files changed (+20, −35 lines)

modules/efk-logging-about-fluentd.adoc

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
 
 {product-title} uses Fluentd to collect data about your cluster.
 
-Fluentd is deployed as a DaemonSet in {product-title} that deploys replicas according to a node
+Fluentd is deployed as a DaemonSet in {product-title} that deploys nodes according to a node
 label selector, which you can specify with the inventory parameter
 `openshift_logging_fluentd_nodeselector` and the default is `logging-infra-fluentd`.
 As part of the OpenShift cluster installation, it is recommended that you add the

modules/efk-logging-deploy-pre.adoc

Lines changed: 1 addition & 20 deletions
@@ -22,7 +22,7 @@ various areas of the EFK stack.
 +
 .. Ensure that you have deployed a router for the cluster.
 +
-** Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch replica
+** Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
 requires its own storage volume.
 
 . Specify a node selector
@@ -34,22 +34,3 @@ node selector should be used.
 $ oc adm new-project logging --node-selector=""
 ----
 
-* Choose a project.
-+
-Once deployed, the EFK stack collects logs for every
-project within your {product-title} cluster. But the stack requires a dedicated project, by default *openshift-logging*.
-The Ansible playbook creates the project for you. You only need to create a project if you want
-to specify a node-selector on it.
-+
-----
-$ oc adm new-project logging --node-selector=""
-$ oc project logging
-----
-+
-[NOTE]
-====
-Specifying an empty node selector on the project is recommended, as Fluentd should be deployed
-throughout the cluster and any selector would restrict where it is
-deployed. To control component placement, specify node selectors per component to
-be applied to their deployment configurations.
-====

modules/efk-logging-deploy-variables.adoc

Lines changed: 1 addition & 1 deletion
@@ -344,7 +344,7 @@ server cert. The default is the internal CA.
 |The location of the client key Fluentd uses for `openshift_logging_es_host`.
 
 |`openshift_logging_es_cluster_size`
-|Elasticsearch replicas to deploy. Redundancy requires at least three or more.
+|Elasticsearch nodes to deploy. Redundancy requires at least three nodes.
 
 |`openshift_logging_es_cpu_limit`
 |The amount of CPU limit for the ES cluster.
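These variables are set in the Ansible inventory file used by the logging playbook. A minimal sketch of the fragment (the values here are hypothetical examples, not defaults from this document):

```ini
# Hypothetical inventory fragment for the logging playbook.
# Three Elasticsearch nodes provide shard redundancy.
openshift_logging_es_cluster_size=3
openshift_logging_es_cpu_limit=1000m
```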

modules/efk-logging-elasticsearch-persistent-storage.adoc

Lines changed: 3 additions & 3 deletions
@@ -74,7 +74,7 @@ $ chown 1000:1000 /usr/local/es-storage
 ----
 
 Then, use *_/usr/local/es-storage_* as a host-mount as described below.
-Use a different backing file as storage for each Elasticsearch replica.
+Use a different backing file as storage for each Elasticsearch node.
 
 This loopback must be maintained manually outside of {product-title}, on the
 node. You must not maintain it from inside a container.
@@ -94,7 +94,7 @@ $ oc adm policy add-scc-to-user privileged \
 <1> Use the project you created earlier (for example, *logging*) when running the
 logging playbook.
 
-. Each Elasticsearch replica definition must be patched to claim that privilege,
+. Each Elasticsearch node definition must be patched to claim that privilege,
 for example:
 +
 ----
@@ -107,7 +107,7 @@ $ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o
 
 . The Elasticsearch replicas must be located on the correct nodes to use the local
 storage, and should not move around even if those nodes are taken down for a
-period of time. This requires giving each Elasticsearch replica a node selector
+period of time. This requires giving each Elasticsearch node a node selector
 that is unique to a node where an administrator has allocated storage for it. To
 configure a node selector, edit each Elasticsearch deployment configuration and
 add or edit the *nodeSelector* section to specify a unique label that you have
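The node-pinning step above can be sketched with `oc patch` instead of editing the deployment configuration by hand. The node name, dc name, and label in this sketch are hypothetical, and it assumes a cluster where the logging stack is already deployed:

```shell
# Label the node where the administrator allocated storage
# (hypothetical node name and label).
oc label node node1.example.com logging-es-node=1

# Pin the matching Elasticsearch dc to that node via nodeSelector
# (hypothetical dc name).
oc patch dc/logging-es-data-master-abc123 -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"logging-es-node":"1"}}}}}'
```

Repeat with a unique label per node for each Elasticsearch dc, so each node's pod stays with its storage.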

modules/efk-logging-fluentd-log-location.adoc

Lines changed: 3 additions & 3 deletions
@@ -5,7 +5,7 @@
 [id='efk-logging-fluentd-log-location_{context}']
 = Configuring Fluentd log location
 
-Fluentd writes logs to a specified file, by default `/var/log/fluentd/fluentd.log`, or to the console, based on the `LOGGING_FILE_PATH` environment variable.
+Fluentd writes logs to a specified file or to the default location, `/var/log/fluentd/fluentd.log`, based on the `LOGGING_FILE_PATH` environment variable.
 
 .Procedure
 
@@ -18,8 +18,8 @@ in the default inventory file. You can specify a particular file or to STDOUT:
 LOGGING_FILE_PATH=console <1>
 LOGGING_FILE_PATH=<path-to-log/fluentd.log> <2>
 ----
-<1> Sends the log output to STDOUT.
-<2> Sends the log output to the specified file.
+<1> Sends the log output to the Fluentd default location. Retrieve the logs with the `oc logs -f <pod_name>` command.
+<2> Sends the log output to the specified file. Retrieve the logs with the `oc exec <pod_name> -- logs` command.
 
 . Re-run the logging installer playbook:
 +

modules/efk-logging-fluentd-log-rotation.adoc

Lines changed: 5 additions & 1 deletion
@@ -41,5 +41,9 @@ $ oc set env ds/logging-fluentd LOGGING_FILE_AGE=30 LOGGING_FILE_SIZE=1024000"
 ----
 
 Turn off log rotation by setting `LOGGING_FILE_PATH=console`.
-This causes Fluentd to write logs to STDOUT where they can be retrieved using the `oc logs -f <pod_name>` command.
+This causes Fluentd to write logs to the Fluentd default location, *_/var/log/fluentd/fluentd.log_*, where you can retrieve them using the `oc logs -f <pod_name>` command.
+
+----
+oc set env ds/fluentd LOGGING_FILE_PATH=console
+----
 

modules/efk-logging-fluentd-log-viewing.adoc

Lines changed: 4 additions & 4 deletions
@@ -10,19 +10,19 @@ How you view logs depends upon the `LOGGING_FILE_PATH` setting.
 * If `LOGGING_FILE_PATH` points to a file, use the *logs* utility to print out the contents of Fluentd log files:
 +
 ----
-oc exec <pod> logs <1>
+oc exec <pod> -- logs <1>
 ----
-<1> Specify the name of the Fluentd pod.
+<1> Specify the name of the Fluentd pod. Note the space before `logs`.
 +
 For example:
 +
 ----
-oc exec logging-fluentd-lmvms logs
+oc exec logging-fluentd-lmvms -- logs
 ----
 +
 The contents of log files are printed out, starting with the oldest log. Use `-f` option to follow what is being written into the logs.
 
-* If you are using `LOGGING_FILE_PATH=console`, fluentd to write logs to STDOUT. You can retrieve the logs with the `oc logs -f <pod_name>` command.
+* If you are using `LOGGING_FILE_PATH=console`, Fluentd writes logs to its default location, `/var/log/fluentd/fluentd.log`. You can retrieve the logs with the `oc logs -f <pod_name>` command.
 +
 For example
 +
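The two retrieval paths in the updated module can be summarized in one sketch; the pod name is the example one used above and would differ in a real cluster:

```shell
# When LOGGING_FILE_PATH is a file path, read the file through
# the pod's logs utility:
oc exec logging-fluentd-lmvms -- logs

# When LOGGING_FILE_PATH=console, use the standard pod log stream:
oc logs -f logging-fluentd-lmvms
```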

modules/efk-logging-manual-rollout-full.adoc

Lines changed: 2 additions & 2 deletions
@@ -52,7 +52,7 @@ $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> --
 -d '{ "transient": { "cluster.routing.allocation.enable" : "none" } }'
 ----
 
-. Once complete, for each `dc` you have for an ES cluster, scale down all replicas:
+. Once complete, for each `dc` you have for an ES cluster, scale down all nodes:
 +
 ----
 $ oc scale dc <dc_name> --replicas=0
@@ -69,7 +69,7 @@ You will see a new pod deployed. Once the pod has two ready containers, you can
 move on to the next `dc`.
 
 . Once deployment is complete, for each `dc` you have for an ES cluster, scale up
-replicas:
+nodes:
 +
 ----
 $ oc scale dc <dc_name> --replicas=1
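Taken together, the per-dc scale steps above can be sketched as loops over the Elasticsearch deployment configurations. This is a sketch only, assuming the `logging-infra=elasticsearch` selector shown earlier in these modules and a cluster where shard allocation has already been disabled:

```shell
# Scale down every Elasticsearch dc before the restart.
for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
  oc scale "$dc" --replicas=0
done

# ...redeploy each dc and wait until its pod has two ready containers, then:
for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
  oc scale "$dc" --replicas=1
done
```

Re-enable shard allocation only after all nodes are back up, as described in the surrounding procedure.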
