Commit 27b401d

Merge pull request #13230 from mburke5678/logging-move-311-changes-to-40
Adding changes to 3.11 docs to 4.0
2 parents b75c81e + 37b3f5a commit 27b401d

16 files changed: +393 -168 lines

_topic_map.yml

Lines changed: 2 additions & 2 deletions

@@ -152,8 +152,6 @@ Topics:
   File: efk-logging-deploy
 - Name: Uninstalling the EFK stack
   File: efk-logging-uninstall
-- Name: Troubleshooting Kubernetes
-  File: efk-logging-troubleshooting
 - Name: Working with Elasticsearch
   File: efk-logging-elasticsearch
 - Name: Working with Fluentd
@@ -170,5 +168,7 @@ Topics:
   File: efk-logging-manual-rollout
 - Name: Configuring systemd-journald and rsyslog
   File: efk-logging-systemd
+- Name: Troubleshooting Kubernetes
+  File: efk-logging-troubleshooting
 - Name: Exported fields
   File: efk-logging-exported-fields

logging/efk-logging-elasticsearch.adoc

Lines changed: 5 additions & 1 deletion

@@ -14,7 +14,11 @@ toc::[]

 include::modules/efk-logging-elasticsearch-ha.adoc[leveloffset=+1]

-include::modules/efk-logging-elasticsearch-persistent-storage.adoc[leveloffset=+1]
+include::modules/efk-logging-elasticsearch-persistent-storage-about.adoc[leveloffset=+1]
+
+include::modules/efk-logging-elasticsearch-persistent-storage-persistent.adoc[leveloffset=+2]
+
+include::modules/efk-logging-elasticsearch-persistent-storage-local.adoc[leveloffset=+2]

 include::modules/efk-logging-elasticsearch-scaling.adoc[leveloffset=+1]

logging/efk-logging-fluentd.adoc

Lines changed: 8 additions & 0 deletions

@@ -13,6 +13,14 @@ toc::[]
 // assemblies.


+include::modules/efk-logging-fluentd-pod-location.adoc[leveloffset=+1]
+
+include::modules/efk-logging-fluentd-log-viewing.adoc[leveloffset=+1]
+
+include::modules/efk-logging-fluentd-log-location.adoc[leveloffset=+1]
+
+include::modules/efk-logging-fluentd-log-rotation.adoc[leveloffset=+1]
+
 include::modules/efk-logging-external-fluentd.adoc[leveloffset=+1]

 include::modules/efk-logging-fluentd-connections.adoc[leveloffset=+1]

modules/efk-logging-about-fluentd.adoc

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@

 {product-title} uses Fluentd to collect data about your cluster.

-Fluentd is deployed as a DaemonSet in {product-title} that deploys replicas according to a node
+Fluentd is deployed as a DaemonSet in {product-title} that deploys nodes according to a node
 label selector, which you can specify with the inventory parameter
 `openshift_logging_fluentd_nodeselector` and the default is `logging-infra-fluentd`.
 As part of the OpenShift cluster installation, it is recommended that you add the
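
Not part of the diff above, but as a reference for the parameter this hunk describes: a minimal Ansible inventory sketch of setting the Fluentd node selector, using the documented default label (the `[OSEv3:vars]` group and the label value are illustrative):

----
[OSEv3:vars]
# Schedule Fluentd pods only on nodes that carry the default label.
openshift_logging_install_logging=true
openshift_logging_fluentd_nodeselector={'logging-infra-fluentd': 'true'}
----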

modules/efk-logging-deploy-pre.adoc

Lines changed: 1 addition & 20 deletions

@@ -22,7 +22,7 @@ various areas of the EFK stack.
 +
 .. Ensure that you have deployed a router for the cluster.
 +
-** Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch replica
+** Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
 requires its own storage volume.

 . Specify a node selector
@@ -34,22 +34,3 @@ node selector should be used.
 $ oc adm new-project logging --node-selector=""
 ----

-* Choose a project.
-+
-Once deployed, the EFK stack collects logs for every
-project within your {product-title} cluster. But the stack requires a dedicated project, by default *openshift-logging*.
-The Ansible playbook creates the project for you. You only need to create a project if you want
-to specify a node-selector on it.
-+
-----
-$ oc adm new-project logging --node-selector=""
-$ oc project logging
-----
-+
-[NOTE]
-====
-Specifying an empty node selector on the project is recommended, as Fluentd should be deployed
-throughout the cluster and any selector would restrict where it is
-deployed. To control component placement, specify node selectors per component to
-be applied to their deployment configurations.
-====

modules/efk-logging-deploy-variables.adoc

Lines changed: 5 additions & 2 deletions

@@ -344,7 +344,7 @@ server cert. The default is the internal CA.
 |The location of the client key Fluentd uses for `openshift_logging_es_host`.

 |`openshift_logging_es_cluster_size`
-|Elasticsearch replicas to deploy. Redundancy requires at least three or more.
+|Elasticsearch nodes to deploy. Redundancy requires at least three or more.

 |`openshift_logging_es_cpu_limit`
 |The amount of CPU limit for the ES cluster.
@@ -377,7 +377,10 @@ openshift_logging_es_pvc_dynamic value.
 |`openshift_logging_es_pvc_size`
 |Size of the persistent volume claim to
 create per Elasticsearch instance. For example, 100G. If omitted, no PVCs are
-created and ephemeral volumes are used instead. If this parameter is set, `openshift_logging_elasticsearch_storage_type` is set to `pvc`.
+created and ephemeral volumes are used instead. If you set this parameter, the logging installer sets `openshift_logging_elasticsearch_storage_type` to `pvc`.
+
+|`openshift_logging_elasticsearch_storage_type`
+|Sets the Elasticsearch storage type. If you are using Persistent Elasticsearch Storage, the logging installer sets this to `pvc`.

 |`openshift_logging_elasticsearch_storage_type`
 |Sets the Elasticsearch storage type. If you are using Persistent Elasticsearch Storage, set to `pvc`.
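
Not part of the commit: a minimal inventory sketch that combines the parameters touched in this hunk (the values shown are illustrative; 100G is the example size given in the table above):

----
# Three Elasticsearch nodes for redundancy, each with its own persistent volume claim.
# Setting openshift_logging_es_pvc_size causes the installer to set the storage type to pvc.
openshift_logging_es_cluster_size=3
openshift_logging_es_pvc_size=100G
openshift_logging_elasticsearch_storage_type=pvc
----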

modules/efk-logging-elasticsearch-persistent-storage-about.adoc (new file)

Lines changed: 67 additions & 0 deletions

@@ -0,0 +1,67 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-about_{context}']
= Configuring persistent storage for Elasticsearch

By default, the `openshift_logging` Ansible role creates an ephemeral
deployment in which all of a pod's data is lost upon restart.

For production environments, each Elasticsearch deployment configuration requires a persistent storage volume. You can specify an existing persistent
volume claim or allow {product-title} to create one.

* *Use existing PVCs.* If you create your own PVCs for the deployment, {product-title} uses those PVCs.
+
Name the PVCs to match the `openshift_logging_es_pvc_prefix` setting, which defaults to
`logging-es`. Assign each PVC a name with a sequence number added to it: `logging-es-0`,
`logging-es-1`, `logging-es-2`, and so on.

* *Allow {product-title} to create a PVC.* If a PVC for Elasticsearch does not exist, {product-title} creates the PVC based on parameters
in the Ansible inventory file, by default *_/etc/ansible/hosts_*.
+
[cols="3,7",options="header"]
|===
|Parameter
|Description

|`openshift_logging_es_pvc_size`
|Specify the size of the PVC request.

|`openshift_logging_elasticsearch_storage_type`
a|Specify the storage type as `pvc`.
[NOTE]
====
This is an optional parameter. Setting the `openshift_logging_es_pvc_size` parameter to a value greater than 0 automatically sets this parameter to `pvc` by default.
====

|`openshift_logging_es_pvc_prefix`
|Optionally, specify a custom prefix for the PVC.
|===
+
For example:
+
[source,bash]
----
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_pvc_size=104802308Ki
openshift_logging_es_pvc_prefix=es-logging
----

If you use dynamically provisioned PVs, the {product-title} logging installer creates PVCs
that use the default storage class or the storage class specified with the `openshift_logging_elasticsearch_pvc_storage_class_name` parameter.

If you use NFS storage, the {product-title} installer creates the persistent volumes, based on the `openshift_logging_storage_*` parameters,
and the {product-title} logging installer creates PVCs, using the `openshift_logging_es_pvc_*` parameters.
Make sure you specify the correct parameters to use persistent volumes with EFK.
Also set the `openshift_enable_unsupported_configurations=true` parameter in the Ansible inventory file,
as the logging installer blocks the installation of NFS with core infrastructure by default.

[WARNING]
====
Using NFS storage as a volume or a persistent volume (or via NAS such as
Gluster) is not supported for Elasticsearch storage, as Lucene relies on file
system behavior that NFS does not supply. Data corruption and other problems can
occur.
====
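
Not part of the commit: the dynamic-provisioning path mentioned at the end of this module has no inventory example of its own, so here is a minimal sketch, assuming a storage class named `gp2` already exists in the cluster (the class name and size are illustrative):

----
# Create the Elasticsearch PVCs dynamically from a named storage class
# instead of the cluster default class.
openshift_logging_es_pvc_dynamic=true
openshift_logging_elasticsearch_pvc_storage_class_name=gp2
openshift_logging_es_pvc_size=100Gi
----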

modules/efk-logging-elasticsearch-persistent-storage-local.adoc (new file)

Lines changed: 91 additions & 0 deletions

@@ -0,0 +1,91 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-local_{context}']
= Configuring NFS as local storage for Elasticsearch

You can allocate a large file on an NFS server and mount the file to the nodes. You can then use the file as a host path device.

.Prerequisites

Allocate a large file on an NFS server and mount the file to the nodes:

----
$ mount -F nfs nfserver:/nfs/storage/elasticsearch-1 /usr/local/es-storage
$ chown 1000:1000 /usr/local/es-storage
----

Then, use *_/usr/local/es-storage_* as a host-mount as described below.
Use a different backing file as storage for each Elasticsearch replica.

This loopback must be maintained manually outside of {product-title}, on the
node. You must not maintain it from inside a container.

.Procedure

To use a local disk volume (if available) on each
node host as storage for an Elasticsearch replica:

. The relevant service account must be given the privilege to mount and edit a
local volume:
+
----
$ oc adm policy add-scc-to-user privileged \
  system:serviceaccount:logging:aggregated-logging-elasticsearch <1>
----
<1> Use the project you created earlier, for example, *logging*, when running the
logging playbook.

. Each Elasticsearch node definition must be patched to claim that privilege,
for example:
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc scale $dc --replicas=0
    oc patch $dc \
      -p '{"spec":{"template":{"spec":{"containers":[{"name":"elasticsearch","securityContext":{"privileged": true}}]}}}}'
  done
----

. The Elasticsearch replicas must be located on the correct nodes to use the local
storage, and should not move around even if those nodes are taken down for a
period of time. This requires giving each Elasticsearch node a node selector
that is unique to a node where an administrator has allocated storage for it. To
configure a node selector, edit each Elasticsearch deployment configuration and
add or edit the *nodeSelector* section to specify a unique label that you have
applied for each desired node:
+
----
apiVersion: v1
kind: DeploymentConfig
spec:
  template:
    spec:
      nodeSelector:
        logging-es-node: "1" <1>
----
<1> This label should uniquely identify a replica with a single node that bears that
label, in this case `logging-es-node=1`. Use the `oc label` command to apply
labels to nodes as needed.
+
To automate applying the node selector you can instead use the `oc patch` command:
+
----
$ oc patch dc/logging-es-<suffix> \
    -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-es-node":"1"}}}}}'
----

. Apply a local host mount to each replica. The following example assumes storage is mounted at the same path on each node:
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc set volume $dc \
      --add --overwrite --name=elasticsearch-storage \
      --type=hostPath --path=/usr/local/es-storage
    oc rollout latest $dc
    oc scale $dc --replicas=1
  done
----
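
Not part of the commit: the node selector step in the module above refers to the `oc label` command without showing it; a minimal sketch, assuming a hypothetical node named `node1.example.com`:

----
$ oc label node node1.example.com logging-es-node=1 --overwrite
----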

modules/efk-logging-elasticsearch-persistent-storage-persistent.adoc (new file)

Lines changed: 78 additions & 0 deletions

@@ -0,0 +1,78 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-persistent_{context}']
= Using NFS as a persistent volume for Elasticsearch

You can deploy NFS as an automatically provisioned persistent volume or using a predefined NFS volume.

For more information, see _Sharing an NFS mount across two persistent volume claims_ to leverage shared storage for use by two separate containers.


*Using automatically provisioned NFS*

You can use NFS as a persistent volume where NFS is automatically provisioned.

.Procedure

. Add the following lines to the Ansible inventory file to create an NFS auto-provisioned storage class and dynamically provision the backing storage:
+
----
openshift_logging_es_pvc_storage_class_name=$nfsclass
openshift_logging_es_pvc_dynamic=true
----

. Use the following command to deploy the NFS volume using the logging playbook:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
----

. Use the following steps to create a PVC:

.. Edit the Ansible inventory file to set the PVC size:
+
----
openshift_logging_es_pvc_size=50Gi
----
+
[NOTE]
====
The logging playbook selects a volume based on size and might use an unexpected volume if any other persistent volume has the same size.
====

.. Use the following command to rerun the Ansible *_deploy_cluster.yml_* playbook:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
----
+
The installer playbook creates the NFS volume based on the `openshift_logging_storage` variables.

*Using a predefined NFS volume*

You can deploy logging alongside the {product-title} cluster using an existing NFS volume.

.Procedure

. Edit the Ansible inventory file to configure the NFS volume and set the PVC size:
+
----
openshift_logging_storage_kind=nfs
openshift_enable_unsupported_configurations=true
openshift_logging_storage_access_modes=["ReadWriteOnce"]
openshift_logging_storage_nfs_directory=/srv/nfs
openshift_logging_storage_nfs_options=*(rw,root_squash)
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=100Gi
openshift_logging_storage_labels={:storage=>"logging"}
openshift_logging_install_logging=true
----

. Use the following command to redeploy the EFK stack:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
----
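
Not part of the commit: after either playbook run above, a quick sanity check with standard `oc` commands (the `openshift-logging` project name is the default mentioned elsewhere in this commit; adjust it if logging was deployed into a different project):

----
$ oc get pvc -n openshift-logging
$ oc get pv | grep logging
----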
