Commit d6a981d

Added OSDOCS-139 to 4.0 files
1 parent f4019bf commit d6a981d

5 files changed: +257 -5 lines changed

logging/efk-logging-elasticsearch.adoc

Lines changed: 5 additions & 1 deletion
@@ -14,7 +14,11 @@ toc::[]
 include::modules/efk-logging-elasticsearch-ha.adoc[leveloffset=+1]

-include::modules/efk-logging-elasticsearch-persistent-storage.adoc[leveloffset=+1]
+include::modules/efk-logging-elasticsearch-persistent-storage-about.adoc[leveloffset=+1]
+
+include::modules/efk-logging-elasticsearch-persistent-storage-persistent.adoc[leveloffset=+2]
+
+include::modules/efk-logging-elasticsearch-persistent-storage-local.adoc[leveloffset=+2]

 include::modules/efk-logging-elasticsearch-scaling.adoc[leveloffset=+1]

modules/efk-logging-elasticsearch-persistent-storage-about.adoc

Lines changed: 80 additions & 0 deletions
@@ -0,0 +1,80 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-about_{context}']
= Configuring persistent storage for Elasticsearch

By default, the `openshift_logging` Ansible role creates an ephemeral
deployment in which all of a pod's data is lost upon restart.

For production environments, each Elasticsearch deployment configuration requires a persistent
storage volume. You can specify an existing persistent volume claim or allow {product-title} to create one.

* *Use existing PVCs.* If you create your own PVCs for the deployment, {product-title} uses those PVCs (see the example claim after this list).
+
Name the PVCs to match the `openshift_logging_es_pvc_prefix` setting, which defaults to
`logging-es`. Assign each PVC a name with a sequence number added to it: `logging-es-0`,
`logging-es-1`, `logging-es-2`, and so on.

* *Allow {product-title} to create a PVC.* If a PVC for Elasticsearch does not exist, {product-title} creates the PVC based on parameters
in the Ansible inventory file, by default *_/etc/ansible/hosts_*.
+
[cols="3,7",options="header"]
|===
|Parameter
|Description

|`openshift_logging_es_pvc_size`
|Specify the size of the PVC request.

|`openshift_logging_elasticsearch_storage_type`
a|Specify the storage type as `pvc`.
[NOTE]
====
This is an optional parameter. Setting the `openshift_logging_es_pvc_size` parameter to a value greater than 0 automatically sets this parameter to `pvc` by default.
====

|`openshift_logging_es_pvc_prefix`
|Optionally, specify a custom prefix for the PVC.
|===
+
For example:
+
[source,bash]
----
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_pvc_size=104802308Ki
openshift_logging_es_pvc_prefix=es-logging
----
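
If you create the PVCs yourself, a claim for the first Elasticsearch node might look like the following minimal sketch. The namespace and size shown are assumptions; only the name must follow the `openshift_logging_es_pvc_prefix` naming convention:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-es-0 <1>
  namespace: logging <2>
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi <3>
----
<1> The default `openshift_logging_es_pvc_prefix` value `logging-es`, plus the sequence number `0`.
<2> Assumed name of the logging project.
<3> Illustrative size; match it to your expected log volume.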

If using dynamically provisioned PVs, the {product-title} logging installer creates PVCs
that use the default storage class or the storage class specified with the `openshift_logging_elasticsearch_pvc_storage_class_name` parameter.
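
For example, the following inventory settings have the installer create PVCs against a named storage class. The class name `gp2` is an assumption; substitute a storage class that exists in your cluster:

[source,bash]
----
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=100Gi
openshift_logging_elasticsearch_pvc_storage_class_name=gp2
----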

If using NFS storage, the {product-title} installer creates the persistent volumes, based on the `openshift_logging_storage_*` parameters,
and the {product-title} logging installer creates PVCs, using the `openshift_logging_es_pvc_*` parameters.
Make sure you specify the correct parameters in order to use persistent volumes with EFK.
Also set the `openshift_enable_unsupported_configurations=true` parameter in the Ansible inventory file,
as the logging installer blocks the installation of NFS with core infrastructure by default.
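
A minimal sketch combining these parameters, assuming an NFS export at *_/srv/nfs_*, might look like this:

[source,bash]
----
openshift_enable_unsupported_configurations=true
openshift_logging_storage_kind=nfs
openshift_logging_storage_nfs_directory=/srv/nfs
openshift_logging_storage_volume_size=100Gi
openshift_logging_es_pvc_size=50Gi
----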

[WARNING]
====
Using NFS storage as a volume or a persistent volume (or via NAS such as
Gluster) is not supported for Elasticsearch storage, as Lucene relies on file
system behavior that NFS does not supply. Data corruption and other problems can
occur. If NFS storage is required, you can allocate a large file on a
volume to serve as a storage device and mount it locally on one host.
For example, if your NFS storage volume is mounted at *_/nfs/storage_*:

----
$ truncate -s 1T /nfs/storage/elasticsearch-1
$ mkfs.xfs /nfs/storage/elasticsearch-1
$ mount -o loop /nfs/storage/elasticsearch-1 /usr/local/es-storage
$ chown 1000:1000 /usr/local/es-storage
----

Then, use *_/usr/local/es-storage_* as a host-mount as described below.
Use a different backing file as storage for each Elasticsearch node.

This loopback must be maintained manually outside of {product-title}, on the
node. You must not maintain it from inside a container.
====

modules/efk-logging-elasticsearch-persistent-storage-local.adoc

Lines changed: 92 additions & 0 deletions
@@ -0,0 +1,92 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-local_{context}']
= Configuring NFS as local storage for Elasticsearch

You can allocate a large file on an NFS server and mount the file to the nodes. You can then use the file as a host path device.

.Prerequisites

Allocate a large file on an NFS server and mount the file to the nodes:

----
$ mount -t nfs nfserver:/nfs/storage/elasticsearch-1 /usr/local/es-storage
$ chown 1000:1000 /usr/local/es-storage
----

Then, use *_/usr/local/es-storage_* as a host-mount as described below.
Use a different backing file as storage for each Elasticsearch replica.

This mount must be maintained manually outside of {product-title}, on the
node. You must not maintain it from inside a container.

.Procedure

To use a local disk volume (if available) on each
node host as storage for an Elasticsearch replica:

. The relevant service account must be given the privilege to mount and edit a
local volume:
+
----
$ oc adm policy add-scc-to-user privileged \
      system:serviceaccount:logging:aggregated-logging-elasticsearch <1>
----
<1> Use the project you created earlier, for example, *logging*, when running the
logging playbook.

. Each Elasticsearch node definition must be patched to claim that privilege,
for example:
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc scale $dc --replicas=0
    oc patch $dc \
       -p '{"spec":{"template":{"spec":{"containers":[{"name":"elasticsearch","securityContext":{"privileged": true}}]}}}}'
  done
----

. The Elasticsearch replicas must be located on the correct nodes to use the local
storage, and should not move around even if those nodes are taken down for a
period of time. This requires giving each Elasticsearch node a node selector
that is unique to a node where an administrator has allocated storage for it. To
configure a node selector, edit each Elasticsearch deployment configuration and
add or edit the *nodeSelector* section to specify a unique label that you have
applied for each desired node:
+
----
apiVersion: v1
kind: DeploymentConfig
spec:
  template:
    spec:
      nodeSelector:
        logging-es-node: "1" <1>
----
<1> This label should uniquely identify a replica with a single node that bears that
label, in this case `logging-es-node=1`. Use the `oc label` command to apply
labels to nodes as needed (see the example after this step).
+
To automate applying the node selector you can instead use the `oc patch` command:
+
----
$ oc patch dc/logging-es-<suffix> \
   -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-es-node":"1"}}}}}'
----
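+
For example, to apply such a label to a node (the node name here is a placeholder):
+
----
$ oc label node <node-name> logging-es-node=1
----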

. Once these steps are taken, a local host mount can be applied to each replica
as in this example, assuming storage is mounted at the same path on each node:
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc set volume $dc \
          --add --overwrite --name=elasticsearch-storage \
          --type=hostPath --path=/usr/local/es-storage
    oc rollout latest $dc
    oc scale $dc --replicas=1
  done
----
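+
To verify that each replica landed on its labeled node, you can check the pod placement (a sketch; the `-o wide` output includes the node name):
+
----
$ oc get pods --selector logging-infra=elasticsearch -o wide
----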

modules/efk-logging-elasticsearch-persistent-storage-persistent.adoc

Lines changed: 78 additions & 0 deletions
@@ -0,0 +1,78 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-persistent_{context}']
= Using NFS as a persistent volume for Elasticsearch

You can deploy NFS as an automatically provisioned persistent volume or using a predefined NFS volume.

For more information, see _Sharing an NFS mount across two persistent volume claims_ to leverage shared storage for use by two separate containers.

*Using automatically provisioned NFS*

You can use NFS as a persistent volume where NFS is automatically provisioned.

.Procedure

. Add the following lines to the Ansible inventory file to create an NFS auto-provisioned storage class and dynamically provision the backing storage:
+
----
openshift_logging_es_pvc_storage_class_name=$nfsclass
openshift_logging_es_pvc_dynamic=true
----

. Use the following command to deploy the NFS volume using the logging playbook:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
----

. Use the following steps to create a PVC:

.. Edit the Ansible inventory file to set the PVC size:
+
----
openshift_logging_es_pvc_size=50Gi
----
+
[NOTE]
====
The logging playbook selects a volume based on size and might use an unexpected volume if any other persistent volume has the same size.
====

.. Use the following command to rerun the Ansible *_deploy_cluster.yml_* playbook:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
----
+
The installer playbook creates the NFS volume based on the `openshift_logging_storage_*` variables.
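+
To confirm that the claim was created and bound, list the PVCs in the logging project (a sketch; the project name can differ in your installation):
+
----
$ oc get pvc -n logging
----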

*Using a predefined NFS volume*

You can deploy logging alongside the {product-title} cluster using an existing NFS volume.

.Procedure

. Edit the Ansible inventory file to configure the NFS volume and set the PVC size:
+
----
openshift_logging_storage_kind=nfs
openshift_enable_unsupported_configurations=true
openshift_logging_storage_access_modes=["ReadWriteOnce"]
openshift_logging_storage_nfs_directory=/srv/nfs
openshift_logging_storage_nfs_options=*(rw,root_squash)
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=100Gi
openshift_logging_storage_labels={:storage=>"logging"}
openshift_logging_install_logging=true
----

. Use the following command to redeploy the EFK stack:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
----

modules/efk-logging-elasticsearch-persistent-storage.adoc

Lines changed: 2 additions & 4 deletions
@@ -52,7 +52,7 @@ If using dynamically provisioned PVs, the {product-title} logging installer creates PVCs
 that use the default storage class or the PVC specified with the `openshift_logging_elasticsearch_pvc_storage_class_name` parameter.

 If using NFS storage, the {product-title} installer creates the persistent volumes, based on the `openshift_logging_storage_*` parameters
-and the {product-title} logging installer creates PVCs, using the `openshift_logging_es_pvc_` paramters.
+and the {product-title} logging installer creates PVCs, using the `openshift_logging_es_pvc_*` parameters.
 Make sure you specify the correct parameters in order to use persistent volumes with EFK.
 Also set the `openshift_enable_unsupported_configurations=true` parameter in the Ansible inventory file,
 as the logging installer blocks the installation of NFS with core infrastructure by default.

@@ -62,9 +62,7 @@ as the logging installer blocks the installation of NFS with core infrastructure by default.
 Using NFS storage as a volume or a persistent volume (or via NAS such as
 Gluster) is not supported for Elasticsearch storage, as Lucene relies on file
 system behavior that NFS does not supply. Data corruption and other problems can
-occur. If NFS storage is required, you can allocate a large file on a
-volume to serve as a storage device and mount it locally on one host.
-For example, if your NFS storage volume is mounted at *_/nfs/storage_*:
+occur.

 ----
 $ truncate -s 1T /nfs/storage/elasticsearch-1
