modules/efk-logging-deploy-storage-considerations.adoc (4 additions & 5 deletions)
@@ -7,13 +7,13 @@
 
 ////
 An Elasticsearch index is a collection of primary shards and its corresponding replica
-shards. This is how ES implements high availability internally, therefore there
+shards. This is how Elasticsearch implements high availability internally, therefore there
 is little need to use hardware based mirroring RAID variants. RAID 0 can still
 be used to increase overall disk performance.
 
 //Following paragraph also in nodes/efk-logging-elasticsearch
 
-Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and CPU limits.
+Elasticsearch is a memory-intensive application. The default cluster logging installation deploys 16G of memory for both memory requests and limits.
 The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the
 {product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower
 memory setting though this is not recommended for production deployments.
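
For reference, these memory defaults surface as the `resources` stanza on the Elasticsearch node spec in the Cluster Logging Custom Resource. A minimal sketch, assuming the `logging.openshift.io/v1` API group and the `openshift-logging` namespace used by the operator; the CPU request value is illustrative, not taken from this module:

----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3          # assumption: a typical data-node count
      resources:
        limits:
          memory: 16Gi      # the 16G default described above
        requests:
          cpu: 500m         # illustrative value, not specified in this module
          memory: 16Gi      # requests and limits are set equal by default
----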
@@ -89,9 +89,8 @@ absolute storage consumption around 50% and below 70% at all times]. This
 helps to avoid Elasticsearch becoming unresponsive during large merge
 operations.
 
-By default, at 85% ES stops allocating new data to the node, at 90% ES starts de-allocating
-existing shards from that node to other nodes if possible. But if no nodes have
-free capacity below 85% then ES will effectively reject creating new indices
+By default, at 85% Elasticsearch stops allocating new data to the node, at 90% Elasticsearch attempts to relocate
+existing shards from that node to other nodes if possible. But if no nodes have free capacity below 85%, Elasticsearch effectively rejects creating new indices
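
These thresholds are Elasticsearch's standard disk-based shard allocation watermarks. A sketch of the equivalent `elasticsearch.yml` settings, shown with their stock upstream default values (the key names come from upstream Elasticsearch, not from this module):

----
# Disk-based shard allocation watermarks (upstream Elasticsearch defaults)
cluster.routing.allocation.disk.watermark.low: "85%"    # stop allocating new shards to a node above this usage
cluster.routing.allocation.disk.watermark.high: "90%"   # start relocating shards off a node above this usage
----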
modules/efk-logging-deploying-about.adoc (10 additions & 7 deletions)
@@ -5,14 +5,14 @@
 [id="efk-logging-deploying-about-{context}"]
 = About deploying and configuring cluster logging
 
-{product-title} cluster logging is designed to be used with the default configuration that should support most {product-title} environments.
+{product-title} cluster logging is designed to be used with the default configuration, which is tuned for small to medium sized {product-title} clusters.
 
-The installation instructions that follow include a template Cluster Logging Custom Resource, which you can use to configure your cluster logging
-deployment.
+The installation instructions that follow include a sample Cluster Logging Custom Resource (CR), which you can use to create a cluster logging instance
+and configure your cluster logging deployment.
 
-If you want to use the default cluster logging install, you can use the template directly.
+If you want to use the default cluster logging install, you can use the sample CR directly.
 
-If you want to customize your deployment, make changes to that template as needed. The following describes the configurations you can make when installing your cluster logging instance or modify after installtion. See the Configuring sections for more information on working with each component, including modifications you can make outside of the Cluster Logging Custom Resource.
+If you want to customize your deployment, make changes to the sample CR as needed. The following describes the configurations you can make when installing your cluster logging instance or modify after installation. See the Configuring sections for more information on working with each component, including modifications you can make outside of the Cluster Logging Custom Resource.
 
 [IMPORTANT]
 ====
@@ -105,7 +105,7 @@ You can configure a persistent storage class and size for the Elasticsearch clus
 ----
 
 This example specifies each data node in the cluster will be bound to a `PersistentVolumeClaim` that
-requests "200G" of "gp2" storage. Additionally, each primary shard will be backed by a single replica.
+requests "200G" of "gp2" storage. Each primary shard will be backed by a single replica.
 
 [NOTE]
 ====
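
In the sample CR, these values live under `spec.logStore.elasticsearch.storage`. A sketch of just that fragment, using the "200G" and "gp2" values from this example (the `nodeCount` value is an assumption):

----
logStore:
  type: "elasticsearch"
  elasticsearch:
    nodeCount: 3                # assumption: a typical data-node count
    storage:
      storageClassName: "gp2"   # storage class from the example
      size: "200G"              # PVC size requested per data node
----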
@@ -129,6 +129,7 @@ You can set the policy that defines how Elasticsearch shards are replicated acro
 * `SingleRedundancy`. A single copy of each shard. Logs are always available and recoverable as long as at least two data nodes exist.
 * `ZeroRedundancy`. No copies of any shards. Logs may be unavailable (or lost) in the event a node is down or fails.
 
+////
 Log collectors::
 You can select which log collector is deployed as a Daemonset to each node in the {product-title} cluster, either:
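
For illustration, the redundancy policy is a single field on the same Elasticsearch spec; a sketch choosing `SingleRedundancy` from the options above:

----
logStore:
  type: "elasticsearch"
  elasticsearch:
    redundancyPolicy: "SingleRedundancy"   # or "ZeroRedundancy", per the list above
----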
@@ -149,6 +150,7 @@ You can select which log collector is deployed as a Daemonset to each node in th
 memory:
 type: "fluentd"
 ----
+////
 
 Curator schedule::
 You specify the schedule for Curator in the [cron format](https://en.wikipedia.org/wiki/Cron).
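
In the sample CR this schedule sits under the `curation` stanza; a sketch, assuming the `curator` type used by this release (the time value is illustrative):

----
curation:
  type: "curator"
  curator:
    schedule: "30 3 * * *"   # cron format: every day at 03:30
----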
@@ -172,7 +174,8 @@ The following is an example of a Cluster Logging Custom Resource modified using