
Commit d45d6c7

edits per jcantrill

1 parent 2bc120a commit d45d6c7

File tree

4 files changed (+20, -14 lines)

logging/config/efk-logging-fluentd.adoc

Lines changed: 3 additions & 1 deletion

@@ -30,9 +30,11 @@ include::modules/efk-logging-fluentd-limits.adoc[leveloffset=+1]
 ////
 4.1
 modules/efk-logging-fluentd-log-rotation.adoc[leveloffset=+1]
+
+4.2
+modules/efk-logging-fluentd-collector.adoc[leveloffset=+1]
 ////
 
-include::modules/efk-logging-fluentd-collector.adoc[leveloffset=+1]
 
 include::modules/efk-logging-fluentd-log-location.adoc[leveloffset=+1]
 

modules/efk-logging-deploy-storage-considerations.adoc

Lines changed: 4 additions & 5 deletions

@@ -7,13 +7,13 @@
 
 ////
 An Elasticsearch index is a collection of primary shards and its corresponding replica
-shards. This is how ES implements high availability internally, therefore there
+shards. This is how Elasticsearch implements high availability internally, therefore there
 is little need to use hardware based mirroring RAID variants. RAID 0 can still
 be used to increase overall disk performance.
 
 //Following paragraph also in nodes/efk-logging-elasticsearch
 
-Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and CPU limits.
+Elasticsearch is a memory-intensive application. The default cluster logging installation deploys 16G of memory for both memory requests and CPU limits.
 The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the
 {product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower
 memory setting though this is not recommended for production deployments.
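For readers mapping the 16G figure onto the Cluster Logging Custom Resource, a minimal sketch follows; the placement of the `resources` block is an assumption based on the sample CR shown later in this commit, not part of this diff:

----
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      resources:            # assumed field layout; verify against the CR schema in use
        limits:
          memory: 16Gi      # per-node memory limit
        requests:
          memory: 16Gi      # matching per-node memory request
----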
@@ -89,9 +89,8 @@ absolute storage consumption around 50% and below 70% at all times]. This
 helps to avoid Elasticsearch becoming unresponsive during large merge
 operations.
 
-By default, at 85% ES stops allocating new data to the node, at 90% ES starts de-allocating
-existing shards from that node to other nodes if possible. But if no nodes have
-free capacity below 85% then ES will effectively reject creating new indices
+By default, at 85% Elasticsearch stops allocating new data to the node, at 90% Elasticsearch attempts to relocate
+existing shards from that node to other nodes if possible. But if no nodes have free capacity below 85%, Elasticsearch effectively rejects creating new indices
 and becomes RED.
 
 [NOTE]
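The 85% and 90% figures in the rewritten paragraph correspond to Elasticsearch's stock disk allocation watermarks; as a sketch of the upstream defaults being paraphrased (not settings this commit configures):

----
# Elasticsearch disk watermark defaults (upstream), shown for reference only
cluster.routing.allocation.disk.watermark.low: "85%"   # stop allocating new shards to the node
cluster.routing.allocation.disk.watermark.high: "90%"  # try to relocate existing shards off the node
----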

modules/efk-logging-deploy-subscription.adoc

Lines changed: 3 additions & 1 deletion

@@ -125,7 +125,9 @@ spec:
     type: "elasticsearch" <3>
     elasticsearch:
       nodeCount: 3
-      storage: {}
+      storage:
+        storageClassName: gp2
+        size: 200G
       redundancyPolicy: "SingleRedundancy"
   visualization:
     type: "kibana" <4>

modules/efk-logging-deploying-about.adoc

Lines changed: 10 additions & 7 deletions

@@ -5,14 +5,14 @@
 [id="efk-logging-deploying-about-{context}"]
 = About deploying and configuring cluster logging
 
-{product-title} cluster logging is designed to be used with the default configuration that should support most {product-title} environments.
+{product-title} cluster logging is designed to be used with the default configuration, which is tuned for small to medium sized {product-title} clusters.
 
-The installation instructions that follow include a template Cluster Logging Custom Resource, which you can use to configure your cluster logging
-deployment.
+The installation instructions that follow include a sample Cluster Logging Custom Resource (CR), which you can use to create a cluster logging instance
+and configure your cluster logging deployment.
 
-If you want to use the default cluster logging install, you can use the template directly.
+If you want to use the default cluster logging install, you can use the sample CR directly.
 
-If you want to customize your deployment, make changes to that template as needed. The following describes the configurations you can make when installing your cluster logging instance or modify after installtion. See the Configuring sections for more information on working with each component, including modifications you can make outside of the Cluster Logging Custom Resource.
+If you want to customize your deployment, make changes to the sample CR as needed. The following describes the configurations you can make when installing your cluster logging instance or modify after installtion. See the Configuring sections for more information on working with each component, including modifications you can make outside of the Cluster Logging Custom Resource.
 
 [IMPORTANT]
 ====

@@ -105,7 +105,7 @@ You can configure a persistent storage class and size for the Elasticsearch clus
 ----
 
 This example specifies each data node in the cluster will be bound to a `PersistentVolumeClaim` that
-requests "200G" of "gp2" storage. Additionally, each primary shard will be backed by a single replica.
+requests "200G" of "gp2" storage. Each primary shard will be backed by a single replica.
 
 [NOTE]
 ====
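As an illustration of what being "bound to a `PersistentVolumeClaim`" means in practice, a hypothetical claim of the shape each data node ends up bound to might look like the following (the name is invented for illustration; the operator generates the real one):

----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data-example   # hypothetical name, for illustration only
  namespace: openshift-logging
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 200G
----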
@@ -129,6 +129,7 @@ You can set the policy that defines how Elasticsearch shards are replicated acro
 * `SingleRedundancy`. A single copy of each shard. Logs are always available and recoverable as long as at least two data nodes exist.
 * `ZeroRedundancy`. No copies of any shards. Logs may be unavailable (or lost) in the event a node is down or fails.
 
+////
 Log collectors::
 You can select which log collector is deployed as a Daemonset to each node in the {product-title} cluster, either:
 

@@ -149,6 +150,7 @@ You can select which log collector is deployed as a Daemonset to each node in th
 memory:
 type: "fluentd"
 ----
+////
 
 Curator schedule::
 You specify the schedule for Curator in the [cron format](https://en.wikipedia.org/wiki/Cron).
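To make the cron-format setting concrete, a minimal sketch of the Curator schedule in the CR follows; the `curation`/`curator` field names are assumed from the sample CR these docs describe:

----
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"   # cron format: minute hour day-of-month month day-of-week
----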
@@ -172,7 +174,8 @@ The following is an example of a Cluster Logging Custom Resource modified using
 apiVersion: "logging.openshift.io/v1alpha1"
 kind: "ClusterLogging"
 metadata:
-  name: "customresourcefluentd"
+  name: "instance"
+  namespace: "openshift-logging"
 spec:
   managementState: "Managed"
   logStore:
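Putting the renamed metadata together with the unchanged context lines, the top of the modified sample CR reads as follows (a reconstruction; indentation and the `logStore` type are taken from the other hunks in this commit):

----
apiVersion: "logging.openshift.io/v1alpha1"
kind: "ClusterLogging"
metadata:
  name: "instance"                 # name the Cluster Logging Operator expects
  namespace: "openshift-logging"   # CR is created in the logging namespace
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
----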
