Commit 6e144ab

Merge pull request #13451 from mburke5678/logging-deploy-subscription
Added web console steps to create subscription for logging
2 parents: b693009 + 87dc2d3

5 files changed (+36, -6 lines)

logging/efk-logging-deploy.adoc

Lines changed: 6 additions & 4 deletions
@@ -18,10 +18,12 @@ The process for deploying the EFK into {product-title} involves:
 
 include::modules/efk-logging-deploy-pre.adoc[leveloffset=+1]
 
-include::modules/efk-logging-storage-considerations.adoc[leveloffset=+1]
+include::modules/efk-logging-deploy-subscription.adoc[leveloffset=+1]
 
-include::modules/efk-logging-deploy-memory.adoc[leveloffset=+1]
+include::modules/efk-logging-deploy-storage-considerations.adoc[leveloffset=+1]
 
-include::modules/efk-logging-deploy-certificates.adoc[leveloffset=+1]
+// include::modules/efk-logging-deploy-memory.adoc[leveloffset=+1]
 
-include::modules/efk-logging-deploy-label.adoc[leveloffset=+1]
+// include::modules/efk-logging-deploy-certificates.adoc[leveloffset=+1]
+
+// include::modules/efk-logging-deploy-label.adoc[leveloffset=+1]

modules/efk-logging-deploy-label.adoc

Lines changed: 2 additions & 0 deletions
@@ -13,12 +13,14 @@ example:
 
 Using a simple loop:
 
+[source,bash]
 ----
 $ while read node; do oc label nodes $node logging-infra-fluentd=true; done < 20_fluentd.lst
 ----
 
 The following also works:
 
+[source,bash]
 ----
 $ oc label nodes 10.10.0.{100..119} logging-infra-fluentd=true
 ----
modules/efk-logging-deploy-pre.adoc

Lines changed: 2 additions & 1 deletion
@@ -17,14 +17,15 @@ Before deploying cluster logging into {product-title} perform the following task
 +
 .. Ensure that you have deployed a router for the cluster.
 +
-** Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
+.. Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
 requires its own storage volume.
 
 . Specify a node selector
 +
 In order for the logging pods to spread evenly across your cluster, an empty
 node selector should be used.
 +
+[source,bash]
 ----
 $ oc adm new-project logging --node-selector=""
 ----
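The empty node selector that `oc adm new-project logging --node-selector=""` sets ends up as an empty `openshift.io/node-selector` annotation on the namespace, which is what lets logging pods schedule onto any node. A sketch of the resulting object (field layout assumed, not taken from this commit):

```yaml
# Hypothetical namespace as created by:
#   oc adm new-project logging --node-selector=""
# The empty annotation places no scheduling restriction on logging pods.
apiVersion: v1
kind: Namespace
metadata:
  name: logging
  annotations:
    openshift.io/node-selector: ""
```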

modules/efk-logging-storage-considerations.adoc renamed to modules/efk-logging-deploy-storage-considerations.adoc

Lines changed: 3 additions & 1 deletion
@@ -2,7 +2,7 @@
 //
 // * logging/efk-logging-deploy.adoc
 
-[id='efk-logging-storage-considerations_{context}']
+[id='efk-logging-deploy-storage-considerations_{context}']
 = Storage considerations for cluster logging and {product-title}
 
 An Elasticsearch index is a collection of shards and its corresponding replica
@@ -68,6 +68,8 @@ Calculating total logging throughput and disk space required for your logging
 environment requires knowledge of your application. For example, if one of your
 applications on average logs 10 lines-per-second, each 256 bytes-per-line,
 calculate per-application throughput and disk space as follows:
+
+[source,bash]
 ----
 (bytes-per-line) * (lines-per-second) = 2560 bytes per app per second
 (2560) * (number-of-pods-per-node,100) = 256,000 bytes per second per node
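The throughput figures in this hunk can be reproduced with shell arithmetic; the 256 bytes-per-line, 10 lines-per-second, and 100 pods-per-node values are the example figures the module text assumes:

```shell
# Per-application and per-node logging throughput, using the
# example figures from the storage-considerations module.
bytes_per_line=256
lines_per_second=10
pods_per_node=100

bytes_per_app=$(( bytes_per_line * lines_per_second ))   # 2560 bytes/s per app
bytes_per_node=$(( bytes_per_app * pods_per_node ))      # 256000 bytes/s per node
echo "app=$bytes_per_app node=$bytes_per_node"
```

Multiplying `bytes_per_node` by the retention period (in seconds) gives the raw disk space needed per node, before index replication overhead.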
modules/efk-logging-deploy-subscription.adoc

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
+// Module included in the following assemblies:
+//
+// * logging/efk-logging-deploy.adoc
+
+[id='efk-logging-deploy-subscription_{context}']
+= Installing the Cluster Logging Operator
+
+You can use the {product-title} console to install cluster logging, which creates the Cluster Logging Operator.
+
+.Procedure
+
+To install cluster logging:
+
+. In the {product-title} console, click *Catalog* -> *Operator Hub*.
+
+. Choose *cluster-logging* from the list of available Operators, and click *Install*.
+
+. On the *Create Operator Subscription* page, change the *Target* to the *global-operators* Operator Group. This makes the Operator available to all users and projects that use this {product-title} cluster.
+
+. On the *Catalog* -> *Installed Operators* page, verify that the ClusterLogging (CSV) eventually shows up and that its *Status* ultimately resolves to *InstallSucceeded*.
+
+If it does not, switch to the *Catalog* -> *Operator Management* page and inspect the *Operator Subscriptions* and *Install Plans* tabs for any failures or errors under *Status*. Then check the logs in any Pods in the openshift-operators project (on the *Workloads* -> *Pods* page) that are reporting issues to troubleshoot further.
