Commit 57d1f0e

improvement in logging-elasticsearch-exposing
1 parent 26b8bc2 commit 57d1f0e

File tree

3 files changed: +105 -39 lines changed


logging/efk-logging-fluentd.adoc

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ include::modules/efk-logging-fluentd-limits.adoc[leveloffset=+1]
 
 ////
 4.1
-include::modules/efk-logging-fluentd-collector.adoc[leveloffset=+1]
+::modules/efk-logging-fluentd-collector.adoc[leveloffset=+1]
 ////
 
 include::modules/efk-logging-fluentd-log-rotation.adoc[leveloffset=+1]

modules/efk-logging-elasticsearch-exposing.adoc

Lines changed: 85 additions & 18 deletions
@@ -9,24 +9,36 @@ By default, Elasticsearch deployed with cluster logging is not
 accessible from outside the logging cluster. You can enable a route with re-encryption termination
 for external access to Elasticsearch for those tools that want to access its data.
 
-Internally, you can access Elasticsearch using your {product-title} token, and
-you can provide the external Elasticsearch and Elasticsearch Ops
-hostnames using the server certificate (similar to Kibana).
-
-* The request must contain three HTTP headers:
+Externally, you can access Elasticsearch by creating a reencrypt route and using your {product-title} token and the installed
+Elasticsearch CA certificate. The request must contain three HTTP headers:
 +
 ----
 Authorization: Bearer $token
 X-Proxy-Remote-User: $username
 X-Forwarded-For: $ip_address
 ----
 
+Internally, you can access Elasticsearch using the Elasticsearch cluster IP:
+
+----
+$ oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging
+172.30.183.229
+
+$ oc get service elasticsearch
+NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
+elasticsearch   ClusterIP   172.30.183.229   <none>        9200/TCP   22h
+
+$ oc exec elasticsearch-clientdatamaster-0-1-858c8f-hhnkn -- curl --tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://172.30.183.229:9200/_cat/health"
+
+  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
+                                 Dload  Upload   Total   Spent    Left  Speed
+100    29  100    29    0     0    108      0 --:--:-- --:--:-- --:--:--   108
+----
+
 .Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* Set cluster logging to the unmanaged state.
-
 * You must have access to the project in order to be able to access to the logs. For example:
 +
 ----
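The header requirements above can be sketched in a short script. This is a minimal illustration, not part of the commit: the token, username, and client IP are placeholder values, and in a real cluster the token would come from `oc whoami -t`.

```python
# Minimal sketch: composing the three HTTP headers that a request to the
# logging Elasticsearch must carry. All argument values below are placeholders.

def es_auth_headers(token, username, ip_address):
    """Return the three headers required for an Elasticsearch request."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Proxy-Remote-User": username,
        "X-Forwarded-For": ip_address,
    }

headers = es_auth_headers("sha256~example-token", "developer", "10.0.0.1")
for name, value in headers.items():
    print(f"{name}: {value}")
```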
@@ -37,10 +49,12 @@ $ oc new-app <httpd-example>
 
 .Procedure
 
-. Use the following command to set name of the Elasticsearch pod in a variable for use in a cURL command:
+To expose Elasticsearch externally:
+
+. Change to the `openshift-logging` project:
 +
 ----
-ESPOD=$( oc get pods -l component=elasticsearch -o name | sed -e "s/pod\///" )
+$ oc project openshift-logging
 ----
 
 . Use the following command to extract the CA certificate from Elasticsearch and write to the *_admin-ca_* file:
@@ -68,9 +82,9 @@ spec:
     name: elasticsearch
   tls:
     termination: reencrypt
-    destinationCACertificate: <1>
+    destinationCACertificate: | <1>
 ----
-<1> Add the Elasticsearch CA ceritifcate or use the command in the next step. You do not need to set the `spec.tls.key`, `spec.tls.certificate` and `spec.tls.caCertificate` parameters
+<1> Add the Elasticsearch CA certificate or use the command in the next step. You do not need to set the `spec.tls.key`, `spec.tls.certificate`, and `spec.tls.caCertificate` parameters
 required by some reencrypt routes.
 
 .. Run the following command to add the Elasticsearch CA certificate to the route YAML you created:
@@ -99,20 +113,73 @@ route.route.openshift.io/elasticsearch created
 $ token=$(oc whoami -t)
 ----
 
-.. Run a command similar to the following, using your cluster address to access Elasticsearch through the exposed route:
+.. Set the *elasticsearch* route you created as an environment variable:
 +
 ----
-curl -tlsv1.2 -v --insecure -H "Authorization: Bearer me3IL_WmD_I_McBUm2uhxIayUKped-H3L1njlRRxPlE" "https://elasticsearch-openshift-logging.apps.<cluster-address>.openshift.com/.operations.*/_search?size=1" | jq
+$ routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`
+----
 
+.. To verify that the route was successfully created, run the following command, which accesses Elasticsearch through the exposed route:
++
+----
+curl --tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/.operations.*/_search?size=1" | jq
+----
++
+The response appears similar to the following:
++
+----
+  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
+                                 Dload  Upload   Total   Spent    Left  Speed
+100   944  100   944    0     0     62      0  0:00:15  0:00:15 --:--:--   204
 {
-  "took": 49,
+  "took": 441,
   "timed_out": false,
   "_shards": {
-    "total": 1,
-    "successful": 1,
+    "total": 3,
+    "successful": 3,
     "skipped": 0,
     "failed": 0
   },
-....
+  "hits": {
+    "total": 89157,
+    "max_score": 1,
+    "hits": [
+      {
+        "_index": ".operations.2019.03.15",
+        "_type": "com.example.viaq.common",
+        "_id": "ODdiNWIyYzAtMjg5Ni0TAtNWE3MDY1MjMzNTc3",
+        "_score": 1,
+        "_source": {
+          "_SOURCE_MONOTONIC_TIMESTAMP": "673396",
+          "systemd": {
+            "t": {
+              "BOOT_ID": "246c34ee9cdeecb41a608e94",
+              "MACHINE_ID": "e904a0bb5efd3e36badee0c",
+              "TRANSPORT": "kernel"
+            },
+            "u": {
+              "SYSLOG_FACILITY": "0",
+              "SYSLOG_IDENTIFIER": "kernel"
+            }
+          },
+          "level": "info",
+          "message": "acpiphp: Slot [30] registered",
+          "hostname": "localhost.localdomain",
+          "pipeline_metadata": {
+            "collector": {
+              "ipaddr4": "10.128.2.12",
+              "ipaddr6": "fe80::xx:xxxx:fe4c:5b09",
+              "inputname": "fluent-plugin-systemd",
+              "name": "fluentd",
+              "received_at": "2019-03-15T20:25:06.273017+00:00",
+              "version": "1.3.2 1.6.0"
+            }
+          },
+          "@timestamp": "2019-03-15T20:00:13.808226+00:00",
+          "viaq_msg_id": "ODdiNWIyYzAtMYTAtNWE3MDY1MjMzNTc3"
+        }
+      }
+    ]
+  }
+}
 ----
-
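The external verification step can also be sketched programmatically. This is a minimal illustration, not part of the commit: `route_host` stands in for the value of `oc get route elasticsearch -o jsonpath={.spec.host}` and `token` for `oc whoami -t`; both values below are placeholders.

```python
# Minimal sketch: building the GET request sent through the exposed route,
# using only the Python standard library.
import urllib.request

def build_search_request(route_host, token, size=1):
    """Build a bearer-authenticated search request against .operations.* indices."""
    url = f"https://{route_host}/.operations.*/_search?size={size}"
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {token}")
    return req

req = build_search_request("elasticsearch-openshift-logging.example.com", "sha256~token")
print(req.get_full_url())
# → https://elasticsearch-openshift-logging.example.com/.operations.*/_search?size=1
```

Actually sending the request with `urllib.request.urlopen(req)` would need an SSL context that trusts the route's certificate; the curl command in the procedure sidesteps this with `--insecure`.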
modules/nodes-pods-plugins-install.adoc

Lines changed: 19 additions & 20 deletions
@@ -14,33 +14,25 @@ with the help of plug-ins known as device plug-ins.
 . Obtain the label associated with the static Machine Config Pool CRD for the type of node you want to configure.
 Perform one of the following steps:
 
-.. View the Machine Config Pool:
+.. View the Machine Config:
 +
 ----
-$ oc describe machineconfigpool <name>
+# oc describe machineconfig <name>
 ----
 +
 For example:
 +
 [source,yaml]
 ----
-$ oc describe machineconfigpool worker
+# oc describe machineconfig 00-worker
 
-apiVersion: machineconfiguration.openshift.io/v1
-kind: MachineConfigPool
-metadata:
-  creationTimestamp: 2019-02-08T14:52:39Z
-  generation: 1
-  labels:
-    custom-kubelet: small-pods <1>
+oc describe machineconfig 00-worker
+Name:         00-worker
+Namespace:
+Labels:       machineconfiguration.openshift.io/role=worker <1>
 ----
-<1> If a label has been added it appears under `labels`.
+<1> Label required for the device manager.
 
-.. If the label is not present, add a key/value pair:
-+
-----
-$ oc label machineconfigpool worker custom-kubelet=small-pods
-----
 
 .Procedure
 
@@ -52,17 +44,24 @@ $ oc label machineconfigpool worker custom-kubelet=small-pods
 apiVersion: machineconfiguration.openshift.io/v1
 kind: KubeletConfig
 metadata:
-  name: deploy-device-manager <1>
+  name: devicemgr <1>
 spec:
   machineConfigPoolSelector:
     matchLabels:
-      custom-kubelet: small-pods <2>
+      machine.openshift.io/cluster-api-machine-type: devicemgr <2>
   kubeletConfig:
     feature-gates:
-    - DevicePlugins=true <2>
+    - DevicePlugins=true <3>
 ----
 <1> Assign a name to the CR.
-<2> Set `DevicePlugins` to 'true`.
+<2> Enter the label from the Machine Config Pool.
+<3> Set `DevicePlugins` to `true`.
+
+. Create the device manager:
++
+----
+$ oc create -f devicemgr.yaml
+----
 
 . Ensure that Device Manager was actually enabled by confirming that
 *_/var/lib/kubelet/device-plugins/kubelet.sock_* is created on the node. This is
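The final confirmation step can be sketched as a small check. This is an illustrative helper, not part of the commit; the socket path is the one the doc names, and the `root` parameter exists only so the check can be pointed at a test directory.

```python
# Minimal sketch: Device Manager is considered enabled once the kubelet has
# created its device-plugin registration socket on the node.
import os

SOCKET_PATH = "var/lib/kubelet/device-plugins/kubelet.sock"

def device_manager_enabled(root="/"):
    """Return True if the kubelet device-plugin socket exists under root."""
    return os.path.exists(os.path.join(root, SOCKET_PATH))
```

On a real node you would run this (or a plain `ls`) as root, since `/var/lib/kubelet` is not world-readable.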
