This project focuses on setting up and using the EFK stack to monitor and manage our applications.
The EFK stack is a popular logging and monitoring solution that efficiently collects, analyzes, and visualizes logs from your applications and infrastructure. Its components serve the following purposes:
- Elasticsearch: store and search your logs efficiently.
- Fluent Bit: collect and forward your logs from various sources.
- Kibana: visualize and explore your logs to gain valuable insights.
The way it works: Fluent Bit reads logs from the application container log files present on the nodes and pushes them to Elasticsearch, which handles storing and searching the logs efficiently. Kibana is then used as a visualization tool (UI) for exploring those logs.
There are numerous log management tools, such as Logstash and Fluentd. I prefer Fluent Bit for this project because of its impressively lightweight performance. The screenshot below shows the major differences.
Requirements for the project:
- A Kubernetes Cluster
- Helm
Provision your Kubernetes cluster using the eksctl command.
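For reference, a minimal eksctl cluster config might look like the sketch below. The cluster name (efk-cluster, reused later in this guide), region, instance type, and node count are assumptions; adjust them to your environment.

```yaml
# cluster.yaml -- a minimal sketch; name, region, and node sizes are assumptions
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: efk-cluster
  region: us-east-1
managedNodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 20
```

Then create the cluster with `eksctl create cluster -f cluster.yaml`.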
To deploy Elasticsearch in an Amazon EKS cluster effectively, certain prerequisites must be in place. Elasticsearch, functioning as a database, is typically deployed as a stateful set, which requires the use of Persistent Volume Claims (PVCs). These PVCs must be backed by storage resources to ensure reliable and persistent data storage.
To provision Elastic Block Store (EBS) volumes for these PVCs within the EKS cluster, the following components are essential:
- StorageClass: A storage class configured with the AWS EBS provisioner is required. This storage class defines the parameters for provisioning EBS volumes, such as volume type, size, and access modes.
- AWS EBS CSI Driver: The EBS Container Storage Interface (CSI) driver must be installed and configured within the EKS cluster. This driver allows Kubernetes to communicate with AWS and dynamically provision EBS volumes as requested by PVCs.
AWS EBS in EKS setup procedure:
- Create the required IAM role for the EBS CSI Driver.
- Install the EBS CSI Driver using EKS Addons.
Create the OIDC Provider for the EKS cluster
eksctl utils associate-iam-oidc-provider \
--region <region> \
--cluster <cluster-name> \
--approve
Confirm the cluster’s OIDC provider:
aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text
Create an IAM role for the Service Account: create an IAM role that can be assumed by Kubernetes service accounts. The authorized principal should be the cluster’s OIDC provider, and to allow only a specific service account, we can use policy conditions to restrict access to selected ones.
Create the IAM role, granting the AssumeRoleWithWebIdentity action. In the aws-ebs-csi-driver-trust-policy.json file, update the account-id, the region, and the OIDC provider ID (the last segment of the OIDC issuer URL, e.g. 96EB298B212A248710459183292D0B25).
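The aws-ebs-csi-driver-trust-policy.json file might look like the sketch below. The placeholders `<account-id>`, `<region>`, and `<oidc-id>` must be filled in with your values; the service account name `ebs-csi-controller-sa` in the kube-system namespace is the default used by the EBS CSI driver.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<oidc-id>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<region>.amazonaws.com/id/<oidc-id>:aud": "sts.amazonaws.com",
          "oidc.eks.<region>.amazonaws.com/id/<oidc-id>:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"
        }
      }
    }
  ]
}
```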
Create the role.
aws iam create-role \
--role-name AmazonEKS_EBS_CSI_DriverRole \
--assume-role-policy-document file://"aws-ebs-csi-driver-trust-policy.json"
Attach the AWS managed policy to the role
aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--role-name AmazonEKS_EBS_CSI_DriverRole
It is recommended to install the Amazon EBS CSI driver through the Amazon EKS add-on to improve security and reduce the amount of work.
aws eks create-addon \
--cluster-name efk-cluster \
--addon-name aws-ebs-csi-driver \
--addon-version v1.37.0-eksbuild.1 \
--service-account-role-arn arn:aws:iam::<account-id>:role/AmazonEKS_EBS_CSI_DriverRole
Create a Storage Class for Elasticsearch:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
kubectl create namespace efk
helm repo add elastic https://helm.elastic.co
helm search repo elastic
Whenever you install both Elasticsearch and Kibana, make sure they are the same version (e.g. 8.5.1 for both).
When installing Elasticsearch, specify the name of the storage class (ebs-gp3) we deployed earlier and the storage size (5Gi) we are interested in.
helm install elasticsearch \
--set replicas=2 \
--set service.type=LoadBalancer \
--set volumeClaimTemplate.storageClassName=ebs-gp3 \
--set volumeClaimTemplate.resources.requests.storage=5Gi \
--set persistence.labels.enabled=true \
--set persistence.labels.customLabel=elasticsearch-pv \
elastic/elasticsearch -n efk
Get the credentials to log in to Elasticsearch:
kubectl get secrets --namespace=efk elasticsearch-master-credentials -ojsonpath='{.data.username}' | base64 -d
kubectl get secrets --namespace=efk elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
helm install kibana --set service.type=LoadBalancer elastic/kibana -n efk
kubectl get secrets --namespace=efk kibana-kibana-es-token -ojsonpath='{.data.token}' | base64 -d
Deploy a log event generator
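As a stand-in log source, a minimal generator could be a BusyBox deployment that emits a JSON line every few seconds. This is a sketch; the name app-event-simulator and the log format are assumptions chosen to match the container log path referenced in the Fluent Bit input section below.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-event-simulator
  namespace: efk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-event-simulator
  template:
    metadata:
      labels:
        app: app-event-simulator
    spec:
      containers:
        - name: app-event-simulator
          image: busybox
          command: ["/bin/sh", "-c"]
          # Emit a simple JSON log line every 5 seconds
          args:
            - while true; do echo "{\"level\":\"info\",\"msg\":\"event $RANDOM\"}"; sleep 5; done
```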
Deploy Fluent Bit using Helm and edit the values.yaml:
helm repo add fluent https://fluent.github.io/helm-charts
helm show values fluent/fluent-bit > fluentbit-values.yaml
Update the input section with the path where the app-event-simulator container logs are located.
Also update the output section as highlighted in the screenshot, which includes the Elasticsearch password, the port, and the logstash prefix, which is how we identify the logs in Kibana.
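The relevant sections of fluentbit-values.yaml might look like the sketch below, following the fluent-bit chart's `config.inputs` / `config.outputs` structure. The host `elasticsearch-master` is the default service name from the elastic/elasticsearch chart, `<elasticsearch-password>` is the password retrieved earlier, and the prefix `app-event-log` is an assumption matching the index pattern used later in Kibana.

```yaml
config:
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/app-event-simulator*.log
        Tag kube.*
        Mem_Buf_Limit 5MB
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Host elasticsearch-master
        Port 9200
        HTTP_User elastic
        HTTP_Passwd <elasticsearch-password>
        tls On
        tls.verify Off
        Logstash_Format On
        Logstash_Prefix app-event-log
        Suppress_Type_Name On
```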
helm install fluent-bit fluent/fluent-bit -f fluentbit-values.yaml -n efk
Fluent Bit is already collecting the logs from the container.
Now, let's display the logs.
Give the index pattern the name app-event-log, and ensure the name given in the index pattern matches the index in the screenshot.
Now we have Kibana displaying our logs, and we can also query the search bar using keys defined in the logs.