We designed a simple yet efficient logging solution to simplify log collection, processing, and forwarding.
- Fluent Bit collects logs and sends them to Kafka.
- Alloy consumes the logs from Kafka and transforms them.
- Alloy forwards the processed logs to Loki.
- Loki stores the logs as indexes and chunks in MinIO.
- Grafana visualizes the logs from Loki.
My current setup for running this logging solution:
- Device: Apple M4 Max
- RAM: 36 GB
- macOS Version: 15.3.1 (24D70)
Prerequisites:
- OS: Linux, macOS, or Windows
- Docker: Installed
- Kubernetes Cluster: Installed
- Grafana and Loki: Installed
- Helm: Installed
- Fluent Bit: Installed and configured
We use the following tools to build this solution, starting with a local Kubernetes cluster:
The purpose of using a KIND cluster is to deploy Kubernetes in a local environment. Docker is required to run a KIND cluster. If you already have a Kubernetes cluster, there is no need to deploy a KIND cluster.
chmod +x ./KindCluster/install_kind.sh
./KindCluster/install_kind.sh # For Linux
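For reference, install_kind.sh is not reproduced in this section; a minimal Linux version might look like this sketch (pinned to the same v0.28.0 release used for macOS below):

#!/bin/sh
# Download the kind binary for Linux, make it executable, and put it on PATH.
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.28.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind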
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.28.0/kind-darwin-amd64 # For Intel Mac
[ $(uname -m) = arm64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.28.0/kind-darwin-arm64 # For M1 / ARM Mac
chmod +x ./kind
brew install kubectl # For Mac
For setting up kubectl on Linux, follow this tutorial
kubectl version --client # the cluster does not exist yet, so check only the client version
./kind --version # For Mac
kind --version # For Linux
./kind create cluster --name=mycluster --config=./KindCluster/config.yaml # For Mac
kind create cluster --name=mycluster --config=./KindCluster/config.yaml # For Linux
kubectl get nodes
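The ./KindCluster/config.yaml passed above is not reproduced here; a minimal multi-node layout might look like this sketch (the actual node roles in the repo may differ):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker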
softwareupdate --install-rosetta --agree-to-license # For M1 / ARM Mac
docker run --privileged --rm tonistiigi/binfmt --install all # For M1 / ARM Mac
docker run --rm --platform linux/amd64 alpine uname -m # For M1 / ARM Mac
cd src/Nopayloaddb
kubectl create namespace npps
kubectl create -n npps -f secret.yaml -f django-service.yaml -f django-deployment.yaml -f postgres-service.yaml -f postgres-deployment.yaml
kubectl port-forward deployment/npdb 8000:8000 -n npps
http://localhost:8000/api/cdb_rest/payloadiovs/?gtName=sPHENIX_ExampleGT_24&majorIOV=0&minorIOV=999999
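You can also verify the endpoint from a terminal:

# Query the payload IOVs API through the port-forward (same URL as above):
curl -s "http://localhost:8000/api/cdb_rest/payloadiovs/?gtName=sPHENIX_ExampleGT_24&majorIOV=0&minorIOV=999999"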
kubectl delete ns grafana-loki # remove any leftover grafana-loki namespace from a previous deployment
kubectl create ns kafka
kubectl config set-context --current --namespace=kafka
cd fluentbit
kubectl create -f cluster-role.yaml -f clusterrole-binding.yaml -f service-account.yaml
kubectl create -f fluent-bit-configmap.yaml
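The fluent-bit-configmap.yaml itself is not reproduced here; its Kafka output section might look like the following sketch (the broker address and Match pattern are assumptions, while the topic name matches the one used in Kafka UI below):

[OUTPUT]
    Name    kafka
    Match   kube.*
    Brokers kafka.kafka.svc.cluster.local:9092
    Topics  ops.kube-logs-fluentbit.stream.json.001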
kubectl create -f fluent-bit-daemonset.yaml
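Check that the collector pods are up (assuming the DaemonSet is named fluent-bit):

kubectl get daemonset fluent-bit -n kafka
kubectl logs daemonset/fluent-bit -n kafka --tail=20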
cd ..
cd kafka
kubectl create -n kafka -f kafka-pvc.yaml -f kafka-statefulset.yaml -f kafka-service.yaml
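If the kafka-ui chart repository has not been added to Helm yet, add it first (assuming the Provectus chart repo, which publishes the kafka-ui chart):

helm repo add kafka-ui https://provectus.github.io/kafka-ui-charts
helm repo update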
helm install kafka-ui kafka-ui/kafka-ui --values kafka-ui.yaml
kubectl port-forward svc/kafka-ui 8081:80
Open your browser and navigate to: http://localhost:8081
- Go to Topics
- Click on ops.kube-logs-fluentbit.stream.json.001
- View Messages (logs)
We can now access the logs from Nopayloaddb.
Next, we configure Alloy, which acts as a Kafka consumer: it reads the logs from Kafka and forwards them to Loki.
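The configalloy.yaml values file is not shown in this section. Using the grafana/alloy chart's configMap.content key, the pipeline inside it might look like this sketch (the broker address, component labels, and Loki URL are assumptions; the topic name matches the one above):

alloy:
  configMap:
    content: |-
      // Consume the Fluent Bit topic from Kafka.
      loki.source.kafka "from_kafka" {
        brokers    = ["kafka.kafka.svc.cluster.local:9092"]
        topics     = ["ops.kube-logs-fluentbit.stream.json.001"]
        forward_to = [loki.write.default.receiver]
      }

      // Push the consumed logs to Loki.
      loki.write "default" {
        endpoint {
          url = "http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/push"
        }
      }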
cd ..
cd loki
kubectl create ns monitoring
kubectl config set-context --current --namespace=monitoring
kubectl create -f loki-pvc.yaml
helm repo add grafana https://grafana.github.io/helm-charts
helm upgrade --install --values all-values.yaml loki grafana/loki-stack -n monitoring
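The all-values.yaml file is not reproduced here; to enable the bundled Grafana and point Loki at MinIO (set up later in this walkthrough), it might contain something along these lines (every endpoint, credential, and the bucket name logs are assumptions):

grafana:
  enabled: true
loki:
  config:
    schema_config:
      configs:
        - from: "2024-01-01"
          store: boltdb-shipper
          object_store: s3
          schema: v11
          index:
            prefix: index_
            period: 24h
    storage_config:
      boltdb_shipper:
        shared_store: s3
      aws:
        # MinIO speaks the S3 API; credentials and endpoint are placeholders.
        s3: http://minioadmin:minioadmin@minio-service.minio.svc.cluster.local:9000/logs
        s3forcepathstyle: true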
helm upgrade --install alloy grafana/alloy -n monitoring --values configalloy.yaml
kubectl get secret loki-grafana -o jsonpath="{.data.admin-user}" | base64 --decode
kubectl get secret loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode
kubectl port-forward svc/loki-grafana 3000:80
Open the Grafana web UI at http://localhost:3000 and log in with the credentials retrieved above. Then go to Connections > Data sources, select Loki, and open Explore to view the payload logs.
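To double-check that Loki is ingesting logs, you can also query its HTTP API directly (assuming the loki-stack service is named loki on port 3100):

kubectl port-forward svc/loki 3100:3100 -n monitoring &
curl -s "http://localhost:3100/loki/api/v1/labels" # should list the label keys attached to the ingested logs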
Loki writes its log indexes and chunks to MinIO for storage.
cd ..
cd minio
kubectl create ns minio
kubectl config set-context --current --namespace=minio
kubectl create -f minio-secret.yaml -f minio-pvc.yaml -f minio-newdeploy.yaml -f minio-service.yaml # create the secret and PVC before the deployment that mounts them
kubectl port-forward svc/minio-service 9090:9090 -n minio
Open the MinIO web UI at http://localhost:9090. By default, the MinIO username and password are both minioadmin; we replace them using the Kubernetes secret applied above. Log in and create a bucket named logs; you will then see the logs stored as indexes and chunks.
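As with Grafana above, the credentials can be read back from the secret (the secret name minio-secret and its key names are assumptions based on the manifest files):

kubectl get secret minio-secret -n minio -o jsonpath="{.data.MINIO_ROOT_USER}" | base64 --decode
kubectl get secret minio-secret -n minio -o jsonpath="{.data.MINIO_ROOT_PASSWORD}" | base64 --decode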