Personal capstone project bringing together some technologies I've been exploring. It covers the creation of a managed Kubernetes cluster (EKS) on AWS using Terraform. To enrich and automate the experience, I used Helm charts to deploy Ingress-Nginx as the Ingress Controller, the Metrics Server, the Kube Prometheus Stack, and the Cluster Autoscaler. For the CI part I used Jenkins and for the CD part ArgoCD, all in a GitOps style.
In order to be cost-effective, once the Kubernetes control plane was created and configured (on AWS's managed solution, EKS) using Terraform and the Helm charts, I destroyed all the infrastructure previously created on the cloud (terraform destroy) and moved to a local Kubernetes cluster using K3D or KIND (for fun and the sake of experimenting).
- Design & run a well-architected AWS EKS Cluster based on High-Availability & Cost-effectiveness
- DevOps Environment for Kubernetes Cluster
- Use Terraform as Infrastructure-as-Code to Provision the Kubernetes Cluster on AWS
- Use Helm as Infrastructure-as-Code to release Kubernetes applications
- Install Core Applications on Kubernetes Cluster like Prometheus, Grafana and others
- Authenticate to AWS EKS Cluster effectively
- Run a CI pipeline using Jenkins with a production setup & GitHub integration
- ArgoCD for deploying into the Kubernetes Cluster
A few words on the project
- provisioned using Terraform
- Installed Ingress-Nginx as the Ingress Controller
- ingress: what can come into the cluster
- egress: what can go out of the cluster
- enabled HTTPS/SSL
- we need a certificate for that
- Installed Metrics Server
- Based on the metrics, one can set up a Cluster Autoscaler
- Set up a monitoring stack with Prometheus and Grafana
- So Prometheus can collect the metrics exposed by the Metrics Server and present them in Grafana dashboards
- I used the kube-prometheus-stack
- the whole Prometheus stack: server, agents, etc.
- there is also the Alertmanager
- we have also Grafana
- it's deployed as an Operator (the Prometheus Operator)
- Terraform is available as a CLI or a Docker image

docker run -it --rm hashicorp/terraform:0.12.12 --version
- check the following instructions
- k3d
- create a cluster with k3d
k3d cluster create -c k3d.yaml
- print the cluster kubeconfig
k3d kubeconfig get my-cluster > ~/.kube/k3d-my-cluster
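The k3d.yaml referenced above isn't included here; a minimal sketch of what such a config could look like (cluster name, node counts, and port mappings are illustrative, not the project's actual ones):

```yaml
# Hypothetical k3d config (k3d v5 schema); the project's actual k3d.yaml may differ
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: my-cluster
servers: 1        # control-plane nodes
agents: 2         # worker nodes
ports:
  - port: 8080:80 # map host port 8080 to port 80 of the cluster's load balancer
    nodeFilters:
      - loadbalancer
```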
- kind
- it collects node and pod metrics, which are required to set up autoscaling policies
- it requires metrics-server
- chart value used
- cluster-autoscaler chart
- reference parameters taken from main.tf
helm install cluster-autoscaler autoscaler/cluster-autoscaler --namespace kube-system -f notes/helm_overview/charts/cluster-autoscaler/values.yaml
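The values file passed with -f isn't shown here; for the autoscaler/cluster-autoscaler chart it typically pins the cluster name and region so node-group auto-discovery works. A sketch with placeholder values (not the project's actual ones):

```yaml
# Placeholder values for the cluster-autoscaler chart; adjust to your cluster
autoDiscovery:
  clusterName: my-eks-cluster   # must match the EKS cluster name from main.tf
awsRegion: eu-west-1
rbac:
  serviceAccount:
    annotations:
      # IRSA role granting the autoscaler permission to resize the node group
      eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<autoscaler-role>
```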
- Set up an autoscaling group
- min and max capacity -> main.tf
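The min/max capacity lives in main.tf; a hedged sketch of how such bounds might look there (resource names and numbers are illustrative, not the project's actual code):

```hcl
# Illustrative node group: the Cluster Autoscaler scales between min_size and max_size
resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "workers"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.private_subnet_ids

  scaling_config {
    desired_size = 2
    min_size     = 1  # floor the autoscaler can scale down to
    max_size     = 5  # ceiling the autoscaler can scale up to
  }
}
```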
3. Ingress Controller: Ingress-Nginx
- Exposes the website publicly
- It's an Nginx web server that receives HTTP/S requests from outside the cluster and routes them to a specific service according to rules called Ingress Objects
- lecture
- Create a cert with ACM, with DNS Verification
- Paste the ARN of the certificate in values.yaml
- aws-load-balancer
helm upgrade my-ingress-nginx ingress-nginx/ingress-nginx --version 4.0.9 --namespace kube-system -f notes/helm_overview/charts/ingress-nginx/values.yaml
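The values.yaml holding the ACM certificate ARN isn't shown; with this chart, the certificate is usually wired up through service annotations on the controller. A sketch (the ARN is a placeholder for the one issued by ACM):

```yaml
# Sketch of ingress-nginx/values.yaml for TLS termination at the AWS load balancer
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account-id>:certificate/<cert-id>"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
```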
- lecture ; Prometheus Exporter
- chart ; stack doc
- Install the prometheus-community Helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
- Create the monitoring namespace
kubectl create ns monitoring
- Install stack
helm install --create-namespace --namespace monitoring prometheus prometheus-community/kube-prometheus-stack
- switch the current namespace to monitoring
kubectl config set-context --current --namespace monitoring
kubectl get customresourcedefinitions.apiextensions.k8s.io | grep monitoring
- get the port from
kubectl get pod prometheus-prometheus-kube-prometheus-prometheus-0 -o yaml
kubectl port-forward prometheus-prometheus-kube-prometheus-prometheus-0 9090
- check the arguments the Prometheus server was started with
kubectl get pod prometheus-prometheus-kube-prometheus-prometheus-0 -o jsonpath='{..args}'
kubectl get pod alertmanager-prometheus-kube-prometheus-alertmanager-0 -o yaml
kubectl port-forward alertmanager-prometheus-kube-prometheus-alertmanager-0 9093
kubectl -n monitoring get pods
kubectl logs prometheus-grafana-5cddc775c4-f62pj | less
- search the logs for user= and the running port (should be 3000)
- get grafana admin password
kubectl get secrets prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
- or values.yaml
- or
kubectl get secrets prometheus-grafana -o jsonpath='{..admin-user}{"\n"}{..admin-password}' | base64 --decode
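Secret data is stored base64-encoded, which is why both commands pipe through base64 --decode. A self-contained round-trip with a made-up password (no cluster required):

```shell
# Kubernetes stores Secret values base64-encoded; simulate the decode step
# with a made-up password instead of querying a live cluster
encoded=$(printf 'hunter2' | base64)
echo "$encoded"                                   # encoded form, as kubectl jsonpath would print it
printf '%s' "$encoded" | base64 --decode ; echo   # prints hunter2
```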
kubectl port-forward deployment/prometheus-grafana 3000
- deploy doc ; nginx log and monitor doc
- helm chart values.yaml
- by default metrics are disabled
- enabling controller.metrics.serviceMonitor will create a new Kubernetes object called ServiceMonitor
- added the metrics block inside ingress-nginx/values.yaml
- populate additionalLabels with the labels shown by
kubectl get --namespace monitoring pod --show-labels
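Put together, the metrics section of ingress-nginx/values.yaml could look like this (the release: prometheus label is an assumption based on the Helm release name above; use whatever the --show-labels command prints for your Prometheus pods):

```yaml
# Sketch: enable metrics and the ServiceMonitor in ingress-nginx/values.yaml;
# the "release: prometheus" label is assumed, not taken from the project's files
controller:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      additionalLabels:
        release: prometheus
```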
- upgrade the release
helm upgrade my-ingress-nginx ingress-nginx/ingress-nginx --version 4.0.9 --namespace kube-system -f notes/helm_overview/charts/ingress-nginx/values.yaml
- check the new resource created
kubectl get -n monitoring servicemonitors.monitoring.coreos.com
- Now the Ingress-Nginx application is exposing Prometheus metrics
- Add a Grafana Dashboard for the Ingress
- nginx doc
- Add the new Grafana Dashboard
helm upgrade prometheus prometheus-community/kube-prometheus-stack --create-namespace --namespace monitoring -f notes/helm_overview/charts/kube-prometheus-stack/values.yaml
- Grafana Dashboard
- ArgoCD doc ; lecture
- Install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
- Access ArgoCD Web UI
- port forwarding
kubectl port-forward svc/argocd-server -n argocd 8080:443
- Expose it through the Ingress-Nginx Ingress Controller
- Login with username admin; get the initial password with
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
- Configure ArgoCD
- Create an application.yaml at the root of the desired project to be deployed in Kubernetes
- this Application component will be created in the same namespace as ArgoCD
- the first time, the application.yaml must be applied manually
- guestbook example
- appfiles
- reference application.yaml ; doc
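For reference, a hedged sketch of what such an application.yaml can look like (repoURL, path, and names are placeholders, not the project's actual repository):

```yaml
# Illustrative ArgoCD Application; created in the argocd namespace as noted above
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/<repo>.git
    targetRevision: HEAD
    path: appfiles            # directory containing the Kubernetes manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true             # delete resources removed from git
      selfHeal: true          # revert manual drift in the cluster
```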