This is a repository to store the code and concepts of the free YouTube course 'Jornada DevOps de Elite', taught by Fabricio Veronez (DevOps Pro).
Ubuntu was the operating system used, and all installation documentation assumes it.
All examples are stored in their respective folders and are also documented in the Makefile.
- k3d (https://k3d.io/v5.4.6/): a tool to run a Kubernetes cluster locally
- kubectl (https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/): a tool to interact with a Kubernetes cluster
To create a k3d cluster: k3d cluster create <cluster_name>
To create it without a load balancer: k3d cluster create <cluster_name> --no-lb
To create a k3d cluster with a given number of servers and agents:
k3d cluster create <cluster_name> --servers <number> --agents <number>
Example: k3d cluster create simple-cluster --servers 2 --agents 2
To list clusters: k3d cluster list
To delete a cluster: k3d cluster delete <cluster_name>
2.2.0 - Pod:
It is the smallest resource in Kubernetes, and containers run inside it.
- All containers in the same pod share the IP address and file system.
Forward a pod's port to localhost:
kubectl port-forward pod/<pod_name> <hostPort>:<podPort>
Get pods filtered by label:
kubectl get pods -l <label_key>=<label_value>
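As a sketch, a minimal Pod manifest (the name, label, and image are illustrative) that could be applied with kubectl apply -f pod.yml:

```yaml
# Minimal Pod sketch; name, label, and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web             # matched by filters like: kubectl get pods -l app=web
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80   # the podPort used by kubectl port-forward
```

With this applied, kubectl port-forward pod/web-pod 8080:80 would expose the container at localhost:8080.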
2.2.1 - ReplicaSet
This feature keeps the number of running pods equal to the desired number. However, pods are not automatically recreated with the latest configuration: to apply a changed pod template, you must delete the existing pods so the ReplicaSet recreates them.
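A minimal ReplicaSet sketch (values are illustrative) showing the desired replica count and the pod template whose changes do not propagate to already-running pods:

```yaml
# Minimal ReplicaSet sketch; names, labels, and image are illustrative.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3                 # desired number of pods
  selector:
    matchLabels:
      app: web
  template:                   # pod template; editing it does NOT replace
    metadata:                 # pods that are already running
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```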
2.2.3 - Deployment
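A Deployment manages ReplicaSets and, unlike a bare ReplicaSet, rolls pods over to the new template when the configuration changes. A minimal sketch (values are illustrative):

```yaml
# Minimal Deployment sketch; names, labels, and image are illustrative.
# Changing the template here triggers a rolling update of the pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```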
2.2.4 - Services
Service types:
- Internal communication:
  - ClusterIP
- External communication:
  - NodePort
  - LoadBalancer
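Minimal Service sketches for the two communication styles above (names and ports are illustrative):

```yaml
# ClusterIP (internal): reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: web-internal
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80          # service port
      targetPort: 80    # container port
---
# NodePort (external): also exposed on each node's IP at a high port.
apiVersion: v1
kind: Service
metadata:
  name: web-external
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # must fall in the default 30000-32767 range
```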
- See the default kubeconfig file: cat ~/.kube/config
- List API resources: kubectl api-resources
  With a resource filter: kubectl api-resources | grep <resource>
- List resources (nodes, deployments, pods, and others): kubectl get <resource>
- Describe a resource: kubectl describe <resource> <resource_name>
- Apply or create resources from a .yml file:
  kubectl apply -f <file>.yml or kubectl create -f <file>.yml
- Install the Terraform tool: https://www.terraform.io/downloads
Terraform is a tool for applying infrastructure as code (IaC).
In this case, the provider used is DigitalOcean, and you need to follow the steps below to run the Terraform setup files:
- Create a DigitalOcean account: https://www.digitalocean.com/
- Generate an SSH key pair to connect to the droplet: ssh-keygen -t rsa -b 2048
3.2.0 - Resource
Refers to a resource to be created or updated at apply time.
3.2.1 - Data Source
It refers to a resource previously created inside the provider, which Terraform reads rather than manages.
3.2.2 - Providers
It refers to a cloud service such as AWS, GCP, DigitalOcean, Azure, and others.
3.2.3 - Terraform settings
3.2.4 - Variables
As in programming, this feature lets us parameterize the configuration with variables.
3.2.5 - Outputs
After running the Terraform configuration flow, this feature lets us expose values from other resources as outputs.
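A minimal HCL sketch tying the concepts above together for the DigitalOcean provider; the key name, droplet name, region, and size are hypothetical values:

```hcl
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

# Provider: the cloud service being used.
provider "digitalocean" {
  token = var.do_token
}

# Variable: parameterizes the configuration.
variable "do_token" {
  type      = string
  sensitive = true
}

# Data source: reads a previously created resource (an SSH key
# already uploaded to the DigitalOcean account).
data "digitalocean_ssh_key" "main" {
  name = "my-key"
}

# Resource: created or updated at apply time.
resource "digitalocean_droplet" "web" {
  name     = "web-server"
  region   = "nyc1"
  size     = "s-1vcpu-1gb"
  image    = "ubuntu-22-04-x64"
  ssh_keys = [data.digitalocean_ssh_key.main.id]
}

# Output: exposes a value after apply.
output "droplet_ip" {
  value = digitalocean_droplet.web.ipv4_address
}
```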
Continuous integration is a flow that contains these steps:
- coding
- commit
- build
- test
- packaging
Continuous deployment is a flow that contains these steps:
- release
- acceptance tests
- deploy
Metrics are numerical measurements of software data, made available along a timeline.
5.1.0 - Types of metrics
System Metrics
- Request count
- Error count
- Resource consumption
- Resource access timing
Business Metrics
- Type of user accessing the application
- Product by
5.1.1 - Metrics are not Logs
Metrics are different from logs: their data is organized and exposed through some interface, as numerical values, graphs, or aggregations.
Logs, in contrast, are textual data, such as error messages.
Documentation: https://prometheus.io
It is an open source tool for managing and monitoring software metrics. It has several ways to visualize data.
Prometheus is a standalone tool and has been graduated by the CNCF (Cloud Native Computing Foundation), so it doesn't need additional software to run.
5.2.0 - Prometheus Server
The Prometheus server is responsible for managing and maintaining three parts:
- Retrieval: responsible for managing and executing scrape jobs.
- Storage: responsible for storing data in the TSDB format.
- PromQL: responsible for queries over the stored data.
5.2.0 - Time series database (TSDB)
There are, in general, two ways of storing data:
- Local: Prometheus itself stores data in blocks of two hours.
- With an adapter: an external service is used to perform this storage.
5.2.1 - Retrieval and jobs
This resource is responsible for performing data collection.
Data collection is done through endpoints exposed by the application; Prometheus scrapes these endpoints to obtain the data.
Prometheus has integrations for multiple programming languages (Python, Java, and others) and tools (Docker, Kubernetes, Grafana, and others).
When it does not have an integration with a given piece of software or platform, we can use exporters.
Exporter
It is a tool that runs on the application server and collects its metrics, exposing them over an API for Prometheus to scrape.
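A minimal prometheus.yml sketch showing how scrape jobs point at exposed metrics endpoints; the job names and targets are hypothetical:

```yaml
# Minimal prometheus.yml sketch; job names and targets are hypothetical.
global:
  scrape_interval: 15s        # how often Prometheus scrapes each target

scrape_configs:
  # Prometheus scraping its own /metrics endpoint
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  # An application exposing metrics, or an exporter running beside it
  - job_name: "my-app"
    metrics_path: "/metrics"
    static_configs:
      - targets: ["my-app:8080"]
```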
Push Gateway
It is a tool that makes metrics from short-lived processes, such as tasks and workers, available to Prometheus.
Service Discovery
A mechanism through which Prometheus automatically finds scrape targets (for example, pods in Kubernetes).
5.2.2 - PromQL
After collecting data, it must be exposed in some way. Prometheus can do this through:
- Web UI: a built-in interface (usually used for quick access or testing).
- Grafana
- API: queries can also be made via the HTTP API.
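For illustration, two PromQL queries of the kind run through any of the interfaces above; the metric names are hypothetical:

```promql
# Requests per second over the last 5 minutes
rate(http_requests_total[5m])

# 95th-percentile request latency from a histogram metric
histogram_quantile(0.95,
  sum(rate(http_request_duration_seconds_bucket[5m])) by (le))
```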
5.2.3 - Alert Manager
In parallel with the parts mentioned above, Prometheus has an alerting mechanism whose notifications can be routed to different stacks, such as Slack, Telegram, Discord, and others.
(Docker)
(Kubernetes)
(Terraform)
(Jenkins)