
Commit ea8663e

Merge pull request #15 from Lemoncode/feature/eks-contents
Feature/eks contents
2 parents 68f7783 + 87e186b commit ea8663e

File tree

41 files changed, +28480 -0 lines changed

.gitignore

Lines changed: 7 additions & 0 deletions
@@ -17,3 +17,10 @@ exercise-solution/

# code Jenkins demos
app/

# Access Keys
*.pem
*.pub

# CDK8s demos
*/08-ckd8s/00-hello-world-code/
Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
# Install Tools

To work with EKS we need the following tools installed on our machines:

* kubectl
* aws cli
* eksctl

Although the last tool is not mandatory for working with `EKS`, it will make all of our processes much easier.

For `kubectl`, the following link has the [kubectl installation guide](https://kubernetes.io/es/docs/tasks/tools/install-kubectl/).

To install the `aws cli`, we can use the following [link](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html).

Finally we must install `eksctl`; this [link](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) has the necessary steps.
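
Once everything is installed, a quick sanity check (a minimal sketch; the exact version output depends on platform and release):

```bash
# Confirm each tool is on the PATH and prints a version
kubectl version --client
aws --version
eksctl version
```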
Lines changed: 114 additions & 0 deletions
@@ -0,0 +1,114 @@
# Create AWS user

## Prerequisites

* Have an `AWS` account and be able to access the `AWS` console via web.
* `AWS CLI` installed on our machine.

## Agenda

## Introduction

When we create a new AWS account, the default user is `root`. It is considered good practice not to use this user for day-to-day tasks. It is better to create a new user with administrator permissions and keep the `root` credentials somewhere safe.

To create a user with administrator permissions, we must first create a group and attach the `AWS` administrator policies to that group.

## Creating a group

```bash
$ aws iam create-group --group-name <group-name>
```

> Constraints: The name can consist of letters, digits, and the following characters: plus (+), equal (=), comma (,), period (.), at (@), underscore (_), and hyphen (-). The name is not case sensitive and can be a maximum of 128 characters in length.

To verify that the operation succeeded:

```bash
$ aws iam list-groups
```

The response includes the `Amazon Resource Name` (ARN) for the new group. The `ARN` is a standard that Amazon uses to identify resources.

## Attaching a policy to the group

With the following command we attach the administrator policy to the newly created group:

```bash
$ aws iam attach-group-policy --group-name <group-name> --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```

To verify that the policy was attached correctly to the group:

```bash
$ aws iam list-attached-group-policies --group-name <group-name>
```

The response lists the policies attached to the group. If we want to inspect the contents of a particular policy we can use `aws iam get-policy`.
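
For example (a sketch: `get-policy` returns the policy metadata; retrieving the actual policy document would additionally need `aws iam get-policy-version`):

```bash
# Show metadata (default version id, attachment count, ...) for the managed policy
aws iam get-policy --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```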
## Creating an IAM user and adding it to the group

### 1. Create a user

```bash
$ aws iam create-user --user-name eksAdmin
```

### 2. Add the user to a group

```bash
$ aws iam add-user-to-group --group-name <group-name> --user-name <user-name>
```

### 3. (Optional) Give the user access to the console.

We have to give the user the URL of their account so they can sign in to the console:

```
https://My_AWS_Account_ID.signin.aws.amazon.com/console/
```

```bash
$ aws iam create-login-profile --generate-cli-skeleton > create-login-profile.json
```

This generates a `template` that we can now use to initialize the user.
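
Filled in, the skeleton would look roughly like this (the field names follow the `create-login-profile` input shape; the password value is a placeholder):

```json
{
    "UserName": "eksAdmin",
    "Password": "<a-strong-temporary-password>",
    "PasswordResetRequired": true
}
```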
```bash
$ aws iam create-login-profile --cli-input-json file://create-login-profile.json
```

This gives us the following output:

```json
{
    "LoginProfile": {
        "UserName": "eksAdmin",
        "CreateDate": "2020-12-20T16:38:19+00:00",
        "PasswordResetRequired": true
    }
}
```

Now, with the `loginProfile` created, we extract the `Account ID` (just the digits) from the user's ARN, paste it into the sign-in URL, and open it in the browser:

```
https://xxxxxxxxxxxx.signin.aws.amazon.com/console/
```
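
If we prefer the CLI to the console, the account ID (just the digits) can also be read directly (a standard `aws sts` call; the `--query` flag filters the JSON response):

```bash
# Print only the 12-digit account ID of the current identity
aws sts get-caller-identity --query Account --output text
```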
### 4. Create an Access Key

With this `key` our new user will have programmatic access from the `AWS CLI`:

```bash
$ aws iam create-access-key --user-name <user-name>
```

With the output above we can configure our default user using `aws configure`.

On Linux and macOS, we can find our credentials in `~/.aws`.
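
A minimal sketch of that configuration step (the key values are placeholders for the `create-access-key` output):

```bash
# Interactive prompts; paste the values returned by create-access-key
aws configure
# AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
# AWS Secret Access Key [None]: ****************************************
# Default region name [None]: eu-west-3
# Default output format [None]: json
```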
## References

> Configure the AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html
Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lc-cluster
  region: eu-west-3
  version: "1.18"

iam:
  withOIDC: true

managedNodeGroups:
  - name: lc-nodes
    instanceType: t2.small
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
    ssh:
      allow: true
      publicKeyPath: "./eks_key.pub"
Lines changed: 130 additions & 0 deletions
@@ -0,0 +1,130 @@
# Build your first EKS Cluster

## Introduction

The official CLI to launch a cluster is `eksctl`, a tool developed by Weaveworks in conjunction with the AWS team. Its goal is to make building an `EKS cluster` much easier. Consider everything we would have to do to build a cluster by hand: create a VPC, provision multiple subnets, set up routing in the VPC, and only then go to the control plane on the console and launch the cluster. All of that infrastructure could be created with `CloudFormation`, but it would still be a lot of work.

We can also build the cluster from the `EKS Cluster console`; every choice exposed there can be made with `eksctl`.

## How do I check the IAM role on the workspace?

```bash
aws sts get-caller-identity
```
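
The output looks something like this (illustrative values; a 12-digit account ID and the ARN of the configured identity):

```json
{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/eksAdmin"
}
```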
## Create EC2 Key

```bash
$ aws ec2 create-key-pair --key-name EksKeyPair --query 'KeyMaterial' --output text > EksKeyPair.pem
```

Restrict the permissions on the private key to avoid future warnings:

```bash
$ chmod 400 EksKeyPair.pem
```

With this new private key we can go ahead and derive a public one; that is the key that will be uploaded to each node (EC2 instance). If the node has this public key and we hold the private one, we can connect to the remote instance.

```bash
$ ssh-keygen -y -f EksKeyPair.pem > eks_key.pub
```
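
Once the nodes are up, connecting would look roughly like this (a sketch: the IP is a placeholder, and `ec2-user` assumes the default user of the Amazon Linux EKS-optimized AMI):

```bash
# SSH into a worker node with the private key created above
ssh -i EksKeyPair.pem ec2-user@<node-public-ip>
```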
## Create definition YAML

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lc-cluster
  region: eu-west-3
  version: "1.18"

iam:
  withOIDC: true

managedNodeGroups:
  - name: lc-nodes
    instanceType: t2.small
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
    ssh:
      allow: true
      publicKeyPath: "./eks_key.pub"
```
The equivalent imperative command is:

```bash
eksctl create cluster \
  --name lc-cluster \
  --version 1.18 \
  --region eu-west-3 \
  --nodegroup-name lc-nodes \
  --node-type t2.small \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --with-oidc \
  --ssh-access=true \
  --ssh-public-key=eks_key.pub \
  --managed
```

Both forms create exactly the same cluster, but to get the full power of `eksctl` we should use the declarative way, the YAML form.
## Understanding the eks file

`eksctl` is going to build our cluster using this file:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lc-cluster # [1]
  region: eu-west-3 # [2]
  version: "1.18" # [3]

iam:
  withOIDC: true # [4]

managedNodeGroups: # [5]
  - name: lc-nodes # [6]
    instanceType: t2.small # [7]
    desiredCapacity: 3 # [8]
    minSize: 1 # [9]
    maxSize: 4 # [10]
    ssh: # [11]
      allow: true # will use ~/.ssh/id_rsa.pub as the default ssh key
      publicKeyPath: "./eks_key.pub" # Add path to key
```
1. The cluster name, in our case `lc-cluster`.
2. The region where the cluster is going to be deployed.
3. The Kubernetes version that we're going to use; if we leave it empty, the latest stable version supported by `AWS` is used.
4. Enables the IAM OIDC provider as well as IRSA for the Amazon CNI plugin.
5. `managedNodeGroups` are a way for the `EKS service` to provision your data plane on your behalf. Normally a container orchestrator just orchestrates containers on compute that you bring and manage yourself: patching it, updating it, rolling in new versions, and all that day-to-day work. That role is expanding a little, and with a `managedNodeGroup` it can be handled by AWS: AWS provides the AMI and provisions the nodes into your account on your behalf.
6. The name of the group of nodes.
7. The instance type that we're running. We're using the free tier.
8. The number of nodes that we want to have in the node group.
9. The minimum number of instances that we want in the node group if the cluster infrastructure is updated.
10. The maximum number of instances that we want in the node group if the cluster infrastructure is updated.
11. The `ssh` key used to connect to our EC2 instances.
## Launching the Cluster

Now we're ready to launch the cluster:

```bash
$ eksctl create cluster -f demos.yml
```
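
Creation takes a while, since `eksctl` drives `CloudFormation` under the hood. Once it finishes we can list the cluster as a quick check (`eksctl get cluster` is a standard subcommand; the region matches the config above):

```bash
# List the clusters eksctl knows about in the target region
eksctl get cluster --region eu-west-3
```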
## Test the cluster

Now we can check that our cluster is up and running:

```bash
$ kubectl get nodes
```
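
If everything went well, the three managed nodes report `Ready` (illustrative output; node names, ages, and the exact patch version will differ):

```bash
$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-12-34.eu-west-3.compute.internal   Ready    <none>   5m    v1.18.x-eks
ip-192-168-56-78.eu-west-3.compute.internal   Ready    <none>   5m    v1.18.x-eks
ip-192-168-90-12.eu-west-3.compute.internal   Ready    <none>   5m    v1.18.x-eks
```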
Lines changed: 86 additions & 0 deletions
@@ -0,0 +1,86 @@
# Deploy the Kubernetes Dashboard

## Deploy the Official Kubernetes Dashboard

The official Kubernetes dashboard is not deployed by default, but there are instructions in the official documentation.

We can deploy the dashboard with the following commands:

```bash
export DASHBOARD_VERSION="v2.0.0"

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/${DASHBOARD_VERSION}/aio/deploy/recommended.yaml

namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
```
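
Before exposing the dashboard we can confirm its pods are running (a quick check with the standard namespace flag):

```bash
# Both the dashboard and the metrics-scraper pods should reach Running
kubectl get pods -n kubernetes-dashboard
```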
If we have a look at our services we can find:

```bash
kubectl get services --all-namespaces
NAMESPACE              NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
default                kubernetes                  ClusterIP   10.100.0.1     <none>        443/TCP         3h15m
kube-system            kube-dns                    ClusterIP   10.100.0.10    <none>        53/UDP,53/TCP   3h15m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.100.90.15   <none>        8000/TCP        75s
kubernetes-dashboard   kubernetes-dashboard        ClusterIP   10.100.9.10    <none>        443/TCP         76s
```

## Access the Dashboard

Since this is deployed to our private cluster, we need to access it via a proxy. `kubectl proxy` is available to proxy our requests to the dashboard service. In your workspace, run the following command:

```bash
$ kubectl proxy --port=8080 --address=0.0.0.0 --disable-filter=true &
```

> When running from a local environment it is enough to do `kubectl proxy --port=8080` (or any other port that we want to use).

This will start the proxy, listening on port 8080 on all interfaces, and will disable filtering of non-localhost requests.

This command will continue to run in the background of the current terminal's session.

Now we can access the Kubernetes Dashboard by browsing to:

```
localhost:8080/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```

To access the dashboard we have to provide a `token`; we can obtain one by running the following:

```bash
$ aws eks get-token --cluster-name lc-cluster | jq -r '.status.token'
```

Copy the output of this command, click the radio button next to Token, and paste the output into the text field below.

## Cleanup

Stop the proxy and delete the dashboard deployment:

```bash
# kill proxy
pkill -f 'kubectl proxy --port=8080'

# delete dashboard
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/${DASHBOARD_VERSION}/aio/deploy/recommended.yaml

unset DASHBOARD_VERSION
```

## References

> How to use kubectl proxy to access your applications: https://www.returngis.net/2019/04/como-usar-kubectl-proxy-para-acceder-a-tus-aplicaciones/
> Proxies in Kubernetes official documentation: https://kubernetes.io/docs/concepts/cluster-administration/proxies/
