The `main.tf` file includes both an EKS cluster and a self-managed Kubeadm cluster on EC2. You only need one of them, so comment out the other.
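As a rough illustration, keeping one option and commenting out the other could look like the sketch below; the block names and module paths here are hypothetical, so match them to whatever `main.tf` actually defines.

```hcl
# Hypothetical sketch -- the real block names and paths in this repo's main.tf may differ.

# Option A: managed EKS cluster (kept active)
module "eks_cluster" {
  source = "./modules/eks" # hypothetical path
  # ...
}

# Option B: self-managed Kubeadm cluster on EC2 (commented out)
# module "kubeadm_cluster" {
#   source = "./modules/kubeadm-ec2" # hypothetical path
#   # ...
# }
```

With one of the two commented out, provision the infrastructure: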
```bash
terraform init
terraform plan
terraform apply --auto-approve
```
- SSH into the nodes
- Run the installation scripts on each node:

```bash
chmod +x common.sh
./common.sh
```
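If the scripts are not already present on the nodes, one way to copy them over and run them is sketched below; the key path, user name, and address are placeholders, not values from this repo.

```bash
# Placeholders throughout -- substitute your own key, SSH user (e.g. ubuntu or ec2-user),
# and the node address from the Terraform output or the EC2 console.
scp -i ~/.ssh/<your-key>.pem common.sh <user>@<node-public-ip>:~
ssh -i ~/.ssh/<your-key>.pem <user>@<node-public-ip>
chmod +x common.sh
./common.sh
```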
- SSH into the first node
- Execute the init script:

```bash
chmod +x multi-master-init.sh
./multi-master-init.sh
```

- Enter the DNS name or IP of the Network Load Balancer that was created.
- If the Calico network step fails, simply re-run the `kubectl apply` command after a moment (example below).
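For reference, re-running the Calico apply looks like this; the manifest URL and version are whatever `multi-master-init.sh` actually uses, the one here is only a common example.

```bash
# Example only -- match the manifest URL/version used by multi-master-init.sh.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
```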
After running `multi-master-init.sh`, the console should print out the join command; copy it and run it on the other nodes. If you missed or lost it, run `kubeadm token create --print-join-command` on the first node to get a new join command.
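The printed join command follows the standard `kubeadm join` shape shown below; the token, hash, and certificate key are placeholders, not values produced by this repo. Note that `kubeadm token create --print-join-command` prints only the worker form; additional control-plane nodes in a multi-master setup also need the `--control-plane` and `--certificate-key` flags.

```bash
# Worker nodes (placeholders -- use the values printed by your own init run):
kubeadm join <nlb-dns-or-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Additional control-plane nodes also pass:
#   --control-plane --certificate-key <key>
# A fresh certificate key can be generated with:
#   kubeadm init phase upload-certs --upload-certs
```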
Run this script to delete the cluster and undo the initialization process:

```bash
chmod +x reset-cluster.sh
./reset-cluster.sh
```
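This is not the contents of the repo's `reset-cluster.sh`, but a reset of this kind typically boils down to something like the following on each node:

```bash
# Typical kubeadm teardown steps -- illustrative, not the repo's actual script.
sudo kubeadm reset -f          # undo kubeadm init/join on this node
sudo rm -rf /etc/cni/net.d     # clear leftover CNI (Calico) configuration
rm -rf ~/.kube                 # drop the local kubeconfig
```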
Before running `terraform apply`, make sure you modify `role_arn` to match your role in IAM (it should also have sufficient permissions).
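Exactly where `role_arn` lives depends on this repo's configuration, but in a typical Terraform EKS setup it is the cluster's IAM role on the `aws_eks_cluster` resource, roughly like this (the ARN below is a placeholder):

```hcl
# Illustrative only -- point role_arn at your own IAM role with EKS permissions.
resource "aws_eks_cluster" "this" {
  name     = "my-cluster"
  role_arn = "arn:aws:iam::<account-id>:role/<your-eks-cluster-role>"

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}
```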
Once the EKS cluster is created, update your kubeconfig so `kubectl` can reach it (replace `region-code` and `my-cluster` with your region and cluster name):

```bash
aws eks update-kubeconfig --region region-code --name my-cluster
```
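A quick way to confirm that access works:

```bash
# Should list the worker nodes once they have joined and are Ready.
kubectl get nodes
```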