This script bootstraps Cluster Stacks on OpenStack, namely the SCS gx-scs environment.
Quick-start source: `cluster-stacks/providers/openstack/README.md`.
Run `create_all.sh` with bash or zsh and follow the displayed instructions.
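For example, from the repository root (zsh works just as well):

```bash
# Start the bootstrapper; it prints step-by-step instructions as it proceeds.
bash ./create_all.sh
```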
As stated in the source README, you need the following CLI tools:
- `kind` (works with both Docker and Podman)
- `kubectl`
- `helm`
- `clusterctl`
- `jq`
Additionally, this script needs:

- `python3`
- the `python3-yaml` library

Go's `envsubst` is not needed here because it is replaced with Python (see the sketch below).
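To illustrate what the Python pieces are used for: a minimal sketch (an assumption, not the actual script code) of an `envsubst`-style substitution in Python could look like this:

```python
#!/usr/bin/env python3
# Sketch: render ${VAR} placeholders in a YAML template from the
# environment, roughly what envsubst does. Not the actual script code.
import os
import string
import sys

import yaml  # provided by the python3-yaml package

template = sys.stdin.read()
# substitute() raises KeyError for unset variables, which is stricter
# (and safer) than envsubst's silent empty-string replacement.
rendered = string.Template(template).substitute(os.environ)
# Round-trip through the YAML parser to catch broken output early.
yaml.safe_dump(yaml.safe_load(rendered), sys.stdout)
```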
Two files have to be provided:

- `gh-pat`: plain-text file that contains your GitHub PAT
- `clouds.yaml`: credentials from your OpenStack project
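For orientation, a `clouds.yaml` based on application credentials usually has this shape (the cloud name and all values here are placeholders):

```yaml
clouds:
  openstack:
    auth:
      auth_url: https://identity.example.com/v3
      application_credential_id: "APP_CRED_ID"
      application_credential_secret: "APP_CRED_SECRET"
    region_name: "RegionOne"
    interface: "public"
    identity_api_version: 3
    auth_type: "v3applicationcredential"
```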
- Delete the cluster resource like so (kubectl targets the Cluster Stacks management cluster):

  ```bash
  kubectl -n scs-tenant delete cluster cs-cluster
  ```

- Delete the KinD cluster (run on your local machine):

  ```bash
  kind delete clusters cluster-stacks-bootstrapper
  ```
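A quick sanity check (not from the source README) that each cleanup step worked:

```bash
# Run after step 1: no cluster resources should be left.
kubectl -n scs-tenant get clusters
# Run after step 2: the bootstrapper cluster should no longer be listed.
kind get clusters
```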
The folder `generate-cert-access` contains a script which lets you generate a new kubeconfig with fewer privileges than cluster-admin.
This is very helpful for running tests, experiments, or compliance checks.
After you run the bootstrapping script, you have both a Cluster Stacks management cluster and the first workload cluster.
Use the workload cluster's kubeconfig via `export KUBECONFIG=xyz` (which makes you cluster-admin by default) and then run the script.
The resulting kubeconfig allows someone else to use `kubectl` with the workload cluster as endpoint, but scoped to a namespace.
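To see the effect, `kubectl auth can-i` works well; the kubeconfig file name and namespace below are assumptions:

```bash
# Inside the granted namespace, normal operations are allowed ...
kubectl --kubeconfig ./restricted.kubeconfig auth can-i create pods -n tenant-ns
# ... while cluster-scoped operations like listing nodes are denied.
kubectl --kubeconfig ./restricted.kubeconfig auth can-i list nodes
```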
If you have the OpenStack CLI client installed, you can make use of the `app-cred-*-openrc.sh` file you get from Horizon:

```bash
source <(openstack complete)
source app-cred-*-openrc.sh
```
The CLI tool helps with cleaning up OpenStack resources if something went wrong and the UI is too annoying.
Example: delete all ports in a project that are marked as `DOWN`:

```bash
openstack port list --long --format value | grep DOWN | awk '{ print $1 }' | xargs -L 1 openstack port delete
```
The networking in the workload clusters is managed by Cilium.
Via kubectl, you can check the Cilium state in the workload cluster:

```bash
kubectl -n kube-system exec -ti cilium-4ww5k -- cilium status
```

(where `-4ww5k` is to be replaced by your actual pod name suffix).
Of course, you can also install the `cilium` CLI binary on your local machine and aim it at the workload cluster as well.
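With the CLI installed locally and `KUBECONFIG` pointing at the workload cluster, the same information is available without exec'ing into a pod:

```bash
# Summarize the health of Cilium in the current cluster.
cilium status
# Optionally run the (lengthy) end-to-end connectivity test suite.
cilium connectivity test
```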