This repository is used to manage one or more os2display environments. The environments themselves are deployed via Helm; this repo and the tools within it simply make it easier to handle the installation, upgrade and management of the releases.
The Helm chart itself is hosted at https://reload.github.io/os2display-k8s and the source can be found in the os2display-hosting repository.
Requirements:
- The Helm client
- kubectl and a working connection to the cluster (see os2display-infrastructure for details on how to connect).
Create a <some-prefix>-master branch, copy _variables.source.example to _variables.source and customize the settings in the file. Then commit the file to your repository, and use your branch for future work.
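A sketch of the branch setup above, assuming a hypothetical prefix "acme" (replace it with your own):

```shell
# Create the working branch for your environments (prefix "acme" is an example)
git checkout -b acme-master

# Copy the example settings file and customize it
cp _variables.source.example _variables.source
# ... edit _variables.source to match your setup ...

# Commit the customized settings to your branch
git add _variables.source
git commit -m "Add variables for the acme environments"
```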
You first need to add the chart repository to your local helm client.
```shell
helm repo add os2display https://reload.github.io/os2display-k8s
```

You can verify that you have access to the repository by listing all charts in the repository.
```shell
helm search os2display/
```

You should see a single os2display chart.
Then verify that you have access to the environment by executing
```shell
helm list
```

This should at the very least list the cert-manager and nginx-ingress releases used in the os2display cluster. You may also see an os2display-[environment] release for each os2display environment handled via Helm.
You are now ready to create and update environments.
Use these steps to set up a new os2display environment. Each environment is contained in a separate Kubernetes namespace that will be created automatically.
First create the environment configuration using ./init.sh <environment-name> <admin-release-number>
```shell
# Create a staging environment using the reload-build-8 release
./init.sh staging 8
```

See https://hub.docker.com/r/reload/os2display-admin-release/tags for a list of available release numbers; you probably want the latest.
This creates a directory named after the environment and instantiates a values.yaml and a secrets.yaml file. The latter should never be committed.
Inspect the yaml files and make any necessary updates, e.g. to the admin email or credentials. Note that you are not required to change anything. You rarely need the credentials listed in secrets.yaml, and they can always be extracted from the environment, but you may want to save a copy for important environments such as production.
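If you later need a credential from a running environment, it can be read back with kubectl. A sketch, assuming the environment is named staging; the actual secret and key names depend on the chart, so inspect the namespace first with `kubectl get secrets`:

```shell
# Hypothetical secret and key names - adjust to what the chart actually creates
encoded=$(kubectl get secret os2display-staging \
  --namespace os2display-staging \
  -o jsonpath='{.data.mysql-password}')

# Kubernetes stores secret data base64-encoded, so decode it before use
echo "$encoded" | base64 --decode
```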
You could commit the values.yaml at this point, but you may just as well wait until you have a working environment.
You are now ready to create the environment. As all of the configuration is ready, this is a rather simple step.
First make sure you have the latest Helm chart

```shell
helm repo update
```

Then invoke ./install.sh [environment]
```shell
# Perform the initial installation of the staging environment
./install.sh staging
```

Helm will block until the environment has been created.
To delete an environment, first have Helm delete the release using helm delete os2display-[environment]
```shell
# Delete the "justtesting" environment
helm delete os2display-justtesting
```

Or, if you want to completely remove all traces so that you can create a new environment with the same name:
```shell
# Delete the "justtesting" environment completely.
helm delete --purge os2display-justtesting
```

Then remove the namespace using kubectl delete namespace os2display-[environment]
```shell
kubectl delete namespace os2display-justtesting
```

To deploy a new release you first need to build and push the release images. Go to the building a release page and follow the steps there if you haven't already.
Make sure you are set up to use Helm and kubectl; follow the steps above if you have not already.
To deploy to an environment, for instance test, find the folder with the environment name and update the tag under adminRelease in [env-dir]/values.yaml.
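The tag update is a one-line change in the environment's values.yaml. As an illustration, assuming the file contains a line like `tag: reload-build-8` under adminRelease and you are moving to the hypothetical build 9, you could do it from the command line:

```shell
# Bump the adminRelease tag in the test environment (build numbers are examples)
sed -i 's/tag: reload-build-8/tag: reload-build-9/' test/values.yaml
```

Editing the file by hand works just as well; remember to commit the change afterwards.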
Then make sure you have the latest Helm chart

```shell
helm repo update
```

and invoke ./upgrade.sh <environment>
```shell
# Upgrade the test environment
./upgrade.sh test
```

That's it!
Celebrate, then get back to work.