This repo contains a set of Packer, Terraform, and Ansible configurations that demonstrate working with IBM Cloud Infrastructure Center. The demo has three components:
- Image creation using Packer (and Ansible)
- Infrastructure deployment using Terraform
- Post-deployment configuration using Ansible
This demo depends on a local installation of Terraform, Packer, and Ansible. It uses IBM Cloud Infrastructure Center through the OpenStack plugin for Packer and the OpenStack provider for Terraform. The ICIC instance must have an existing deployable Linux image (our example is based on RHEL 8).
The stages of the demo are described here.
Using a pre-existing Linux image, Packer creates new images containing:
- Apache HTTP Server and PHP, along with a test file (a file that runs `phpinfo()`);
- HAProxy, with a base configuration to enable the statistics web page.

The images become deployable in ICIC.
Packer instantiates the base image nominated in the configuration, then uses its Ansible provisioner to run a playbook that performs the appropriate installation and customisation tasks. It then creates new images from the instantiated VMs, destroying the VMs once the images are saved.
The Terraform stage uses the images generated by Packer to create the demo infrastructure:
- Three VMs created from the Apache/PHP image;
- One VM created from the HAProxy image.
The final stage is reconfiguration of the HAProxy instance to point to the three Apache/PHP VMs.
As a verification proof-point, the HAProxy configuration uses the `phpinfo()` page as the health-check URL for the backends.
If you have the prerequisite resources, you can give the demo a try.
- Make sure your ICIC user-ID has an associated SSH key pair defined. When logged on to ICIC, click your user name (and project) at the top-right of the screen. From the menu that appears, select "Key pairs". If a key listed here is usable from the machine you will run the demo from, make a note of its name. If none of the listed key-pairs is usable from that machine, or there are no key-pairs listed, do the following:
  - Generate an SSH key using an appropriate method for your OS. For example, on Linux or macOS use the `ssh-keygen` command. If you already have other SSH keys, make sure you don't destroy an existing key by generating a new one over the top of it!
  - Display the public key of the generated key-pair, and copy it to the clipboard. The public key is contained in a file with the name you specified for the private key but with `.pub` appended. For example, the default name for an ED25519 key on Linux is `~/.ssh/id_ed25519`; the public key for that key-pair would be found at `~/.ssh/id_ed25519.pub`.
  - On the ICIC interface under "Key Pairs", click "Import Public Key".
  - Paste the public key into the "Public Key" field.
  - In the "Key pair name" field, type a convenient name for the key-pair. Make a note of the name, as you will need it for the Terraform configuration.
  - Press the "Import Key Pair" button to save the key-pair into ICIC.
- Clone the repository:
  ```
  $ git clone https://github.com/viccross/icic-packer-terraform-ansible.git
  ```
- Change into the Packer directory in the repo:
  ```
  $ cd <path-to-repo>/packer
  ```
- Update the variables in the Packer config `variables.pkr.hcl` to reflect your environment (a hedged sketch of the template these variables feed appears after this list). You will need to update:
  - Details for your ICIC instance (credentials, tenant, etc.)
  - The source image name to be used as a base.
  - Details of the target images, including the ICIC network. It is a Packer/OpenStack requirement that the network identifier be the UUID of the network as known to OpenStack, not the human-readable network name. You can obtain the UUID of the network using the ICIC web interface, or by using an appropriate ICIC/OpenStack CLI command.
- Run Packer:
  ```
  $ packer init .
  $ packer validate .
  $ packer build .
  ```
  In the ICIC UI you should be able to see two new images.
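For orientation, here is a minimal sketch of the kind of Packer source and build definition involved; all values and names here are invented for illustration, and the repo's actual template will differ:

```hcl
# Hypothetical sketch only -- endpoint, image, flavor, and network values
# are invented; substitute the values from your own ICIC environment.
source "openstack" "httpd" {
  identity_endpoint = "https://icic.example.com:5000/v3"
  source_image_name = "RHEL8-base"       # the pre-existing Linux image
  image_name        = "pk-httpd"         # the new image Packer will save
  flavor            = "medium"
  networks          = ["<network-uuid>"] # must be the UUID, not the name
  ssh_username      = "root"
}

build {
  sources = ["source.openstack.httpd"]

  # Packer runs this playbook in the temporary VM before imaging it.
  provisioner "ansible" {
    playbook_file = "./httpd-playbook.yml"
  }
}
```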
- Change to the Terraform directory in the repo:
  ```
  $ cd <path-to-repo>/terraform
  ```
- Update the variables in the Terraform config `variables.tf`, if needed. The "image timestamp" variable is not currently used.
- Update variables and data entries in the modules under the `modules` directory. In each module, the `variables.tf` and the data entries in `main.tf` may need to be updated. In particular, the static IP address of the HAProxy instance will need to be changed. If you don't want to use a static IP for the HAProxy instance, comment out or delete the `fixed_ip_v4` line in the `network` setting in the `openstack_compute_instance_v2` resource in `<path-to-repo>/terraform/modules/icic_haproxy_vm/main.tf` (a sketch of this resource appears after this list).
- Run Terraform:
  ```
  $ terraform init
  $ terraform plan
  $ terraform apply
  ```
  When prompted, enter "yes" to approve.
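For reference, here is a hedged sketch of roughly what the HAProxy instance resource looks like, with invented names and an example address; check the module's actual `main.tf` for the real definition:

```hcl
# Illustrative only -- names, flavor, and the address are placeholders.
resource "openstack_compute_instance_v2" "haproxy" {
  name        = "tf-haproxy"
  image_id    = data.openstack_images_image_v2.haproxy.id
  flavor_name = "medium"
  key_pair    = var.key_pair_name

  network {
    uuid        = var.network_uuid
    fixed_ip_v4 = "192.0.2.10" # delete this line to let the network assign an address
  }
}
```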
At this point you will be able to see four new VMs in the ICIC UI. Looking at the details of those VMs, you will see that the "Deployed image" field is the image created by Packer for the type of VM being inspected ("haproxy" or "httpd").
You should also be able to go to `http://<haproxy-ip-address>:8404/stats` to see the HAProxy statistics page. The only entry will be the "frontend" definition of the stats page itself.
- Run the Ansible playbook:
  ```
  $ ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook -i <haproxy-ip-address>, --extra-vars "$(terraform output -json)" -u root ansible/playbook.yml
  ```
Once this runs, the next refresh of the HAProxy stats page will show the new configuration. The frontend "tf-demo-http" will be present, as well as the three entries under the "tf-httpd" backend reflecting the three httpd VMs. The three backend servers should show green status, with "L7OK/200" in the `LastChk` column.
- Open `http://<haproxy-ip-address>/phpinfo.php` in a new browser tab. This will show the `phpinfo()` output from one of the servers (look at the content of the "System" field to see which one). Refreshing the page (with a pause of a few seconds between refreshes to allow HAProxy to distribute the connections) will yield a different server.
- Back on the HAProxy statistics page, you will see that the count of requests and bytes served against each of the backend servers has increased.
Terraform keeps track of the state of the resources it manages. If the definitions or characteristics of any defined resources change, Terraform can reapply those changes to the managed resources in a controlled manner. You can demonstrate this by rebuilding the Packer images and seeing the effect on Terraform.
- Repeat all of Stage 1 to create new "haproxy" and "httpd" images. The images will be unchanged, but they will have a more recent creation time.
- Change to the Terraform directory in the repo:
  ```
  $ cd <path-to-repo>/terraform
  ```
- Run Terraform:
  ```
  $ terraform plan
  ```
  `terraform plan` checks the known state of the resources (as last deployed by Terraform) against the state they would be in as per the definition. Because the code that chooses the image to be deployed contains `most_recent = true` (see the sketch after this list), and a more recent image is available, `terraform plan` reports that the VMs would be re-created.
- Use Terraform to remove the deployed VMs:
  ```
  $ terraform destroy
  ```
  When prompted, enter "yes" to approve.
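The image-selection code mentioned above is a data source along these lines; this is a hedged sketch with an assumed variable name, not the repo's exact definition:

```hcl
# Selects the newest image whose name matches, so a freshly rebuilt
# Packer image causes Terraform to plan re-creation of the dependent VMs.
# `var.httpd_image_name` is an assumed variable name for this sketch.
data "openstack_images_image_v2" "httpd" {
  name        = var.httpd_image_name
  most_recent = true
}
```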
If you want (or need) to clean up the Packer images, this needs to be done manually using either the ICIC GUI or the ICIC/OpenStack CLI.
During development, some issues and challenges were encountered.
To perform the post-deployment HAProxy configuration, the instance name and IP address information from Terraform needs to be available to Ansible.
This could have been avoided by using static IP addresses for the HTTPD instances, but the author believes that to be a lazy workaround. ;)
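To make the shape of the problem concrete, here is a hedged sketch of the kind of outputs involved; the module and attribute names are invented for illustration and will differ from the repo's actual outputs:

```hcl
# Hypothetical outputs -- module and attribute names are illustrative only.
output "haproxy_ip" {
  value = module.icic_haproxy_vm.ip_address
}

output "httpd_instances" {
  # An "object of objects": one entry per httpd VM, keyed by instance.
  value = {
    for key, vm in module.icic_httpd_vm :
    key => { name = vm.name, ip = vm.ip_address }
  }
}
```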
`terraform output -json` provides a convenient way to format Terraform output in a form that Ansible should be able to consume, but there are a couple of issues:
- Maybe it's a reflection of the author's lack of knowledge of parsing JSON, but it was not trivial to work out how to process the JSON object generated by `terraform output -json`. Terraform returns an "object of objects" for the HTTPD instances, which the author tried to loop through using a Jinja2 `for` block. The end result required a pass through the `json_query` filter, using a counter-intuitive query specification, to yield something usable.
- The Ansible stage was initially run as a `local-exec` provisioner in the HAProxy instance resource module, then as part of a `null_resource` at the top level. However, it appears that the Terraform state file is not updated during a `terraform apply` run, so the Ansible playbook fails because the required output is not present (a sketch of this approach appears below).
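For context, the abandoned approach looked roughly like this; it is a sketch with invented module and output names, and it fails as described because the outputs it reads are not yet written to the state file during the same `terraform apply`:

```hcl
# Illustrative only -- module names and the playbook path are assumptions.
resource "null_resource" "haproxy_config" {
  depends_on = [module.icic_haproxy_vm, module.icic_httpd_vm]

  provisioner "local-exec" {
    # Reading `terraform output -json` mid-apply is where this breaks:
    # the state has not been updated yet, so the required outputs are absent.
    command = "ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook -i ${module.icic_haproxy_vm.ip_address}, --extra-vars \"$(terraform output -json)\" -u root ansible/playbook.yml"
  }
}
```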
Both of these issues could possibly be solved by using the Ansible Terraform collection rather than calling external Ansible via `local-exec`. Unfortunately, the system I've been developing on has too back-level a Python and Ansible environment to work with this collection.
In theory, practice and theory are the same... but in practice, theory and practice are different.
It's been a little challenging to make this into a visually interesting demo.
When deploying real infrastructure one is only interested in the end result, but for a demo it's useful to watch the progress unfolding.
Seeing the status on the HAProxy page appear green instantly is only desirable when deploying to production; in a demo, seeing red change to green as a program runs is more effective.
In this example, trying to create this effect has led to additional `depends_on` and other configurations (e.g. `null_resource`s) that make the total duration longer.
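As an illustration of the kind of pacing configuration meant here (a sketch; the resource and module names are invented):

```hcl
# Hypothetical pacing step: wait until the httpd VMs exist, then stall
# briefly so the audience can watch the HAProxy stats page change state.
resource "null_resource" "demo_pause" {
  depends_on = [module.icic_httpd_vm]

  provisioner "local-exec" {
    command = "sleep 30"
  }
}
```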
Vic Cross viccross@au.ibm.com. Cards and letters containing suggestions or other feedback are welcome!