docs/CONFIG-VARS.md
The Federal Information Processing Standard (FIPS) 140 is a US government standard that defines minimum security requirements for cryptographic modules in information technology products and systems. Azure Kubernetes Service (AKS) allows the creation of node pools with FIPS 140-2 enabled. Deployments running on FIPS-enabled node pools provide increased security and help meet security controls as part of FedRAMP compliance. For more information on FIPS 140-2, see [Federal Information Processing Standard (FIPS) 140](https://learn.microsoft.com/en-us/azure/compliance/offerings/offering-fips-140-2).
To enable the FIPS support in your subscription, you first need to accept the legal terms of the `Ubuntu Pro FIPS 22.04 LTS` image that will be used in the deployment. For details see [Ubuntu Pro FIPS 22.04 LTS](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/canonical.0001-com-ubuntu-pro-jammy-fips?tab=Overview).
To accept the terms, run the following `az` command before deploying the cluster:
```bash
az vm image terms accept --urn Canonical:0001-com-ubuntu-pro-jammy-fips:pro-fips-22_04-gen2:latest --subscription $subscription_id
```
| Name | Description | Type | Default | Notes |
## General
Ubuntu 22.04 LTS is the operating system used on the Jump/NFS servers. Ubuntu creates the `/mnt` location as an ephemeral drive that cannot be used as the root location of the `jump_rwx_filestore_path` variable.
Community-contributed configuration variables are listed in the tables below. These variables can also be specified on the terraform command line.
> [!CAUTION]
> Community members are responsible for maintaining these features. While project maintainers will verify these features work as expected when merged, they cannot guarantee future releases will not break them. If you encounter issues while using these features, start a [GitHub Discussion](https://github.com/sassoftware/viya4-iac-azure/discussions) or open a Pull Request to fix them. As a last resort, you can create a GitHub Issue.
## Table of Contents
* [Spot Nodes](#spot_nodes)
* [Netapp Volume Size](#netapp_volume_size)
<a name="spot_nodes"></a>
## Spot Nodes
Spot Nodes allow you to run Azure Kubernetes Service (AKS) workloads on low-cost, surplus compute capacity offered by Azure. These Spot Virtual Machines (VMs) can significantly reduce infrastructure costs, especially for workloads that are fault-tolerant, batch-oriented, or running in temporary lab environments. However, Spot VMs can be preempted (evicted) by Azure at any time if the capacity is needed elsewhere, which makes them less suitable for critical or stateful workloads.
For further information, see https://learn.microsoft.com/en-us/azure/aks/spot-node-pool
> [!CAUTION]
> Spot nodes can be evicted with little notice. They are best used for non-production, non-critical workloads or for scenarios where cost savings outweigh the risk of eviction. This configuration is not supported by SAS Technical Support. Monitor eviction rates and ensure your workloads can tolerate sudden node loss.
To enable a Spot node pool in your AKS cluster using this module, configure the community-maintained variables listed below. These options customize the behavior of the Spot node pool and its underlying virtual machine scale set.
| Name | Description | Type | Default | Release Added | Notes |
| :--- | :--- | :--- | :--- | :--- | :--- |
| community_priority | (Optional) The Priority for Virtual Machines within the Virtual Machine Scale Set that powers this Node Pool. Possible values are Regular and Spot. Defaults to Regular. Changing this forces a new resource to be created. | string |`Regular`| 10.3.0 | Changing this to Spot enables the Spot node pool functionality |
| community_eviction_policy | (Optional) The Eviction Policy which should be used for Virtual Machines within the Virtual Machine Scale Set powering this Node Pool. Possible values are Deallocate and Delete. Changing this forces a new resource to be created. | string |`Delete`| 10.3.0 ||
| community_spot_max_price | (Optional) The maximum price you're willing to pay in USD per Virtual Machine. Valid values are -1 (the current on-demand price for a Virtual Machine) or a positive value with up to five decimal places. Changing this forces a new resource to be created. | string |`-1`| 10.3.0 ||
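For example, a minimal tfvars sketch that enables a Spot node pool (the values shown are illustrative, not requirements):

```terraform
# Illustrative community-variable settings for a Spot node pool.
community_priority        = "Spot"   # switches the node pool VMs to Spot priority
community_eviction_policy = "Delete" # evicted VMs are deleted rather than deallocated
community_spot_max_price  = "-1"     # pay up to the current on-demand price
```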
<a name="netapp_volume_size"></a>
## Netapp Volume Size
Netapp Volume Size control allows you to create a Netapp Volume smaller than the Netapp Pool, leaving room for tools outside of this Terraform project to create additional Netapp Volumes within the pool.
To control the Netapp Volume size, use the community-maintained variable listed below. It sets the size of the Netapp Volume in GB; the value must be smaller than the Netapp Pool size. Terraform does not validate this during the planning phase, so if it is misconfigured, `terraform apply` will fail when attempting to deploy the volume.
| Name | Description | Type | Default | Release Added | Notes |
| :--- | :--- | :--- | :--- | :--- | :--- |
| community_netapp_volume_size | Size of the netapp volume | number | 0 | 10.3.0 | Zero will disable, must be smaller than the Netapp Pool. The value is given in GB |
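As a sketch, assuming the Netapp Pool is larger than the requested size:

```terraform
# Illustrative: create a 256 GB volume inside a (larger) Netapp Pool.
# The value must be smaller than the pool size; 0 disables this feature.
community_netapp_volume_size = 256
```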
docs/user/TerratestDockerUsage.md
### Docker Environment File for Azure Authentication
Follow either one of the authentication methods that are described in [Authenticating Terraform to access Azure](./TerraformAzureAuthentication.md), and create a file with the authentication variable values to use with container invocation. Store these values outside of this repository in a secure file, such as
`$HOME/.azure_docker_creds.env`.
**NOTE**: Do not use quotation marks around the values in the file, and be sure to avoid any trailing blank spaces.
#### Public Access Cidrs Environment File
In order to run `terraform apply` integration tests, you will also need to define `TF_VAR_public_cidrs` as described in [Admin Access](../CONFIG-VARS.md#admin-access), and create a file with the public access CIDR values to use with container invocation. Store these values in [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) outside of this repository in a secure file, such as `$HOME/.azure_public_cidrs.env`. Protect that file so that only you have Read access to it. Below is an example of what the file should look like.
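A hypothetical `$HOME/.azure_public_cidrs.env` might look like this (the addresses are placeholders, not values you should use):

```bash
TF_VAR_public_cidrs=["123.45.67.89/32", "98.76.54.32/32"]
```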
Now each time you invoke the container, specify the file with the [`--env-file`](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file) option to pass Azure credentials to the container.
### Docker Volume Mounts
To mount the current working directory, add the following argument to the docker run command:
`--volume="$(pwd)":/viya4-iac-azure`
Note that the project must be mounted to the `/viya4-iac-azure` directory.
## Command-Line Arguments
## Running Terratest Commands
### Running the Plan Tests
To run the suite of unit tests (only `terraform plan`), run the following Docker command:
```bash
# Run from the ./viya4-iac-azure directory
docker run --rm \
--env-file=$HOME/.azure_docker_creds.env \
--volume "$(pwd)":/viya4-iac-azure \
viya4-iac-azure-terratest
```
### Running the Apply Tests
68
+
69
+
To run the suite of integration tests (only `terraform apply`), run the following Docker command:
70
+
71
+
```bash
# Run from the ./viya4-iac-azure directory
docker run --rm \
--env-file=$HOME/.azure_docker_creds.env \
--env-file=$HOME/.azure_public_cidrs.env \
--volume "$(pwd)":/viya4-iac-azure \
viya4-iac-azure-terratest \
-r=".*Apply.*"
```
### Running a Specific Go Test
To run a specific test, run the following Docker command with the `-r` option:
```bash
# Run from the ./viya4-iac-azure directory
docker run --rm \
--env-file=$HOME/.azure_docker_creds.env \
# env file needed only for integration tests
--env-file=$HOME/.azure_public_cidrs.env \
--volume "$(pwd)":/viya4-iac-azure \
viya4-iac-azure-terratest \
-r="YourTest"
```
To run multiple tests, pass a regex to the `-r` option, for example `-r="TestName1|TestName2|TestName3"`.
#### Running a Specific Integration Go Test
To run a specific integration test, modify the main test runner function (e.g., `YourIntegrationTestMainFunction`) to define the test name you want, and run the following Docker command with the `-r` option:
```bash
# Run from the ./viya4-iac-azure directory
docker run --rm \
--env-file=$HOME/.azure_docker_creds.env \
--env-file=$HOME/.azure_public_cidrs.env \
--volume "$(pwd)":/viya4-iac-azure \
viya4-iac-azure-terratest \
-r="YourIntegrationTestMainFunction"
```
### Running a Specific Go Package and Test
If you want to specify the Go package and test name, run the following Docker command with the following options:
```bash
# Run from the ./viya4-iac-azure directory
docker run --rm \
--env-file=$HOME/.azure_docker_creds.env \
--volume "$(pwd)":/viya4-iac-azure \
viya4-iac-azure-terratest \
-r="YourTest" \
-p="YourPackage"
```
#### Running a Specific Integration Go Package and Test
To run a specific integration Go package and test name, modify the main test runner function in the desired package to define the test name you want, and run the following Docker command with the following options:
```bash
# Run from the ./viya4-iac-azure directory
docker run --rm \
--env-file=$HOME/.azure_docker_creds.env \
--env-file=$HOME/.azure_public_cidrs.env \
--volume "$(pwd)":/viya4-iac-azure \
viya4-iac-azure-terratest \
-r="YourIntegrationTestMainFunction" \
-p="YourPackage"
```
### Running the Go Tests in Verbose Mode
If you want to run the tests in verbose mode, run the Docker command with the `-v` option:
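The `-v` option can be combined with any of the commands above; for example, a sketch following the same pattern as the earlier commands:

```bash
# Run from the ./viya4-iac-azure directory
docker run --rm \
--env-file=$HOME/.azure_docker_creds.env \
--volume "$(pwd)":/viya4-iac-azure \
viya4-iac-azure-terratest \
-r="YourTest" \
-v
```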
docs/user/TestingPhilosophy.md
The unit tests are written as [table-driven tests](https://go.dev/wiki/TableDrivenTests) so that they are easier to read, understand, and expand. The tests are divided into two packages, [defaultplan](../../test/defaultplan) and [nondefaultplan](../../test/nondefaultplan).
The test package defaultplan validates the default values of a `terraform plan`. This testing ensures that there are no regressions in the default behavior of the code base. The test package nondefaultplan modifies the input values before running the Terraform plan. After generating the plan file, the test verifies that it contains the expected values. Both sets of tests are written to be table-driven.
To see an example, look at the `TestPlanStorage` function in the defaultplan/storage_test.go file that is shown below.
### Integration Testing
The integration tests are designed to thoroughly verify the code base using `terraform apply`. The tests are intended to validate that the cloud provider creates the expected resources. Unlike the unit tests, these tests provision resources through the cloud provider. Careful consideration is required to avoid unnecessary infrastructure costs. The integration test framework is designed to optimize resource utilization and reduce associated costs by enabling multiple test cases to run against a single provisioned resource group, provided the test cases are compatible with the resource’s configuration and state.
### Integration Testing Structure
The integration tests are also written as [table-driven tests](https://go.dev/wiki/TableDrivenTests) so that they are easier to read, understand, and expand. The tests are divided into two packages, [defaultapply](../../test/defaultapply) and [nondefaultapply](../../test/nondefaultapply).
67
+
68
+
The test package defaultapply validates that the provisioned resources match the default configuration values. The test package nondefaultapply validates that, when given non-default input configuration values, the provisioned resources match the input configuration values. This level of integration testing ensures the cloud provider is correctly creating the resources via Terraform.
69
+
70
+
### Resource Management
As running `terraform apply` provisions infrastructure, it inherently incurs costs. To manage and minimize these expenses, it is essential that our testing framework optimizes resource utilization and ensures proper teardown and cleanup of any infrastructure created during testing.
To support this, we have implemented main function test runners for our integration tests that handle the setup of the testing environment by provisioning resources based on the provided Terraform options. These runners also include deferred cleanup routines that automatically decommission resources once tests are completed.
We encourage developers contributing integration tests to be mindful of resource usage. Add your tests to the defaultapply suite if no configuration changes are needed. If you are testing non-default options, modify the nondefaultapply suite, as long as the new options do not conflict with the existing overrides. If the existing packages do not fit your testing needs, add a new non-default apply package, test runner, and test suite for your unique option configuration.
### Error Handling
Terratest provides some flexibility in how to [handle errors](https://terratest.gruntwork.io/docs/testing-best-practices/error-handling/). Every method in Terratest comes in two versions (e.g., `terraform.Apply` and `terraform.ApplyE`):
* `terraform.Apply`: The base method takes a `t *testing.T` as an argument. If the method hits any errors, it calls `t.Fatal` to fail the test.
* `terraform.ApplyE`: Methods that end with the capital letter `E` always return an error as the last argument and never call `t.Fatal` themselves. This allows you to decide how to handle errors.
We recommend using the capital letter `E` version of Terratest methods. This allows the test to handle the error rather than immediately calling `t.Fatal`, which causes the test run to exit, preventing other tests from running and stopping the deferred cleanup routine from executing.
Here's an example of how we handle terratest method calls:
To create an integration test, you can add a new test file with your table tests to the appropriate package and update the desired main function test runner to call and run your test. If you don't see a main function test runner that fits your needs, you are welcome to create a new package, main function test runner, and test suite in a similar format.
Below is an example of a possible structure for the new package, main function test runner, and test suite:
```
.
└── test/
    └── nondefaultapply/
        └── nondefaultapplycustomconfig/
            ├── non_default_apply_custom_config_main_test.go
            └── test_custom_config.go
```
## How to Run the Tests Locally
Before changes can be merged, all unit tests must pass as part of the SAS CI/CD process. Unit tests are automatically run against every PR using the [Dockerfile.terratest](../../Dockerfile.terratest) Docker image. Refer to [TerratestDockerUsage.md](./TerratestDockerUsage.md) document for more information about running the tests locally.