
Commit ad9a981

Merge pull request #12 from data-platform-hq/secret-scope-resources
feat: custom secret scope
2 parents 608d1fc + 462d5a4

File tree

7 files changed: +176 −88 lines changed

README.md

Lines changed: 13 additions & 10 deletions
@@ -10,14 +10,14 @@ Terraform module used for Databricks Workspace configuration and Resources creation
 | ---------------------------------------------------------------------------- | --------- |
 | <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.0.0 |
 | <a name="requirement_azurerm"></a> [azurerm](#requirement\_azurerm) | >= 3.40.0 |
-| <a name="requirement_databricks"></a> [databricks](#requirement\_databricks) | >= 1.8.0 |
+| <a name="requirement_databricks"></a> [databricks](#requirement\_databricks) | >= 1.9.2 |
 
 ## Providers
 
 | Name | Version |
 | ---------------------------------------------------------------------- | ------- |
 | <a name="provider_azurerm"></a> [azurerm](#provider\_azurerm) | 3.40.0 |
-| <a name="provider_databricks"></a> [databricks](#provider\_databricks) | 1.8.0 |
+| <a name="provider_databricks"></a> [databricks](#provider\_databricks) | 1.9.2 |
 
 ## Modules
 
@@ -36,6 +36,8 @@ No modules.
 | [databricks_cluster_policy.this](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/cluster_policy) | resource |
 | [databricks_cluster.this](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/cluster) | resource |
 | [databricks_mount.adls](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/mount) | resource |
+| [databricks_secret_scope.main](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/secret_scope) | resource |
+| [databricks_secret.main](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/secret) | resource |
 | [databricks_secret_scope.this](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/secret_scope) | resource |
 | [databricks_secret.this](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/secret) | resource |
 
@@ -50,24 +52,24 @@ No modules.
 | <a name="input_key_vault_id"></a> [key\_vault\_id](#input\_key\_vault\_id) | ID of the Key Vault instance where the Secret resides | `string` | n/a | yes |
 | <a name="input_sku"></a> [sku](#input\_sku) | The sku to use for the Databricks Workspace: [standard \ premium \ trial] | `string` | "standard" | no |
 | <a name="input_pat_token_lifetime_seconds"></a> [pat\_token\_lifetime\_seconds](#input\_pat\_token\_lifetime\_seconds) | The lifetime of the token, in seconds. If no lifetime is specified, the token remains valid indefinitely | `number` | 315569520 | no |
+| <a name="input_users"></a> [users](#input\_users)| List of users to access Databricks | `list(string)` | [] | no |
+| <a name="input_permissions"></a> [permissions](#input\_permissions)| Databricks Workspace permission maps | `list(map(string))` | <pre> [{ <br> object_id = null <br> role = null <br> }] </pre> | no |
+| <a name="input_custom_cluster_policies"></a> [custom\_cluster\_policies](#input\_custom\_cluster\_policies) | Provides an ability to create custom cluster policy, assign it to cluster and grant CAN_USE permissions on it to certain custom groups | <pre>list(object({<br> name = string<br> can_use = list(string)<br> definition = any<br> assigned = bool<br>}))</pre> | <pre>[{<br> name = null<br> can_use = null<br> definition = null<br> assigned = false<br>}]</pre> | no |
 | <a name="input_cluster_nodes_availability"></a> [cluster\_nodes\_availability](#input\_cluster\_nodes\_availability) | Availability type used for all subsequent nodes past the first_on_demand ones: [SPOT_AZURE \ SPOT_WITH_FALLBACK_AZURE \ ON_DEMAND_AZURE] | `string` | null | no |
 | <a name="input_first_on_demand"></a> [first\_on\_demand](#input\_first\_on\_demand)| The first first_on_demand nodes of the cluster will be placed on on-demand instances: [[ \:number ]] | `number` | 0 | no |
 | <a name="input_spot_bid_max_price"></a> [spot\_bid\_max\_price](#input\_spot\_bid\_max\_price) | The max price for Azure spot instances. Use -1 to specify lowest price | `number` | -1 | no |
 | <a name="input_autotermination_minutes"></a> [autotermination\_minutes](#input\_autotermination\_minutes) | Automatically terminate the cluster after being inactive for this time in minutes. If not set, Databricks won't automatically terminate an inactive cluster. If specified, the threshold must be between 10 and 10000 minutes. You can also set this value to 0 to explicitly disable automatic termination | `number`| 15 | no |
 | <a name="input_min_workers"></a> [min\_workers](#input\_min\_workers)| The minimum number of workers to which the cluster can scale down when underutilized. It is also the initial number of workers the cluster will have after creation | `number` | 1 | no |
 | <a name="input_max_workers"></a> [max\_workers](#input\_max\_workers) | The maximum number of workers to which the cluster can scale up when overloaded. max_workers must be strictly greater than min_workers | `number` | 2 | no |
-| <a name="input_users"></a> [users](#input\_users)| List of users to access Databricks | `list(string)` | [] | no |
-| <a name="input_secrets"></a> [secrets](#input\_secrets) | Map of secrets to create in Databricks | `map(any)`| {} | no |
-| <a name="input_use_local_secret_scope"></a> [use\_local\_secret\_scope](#input\_use\_local\_secret\_scope) | Create databricks secret scope and create secrets | `bool` | false | no |
-| <a name="input_permissions"></a> [permissions](#input\_permissions)| Databricks Workspace permission maps | `list(map(string))` | <pre> [{ <br> object_id = null <br> role = null <br> }] </pre> | no |
-| <a name="input_custom_cluster_policies"></a> [custom\_cluster\_policies](#input\_custom\_cluster\_policies) | Provides an ability to create custom cluster policy, assign it to cluster and grant CAN_USE permissions on it to certain custom groups | <pre>list(object({<br> name = string<br> can_use = list(string)<br> definition = any<br> assigned = bool<br>}))</pre> | <pre>[{<br> name = null<br> can_use = null<br> definition = null<br> assigned = false<br>}]</pre> | no |
-| <a name="input_data_security_mode"></a> [data\_security\_mode](#input\_data\_security\_mode) | Security features of the cluster | `string` | "NONE" | no |
+| <a name="input_data_security_mode"></a> [data\_security\_mode](#input\_data\_security\_mode) | Security features of the cluster | `string` | "NONE" | no |
 | <a name="input_spark_version"></a> [spark\_version](#input\_spark\_version) | Runtime version | `string` | "11.3.x-scala2.12" | no |
 | <a name="input_spark_conf"></a> [spark\_conf](#input\_spark\_conf)| Map with key-value pairs to fine-tune Spark clusters, where you can provide custom Spark configuration properties in a cluster configuration. | `map(any)` | {} | no |
 | <a name="input_spark_env_vars"></a> [spark_env_vars](#input\_spark_env_vars)| Map with environment variable key-value pairs to fine-tune Spark clusters. Key-value pairs of the form (X,Y) are exported (i.e., X='Y') while launching the driver and workers.| `map(any)`| {} | no |
-| <a name="input_cluster_log_conf_destination"></a> [cluster\_log\_conf\_destination](#input\_cluster\_log\_conf\_destination) | Provide a dbfs location, example 'dbfs:/cluster-logs', to push all cluster logs to certain location | `string` | "" | no |
-| <a name="input_node_type"></a> [spark\_node\_type](#input\_node\_type)| Databricks_node_type id | `string` | "Standard_D3_v2" | no |
+| <a name="input_cluster_log_conf_destination"></a> [cluster\_log\_conf\_destination](#input\_cluster\_log\_conf\_destination) | Provide a dbfs location, example 'dbfs:/cluster-logs', to push all cluster logs to certain location | `string` | " " | no |
+| <a name="input_node_type"></a> [node\_type](#input\_node\_type)| Databricks_node_type id | `string` | "Standard_D3_v2" | no |
 | <a name="input_mountpoints"></a> [mountpoints](#input\_mountpoints) | Mountpoints for databricks | `map(any)`| null | no |
+| <a name="input_secret_scope"></a> [secret\_scope](#input\_secret\_scope) | Provides an ability to create custom Secret Scope, store secrets in it and assigning ACL for access management | <pre>list(object({<br> scope_name = string<br> acl = optional(list(object({<br> principal = string<br> permission = string<br> secrets = optional(list(object({<br> key = string<br> string_value = string<br>})))<br></pre> | <pre>default = [{<br> scope_name = null<br> acl = null<br> can_use = null<br> secrets = null<br>}]</pre> | no |
+
 
 
 ## Outputs
@@ -77,6 +79,7 @@ No modules.
 | <a name="output_token"></a> [token](#output\_token) | Databricks Personal Authorization Token |
 | <a name="output_cluster_id"></a> [cluster\_id](#output\_cluster\_id) | Databricks Cluster Id |
 | <a name="output_cluster_policies_object"></a> [cluster\_policies\_object](#output\_cluster\_policies\_object) | Databricks Cluster Policies object map |
+| <a name="output_secret_scope_object"></a> [secret_scope\_object](#output\_secret_scope\_object) | Databricks-managed Secret Scope object map to create ACLs |
 <!-- END_TF_DOCS -->
 
 ## License
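
For orientation, a minimal sketch of how a caller might wire up the new secret_scope input. The module source address, scope name, group name, and secret values below are illustrative assumptions, not taken from this commit:

module "databricks" {
  source = "data-platform-hq/databricks/azurerm" # hypothetical module address

  # required inputs (workspace_id, key_vault_id, sp_* secret names, ...) omitted

  secret_scope = [{
    scope_name = "demo-scope"
    acl = [{
      principal  = "data-engineers" # an existing Databricks group, assumed
      permission = "READ"
    }]
    secrets = [{
      key          = "db-password"
      string_value = "example-value" # source from a real secret store in practice
    }]
  }]
}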

main.tf

Lines changed: 3 additions & 10 deletions
@@ -13,28 +13,21 @@ data "azurerm_key_vault_secret" "tenant_id" {
   key_vault_id = var.key_vault_id
 }
 
-locals {
-  secrets = merge(var.secrets, {
-    (var.sp_client_id_secret_name) = { value = data.azurerm_key_vault_secret.sp_client_id.value }
-    (var.sp_key_secret_name)       = { value = data.azurerm_key_vault_secret.sp_key.value }
-  })
-}
-
 resource "databricks_token" "pat" {
   comment          = "Terraform Provisioning"
   lifetime_seconds = var.pat_token_lifetime_seconds
 }
 
 resource "databricks_user" "this" {
-  for_each  = var.sku == "standard" ? toset(var.users) : []
+  for_each  = var.sku == "premium" ? [] : toset(var.users)
   user_name = each.value
   lifecycle { ignore_changes = [external_id] }
 }
 
 resource "azurerm_role_assignment" "this" {
   for_each = {
-    for permision in var.permissions : "${permision.object_id}-${permision.role}" => permision
-    if permision.role != null
+    for permission in var.permissions : "${permission.object_id}-${permission.role}" => permission
+    if permission.role != null
   }
   scope                = var.workspace_id
   role_definition_name = each.value.role
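
The renamed permission iterator builds one role assignment per object_id/role pair; a sketch of the permissions value this for_each expects (the object ID is a placeholder):

permissions = [
  { object_id = "00000000-0000-0000-0000-000000000001", role = "Contributor" },
  { object_id = null, role = null }, # dropped by `if permission.role != null`
]

This yields a single azurerm_role_assignment keyed "00000000-0000-0000-0000-000000000001-Contributor"; entries with a null role are skipped.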

mount.tf

Lines changed: 2 additions & 7 deletions
@@ -1,8 +1,3 @@
-locals {
-  secret_scope_name = var.use_local_secret_scope ? databricks_secret_scope.this[0].name : "main"
-  mount_secret_name = var.use_local_secret_scope ? databricks_secret.this[var.sp_key_secret_name].config_reference : "{{secrets/${local.secret_scope_name}/${data.azurerm_key_vault_secret.sp_key.name}}}"
-}
-
 resource "databricks_mount" "adls" {
   for_each = var.mountpoints
 
@@ -12,10 +7,10 @@ resource "databricks_mount" "adls" {
     "fs.azure.account.auth.type" : "OAuth",
     "fs.azure.account.oauth.provider.type" : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
     "fs.azure.account.oauth2.client.id" : data.azurerm_key_vault_secret.sp_client_id.value,
-    "fs.azure.account.oauth2.client.secret" : local.mount_secret_name,
+    "fs.azure.account.oauth2.client.secret" : databricks_secret.main[data.azurerm_key_vault_secret.sp_key.name].config_reference,
     "fs.azure.account.oauth2.client.endpoint" : "https://login.microsoftonline.com/${data.azurerm_key_vault_secret.tenant_id.value}/oauth2/token",
     "fs.azure.createRemoteFileSystemDuringInitialization" : "false",
     "spark.databricks.sqldw.jdbc.service.principal.client.id" : data.azurerm_key_vault_secret.sp_client_id.value,
-    "spark.databricks.sqldw.jdbc.service.principal.client.secret" : local.mount_secret_name,
+    "spark.databricks.sqldw.jdbc.service.principal.client.secret" : databricks_secret.main[data.azurerm_key_vault_secret.sp_key.name].config_reference
   }
 }
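
With the locals removed, the mount always resolves the service-principal key through databricks_secret.main, whose config_reference attribute renders as "{{secrets/main/<key>}}", so the secret value itself never lands in cluster configuration. For illustration only, a hypothetical mountpoints value this resource could iterate over (the exact object shape is defined in the module's variables.tf, which is not shown in this diff):

mountpoints = {
  raw = {
    storage_account_name = "examplestorage" # placeholder
    container            = "raw"
  }
}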

outputs.tf

Lines changed: 8 additions & 0 deletions
@@ -16,3 +16,11 @@ output "cluster_policies_object" {
   } if policy.definition != null]
   description = "Databricks Cluster Policies object map"
 }
+
+output "secret_scope_object" {
+  value = [for param in var.secret_scope : {
+    scope_name = databricks_secret_scope.this[param.scope_name].name
+    acl        = param.acl
+  } if param.acl != null]
+  description = "Databricks-managed Secret Scope object map to create ACLs"
+}
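
The output is shaped so a downstream configuration can attach the ACLs; a minimal consumer sketch, assuming the module is instantiated as module.databricks (the flattening and resource naming here are not part of this commit):

locals {
  scope_acls = flatten([
    for obj in module.databricks.secret_scope_object : [
      for rule in obj.acl : {
        scope      = obj.scope_name
        principal  = rule.principal
        permission = rule.permission
      }
    ]
  ])
}

resource "databricks_secret_acl" "this" {
  for_each = { for acl in local.scope_acls : "${acl.scope}-${acl.principal}" => acl }

  scope      = each.value.scope
  principal  = each.value.principal
  permission = each.value.permission
}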

secrets.tf

Lines changed: 65 additions & 7 deletions
@@ -1,16 +1,74 @@
-resource "databricks_secret_scope" "this" {
-  count = var.use_local_secret_scope ? 1 : 0
+locals {
+  sp_secrets = {
+    (var.sp_client_id_secret_name) = { value = data.azurerm_key_vault_secret.sp_client_id.value }
+    (var.sp_key_secret_name)       = { value = data.azurerm_key_vault_secret.sp_key.value }
+  }
+
+  secrets_objects_list = flatten([for param in var.secret_scope : [
+    for secret in param.secrets : {
+      scope_name = param.scope_name, key = secret.key, string_value = secret.string_value
+    }] if param.secrets != null
+  ])
+}
 
+# Secret Scope with SP secrets for mounting Azure Data Lake Storage
+resource "databricks_secret_scope" "main" {
   name                     = "main"
-  initial_manage_principal = "users"
+  initial_manage_principal = var.sku == "premium" ? null : "users"
 }
 
-resource "databricks_secret" "this" {
-  for_each = var.use_local_secret_scope ? local.secrets : {}
+resource "databricks_secret" "main" {
+  for_each = local.sp_secrets
 
   key          = each.key
   string_value = each.value["value"]
-  scope        = databricks_secret_scope.this[0].id
+  scope        = databricks_secret_scope.main.id
+}
+
+# Custom additional Databricks Secret Scope
+resource "databricks_secret_scope" "this" {
+  for_each = {
+    for param in var.secret_scope : (param.scope_name) => param
+    if param.scope_name != null
+  }
 
-  depends_on = [databricks_secret_scope.this]
+  name                     = each.key
+  initial_manage_principal = var.sku == "premium" ? null : "users"
 }
+
+resource "databricks_secret" "this" {
+  for_each = { for entry in local.secrets_objects_list : "${entry.scope_name}.${entry.key}" => entry }
+
+  key          = each.value.key
+  string_value = each.value.string_value
+  scope        = databricks_secret_scope.this[each.value.scope_name].id
+}
+
+# In the near future, Azure will allow acquiring AAD tokens by service principals,
+# thus providing an ability to create an Azure-backed Key Vault scope with Terraform:
+# https://github.com/databricks/terraform-provider-databricks/pull/1965
+
+## Azure Key Vault-backed Scope
+#resource "azurerm_key_vault_access_policy" "databricks" {
+#  count = var.key_vault_secret_scope.key_vault_id != null ? 1 : 0
+
+#  key_vault_id = var.key_vault_secret_scope.key_vault_id
+#  object_id    = "9b38785a-6e08-4087-a0c4-20634343f21f" # Global 'AzureDatabricks' SP object id
+#  tenant_id    = data.azurerm_key_vault_secret.tenant_id.value
+#
+#  secret_permissions = [
+#    "Get",
+#    "List",
+#  ]
+#}
+#
+#resource "databricks_secret_scope" "external" {
+#  count = var.key_vault_secret_scope.key_vault_id != null ? 1 : 0
+#
+#  name = "external"
+#  keyvault_metadata {
+#    resource_id = var.key_vault_secret_scope.key_vault_id
+#    dns_name    = var.key_vault_secret_scope.dns_name
+#  }
+#  depends_on = [azurerm_key_vault_access_policy.databricks]
+#}
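
To make the flatten step concrete, a worked example with illustrative values: one entry in var.secret_scope holding two secrets produces a flat list, which the for_each then keys as "<scope_name>.<key>":

# var.secret_scope = [{
#   scope_name = "demo-scope"
#   acl        = null
#   secrets    = [
#     { key = "user", string_value = "svc-reader" },
#     { key = "pass", string_value = "example" },
#   ]
# }]
#
# local.secrets_objects_list = [
#   { scope_name = "demo-scope", key = "user", string_value = "svc-reader" },
#   { scope_name = "demo-scope", key = "pass", string_value = "example" },
# ]
#
# databricks_secret.this instances are keyed "demo-scope.user" and "demo-scope.pass"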
