Summary
This RFC proposes adding support for for_each and count meta-arguments to dependency blocks, stack blocks, and unit blocks in Terragrunt configurations. This feature would allow users to dynamically create multiple instances of these blocks based on iterable data (sets, map of strings), similar to how Terraform handles resource iteration.
Motivation
Currently, Terragrunt users must manually duplicate dependency, stack, and unit blocks when they need multiple similar configurations. This leads to:
- Code duplication - Repeated blocks with minor variations
- Maintenance overhead - Changes must be applied to multiple blocks
- Error-prone configuration - Inconsistencies between duplicated blocks
- Scalability issues - Managing dozens or hundreds of similar blocks becomes unwieldy
The problem becomes particularly acute in enterprise environments where:
- Multiple services require similar infrastructure patterns (databases, caches, etc.)
- Cross-service dependencies require connections to multiple similar resources
- Infrastructure must scale across multiple environments, regions, or teams
- Configuration drift occurs when manually maintaining dozens of similar blocks
Current Pain Points: Examples
The real-world applicability or validity of the following scenarios is not meant to be scrutinized; the examples are provided only to highlight possible scenarios where `count` and `for_each` could provide benefit.
Dependencies
Consider a scenario where you're deploying an ECS service that runs database migrations across multiple Aurora clusters. Each service has its own Aurora cluster with specific connection details and credentials that the migration service needs to access:
# Manual duplication for each service's Aurora cluster
dependency "aurora-web" {
config_path = "../services/web/aurora"
mock_outputs = {
cluster_endpoint = "web-aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com"
cluster_reader_endpoint = "web-aurora-cluster.cluster-ro-xyz.us-east-1.rds.amazonaws.com"
port = 5432
database_name = "web_production"
master_username = "web_admin"
security_group_id = "sg-web-aurora-12345"
}
}
dependency "aurora-api" {
config_path = "../services/api/aurora"
mock_outputs = {
cluster_endpoint = "api-aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com"
cluster_reader_endpoint = "api-aurora-cluster.cluster-ro-xyz.us-east-1.rds.amazonaws.com"
port = 5432
database_name = "api_production"
master_username = "api_admin"
security_group_id = "sg-api-aurora-67890"
}
}
dependency "aurora-worker" {
config_path = "../services/worker/aurora"
mock_outputs = {
cluster_endpoint = "worker-aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com"
cluster_reader_endpoint = "worker-aurora-cluster.cluster-ro-xyz.us-east-1.rds.amazonaws.com"
port = 5432
database_name = "worker_production"
master_username = "worker_admin"
security_group_id = "sg-worker-aurora-11111"
}
}
dependency "aurora-analytics" {
config_path = "../services/analytics/aurora"
mock_outputs = {
cluster_endpoint = "analytics-aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com"
cluster_reader_endpoint = "analytics-aurora-cluster.cluster-ro-xyz.us-east-1.rds.amazonaws.com"
port = 5432
database_name = "analytics_production"
master_username = "analytics_admin"
security_group_id = "sg-analytics-aurora-22222"
}
}
dependency "secrets-web" {
config_path = "../services/web/secrets"
mock_outputs = {
db_credentials_secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:web-db-credentials-AbCdEf"
migration_role_arn = "arn:aws:iam::123456789012:role/web-migration-role"
}
}
dependency "secrets-api" {
config_path = "../services/api/secrets"
mock_outputs = {
db_credentials_secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:api-db-credentials-GhIjKl"
migration_role_arn = "arn:aws:iam::123456789012:role/api-migration-role"
}
}
dependency "secrets-worker" {
config_path = "../services/worker/secrets"
mock_outputs = {
db_credentials_secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:worker-db-credentials-MnOpQr"
migration_role_arn = "arn:aws:iam::123456789012:role/worker-migration-role"
}
}
dependency "secrets-analytics" {
config_path = "../services/analytics/secrets"
mock_outputs = {
db_credentials_secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:analytics-db-credentials-StUvWx"
migration_role_arn = "arn:aws:iam::123456789012:role/analytics-migration-role"
}
}
# The inputs block becomes equally repetitive and error-prone
terraform {
source = "./modules/ecs-db-migration-service"
}
inputs = {
# Database connection details - manual duplication for each service
database_connections = {
web = {
endpoint = dependency.aurora-web.outputs.cluster_endpoint
reader_endpoint = dependency.aurora-web.outputs.cluster_reader_endpoint
port = dependency.aurora-web.outputs.port
database_name = dependency.aurora-web.outputs.database_name
username = dependency.aurora-web.outputs.master_username
}
api = {
endpoint = dependency.aurora-api.outputs.cluster_endpoint
reader_endpoint = dependency.aurora-api.outputs.cluster_reader_endpoint
port = dependency.aurora-api.outputs.port
database_name = dependency.aurora-api.outputs.database_name
username = dependency.aurora-api.outputs.master_username
}
worker = {
endpoint = dependency.aurora-worker.outputs.cluster_endpoint
reader_endpoint = dependency.aurora-worker.outputs.cluster_reader_endpoint
port = dependency.aurora-worker.outputs.port
database_name = dependency.aurora-worker.outputs.database_name
username = dependency.aurora-worker.outputs.master_username
}
analytics = {
endpoint = dependency.aurora-analytics.outputs.cluster_endpoint
reader_endpoint = dependency.aurora-analytics.outputs.cluster_reader_endpoint
port = dependency.aurora-analytics.outputs.port
database_name = dependency.aurora-analytics.outputs.database_name
username = dependency.aurora-analytics.outputs.master_username
}
}
# Secrets Manager ARNs - more manual duplication
database_secrets = {
web = {
credentials_secret_arn = dependency.secrets-web.outputs.db_credentials_secret_arn
migration_role_arn = dependency.secrets-web.outputs.migration_role_arn
}
api = {
credentials_secret_arn = dependency.secrets-api.outputs.db_credentials_secret_arn
migration_role_arn = dependency.secrets-api.outputs.migration_role_arn
}
worker = {
credentials_secret_arn = dependency.secrets-worker.outputs.db_credentials_secret_arn
migration_role_arn = dependency.secrets-worker.outputs.migration_role_arn
}
analytics = {
credentials_secret_arn = dependency.secrets-analytics.outputs.db_credentials_secret_arn
migration_role_arn = dependency.secrets-analytics.outputs.migration_role_arn
}
}
# Security groups - manual list construction
database_security_groups = [
dependency.aurora-web.outputs.security_group_id,
dependency.aurora-api.outputs.security_group_id,
dependency.aurora-worker.outputs.security_group_id,
dependency.aurora-analytics.outputs.security_group_id
]
}
Stacks and Units
Similarly, for stack and unit blocks, you end up with repetitive configurations. Building on the same service architecture, consider deploying Aurora database clusters for each of these services within a production environment, where each Aurora cluster is relatively similar, but each service might have different performance and storage requirements:
# Manual duplication for each service
unit "aurora-web" {
name = "aurora-web"
source = "./modules/aurora"
path = "aurora/web"
values = {
cluster_identifier = "web-prod"
environment = "prod"
instance_class = "db.r6g.large"
instance_count = 3
allocated_storage = 1000
database_name = "web_production"
}
}
unit "aurora-api" {
name = "aurora-api"
source = "./modules/aurora"
path = "aurora/api"
values = {
cluster_identifier = "api-prod"
environment = "prod"
instance_class = "db.r6g.large"
instance_count = 2
allocated_storage = 500
database_name = "api_production"
}
}
unit "aurora-worker" {
name = "aurora-worker"
source = "./modules/aurora"
path = "aurora/worker"
values = {
cluster_identifier = "worker-prod"
environment = "prod"
instance_class = "db.r6g.large"
instance_count = 1
allocated_storage = 200
database_name = "worker_production"
}
}
unit "aurora-analytics" {
name = "aurora-analytics"
source = "./modules/aurora"
path = "aurora/analytics"
values = {
cluster_identifier = "analytics-prod"
environment = "prod"
instance_class = "db.r6g.large"
instance_count = 2
allocated_storage = 2000
database_name = "analytics_production"
}
}
This results in significant duplication: 8 dependency blocks + 4 unit blocks = 12 blocks of nearly identical configuration that must be maintained manually, plus a complex inputs block. The inputs block alone contains 20+ lines of repetitive dependency references that are prone to copy-paste errors.
Proposal
This RFC proposes adding support for `for_each` and `count` meta-arguments to dependency, stack, and unit blocks in Terragrunt configurations. This feature would allow users to dynamically create multiple instances of these blocks based on iterable data (sets, maps of strings), similar to how Terraform handles resource iteration.
Solution Examples
With this RFC, the 12 blocks above could be simplified to just 3 blocks:
Dependencies (using for_each)
# Proposed approach - dynamic iteration for dependencies
locals {
services = toset(["web", "api", "worker", "analytics"])
}
dependency "aurora" {
for_each = local.services
config_path = "../services/${each.value}/aurora"
mock_outputs = {
cluster_endpoint = "${each.value}-aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com"
cluster_reader_endpoint = "${each.value}-aurora-cluster.cluster-ro-xyz.us-east-1.rds.amazonaws.com"
port = 5432
database_name = "${each.value}_production"
master_username = "${each.value}_admin"
security_group_id = "sg-${each.value}-aurora-12345"
}
}
dependency "secrets" {
for_each = local.services
config_path = "../services/${each.value}/secrets"
mock_outputs = {
db_credentials_secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:${each.value}-db-credentials-AbCdEf"
migration_role_arn = "arn:aws:iam::123456789012:role/${each.value}-migration-role"
}
}
# Access dependencies in inputs block - ECS Migration Service needs ALL database connections
terraform {
source = "./modules/ecs-db-migration-service"
}
inputs = {
# Database connection details for all services
database_connections = {
for service in local.services : service => {
endpoint = dependency.aurora[service].outputs.cluster_endpoint
reader_endpoint = dependency.aurora[service].outputs.cluster_reader_endpoint
port = dependency.aurora[service].outputs.port
database_name = dependency.aurora[service].outputs.database_name
username = dependency.aurora[service].outputs.master_username
}
}
# Secrets Manager ARNs for database credentials
database_secrets = {
for service in local.services : service => {
credentials_secret_arn = dependency.secrets[service].outputs.db_credentials_secret_arn
migration_role_arn = dependency.secrets[service].outputs.migration_role_arn
}
}
# Security groups for database access
database_security_groups = [
for service in local.services : dependency.aurora[service].outputs.security_group_id
]
}
Stacks and Units (using for_each)
# Proposed approach - dynamic iteration for stacks and units
locals {
# Service-specific configurations
service_configs = {
web = {
instance_count = 3 # High traffic, needs more instances
storage = "1000" # High storage needs
}
api = {
instance_count = 2 # Moderate traffic
storage = "500" # Moderate storage
}
worker = {
instance_count = 1 # Background processing, lower needs
storage = "200" # Lower storage needs
}
analytics = {
instance_count = 2 # Data processing
storage = "2000" # Large storage for analytics data
}
}
}
unit "aurora" {
for_each = toset(keys(local.service_configs))
name = "aurora-${each.value}"
source = "./modules/aurora"
path = "aurora/${each.value}"
values = {
cluster_identifier = "${each.value}-prod"
environment = "prod"
instance_class = "db.r6g.large"
instance_count = local.service_configs[each.value].instance_count
allocated_storage = local.service_configs[each.value].storage
database_name = "${each.value}_production"
}
}
Stacks and Units Example (using count)
For simpler cases where services follow a pattern, you can use `count`:
# Alternative approach - using count for stacks and units
locals {
services = ["web", "api", "worker", "analytics"]
# Different instance requirements per service
instance_counts = [3, 2, 1, 2]
# Different storage requirements per service (in GB)
storage_sizes = [1000, 500, 200, 2000]
}
unit "aurora" {
count = length(local.services)
name = "aurora-${local.services[count.index]}"
source = "./modules/aurora"
path = "aurora/${local.services[count.index]}"
values = {
cluster_identifier = "${local.services[count.index]}-prod"
environment = "prod"
instance_class = "db.r6g.large"
instance_count = local.instance_counts[count.index]
allocated_storage = local.storage_sizes[count.index]
database_name = "${local.services[count.index]}_production"
}
}
Summary
The proposed `for_each` and `count` meta-arguments reduce the 12 repetitive blocks shown above to just 3 clean blocks, eliminating copy-paste errors and making configuration changes much easier to maintain. Note that these examples focus on single-region, single-environment scenarios; real-world usage across multiple regions and environments with varying configurations and selective service deployment would provide even greater benefits.
Technical Details
The implementation would leverage Terragrunt's existing HCL parsing infrastructure and extend it with an "expandable blocks" system. The following diagram shows the technical processing flow:
flowchart TD
A["HCL File Input<br/>_block ''name''_ {...}"] --> B["Parse Phase<br/>Standard HCL parsing"]
B --> C["Detection Phase<br/>Identify blocks with count/for_each"]
C --> D{"Has expandable<br/>meta-arguments?"}
D -->|No| E["Standard Processing<br/>Single block instance"]
D -->|Yes| F["Validation Phase<br/>Check if block struct supports iteration<br/>Check expression types & values"]
F --> G{"Is block expandable?"}
G -->|No| H["Error: count/for_each not supported"]
G -->|Yes| J{"Is count/for_each expression valid?"}
J -->|No| K["Error: Invalid count/for_each expression"]
J --> |Yes| L["Expansion Phase<br/>Create multiple block instances"]
L --> M["Context Phase<br/>Inject count.index, each.key, each.value"]
M --> N["Integration Phase<br/>Replace original with expanded blocks"]
N --> O["Configuration Output<br/>Multiple dependency/unit/stack instances"]
E --> O
The bulk of the changes would reside with the HCL Parser. While this implementation targets dependency, stack, and unit blocks, it would be flexible enough to be applied to other blocks in the future.
Core Components
The core components are:
1. HCL Parser Extension (`config/hclparse/file.go`)
The implementation extends the HCL parser, specifically the `Decode` function, to detect and handle expandable blocks:
- Block Detection: Identifies blocks that have `count` or `for_each` attributes
- Expression Validation: Validates `count` expressions (must be numbers ≥ 0) and `for_each` expressions (must be maps or sets); examples are sketched below
- Block Expansion: Creates multiple block instances based on the iteration logic
- Context Injection: Injects `count.index`, `each.key`, and `each.value` into the evaluation context
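As a minimal sketch of how these validation rules would surface in user configuration (block names and paths here are purely illustrative, and the exact error behavior is not final):
# Valid: count is a non-negative whole number
dependency "replica" {
  count       = 2
  config_path = "../replica-${count.index}"
}

# Valid: for_each is a set (or map) of strings
dependency "region_vpc" {
  for_each    = toset(["us-east-1", "us-west-2"])
  config_path = "../vpc-${each.value}"
}

# Invalid: count must be a number >= 0
dependency "bad_count" {
  count       = -1
  config_path = "../whatever"
}

# Invalid: for_each must be a map or set, not a single string
dependency "bad_for_each" {
  for_each    = "us-east-1"
  config_path = "../vpc"
}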
2. Struct Field Extensions
The relevant structs have been extended with metadata fields:
type Dependency struct {
// ... existing fields ...
Count *cty.Value `hcl:"count,attr" cty:"count"`
ForEach *cty.Value `hcl:"for_each,attr" cty:"for_each"`
CountIndex *int `cty:"index"`
EachKey *string `cty:"key"`
}
type Unit struct {
// ... existing fields ...
Count *cty.Value `hcl:"count,attr" cty:"count"`
ForEach *cty.Value `hcl:"for_each,attr" cty:"for_each"`
CountIndex *int `cty:"index"`
EachKey *string `cty:"key"`
}
type Stack struct {
// ... existing fields ...
Count *cty.Value `hcl:"count,attr" cty:"count"`
ForEach *cty.Value `hcl:"for_each,attr" cty:"for_each"`
CountIndex *int `cty:"index"`
EachKey *string `cty:"key"`
}
`Count` and `ForEach` would be used to read in the iteration expression. `CountIndex` and `EachKey` are not used in expansion, but would be made available after parsing. Currently this would be useful for secondary processing (dependency outputs) and for item identification in logs (name + iteration key). The presence of these attributes on a block struct is what determines whether it is expandable.
3. Processing Logic
The implementation follows this flow:
- Parse Phase: HCL blocks are parsed normally
- Detection Phase: Blocks with `count`/`for_each` are identified
- Validation Phase: Expressions are validated for type and value constraints
- Expansion Phase: Multiple block instances are created
- Context Phase: Each instance gets appropriate iteration context
- Integration Phase: Expanded blocks replace the original in the configuration
4. Output Handling
Dependency outputs can be referenced in inputs. The mechanism that creates the output values map will need to be updated. For this, the current implementation can be extended so that, where expanded dependencies are in use, their outputs are keyed by their iteration values (a consumption sketch follows the example below):
# Input
dependency "vpc" {
for_each = toset(["dev", "staging", "prod"])
config_path = "../vpc-${each.value}"
}
# Output structure
dependency = {
vpc = {
"dev" = { outputs = {...}, inputs = {...} }
"staging" = { outputs = {...}, inputs = {...} }
"prod" = { outputs = {...}, inputs = {...} }
}
}
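For illustration, a minimal sketch of how these keyed outputs could then be consumed in inputs; this follows the keyed access pattern used in the earlier examples (the vpc_id output name is hypothetical):
# Consuming expanded dependency outputs by iteration key
inputs = {
  vpc_ids = {
    for env in ["dev", "staging", "prod"] :
    env => dependency.vpc[env].outputs.vpc_id
  }
}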
Press Release
TBD
Drawbacks
No response
Alternatives
3. External Code Generation
Description: Use external tools (scripts, other programs) to generate Terragrunt configurations.
Example:
# Generate config with external script
./generate-config.sh environments.json > terragrunt.hcl
Pros:
- Maximum flexibility in generation logic
- Can use any programming language
- Clear separation of concerns
Cons:
- Requires additional tooling and maintenance
- Breaks the "everything in HCL" paradigm
- Harder to version control and review
- Additional complexity in CI/CD pipelines
Migration Strategy
The proposed changes should be completely backwards compatible.
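For example, a block that uses neither meta-argument would continue to be parsed and referenced exactly as it is today (output names here are illustrative):
# Existing configuration without count/for_each is unaffected
dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id # single instance, referenced as before
}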
Unresolved Questions
1.) How should block merging be handled for dependencies? Stacks/units are only used for generation and are not referenced in the same way as dependencies, so they won't have the same problem.
Scenarios (multiple dependency blocks with the same name):
- i. Some iterable, some not
- ii. All iterable, some with `count`, some with `for_each`
- iii. All iterable, same iterator type, different sets of expansion keys
Do we disallow i. and allow ii. and iii.? Disallow all? (A sketch of scenario i. follows below.)
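A purely illustrative sketch of scenario i. (hypothetical blocks; in practice the colliding declarations would typically come from a parent configuration merged with a child configuration):
# One declaration of "vpc" is iterable...
dependency "vpc" {
  for_each    = toset(["dev", "prod"])
  config_path = "../vpc-${each.value}"
}

# ...while another declaration with the same name is not. How should these merge?
dependency "vpc" {
  config_path = "../vpc-shared"
}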
2.) Unique constraints for fields on expanded blocks.
Do we want fields like `config_path` for dependencies and `path` for units/stacks to be validated for uniqueness across expanded instances? If so, should this be done in the parser, or somewhere after parsing? (A sketch of a potential collision follows below.)
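For example, two expanded instances could resolve to the same config_path, which is presumably a mistake worth surfacing (hypothetical config):
# Both keys resolve to "../db-shared" -- should this be rejected, and if so, where?
dependency "db" {
  for_each    = tomap({ primary = "shared", replica = "shared" })
  config_path = "../db-${each.value}"
}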
3.) ParseConfig accumulates errors instead of failing out, which leads to duplicate/redundant errors on expansion. Should the logic in ParseConfig be modified to return an error immediately?
4.) How should expanded blocks be represented with `terragrunt render`? Should the name be modified to include the iteration key ("fooname[1]") or kept as-is ("fooname"), as below?
Example:
terragrunt.hcl
dependency "fooname" {
count = 2
config_path = "../second_${count.index}"
mock_outputs = {
barval = count.index
}
mock_outputs_allowed_terraform_commands = tostring(count.index) == "0" ? ["apply", "plan"] : []
}
dependency "barname" {
for_each = tomap({"us-west-2" = 0})
config_path = "../second_${each.value}"
mock_outputs = {
barval = each.value
}
}
dependency "bazname" {
config_path = "../second_1"
mock_outputs = {
barval = "asdf"
}
}
locals {
label = "fooname"
}
terraform {
source = "./"
}
inputs = {
foo = dependency.fooname[0].outputs.barval
bar = dependency.barname["us-west-2"].outputs.barval
baz = dependency.bazname.outputs.barval
}
> terragrunt render
locals {
label = "fooname"
}
terraform {
source = "./"
}
dependency "barname" {
config_path = "../second_0"
mock_outputs = {
barval = 0
}
}
dependency "fooname" {
config_path = "../second_1"
mock_outputs = {
barval = 1
}
mock_outputs_allowed_terraform_commands = []
}
dependency "fooname" {
config_path = "../second_1"
mock_outputs = {
barval = 1
}
mock_outputs_allowed_terraform_commands = []
}
dependency "bazname" {
config_path = "../second_1"
mock_outputs = {
barval = "asdf"
}
}
inputs = {
bar = 0
baz = "asdf"
foo = 0
}
References
Proof of Concept Pull Request
Support Level
- I have Terragrunt Enterprise Support
- I am a paying Gruntwork customer
Customer Name
No response