S3 Replication Guide: don't include provider configuration in Terraform modules #738
Bug Report
Issue
https://nitric.io/docs/guides/terraform/s3-replicate includes instructions to add provider configuration to the existing S3 bucket module that Nitric uses:
provider "aws" {
alias = "replication"
region = var.replication_region
endpoints {
s3 = "https://s3.${var.replication_region}.amazonaws.com"
}
}
This provider configuration is needed to have Terraform deploy the replication bucket to a specific region.
Unfortunately, Terraform has a known limitation that makes including provider configuration inside modules not recommended: https://support.hashicorp.com/hc/en-us/articles/1500000332721-Error-Provider-configuration-not-present
If the bucket is deployed and then later deleted from the stack, the module is removed, which in turn removes the provider configuration. However, that provider configuration is still needed to destroy the bucket, so terraform plan or terraform apply fails with the error:
Error: Provider configuration not present
Why the provider information isn't persisted in the stack state and reused for the deletion is unclear; Terraform's state appears to record only a reference to the provider configuration, not its arguments (such as the region), so the configuration itself must still exist at destroy time.
Steps
Steps to reproduce the behavior:
See https://discord.com/channels/955259353043173427/1366842570810068992 for steps and example code to replicate the issue.
Expected
Work around this limitation by defining all provider configuration in the root module and passing it down to other modules as needed.
Challenges
- Replication requires two buckets, each deployed to a different region
- Terraform's AWS provider can only target a single region, so two or more provider configurations are always required
- Provider configuration appears to be the only way to specify which region S3 buckets are deployed to
Given these limitations, the region can't be passed into a bucket module as a variable; it has to come from the provider the module is given, as in the sketch below.
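For illustration, a minimal sketch of what that means in plain Terraform (placeholder regions and bucket names, reusing the names from the examples below): each bucket is created in the region of the provider it references.
# Two aliased providers, one per region
provider "aws" {
  alias  = "source"
  region = "us-west-2"
}

provider "aws" {
  alias  = "destination"
  region = "us-east-1"
}

# Each bucket is created in the region its provider is configured for
resource "aws_s3_bucket" "source" {
  provider = aws.source
  bucket   = "my-source-bucket"
}

resource "aws_s3_bucket" "destination" {
  provider = aws.destination
  bucket   = "my-dest-bucket"
}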
Recommended patterns often use multiple modules to work around these challenges/limitations, for example:
# Root module
provider "aws" {
  alias  = "source"
  region = "us-west-2"
}

provider "aws" {
  alias  = "destination"
  region = "us-east-1"
}

module "source_bucket" {
  source    = "./modules/bucket"
  bucket_id = "my-source-bucket"

  providers = {
    aws = aws.source
  }
}

module "destination_bucket" {
  source    = "./modules/bucket"
  bucket_id = "my-dest-bucket"

  providers = {
    aws = aws.destination
  }
}

module "replication" {
  source             = "./modules/replication"
  source_bucket      = module.source_bucket.bucket_name
  destination_bucket = module.destination_bucket.bucket_name

  providers = {
    aws = aws.source # The replication rule is usually created on the *source* bucket
  }
}
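The child modules aren't shown above; as a minimal sketch, ./modules/bucket could look something like this, assuming only the bucket_id variable and bucket_name output referenced by the root module (everything else is illustrative):
# modules/bucket/main.tf
# Uses whichever provider the root module passes in via `providers`,
# so the bucket lands in that provider's region.
variable "bucket_id" {
  type = string
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_id
}

output "bucket_name" {
  value = aws_s3_bucket.this.bucket
}
The ./modules/replication module would then hold the aws_s3_bucket_replication_configuration resource (plus the IAM role and bucket versioning that replication requires), created via the source provider.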
It may also be possible to use a single module, with multiple aliased providers (tbd):
# Root
provider "aws" {
  alias  = "source"
  region = "us-west-2"
}

provider "aws" {
  alias  = "destination"
  region = "us-east-1"
}

module "bucket_pair" {
  source       = "./modules/bucket-pair"
  bucket1_name = "my-source-bucket"
  bucket2_name = "my-dest-bucket"

  providers = {
    aws.source      = aws.source
    aws.destination = aws.destination
  }
}
In the module:
# modules/bucket-pair/main.tf
resource "aws_s3_bucket" "source" {
  provider = aws.source
  bucket   = var.bucket1_name
}

resource "aws_s3_bucket" "dest" {
  provider = aws.destination
  bucket   = var.bucket2_name
}
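For Terraform to accept the aliased providers passed in from the root, the module would also need to declare them via configuration_aliases (Terraform 0.15+), along with the two name variables, roughly:
# modules/bucket-pair/versions.tf (file name is just a convention)
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Declare the aliased provider configurations this module expects to receive
      configuration_aliases = [aws.source, aws.destination]
    }
  }
}

# modules/bucket-pair/variables.tf
variable "bucket1_name" {
  type = string
}

variable "bucket2_name" {
  type = string
}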
Then, given we use CDKTF, the guide will need to include changes to the Go code as well, not just changes to the modules, since the root module is effectively the Go code.