
China S3 bucket resource returning 401 for non-ICP accounts #42743

Closed
@adobe-jeremy

Description


Terraform and AWS Provider Version

Terraform v1.7.3
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v5.98.0

Affected Resource(s) or Data Source(s)

  • aws_s3_bucket

Expected Behavior

The S3 bucket is created and both terraform plan and terraform apply finish successfully.

Actual Behavior

Terraform plan succeeds. During apply the bucket is created, but the apply ultimately fails with the error below.

Relevant Error/Panic Output

 Error: reading S3 Bucket (my-tf-test-bucket-10000) location: operation error S3: HeadBucket, https response error StatusCode: 401, RequestID: V2D8C443RGNNEDZ6, HostID: d6tTm73+CHIR3BhZwA6WIonZs3TB0zYI35gQaWNQgQJuPu2F72TKzXXpMv/tTr2Y4pqLyJPo8OA=, api error Unauthorized: Unauthorized

│   with aws_s3_bucket.d,
│   on main.tf line 17, in resource "aws_s3_bucket" "d":
│   17: resource "aws_s3_bucket" "d" {

Sample Terraform Configuration

terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.98.0"
    }
  }
}

provider "aws" {
  region = "cn-north-1"
}


resource "aws_s3_bucket" "d" {
  bucket = "my-tf-test-bucket-10000"
}

Steps to Reproduce

  1. Run terraform apply with the sample configuration above, using credentials for an AWS China (cn-north-1) account without an ICP filing.

Debug Logging

2025-05-23T09:57:10.855-0400 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2025-05-23T09:57:10.860-0400 [DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot
2025-05-23T09:57:10.860-0400 [ERROR] vertex "aws_s3_bucket.d" error: reading S3 Bucket (my-tf-test-bucket-10000) location: operation error S3: HeadBucket, https response error StatusCode: 401, RequestID: V2D8C443RGNNEDZ6, HostID: d6tTm73+CHIR3BhZwA6WIonZs3TB0zYI35gQaWNQgQJuPu2F72TKzXXpMv/tTr2Y4pqLyJPo8OA=, api error Unauthorized: Unauthorized
2025-05-23T09:57:10.860-0400 [TRACE] vertex "aws_s3_bucket.d": visit complete, with errors
2025-05-23T09:57:10.860-0400 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/hashicorp/aws\"] (close)" errored, so skipping
2025-05-23T09:57:10.860-0400 [TRACE] dag/walk: upstream of "root" errored, so skipping
2025-05-23T09:57:10.860-0400 [TRACE] statemgr.Filesystem: reading latest snapshot from terraform.tfstate
2025-05-23T09:57:10.860-0400 [TRACE] statemgr.Filesystem: read snapshot with lineage "25d3357c-3547-6c8a-f959-455fa4048f8e" serial 1
2025-05-23T09:57:10.860-0400 [TRACE] statemgr.Filesystem: no original state snapshot to back up
2025-05-23T09:57:10.860-0400 [TRACE] statemgr.Filesystem: no state changes since last snapshot
2025-05-23T09:57:10.860-0400 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate

│ Error: reading S3 Bucket (my-tf-test-bucket-10000) location: operation error S3: HeadBucket, https response error StatusCode: 401, RequestID: V2D8C443RGNNEDZ6, HostID: d6tTm73+CHIR3BhZwA6WIonZs3TB0zYI35gQaWNQgQJuPu2F72TKzXXpMv/tTr2Y4pqLyJPo8OA=, api error Unauthorized: Unauthorized

│   with aws_s3_bucket.d,
│   on main.tf line 17, in resource "aws_s3_bucket" "d":
│   17: resource "aws_s3_bucket" "d" {


2025-05-23T09:57:10.867-0400 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
2025-05-23T09:57:10.867-0400 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate using fcntl flock
2025-05-23T09:57:10.868-0400 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2025-05-23T09:57:10.871-0400 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/aws/5.98.0/darwin_arm64/terraform-provider-aws_v5.98.0_x5 pid=48537
2025-05-23T09:57:10.871-0400 [DEBUG] provider: plugin exited

GenAI / LLM Assisted Development

n/a

Important Facts and References

This looks to me like a regression, or at least similar to #15420. The issue does not occur with provider version v5.97.0, so it appears to have been introduced in v5.98.0. I suspect the underlying cause is the upgrade of github.com/aws/aws-sdk-go-v2/feature/s3/manager to v1.17.75, via https://github.com/aws/aws-sdk-go-v2/pull/3081/files
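
Until this is resolved, a possible workaround (based only on the observation above, not a confirmed fix) is to pin the provider to the last version that does not exhibit the problem:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # v5.97.0 is the last version observed to work here;
      # v5.98.0 returns the 401 on the post-create HeadBucket call.
      version = "5.97.0"
    }
  }
}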

Would you like to implement a fix?

No

Metadata


Assignees

No one assigned

Labels

  • bug: Addresses a defect in current functionality.
  • partition/aws-cn: Pertains to the aws-cn partition.
  • regression: Pertains to a degraded workflow resulting from an upstream patch or internal enhancement.
  • service/s3: Issues and PRs that pertain to the s3 service.
