diff --git a/contributing/DEVELOPMENT.md b/contributing/DEVELOPMENT.md index 7b7ca2423a..c2729d6612 100644 --- a/contributing/DEVELOPMENT.md +++ b/contributing/DEVELOPMENT.md @@ -15,7 +15,7 @@ Clone repository to: `$HOME/development/terraform-providers/` ```sh $ mkdir -p $HOME/development/terraform-providers/; cd $HOME/development/terraform-providers/ -$ git clone git@github.com:terraform-providers/terraform-provider-awscc +$ git clone git@github.com:hashicorp/terraform-provider-awscc ... ``` diff --git a/docs/resources/applicationsignals_discovery.md b/docs/resources/applicationsignals_discovery.md index 64a7299f11..585796ce42 100644 --- a/docs/resources/applicationsignals_discovery.md +++ b/docs/resources/applicationsignals_discovery.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_applicationsignals_discovery Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,18 @@ description: |- Resource Type definition for AWS::ApplicationSignals::Discovery +## Example Usage + +### Configure Application Signals Discovery + +Enables the AWS Application Signals discovery service for the account. The resource takes no configuration arguments, so an empty resource block is sufficient. +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+ +```terraform +resource "awscc_applicationsignals_discovery" "example" { +} +``` ## Schema diff --git a/docs/resources/batch_consumable_resource.md b/docs/resources/batch_consumable_resource.md index 459e8c6a6b..f01a4111e1 100644 --- a/docs/resources/batch_consumable_resource.md +++ b/docs/resources/batch_consumable_resource.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_batch_consumable_resource Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,30 @@ description: |- Resource Type definition for AWS::Batch::ConsumableResource +## Example Usage + +### AWS Batch Consumable Resource Configuration + +Creates a replenishable consumable resource for AWS Batch with a total quantity of 10 units, enabling license management for batch workloads. +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. 
+ +```terraform +# Batch Consumable Resource Example +resource "awscc_batch_consumable_resource" "demo" { + resource_type = "REPLENISHABLE" + total_quantity = 10 + consumable_resource_name = "demo-license-resource" + + tags = [{ + key = "Environment" + value = "demo" + }, { + key = "Modified By" + value = "AWSCC" + }] +} +``` ## Schema diff --git a/docs/resources/bedrock_intelligent_prompt_router.md b/docs/resources/bedrock_intelligent_prompt_router.md index e21e269015..1e08ab0c53 100644 --- a/docs/resources/bedrock_intelligent_prompt_router.md +++ b/docs/resources/bedrock_intelligent_prompt_router.md @@ -1,5 +1,4 @@ --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_bedrock_intelligent_prompt_router Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +9,45 @@ description: |- Definition of AWS::Bedrock::IntelligentPromptRouter Resource Type - +## Example Usage + +```terraform +data "aws_region" "current" {} + +# Create the Bedrock Intelligent Prompt Router +resource "awscc_bedrock_intelligent_prompt_router" "example" { + prompt_router_name = "example-intelligent-prompt-router" + description = "Example intelligent prompt router for routing between Claude models based on response quality" + + # Primary models to route between (limited to exactly 2 models) + models = [ + { + model_arn = "arn:aws:bedrock:${data.aws_region.current.name}::foundation-model/anthropic.claude-3-5-sonnet-20241022-v2:0" + }, + { + model_arn = "arn:aws:bedrock:${data.aws_region.current.name}::foundation-model/anthropic.claude-3-haiku-20240307-v1:0" + } + ] + + # Fallback model (must be one of the models in the models list above) + fallback_model = { + model_arn = "arn:aws:bedrock:${data.aws_region.current.name}::foundation-model/anthropic.claude-3-haiku-20240307-v1:0" + } + + # Routing criteria based on response quality difference + # Value must be a multiple of 5 (likely as percentage: 5, 10, 15, 20, etc.) 
+ routing_criteria = { + response_quality_difference = 20 + } + + tags = [ + { + key = "ModifiedBy" + value = "AWSCC" + } + ] +} +``` ## Schema @@ -74,4 +111,4 @@ Import is supported using the following syntax: ```shell $ terraform import awscc_bedrock_intelligent_prompt_router.example "prompt_router_arn" -``` +``` \ No newline at end of file diff --git a/docs/resources/cognito_user_pool_domain.md b/docs/resources/cognito_user_pool_domain.md index b23865e117..15470865c9 100644 --- a/docs/resources/cognito_user_pool_domain.md +++ b/docs/resources/cognito_user_pool_domain.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_cognito_user_pool_domain Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,48 @@ description: |- Resource Type definition for AWS::Cognito::UserPoolDomain +## Example Usage + +### Configure Cognito User Pool Domain + +Creates a custom domain for a Cognito User Pool with dynamic naming based on the AWS account ID, enabling a branded URL for user authentication endpoints. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. 
+ +```terraform +# Get current account ID for dynamic naming +data "aws_caller_identity" "current" {} + +# Create the Cognito User Pool +resource "aws_cognito_user_pool" "example" { + name = "my-user-pool" + auto_verified_attributes = ["email"] + username_attributes = ["email"] + + verification_message_template { + default_email_option = "CONFIRM_WITH_CODE" + } + + admin_create_user_config { + allow_admin_create_user_only = false + } + + email_configuration { + email_sending_account = "COGNITO_DEFAULT" + } + + tags = { + "Modified By" = "AWS" + } +} + +# Create the User Pool Domain +resource "awscc_cognito_user_pool_domain" "example" { + domain = "my-example-domain-${data.aws_caller_identity.current.account_id}" + user_pool_id = aws_cognito_user_pool.example.id +} +``` ## Schema diff --git a/docs/resources/deadline_limit.md b/docs/resources/deadline_limit.md index c28662bf6d..c0a4a187a6 100644 --- a/docs/resources/deadline_limit.md +++ b/docs/resources/deadline_limit.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_deadline_limit Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,30 @@ description: |- Definition of AWS::Deadline::Limit Resource Type +## Example Usage + +```terraform +resource "awscc_deadline_farm" "example" { + display_name = "ExampleRenderFarm" + description = "Example Deadline Farm for demonstrating limit configuration" + tags = [ + { + key = "ModifiedBy" + value = "AWSCC" + } + ] +} + +# Create a Deadline Limit for CPU usage +resource "awscc_deadline_limit" "example" { + farm_id = awscc_deadline_farm.example.farm_id + display_name = "CPU Limit" + description = "CPU core usage limit for the render farm" + amount_requirement_name = "amount.cpu" + max_count = 100 +} +``` ## Schema diff --git a/docs/resources/deadline_queue.md b/docs/resources/deadline_queue.md index 7d0b6dec67..0cda53a316 100644 --- a/docs/resources/deadline_queue.md +++ 
b/docs/resources/deadline_queue.md @@ -1,5 +1,4 @@ --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_deadline_queue Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +9,107 @@ description: |- Definition of AWS::Deadline::Queue Resource Type - +## Example Usage + +```terraform +# Create S3 bucket for job attachments +resource "awscc_s3_bucket" "example" { + bucket_name = "deadline-job-attachments-${random_id.bucket_suffix.hex}" + + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} + +# Generate random suffix for bucket name uniqueness +resource "random_id" "bucket_suffix" { + byte_length = 4 +} + +resource "awscc_deadline_farm" "example" { + display_name = "ExampleRenderFarm" + description = "Example Deadline Farm for queue demonstration" + + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} + +# Create storage profiles for different operating systems +resource "awscc_deadline_storage_profile" "linux_storage" { + display_name = "Linux Shared Storage" + farm_id = awscc_deadline_farm.example.farm_id + os_family = "LINUX" + + file_system_locations = [{ + name = "shared storage" + path = "/mnt/shared" + type = "SHARED" + }, { + name = "render assets" + path = "/mnt/assets" + type = "SHARED" + }] +} + +resource "awscc_deadline_storage_profile" "windows_storage" { + display_name = "Windows Shared Storage" + farm_id = awscc_deadline_farm.example.farm_id + os_family = "WINDOWS" + + file_system_locations = [{ + name = "shared storage" + path = "Z:\\" + type = "SHARED" + }, { + name = "render assets" + path = "Y:\\" + type = "SHARED" + }] +} + +# Create an advanced Deadline Queue with job attachment settings +resource "awscc_deadline_queue" "example" { + display_name = "AdvancedRenderQueue" + description = "Advanced render queue with S3 job attachments and custom settings" + farm_id = awscc_deadline_farm.example.farm_id + default_budget_action = "STOP_SCHEDULING_AND_COMPLETE_TASKS" + + # 
Configure job attachment settings for S3 + job_attachment_settings = { + s3_bucket_name = awscc_s3_bucket.example.bucket_name + root_prefix = "job-attachments/" + } + + # Configure job run-as user settings for POSIX systems + job_run_as_user = { + run_as = "QUEUE_CONFIGURED_USER" + posix = { + user = "deadline-worker" + group = "deadline-group" + } + } + + # Specify allowed storage profile IDs (dynamically referenced) + allowed_storage_profile_ids = [ + awscc_deadline_storage_profile.linux_storage.storage_profile_id, + awscc_deadline_storage_profile.windows_storage.storage_profile_id + ] + + # Specify required file system location names + required_file_system_location_names = [ + "shared storage", + "render assets" + ] + + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} +``` ## Schema @@ -88,4 +187,4 @@ Import is supported using the following syntax: ```shell $ terraform import awscc_deadline_queue.example "arn" -``` +``` \ No newline at end of file diff --git a/docs/resources/deadline_queue_environment.md b/docs/resources/deadline_queue_environment.md index e80436246f..326bf205a4 100644 --- a/docs/resources/deadline_queue_environment.md +++ b/docs/resources/deadline_queue_environment.md @@ -1,5 +1,4 @@ --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_deadline_queue_environment Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +9,40 @@ description: |- Definition of AWS::Deadline::QueueEnvironment Resource Type +## Example Usage +```terraform +resource "awscc_deadline_farm" "example" { + display_name = "Example Farm" + description = "Example Deadline Farm" + + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} + +resource "awscc_deadline_queue" "example" { + display_name = "Example Queue" + farm_id = awscc_deadline_farm.example.farm_id +} + +resource "awscc_deadline_queue_environment" "example" { + farm_id = awscc_deadline_farm.example.farm_id + queue_id = 
awscc_deadline_queue.example.queue_id + priority = 50 + template_type = "JSON" + template = jsonencode({ + specificationVersion = "environment-2023-09" + environment = { + name = "ExampleEnvironment" + variables = { + EXAMPLE_VAR = "example_value" + } + } + }) +} +``` ## Schema @@ -35,4 +67,4 @@ Import is supported using the following syntax: ```shell $ terraform import awscc_deadline_queue_environment.example "farm_id|queue_id|queue_environment_id" -``` +``` \ No newline at end of file diff --git a/docs/resources/deadline_queue_fleet_association.md b/docs/resources/deadline_queue_fleet_association.md index 9f15ba0030..98e3c8b611 100644 --- a/docs/resources/deadline_queue_fleet_association.md +++ b/docs/resources/deadline_queue_fleet_association.md @@ -1,5 +1,4 @@ --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_deadline_queue_fleet_association Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +9,118 @@ description: |- Definition of AWS::Deadline::QueueFleetAssociation Resource Type +## Example Usage +```terraform +resource "awscc_deadline_farm" "example" { + display_name = "example" + description = "Example" + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} + +# Create IAM role for the queue session +resource "awscc_iam_role" "queue_session_role" { + role_name = "example" + assume_role_policy_document = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = "credentials.deadline.amazonaws.com" + } + } + ] + }) + + # Add basic permissions for queue session operations + managed_policy_arns = [ + "arn:aws:iam::aws:policy/AWSDeadlineCloud-UserAccessJobs" + ] +} + +# Create the Deadline Queue +resource "awscc_deadline_queue" "example" { + display_name = "example" + farm_id = awscc_deadline_farm.example.farm_id + + job_run_as_user = { + run_as = "QUEUE_CONFIGURED_USER" + posix = { + user = "deadline-user" + group = 
"deadline-group" + } + } + + role_arn = awscc_iam_role.queue_session_role.arn +} + + +# Create IAM role for the fleet +resource "awscc_iam_role" "complete_fleet_role" { + role_name = "deadline-fleet-role" + assume_role_policy_document = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = "credentials.deadline.amazonaws.com" + } + } + ] + }) + + # Add basic permissions for Deadline fleet operations + managed_policy_arns = [ + "arn:aws:iam::aws:policy/AWSDeadlineCloud-FleetWorker" + ] +} + +# Create the Deadline Fleet +resource "awscc_deadline_fleet" "example" { + display_name = "example" + farm_id = awscc_deadline_farm.example.farm_id + max_worker_count = 20 + min_worker_count = 1 + role_arn = awscc_iam_role.complete_fleet_role.arn + + configuration = { + service_managed_ec_2 = { + instance_capabilities = { + cpu_architecture_type = "x86_64" + os_family = "LINUX" + memory_mi_b = { + min = 4096 + max = 16384 + } + v_cpu_count = { + min = 2 + max = 8 + } + root_ebs_volume = { + size_gi_b = 100 + } + } + instance_market_options = { + type = "spot" + } + } + } +} + +# Create Queue Fleet Association +resource "awscc_deadline_queue_fleet_association" "complete_association" { + farm_id = awscc_deadline_farm.example.farm_id + queue_id = awscc_deadline_queue.example.queue_id + fleet_id = awscc_deadline_fleet.example.fleet_id +} +``` ## Schema @@ -31,4 +141,4 @@ Import is supported using the following syntax: ```shell $ terraform import awscc_deadline_queue_fleet_association.example "farm_id|fleet_id|queue_id" -``` +``` \ No newline at end of file diff --git a/docs/resources/deadline_queue_limit_association.md b/docs/resources/deadline_queue_limit_association.md index f65aab48c7..240c42a76b 100644 --- a/docs/resources/deadline_queue_limit_association.md +++ b/docs/resources/deadline_queue_limit_association.md @@ -1,5 +1,4 @@ --- -# generated by https://github.com/hashicorp/terraform-plugin-docs 
page_title: "awscc_deadline_queue_limit_association Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +9,70 @@ description: |- Definition of AWS::Deadline::QueueLimitAssociation Resource Type +## Example Usage +```terraform +resource "awscc_deadline_farm" "example" { + display_name = "example" + description = "Example" + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} + +# Create IAM role for the queue session +resource "awscc_iam_role" "queue_session_role" { + role_name = "example" + assume_role_policy_document = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = "credentials.deadline.amazonaws.com" + } + } + ] + }) + + # Add basic permissions for queue session operations + managed_policy_arns = [ + "arn:aws:iam::aws:policy/AWSDeadlineCloud-UserAccessJobs" + ] +} + +# Create the Deadline Queue +resource "awscc_deadline_queue" "example" { + display_name = "example" + farm_id = awscc_deadline_farm.example.farm_id + + job_run_as_user = { + run_as = "QUEUE_CONFIGURED_USER" + posix = { + user = "deadline-user" + group = "deadline-group" + } + } + + role_arn = awscc_iam_role.queue_session_role.arn +} + +resource "awscc_deadline_limit" "example" { + display_name = "CPULimit" + farm_id = awscc_deadline_farm.example.farm_id + amount_requirement_name = "amount.cpu_cores" + max_count = 100 +} + + +resource "awscc_deadline_queue_limit_association" "cpu_association" { + farm_id = awscc_deadline_farm.example.farm_id + queue_id = awscc_deadline_queue.example.queue_id + limit_id = awscc_deadline_limit.example.limit_id +} +``` ## Schema @@ -31,4 +93,4 @@ Import is supported using the following syntax: ```shell $ terraform import awscc_deadline_queue_limit_association.example "farm_id|limit_id|queue_id" -``` +``` \ No newline at end of file diff --git a/docs/resources/deadline_storage_profile.md b/docs/resources/deadline_storage_profile.md index 
7c3661e761..8528d14708 100644 --- a/docs/resources/deadline_storage_profile.md +++ b/docs/resources/deadline_storage_profile.md @@ -1,5 +1,4 @@ --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_deadline_storage_profile Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +9,35 @@ description: |- Definition of AWS::Deadline::StorageProfile Resource Type - +## Example Usage + +```terraform +resource "awscc_deadline_farm" "example" { + display_name = "ExampleRenderFarm" + description = "Example Deadline render farm for Linux storage profile" + + tags = [ + { + key = "ManagedBy" + value = "AWSCC" + } + ] +} + +resource "awscc_deadline_storage_profile" "example" { + display_name = "Linux Storage Profile" + farm_id = awscc_deadline_farm.example.farm_id + os_family = "LINUX" + + file_system_locations = [ + { + name = "SharedAssets" + path = "/mnt/shared/assets" + type = "SHARED" + } + ] +} +``` ## Schema @@ -45,4 +72,4 @@ Import is supported using the following syntax: ```shell $ terraform import awscc_deadline_storage_profile.example "farm_id|storage_profile_id" -``` +``` \ No newline at end of file diff --git a/docs/resources/dsql_cluster.md b/docs/resources/dsql_cluster.md index 9ea5af3153..1b326f9ff3 100644 --- a/docs/resources/dsql_cluster.md +++ b/docs/resources/dsql_cluster.md @@ -1,5 +1,4 @@ --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_dsql_cluster Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +9,21 @@ description: |- Resource Type definition for AWS::DSQL::Cluster +## Example Usage +```terraform +# Basic DSQL Cluster +resource "awscc_dsql_cluster" "example" { + deletion_protection_enabled = false + + tags = [ + { + key = "ModifiedBy" + value = "AWSCC" + } + ] +} +``` ## Schema @@ -53,4 +66,4 @@ Import is supported using the following syntax: ```shell $ terraform import awscc_dsql_cluster.example "identifier" -``` +``` \ No 
newline at end of file diff --git a/docs/resources/ec2_route_server.md b/docs/resources/ec2_route_server.md index f98cc6c3d3..52dee57ce2 100644 --- a/docs/resources/ec2_route_server.md +++ b/docs/resources/ec2_route_server.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_ec2_route_server Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,49 @@ description: |- VPC Route Server - +## Example Usage + +### Configure EC2 Route Server with Persistent Routes + +This configuration creates an EC2 Route Server with ASN 65000 in a custom VPC, enabling route persistence for 5 minutes and SNS notifications for route changes. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. 
+ +```terraform +# Get current AWS region details +data "aws_region" "current" {} + +# VPC and subnet for the route server +resource "awscc_ec2_vpc" "main" { + cidr_block = "10.0.0.0/16" + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +resource "awscc_ec2_subnet" "main" { + vpc_id = awscc_ec2_vpc.main.id + cidr_block = "10.0.1.0/24" + availability_zone = "${data.aws_region.current.name}a" + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +# Create Route Server +resource "awscc_ec2_route_server" "example" { + amazon_side_asn = 65000 + sns_notifications_enabled = true + persist_routes = "enable" + persist_routes_duration = 5 + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} +``` ## Schema diff --git a/docs/resources/ec2_route_server_peer.md b/docs/resources/ec2_route_server_peer.md index 5ad6510fd6..db799887b5 100644 --- a/docs/resources/ec2_route_server_peer.md +++ b/docs/resources/ec2_route_server_peer.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_ec2_route_server_peer Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,108 @@ description: |- VPC Route Server Peer - +## Example Usage + +### Configure Route Server BGP Peer + +Creates a BGP peer connection for an AWS Route Server with ASN 65000, establishing routing communication through a Transit Gateway VPC attachment with specified peer address 10.0.1.100. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. 
+ +```terraform +# Data source for region +data "aws_region" "current" {} + +# Create VPC +resource "awscc_ec2_vpc" "main" { + cidr_block = "10.0.0.0/16" + enable_dns_hostnames = true + enable_dns_support = true + + tags = [{ + key = "Name" + value = "route-server-vpc" + }, { + key = "Modified By" + value = "AWSCC" + }] +} + +# Create Subnet +resource "awscc_ec2_subnet" "main" { + vpc_id = awscc_ec2_vpc.main.id + cidr_block = "10.0.1.0/24" + availability_zone = "${data.aws_region.current.name}a" + + tags = [{ + key = "Name" + value = "route-server-subnet" + }, { + key = "Modified By" + value = "AWSCC" + }] +} + +# Create an internet gateway +resource "awscc_ec2_internet_gateway" "main" { + tags = [{ + key = "Name" + value = "route-server-igw" + }, { + key = "Modified By" + value = "AWSCC" + }] +} + +# Attach Internet Gateway to VPC +resource "aws_internet_gateway_attachment" "main" { + internet_gateway_id = awscc_ec2_internet_gateway.main.id + vpc_id = awscc_ec2_vpc.main.id +} + +# Create Transit Gateway for the route server +resource "awscc_ec2_transit_gateway" "main" { + description = "Transit gateway for route server" + tags = [{ + key = "Name" + value = "route-server-tgw" + }, { + key = "Modified By" + value = "AWSCC" + }] +} + +# Create Route Server endpoint +resource "awscc_ec2_transit_gateway_vpc_attachment" "main" { + vpc_id = awscc_ec2_vpc.main.id + subnet_ids = [awscc_ec2_subnet.main.id] + transit_gateway_id = awscc_ec2_transit_gateway.main.id + + tags = [{ + key = "Name" + value = "route-server-endpoint" + }, { + key = "Modified By" + value = "AWSCC" + }] +} + +# Create Route Server Peer +resource "awscc_ec2_route_server_peer" "example" { + route_server_endpoint_id = awscc_ec2_transit_gateway_vpc_attachment.main.id + peer_address = "10.0.1.100" + + bgp_options = { + peer_asn = 65000 + peer_liveness_detection = "bgp-keepalive" + } + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} +``` ## Schema diff --git a/docs/resources/evs_environment.md 
b/docs/resources/evs_environment.md index b1641d2e5d..e79e7fc18e 100644 --- a/docs/resources/evs_environment.md +++ b/docs/resources/evs_environment.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_evs_environment Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,164 @@ description: |- An environment created within the EVS service - +## Example Usage + +### VMware Cloud on AWS SDDC Environment Setup + +Creates an AWS VMware Cloud SDDC environment with a 4-node i4i.metal cluster, complete network configuration including VPC, subnets, and security groups, along with all required VLANs for VMware Cloud Foundation (VCF) deployment. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. 
+ +```terraform +# Create a VPC for the EVS environment +resource "awscc_ec2_vpc" "evs" { + cidr_block = "10.0.0.0/16" + tags = [{ + key = "Name" + value = "evs-vpc" + }] +} + +# Create a subnet for service access +resource "awscc_ec2_subnet" "service_access" { + vpc_id = awscc_ec2_vpc.evs.id + cidr_block = "10.0.1.0/24" + tags = [{ + key = "Name" + value = "evs-service-access" + }] +} + +# Create a security group for EVS service access +resource "awscc_ec2_security_group" "evs_service" { + group_name = "evs-service-access" + group_description = "Security group for EVS service access" + vpc_id = awscc_ec2_vpc.evs.id + security_group_ingress = [{ + from_port = 443 + to_port = 443 + ip_protocol = "tcp" + cidr_ip = "0.0.0.0/0" + }] + security_group_egress = [{ + from_port = -1 + to_port = -1 + ip_protocol = "-1" + cidr_ip = "0.0.0.0/0" + }] + tags = [{ + key = "Name" + value = "evs-service-sg" + }] +} + +# Create an SSH key pair for the hosts +resource "awscc_ec2_key_pair" "evs_hosts" { + key_name = "evs-hosts-key" + tags = [{ + key = "Name" + value = "evs-hosts-key" + }] +} + +# Create the EVS Environment +resource "awscc_evs_environment" "example" { + environment_name = "example-evs" + site_id = "examplesite" + vpc_id = awscc_ec2_vpc.evs.id + vcf_version = "VCF-5.2.1" + + service_access_subnet_id = awscc_ec2_subnet.service_access.id + terms_accepted = true + + connectivity_info = { + private_route_server_peerings = ["10.0.0.1", "10.0.0.2"] + } + + license_info = { + solution_key = "ABCD1-EFGH2-IJKL3-MNOP4-QRST5" + vsan_key = "VSAN1-VSAN2-VSAN3-VSAN4-VSAN5" + } + + vcf_hostnames = { + cloud_builder = "cloudbuilder" + nsx = "nsx" + nsx_edge_1 = "nsxedge1" + nsx_edge_2 = "nsxedge2" + nsx_manager_1 = "nsxmanager1" + nsx_manager_2 = "nsxmanager2" + nsx_manager_3 = "nsxmanager3" + sddc_manager = "sddcmanager" + v_center = "vcenter" + } + + service_access_security_groups = { + security_groups = [awscc_ec2_security_group.evs_service.id] + } + + initial_vlans = { + 
vmk_management = { + cidr = "10.0.10.0/24" + } + vm_management = { + cidr = "10.0.11.0/24" + } + v_san = { + cidr = "10.0.12.0/24" + } + v_motion = { + cidr = "10.0.13.0/24" + } + v_tep = { + cidr = "10.0.14.0/24" + } + edge_v_tep = { + cidr = "10.0.15.0/24" + } + nsx_up_link = { + cidr = "10.0.16.0/24" + } + hcx = { + cidr = "10.0.17.0/24" + } + expansion_vlan_1 = { + cidr = "10.0.18.0/24" + } + expansion_vlan_2 = { + cidr = "10.0.19.0/24" + } + } + + # Required host configuration (must have exactly 4 hosts) + hosts = [ + { + instance_type = "i4i.metal" + host_name = "evs-host-1" + key_name = awscc_ec2_key_pair.evs_hosts.key_name + }, + { + instance_type = "i4i.metal" + host_name = "evs-host-2" + key_name = awscc_ec2_key_pair.evs_hosts.key_name + }, + { + instance_type = "i4i.metal" + host_name = "evs-host-3" + key_name = awscc_ec2_key_pair.evs_hosts.key_name + }, + { + instance_type = "i4i.metal" + host_name = "evs-host-4" + key_name = awscc_ec2_key_pair.evs_hosts.key_name + } + ] + + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} +``` ## Schema diff --git a/docs/resources/guardduty_publishing_destination.md b/docs/resources/guardduty_publishing_destination.md index c2050cf747..2beb6a6122 100644 --- a/docs/resources/guardduty_publishing_destination.md +++ b/docs/resources/guardduty_publishing_destination.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_guardduty_publishing_destination Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,108 @@ description: |- Resource Type definition for AWS::GuardDuty::PublishingDestination. - +## Example Usage + +### GuardDuty Findings Export to S3 + +Configure GuardDuty to export its findings to an S3 bucket with KMS encryption, including all necessary IAM permissions and bucket policies for secure findings storage and access. 
+ +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. + +```terraform +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} +data "aws_guardduty_detector" "existing" {} + +# S3 bucket for findings +resource "awscc_s3_bucket" "findings" { + bucket_name = "guardduty-findings-${data.aws_region.current.name}-${data.aws_caller_identity.current.account_id}" + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +# KMS key for encrypting findings +resource "awscc_kms_key" "findings" { + description = "KMS key for GuardDuty findings" + key_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Sid = "Enable IAM User Permissions" + Effect = "Allow" + Principal = { + AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" + } + Action = "kms:*" + Resource = "*" + }, + { + Sid = "Allow GuardDuty to encrypt findings" + Effect = "Allow" + Principal = { + Service = "guardduty.amazonaws.com" + } + Action = [ + "kms:GenerateDataKey", + "kms:Encrypt" + ] + Resource = "*" + } + ] + }) + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +# S3 Bucket policy allowing GuardDuty to write findings +resource "awscc_s3_bucket_policy" "findings" { + bucket = awscc_s3_bucket.findings.id + policy_document = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Sid = "Allow GuardDuty to write findings" + Effect = "Allow" + Principal = { + Service = "guardduty.amazonaws.com" + } + Action = [ + "s3:GetBucketLocation", + "s3:PutObject" + ] + Resource = [ + "arn:aws:s3:::${awscc_s3_bucket.findings.id}", + "arn:aws:s3:::${awscc_s3_bucket.findings.id}/*" + ], + Condition = { + StringEquals = { + "aws:SourceArn" = 
"arn:aws:guardduty:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:detector/${data.aws_guardduty_detector.existing.id}", + "aws:SourceAccount" = data.aws_caller_identity.current.account_id + } + } + } + ] + }) +} + +# GuardDuty Publishing Destination +resource "awscc_guardduty_publishing_destination" "example" { + detector_id = data.aws_guardduty_detector.existing.id + destination_type = "S3" + destination_properties = { + destination_arn = "arn:aws:s3:::${awscc_s3_bucket.findings.id}" + kms_key_arn = awscc_kms_key.findings.arn + } + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} +``` ## Schema diff --git a/docs/resources/iotsitewise_dataset.md b/docs/resources/iotsitewise_dataset.md index dd2f3c4b34..f84907d409 100644 --- a/docs/resources/iotsitewise_dataset.md +++ b/docs/resources/iotsitewise_dataset.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_iotsitewise_dataset Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,91 @@ description: |- Resource schema for AWS::IoTSiteWise::Dataset. - +## Example Usage + +### IoT SiteWise Dataset with Kendra Integration + +Creates an IoT SiteWise dataset that integrates with Amazon Kendra knowledge base, including necessary IAM role and policy configuration for secure access to Kendra resources. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. 
+ +```terraform +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} + +resource "awscc_iam_role" "dataset_role" { + role_name = "iotsitewise-dataset-role" + assume_role_policy_document = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = "iotsitewise.amazonaws.com" + } + } + ] + }) + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +resource "aws_iam_policy" "dataset_policy" { + name = "iotsitewise-dataset-policy" + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "kendra:ListTagsForResource", + "kendra:GetKnowledgeBase", + "kendra:DescribeKnowledgeBase" + ] + Resource = "*" + } + ] + }) + + tags = { + "Modified By" = "AWSCC" + } +} + +resource "aws_iam_role_policy_attachment" "dataset_role_policy" { + policy_arn = aws_iam_policy.dataset_policy.arn + role = awscc_iam_role.dataset_role.role_name +} + +locals { + knowledge_base_arn = "arn:aws:kendra:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:knowledgebase/example" +} + +resource "awscc_iotsitewise_dataset" "example" { + dataset_name = "example-dataset" + dataset_description = "Example IoT SiteWise Dataset" + + dataset_source = { + source_type = "KENDRA" + source_format = "KNOWLEDGE_BASE" + source_detail = { + kendra = { + knowledge_base_arn = local.knowledge_base_arn + role_arn = awscc_iam_role.dataset_role.arn + } + } + } + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} +``` ## Schema diff --git a/docs/resources/lightsail_instance_snapshot.md b/docs/resources/lightsail_instance_snapshot.md index 7544feab95..6c39967c8b 100644 --- a/docs/resources/lightsail_instance_snapshot.md +++ b/docs/resources/lightsail_instance_snapshot.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_lightsail_instance_snapshot Resource - terraform-provider-awscc" 
subcategory: "" description: |- @@ -10,7 +10,42 @@ description: |- Resource Type definition for AWS::Lightsail::InstanceSnapshot +## Example Usage + +### Create Lightsail Instance Snapshot + +Creates a snapshot of an Amazon Lightsail instance, demonstrating the configuration of both the source Lightsail instance and its snapshot using the AWSCC provider. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. + +```terraform +# Get current AWS region +data "aws_region" "current" {} +# Create a Lightsail instance first to take a snapshot from +resource "awscc_lightsail_instance" "example" { + instance_name = "example-instance" + availability_zone = "${data.aws_region.current.name}a" + blueprint_id = "amazon_linux_2" + bundle_id = "nano_2_0" + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +# Create the instance snapshot +resource "awscc_lightsail_instance_snapshot" "example" { + instance_name = awscc_lightsail_instance.example.instance_name + instance_snapshot_name = "example-snapshot" + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} +``` ## Schema diff --git a/docs/resources/neptune_db_cluster_parameter_group.md b/docs/resources/neptune_db_cluster_parameter_group.md index c111d9646c..70261789e8 100644 --- a/docs/resources/neptune_db_cluster_parameter_group.md +++ b/docs/resources/neptune_db_cluster_parameter_group.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_neptune_db_cluster_parameter_group Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,33 @@ description: |- The AWS::Neptune::DBClusterParameterGroup resource creates a new Amazon Neptune DB cluster parameter group +## Example Usage + +### Configure 
Neptune DB Cluster Parameters + +Creates a Neptune DB cluster parameter group with custom configuration settings, enabling audit logging and setting query timeout parameters for Neptune 1.2 family databases. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. +```terraform +# Example Neptune DB Cluster Parameter Group +resource "awscc_neptune_db_cluster_parameter_group" "example" { + name = "example-neptune-cluster-pg" + family = "neptune1.2" + description = "Example Neptune cluster parameter group" + + # Example parameters in JSON format + parameters = jsonencode({ + "neptune_enable_audit_log" = "1" + "neptune_query_timeout" = "120000" + }) + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} +``` ## Schema diff --git a/docs/resources/networkfirewall_vpc_endpoint_association.md b/docs/resources/networkfirewall_vpc_endpoint_association.md index 619dd109d8..dabddcf0c2 100644 --- a/docs/resources/networkfirewall_vpc_endpoint_association.md +++ b/docs/resources/networkfirewall_vpc_endpoint_association.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_networkfirewall_vpc_endpoint_association Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,37 @@ description: |- Resource type definition for AWS::NetworkFirewall::VpcEndpointAssociation +## Example Usage + +### Network Firewall VPC Association + +Associates a Network Firewall with a VPC endpoint by configuring the subnet mapping and IP address type, enabling the firewall to protect network traffic in the specified VPC subnet. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. 
While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. + +```terraform +# Example of Network Firewall VPC Endpoint Association configuration +resource "awscc_networkfirewall_vpc_endpoint_association" "example" { + # The ARN of an existing Network Firewall + firewall_arn = "arn:aws:network-firewall:us-west-2:123456789012:firewall/example-firewall" + # The ID of an existing VPC + vpc_id = "vpc-1234567890abcdef0" + + # The subnet mapping configuration + subnet_mapping = { + subnet_id = "subnet-1234567890abcdef0" + ip_address_type = "IPV4" + } + + description = "Example VPC endpoint association" + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} +``` ## Schema diff --git a/docs/resources/omics_workflow_version.md index 886a54eb82..35cd4406de 100644 --- a/docs/resources/omics_workflow_version.md +++ b/docs/resources/omics_workflow_version.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_omics_workflow_version Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,65 @@ description: |- Definition of AWS::Omics::WorkflowVersion Resource Type. - +## Example Usage + +### Deploy Omics Workflow Version + +Creates an AWS Omics workflow version with an S3-based definition file, supporting infrastructure (S3 bucket), and a configurable parameter template for running genomics workflows with the WDL engine. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+ +```terraform +# Get AWS account ID and region +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} + +# Create S3 bucket for workflow files +resource "awscc_s3_bucket" "workflow" { + bucket_name = "omics-workflow-${data.aws_caller_identity.current.account_id}-${data.aws_region.current.name}" + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +# Upload workflow file (keeping AWS standard as no AWSCC equivalent) +resource "aws_s3_object" "workflow" { + bucket = awscc_s3_bucket.workflow.id + key = "workflow.wdl" + source = "workflow.wdl" + etag = filemd5("workflow.wdl") +} + +# Create the Omics Workflow +resource "awscc_omics_workflow" "example" { + name = "example-workflow" + description = "Example Omics Workflow" + engine = "WDL" + definition_uri = "s3://${awscc_s3_bucket.workflow.id}/${aws_s3_object.workflow.key}" + tags = { + "Modified By" = "AWSCC" + } +} + +# Create a workflow version +resource "awscc_omics_workflow_version" "example" { + workflow_id = awscc_omics_workflow.example.id + version_name = "v1.0.0" + description = "Example workflow version" + engine = "WDL" + definition_uri = "s3://${awscc_s3_bucket.workflow.id}/${aws_s3_object.workflow.key}" + parameter_template = { + "name" = { + description = "Name to say hello to" + optional = true + } + } + tags = { + "Modified By" = "AWSCC" + } +} +``` ## Schema diff --git a/docs/resources/quicksight_folder.md b/docs/resources/quicksight_folder.md index 746eed5d08..66abc54263 100644 --- a/docs/resources/quicksight_folder.md +++ b/docs/resources/quicksight_folder.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_quicksight_folder Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,51 @@ description: |- Definition of the AWS::QuickSight::Folder Resource Type. 
- +## Example Usage + +### QuickSight Folder with Shared Permissions + +Creates a shared QuickSight folder and grants full folder management permissions to an administrator user and read and contribute access to a team group. + +```terraform +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} + +resource "awscc_quicksight_folder" "example" { + aws_account_id = data.aws_caller_identity.current.account_id + folder_id = "analytics-team-folder" + name = "example" + folder_type = "SHARED" + sharing_model = "ACCOUNT" + + # Grant permissions to users and groups + permissions = [ + { + principal = "arn:aws:quicksight:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:user/default/analytics-admin" + actions = [ + "quicksight:CreateFolder", + "quicksight:DescribeFolder", + "quicksight:UpdateFolder", + "quicksight:DeleteFolder", + "quicksight:CreateFolderMembership", + "quicksight:DeleteFolderMembership", + "quicksight:DescribeFolderPermissions", + "quicksight:UpdateFolderPermissions" + ] + }, + { + principal = "arn:aws:quicksight:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:group/default/analytics-team" + actions = [ + "quicksight:DescribeFolder", + "quicksight:CreateFolderMembership" + ] + } + ] + + tags = [ + { + key = "ModifiedBy" + value = "AWSCC" + } + ] +} +``` ## Schema diff --git a/docs/resources/s3express_access_point.md index e6991be479..5c1824b6d3 100644 --- a/docs/resources/s3express_access_point.md +++ b/docs/resources/s3express_access_point.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_s3express_access_point Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,64 @@ description: |- The AWS::S3Express::AccessPoint resource is an Amazon S3 resource type that you can use to access buckets. 
- +## Example Usage + +### S3 Express Access Point with Scoped Permissions + +Creates an S3 Express access point for a directory bucket with scoped permissions for specific prefixes and operations, while enforcing strict public access blocking for enhanced security. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. + +```terraform +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} + +# Create a directory bucket first (requires S3 Express) +resource "awscc_s3express_directory_bucket" "example" { + bucket_name = "example-express-directory-bucket" + data_redundancy = "SingleAvailabilityZone" + location_name = "${data.aws_region.current.name}a" +} + +data "aws_iam_policy_document" "access_point_policy" { + statement { + effect = "Allow" + principals { + type = "AWS" + identifiers = [ + data.aws_caller_identity.current.arn + ] + } + actions = [ + "s3:GetObject", + "s3:PutObject", + "s3:ListBucket" + ] + resources = [ + "arn:aws:s3:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:accesspoint/*", + "arn:aws:s3:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:accesspoint/*/object/*" + ] + } +} + +resource "awscc_s3express_access_point" "example" { + name = "example-access-point" + bucket = awscc_s3express_directory_bucket.example.id + # The data source's json attribute is already a JSON string; wrapping it in jsonencode would double-encode it + policy = data.aws_iam_policy_document.access_point_policy.json + + public_access_block_configuration = { + block_public_acls = true + block_public_policy = true + ignore_public_acls = true + restrict_public_buckets = true + } + + scope = { + permissions = ["GetObject", "PutObject", "ListBucket"] + prefixes = ["documents/", "images/"] + } +} +``` ## Schema diff --git 
a/docs/resources/ssmguiconnect_preferences.md b/docs/resources/ssmguiconnect_preferences.md index 91ac4572a1..0ba698d1d7 100644 --- a/docs/resources/ssmguiconnect_preferences.md +++ b/docs/resources/ssmguiconnect_preferences.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs page_title: "awscc_ssmguiconnect_preferences Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,87 @@ description: |- Definition of AWS::SSMGuiConnect::Preferences Resource Type - +## Example Usage + +### SSM GUI Connect Recording Configuration + +Sets up SSM GUI Connect with secure session recording preferences using KMS encryption and S3 bucket storage for recorded sessions, ensuring encrypted and private storage of connection recordings. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+ +```terraform +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} + +# Create a basic KMS key for testing +resource "awscc_kms_key" "recording_key" { + description = "KMS key for SSM GUI Connect recording encryption" + key_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Sid = "Enable IAM User Permissions" + Effect = "Allow" + Principal = { + AWS = "*" + } + Action = "kms:*" + Resource = "*" + Condition = { + StringEquals = { + "kms:CallerAccount" = data.aws_caller_identity.current.account_id + } + } + } + ] + }) + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +# Create S3 bucket for recordings with integrated public access block and encryption +resource "awscc_s3_bucket" "recordings" { + bucket_name = "ssm-gui-connect-recordings-${data.aws_caller_identity.current.account_id}-${data.aws_region.current.name}" + + public_access_block_configuration = { + block_public_acls = true + block_public_policy = true + ignore_public_acls = true + restrict_public_buckets = true + } + + bucket_encryption = { + server_side_encryption_configuration = [{ + server_side_encryption_by_default = { + kms_master_key_id = awscc_kms_key.recording_key.id + sse_algorithm = "aws:kms" + } + }] + } + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +resource "awscc_ssmguiconnect_preferences" "example" { + connection_recording_preferences = { + # Use the key ARN here, not the key ID + kms_key_arn = awscc_kms_key.recording_key.arn + recording_destinations = { + s3_buckets = [ + { + bucket_name = awscc_s3_bucket.recordings.id + bucket_owner = data.aws_caller_identity.current.account_id + } + ] + } + } +} +``` ## Schema diff --git a/docs/resources/xray_transaction_search_config.md index 93885d83c5..6624ebb3aa 100644 --- a/docs/resources/xray_transaction_search_config.md +++ b/docs/resources/xray_transaction_search_config.md @@ -1,5 +1,5 @@ + --- -# generated by https://github.com/hashicorp/terraform-plugin-docs 
page_title: "awscc_xray_transaction_search_config Resource - terraform-provider-awscc" subcategory: "" description: |- @@ -10,7 +10,49 @@ description: |- This schema provides construct and validation rules for AWS-XRay TransactionSearchConfig resource parameters. +## Example Usage + +### Configure XRay Transaction Search with CloudWatch Integration + +Configures AWS X-Ray transaction search with 100% indexing percentage while setting up the necessary CloudWatch Logs permissions to allow X-Ray service to store and process trace data. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. +```terraform +# Required policy for CloudWatch Logs +data "aws_iam_policy_document" "xray_cloudwatch_policy" { + statement { + effect = "Allow" + actions = [ + "logs:CreateLogGroup", + "logs:CreateLogStream", + "logs:PutLogEvents", + "logs:GetLogEvents", + "logs:PutRetentionPolicy", + "logs:GetLogGroupFields", + "logs:GetQueryResults" + ] + resources = ["*"] + principals { + type = "Service" + identifiers = ["xray.amazonaws.com"] + } + } +} + +# Create the CloudWatch Logs resource policy +resource "aws_cloudwatch_log_resource_policy" "xray" { + policy_document = data.aws_iam_policy_document.xray_cloudwatch_policy.json + policy_name = "xray-spans-policy" +} + +# XRay Transaction Search Config +resource "awscc_xray_transaction_search_config" "example" { + indexing_percentage = 100 + + depends_on = [aws_cloudwatch_log_resource_policy.xray] +} +``` ## Schema diff --git a/examples/resources/awscc_applicationsignals_discovery/main.tf b/examples/resources/awscc_applicationsignals_discovery/main.tf new file mode 100644 index 0000000000..b249e4b260 --- /dev/null +++ b/examples/resources/awscc_applicationsignals_discovery/main.tf @@ -0,0 +1,2 @@ 
+resource "awscc_applicationsignals_discovery" "example" { +} \ No newline at end of file diff --git a/examples/resources/awscc_batch_consumable_resource/main.tf b/examples/resources/awscc_batch_consumable_resource/main.tf new file mode 100644 index 0000000000..3ccb342dab --- /dev/null +++ b/examples/resources/awscc_batch_consumable_resource/main.tf @@ -0,0 +1,14 @@ +# Batch Consumable Resource Example +resource "awscc_batch_consumable_resource" "demo" { + resource_type = "REPLENISHABLE" + total_quantity = 10 + consumable_resource_name = "demo-license-resource" + + tags = [{ + key = "Environment" + value = "demo" + }, { + key = "Modified By" + value = "AWSCC" + }] +} \ No newline at end of file diff --git a/examples/resources/awscc_bedrock_intelligent_prompt_router/main.tf b/examples/resources/awscc_bedrock_intelligent_prompt_router/main.tf new file mode 100644 index 0000000000..002016ffef --- /dev/null +++ b/examples/resources/awscc_bedrock_intelligent_prompt_router/main.tf @@ -0,0 +1,35 @@ +data "aws_region" "current" {} + +# Create the Bedrock Intelligent Prompt Router +resource "awscc_bedrock_intelligent_prompt_router" "example" { + prompt_router_name = "example-intelligent-prompt-router" + description = "Example intelligent prompt router for routing between Claude models based on response quality" + + # Primary models to route between (limited to exactly 2 models) + models = [ + { + model_arn = "arn:aws:bedrock:${data.aws_region.current.name}::foundation-model/anthropic.claude-3-5-sonnet-20241022-v2:0" + }, + { + model_arn = "arn:aws:bedrock:${data.aws_region.current.name}::foundation-model/anthropic.claude-3-haiku-20240307-v1:0" + } + ] + + # Fallback model (must be one of the models in the models list above) + fallback_model = { + model_arn = "arn:aws:bedrock:${data.aws_region.current.name}::foundation-model/anthropic.claude-3-haiku-20240307-v1:0" + } + + # Routing criteria based on response quality difference + # Value must be a multiple of 5 (likely as 
percentage: 5, 10, 15, 20, etc.) + routing_criteria = { + response_quality_difference = 20 + } + + tags = [ + { + key = "ModifiedBy" + value = "AWSCC" + } + ] +} diff --git a/examples/resources/awscc_cognito_user_pool_domain/main.tf b/examples/resources/awscc_cognito_user_pool_domain/main.tf new file mode 100644 index 0000000000..151c3f1129 --- /dev/null +++ b/examples/resources/awscc_cognito_user_pool_domain/main.tf @@ -0,0 +1,32 @@ +# Get current account ID for dynamic naming +data "aws_caller_identity" "current" {} + +# Create the Cognito User Pool +resource "aws_cognito_user_pool" "example" { + name = "my-user-pool" + + auto_verified_attributes = ["email"] + username_attributes = ["email"] + + verification_message_template { + default_email_option = "CONFIRM_WITH_CODE" + } + + admin_create_user_config { + allow_admin_create_user_only = false + } + + email_configuration { + email_sending_account = "COGNITO_DEFAULT" + } + + tags = { + "Modified By" = "AWS" + } +} + +# Create the User Pool Domain +resource "awscc_cognito_user_pool_domain" "example" { + domain = "my-example-domain-${data.aws_caller_identity.current.account_id}" + user_pool_id = aws_cognito_user_pool.example.id +} \ No newline at end of file diff --git a/examples/resources/awscc_deadline_limit/main.tf b/examples/resources/awscc_deadline_limit/main.tf new file mode 100644 index 0000000000..c3c61bbf3e --- /dev/null +++ b/examples/resources/awscc_deadline_limit/main.tf @@ -0,0 +1,20 @@ +resource "awscc_deadline_farm" "example" { + display_name = "ExampleRenderFarm" + description = "Example Deadline Farm for demonstrating limit configuration" + + tags = [ + { + key = "ModifiedBy" + value = "AWSCC" + } + ] +} + +# Create a Deadline Limit for CPU usage +resource "awscc_deadline_limit" "example" { + farm_id = awscc_deadline_farm.example.farm_id + display_name = "CPU Limit" + description = "CPU core usage limit for the render farm" + amount_requirement_name = "amount.cpu" + max_count = 100 +} diff --git 
a/examples/resources/awscc_deadline_queue/main.tf b/examples/resources/awscc_deadline_queue/main.tf new file mode 100644 index 0000000000..b513f131bc --- /dev/null +++ b/examples/resources/awscc_deadline_queue/main.tf @@ -0,0 +1,97 @@ +# Create S3 bucket for job attachments +resource "awscc_s3_bucket" "example" { + bucket_name = "deadline-job-attachments-${random_id.bucket_suffix.hex}" + + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} + +# Generate random suffix for bucket name uniqueness +resource "random_id" "bucket_suffix" { + byte_length = 4 +} + +resource "awscc_deadline_farm" "example" { + display_name = "ExampleRenderFarm" + description = "Example Deadline Farm for queue demonstration" + + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} + +# Create storage profiles for different operating systems +resource "awscc_deadline_storage_profile" "linux_storage" { + display_name = "Linux Shared Storage" + farm_id = awscc_deadline_farm.example.farm_id + os_family = "LINUX" + + file_system_locations = [{ + name = "shared storage" + path = "/mnt/shared" + type = "SHARED" + }, { + name = "render assets" + path = "/mnt/assets" + type = "SHARED" + }] +} + +resource "awscc_deadline_storage_profile" "windows_storage" { + display_name = "Windows Shared Storage" + farm_id = awscc_deadline_farm.example.farm_id + os_family = "WINDOWS" + + file_system_locations = [{ + name = "shared storage" + path = "Z:\\" + type = "SHARED" + }, { + name = "render assets" + path = "Y:\\" + type = "SHARED" + }] +} + +# Create an advanced Deadline Queue with job attachment settings +resource "awscc_deadline_queue" "example" { + display_name = "AdvancedRenderQueue" + description = "Advanced render queue with S3 job attachments and custom settings" + farm_id = awscc_deadline_farm.example.farm_id + default_budget_action = "STOP_SCHEDULING_AND_COMPLETE_TASKS" + + # Configure job attachment settings for S3 + job_attachment_settings = { + s3_bucket_name = 
awscc_s3_bucket.example.bucket_name + root_prefix = "job-attachments/" + } + + # Configure job run-as user settings for POSIX systems + job_run_as_user = { + run_as = "QUEUE_CONFIGURED_USER" + posix = { + user = "deadline-worker" + group = "deadline-group" + } + } + + # Specify allowed storage profile IDs (dynamically referenced) + allowed_storage_profile_ids = [ + awscc_deadline_storage_profile.linux_storage.storage_profile_id, + awscc_deadline_storage_profile.windows_storage.storage_profile_id + ] + + # Specify required file system location names + required_file_system_location_names = [ + "shared storage", + "render assets" + ] + + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} \ No newline at end of file diff --git a/examples/resources/awscc_deadline_queue_environment/main.tf b/examples/resources/awscc_deadline_queue_environment/main.tf new file mode 100644 index 0000000000..375cbd2c6a --- /dev/null +++ b/examples/resources/awscc_deadline_queue_environment/main.tf @@ -0,0 +1,30 @@ +resource "awscc_deadline_farm" "example" { + display_name = "Example Farm" + description = "Example Deadline Farm" + + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} + +resource "awscc_deadline_queue" "example" { + display_name = "Example Queue" + farm_id = awscc_deadline_farm.example.farm_id +} + +resource "awscc_deadline_queue_environment" "example" { + farm_id = awscc_deadline_farm.example.farm_id + queue_id = awscc_deadline_queue.example.queue_id + priority = 50 + template_type = "JSON" + template = jsonencode({ + specificationVersion = "environment-2023-09" + environment = { + name = "ExampleEnvironment" + variables = { + EXAMPLE_VAR = "example_value" + } + } + }) +} diff --git a/examples/resources/awscc_deadline_queue_fleet_association/main.tf b/examples/resources/awscc_deadline_queue_fleet_association/main.tf new file mode 100644 index 0000000000..4b20875887 --- /dev/null +++ b/examples/resources/awscc_deadline_queue_fleet_association/main.tf @@ -0,0 +1,108 
@@ +resource "awscc_deadline_farm" "example" { + display_name = "example" + description = "Example" + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} + +# Create IAM role for the queue session +resource "awscc_iam_role" "queue_session_role" { + role_name = "example" + assume_role_policy_document = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = "credentials.deadline.amazonaws.com" + } + } + ] + }) + + # Add basic permissions for queue session operations + managed_policy_arns = [ + "arn:aws:iam::aws:policy/AWSDeadlineCloud-UserAccessJobs" + ] +} + +# Create the Deadline Queue +resource "awscc_deadline_queue" "example" { + display_name = "example" + farm_id = awscc_deadline_farm.example.farm_id + + job_run_as_user = { + run_as = "QUEUE_CONFIGURED_USER" + posix = { + user = "deadline-user" + group = "deadline-group" + } + } + + role_arn = awscc_iam_role.queue_session_role.arn +} + + +# Create IAM role for the fleet +resource "awscc_iam_role" "complete_fleet_role" { + role_name = "deadline-fleet-role" + assume_role_policy_document = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = "credentials.deadline.amazonaws.com" + } + } + ] + }) + + # Add basic permissions for Deadline fleet operations + managed_policy_arns = [ + "arn:aws:iam::aws:policy/AWSDeadlineCloud-FleetWorker" + ] +} + +# Create the Deadline Fleet +resource "awscc_deadline_fleet" "example" { + display_name = "example" + farm_id = awscc_deadline_farm.example.farm_id + max_worker_count = 20 + min_worker_count = 1 + role_arn = awscc_iam_role.complete_fleet_role.arn + + configuration = { + service_managed_ec_2 = { + instance_capabilities = { + cpu_architecture_type = "x86_64" + os_family = "LINUX" + memory_mi_b = { + min = 4096 + max = 16384 + } + v_cpu_count = { + min = 2 + max = 8 + } + root_ebs_volume = { + size_gi_b = 100 + } + } + 
instance_market_options = { + type = "spot" + } + } + } +} + +# Create Queue Fleet Association +resource "awscc_deadline_queue_fleet_association" "complete_association" { + farm_id = awscc_deadline_farm.example.farm_id + queue_id = awscc_deadline_queue.example.queue_id + fleet_id = awscc_deadline_fleet.example.fleet_id +} diff --git a/examples/resources/awscc_deadline_queue_limit_association/main.tf b/examples/resources/awscc_deadline_queue_limit_association/main.tf new file mode 100644 index 0000000000..f1cccc18ff --- /dev/null +++ b/examples/resources/awscc_deadline_queue_limit_association/main.tf @@ -0,0 +1,60 @@ +resource "awscc_deadline_farm" "example" { + display_name = "example" + description = "Example" + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} + +# Create IAM role for the queue session +resource "awscc_iam_role" "queue_session_role" { + role_name = "example" + assume_role_policy_document = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = "credentials.deadline.amazonaws.com" + } + } + ] + }) + + # Add basic permissions for queue session operations + managed_policy_arns = [ + "arn:aws:iam::aws:policy/AWSDeadlineCloud-UserAccessJobs" + ] +} + +# Create the Deadline Queue +resource "awscc_deadline_queue" "example" { + display_name = "example" + farm_id = awscc_deadline_farm.example.farm_id + + job_run_as_user = { + run_as = "QUEUE_CONFIGURED_USER" + posix = { + user = "deadline-user" + group = "deadline-group" + } + } + + role_arn = awscc_iam_role.queue_session_role.arn +} + +resource "awscc_deadline_limit" "example" { + display_name = "CPULimit" + farm_id = awscc_deadline_farm.example.farm_id + amount_requirement_name = "amount.cpu_cores" + max_count = 100 +} + + +resource "awscc_deadline_queue_limit_association" "cpu_association" { + farm_id = awscc_deadline_farm.example.farm_id + queue_id = awscc_deadline_queue.example.queue_id + limit_id = 
awscc_deadline_limit.example.limit_id +} diff --git a/examples/resources/awscc_deadline_storage_profile/main.tf b/examples/resources/awscc_deadline_storage_profile/main.tf new file mode 100644 index 0000000000..04bfa2e338 --- /dev/null +++ b/examples/resources/awscc_deadline_storage_profile/main.tf @@ -0,0 +1,25 @@ +resource "awscc_deadline_farm" "example" { + display_name = "ExampleRenderFarm" + description = "Example Deadline render farm for Linux storage profile" + + tags = [ + { + key = "ManagedBy" + value = "AWSCC" + } + ] +} + +resource "awscc_deadline_storage_profile" "example" { + display_name = "Linux Storage Profile" + farm_id = awscc_deadline_farm.example.farm_id + os_family = "LINUX" + + file_system_locations = [ + { + name = "SharedAssets" + path = "/mnt/shared/assets" + type = "SHARED" + } + ] +} \ No newline at end of file diff --git a/examples/resources/awscc_dsql_cluster/main.tf b/examples/resources/awscc_dsql_cluster/main.tf new file mode 100644 index 0000000000..7e6f3d8941 --- /dev/null +++ b/examples/resources/awscc_dsql_cluster/main.tf @@ -0,0 +1,11 @@ +# Basic DSQL Cluster +resource "awscc_dsql_cluster" "example" { + deletion_protection_enabled = false + + tags = [ + { + key = "ModifiedBy" + value = "AWSCC" + } + ] +} diff --git a/examples/resources/awscc_ec2_route_server/main.tf b/examples/resources/awscc_ec2_route_server/main.tf new file mode 100644 index 0000000000..ec9e238763 --- /dev/null +++ b/examples/resources/awscc_ec2_route_server/main.tf @@ -0,0 +1,33 @@ +# Get current AWS region details +data "aws_region" "current" {} + +# VPC and subnet for the route server +resource "awscc_ec2_vpc" "main" { + cidr_block = "10.0.0.0/16" + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +resource "awscc_ec2_subnet" "main" { + vpc_id = awscc_ec2_vpc.main.id + cidr_block = "10.0.1.0/24" + availability_zone = "${data.aws_region.current.name}a" + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +# Create Route Server +resource 
"awscc_ec2_route_server" "example" { + amazon_side_asn = 65000 + sns_notifications_enabled = true + persist_routes = "enable" + persist_routes_duration = 5 + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} \ No newline at end of file diff --git a/examples/resources/awscc_ec2_route_server_peer/main.tf b/examples/resources/awscc_ec2_route_server_peer/main.tf new file mode 100644 index 0000000000..e44b3b6664 --- /dev/null +++ b/examples/resources/awscc_ec2_route_server_peer/main.tf @@ -0,0 +1,92 @@ +# Data source for region +data "aws_region" "current" {} + +# Create VPC +resource "awscc_ec2_vpc" "main" { + cidr_block = "10.0.0.0/16" + enable_dns_hostnames = true + enable_dns_support = true + + tags = [{ + key = "Name" + value = "route-server-vpc" + }, { + key = "Modified By" + value = "AWSCC" + }] +} + +# Create Subnet +resource "awscc_ec2_subnet" "main" { + vpc_id = awscc_ec2_vpc.main.id + cidr_block = "10.0.1.0/24" + availability_zone = "${data.aws_region.current.name}a" + + tags = [{ + key = "Name" + value = "route-server-subnet" + }, { + key = "Modified By" + value = "AWSCC" + }] +} + +# Create an internet gateway +resource "awscc_ec2_internet_gateway" "main" { + tags = [{ + key = "Name" + value = "route-server-igw" + }, { + key = "Modified By" + value = "AWSCC" + }] +} + +# Attach Internet Gateway to VPC +resource "aws_internet_gateway_attachment" "main" { + internet_gateway_id = awscc_ec2_internet_gateway.main.id + vpc_id = awscc_ec2_vpc.main.id +} + +# Route Server +resource "awscc_ec2_transit_gateway" "main" { + description = "Transit gateway for route server" + tags = [{ + key = "Name" + value = "route-server-tgw" + }, { + key = "Modified By" + value = "AWSCC" + }] +} + +# Create Route Server endpoint +resource "awscc_ec2_transit_gateway_vpc_attachment" "main" { + vpc_id = awscc_ec2_vpc.main.id + subnet_ids = [awscc_ec2_subnet.main.id] + transit_gateway_id = awscc_ec2_transit_gateway.main.id + + tags = [{ + key = "Name" + value = 
"route-server-endpoint" + }, { + key = "Modified By" + value = "AWSCC" + }] +} + +# Create Route Server Peer +resource "awscc_ec2_route_server_peer" "example" { + route_server_endpoint_id = awscc_ec2_transit_gateway_vpc_attachment.main.id + peer_address = "10.0.1.100" + + bgp_options = { + peer_asn = 65000 + peer_liveness_detection = "bgp-keepalive" + } + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} \ No newline at end of file diff --git a/examples/resources/awscc_evs_environment/main.tf b/examples/resources/awscc_evs_environment/main.tf new file mode 100644 index 0000000000..f3a1c2e171 --- /dev/null +++ b/examples/resources/awscc_evs_environment/main.tf @@ -0,0 +1,148 @@ +# Create a VPC for the EVS environment +resource "awscc_ec2_vpc" "evs" { + cidr_block = "10.0.0.0/16" + tags = [{ + key = "Name" + value = "evs-vpc" + }] +} + +# Create a subnet for service access +resource "awscc_ec2_subnet" "service_access" { + vpc_id = awscc_ec2_vpc.evs.id + cidr_block = "10.0.1.0/24" + tags = [{ + key = "Name" + value = "evs-service-access" + }] +} + +# Create a security group for EVS service access +resource "awscc_ec2_security_group" "evs_service" { + group_name = "evs-service-access" + group_description = "Security group for EVS service access" + vpc_id = awscc_ec2_vpc.evs.id + security_group_ingress = [{ + from_port = 443 + to_port = 443 + ip_protocol = "tcp" + cidr_ip = "0.0.0.0/0" + }] + security_group_egress = [{ + from_port = -1 + to_port = -1 + ip_protocol = "-1" + cidr_ip = "0.0.0.0/0" + }] + tags = [{ + key = "Name" + value = "evs-service-sg" + }] +} + +# Create an SSH key pair for the hosts +resource "awscc_ec2_key_pair" "evs_hosts" { + key_name = "evs-hosts-key" + tags = [{ + key = "Name" + value = "evs-hosts-key" + }] +} + +# Create the EVS Environment +resource "awscc_evs_environment" "example" { + environment_name = "example-evs" + site_id = "examplesite" + vpc_id = awscc_ec2_vpc.evs.id + vcf_version = "VCF-5.2.1" + + service_access_subnet_id = 
awscc_ec2_subnet.service_access.id + terms_accepted = true + + connectivity_info = { + private_route_server_peerings = ["10.0.0.1", "10.0.0.2"] + } + + license_info = { + solution_key = "ABCD1-EFGH2-IJKL3-MNOP4-QRST5" + vsan_key = "VSAN1-VSAN2-VSAN3-VSAN4-VSAN5" + } + + vcf_hostnames = { + cloud_builder = "cloudbuilder" + nsx = "nsx" + nsx_edge_1 = "nsxedge1" + nsx_edge_2 = "nsxedge2" + nsx_manager_1 = "nsxmanager1" + nsx_manager_2 = "nsxmanager2" + nsx_manager_3 = "nsxmanager3" + sddc_manager = "sddcmanager" + v_center = "vcenter" + } + + service_access_security_groups = { + security_groups = [awscc_ec2_security_group.evs_service.id] + } + + initial_vlans = { + vmk_management = { + cidr = "10.0.10.0/24" + } + vm_management = { + cidr = "10.0.11.0/24" + } + v_san = { + cidr = "10.0.12.0/24" + } + v_motion = { + cidr = "10.0.13.0/24" + } + v_tep = { + cidr = "10.0.14.0/24" + } + edge_v_tep = { + cidr = "10.0.15.0/24" + } + nsx_up_link = { + cidr = "10.0.16.0/24" + } + hcx = { + cidr = "10.0.17.0/24" + } + expansion_vlan_1 = { + cidr = "10.0.18.0/24" + } + expansion_vlan_2 = { + cidr = "10.0.19.0/24" + } + } + + # Required host configuration (must have exactly 4 hosts) + hosts = [ + { + instance_type = "i4i.metal" + host_name = "evs-host-1" + key_name = awscc_ec2_key_pair.evs_hosts.key_name + }, + { + instance_type = "i4i.metal" + host_name = "evs-host-2" + key_name = awscc_ec2_key_pair.evs_hosts.key_name + }, + { + instance_type = "i4i.metal" + host_name = "evs-host-3" + key_name = awscc_ec2_key_pair.evs_hosts.key_name + }, + { + instance_type = "i4i.metal" + host_name = "evs-host-4" + key_name = awscc_ec2_key_pair.evs_hosts.key_name + } + ] + + tags = [{ + key = "ModifiedBy" + value = "AWSCC" + }] +} \ No newline at end of file diff --git a/examples/resources/awscc_guardduty_publishing_destination/main.tf b/examples/resources/awscc_guardduty_publishing_destination/main.tf new file mode 100644 index 0000000000..0b0f6fe2f1 --- /dev/null +++ 
b/examples/resources/awscc_guardduty_publishing_destination/main.tf @@ -0,0 +1,92 @@ +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} +data "aws_guardduty_detector" "existing" {} + +# S3 bucket for findings +resource "awscc_s3_bucket" "findings" { + bucket_name = "guardduty-findings-${data.aws_region.current.name}-${data.aws_caller_identity.current.account_id}" + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +# KMS key for encrypting findings +resource "awscc_kms_key" "findings" { + description = "KMS key for GuardDuty findings" + key_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Sid = "Enable IAM User Permissions" + Effect = "Allow" + Principal = { + AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" + } + Action = "kms:*" + Resource = "*" + }, + { + Sid = "Allow GuardDuty to encrypt findings" + Effect = "Allow" + Principal = { + Service = "guardduty.amazonaws.com" + } + Action = [ + "kms:GenerateDataKey", + "kms:Encrypt" + ] + Resource = "*" + } + ] + }) + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +# S3 Bucket policy allowing GuardDuty to write findings +resource "awscc_s3_bucket_policy" "findings" { + bucket = awscc_s3_bucket.findings.id + policy_document = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Sid = "Allow GuardDuty to write findings" + Effect = "Allow" + Principal = { + Service = "guardduty.amazonaws.com" + } + Action = [ + "s3:GetBucketLocation", + "s3:PutObject" + ] + Resource = [ + "arn:aws:s3:::${awscc_s3_bucket.findings.id}", + "arn:aws:s3:::${awscc_s3_bucket.findings.id}/*" + ], + Condition = { + StringEquals = { + "aws:SourceArn" = "arn:aws:guardduty:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:detector/${data.aws_guardduty_detector.existing.id}", + "aws:SourceAccount" = data.aws_caller_identity.current.account_id + } + } + } + ] + }) +} + +# GuardDuty Publishing Destination +resource 
"awscc_guardduty_publishing_destination" "example" { + detector_id = data.aws_guardduty_detector.existing.id + destination_type = "S3" + destination_properties = { + destination_arn = "arn:aws:s3:::${awscc_s3_bucket.findings.id}" + kms_key_arn = awscc_kms_key.findings.arn + } + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} \ No newline at end of file diff --git a/examples/resources/awscc_iotsitewise_dataset/main.tf b/examples/resources/awscc_iotsitewise_dataset/main.tf new file mode 100644 index 0000000000..687f1966ef --- /dev/null +++ b/examples/resources/awscc_iotsitewise_dataset/main.tf @@ -0,0 +1,75 @@ +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} + +resource "awscc_iam_role" "dataset_role" { + role_name = "iotsitewise-dataset-role" + assume_role_policy_document = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = "iotsitewise.amazonaws.com" + } + } + ] + }) + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +resource "aws_iam_policy" "dataset_policy" { + name = "iotsitewise-dataset-policy" + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "kendra:ListTagsForResource", + "kendra:GetKnowledgeBase", + "kendra:DescribeKnowledgeBase" + ] + Resource = "*" + } + ] + }) + + tags = { + "Modified By" = "AWSCC" + } +} + +resource "aws_iam_role_policy_attachment" "dataset_role_policy" { + policy_arn = aws_iam_policy.dataset_policy.arn + role = awscc_iam_role.dataset_role.role_name +} + +locals { + knowledge_base_arn = "arn:aws:kendra:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:knowledgebase/example" +} + +resource "awscc_iotsitewise_dataset" "example" { + dataset_name = "example-dataset" + dataset_description = "Example IoT SiteWise Dataset" + + dataset_source = { + source_type = "KENDRA" + source_format = "KNOWLEDGE_BASE" + source_detail = { + kendra = 
{ + knowledge_base_arn = local.knowledge_base_arn + role_arn = awscc_iam_role.dataset_role.arn + } + } + } + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} \ No newline at end of file diff --git a/examples/resources/awscc_lightsail_instance_snapshot/main.tf b/examples/resources/awscc_lightsail_instance_snapshot/main.tf new file mode 100644 index 0000000000..8d6205137c --- /dev/null +++ b/examples/resources/awscc_lightsail_instance_snapshot/main.tf @@ -0,0 +1,26 @@ +# Get current AWS region +data "aws_region" "current" {} + +# Create a Lightsail instance first to take a snapshot from +resource "awscc_lightsail_instance" "example" { + instance_name = "example-instance" + availability_zone = "${data.aws_region.current.name}a" + blueprint_id = "amazon_linux_2" + bundle_id = "nano_2_0" + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +# Create the instance snapshot +resource "awscc_lightsail_instance_snapshot" "example" { + instance_name = awscc_lightsail_instance.example.instance_name + instance_snapshot_name = "example-snapshot" + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} \ No newline at end of file diff --git a/examples/resources/awscc_neptune_db_cluster_parameter_group/main.tf b/examples/resources/awscc_neptune_db_cluster_parameter_group/main.tf new file mode 100644 index 0000000000..c8a0a321d7 --- /dev/null +++ b/examples/resources/awscc_neptune_db_cluster_parameter_group/main.tf @@ -0,0 +1,17 @@ +# Example Neptune DB Cluster Parameter Group +resource "awscc_neptune_db_cluster_parameter_group" "example" { + name = "example-neptune-cluster-pg" + family = "neptune1.2" + description = "Example Neptune cluster parameter group" + + # Example parameters in JSON format + parameters = jsonencode({ + "neptune_enable_audit_log" = "1" + "neptune_query_timeout" = "120000" + }) + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} \ No newline at end of file diff --git 
a/examples/resources/awscc_networkfirewall_vpc_endpoint_association/main.tf b/examples/resources/awscc_networkfirewall_vpc_endpoint_association/main.tf new file mode 100644 index 0000000000..0591881930 --- /dev/null +++ b/examples/resources/awscc_networkfirewall_vpc_endpoint_association/main.tf @@ -0,0 +1,21 @@ +# Example of Network Firewall VPC Endpoint Association configuration +resource "awscc_networkfirewall_vpc_endpoint_association" "example" { + # The ARN of an existing Network Firewall + firewall_arn = "arn:aws:network-firewall:us-west-2:123456789012:firewall/example-firewall" + + # The ID of an existing VPC + vpc_id = "vpc-1234567890abcdef0" + + # The subnet mapping configuration + subnet_mapping = { + subnet_id = "subnet-1234567890abcdef0" + ip_address_type = "IPV4" + } + + description = "Example VPC endpoint association" + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} \ No newline at end of file diff --git a/examples/resources/awscc_omics_workflow_version/main.tf b/examples/resources/awscc_omics_workflow_version/main.tf new file mode 100644 index 0000000000..1d22524a8d --- /dev/null +++ b/examples/resources/awscc_omics_workflow_version/main.tf @@ -0,0 +1,49 @@ +# Get AWS account ID and region +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} + +# Create S3 bucket for workflow files +resource "awscc_s3_bucket" "workflow" { + bucket_name = "omics-workflow-${data.aws_caller_identity.current.account_id}-${data.aws_region.current.name}" + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +# Upload workflow file (keeping AWS standard as no AWSCC equivalent) +resource "aws_s3_object" "workflow" { + bucket = awscc_s3_bucket.workflow.id + key = "workflow.wdl" + source = "workflow.wdl" + etag = filemd5("workflow.wdl") +} + +# Create the Omics Workflow +resource "awscc_omics_workflow" "example" { + name = "example-workflow" + description = "Example Omics Workflow" + engine = "WDL" + definition_uri = 
"s3://${awscc_s3_bucket.workflow.id}/${aws_s3_object.workflow.key}" + tags = { + "Modified By" = "AWSCC" + } +} + +# Create a workflow version +resource "awscc_omics_workflow_version" "example" { + workflow_id = awscc_omics_workflow.example.id + version_name = "v1.0.0" + description = "Example workflow version" + engine = "WDL" + definition_uri = "s3://${awscc_s3_bucket.workflow.id}/${aws_s3_object.workflow.key}" + parameter_template = { + "name" = { + description = "Name to say hello to" + optional = true + } + } + tags = { + "Modified By" = "AWSCC" + } +} \ No newline at end of file diff --git a/examples/resources/awscc_quicksight_folder/main.tf b/examples/resources/awscc_quicksight_folder/main.tf new file mode 100644 index 0000000000..ea68471a86 --- /dev/null +++ b/examples/resources/awscc_quicksight_folder/main.tf @@ -0,0 +1,41 @@ +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} + +resource "awscc_quicksight_folder" "example" { + aws_account_id = data.aws_caller_identity.current.account_id + folder_id = "analytics-team-folder" + name = "example" + folder_type = "SHARED" + sharing_model = "ACCOUNT" + + # Grant permissions to users and groups + permissions = [ + { + principal = "arn:aws:quicksight:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:user/default/analytics-admin" + actions = [ + "quicksight:CreateFolder", + "quicksight:DescribeFolder", + "quicksight:UpdateFolder", + "quicksight:DeleteFolder", + "quicksight:CreateFolderMembership", + "quicksight:DeleteFolderMembership", + "quicksight:DescribeFolderPermissions", + "quicksight:UpdateFolderPermissions" + ] + }, + { + principal = "arn:aws:quicksight:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:group/default/analytics-team" + actions = [ + "quicksight:DescribeFolder", + "quicksight:CreateFolderMembership" + ] + } + ] + + tags = [ + { + key = "ModifiedBy" + value = "AWSCC" + } + ] +} diff --git 
a/examples/resources/awscc_s3express_access_point/main.tf b/examples/resources/awscc_s3express_access_point/main.tf new file mode 100644 index 0000000000..4bf0446e3f --- /dev/null +++ b/examples/resources/awscc_s3express_access_point/main.tf @@ -0,0 +1,48 @@ +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} + +# Create a directory bucket first (requires S3 Express) +resource "awscc_s3express_directory_bucket" "example" { + bucket_name = "example-express-directory-bucket" + data_redundancy = "SingleAvailabilityZone" + location_name = "${data.aws_region.current.name}a" +} + +data "aws_iam_policy_document" "access_point_policy" { + statement { + effect = "Allow" + principals { + type = "AWS" + identifiers = [ + data.aws_caller_identity.current.arn + ] + } + actions = [ + "s3:GetObject", + "s3:PutObject", + "s3:ListBucket" + ] + resources = [ + "arn:aws:s3:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:accesspoint/*", + "arn:aws:s3:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:accesspoint/*/object/*" + ] + } +} + +resource "awscc_s3express_access_point" "example" { + name = "example-access-point" + bucket = awscc_s3express_directory_bucket.example.id + # The json attribute is already a JSON string; wrapping it in jsonencode() would double-encode it + policy = data.aws_iam_policy_document.access_point_policy.json + + public_access_block_configuration = { + block_public_acls = true + block_public_policy = true + ignore_public_acls = true + restrict_public_buckets = true + } + + scope = { + permissions = ["GetObject", "PutObject", "ListBucket"] + prefixes = ["documents/", "images/"] + } +} \ No newline at end of file diff --git a/examples/resources/awscc_ssmguiconnect_preferences/main.tf b/examples/resources/awscc_ssmguiconnect_preferences/main.tf new file mode 100644 index 0000000000..ec6cd68298 --- /dev/null +++ b/examples/resources/awscc_ssmguiconnect_preferences/main.tf @@ -0,0 +1,71 @@ +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} + +# 
Create a basic KMS key for testing +resource "awscc_kms_key" "recording_key" { + description = "KMS key for SSM GUI Connect recording encryption" + key_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Sid = "Enable IAM User Permissions" + Effect = "Allow" + Principal = { + AWS = "*" + } + Action = "kms:*" + Resource = "*" + Condition = { + StringEquals = { + "kms:CallerAccount" = data.aws_caller_identity.current.account_id + } + } + } + ] + }) + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +# Create S3 bucket for recordings with integrated public access block and encryption
resource "awscc_s3_bucket" "recordings" { + bucket_name = "ssm-gui-connect-recordings-${data.aws_caller_identity.current.account_id}-${data.aws_region.current.name}" + + public_access_block_configuration = { + block_public_acls = true + block_public_policy = true + ignore_public_acls = true + restrict_public_buckets = true + } + + bucket_encryption = { + server_side_encryption_configuration = [{ + server_side_encryption_by_default = { + kms_master_key_id = awscc_kms_key.recording_key.id + sse_algorithm = "aws:kms" + } + }] + } + + tags = [{ + key = "Modified By" + value = "AWSCC" + }] +} + +resource "awscc_ssmguiconnect_preferences" "example" { + connection_recording_preferences = { + kms_key_arn = awscc_kms_key.recording_key.arn + recording_destinations = { + s3_buckets = [ + { + bucket_name = awscc_s3_bucket.recordings.id + bucket_owner = data.aws_caller_identity.current.account_id + } + ] + } + } +} \ No newline at end of file diff --git a/examples/resources/awscc_xray_transaction_search_config/main.tf b/examples/resources/awscc_xray_transaction_search_config/main.tf new file mode 100644 index 0000000000..fa2b974961 --- /dev/null +++ b/examples/resources/awscc_xray_transaction_search_config/main.tf @@ -0,0 +1,33 @@ +# Required policy for CloudWatch Logs +data "aws_iam_policy_document" "xray_cloudwatch_policy" { + statement { + effect = "Allow" + 
actions = [ + "logs:CreateLogGroup", + "logs:CreateLogStream", + "logs:PutLogEvents", + "logs:GetLogEvents", + "logs:PutRetentionPolicy", + "logs:GetLogGroupFields", + "logs:GetQueryResults" + ] + resources = ["*"] + principals { + type = "Service" + identifiers = ["xray.amazonaws.com"] + } + } +} + +# Create the CloudWatch Logs resource policy +resource "aws_cloudwatch_log_resource_policy" "xray" { + policy_document = data.aws_iam_policy_document.xray_cloudwatch_policy.json + policy_name = "xray-spans-policy" +} + +# XRay Transaction Search Config +resource "awscc_xray_transaction_search_config" "example" { + indexing_percentage = 100 + + depends_on = [aws_cloudwatch_log_resource_policy.xray] +} \ No newline at end of file diff --git a/templates/resources/applicationsignals_discovery.md.tmpl b/templates/resources/applicationsignals_discovery.md.tmpl new file mode 100644 index 0000000000..5a13fb3639 --- /dev/null +++ b/templates/resources/applicationsignals_discovery.md.tmpl @@ -0,0 +1,32 @@ + +--- +page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}" +subcategory: "" +description: |- +{{ .Description | plainmarkdown | trimspace | prefixlines " " }} +--- + +# {{.Name}} ({{.Type}}) + +{{ .Description | trimspace }} + +## Example Usage + +### Configure Application Signals Discovery + +Configures AWS Application Signals discovery service which appears to be empty or pending configuration details. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. 
+ +{{ tffile (printf "examples/resources/%s/main.tf" .Name)}} + +{{ .SchemaMarkdown | trimspace }} +{{- if .HasImport }} + +## Import + +Import is supported using the following syntax: + +{{ codefile "shell" .ImportFile }} + +{{- end }} diff --git a/templates/resources/batch_consumable_resource.md.tmpl b/templates/resources/batch_consumable_resource.md.tmpl new file mode 100644 index 0000000000..76e0532360 --- /dev/null +++ b/templates/resources/batch_consumable_resource.md.tmpl @@ -0,0 +1,32 @@ + +--- +page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}" +subcategory: "" +description: |- +{{ .Description | plainmarkdown | trimspace | prefixlines " " }} +--- + +# {{.Name}} ({{.Type}}) + +{{ .Description | trimspace }} + +## Example Usage + +### AWS Batch Consumable Resource Configuration + +Creates a replenishable consumable resource for AWS Batch with a total quantity of 10 units, enabling license management for batch workloads. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content. 
+ +{{ tffile (printf "examples/resources/%s/main.tf" .Name)}} + +{{ .SchemaMarkdown | trimspace }} +{{- if .HasImport }} + +## Import + +Import is supported using the following syntax: + +{{ codefile "shell" .ImportFile }} + +{{- end }} diff --git a/templates/resources/bedrock_intelligent_prompt_router.md.tmpl b/templates/resources/bedrock_intelligent_prompt_router.md.tmpl new file mode 100644 index 0000000000..f530418e0e --- /dev/null +++ b/templates/resources/bedrock_intelligent_prompt_router.md.tmpl @@ -0,0 +1,25 @@ +--- +page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}" +subcategory: "" +description: |- +{{ .Description | plainmarkdown | trimspace | prefixlines " " }} +--- + +# {{.Name}} ({{.Type}}) + +{{ .Description | trimspace }} + +## Example Usage + +{{ tffile (printf "examples/resources/%s/main.tf" .Name)}} + +{{ .SchemaMarkdown | trimspace }} +{{- if .HasImport }} + +## Import + +Import is supported using the following syntax: + +{{ codefile "shell" .ImportFile }} + +{{- end }} \ No newline at end of file diff --git a/templates/resources/cognito_user_pool_domain.md.tmpl b/templates/resources/cognito_user_pool_domain.md.tmpl new file mode 100644 index 0000000000..9f037a12ee --- /dev/null +++ b/templates/resources/cognito_user_pool_domain.md.tmpl @@ -0,0 +1,32 @@ + +--- +page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}" +subcategory: "" +description: |- +{{ .Description | plainmarkdown | trimspace | prefixlines " " }} +--- + +# {{.Name}} ({{.Type}}) + +{{ .Description | trimspace }} + +## Example Usage + +### Configure Cognito User Pool Domain + +Creates a custom domain for a Cognito User Pool with dynamic naming based on the AWS account ID, enabling a branded URL for user authentication endpoints. + +~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. 
We recommend independently verifying the content. + +{{ tffile (printf "examples/resources/%s/main.tf" .Name)}} + +{{ .SchemaMarkdown | trimspace }} +{{- if .HasImport }} + +## Import + +Import is supported using the following syntax: + +{{ codefile "shell" .ImportFile }} + +{{- end }} diff --git a/templates/resources/deadline_limit.md.tmpl b/templates/resources/deadline_limit.md.tmpl new file mode 100644 index 0000000000..473c1fd144 --- /dev/null +++ b/templates/resources/deadline_limit.md.tmpl @@ -0,0 +1,26 @@ + +--- +page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}" +subcategory: "" +description: |- +{{ .Description | plainmarkdown | trimspace | prefixlines " " }} +--- + +# {{.Name}} ({{.Type}}) + +{{ .Description | trimspace }} + +## Example Usage + +{{ tffile (printf "examples/resources/%s/main.tf" .Name)}} + +{{ .SchemaMarkdown | trimspace }} +{{- if .HasImport }} + +## Import + +Import is supported using the following syntax: + +{{ codefile "shell" .ImportFile }} + +{{- end }} diff --git a/templates/resources/deadline_queue.md.tmpl b/templates/resources/deadline_queue.md.tmpl new file mode 100644 index 0000000000..f530418e0e --- /dev/null +++ b/templates/resources/deadline_queue.md.tmpl @@ -0,0 +1,25 @@ +--- +page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}" +subcategory: "" +description: |- +{{ .Description | plainmarkdown | trimspace | prefixlines " " }} +--- + +# {{.Name}} ({{.Type}}) + +{{ .Description | trimspace }} + +## Example Usage + +{{ tffile (printf "examples/resources/%s/main.tf" .Name)}} + +{{ .SchemaMarkdown | trimspace }} +{{- if .HasImport }} + +## Import + +Import is supported using the following syntax: + +{{ codefile "shell" .ImportFile }} + +{{- end }} \ No newline at end of file diff --git a/templates/resources/deadline_queue_environment.md.tmpl b/templates/resources/deadline_queue_environment.md.tmpl new file mode 100644 index 0000000000..f530418e0e --- /dev/null +++ b/templates/resources/deadline_queue_environment.md.tmpl @@ 
-0,0 +1,25 @@ +--- +page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}" +subcategory: "" +description: |- +{{ .Description | plainmarkdown | trimspace | prefixlines " " }} +--- + +# {{.Name}} ({{.Type}}) + +{{ .Description | trimspace }} + +## Example Usage + +{{ tffile (printf "examples/resources/%s/main.tf" .Name)}} + +{{ .SchemaMarkdown | trimspace }} +{{- if .HasImport }} + +## Import + +Import is supported using the following syntax: + +{{ codefile "shell" .ImportFile }} + +{{- end }} \ No newline at end of file diff --git a/templates/resources/deadline_queue_fleet_association.md.tmpl b/templates/resources/deadline_queue_fleet_association.md.tmpl new file mode 100644 index 0000000000..f530418e0e --- /dev/null +++ b/templates/resources/deadline_queue_fleet_association.md.tmpl @@ -0,0 +1,25 @@ +--- +page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}" +subcategory: "" +description: |- +{{ .Description | plainmarkdown | trimspace | prefixlines " " }} +--- + +# {{.Name}} ({{.Type}}) + +{{ .Description | trimspace }} + +## Example Usage + +{{ tffile (printf "examples/resources/%s/main.tf" .Name)}} + +{{ .SchemaMarkdown | trimspace }} +{{- if .HasImport }} + +## Import + +Import is supported using the following syntax: + +{{ codefile "shell" .ImportFile }} + +{{- end }} \ No newline at end of file diff --git a/templates/resources/deadline_queue_limit_association.md.tmpl b/templates/resources/deadline_queue_limit_association.md.tmpl new file mode 100644 index 0000000000..f530418e0e --- /dev/null +++ b/templates/resources/deadline_queue_limit_association.md.tmpl @@ -0,0 +1,25 @@ +--- +page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}" +subcategory: "" +description: |- +{{ .Description | plainmarkdown | trimspace | prefixlines " " }} +--- + +# {{.Name}} ({{.Type}}) + +{{ .Description | trimspace }} + +## Example Usage + +{{ tffile (printf "examples/resources/%s/main.tf" .Name)}} + +{{ .SchemaMarkdown | trimspace }} +{{- if .HasImport }} + +## Import + +Import is 
supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
\ No newline at end of file
diff --git a/templates/resources/deadline_storage_profile.md.tmpl b/templates/resources/deadline_storage_profile.md.tmpl
new file mode 100644
index 0000000000..f530418e0e
--- /dev/null
+++ b/templates/resources/deadline_storage_profile.md.tmpl
@@ -0,0 +1,25 @@
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
\ No newline at end of file
diff --git a/templates/resources/dsql_cluster.md.tmpl b/templates/resources/dsql_cluster.md.tmpl
new file mode 100644
index 0000000000..f530418e0e
--- /dev/null
+++ b/templates/resources/dsql_cluster.md.tmpl
@@ -0,0 +1,25 @@
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
\ No newline at end of file
diff --git a/templates/resources/ec2_route_server.md.tmpl b/templates/resources/ec2_route_server.md.tmpl
new file mode 100644
index 0000000000..bdf7e71150
--- /dev/null
+++ b/templates/resources/ec2_route_server.md.tmpl
@@ -0,0 +1,32 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+### Configure EC2 Route Server with Persistent Routes
+
+This configuration creates an EC2 Route Server with ASN 65000 in a custom VPC, enabling route persistence for 5 minutes and SNS notifications for route changes.
+
+~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
diff --git a/templates/resources/ec2_route_server_peer.md.tmpl b/templates/resources/ec2_route_server_peer.md.tmpl
new file mode 100644
index 0000000000..f52d81d840
--- /dev/null
+++ b/templates/resources/ec2_route_server_peer.md.tmpl
@@ -0,0 +1,32 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+### Configure Route Server BGP Peer
+
+Creates a BGP peer connection for an AWS Route Server with ASN 65000, establishing routing communication through a Transit Gateway VPC attachment with specified peer address 10.0.1.100.
+
+~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
diff --git a/templates/resources/evs_environment.md.tmpl b/templates/resources/evs_environment.md.tmpl
new file mode 100644
index 0000000000..0258110aaa
--- /dev/null
+++ b/templates/resources/evs_environment.md.tmpl
@@ -0,0 +1,32 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+### VMware Cloud on AWS SDDC Environment Setup
+
+Creates an AWS VMware Cloud SDDC environment with a 4-node i4i.metal cluster, complete network configuration including VPC, subnets, and security groups, along with all required VLANs for VMware Cloud Foundation (VCF) deployment.
+
+~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
diff --git a/templates/resources/guardduty_publishing_destination.md.tmpl b/templates/resources/guardduty_publishing_destination.md.tmpl
new file mode 100644
index 0000000000..0e42f94f30
--- /dev/null
+++ b/templates/resources/guardduty_publishing_destination.md.tmpl
@@ -0,0 +1,32 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+### GuardDuty Findings Export to S3
+
+Configure GuardDuty to export its findings to an S3 bucket with KMS encryption, including all necessary IAM permissions and bucket policies for secure findings storage and access.
+
+~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
diff --git a/templates/resources/iotsitewise_dataset.md.tmpl b/templates/resources/iotsitewise_dataset.md.tmpl
new file mode 100644
index 0000000000..ec03316d90
--- /dev/null
+++ b/templates/resources/iotsitewise_dataset.md.tmpl
@@ -0,0 +1,32 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+### IoT SiteWise Dataset with Kendra Integration
+
+Creates an IoT SiteWise dataset that integrates with Amazon Kendra knowledge base, including necessary IAM role and policy configuration for secure access to Kendra resources.
+
+~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
diff --git a/templates/resources/lightsail_instance_snapshot.md.tmpl b/templates/resources/lightsail_instance_snapshot.md.tmpl
new file mode 100644
index 0000000000..f6056d0a74
--- /dev/null
+++ b/templates/resources/lightsail_instance_snapshot.md.tmpl
@@ -0,0 +1,32 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+### Create Lightsail Instance Snapshot
+
+Creates a snapshot of an Amazon Lightsail instance, demonstrating the configuration of both the source Lightsail instance and its snapshot using the AWSCC provider.
+
+~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
diff --git a/templates/resources/neptune_db_cluster_parameter_group.md.tmpl b/templates/resources/neptune_db_cluster_parameter_group.md.tmpl
new file mode 100644
index 0000000000..173d475844
--- /dev/null
+++ b/templates/resources/neptune_db_cluster_parameter_group.md.tmpl
@@ -0,0 +1,32 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+### Configure Neptune DB Cluster Parameters
+
+Creates a Neptune DB cluster parameter group with custom configuration settings, enabling audit logging and setting query timeout parameters for Neptune 1.2 family databases.
+
+~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
diff --git a/templates/resources/networkfirewall_vpc_endpoint_association.md.tmpl b/templates/resources/networkfirewall_vpc_endpoint_association.md.tmpl
new file mode 100644
index 0000000000..ac5c5cd719
--- /dev/null
+++ b/templates/resources/networkfirewall_vpc_endpoint_association.md.tmpl
@@ -0,0 +1,32 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+### Network Firewall VPC Association
+
+Associates a Network Firewall with a VPC endpoint by configuring the subnet mapping and IP address type, enabling the firewall to protect network traffic in the specified VPC subnet.
+
+~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
diff --git a/templates/resources/omics_workflow_version.md.tmpl b/templates/resources/omics_workflow_version.md.tmpl
new file mode 100644
index 0000000000..538e14a031
--- /dev/null
+++ b/templates/resources/omics_workflow_version.md.tmpl
@@ -0,0 +1,32 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+### Deploy Omics Workflow Version
+
+Creates an AWS Omics workflow version with S3-based definition file, supporting infrastructure (S3 bucket), and configurable parameters template for running genomics workflows using WDL engine.
+
+~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
diff --git a/templates/resources/quicksight_folder.md.tmpl b/templates/resources/quicksight_folder.md.tmpl
new file mode 100644
index 0000000000..473c1fd144
--- /dev/null
+++ b/templates/resources/quicksight_folder.md.tmpl
@@ -0,0 +1,26 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
diff --git a/templates/resources/s3express_access_point.md.tmpl b/templates/resources/s3express_access_point.md.tmpl
new file mode 100644
index 0000000000..4f87cf0f5c
--- /dev/null
+++ b/templates/resources/s3express_access_point.md.tmpl
@@ -0,0 +1,32 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+### S3 Express Access Point with Scoped Permissions
+
+Creates an S3 Express access point for a directory bucket with scoped permissions for specific prefixes and operations, while enforcing strict public access blocking for enhanced security.
+
+~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
diff --git a/templates/resources/ssmguiconnect_preferences.md.tmpl b/templates/resources/ssmguiconnect_preferences.md.tmpl
new file mode 100644
index 0000000000..775748d83d
--- /dev/null
+++ b/templates/resources/ssmguiconnect_preferences.md.tmpl
@@ -0,0 +1,32 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+### SSM GUI Connect Recording Configuration
+
+Set up SSM GUI Connect with secure session recording preferences using KMS encryption and S3 bucket storage for recorded sessions, ensuring encrypted and private storage of connection recordings.
+
+~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import
+
+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}
diff --git a/templates/resources/xray_transaction_search_config.md.tmpl b/templates/resources/xray_transaction_search_config.md.tmpl
new file mode 100644
index 0000000000..9ee528483a
--- /dev/null
+++ b/templates/resources/xray_transaction_search_config.md.tmpl
@@ -0,0 +1,32 @@
+
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | trimspace }}
+
+## Example Usage
+
+### Configure XRay Transaction Search with CloudWatch Integration
+
+Configures AWS X-Ray transaction search with 100% indexing percentage while setting up the necessary CloudWatch Logs permissions to allow X-Ray service to store and process trace data.
+
+~> This example is generated by LLM using Amazon Bedrock and validated using terraform validate, apply and destroy. While we strive for accuracy and quality, please note that the information provided may not be entirely error-free or up-to-date. We recommend independently verifying the content.
+
+{{ tffile (printf "examples/resources/%s/main.tf" .Name)}}
+
+{{ .SchemaMarkdown | trimspace }}
+{{- if .HasImport }}
+
+## Import

+Import is supported using the following syntax:
+
+{{ codefile "shell" .ImportFile }}
+
+{{- end }}