From fab5d17e7d8c91b0e71f5255c06a0005955906f3 Mon Sep 17 00:00:00 2001 From: github-actions Date: Tue, 25 Mar 2025 16:04:51 +0000 Subject: [PATCH] chore(schema): update --- samtranslator/schema/schema.json | 328 +++--- schema_source/cloudformation-docs.json | 1215 +++++++++++++++++----- schema_source/cloudformation.schema.json | 328 +++--- 3 files changed, 1306 insertions(+), 565 deletions(-) diff --git a/samtranslator/schema/schema.json b/samtranslator/schema/schema.json index b8dc13fb1..2490b1dfb 100644 --- a/samtranslator/schema/schema.json +++ b/samtranslator/schema/schema.json @@ -9042,7 +9042,7 @@ "type": "string" }, "RetrievalRoleArn": { - "markdownDescription": "The ARN of an IAM role with permission to access the configuration at the specified `LocationUri` .\n\n> A retrieval role ARN is not required for configurations stored in the AWS AppConfig hosted configuration store. It is required for all other sources that store your configuration.", + "markdownDescription": "The ARN of an IAM role with permission to access the configuration at the specified `LocationUri` .\n\n> A retrieval role ARN is not required for configurations stored in AWS CodePipeline or the AWS AppConfig hosted configuration store. It is required for all other sources that store your configuration.", "title": "RetrievalRoleArn", "type": "string" }, @@ -26643,7 +26643,7 @@ "type": "string" }, "ScheduleExpression": { - "markdownDescription": "A CRON expression in specified timezone when a restore testing plan is executed.", + "markdownDescription": "A CRON expression in specified timezone when a restore testing plan is executed. When no CRON expression is provided, AWS Backup will use the default expression `cron(0 5 ? * * *)` .", "title": "ScheduleExpression", "type": "string" }, @@ -27042,7 +27042,7 @@ "title": "EksConfiguration" }, "ReplaceComputeEnvironment": { - "markdownDescription": "Specifies whether the compute environment is replaced if an update is made that requires replacing the instances in the compute environment. The default value is `true` . To enable more properties to be updated, set this property to `false` . When changing the value of this property to `false` , do not change any other properties at the same time. If other properties are changed at the same time, and the change needs to be rolled back but it can't, it's possible for the stack to go into the `UPDATE_ROLLBACK_FAILED` state. You can't update a stack that is in the `UPDATE_ROLLBACK_FAILED` state. However, if you can continue to roll it back, you can return the stack to its original settings and then try to update it again. 
For more information, see [Continue rolling back an update](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html) in the *AWS CloudFormation User Guide* .\n\nThe properties that can't be changed without replacing the compute environment are in the [`ComputeResources`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html) property type: [`AllocationStrategy`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-allocationstrategy) , [`BidPercentage`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-bidpercentage) , [`Ec2Configuration`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2configuration) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`ImageId`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-imageid) , [`InstanceRole`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancerole) , [`InstanceTypes`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancetypes) , [`LaunchTemplate`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-launchtemplate) , [`MaxvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-maxvcpus) , [`MinvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-minvcpus) , [`PlacementGroup`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-placementgroup) , [`SecurityGroupIds`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-securitygroupids) , [`Subnets`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-subnets) , [Tags](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-tags) , 
[`Type`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-type) , and [`UpdateToLatestImageVersion`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-updatetolatestimageversion) .", + "markdownDescription": "Specifies whether the compute environment is replaced if an update is made that requires replacing the instances in the compute environment. The default value is `true` . To enable more properties to be updated, set this property to `false` . When changing the value of this property to `false` , do not change any other properties at the same time. If other properties are changed at the same time, and the change needs to be rolled back but it can't, it's possible for the stack to go into the `UPDATE_ROLLBACK_FAILED` state. You can't update a stack that is in the `UPDATE_ROLLBACK_FAILED` state. However, if you can continue to roll it back, you can return the stack to its original settings and then try to update it again. For more information, see [Continue rolling back an update](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html) in the *AWS CloudFormation User Guide* .\n\n`ReplaceComputeEnvironment` is not applicable for Fargate compute environments. Fargate compute environments are always updated without interruption.\n\nThe properties that can't be changed without replacing the compute environment are in the [`ComputeResources`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html) property type: [`AllocationStrategy`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-allocationstrategy) , [`BidPercentage`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-bidpercentage) , [`Ec2Configuration`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2configuration) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`ImageId`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-imageid) , [`InstanceRole`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancerole) , [`InstanceTypes`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancetypes) , 
[`LaunchTemplate`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-launchtemplate) , [`MaxvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-maxvcpus) , [`MinvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-minvcpus) , [`PlacementGroup`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-placementgroup) , [`SecurityGroupIds`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-securitygroupids) , [`Subnets`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-subnets) , [Tags](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-tags) , [`Type`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-type) , and [`UpdateToLatestImageVersion`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-updatetolatestimageversion) .", "title": "ReplaceComputeEnvironment", "type": "boolean" }, @@ -31907,7 +31907,7 @@ "additionalProperties": false, "properties": { "KeyspaceName": { - "markdownDescription": "The name of the keyspace to be created. The keyspace name is case sensitive. If you don't specify a name, AWS CloudFormation generates a unique ID and uses that ID for the keyspace name. For more information, see [Name type](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-name.html) .\n\n*Length constraints:* Minimum length of 3. Maximum length of 255.\n\n*Pattern:* `^[a-zA-Z0-9][a-zA-Z0-9_]{1,47}$`", + "markdownDescription": "The name of the keyspace to be created. The keyspace name is case sensitive. If you don't specify a name, AWS CloudFormation generates a unique ID and uses that ID for the keyspace name. For more information, see [Name type](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-name.html) .\n\n*Length constraints:* Minimum length of 1. Maximum length of 48.", "title": "KeyspaceName", "type": "string" }, @@ -32992,7 +32992,7 @@ "type": "string" }, "QueryLogStatus": { - "markdownDescription": "An indicator as to whether query logging has been enabled or disabled for the collaboration.", + "markdownDescription": "An indicator as to whether query logging has been enabled or disabled for the collaboration.\n\nWhen `ENABLED` , AWS Clean Rooms logs details about queries run within this collaboration and those logs can be viewed in Amazon CloudWatch Logs. 
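For reference, the `ReplaceComputeEnvironment` behavior documented in the updated description above is typically set in a template like the following. This is a minimal, illustrative CloudFormation YAML sketch, not part of the schema change; the logical name, role ARN, subnet, and security group IDs are placeholders.

```yaml
Resources:
  # Illustrative managed compute environment; all identifiers are placeholders.
  ManagedComputeEnv:
    Type: AWS::Batch::ComputeEnvironment
    Properties:
      Type: MANAGED
      # Opt out of replacement so ComputeResources updates are applied in place
      # where possible (see the property description above for the caveats).
      ReplaceComputeEnvironment: false
      ComputeResources:
        Type: EC2
        AllocationStrategy: BEST_FIT_PROGRESSIVE
        MinvCpus: 0
        MaxvCpus: 64
        InstanceTypes:
          - optimal
        InstanceRole: arn:aws:iam::123456789012:instance-profile/ecsInstanceRole
        Subnets:
          - subnet-0123456789abcdef0
        SecurityGroupIds:
          - sg-0123456789abcdef0
        UpdateToLatestImageVersion: true
```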
The default value is `DISABLED` .", "title": "QueryLogStatus", "type": "string" }, @@ -33174,7 +33174,7 @@ "type": "array" }, "AnalysisMethod": { - "markdownDescription": "The analysis method for the configured table. The only valid value is currently `DIRECT_QUERY`.", + "markdownDescription": "The analysis method for the configured table.\n\n`DIRECT_QUERY` allows SQL queries to be run directly on this table.\n\n`DIRECT_JOB` allows PySpark jobs to be run directly on this table.\n\n`MULTIPLE` allows both SQL queries and PySpark jobs to be run directly on this table.", "title": "AnalysisMethod", "type": "string" }, @@ -33687,7 +33687,7 @@ "title": "PaymentConfiguration" }, "QueryLogStatus": { - "markdownDescription": "An indicator as to whether query logging has been enabled or disabled for the membership.", + "markdownDescription": "An indicator as to whether query logging has been enabled or disabled for the membership.\n\nWhen `ENABLED` , AWS Clean Rooms logs details about queries run within this collaboration and those logs can be viewed in Amazon CloudWatch Logs. The default value is `DISABLED` .", "title": "QueryLogStatus", "type": "string" }, @@ -35192,7 +35192,7 @@ }, "Parameters": { "additionalProperties": true, - "markdownDescription": "The set value pairs that represent the parameters passed to CloudFormation when this nested stack is created. Each parameter has a name corresponding to a parameter defined in the embedded template and a value representing the value that you want to set for the parameter.\n\n> If you use the `Ref` function to pass a parameter value to a nested stack, comma-delimited list parameters must be of type `String` . In other words, you can't pass values that are of type `CommaDelimitedList` to nested stacks. \n\nConditional. Required if the nested stack requires input parameters.\n\nWhether an update causes interruptions depends on the resources that are being updated. An update never causes a nested stack to be replaced.", + "markdownDescription": "The set value pairs that represent the parameters passed to CloudFormation when this nested stack is created. Each parameter has a name corresponding to a parameter defined in the embedded template and a value representing the value that you want to set for the parameter.\n\n> If you use the `Ref` function to pass a parameter value to a nested stack, comma-delimited list parameters must be of type `String` . In other words, you can't pass values that are of type `CommaDelimitedList` to nested stacks. \n\nRequired if the nested stack requires input parameters.\n\nWhether an update causes interruptions depends on the resources that are being updated. An update never causes a nested stack to be replaced.", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -35282,17 +35282,17 @@ "additionalProperties": false, "properties": { "AdministrationRoleARN": { - "markdownDescription": "The Amazon Resource Number (ARN) of the IAM role to use to create this stack set. Specify an IAM role only if you are using customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account.\n\nUse customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account. 
For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the *AWS CloudFormation User Guide* .\n\n*Minimum* : `20`\n\n*Maximum* : `2048`", + "markdownDescription": "The Amazon Resource Number (ARN) of the IAM role to use to create this stack set. Specify an IAM role only if you are using customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account.\n\nUse customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account. For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the *AWS CloudFormation User Guide* .\n\nValid only if the permissions model is `SELF_MANAGED` .", "title": "AdministrationRoleARN", "type": "string" }, "AutoDeployment": { "$ref": "#/definitions/AWS::CloudFormation::StackSet.AutoDeployment", - "markdownDescription": "[ `Service-managed` permissions] Describes whether StackSets automatically deploys to AWS Organizations accounts that are added to a target organization or organizational unit (OU).", + "markdownDescription": "Describes whether StackSets automatically deploys to AWS Organizations accounts that are added to a target organization or organizational unit (OU). For more information, see [Manage automatic deployments for CloudFormation StackSets that use service-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-manage-auto-deployment.html) in the *AWS CloudFormation User Guide* .\n\nRequired if the permissions model is `SERVICE_MANAGED` . (Not used with self-managed permissions.)", "title": "AutoDeployment" }, "CallAs": { - "markdownDescription": "[Service-managed permissions] Specifies whether you are acting as an account administrator in the organization's management account or as a delegated administrator in a member account.\n\nBy default, `SELF` is specified. Use `SELF` for stack sets with self-managed permissions.\n\n- To create a stack set with service-managed permissions while signed in to the management account, specify `SELF` .\n- To create a stack set with service-managed permissions while signed in to a delegated administrator account, specify `DELEGATED_ADMIN` .\n\nYour AWS account must be registered as a delegated admin in the management account. For more information, see [Register a delegated administrator](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-delegated-admin.html) in the *AWS CloudFormation User Guide* .\n\nStack sets with service-managed permissions are created in the management account, including stack sets that are created by delegated administrators.\n\n*Valid Values* : `SELF` | `DELEGATED_ADMIN`", + "markdownDescription": "Specifies whether you are acting as an account administrator in the organization's management account or as a delegated administrator in a member account.\n\nBy default, `SELF` is specified. Use `SELF` for stack sets with self-managed permissions.\n\n- To create a stack set with service-managed permissions while signed in to the management account, specify `SELF` .\n- To create a stack set with service-managed permissions while signed in to a delegated administrator account, specify `DELEGATED_ADMIN` .\n\nYour AWS account must be registered as a delegated admin in the management account. 
For more information, see [Register a delegated administrator](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-delegated-admin.html) in the *AWS CloudFormation User Guide* .\n\nStack sets with service-managed permissions are created in the management account, including stack sets that are created by delegated administrators.\n\nValid only if the permissions model is `SERVICE_MANAGED` .", "title": "CallAs", "type": "string" }, @@ -35305,12 +35305,12 @@ "type": "array" }, "Description": { - "markdownDescription": "A description of the stack set.\n\n*Minimum* : `1`\n\n*Maximum* : `1024`", + "markdownDescription": "A description of the stack set.", "title": "Description", "type": "string" }, "ExecutionRoleName": { - "markdownDescription": "The name of the IAM execution role to use to create the stack set. If you don't specify an execution role, CloudFormation uses the `AWSCloudFormationStackSetExecutionRole` role for the stack set operation.\n\n*Minimum* : `1`\n\n*Maximum* : `64`\n\n*Pattern* : `[a-zA-Z_0-9+=,.@-]+`", + "markdownDescription": "The name of the IAM execution role to use to create the stack set. If you don't specify an execution role, CloudFormation uses the `AWSCloudFormationStackSetExecutionRole` role for the stack set operation.\n\nValid only if the permissions model is `SELF_MANAGED` .\n\n*Pattern* : `[a-zA-Z_0-9+=,.@-]+`", "title": "ExecutionRoleName", "type": "string" }, @@ -35346,7 +35346,7 @@ "type": "array" }, "StackSetName": { - "markdownDescription": "The name to associate with the stack set. The name must be unique in the Region where you create your stack set.\n\n> The `StackSetName` property is required.", + "markdownDescription": "The name to associate with the stack set. The name must be unique in the Region where you create your stack set.", "title": "StackSetName", "type": "string" }, @@ -35487,7 +35487,7 @@ "items": { "type": "string" }, - "markdownDescription": "The order of the Regions where you want to perform the stack operation.\n\n> `RegionOrder` isn't followed if `AutoDeployment` is enabled.", + "markdownDescription": "The order of the Regions where you want to perform the stack operation.", "title": "RegionOrder", "type": "array" } @@ -39233,7 +39233,7 @@ "type": "array" }, "Field": { - "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . 
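The `AutoDeployment` , `CallAs` , and permissions-model notes added above fit together as in the following illustrative CloudFormation YAML sketch; the stack set name, OU ID, and template URL are placeholder values, not taken from this schema.

```yaml
Resources:
  # Illustrative service-managed stack set; OU ID and template URL are placeholders.
  OrgBaselineStackSet:
    Type: AWS::CloudFormation::StackSet
    Properties:
      StackSetName: org-baseline
      Description: Baseline resources deployed across the organization
      PermissionModel: SERVICE_MANAGED   # AutoDeployment is only valid with this model
      CallAs: DELEGATED_ADMIN            # created from a delegated administrator account
      AutoDeployment:
        Enabled: true
        RetainStacksOnAccountRemoval: false
      Capabilities:
        - CAPABILITY_NAMED_IAM
      TemplateURL: https://s3.amazonaws.com/example-bucket/baseline.yaml
      StackInstancesGroup:
        - DeploymentTargets:
            OrganizationalUnitIds:
              - ou-examplerootid111-exampleouid111
          Regions:
            - us-east-1
            - eu-west-1
```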
The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.\n\nThe following are valid values for network activity events:\n\n- `cloudtrail.amazonaws.com`\n- `ec2.amazonaws.com`\n- `kms.amazonaws.com`\n- `s3.amazonaws.com`\n- `secretsmanager.amazonaws.com`\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. 
For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor information about filtering data events on the `resources.ARN` field, see [Filtering data events by resources.ARN](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/filtering-data-events.html#filtering-data-events-resourcearn) in the *AWS CloudTrail User Guide* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", + "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . 
The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.\n\nThe following are valid values for network activity events:\n\n- `cloudtrail.amazonaws.com`\n- `ec2.amazonaws.com`\n- `kms.amazonaws.com`\n- `s3.amazonaws.com`\n- `secretsmanager.amazonaws.com`\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. 
For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", "title": "Field", "type": "string" }, @@ -39469,7 +39469,7 @@ "type": "string" }, "SnsTopicName": { - "markdownDescription": "Specifies the name of the Amazon SNS topic defined for notification of log file delivery. The maximum length is 256 characters.", + "markdownDescription": "Specifies the name or ARN of the Amazon SNS topic defined for notification of log file delivery. The maximum length is 256 characters.", "title": "SnsTopicName", "type": "string" }, @@ -39556,7 +39556,7 @@ "type": "array" }, "Field": { - "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. 
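The `Field` selector rules and the `SnsTopicName` wording updated above are easiest to read in context. The following is a minimal, illustrative CloudFormation YAML sketch of a trail that logs S3 object-level data events; the bucket, topic, and trail names are placeholders.

```yaml
Resources:
  # Illustrative trail; bucket, topic, and trail names are placeholders.
  AuditTrail:
    Type: AWS::CloudTrail::Trail
    Properties:
      TrailName: example-audit-trail
      S3BucketName: example-cloudtrail-logs-bucket
      SnsTopicName: example-log-delivery-topic   # a topic name or ARN, per the description above
      IsLogging: true
      AdvancedEventSelectors:
        - Name: S3 object-level data events for one bucket
          FieldSelectors:
            - Field: eventCategory        # required; must use Equals
              Equals:
                - Data
            - Field: resources.type       # required for data events
              Equals:
                - AWS::S3::Object
            - Field: resources.ARN        # StartsWith matches every object in the bucket
              StartsWith:
                - arn:aws:s3:::example-data-bucket/
```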
For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.\n\nThe following are valid values for network activity events:\n\n- `cloudtrail.amazonaws.com`\n- `ec2.amazonaws.com`\n- `kms.amazonaws.com`\n- `s3.amazonaws.com`\n- `secretsmanager.amazonaws.com`\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. 
You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor information about filtering data events on the `resources.ARN` field, see [Filtering data events by resources.ARN](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/filtering-data-events.html#filtering-data-events-resourcearn) in the *AWS CloudTrail User Guide* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. 
This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", + "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.\n\nThe following are valid values for network activity events:\n\n- `cloudtrail.amazonaws.com`\n- `ec2.amazonaws.com`\n- `kms.amazonaws.com`\n- `s3.amazonaws.com`\n- `secretsmanager.amazonaws.com`\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. 
You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . 
For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", "title": "Field", "type": "string" }, @@ -40933,7 +40933,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "A list of tags to be applied to the package group.", + "markdownDescription": "", "title": "Tags", "type": "array" } @@ -40970,7 +40970,7 @@ "properties": { "Restrictions": { "$ref": "#/definitions/AWS::CodeArtifact::PackageGroup.Restrictions", - "markdownDescription": "The origin configuration settings that determine how package versions can enter repositories.", + "markdownDescription": "", "title": "Restrictions" } }, @@ -40986,12 +40986,12 @@ "items": { "type": "string" }, - "markdownDescription": "The repositories to add to the allowed repositories list. The allowed repositories list is used when the `RestrictionMode` is set to `ALLOW_SPECIFIC_REPOSITORIES` .", + "markdownDescription": "", "title": "Repositories", "type": "array" }, "RestrictionMode": { - "markdownDescription": "The package group origin restriction setting. When the value is `INHERIT` , the value is set to the value of the first parent package group which does not have a value of `INHERIT` .", + "markdownDescription": "", "title": "RestrictionMode", "type": "string" } @@ -41006,17 +41006,17 @@ "properties": { "ExternalUpstream": { "$ref": "#/definitions/AWS::CodeArtifact::PackageGroup.RestrictionType", - "markdownDescription": "The package group origin restriction setting for external, upstream repositories.", + "markdownDescription": "", "title": "ExternalUpstream" }, "InternalUpstream": { "$ref": "#/definitions/AWS::CodeArtifact::PackageGroup.RestrictionType", - "markdownDescription": "The package group origin restriction setting for internal, upstream repositories.", + "markdownDescription": "", "title": "InternalUpstream" }, "Publish": { "$ref": "#/definitions/AWS::CodeArtifact::PackageGroup.RestrictionType", - "markdownDescription": "The package group origin restriction setting for publishing packages.", + "markdownDescription": "", "title": "Publish" } }, @@ -41578,7 +41578,7 @@ "title": "RegistryCredential" }, "Type": { - "markdownDescription": "The type of build environment to use for related builds.\n\n- The environment type `ARM_CONTAINER` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), and EU (Frankfurt).\n- The environment type `LINUX_CONTAINER` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), China (Beijing), and China (Ningxia).\n- The environment type `LINUX_GPU_CONTAINER` is available only in regions US East (N. 
Virginia), US East (Ohio), US West (Oregon), Canada (Central), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney) , China (Beijing), and China (Ningxia).\n\n- The environment types `ARM_LAMBDA_CONTAINER` and `LINUX_LAMBDA_CONTAINER` are available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), and South America (S\u00e3o Paulo).\n\n- The environment types `WINDOWS_CONTAINER` and `WINDOWS_SERVER_2019_CONTAINER` are available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland).\n\n> If you're using compute fleets during project creation, `type` will be ignored. \n\nFor more information, see [Build environment compute types](https://docs.aws.amazon.com//codebuild/latest/userguide/build-env-ref-compute-types.html) in the *AWS CodeBuild user guide* .", + "markdownDescription": "The type of build environment to use for related builds.\n\n> If you're using compute fleets during project creation, `type` will be ignored. \n\nFor more information, see [Build environment compute types](https://docs.aws.amazon.com//codebuild/latest/userguide/build-env-ref-compute-types.html) in the *AWS CodeBuild user guide* .", "title": "Type", "type": "string" } @@ -41962,7 +41962,7 @@ "type": "string" }, "Type": { - "markdownDescription": "The type of webhook filter. There are nine webhook filter types: `EVENT` , `ACTOR_ACCOUNT_ID` , `HEAD_REF` , `BASE_REF` , `FILE_PATH` , `COMMIT_MESSAGE` , `TAG_NAME` , `RELEASE_NAME` , and `WORKFLOW_NAME` .\n\n- EVENT\n\n- A webhook event triggers a build when the provided `pattern` matches one of nine event types: `PUSH` , `PULL_REQUEST_CREATED` , `PULL_REQUEST_UPDATED` , `PULL_REQUEST_CLOSED` , `PULL_REQUEST_REOPENED` , `PULL_REQUEST_MERGED` , `RELEASED` , `PRERELEASED` , and `WORKFLOW_JOB_QUEUED` . The `EVENT` patterns are specified as a comma-separated string. For example, `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` filters all push, pull request created, and pull request updated events.\n\n> Types `PULL_REQUEST_REOPENED` and `WORKFLOW_JOB_QUEUED` work with GitHub and GitHub Enterprise only. Types `RELEASED` and `PRERELEASED` work with GitHub only.\n- ACTOR_ACCOUNT_ID\n\n- A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression `pattern` .\n- HEAD_REF\n\n- A webhook event triggers a build when the head reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` and `refs/tags/tag-name` .\n\n> Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.\n- BASE_REF\n\n- A webhook event triggers a build when the base reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` .\n\n> Works with pull request events only.\n- FILE_PATH\n\n- A webhook triggers a build when the path of a changed file matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. 
Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- COMMIT_MESSAGE\n\n- A webhook triggers a build when the head commit message matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- TAG_NAME\n\n- A webhook triggers a build when the tag name of the release matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- RELEASE_NAME\n\n- A webhook triggers a build when the release name matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- REPOSITORY_NAME\n\n- A webhook triggers a build when the repository name matches the regular expression pattern.\n\n> Works with GitHub global or organization webhooks only.\n- WORKFLOW_NAME\n\n- A webhook triggers a build when the workflow name matches the regular expression `pattern` .\n\n> Works with `WORKFLOW_JOB_QUEUED` events only. > For CodeBuild-hosted Buildkite runner builds, WORKFLOW_NAME filters will filter by pipeline name.", + "markdownDescription": "The type of webhook filter. There are 11 webhook filter types: `EVENT` , `ACTOR_ACCOUNT_ID` , `HEAD_REF` , `BASE_REF` , `FILE_PATH` , `COMMIT_MESSAGE` , `TAG_NAME` , `RELEASE_NAME` , `REPOSITORY_NAME` , `ORGANIZATION_NAME` , and `WORKFLOW_NAME` .\n\n- EVENT\n\n- A webhook event triggers a build when the provided `pattern` matches one of nine event types: `PUSH` , `PULL_REQUEST_CREATED` , `PULL_REQUEST_UPDATED` , `PULL_REQUEST_CLOSED` , `PULL_REQUEST_REOPENED` , `PULL_REQUEST_MERGED` , `RELEASED` , `PRERELEASED` , and `WORKFLOW_JOB_QUEUED` . The `EVENT` patterns are specified as a comma-separated string. For example, `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` filters all push, pull request created, and pull request updated events.\n\n> Types `PULL_REQUEST_REOPENED` and `WORKFLOW_JOB_QUEUED` work with GitHub and GitHub Enterprise only. Types `RELEASED` and `PRERELEASED` work with GitHub only.\n- ACTOR_ACCOUNT_ID\n\n- A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression `pattern` .\n- HEAD_REF\n\n- A webhook event triggers a build when the head reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` and `refs/tags/tag-name` .\n\n> Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.\n- BASE_REF\n\n- A webhook event triggers a build when the base reference matches the regular expression `pattern` . 
For example, `refs/heads/branch-name` .\n\n> Works with pull request events only.\n- FILE_PATH\n\n- A webhook triggers a build when the path of a changed file matches the regular expression `pattern` .\n\n> Works with push and pull request events only.\n- COMMIT_MESSAGE\n\n- A webhook triggers a build when the head commit message matches the regular expression `pattern` .\n\n> Works with push and pull request events only.\n- TAG_NAME\n\n- A webhook triggers a build when the tag name of the release matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- RELEASE_NAME\n\n- A webhook triggers a build when the release name matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- REPOSITORY_NAME\n\n- A webhook triggers a build when the repository name matches the regular expression `pattern` .\n\n> Works with GitHub global or organization webhooks only.\n- ORGANIZATION_NAME\n\n- A webhook triggers a build when the organization name matches the regular expression `pattern` .\n\n> Works with GitHub global webhooks only.\n- WORKFLOW_NAME\n\n- A webhook triggers a build when the workflow name matches the regular expression `pattern` .\n\n> Works with `WORKFLOW_JOB_QUEUED` events only. > For CodeBuild-hosted Buildkite runner builds, WORKFLOW_NAME filters will filter by pipeline name.", "title": "Type", "type": "string" } @@ -45758,7 +45758,7 @@ }, "DeviceConfiguration": { "$ref": "#/definitions/AWS::Cognito::UserPool.DeviceConfiguration", - "markdownDescription": "The device-remembering configuration for a user pool. Device remembering or device tracking is a \"Remember me on this device\" option for user pools that perform authentication with the device key of a trusted device in the back end, instead of a user-provided MFA code. For more information about device authentication, see [Working with user devices in your user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-device-tracking.html) . A null value indicates that you have deactivated device remembering in your user pool.\n\n> When you provide a value for any `DeviceConfiguration` field, you activate the Amazon Cognito device-remembering feature. For more infor", + "markdownDescription": "The device-remembering configuration for a user pool. Device remembering or device tracking is a \"Remember me on this device\" option for user pools that perform authentication with the device key of a trusted device in the back end, instead of a user-provided MFA code. For more information about device authentication, see [Working with user devices in your user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-device-tracking.html) . A null value indicates that you have deactivated device remembering in your user pool.\n\n> When you provide a value for any `DeviceConfiguration` field, you activate the Amazon Cognito device-remembering feature. For more information, see [Working with devices](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-device-tracking.html) .", "title": "DeviceConfiguration" }, "EmailConfiguration": { @@ -53428,7 +53428,7 @@ "items": { "$ref": "#/definitions/AWS::ControlTower::EnabledBaseline.Parameter" }, - "markdownDescription": "Parameters that are applied when enabling this `Baseline` . 
These parameters configure the behavior of the baseline.", + "markdownDescription": "Shows the parameters that are applied when enabling this `Baseline` .", "title": "Parameters", "type": "array" }, @@ -53436,7 +53436,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "Tags associated with input to `EnableBaseline` .", + "markdownDescription": "", "title": "Tags", "type": "array" }, @@ -53478,12 +53478,12 @@ "additionalProperties": false, "properties": { "Key": { - "markdownDescription": "A string denoting the parameter key.", + "markdownDescription": "", "title": "Key", "type": "string" }, "Value": { - "markdownDescription": "A low-level `Document` object of any type (for example, a Java Object).", + "markdownDescription": "", "title": "Value", "type": "object" } @@ -53542,7 +53542,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "Tags to be applied to the enabled control.", + "markdownDescription": "", "title": "Tags", "type": "array" }, @@ -62463,7 +62463,7 @@ "title": "OnPremConfig" }, "ServerHostname": { - "markdownDescription": "Specifies the Domain Name System (DNS) name or IP version 4 address of the NFS file server that your DataSync agent connects to.", + "markdownDescription": "Specifies the DNS name or IP version 4 address of the NFS file server that your DataSync agent connects to.", "title": "ServerHostname", "type": "string" }, @@ -62599,7 +62599,7 @@ "type": "string" }, "ServerHostname": { - "markdownDescription": "Specifies the domain name or IP address of the object storage server. A DataSync agent uses this hostname to mount the object storage server in a network.", + "markdownDescription": "Specifies the domain name or IP version 4 (IPv4) address of the object storage server that your DataSync agent connects to.", "title": "ServerHostname", "type": "string" }, @@ -62816,7 +62816,7 @@ "type": "string" }, "ServerHostname": { - "markdownDescription": "Specifies the domain name or IP address of the SMB file server that your DataSync agent will mount.\n\nRemember the following when configuring this parameter:\n\n- You can't specify an IP version 6 (IPv6) address.\n- If you're using Kerberos authentication, you must specify a domain name.", + "markdownDescription": "Specifies the domain name or IP address of the SMB file server that your DataSync agent connects to.\n\nRemember the following when configuring this parameter:\n\n- You can't specify an IP version 6 (IPv6) address.\n- If you're using Kerberos authentication, you must specify a domain name.", "title": "ServerHostname", "type": "string" }, @@ -63553,7 +63553,7 @@ "title": "Schedule" }, "Type": { - "markdownDescription": "The type of the data source.", + "markdownDescription": "The type of the data source. In Amazon DataZone, you can use data sources to import technical metadata of assets (data) from the source databases or data warehouses into Amazon DataZone. In the current release of Amazon DataZone, you can create and run data sources for AWS Glue and Amazon Redshift.", "title": "Type", "type": "string" } @@ -65768,7 +65768,7 @@ "additionalProperties": false, "properties": { "AutoEnableMembers": { - "markdownDescription": "Indicates whether to automatically enable new organization accounts as member accounts in the organization behavior graph.\n\nBy default, this property is set to `false` . If you want to change the value of this property, you must be the Detective administrator for the organization. 
For more information on setting a Detective administrator account, see [AWS::Detective::OrganizationAdmin](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-detective-organizationadmin.html)", + "markdownDescription": "Indicates whether to automatically enable new organization accounts as member accounts in the organization behavior graph.\n\nBy default, this property is set to `false` . If you want to change the value of this property, you must be the Detective administrator for the organization. For more information on setting a Detective administrator account, see [AWS::Detective::OrganizationAdmin](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-detective-organizationadmin.html) .", "title": "AutoEnableMembers", "type": "boolean" }, @@ -69821,7 +69821,7 @@ "items": { "type": "string" }, - "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", + "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", "title": "AcceleratorTypes", "type": "array" }, @@ -72922,7 +72922,7 @@ "items": { "type": "string" }, - "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", + "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", "title": "AcceleratorTypes", "type": "array" }, @@ -73117,7 +73117,7 @@ "items": { "$ref": "#/definitions/AWS::EC2::LaunchTemplate.ElasticGpuSpecification" }, - "markdownDescription": "Deprecated.\n\n> Amazon Elastic Graphics reached end of life on January 8, 2024. For workloads that require graphics acceleration, we recommend that you use Amazon EC2 G4ad, G4dn, or G5 instances.", + "markdownDescription": "Deprecated.\n\n> Amazon Elastic Graphics reached end of life on January 8, 2024.", "title": "ElasticGpuSpecifications", "type": "array" }, @@ -73125,7 +73125,7 @@ "items": { "$ref": "#/definitions/AWS::EC2::LaunchTemplate.LaunchTemplateElasticInferenceAccelerator" }, - "markdownDescription": "> Amazon Elastic Inference is no longer available. \n\nAn elastic inference accelerator to associate with the instance. Elastic inference accelerators are a resource you can attach to your Amazon EC2 instances to accelerate your Deep Learning (DL) inference workloads.\n\nYou cannot specify accelerators from different generations in the same request.\n\n> Starting April 15, 2023, AWS will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. 
However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.", + "markdownDescription": "> Amazon Elastic Inference is no longer available. \n\nAn elastic inference accelerator to associate with the instance. Elastic inference accelerators are a resource you can attach to your Amazon EC2 instances to accelerate your Deep Learning (DL) inference workloads.\n\nYou cannot specify accelerators from different generations in the same request.", "title": "ElasticInferenceAccelerators", "type": "array" }, @@ -76600,7 +76600,7 @@ "type": "string" }, "GroupName": { - "markdownDescription": "The name of the security group.\n\nConstraints: Up to 255 characters in length. Cannot start with `sg-` .\n\nValid characters: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*", + "markdownDescription": "The name of the security group. Names are case-insensitive and must be unique within the VPC.\n\nConstraints: Up to 255 characters in length. Can't start with `sg-` .\n\nValid characters: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*", "title": "GroupName", "type": "string" }, @@ -77442,7 +77442,7 @@ "items": { "type": "string" }, - "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", + "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", "title": "AcceleratorTypes", "type": "array" }, @@ -82850,7 +82850,7 @@ "type": "string" }, "RepositoryPolicy": { - "markdownDescription": "he repository policy to apply to repositories created using the template. A repository policy is a permissions policy associated with a repository to control access permissions.", + "markdownDescription": "The repository policy to apply to repositories created using the template. 
A repository policy is a permissions policy associated with a repository to control access permissions.", "title": "RepositoryPolicy", "type": "string" }, @@ -83651,7 +83651,7 @@ "additionalProperties": false, "properties": { "AssignPublicIp": { - "markdownDescription": "Whether the task's elastic network interface receives a public IP address.\n\nConsider the following when you set this value:\n\n- When you use `create-service` or `update-service` , the default is `DISABLED` .\n- When the service `deploymentController` is `ECS` , the value must be `DISABLED` .\n- When you use `create-service` or `update-service` , the default is `ENABLED` .", + "markdownDescription": "Whether the task's elastic network interface receives a public IP address.\n\nConsider the following when you set this value:\n\n- When you use `create-service` or `update-service` , the default is `DISABLED` .\n- When the service `deploymentController` is `ECS` , the value must be `DISABLED` .", "title": "AssignPublicIp", "type": "string" }, @@ -85460,7 +85460,7 @@ "additionalProperties": false, "properties": { "AssignPublicIp": { - "markdownDescription": "Whether the task's elastic network interface receives a public IP address.\n\nConsider the following when you set this value:\n\n- When you use `create-service` or `update-service` , the default is `DISABLED` .\n- When the service `deploymentController` is `ECS` , the value must be `DISABLED` .\n- When you use `create-service` or `update-service` , the default is `ENABLED` .", + "markdownDescription": "Whether the task's elastic network interface receives a public IP address.\n\nConsider the following when you set this value:\n\n- When you use `create-service` or `update-service` , the default is `DISABLED` .\n- When the service `deploymentController` is `ECS` , the value must be `DISABLED` .", "title": "AssignPublicIp", "type": "string" }, @@ -87754,7 +87754,7 @@ "type": "array" }, "EbsOptimized": { - "markdownDescription": "Indicates whether an Amazon EBS volume is EBS-optimized.", + "markdownDescription": "Indicates whether an Amazon EBS volume is EBS-optimized. The default is false. You should explicitly set this value to true to enable the Amazon EBS-optimized setting for an EC2 instance.", "title": "EbsOptimized", "type": "boolean" } @@ -88573,7 +88573,7 @@ "type": "array" }, "EbsOptimized": { - "markdownDescription": "Indicates whether an Amazon EBS volume is EBS-optimized.", + "markdownDescription": "Indicates whether an Amazon EBS volume is EBS-optimized. The default is false. You should explicitly set this value to true to enable the Amazon EBS-optimized setting for an EC2 instance.", "title": "EbsOptimized", "type": "boolean" } @@ -88984,7 +88984,7 @@ "type": "array" }, "EbsOptimized": { - "markdownDescription": "Indicates whether an Amazon EBS volume is EBS-optimized.", + "markdownDescription": "Indicates whether an Amazon EBS volume is EBS-optimized. The default is false. 
You should explicitly set this value to true to enable the Amazon EBS-optimized setting for an EC2 instance.", "title": "EbsOptimized", "type": "boolean" } @@ -90756,7 +90756,7 @@ "additionalProperties": false, "properties": { "CacheParameterGroupFamily": { - "markdownDescription": "The name of the cache parameter group family that this cache parameter group is compatible with.\n\nValid values are: `memcached1.4` | `memcached1.5` | `memcached1.6` | `redis2.6` | `redis2.8` | `redis3.2` | `redis4.0` | `redis5.0` | `redis6.x` | `redis7`", + "markdownDescription": "The name of the cache parameter group family that this cache parameter group is compatible with.\n\nValid values are: `valkey8` | `valkey7` | `memcached1.4` | `memcached1.5` | `memcached1.6` | `redis2.6` | `redis2.8` | `redis3.2` | `redis4.0` | `redis5.0` | `redis6.x` | `redis7`", "title": "CacheParameterGroupFamily", "type": "string" }, @@ -93223,7 +93223,7 @@ "type": "boolean" }, "Mode": { - "markdownDescription": "The client certificate handling method. The possible values are `off` , `passthrough` , and `verify` . The default value is `off` .", + "markdownDescription": "The client certificate handling method. Options are `off` , `passthrough` or `verify` . The default value is `off` .", "title": "Mode", "type": "string" }, @@ -100855,7 +100855,7 @@ "additionalProperties": false, "properties": { "CopyTagsToSnapshots": { - "markdownDescription": "A Boolean value indicating whether tags for the volume should be copied to snapshots. This value defaults to `false` . If it's set to `true` , all tags for the volume are copied to snapshots where the user doesn't specify tags. If this value is `true` , and you specify one or more tags, only the specified tags are copied to snapshots. If you specify one or more tags when creating the snapshot, no tags are copied from the volume, regardless of this value.", + "markdownDescription": "A Boolean value indicating whether tags for the volume should be copied to snapshots. This value defaults to `false` . If this value is set to `true` , and you do not specify any tags, all tags for the original volume are copied over to snapshots. If this value is\u00a0set to `true` , and you do specify one or more tags, only the specified tags for the original volume are copied over to snapshots. If you specify one or more tags when creating a new snapshot, no tags are copied over from the original volume, regardless of this value.", "title": "CopyTagsToSnapshots", "type": "boolean" }, @@ -102797,18 +102797,18 @@ "type": "string" }, "OperatingSystem": { - "markdownDescription": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use Amazon GameLift server SDK 4.x., first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. 
See [Migrate to Amazon GameLift server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", + "markdownDescription": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "title": "OperatingSystem", "type": "string" }, "ServerSdkVersion": { - "markdownDescription": "A server SDK version you used when integrating your game server build with Amazon GameLift. For more information see [Integrate games with custom game servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/integration-custom-intro.html) . By default Amazon GameLift sets this value to `4.0.2` .", + "markdownDescription": "A server SDK version you used when integrating your game server build with Amazon GameLift Servers. For more information see [Integrate games with custom game servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/integration-custom-intro.html) . By default Amazon GameLift Servers sets this value to `4.0.2` .", "title": "ServerSdkVersion", "type": "string" }, "StorageLocation": { "$ref": "#/definitions/AWS::GameLift::Build.StorageLocation", - "markdownDescription": "Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an Amazon S3 bucket that you own. The storage location must specify an Amazon S3 bucket name and key. The location must also specify a role ARN that you set up to allow Amazon GameLift to access your Amazon S3 bucket. The S3 bucket and your new build must be in the same Region.\n\nIf a `StorageLocation` is specified, the size of your file can be found in your Amazon S3 bucket. Amazon GameLift will report a `SizeOnDisk` of 0.", + "markdownDescription": "Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an Amazon S3 bucket that you own. The storage location must specify an Amazon S3 bucket name and key. The location must also specify a role ARN that you set up to allow Amazon GameLift Servers to access your Amazon S3 bucket. The S3 bucket and your new build must be in the same Region.\n\nIf a `StorageLocation` is specified, the size of your file can be found in your Amazon S3 bucket. Amazon GameLift Servers will report a `SizeOnDisk` of 0.", "title": "StorageLocation" }, "Version": { @@ -102917,7 +102917,7 @@ "type": "string" }, "OperatingSystem": { - "markdownDescription": "The platform that all containers in the container group definition run on.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . 
For game servers that are hosted on AL2 and use Amazon GameLift server SDK 4.x, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to Amazon GameLift server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", + "markdownDescription": "The platform that all containers in the container group definition run on.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "title": "OperatingSystem", "type": "string" }, @@ -103199,7 +103199,7 @@ "properties": { "AnywhereConfiguration": { "$ref": "#/definitions/AWS::GameLift::Fleet.AnywhereConfiguration", - "markdownDescription": "Amazon GameLift Anywhere configuration options.", + "markdownDescription": "Amazon GameLift Servers Anywhere configuration options.", "title": "AnywhereConfiguration" }, "ApplyCapacity": { @@ -103214,7 +103214,7 @@ }, "CertificateConfiguration": { "$ref": "#/definitions/AWS::GameLift::Fleet.CertificateConfiguration", - "markdownDescription": "Prompts Amazon GameLift to generate a TLS/SSL certificate for the fleet. Amazon GameLift uses the certificates to encrypt traffic between game clients and the game servers running on Amazon GameLift. By default, the `CertificateConfiguration` is `DISABLED` . You can't change this property after you create the fleet.\n\nAWS Certificate Manager (ACM) certificates expire after 13 months. Certificate expiration can cause fleets to fail, preventing players from connecting to instances in the fleet. We recommend you replace fleets before 13 months, consider using fleet aliases for a smooth transition.\n\n> ACM isn't available in all AWS regions. A fleet creation request with certificate generation enabled in an unsupported Region, fails with a 4xx error. For more information about the supported Regions, see [Supported Regions](https://docs.aws.amazon.com/acm/latest/userguide/acm-regions.html) in the *AWS Certificate Manager User Guide* .", + "markdownDescription": "Prompts Amazon GameLift Servers to generate a TLS/SSL certificate for the fleet. Amazon GameLift Servers uses the certificates to encrypt traffic between game clients and the game servers running on Amazon GameLift Servers. By default, the `CertificateConfiguration` is `DISABLED` . You can't change this property after you create the fleet.\n\nAWS Certificate Manager (ACM) certificates expire after 13 months. Certificate expiration can cause fleets to fail, preventing players from connecting to instances in the fleet. We recommend you replace fleets before 13 months, consider using fleet aliases for a smooth transition.\n\n> ACM isn't available in all AWS regions. A fleet creation request with certificate generation enabled in an unsupported Region, fails with a 4xx error. 
For more information about the supported Regions, see [Supported Regions](https://docs.aws.amazon.com/acm/latest/userguide/acm-regions.html) in the *AWS Certificate Manager User Guide* .", "title": "CertificateConfiguration" }, "ComputeType": { @@ -103239,12 +103239,12 @@ "items": { "$ref": "#/definitions/AWS::GameLift::Fleet.IpPermission" }, - "markdownDescription": "The IP address ranges and port settings that allow inbound traffic to access game server processes and other processes on this fleet. Set this parameter for managed EC2 fleets. You can leave this parameter empty when creating the fleet, but you must call [](https://docs.aws.amazon.com/gamelift/latest/apireference/API_UpdateFleetPortSettings) to set it before players can connect to game sessions. As a best practice, we recommend opening ports for remote access only when you need them and closing them when you're finished. For Realtime Servers fleets, Amazon GameLift automatically sets TCP and UDP ranges.", + "markdownDescription": "The IP address ranges and port settings that allow inbound traffic to access game server processes and other processes on this fleet. Set this parameter for managed EC2 fleets. You can leave this parameter empty when creating the fleet, but you must call [](https://docs.aws.amazon.com/gamelift/latest/apireference/API_UpdateFleetPortSettings) to set it before players can connect to game sessions. As a best practice, we recommend opening ports for remote access only when you need them and closing them when you're finished. For Amazon GameLift Servers Realtime fleets, Amazon GameLift Servers automatically sets TCP and UDP ranges.", "title": "EC2InboundPermissions", "type": "array" }, "EC2InstanceType": { - "markdownDescription": "The Amazon GameLift-supported Amazon EC2 instance type to use with managed EC2 fleets. Instance type determines the computing resources that will be used to host your game servers, including CPU, memory, storage, and networking capacity. See [Amazon Elastic Compute Cloud Instance Types](https://docs.aws.amazon.com/ec2/instance-types/) for detailed descriptions of Amazon EC2 instance types.", + "markdownDescription": "The Amazon GameLift Servers-supported Amazon EC2 instance type to use with managed EC2 fleets. Instance type determines the computing resources that will be used to host your game servers, including CPU, memory, storage, and networking capacity. See [Amazon Elastic Compute Cloud Instance Types](https://docs.aws.amazon.com/ec2/instance-types/) for detailed descriptions of Amazon EC2 instance types.", "title": "EC2InstanceType", "type": "string" }, @@ -103267,7 +103267,7 @@ "items": { "$ref": "#/definitions/AWS::GameLift::Fleet.LocationConfiguration" }, - "markdownDescription": "A set of remote locations to deploy additional instances to and manage as a multi-location fleet. Use this parameter when creating a fleet in AWS Regions that support multiple locations. You can add any AWS Region or Local Zone that's supported by Amazon GameLift. Provide a list of one or more AWS Region codes, such as `us-west-2` , or Local Zone names. When using this parameter, Amazon GameLift requires you to include your home location in the request. For a list of supported Regions and Local Zones, see [Amazon GameLift service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", + "markdownDescription": "A set of remote locations to deploy additional instances to and manage as a multi-location fleet. 
Use this parameter when creating a fleet in AWS Regions that support multiple locations. You can add any AWS Region or Local Zone that's supported by Amazon GameLift Servers. Provide a list of one or more AWS Region codes, such as `us-west-2` , or Local Zone names. When using this parameter, Amazon GameLift Servers requires you to include your home location in the request. For a list of supported Regions and Local Zones, see [Amazon GameLift Servers service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", "title": "Locations", "type": "array" }, @@ -103300,12 +103300,12 @@ "type": "string" }, "PeerVpcAwsAccountId": { - "markdownDescription": "Used when peering your Amazon GameLift fleet with a VPC, the unique identifier for the AWS account that owns the VPC. You can find your account ID in the AWS Management Console under account settings.", + "markdownDescription": "Used when peering your Amazon GameLift Servers fleet with a VPC, the unique identifier for the AWS account that owns the VPC. You can find your account ID in the AWS Management Console under account settings.", "title": "PeerVpcAwsAccountId", "type": "string" }, "PeerVpcId": { - "markdownDescription": "A unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same Region as your fleet. To look up a VPC ID, use the [VPC Dashboard](https://docs.aws.amazon.com/vpc/) in the AWS Management Console . Learn more about VPC peering in [VPC Peering with Amazon GameLift Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/vpc-peering.html) .", + "markdownDescription": "A unique identifier for a VPC with resources to be accessed by your Amazon GameLift Servers fleet. The VPC must be in the same Region as your fleet. To look up a VPC ID, use the [VPC Dashboard](https://docs.aws.amazon.com/vpc/) in the AWS Management Console . Learn more about VPC peering in [VPC Peering with Amazon GameLift Servers Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/vpc-peering.html) .", "title": "PeerVpcId", "type": "string" }, @@ -103363,7 +103363,7 @@ "additionalProperties": false, "properties": { "Cost": { - "markdownDescription": "The cost to run your fleet per hour. Amazon GameLift uses the provided cost of your fleet to balance usage in queues. For more information about queues, see [Setting up queues](https://docs.aws.amazon.com/gamelift/latest/developerguide/queues-intro.html) in the *Amazon GameLift Developer Guide* .", + "markdownDescription": "The cost to run your fleet per hour. Amazon GameLift Servers uses the provided cost of your fleet to balance usage in queues. For more information about queues, see [Setting up queues](https://docs.aws.amazon.com/gamelift/latest/developerguide/queues-intro.html) in the *Amazon GameLift Servers Developer Guide* .", "title": "Cost", "type": "string" } @@ -103499,7 +103499,7 @@ "additionalProperties": false, "properties": { "Location": { - "markdownDescription": "An AWS Region code, such as `us-west-2` . For a list of supported Regions and Local Zones, see [Amazon GameLift service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", + "markdownDescription": "An AWS Region code, such as `us-west-2` . 
For a list of supported Regions and Local Zones, see [Amazon GameLift Servers service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", "title": "Location", "type": "string" }, @@ -103518,7 +103518,7 @@ "additionalProperties": false, "properties": { "NewGameSessionsPerCreator": { - "markdownDescription": "A policy that puts limits on the number of game sessions that a player can create within a specified span of time. With this policy, you can control players' ability to consume available resources.\n\nThe policy is evaluated when a player tries to create a new game session. On receiving a `CreateGameSession` request, Amazon GameLift checks that the player (identified by `CreatorId` ) has created fewer than game session limit in the specified time period.", + "markdownDescription": "A policy that puts limits on the number of game sessions that a player can create within a specified span of time. With this policy, you can control players' ability to consume available resources.\n\nThe policy is evaluated when a player tries to create a new game session. On receiving a `CreateGameSession` request, Amazon GameLift Servers checks that the player (identified by `CreatorId` ) has created fewer than game session limit in the specified time period.", "title": "NewGameSessionsPerCreator", "type": "number" }, @@ -103573,7 +103573,7 @@ "type": "string" }, "MetricName": { - "markdownDescription": "Name of the Amazon GameLift-defined metric that is used to trigger a scaling adjustment. For detailed descriptions of fleet metrics, see [Monitor Amazon GameLift with Amazon CloudWatch](https://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html) .\n\n- *ActivatingGameSessions* -- Game sessions in the process of being created.\n- *ActiveGameSessions* -- Game sessions that are currently running.\n- *ActiveInstances* -- Fleet instances that are currently running at least one game session.\n- *AvailableGameSessions* -- Additional game sessions that fleet could host simultaneously, given current capacity.\n- *AvailablePlayerSessions* -- Empty player slots in currently active game sessions. This includes game sessions that are not currently accepting players. Reserved player slots are not included.\n- *CurrentPlayerSessions* -- Player slots in active game sessions that are being used by a player or are reserved for a player.\n- *IdleInstances* -- Active instances that are currently hosting zero game sessions.\n- *PercentAvailableGameSessions* -- Unused percentage of the total number of game sessions that a fleet could host simultaneously, given current capacity. Use this metric for a target-based scaling policy.\n- *PercentIdleInstances* -- Percentage of the total number of active instances that are hosting zero game sessions.\n- *QueueDepth* -- Pending game session placement requests, in any queue, where the current fleet is the top-priority destination.\n- *WaitTime* -- Current wait time for pending game session placement requests, in any queue, where the current fleet is the top-priority destination.", + "markdownDescription": "Name of the Amazon GameLift Servers-defined metric that is used to trigger a scaling adjustment. 
For detailed descriptions of fleet metrics, see [Monitor Amazon GameLift Servers with Amazon CloudWatch](https://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html) .\n\n- *ActivatingGameSessions* -- Game sessions in the process of being created.\n- *ActiveGameSessions* -- Game sessions that are currently running.\n- *ActiveInstances* -- Fleet instances that are currently running at least one game session.\n- *AvailableGameSessions* -- Additional game sessions that fleet could host simultaneously, given current capacity.\n- *AvailablePlayerSessions* -- Empty player slots in currently active game sessions. This includes game sessions that are not currently accepting players. Reserved player slots are not included.\n- *CurrentPlayerSessions* -- Player slots in active game sessions that are being used by a player or are reserved for a player.\n- *IdleInstances* -- Active instances that are currently hosting zero game sessions.\n- *PercentAvailableGameSessions* -- Unused percentage of the total number of game sessions that a fleet could host simultaneously, given current capacity. Use this metric for a target-based scaling policy.\n- *PercentIdleInstances* -- Percentage of the total number of active instances that are hosting zero game sessions.\n- *QueueDepth* -- Pending game session placement requests, in any queue, where the current fleet is the top-priority destination.\n- *WaitTime* -- Current wait time for pending game session placement requests, in any queue, where the current fleet is the top-priority destination.", "title": "MetricName", "type": "string" }, @@ -103633,7 +103633,7 @@ "type": "number" }, "LaunchPath": { - "markdownDescription": "The location of a game build executable or Realtime script. Game builds and Realtime scripts are installed on instances at the root:\n\n- Windows (custom game builds only): `C:\\game` . Example: \" `C:\\game\\MyGame\\server.exe` \"\n- Linux: `/local/game` . Examples: \" `/local/game/MyGame/server.exe` \" or \" `/local/game/MyRealtimeScript.js` \"\n\n> Amazon GameLift doesn't support the use of setup scripts that launch the game executable. For custom game builds, this parameter must indicate the executable that calls the server SDK operations `initSDK()` and `ProcessReady()` .", + "markdownDescription": "The location of a game build executable or Realtime script. Game builds and Realtime scripts are installed on instances at the root:\n\n- Windows (custom game builds only): `C:\\game` . Example: \" `C:\\game\\MyGame\\server.exe` \"\n- Linux: `/local/game` . Examples: \" `/local/game/MyGame/server.exe` \" or \" `/local/game/MyRealtimeScript.js` \"\n\n> Amazon GameLift Servers doesn't support the use of setup scripts that launch the game executable. For custom game builds, this parameter must indicate the executable that calls the server SDK operations `initSDK()` and `ProcessReady()` .", "title": "LaunchPath", "type": "string" }, @@ -103704,7 +103704,7 @@ "title": "AutoScalingPolicy" }, "BalancingStrategy": { - "markdownDescription": "Indicates how Amazon GameLift FleetIQ balances the use of Spot Instances and On-Demand Instances in the game server group. Method options include the following:\n\n- `SPOT_ONLY` - Only Spot Instances are used in the game server group. If Spot Instances are unavailable or not viable for game hosting, the game server group provides no hosting capacity until Spot Instances can again be used. 
Until then, no new instances are started, and the existing nonviable Spot Instances are terminated (after current gameplay ends) and are not replaced.\n- `SPOT_PREFERRED` - (default value) Spot Instances are used whenever available in the game server group. If Spot Instances are unavailable, the game server group continues to provide hosting capacity by falling back to On-Demand Instances. Existing nonviable Spot Instances are terminated (after current gameplay ends) and are replaced with new On-Demand Instances.\n- `ON_DEMAND_ONLY` - Only On-Demand Instances are used in the game server group. No Spot Instances are used, even when available, while this balancing strategy is in force.", + "markdownDescription": "Indicates how Amazon GameLift Servers FleetIQ balances the use of Spot Instances and On-Demand Instances in the game server group. Method options include the following:\n\n- `SPOT_ONLY` - Only Spot Instances are used in the game server group. If Spot Instances are unavailable or not viable for game hosting, the game server group provides no hosting capacity until Spot Instances can again be used. Until then, no new instances are started, and the existing nonviable Spot Instances are terminated (after current gameplay ends) and are not replaced.\n- `SPOT_PREFERRED` - (default value) Spot Instances are used whenever available in the game server group. If Spot Instances are unavailable, the game server group continues to provide hosting capacity by falling back to On-Demand Instances. Existing nonviable Spot Instances are terminated (after current gameplay ends) and are replaced with new On-Demand Instances.\n- `ON_DEMAND_ONLY` - Only On-Demand Instances are used in the game server group. No Spot Instances are used, even when available, while this balancing strategy is in force.", "title": "BalancingStrategy", "type": "string" }, @@ -103727,27 +103727,27 @@ "items": { "$ref": "#/definitions/AWS::GameLift::GameServerGroup.InstanceDefinition" }, - "markdownDescription": "The set of Amazon EC2 instance types that Amazon GameLift FleetIQ can use when balancing and automatically scaling instances in the corresponding Auto Scaling group.", + "markdownDescription": "The set of Amazon EC2 instance types that Amazon GameLift Servers FleetIQ can use when balancing and automatically scaling instances in the corresponding Auto Scaling group.", "title": "InstanceDefinitions", "type": "array" }, "LaunchTemplate": { "$ref": "#/definitions/AWS::GameLift::GameServerGroup.LaunchTemplate", - "markdownDescription": "The Amazon EC2 launch template that contains configuration settings and game server code to be deployed to all instances in the game server group. You can specify the template using either the template name or ID. For help with creating a launch template, see [Creating a Launch Template for an Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html) in the *Amazon Elastic Compute Cloud Auto Scaling User Guide* . After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.\n\n> If you specify network interfaces in your launch template, you must explicitly set the property `AssociatePublicIpAddress` to \"true\". 
If no network interface is specified in the launch template, Amazon GameLift FleetIQ uses your account's default VPC.", + "markdownDescription": "The Amazon EC2 launch template that contains configuration settings and game server code to be deployed to all instances in the game server group. You can specify the template using either the template name or ID. For help with creating a launch template, see [Creating a Launch Template for an Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html) in the *Amazon Elastic Compute Cloud Auto Scaling User Guide* . After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.\n\n> If you specify network interfaces in your launch template, you must explicitly set the property `AssociatePublicIpAddress` to \"true\". If no network interface is specified in the launch template, Amazon GameLift Servers FleetIQ uses your account's default VPC.", "title": "LaunchTemplate" }, "MaxSize": { - "markdownDescription": "The maximum number of instances allowed in the Amazon EC2 Auto Scaling group. During automatic scaling events, Amazon GameLift FleetIQ and EC2 do not scale up the group above this maximum. After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", + "markdownDescription": "The maximum number of instances allowed in the Amazon EC2 Auto Scaling group. During automatic scaling events, Amazon GameLift Servers FleetIQ and EC2 do not scale up the group above this maximum. After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", "title": "MaxSize", "type": "number" }, "MinSize": { - "markdownDescription": "The minimum number of instances allowed in the Amazon EC2 Auto Scaling group. During automatic scaling events, Amazon GameLift FleetIQ and Amazon EC2 do not scale down the group below this minimum. In production, this value should be set to at least 1. After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", + "markdownDescription": "The minimum number of instances allowed in the Amazon EC2 Auto Scaling group. During automatic scaling events, Amazon GameLift Servers FleetIQ and Amazon EC2 do not scale down the group below this minimum. In production, this value should be set to at least 1. After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", "title": "MinSize", "type": "number" }, "RoleArn": { - "markdownDescription": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) for an IAM role that allows Amazon GameLift to access your Amazon EC2 Auto Scaling groups.", + "markdownDescription": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) for an IAM role that allows Amazon GameLift Servers to access your Amazon EC2 Auto Scaling groups.", "title": "RoleArn", "type": "string" }, @@ -103763,7 +103763,7 @@ "items": { "type": "string" }, - "markdownDescription": "A list of virtual private cloud (VPC) subnets to use with instances in the game server group. By default, all Amazon GameLift FleetIQ-supported Availability Zones are used. You can use this parameter to specify VPCs that you've set up. 
This property cannot be updated after the game server group is created, and the corresponding Auto Scaling group will always use the property value that is set with this request, even if the Auto Scaling group is updated directly.", + "markdownDescription": "A list of virtual private cloud (VPC) subnets to use with instances in the game server group. By default, all Amazon GameLift Servers FleetIQ-supported Availability Zones are used. You can use this parameter to specify VPCs that you've set up. This property cannot be updated after the game server group is created, and the corresponding Auto Scaling group will always use the property value that is set with this request, even if the Auto Scaling group is updated directly.", "title": "VpcSubnets", "type": "array" } @@ -103800,7 +103800,7 @@ "additionalProperties": false, "properties": { "EstimatedInstanceWarmup": { - "markdownDescription": "Length of time, in seconds, it takes for a new instance to start new game server processes and register with Amazon GameLift FleetIQ. Specifying a warm-up time can be useful, particularly with game servers that take a long time to start up, because it avoids prematurely starting new instances.", + "markdownDescription": "Length of time, in seconds, it takes for a new instance to start new game server processes and register with Amazon GameLift Servers FleetIQ. Specifying a warm-up time can be useful, particularly with game servers that take a long time to start up, because it avoids prematurely starting new instances.", "title": "EstimatedInstanceWarmup", "type": "number" }, @@ -103824,7 +103824,7 @@ "type": "string" }, "WeightedCapacity": { - "markdownDescription": "Instance weighting that indicates how much this instance type contributes to the total capacity of a game server group. Instance weights are used by Amazon GameLift FleetIQ to calculate the instance type's cost per unit hour and better identify the most cost-effective options. For detailed information on weighting instance capacity, see [Instance Weighting](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-weighting.html) in the *Amazon Elastic Compute Cloud Auto Scaling User Guide* . Default value is \"1\".", + "markdownDescription": "Instance weighting that indicates how much this instance type contributes to the total capacity of a game server group. Instance weights are used by Amazon GameLift Servers FleetIQ to calculate the instance type's cost per unit hour and better identify the most cost-effective options. For detailed information on weighting instance capacity, see [Instance Weighting](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-weighting.html) in the *Amazon Elastic Compute Cloud Auto Scaling User Guide* . Default value is \"1\".", "title": "WeightedCapacity", "type": "string" } @@ -103936,7 +103936,7 @@ "items": { "$ref": "#/definitions/AWS::GameLift::GameSessionQueue.PlayerLatencyPolicy" }, - "markdownDescription": "A set of policies that enforce a sliding cap on player latency when processing game sessions placement requests. Use multiple policies to gradually relax the cap over time if Amazon GameLift can't make a placement. Policies are evaluated in order starting with the lowest maximum latency value.", + "markdownDescription": "A set of policies that enforce a sliding cap on player latency when processing game sessions placement requests. Use multiple policies to gradually relax the cap over time if Amazon GameLift Servers can't make a placement. 
Policies are evaluated in order starting with the lowest maximum latency value.", "title": "PlayerLatencyPolicies", "type": "array" }, @@ -103954,7 +103954,7 @@ "type": "array" }, "TimeoutInSeconds": { - "markdownDescription": "The maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a `TIMED_OUT` status.", + "markdownDescription": "The maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a `TIMED_OUT` status. If you don't specify a request timeout, the queue uses a default value.", "title": "TimeoutInSeconds", "type": "number" } @@ -104033,7 +104033,7 @@ "items": { "type": "string" }, - "markdownDescription": "The prioritization order to use for fleet locations, when the `PriorityOrder` property includes `LOCATION` . Locations can include AWS Region codes (such as `us-west-2` ), local zones, and custom locations (for Anywhere fleets). Each location must be listed only once. For details, see [Amazon GameLift service locations.](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html)", + "markdownDescription": "The prioritization order to use for fleet locations, when the `PriorityOrder` property includes `LOCATION` . Locations can include AWS Region codes (such as `us-west-2` ), local zones, and custom locations (for Anywhere fleets). Each location must be listed only once. For details, see [Amazon GameLift Servers service locations.](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html)", "title": "LocationOrder", "type": "array" }, @@ -104041,7 +104041,7 @@ "items": { "type": "string" }, - "markdownDescription": "A custom sequence to use when prioritizing where to place new game sessions. Each priority type is listed once.\n\n- `LATENCY` -- Amazon GameLift prioritizes locations where the average player latency is lowest. Player latency data is provided in each game session placement request.\n- `COST` -- Amazon GameLift prioritizes destinations with the lowest current hosting costs. Cost is evaluated based on the location, instance type, and fleet type (Spot or On-Demand) of each destination in the queue.\n- `DESTINATION` -- Amazon GameLift prioritizes based on the list order of destinations in the queue configuration.\n- `LOCATION` -- Amazon GameLift prioritizes based on the provided order of locations, as defined in `LocationOrder` .", + "markdownDescription": "A custom sequence to use when prioritizing where to place new game sessions. Each priority type is listed once.\n\n- `LATENCY` -- Amazon GameLift Servers prioritizes locations where the average player latency is lowest. Player latency data is provided in each game session placement request.\n- `COST` -- Amazon GameLift Servers prioritizes queue destinations with the lowest current hosting costs. 
Cost is evaluated based on the destination's location, instance type, and fleet type (Spot or On-Demand).\n- `DESTINATION` -- Amazon GameLift Servers prioritizes based on the list order of destinations in the queue configuration.\n- `LOCATION` -- Amazon GameLift Servers prioritizes based on the provided order of locations, as defined in `LocationOrder` .", "title": "PriorityOrder", "type": "array" } @@ -104194,7 +104194,7 @@ "type": "string" }, "FlexMatchMode": { - "markdownDescription": "Indicates whether this matchmaking configuration is being used with Amazon GameLift hosting or as a standalone matchmaking solution.\n\n- *STANDALONE* - FlexMatch forms matches and returns match information, including players and team assignments, in a [MatchmakingSucceeded](https://docs.aws.amazon.com/gamelift/latest/flexmatchguide/match-events.html#match-events-matchmakingsucceeded) event.\n- *WITH_QUEUE* - FlexMatch forms matches and uses the specified Amazon GameLift queue to start a game session for the match.", + "markdownDescription": "Indicates whether this matchmaking configuration is being used with Amazon GameLift Servers hosting or as a standalone matchmaking solution.\n\n- *STANDALONE* - FlexMatch forms matches and returns match information, including players and team assignments, in a [MatchmakingSucceeded](https://docs.aws.amazon.com/gamelift/latest/flexmatchguide/match-events.html#match-events-matchmakingsucceeded) event.\n- *WITH_QUEUE* - FlexMatch forms matches and uses the specified Amazon GameLift Servers queue to start a game session for the match.", "title": "FlexMatchMode", "type": "string" }, @@ -104215,7 +104215,7 @@ "items": { "type": "string" }, - "markdownDescription": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) that is assigned to a Amazon GameLift game session queue resource and uniquely identifies it. ARNs are unique across all Regions. Format is `arn:aws:gamelift:::gamesessionqueue/` . Queues can be located in any Region. Queues are used to start new Amazon GameLift-hosted game sessions for matches that are created with this matchmaking configuration. If `FlexMatchMode` is set to `STANDALONE` , do not set this parameter.", + "markdownDescription": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) that is assigned to a Amazon GameLift Servers game session queue resource and uniquely identifies it. ARNs are unique across all Regions. Format is `arn:aws:gamelift:::gamesessionqueue/` . Queues can be located in any Region. Queues are used to start new Amazon GameLift Servers-hosted game sessions for matches that are created with this matchmaking configuration. If `FlexMatchMode` is set to `STANDALONE` , do not set this parameter.", "title": "GameSessionQueueArns", "type": "array" }, @@ -104425,7 +104425,7 @@ }, "StorageLocation": { "$ref": "#/definitions/AWS::GameLift::Script.S3Location", - "markdownDescription": "The location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the \"key\"), and a role ARN that allows Amazon GameLift to access the Amazon S3 storage location. The S3 bucket must be in the same Region where you want to create a new script. 
By default, Amazon GameLift uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the `ObjectVersion` parameter to specify an earlier version.", + "markdownDescription": "The location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the \"key\"), and a role ARN that allows Amazon GameLift Servers to access the Amazon S3 storage location. The S3 bucket must be in the same Region where you want to create a new script. By default, Amazon GameLift Servers uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the `ObjectVersion` parameter to specify an earlier version.", "title": "StorageLocation" }, "Tags": { @@ -104472,7 +104472,7 @@ "additionalProperties": false, "properties": { "Bucket": { - "markdownDescription": "An Amazon S3 bucket identifier. Thename of the S3 bucket.\n\n> Amazon GameLift doesn't support uploading from Amazon S3 buckets with names that contain a dot (.).", + "markdownDescription": "An Amazon S3 bucket identifier. Thename of the S3 bucket.\n\n> Amazon GameLift Servers doesn't support uploading from Amazon S3 buckets with names that contain a dot (.).", "title": "Bucket", "type": "string" }, @@ -104482,12 +104482,12 @@ "type": "string" }, "ObjectVersion": { - "markdownDescription": "The version of the file, if object versioning is turned on for the bucket. Amazon GameLift uses this information when retrieving files from an S3 bucket that you own. Use this parameter to specify a specific version of the file. If not set, the latest version of the file is retrieved.", + "markdownDescription": "The version of the file, if object versioning is turned on for the bucket. Amazon GameLift Servers uses this information when retrieving files from an S3 bucket that you own. Use this parameter to specify a specific version of the file. If not set, the latest version of the file is retrieved.", "title": "ObjectVersion", "type": "string" }, "RoleArn": { - "markdownDescription": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) for an IAM role that allows Amazon GameLift to access the S3 bucket.", + "markdownDescription": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) for an IAM role that allows Amazon GameLift Servers to access the S3 bucket.", "title": "RoleArn", "type": "string" } @@ -120355,7 +120355,7 @@ }, "DeviceCertificateExpiringCheck": { "$ref": "#/definitions/AWS::IoT::AccountAuditConfiguration.AuditCheckConfiguration", - "markdownDescription": "Checks if a device certificate is expiring. This check applies to device certificates expiring within 30 days or that have expired.", + "markdownDescription": "Checks if a device certificate is expiring. By default, this check applies to device certificates expiring within 30 days or that have expired. You can modify this threshold by configuring the DeviceCertExpirationAuditCheckConfiguration.", "title": "DeviceCertificateExpiringCheck" }, "DeviceCertificateKeyQualityCheck": { @@ -122979,12 +122979,12 @@ "additionalProperties": false, "properties": { "Description": { - "markdownDescription": "", + "markdownDescription": "A summary of the package being created. 
This can be used to outline the package's contents or purpose.", "title": "Description", "type": "string" }, "PackageName": { - "markdownDescription": "", + "markdownDescription": "The name of the new software package.", "title": "PackageName", "type": "string" }, @@ -122992,7 +122992,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "", + "markdownDescription": "Metadata that can be used to manage the package.", "title": "Tags", "type": "array" } @@ -123428,7 +123428,7 @@ "additionalProperties": false, "properties": { "DeprecateThingType": { - "markdownDescription": "Deprecates a thing type. You can not associate new things with deprecated thing type. You cannot update `ThingTypeProperties` if the thing type is deprecated.\n\nRequires permission to access the [DeprecateThingType](https://docs.aws.amazon.com//service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions) action.", + "markdownDescription": "Deprecates a thing type. You can not associate new things with deprecated thing type.\n\nRequires permission to access the [DeprecateThingType](https://docs.aws.amazon.com//service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions) action.", "title": "DeprecateThingType", "type": "boolean" }, @@ -123528,7 +123528,7 @@ "additionalProperties": false, "properties": { "RuleName": { - "markdownDescription": "The name of the rule.\n\n*Pattern* : `^[a-zA-Z0-9_]+$`", + "markdownDescription": "The name of the rule.", "title": "RuleName", "type": "string" }, @@ -130183,9 +130183,7 @@ "additionalProperties": false, "properties": { "Greengrass": { - "$ref": "#/definitions/AWS::IoTSiteWise::Gateway.Greengrass", - "markdownDescription": "A gateway that runs on AWS IoT Greengrass .", - "title": "Greengrass" + "$ref": "#/definitions/AWS::IoTSiteWise::Gateway.Greengrass" }, "GreengrassV2": { "$ref": "#/definitions/AWS::IoTSiteWise::Gateway.GreengrassV2", @@ -130194,7 +130192,7 @@ }, "SiemensIE": { "$ref": "#/definitions/AWS::IoTSiteWise::Gateway.SiemensIE", - "markdownDescription": "A AWS IoT SiteWise Edge gateway that runs on a Siemens Industrial Edge Device.", + "markdownDescription": "An AWS IoT SiteWise Edge gateway that runs on a Siemens Industrial Edge Device.", "title": "SiemensIE" } }, @@ -133820,7 +133818,7 @@ "type": "string" }, "ConnectorName": { - "markdownDescription": "The name of the connector.", + "markdownDescription": "The name of the connector.\n\nThe connector name must be unique and can include up to 128 characters. Valid characters you can include in a connector name are: a-z, A-Z, 0-9, and -.", "title": "ConnectorName", "type": "string" }, @@ -140376,7 +140374,7 @@ }, "ProcessingConfiguration": { "$ref": "#/definitions/AWS::KinesisFirehose::DeliveryStream.ProcessingConfiguration", - "markdownDescription": "Specifies configuration for Snowflake.", + "markdownDescription": "", "title": "ProcessingConfiguration" }, "RetryOptions": { @@ -140983,7 +140981,7 @@ "type": "string" }, "Parameters": { - "markdownDescription": "A key-value map that provides an additional configuration on your data lake. `CrossAccountVersion` is the key you can configure in the `Parameters` field. Accepted values for the `CrossAccountVersion` key are 1, 2, and 3.", + "markdownDescription": "A key-value map that provides an additional configuration on your data lake. `CrossAccountVersion` is the key you can configure in the `Parameters` field. 
Accepted values for the `CrossAccountVersion` key are 1, 2, 3, and 4.", "title": "Parameters", "type": "object" }, @@ -142428,12 +142426,12 @@ "properties": { "OnFailure": { "$ref": "#/definitions/AWS::Lambda::EventInvokeConfig.OnFailure", - "markdownDescription": "The destination configuration for failed invocations.", + "markdownDescription": "The destination configuration for failed invocations.\n\n> When using an Amazon SQS queue as a destination, FIFO queues cannot be used.", "title": "OnFailure" }, "OnSuccess": { "$ref": "#/definitions/AWS::Lambda::EventInvokeConfig.OnSuccess", - "markdownDescription": "The destination configuration for successful invocations.", + "markdownDescription": "The destination configuration for successful invocations.\n\n> When using an Amazon SQS queue as a destination, FIFO queues cannot be used.", "title": "OnSuccess" } }, @@ -142513,7 +142511,7 @@ "type": "number" }, "BisectBatchOnFunctionError": { - "markdownDescription": "(Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.", + "markdownDescription": "(Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.\n\n> When using `BisectBatchOnFunctionError` , check the `BatchSize` parameter in the `OnFailure` destination message's metadata. The `BatchSize` could be greater than 1 since Lambda consolidates failed messages metadata when writing to the `OnFailure` destination.", "title": "BisectBatchOnFunctionError", "type": "boolean" }, @@ -148845,7 +148843,7 @@ "items": { "type": "string" }, - "markdownDescription": "A list of allowed actions that an API key resource grants permissions to perform. You must have at least one action for each type of resource. For example, if you have a place resource, you must include at least one place action.\n\nThe following are valid values for the actions.\n\n- *Map actions*\n\n- `geo:GetMap*` - Allows all actions needed for map rendering.\n- *Place actions*\n\n- `geo:SearchPlaceIndexForText` - Allows geocoding.\n- `geo:SearchPlaceIndexForPosition` - Allows reverse geocoding.\n- `geo:SearchPlaceIndexForSuggestions` - Allows generating suggestions from text.\n- `geo:GetPlace` - Allows finding a place by place ID.\n- *Route actions*\n\n- `geo:CalculateRoute` - Allows point to point routing.\n- `geo:CalculateRouteMatrix` - Allows calculating a matrix of routes.\n\n> You must use these strings exactly. For example, to provide access to map rendering, the only valid action is `geo:GetMap*` as an input to the list. `[\"geo:GetMap*\"]` is valid but `[\"geo:GetMapTile\"]` is not. Similarly, you cannot use `[\"geo:SearchPlaceIndexFor*\"]` - you must list each of the Place actions separately.", + "markdownDescription": "A list of allowed actions that an API key resource grants permissions to perform. You must have at least one action for each type of resource. 
For example, if you have a place resource, you must include at least one place action.\n\nThe following are valid values for the actions.\n\n- *Map actions*\n\n- `geo:GetMap*` - Allows all actions needed for map rendering.\n- *Enhanced Maps actions*\n\n- `geo-maps:GetTile` - Allows getting map tiles for rendering.\n- `geo-maps:GetStaticMap` - Allows getting static map images.\n- *Place actions*\n\n- `geo:SearchPlaceIndexForText` - Allows finding geo coordinates of a known place.\n- `geo:SearchPlaceIndexForPosition` - Allows getting nearest address to geo coordinates.\n- `geo:SearchPlaceIndexForSuggestions` - Allows suggestions based on an incomplete or misspelled query.\n- `geo:GetPlace` - Allows getting details of a place.\n- *Enhanced Places actions*\n\n- `geo-places:Autcomplete` - Allows auto-completion of search text.\n- `geo-places:Geocode` - Allows finding geo coordinates of a known place.\n- `geo-places:GetPlace` - Allows getting details of a place.\n- `geo-places:ReverseGeocode` - Allows getting nearest address to geo coordinates.\n- `geo-places:SearchNearby` - Allows category based places search around geo coordinates.\n- `geo-places:SearchText` - Allows place or address search based on free-form text.\n- `geo-places:Suggest` - Allows suggestions based on an incomplete or misspelled query.\n- *Route actions*\n\n- `geo:CalculateRoute` - Allows point to point routing.\n- `geo:CalculateRouteMatrix` - Allows matrix routing.\n- *Enhanced Routes actions*\n\n- `geo-routes:CalculateIsolines` - Allows isoline calculation.\n- `geo-routes:CalculateRoutes` - Allows point to point routing.\n- `geo-routes:CalculateRouteMatrix` - Allows matrix routing.\n- `geo-routes:OptimizeWaypoints` - Allows computing the best sequence of waypoints.\n- `geo-routes:SnapToRoads` - Allows snapping GPS points to a likely route.\n\n> You must use these strings exactly. For example, to provide access to map rendering, the only valid action is `geo:GetMap*` as an input to the list. `[\"geo:GetMap*\"]` is valid but `[\"geo:GetTile\"]` is not. 
Similarly, you cannot use `[\"geo:SearchPlaceIndexFor*\"]` - you must list each of the Place actions separately.", "title": "AllowActions", "type": "array" }, @@ -151840,7 +151838,7 @@ "additionalProperties": false, "properties": { "ClusterArn": { - "markdownDescription": "", + "markdownDescription": "The Amazon Resource Name (ARN) that uniquely identifies the cluster.", "title": "ClusterArn", "type": "string" }, @@ -151848,7 +151846,7 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "List of Amazon Resource Name (ARN)s of Secrets Manager secrets.", "title": "SecretArnList", "type": "array" } @@ -151916,67 +151914,67 @@ "properties": { "BrokerNodeGroupInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.BrokerNodeGroupInfo", - "markdownDescription": "", + "markdownDescription": "Information about the broker nodes in the cluster.", "title": "BrokerNodeGroupInfo" }, "ClientAuthentication": { "$ref": "#/definitions/AWS::MSK::Cluster.ClientAuthentication", - "markdownDescription": "", + "markdownDescription": "Includes all client authentication related information.", "title": "ClientAuthentication" }, "ClusterName": { - "markdownDescription": "", + "markdownDescription": "The name of the cluster.", "title": "ClusterName", "type": "string" }, "ConfigurationInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.ConfigurationInfo", - "markdownDescription": "", + "markdownDescription": "Represents the configuration that you want MSK to use for the cluster.", "title": "ConfigurationInfo" }, "CurrentVersion": { - "markdownDescription": "", + "markdownDescription": "The version of the cluster that you want to update.", "title": "CurrentVersion", "type": "string" }, "EncryptionInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.EncryptionInfo", - "markdownDescription": "", + "markdownDescription": "Includes all encryption-related information.", "title": "EncryptionInfo" }, "EnhancedMonitoring": { - "markdownDescription": "", + "markdownDescription": "Specifies the level of monitoring for the MSK cluster.", "title": "EnhancedMonitoring", "type": "string" }, "KafkaVersion": { - "markdownDescription": "", + "markdownDescription": "The version of Apache Kafka. 
You can use Amazon MSK to create clusters that use [supported Apache Kafka versions](https://docs.aws.amazon.com/msk/latest/developerguide/supported-kafka-versions.html) .", "title": "KafkaVersion", "type": "string" }, "LoggingInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.LoggingInfo", - "markdownDescription": "", + "markdownDescription": "Logging info details for the cluster.", "title": "LoggingInfo" }, "NumberOfBrokerNodes": { - "markdownDescription": "", + "markdownDescription": "The number of broker nodes in the cluster.", "title": "NumberOfBrokerNodes", "type": "number" }, "OpenMonitoring": { "$ref": "#/definitions/AWS::MSK::Cluster.OpenMonitoring", - "markdownDescription": "", + "markdownDescription": "The settings for open monitoring.", "title": "OpenMonitoring" }, "StorageMode": { - "markdownDescription": "", + "markdownDescription": "This controls storage mode for supported storage tiers.", "title": "StorageMode", "type": "string" }, "Tags": { "additionalProperties": true, - "markdownDescription": "", + "markdownDescription": "An arbitrary set of tags (key-value pairs) for the cluster.", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -152040,7 +152038,7 @@ "additionalProperties": false, "properties": { "BrokerAZDistribution": { - "markdownDescription": "", + "markdownDescription": "This parameter is currently not in use.", "title": "BrokerAZDistribution", "type": "string" }, @@ -152048,13 +152046,13 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "The list of subnets to connect to in the client virtual private cloud (VPC). Amazon creates elastic network interfaces (ENIs) inside these subnets. Client applications use ENIs to produce and consume data.\n\nIf you use the US West (N. California) Region, specify exactly two subnets. For other Regions where Amazon MSK is available, you can specify either two or three subnets. The subnets that you specify must be in distinct Availability Zones. When you create a cluster, Amazon MSK distributes the broker nodes evenly across the subnets that you specify.\n\nClient subnets can't occupy the Availability Zone with ID `use1-az3` .", "title": "ClientSubnets", "type": "array" }, "ConnectivityInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.ConnectivityInfo", - "markdownDescription": "", + "markdownDescription": "Information about the cluster's connectivity setting.", "title": "ConnectivityInfo" }, "InstanceType": { @@ -152066,13 +152064,13 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "The security groups to associate with the ENIs in order to specify who can connect to and communicate with the Amazon MSK cluster. If you don't specify a security group, Amazon MSK uses the default security group associated with the VPC. If you specify security groups that were shared with you, you must ensure that you have permissions to them. Specifically, you need the `ec2:DescribeSecurityGroups` permission.", "title": "SecurityGroups", "type": "array" }, "StorageInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.StorageInfo", - "markdownDescription": "", + "markdownDescription": "Contains information about storage volumes attached to Amazon MSK broker nodes.", "title": "StorageInfo" } }, @@ -152192,12 +152190,12 @@ "additionalProperties": false, "properties": { "ClientBroker": { - "markdownDescription": "", + "markdownDescription": "Indicates the encryption setting for data in transit between clients and brokers. 
You must set it to one of the following values.\n\n- `TLS` : Indicates that client-broker communication is enabled with TLS only.\n- `TLS_PLAINTEXT` : Indicates that client-broker communication is enabled for both TLS-encrypted, as well as plaintext data.\n- `PLAINTEXT` : Indicates that client-broker communication is enabled in plaintext only.\n\nThe default value is `TLS` .", "title": "ClientBroker", "type": "string" }, "InCluster": { - "markdownDescription": "", + "markdownDescription": "When set to true, it indicates that data communication among the broker nodes of the cluster is encrypted. When set to false, the communication happens in plaintext.\n\nThe default value is true.", "title": "InCluster", "type": "boolean" } @@ -152214,7 +152212,7 @@ }, "EncryptionInTransit": { "$ref": "#/definitions/AWS::MSK::Cluster.EncryptionInTransit", - "markdownDescription": "", + "markdownDescription": "The details for encryption in transit.", "title": "EncryptionInTransit" } }, @@ -152644,7 +152642,7 @@ "additionalProperties": false, "properties": { "Description": { - "markdownDescription": "", + "markdownDescription": "The description of the configuration.", "title": "Description", "type": "string" }, @@ -152652,22 +152650,22 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "The [versions of Apache Kafka](https://docs.aws.amazon.com/msk/latest/developerguide/supported-kafka-versions.html) with which you can use this MSK configuration.\n\nWhen you update the `KafkaVersionsList` property, AWS CloudFormation recreates a new configuration with the updated property before deleting the old configuration. Such an update requires a [resource replacement](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html#update-replacement) . To successfully update `KafkaVersionsList` , you must also update the `Name` property in the same operation.\n\nIf your configuration is attached with any clusters created using the AWS Management Console or AWS CLI , you'll need to manually delete the old configuration from the console after the update completes.\n\nFor more information, see [Can\u2019t update KafkaVersionsList in MSK configuration](https://docs.aws.amazon.com/msk/latest/developerguide/troubleshooting.html#troubleshoot-kafkaversionslist-cfn-update-failure) in the *Amazon MSK Developer Guide* .", "title": "KafkaVersionsList", "type": "array" }, "LatestRevision": { "$ref": "#/definitions/AWS::MSK::Configuration.LatestRevision", - "markdownDescription": "", + "markdownDescription": "Latest revision of the MSK configuration.", "title": "LatestRevision" }, "Name": { - "markdownDescription": "", + "markdownDescription": "The name of the configuration. Configuration names are strings that match the regex \"^[0-9A-Za-z][0-9A-Za-z-]{0,}$\".", "title": "Name", "type": "string" }, "ServerProperties": { - "markdownDescription": "", + "markdownDescription": "Contents of the `server.properties` file. When using this property, you must ensure that the contents of the file are base64 encoded. 
When using the console, the SDK, or the AWS CLI , the contents of `server.properties` can be in plaintext.", "title": "ServerProperties", "type": "string" } @@ -152703,17 +152701,17 @@ "additionalProperties": false, "properties": { "CreationTime": { - "markdownDescription": "", + "markdownDescription": "The time when the configuration revision was created.", "title": "CreationTime", "type": "string" }, "Description": { - "markdownDescription": "", + "markdownDescription": "The description of the configuration revision.", "title": "Description", "type": "string" }, "Revision": { - "markdownDescription": "", + "markdownDescription": "The revision number.", "title": "Revision", "type": "number" } @@ -152756,8 +152754,6 @@ "additionalProperties": false, "properties": { "CurrentVersion": { - "markdownDescription": "The current version number of the replicator.", - "title": "CurrentVersion", "type": "string" }, "Description": { @@ -153054,17 +153050,17 @@ "properties": { "ClientAuthentication": { "$ref": "#/definitions/AWS::MSK::ServerlessCluster.ClientAuthentication", - "markdownDescription": "", + "markdownDescription": "Includes all client authentication related information.", "title": "ClientAuthentication" }, "ClusterName": { - "markdownDescription": "", + "markdownDescription": "The name of the cluster.", "title": "ClusterName", "type": "string" }, "Tags": { "additionalProperties": true, - "markdownDescription": "", + "markdownDescription": "An arbitrary set of tags (key-value pairs) for the cluster.", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -153077,7 +153073,7 @@ "items": { "$ref": "#/definitions/AWS::MSK::ServerlessCluster.VpcConfig" }, - "markdownDescription": "", + "markdownDescription": "VPC configuration information for the serverless cluster.", "title": "VpcConfigs", "type": "array" } @@ -153221,7 +153217,7 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "The list of subnets in the client VPC to connect to.", "title": "ClientSubnets", "type": "array" }, @@ -153229,13 +153225,13 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "The security groups to attach to the ENIs for the broker nodes.", "title": "SecurityGroups", "type": "array" }, "Tags": { "additionalProperties": true, - "markdownDescription": "", + "markdownDescription": "An arbitrary set of tags (key-value pairs) you specify while creating the VPC connection.", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -153245,12 +153241,12 @@ "type": "object" }, "TargetClusterArn": { - "markdownDescription": "", + "markdownDescription": "The Amazon Resource Name (ARN) of the cluster.", "title": "TargetClusterArn", "type": "string" }, "VpcId": { - "markdownDescription": "", + "markdownDescription": "The VPC ID of the remote client.", "title": "VpcId", "type": "string" } @@ -166547,7 +166543,7 @@ "type": "string" }, "ProvisionedMemory": { - "markdownDescription": "The provisioned memory-optimized Neptune Capacity Units (m-NCUs) to use for the graph.\n\nMin = 128", + "markdownDescription": "The provisioned memory-optimized Neptune Capacity Units (m-NCUs) to use for the graph.\n\nMin = 16", "title": "ProvisionedMemory", "type": "number" }, @@ -167521,7 +167517,7 @@ "items": { "$ref": "#/definitions/AWS::NetworkFirewall::RuleGroup.PortRange" }, - "markdownDescription": "The destination ports to inspect for. If not specified, this matches with any destination port. 
This setting is only used for protocols 6 (TCP) and 17 (UDP).\n\nYou can specify individual ports, for example `1994` and you can specify port ranges, for example `1990:1994` .", + "markdownDescription": "The destination port to inspect for. You can specify an individual port, for example `1994` and you can specify a port range, for example `1990:1994` . To match with any port, specify `ANY` .\n\nThis setting is only used for protocols 6 (TCP) and 17 (UDP).", "title": "DestinationPorts", "type": "array" }, @@ -167537,7 +167533,7 @@ "items": { "type": "number" }, - "markdownDescription": "The protocols to inspect for, specified using each protocol's assigned internet protocol number (IANA). If not specified, this matches with any protocol.", + "markdownDescription": "The protocols to inspect for, specified using the assigned internet protocol number (IANA) for each protocol. If not specified, this matches with any protocol.", "title": "Protocols", "type": "array" }, @@ -167545,7 +167541,7 @@ "items": { "$ref": "#/definitions/AWS::NetworkFirewall::RuleGroup.PortRange" }, - "markdownDescription": "The source ports to inspect for. If not specified, this matches with any source port. This setting is only used for protocols 6 (TCP) and 17 (UDP).\n\nYou can specify individual ports, for example `1994` and you can specify port ranges, for example `1990:1994` .", + "markdownDescription": "The source port to inspect for. You can specify an individual port, for example `1994` and you can specify a port range, for example `1990:1994` . To match with any port, specify `ANY` .\n\nIf not specified, this matches with any source port.\n\nThis setting is only used for protocols 6 (TCP) and 17 (UDP).", "title": "SourcePorts", "type": "array" }, @@ -168111,7 +168107,7 @@ "items": { "type": "number" }, - "markdownDescription": "The protocols to decrypt for inspection, specified using each protocol's assigned internet protocol number\n(IANA). Network Firewall currently supports only TCP.", + "markdownDescription": "The protocols to inspect for, specified using the assigned internet protocol number (IANA) for each protocol. If not specified, this matches with any protocol.\n\nNetwork Firewall currently supports only TCP.", "title": "Protocols", "type": "array" }, @@ -170788,7 +170784,7 @@ "items": { "type": "string" }, - "markdownDescription": "An array of strings that define which types of data that the source account shares with the monitoring account. Valid values are `AWS::CloudWatch::Metric | AWS::Logs::LogGroup | AWS::XRay::Trace | AWS::ApplicationInsights::Application | AWS::InternetMonitor::Monitor` .", + "markdownDescription": "An array of strings that define which types of data that the source account shares with the monitoring account. 
Valid values are `AWS::CloudWatch::Metric | AWS::Logs::LogGroup | AWS::XRay::Trace | AWS::ApplicationInsights::Application | AWS::InternetMonitor::Monitor | AWS::ApplicationSignals::Service | AWS::ApplicationSignals::ServiceLevelObjective` .", "title": "ResourceTypes", "type": "array" }, @@ -170841,7 +170837,7 @@ "properties": { "LogGroupConfiguration": { "$ref": "#/definitions/AWS::Oam::Link.LinkFilter", - "markdownDescription": "Use this structure to filter which log groups are to share log events from this source account to the monitoring account.", + "markdownDescription": "Use this structure to filter which log groups are to send log events from the source account to the monitoring account.", "title": "LogGroupConfiguration" }, "MetricConfiguration": { @@ -170856,7 +170852,7 @@ "additionalProperties": false, "properties": { "Filter": { - "markdownDescription": "When used in `MetricConfiguration` this field specifies which metric namespaces are to be shared with the monitoring account\n\nWhen used in `LogGroupConfiguration` this field specifies which log groups are to share their log events with the monitoring account. Use the term `LogGroupName` and one or more of the following operands.\n\nUse single quotation marks (') around log group names and metric namespaces.\n\nThe matching of log group names and metric namespaces is case sensitive. Each filter has a limit of five conditional operands. Conditional operands are `AND` and `OR` .\n\n- `=` and `!=`\n- `AND`\n- `OR`\n- `LIKE` and `NOT LIKE` . These can be used only as prefix searches. Include a `%` at the end of the string that you want to search for and include.\n- `IN` and `NOT IN` , using parentheses `( )`\n\nExamples:\n\n- `Namespace NOT LIKE 'AWS/%'` includes only namespaces that don't start with `AWS/` , such as custom namespaces.\n- `Namespace IN ('AWS/EC2', 'AWS/ELB', 'AWS/S3')` includes only the metrics in the EC2, Elastic Load Balancing , and Amazon S3 namespaces.\n- `Namespace = 'AWS/EC2' OR Namespace NOT LIKE 'AWS/%'` includes only the EC2 namespace and your custom namespaces.\n- `LogGroupName IN ('This-Log-Group', 'Other-Log-Group')` includes only the log groups with names `This-Log-Group` and `Other-Log-Group` .\n- `LogGroupName NOT IN ('Private-Log-Group', 'Private-Log-Group-2')` includes all log groups except the log groups with names `Private-Log-Group` and `Private-Log-Group-2` .\n- `LogGroupName LIKE 'aws/lambda/%' OR LogGroupName LIKE 'AWSLogs%'` includes all log groups that have names that start with `aws/lambda/` or `AWSLogs` .\n\n> If you are updating a link that uses filters, you can specify `*` as the only value for the `filter` parameter to delete the filter and share all log groups with the monitoring account.", + "markdownDescription": "", "title": "Filter", "type": "string" } @@ -182216,7 +182212,7 @@ "type": "string" }, "Timestamp": { - "markdownDescription": "The time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.", + "markdownDescription": "A [dynamic path parameter](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html) to a field in the payload containing the time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.\n\nThe value cannot be a static timestamp as the provided timestamp would be applied to all events delivered by the Pipe, regardless of when they are actually delivered.\n\nIf no dynamic path parameter is provided, the default value is the time the invocation is processed by 
the Pipe.", "title": "Timestamp", "type": "string" } @@ -207623,6 +207619,8 @@ "additionalProperties": false, "properties": { "AvailabilityStatus": { + "markdownDescription": "The availaiblity status of a visual's menu options.", + "title": "AvailabilityStatus", "type": "string" } }, @@ -224646,7 +224644,7 @@ "type": "string" }, "PerformanceInsightsRetentionPeriod": { - "markdownDescription": "The number of days to retain Performance Insights data.\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB clusters\n\nValid Values:\n\n- `7`\n- *month* * 31, where *month* is a number of months from 1-23. Examples: `93` (3 months * 31), `341` (11 months * 31), `589` (19 months * 31)\n- `731`\n\nDefault: `7` days\n\nIf you specify a retention period that isn't valid, such as `94` , Amazon RDS issues an error.", + "markdownDescription": "The number of days to retain Performance Insights data. When creating a DB cluster without enabling Performance Insights, you can't specify the parameter `PerformanceInsightsRetentionPeriod` .\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB clusters\n\nValid Values:\n\n- `7`\n- *month* * 31, where *month* is a number of months from 1-23. Examples: `93` (3 months * 31), `341` (11 months * 31), `589` (19 months * 31)\n- `731`\n\nDefault: `7` days\n\nIf you specify a retention period that isn't valid, such as `94` , Amazon RDS issues an error.", "title": "PerformanceInsightsRetentionPeriod", "type": "number" }, @@ -225122,7 +225120,7 @@ "type": "string" }, "DBSubnetGroupName": { - "markdownDescription": "A DB subnet group to associate with the DB instance. If you update this value, the new subnet group must be a subnet group in a new VPC.\n\nIf there's no DB subnet group, then the DB instance isn't a VPC DB instance.\n\nFor more information about using Amazon RDS in a VPC, see [Amazon VPC and Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html) in the *Amazon RDS User Guide* .\n\nThis setting doesn't apply to Amazon Aurora DB instances. The DB subnet group is managed by the DB cluster. If specified, the setting must match the DB cluster setting.", + "markdownDescription": "A DB subnet group to associate with the DB instance. If you update this value, the new subnet group must be a subnet group in a new VPC.\n\nIf you don't specify a DB subnet group, RDS uses the default DB subnet group if one exists. If a default DB subnet group does not exist, and you don't specify a `DBSubnetGroupName` , the DB instance fails to launch.\n\nFor more information about using Amazon RDS in a VPC, see [Amazon VPC and Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html) in the *Amazon RDS User Guide* .\n\nThis setting doesn't apply to Amazon Aurora DB instances. The DB subnet group is managed by the DB cluster. If specified, the setting must match the DB cluster setting.", "title": "DBSubnetGroupName", "type": "string" }, @@ -225283,7 +225281,7 @@ "type": "string" }, "PerformanceInsightsRetentionPeriod": { - "markdownDescription": "The number of days to retain Performance Insights data.\n\nThis setting doesn't apply to RDS Custom DB instances.\n\nValid Values:\n\n- `7`\n- *month* * 31, where *month* is a number of months from 1-23. 
Examples: `93` (3 months * 31), `341` (11 months * 31), `589` (19 months * 31)\n- `731`\n\nDefault: `7` days\n\nIf you specify a retention period that isn't valid, such as `94` , Amazon RDS returns an error.", + "markdownDescription": "The number of days to retain Performance Insights data. When creating a DB instance without enabling Performance Insights, you can't specify the parameter `PerformanceInsightsRetentionPeriod` .\n\nThis setting doesn't apply to RDS Custom DB instances.\n\nValid Values:\n\n- `7`\n- *month* * 31, where *month* is a number of months from 1-23. Examples: `93` (3 months * 31), `341` (11 months * 31), `589` (19 months * 31)\n- `731`\n\nDefault: `7` days\n\nIf you specify a retention period that isn't valid, such as `94` , Amazon RDS returns an error.", "title": "PerformanceInsightsRetentionPeriod", "type": "number" }, @@ -226001,7 +225999,7 @@ "type": "number" }, "InitQuery": { - "markdownDescription": "One or more SQL statements for the proxy to run when opening each new database connection. Typically used with `SET` statements to make sure that each connection has identical settings such as time zone and character set. For multiple statements, use semicolons as the separator. You can also include multiple variables in a single `SET` statement, such as `SET x=1, y=2` .\n\nDefault: no initialization query", + "markdownDescription": "Add an initialization query, or modify the current one. You can specify one or more SQL statements for the proxy to run when opening each new database connection. The setting is typically used with `SET` statements to make sure that each connection has identical settings. Make sure that the query you add is valid. To include multiple variables in a single `SET` statement, use comma separators.\n\nFor example: `SET variable1=value1, variable2=value2`\n\nFor multiple statements, use semicolons as the separator.\n\nDefault: no initialization query", "title": "InitQuery", "type": "string" }, @@ -245731,7 +245729,7 @@ "items": { "$ref": "#/definitions/AWS::SageMaker::Domain.CustomImage" }, - "markdownDescription": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.", + "markdownDescription": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.\n\nThe maximum number of custom images are as follows.\n\n- On a domain level: 200\n- On a space level: 5\n- On a user profile level: 5", "title": "CustomImages", "type": "array" }, @@ -245985,7 +245983,7 @@ "type": "string" }, "EndpointName": { - "markdownDescription": "The name of the endpoint.The name must be unique within an AWS Region in your AWS account. The name is case-insensitive in `CreateEndpoint` , but the case is preserved and must be matched in [](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_runtime_InvokeEndpoint.html) .", + "markdownDescription": "The name of the endpoint. The name must be unique within an AWS Region in your AWS account. 
The name is case-insensitive in `CreateEndpoint` , but the case is preserved and must be matched in [](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_runtime_InvokeEndpoint.html) .", "title": "EndpointName", "type": "string" }, @@ -253033,7 +253031,7 @@ "items": { "$ref": "#/definitions/AWS::SageMaker::Space.CustomImage" }, - "markdownDescription": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.", + "markdownDescription": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.\n\nThe maximum number of custom images are as follows.\n\n- On a domain level: 200\n- On a space level: 5\n- On a user profile level: 5", "title": "CustomImages", "type": "array" }, @@ -253475,7 +253473,7 @@ "items": { "$ref": "#/definitions/AWS::SageMaker::UserProfile.CustomImage" }, - "markdownDescription": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.", + "markdownDescription": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.\n\nThe maximum number of custom images are as follows.\n\n- On a domain level: 200\n- On a space level: 5\n- On a user profile level: 5", "title": "CustomImages", "type": "array" }, @@ -262532,7 +262530,7 @@ }, "MagneticStoreWriteProperties": { "$ref": "#/definitions/AWS::Timestream::Table.MagneticStoreWriteProperties", - "markdownDescription": "Contains properties to set on the table when enabling magnetic store writes.\n\nThis object has the following attributes:\n\n- *EnableMagneticStoreWrites* : A `boolean` flag to enable magnetic store writes.\n- *MagneticStoreRejectedDataLocation* : The location to write error reports for records rejected, asynchronously, during magnetic store writes. Only `S3Configuration` objects are allowed. The `S3Configuration` object has the following attributes:\n\n- *BucketName* : The name of the S3 bucket.\n- *EncryptionOption* : The encryption option for the S3 location. Valid values are S3 server-side encryption with an S3 managed key ( `SSE_S3` ) or AWS managed key ( `SSE_KMS` ).\n- *KmsKeyId* : The AWS KMS key ID to use when encrypting with an AWS managed key.\n- *ObjectKeyPrefix* : The prefix to use option for the objects stored in S3.\n\nBoth `BucketName` and `EncryptionOption` are *required* when `S3Configuration` is specified. If you specify `SSE_KMS` as your `EncryptionOption` then `KmsKeyId` is *required* .\n\n`EnableMagneticStoreWrites` attribute is *required* when `MagneticStoreWriteProperties` is specified. 
`MagneticStoreRejectedDataLocation` attribute is *required* when `EnableMagneticStoreWrites` is set to `true` .\n\nSee the following examples:\n\n*JSON*\n\n```json\n{ \"Type\" : AWS::Timestream::Table\", \"Properties\":{ \"DatabaseName\":\"TestDatabase\", \"TableName\":\"TestTable\", \"MagneticStoreWriteProperties\":{ \"EnableMagneticStoreWrites\":true, \"MagneticStoreRejectedDataLocation\":{ \"S3Configuration\":{ \"BucketName\":\"testbucket\", \"EncryptionOption\":\"SSE_KMS\", \"KmsKeyId\":\"1234abcd-12ab-34cd-56ef-1234567890ab\", \"ObjectKeyPrefix\":\"prefix\" } } } }\n}\n```\n\n*YAML*\n\n```\nType: AWS::Timestream::Table\nDependsOn: TestDatabase\nProperties: TableName: \"TestTable\" DatabaseName: \"TestDatabase\" MagneticStoreWriteProperties: EnableMagneticStoreWrites: true MagneticStoreRejectedDataLocation: S3Configuration: BucketName: \"testbucket\" EncryptionOption: \"SSE_KMS\" KmsKeyId: \"1234abcd-12ab-34cd-56ef-1234567890ab\" ObjectKeyPrefix: \"prefix\"\n```", + "markdownDescription": "Contains properties to set on the table when enabling magnetic store writes.\n\nThis object has the following attributes:\n\n- *EnableMagneticStoreWrites* : A `boolean` flag to enable magnetic store writes.\n- *MagneticStoreRejectedDataLocation* : The location to write error reports for records rejected, asynchronously, during magnetic store writes. Only `S3Configuration` objects are allowed. The `S3Configuration` object has the following attributes:\n\n- *BucketName* : The name of the S3 bucket.\n- *EncryptionOption* : The encryption option for the S3 location. Valid values are S3 server-side encryption with an S3 managed key ( `SSE_S3` ) or AWS managed key ( `SSE_KMS` ).\n- *KmsKeyId* : The AWS KMS key ID to use when encrypting with an AWS managed key.\n- *ObjectKeyPrefix* : The prefix to use option for the objects stored in S3.\n\nBoth `BucketName` and `EncryptionOption` are *required* when `S3Configuration` is specified. If you specify `SSE_KMS` as your `EncryptionOption` then `KmsKeyId` is *required* .\n\n`EnableMagneticStoreWrites` attribute is *required* when `MagneticStoreWriteProperties` is specified. 
`MagneticStoreRejectedDataLocation` attribute is *required* when `EnableMagneticStoreWrites` is set to `true` .\n\nSee the following examples:\n\n*JSON*\n\n```json\n{ \"Type\" : AWS::Timestream::Table\", \"Properties\":{ \"DatabaseName\":\"TestDatabase\", \"TableName\":\"TestTable\", \"MagneticStoreWriteProperties\":{ \"EnableMagneticStoreWrites\":true, \"MagneticStoreRejectedDataLocation\":{ \"S3Configuration\":{ \"BucketName\":\" amzn-s3-demo-bucket \", \"EncryptionOption\":\"SSE_KMS\", \"KmsKeyId\":\"1234abcd-12ab-34cd-56ef-1234567890ab\", \"ObjectKeyPrefix\":\"prefix\" } } } }\n}\n```\n\n*YAML*\n\n```\nType: AWS::Timestream::Table\nDependsOn: TestDatabase\nProperties: TableName: \"TestTable\" DatabaseName: \"TestDatabase\" MagneticStoreWriteProperties: EnableMagneticStoreWrites: true MagneticStoreRejectedDataLocation: S3Configuration: BucketName: \" amzn-s3-demo-bucket \" EncryptionOption: \"SSE_KMS\" KmsKeyId: \"1234abcd-12ab-34cd-56ef-1234567890ab\" ObjectKeyPrefix: \"prefix\"\n```", "title": "MagneticStoreWriteProperties" }, "RetentionProperties": { diff --git a/schema_source/cloudformation-docs.json b/schema_source/cloudformation-docs.json index e7e3fdde1..18914382a 100644 --- a/schema_source/cloudformation-docs.json +++ b/schema_source/cloudformation-docs.json @@ -374,7 +374,7 @@ "AWS::AmazonMQ::Broker User": { "ConsoleAccess": "Enables access to the ActiveMQ web console for the ActiveMQ user. Does not apply to RabbitMQ brokers.", "Groups": "The list of groups (20 maximum) to which the ActiveMQ user belongs. This value can contain only alphanumeric characters, dashes, periods, underscores, and tildes (- . _ ~). This value must be 2-100 characters long. Does not apply to RabbitMQ brokers.", - "JolokiaApiAccess": "", + "JolokiaApiAccess": "Turn on Jolokia access for your ActiveMQ broker user (Does not apply to RabbitMQ brokers).", "Password": "The password of the user. This value must be at least 12 characters long, must contain at least 4 unique characters, and must not contain commas, colons, or equal signs (,:=).", "ReplicationUser": "Defines if this user is intended for CRDR replication purposes.", "Username": "The username of the broker user. For Amazon MQ for ActiveMQ brokers, this value can contain only alphanumeric characters, dashes, periods, underscores, and tildes (- . _ ~). For Amazon MQ for RabbitMQ brokers, this value can contain only alphanumeric characters, dashes, periods, underscores (- . _). This value must not contain a tilde (~) character. Amazon MQ prohibts using guest as a valid usename. This value must be 2-100 characters long.\n\n> Do not add personally identifiable information (PII) or other confidential or sensitive information in broker usernames. Broker usernames are accessible to other AWS services, including CloudWatch Logs . Broker usernames are not intended to be used for private or sensitive data." @@ -1343,7 +1343,7 @@ "KmsKeyIdentifier": "The AWS Key Management Service key identifier (key ID, key alias, or key ARN) provided when the resource was created or updated.", "LocationUri": "A URI to locate the configuration. 
You can specify the following:\n\n- For the AWS AppConfig hosted configuration store and for feature flags, specify `hosted` .\n- For an AWS Systems Manager Parameter Store parameter, specify either the parameter name in the format `ssm-parameter://` or the ARN.\n- For an AWS CodePipeline pipeline, specify the URI in the following format: `codepipeline` ://.\n- For an AWS Secrets Manager secret, specify the URI in the following format: `secretsmanager` ://.\n- For an Amazon S3 object, specify the URI in the following format: `s3:///` . Here is an example: `s3://amzn-s3-demo-bucket/my-app/us-east-1/my-config.json`\n- For an SSM document, specify either the document name in the format `ssm-document://` or the Amazon Resource Name (ARN).", "Name": "A name for the configuration profile.", - "RetrievalRoleArn": "The ARN of an IAM role with permission to access the configuration at the specified `LocationUri` .\n\n> A retrieval role ARN is not required for configurations stored in the AWS AppConfig hosted configuration store. It is required for all other sources that store your configuration.", + "RetrievalRoleArn": "The ARN of an IAM role with permission to access the configuration at the specified `LocationUri` .\n\n> A retrieval role ARN is not required for configurations stored in AWS CodePipeline or the AWS AppConfig hosted configuration store. It is required for all other sources that store your configuration.", "Tags": "Metadata to assign to the configuration profile. Tags help organize and categorize your AWS AppConfig resources. Each tag consists of a key and an optional value, both of which you define.", "Type": "The type of configurations contained in the profile. AWS AppConfig supports `feature flags` and `freeform` configurations. We recommend you create feature flag configurations to enable or disable new features and freeform configurations to distribute configurations to an application. When calling this API, enter one of the following values for `Type` :\n\n`AWS.AppConfig.FeatureFlags`\n\n`AWS.Freeform`", "Validators": "A list of methods for validating the configuration." @@ -1387,8 +1387,8 @@ "Tags": "Assigns metadata to an AWS AppConfig resource. Tags help organize and categorize your AWS AppConfig resources. Each tag consists of a key and an optional value, both of which you define. You can specify a maximum of 50 tags for a resource." }, "AWS::AppConfig::DeploymentStrategy Tag": { - "Key": "", - "Value": "" + "Key": "The tag key.", + "Value": "An optional tag value." }, "AWS::AppConfig::Environment": { "ApplicationId": "The application ID.", @@ -3221,7 +3221,12 @@ "AWS::AppSync::DomainName": { "CertificateArn": "The Amazon Resource Name (ARN) of the certificate. This will be an AWS Certificate Manager certificate.", "Description": "The decription for your domain name.", - "DomainName": "The domain name." + "DomainName": "The domain name.", + "Tags": "A set of tags (key-value pairs) for this domain name." + }, + "AWS::AppSync::DomainName Tag": { + "Key": "Describes the key of the tag.", + "Value": "Describes the value of the tag." }, "AWS::AppSync::DomainNameApiAssociation": { "ApiId": "The API ID.", @@ -3743,9 +3748,11 @@ "LogGroupName": "The CloudWatch log group name to be associated with the monitored log.", "PatternSet": "The log pattern set." 
}, + "AWS::ApplicationSignals::Discovery": {}, "AWS::ApplicationSignals::ServiceLevelObjective": { "BurnRateConfigurations": "Each object in this array defines the length of the look-back window used to calculate one burn rate metric for this SLO. The burn rate measures how fast the service is consuming the error budget, relative to the attainment goal of the SLO.", "Description": "An optional description for this SLO.", + "ExclusionWindows": "", "Goal": "This structure contains the attributes that determine the goal of an SLO. This includes the time period for evaluation and the attainment threshold.", "Name": "A name for this SLO.", "RequestBasedSli": "A structure containing information about the performance metric that this SLO monitors, if this is a request-based SLO.", @@ -3764,6 +3771,12 @@ "Name": "The name of the dimension. Dimension names must contain only ASCII characters, must include at least one non-whitespace character, and cannot start with a colon ( `:` ). ASCII control characters are not supported as part of dimension names.", "Value": "The value of the dimension. Dimension values must contain only ASCII characters and must include at least one non-whitespace character. ASCII control characters are not supported as part of dimension values." }, + "AWS::ApplicationSignals::ServiceLevelObjective ExclusionWindow": { + "Reason": "A description explaining why this time period should be excluded from SLO calculations.", + "RecurrenceRule": "The recurrence rule for the SLO time window exclusion. Supports both cron and rate expressions.", + "StartTime": "The start of the SLO time window exclusion. Defaults to current time if not specified.", + "Window": "The SLO time window exclusion ." + }, "AWS::ApplicationSignals::ServiceLevelObjective Goal": { "AttainmentGoal": "The threshold that determines if the goal is being met.\n\nIf this is a period-based SLO, the attainment goal is the percentage of good periods that meet the threshold requirements to the total periods within the interval. For example, an attainment goal of 99.9% means that within your interval, you are targeting 99.9% of the periods to be in healthy state.\n\nIf this is a request-based SLO, the attainment goal is the percentage of requests that must be successful to meet the attainment goal.\n\nIf you omit this parameter, 99 is used to represent 99% as the attainment goal.", "Interval": "The time period used to evaluate the SLO. It can be either a calendar interval or rolling interval.\n\nIf you omit this parameter, a rolling interval of 7 days is used.", @@ -3795,13 +3808,16 @@ "BadCountMetric": "If you want to count \"bad requests\" to determine the percentage of successful requests for this request-based SLO, specify the metric to use as \"bad requests\" in this structure.", "GoodCountMetric": "If you want to count \"good requests\" to determine the percentage of successful requests for this request-based SLO, specify the metric to use as \"good requests\" in this structure." }, + "AWS::ApplicationSignals::ServiceLevelObjective RecurrenceRule": { + "Expression": "A cron or rate expression that specifies the schedule for the exclusion window." 
+ }, "AWS::ApplicationSignals::ServiceLevelObjective RequestBasedSli": { "ComparisonOperator": "The arithmetic operation used when comparing the specified metric to the threshold.", "MetricThreshold": "This value is the threshold that the observed metric values of the SLI metric are compared to.", "RequestBasedSliMetric": "A structure that contains information about the metric that the SLO monitors." }, "AWS::ApplicationSignals::ServiceLevelObjective RequestBasedSliMetric": { - "KeyAttributes": "This is a string-to-string map that contains information about the type of object that this SLO is related to. It can include the following fields.\n\n- `Type` designates the type of object that this SLO is related to.\n- `ResourceType` specifies the type of the resource. This field is used only when the value of the `Type` field is `Resource` or `AWS::Resource` .\n- `Name` specifies the name of the object. This is used only if the value of the `Type` field is `Service` , `RemoteService` , or `AWS::Service` .\n- `Identifier` identifies the resource objects of this resource. This is used only if the value of the `Type` field is `Resource` or `AWS::Resource` .\n- `Environment` specifies the location where this object is hosted, or what it belongs to.", + "KeyAttributes": "This is a string-to-string map that contains information about the type of object that this SLO is related to. It can include the following fields.\n\n- `Type` designates the type of object that this SLO is related to.\n- `ResourceType` specifies the type of the resource. This field is used only when the value of the `Type` field is `Resource` or `AWS::Resource` .\n- `Name` specifies the name of the object. This is used only if the value of the `Type` field is `Service` , `RemoteService` , or `AWS::Service` .\n- `Identifier` identifies the resource objects of this resource. This is used only if the value of the `Type` field is `Resource` or `AWS::Resource` .\n- `Environment` specifies the location where this object is hosted, or what it belongs to.\n- `AwsAccountId` allows you to create an SLO for an object that exists in another account.", "MetricType": "If the SLO monitors either the `LATENCY` or `AVAILABILITY` metric that Application Signals collects, this field displays which of those metrics is used.", "MonitoredRequestCountMetric": "Use this structure to define the metric that you want to use as the \"good request\" or \"bad request\" value for a request-based SLO. This value observed for the metric defined in `TotalRequestCountMetric` will be divided by the number found for `MonitoredRequestCountMetric` to determine the percentage of successful requests that this SLO tracks.", "OperationName": "If the SLO monitors a specific operation of the service, this field displays that operation name.", @@ -3828,6 +3844,10 @@ "Key": "A string that you can use to assign a value. The combination of tag keys and values can help you organize and categorize your resources.", "Value": "The value for the specified tag key." }, + "AWS::ApplicationSignals::ServiceLevelObjective Window": { + "Duration": "The number of time units for the exclusion window length.", + "DurationUnit": "The unit of time for the exclusion window duration. Valid values: MINUTE, HOUR, DAY, MONTH." + }, "AWS::Athena::CapacityReservation": { "CapacityAssignmentConfiguration": "Assigns Athena workgroups (and hence their queries) to capacity reservations. 
A capacity reservation can have only one capacity assignment configuration, but the capacity assignment configuration can be made up of multiple individual assignments. Each assignment specifies how Athena queries can consume capacity from the capacity reservation that their workgroup is mapped to.", "Name": "The name of the capacity reservation.", @@ -3845,9 +3865,12 @@ "Value": "A tag value. The tag value length is from 0 to 256 Unicode characters in UTF-8. You can use letters and numbers representable in UTF-8, and the following characters: + - = . _ : / @. Tag values are case-sensitive." }, "AWS::Athena::DataCatalog": { + "ConnectionType": "The type of connection for a `FEDERATED` data catalog (for example, `REDSHIFT` , `MYSQL` , or `SQLSERVER` ). For information about individual connectors, see [Available data source connectors](https://docs.aws.amazon.com/athena/latest/ug/connectors-available.html) .", "Description": "A description of the data catalog.", + "Error": "Text of the error that occurred during data catalog creation or deletion.", "Name": "The name of the data catalog. The catalog name must be unique for the AWS account and can use a maximum of 128 alphanumeric, underscore, at sign, or hyphen characters.", "Parameters": "Specifies the Lambda function or functions to use for the data catalog. The mapping used depends on the catalog type.\n\n- The `HIVE` data catalog type uses the following syntax. The `metadata-function` parameter is required. `The sdk-version` parameter is optional and defaults to the currently supported version.\n\n`metadata-function= *lambda_arn* , sdk-version= *version_number*`\n- The `LAMBDA` data catalog type uses one of the following sets of required parameters, but not both.\n\n- When one Lambda function processes metadata and another Lambda function reads data, the following syntax is used. Both parameters are required.\n\n`metadata-function= *lambda_arn* , record-function= *lambda_arn*`\n- A composite Lambda function that processes both metadata and data uses the following syntax.\n\n`function= *lambda_arn*`\n- The `GLUE` type takes a catalog ID parameter and is required. The `*catalog_id*` is the account ID of the AWS account to which the Glue catalog belongs.\n\n`catalog-id= *catalog_id*`\n\n- The `GLUE` data catalog type also applies to the default `AwsDataCatalog` that already exists in your account, of which you can have only one and cannot modify.", + "Status": "The status of the creation or deletion of the data catalog.\n\n- The `LAMBDA` , `GLUE` , and `HIVE` data catalog types are created synchronously. 
Their status is either `CREATE_COMPLETE` or `CREATE_FAILED` .\n- The `FEDERATED` data catalog type is created asynchronously.\n\nData catalog creation status:\n\n- `CREATE_IN_PROGRESS` : Federated data catalog creation in progress.\n- `CREATE_COMPLETE` : Data catalog creation complete.\n- `CREATE_FAILED` : Data catalog could not be created.\n- `CREATE_FAILED_CLEANUP_IN_PROGRESS` : Federated data catalog creation failed and is being removed.\n- `CREATE_FAILED_CLEANUP_COMPLETE` : Federated data catalog creation failed and was removed.\n- `CREATE_FAILED_CLEANUP_FAILED` : Federated data catalog creation failed but could not be removed.\n\nData catalog deletion status:\n\n- `DELETE_IN_PROGRESS` : Federated data catalog deletion in progress.\n- `DELETE_COMPLETE` : Federated data catalog deleted.\n- `DELETE_FAILED` : Federated data catalog could not be deleted.", "Tags": "The tags (key-value pairs) to associate with this resource.", "Type": "The type of data catalog: `LAMBDA` for a federated catalog, `GLUE` for AWS Glue Catalog, or `HIVE` for an external hive metastore." }, @@ -4706,7 +4729,7 @@ "AWS::Backup::RestoreTestingPlan": { "RecoveryPointSelection": "The specified criteria to assign a set of resources, such as recovery point types or backup vaults.", "RestoreTestingPlanName": "The RestoreTestingPlanName is a unique string that is the name of the restore testing plan. This cannot be changed after creation, and it must consist of only alphanumeric characters and underscores.", - "ScheduleExpression": "A CRON expression in specified timezone when a restore testing plan is executed.", + "ScheduleExpression": "A CRON expression in specified timezone when a restore testing plan is executed. When no CRON expression is provided, AWS Backup will use the default expression `cron(0 5 ? * * *)` .", "ScheduleExpressionTimezone": "Optional. This is the timezone in which the schedule expression is set. By default, ScheduleExpressions are in UTC. You can modify this to a specified timezone.", "ScheduleStatus": "This parameter is not currently supported.", "StartWindowHours": "Defaults to 24 hours.\n\nA value in hours after a restore test is scheduled before a job will be canceled if it doesn't start successfully. This value is optional. If this value is included, this parameter has a maximum value of 168 hours (one week).", @@ -4759,7 +4782,7 @@ "ComputeResources": "The ComputeResources property type specifies details of the compute resources managed by the compute environment. This parameter is required for managed compute environments. For more information, see [Compute Environments](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) in the ** .", "Context": "Reserved.", "EksConfiguration": "The details for the Amazon EKS cluster that supports the compute environment.", - "ReplaceComputeEnvironment": "Specifies whether the compute environment is replaced if an update is made that requires replacing the instances in the compute environment. The default value is `true` . To enable more properties to be updated, set this property to `false` . When changing the value of this property to `false` , do not change any other properties at the same time. If other properties are changed at the same time, and the change needs to be rolled back but it can't, it's possible for the stack to go into the `UPDATE_ROLLBACK_FAILED` state. You can't update a stack that is in the `UPDATE_ROLLBACK_FAILED` state. 
However, if you can continue to roll it back, you can return the stack to its original settings and then try to update it again. For more information, see [Continue rolling back an update](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html) in the *AWS CloudFormation User Guide* .\n\nThe properties that can't be changed without replacing the compute environment are in the [`ComputeResources`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html) property type: [`AllocationStrategy`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-allocationstrategy) , [`BidPercentage`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-bidpercentage) , [`Ec2Configuration`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2configuration) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`ImageId`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-imageid) , [`InstanceRole`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancerole) , [`InstanceTypes`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancetypes) , [`LaunchTemplate`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-launchtemplate) , [`MaxvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-maxvcpus) , [`MinvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-minvcpus) , [`PlacementGroup`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-placementgroup) , [`SecurityGroupIds`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-securitygroupids) , [`Subnets`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-subnets) , 
[Tags](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-tags) , [`Type`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-type) , and [`UpdateToLatestImageVersion`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-updatetolatestimageversion) .", + "ReplaceComputeEnvironment": "Specifies whether the compute environment is replaced if an update is made that requires replacing the instances in the compute environment. The default value is `true` . To enable more properties to be updated, set this property to `false` . When changing the value of this property to `false` , do not change any other properties at the same time. If other properties are changed at the same time, and the change needs to be rolled back but it can't, it's possible for the stack to go into the `UPDATE_ROLLBACK_FAILED` state. You can't update a stack that is in the `UPDATE_ROLLBACK_FAILED` state. However, if you can continue to roll it back, you can return the stack to its original settings and then try to update it again. For more information, see [Continue rolling back an update](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html) in the *AWS CloudFormation User Guide* .\n\n`ReplaceComputeEnvironment` is not applicable for Fargate compute environments. Fargate compute environments are always updated without interruption.\n\nThe properties that can't be changed without replacing the compute environment are in the [`ComputeResources`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html) property type: [`AllocationStrategy`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-allocationstrategy) , [`BidPercentage`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-bidpercentage) , [`Ec2Configuration`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2configuration) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`ImageId`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-imageid) , [`InstanceRole`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancerole) , 
[`InstanceTypes`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancetypes) , [`LaunchTemplate`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-launchtemplate) , [`MaxvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-maxvcpus) , [`MinvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-minvcpus) , [`PlacementGroup`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-placementgroup) , [`SecurityGroupIds`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-securitygroupids) , [`Subnets`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-subnets) , [Tags](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-tags) , [`Type`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-type) , and [`UpdateToLatestImageVersion`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-updatetolatestimageversion) .", "ServiceRole": "The full Amazon Resource Name (ARN) of the IAM role that allows AWS Batch to make calls to other AWS services on your behalf. For more information, see [AWS Batch service IAM role](https://docs.aws.amazon.com/batch/latest/userguide/service_IAM_role.html) in the *AWS Batch User Guide* .\n\n> If your account already created the AWS Batch service-linked role, that role is used by default for your compute environment unless you specify a different role here. If the AWS Batch service-linked role doesn't exist in your account, and no role is specified here, the service attempts to create the AWS Batch service-linked role in your account. \n\nIf your specified role has a path other than `/` , then you must specify either the full role ARN (recommended) or prefix the role name with the path. For example, if a role with the name `bar` has a path of `/foo/` , specify `/foo/bar` as the role name. For more information, see [Friendly names and paths](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-friendly-names) in the *IAM User Guide* .\n\n> Depending on how you created your AWS Batch service role, its ARN might contain the `service-role` path prefix. When you only specify the name of the service role, AWS Batch assumes that your ARN doesn't use the `service-role` path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.", "State": "The state of the compute environment. 
If the state is `ENABLED` , then the compute environment accepts jobs from a queue and can scale out automatically based on queues.\n\nIf the state is `ENABLED` , then the AWS Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand.\n\nIf the state is `DISABLED` , then the AWS Batch scheduler doesn't attempt to place jobs within the environment. Jobs in a `STARTING` or `RUNNING` state continue to progress normally. Managed compute environments in the `DISABLED` state don't scale out.\n\n> Compute environments in a `DISABLED` state may continue to incur billing charges. To prevent additional charges, turn off and then delete the compute environment. For more information, see [State](https://docs.aws.amazon.com/batch/latest/userguide/compute_environment_parameters.html#compute_environment_state) in the *AWS Batch User Guide* . \n\nWhen an instance is idle, the instance scales down to the `minvCpus` value. However, the instance size doesn't change. For example, consider a `c5.8xlarge` instance with a `minvCpus` value of `4` and a `desiredvCpus` value of `36` . This instance doesn't scale down to a `c5.large` instance.", "Tags": "The tags applied to the compute environment.", @@ -4812,7 +4835,14 @@ "JobExecutionTimeoutMinutes": "Specifies the job timeout (in minutes) when the compute environment infrastructure is updated. The default value is 30.", "TerminateJobsOnUpdate": "Specifies whether jobs are automatically terminated when the compute environment infrastructure is updated. The default value is `false` ." }, + "AWS::Batch::ConsumableResource": { + "ConsumableResourceName": "The name of the consumable resource.", + "ResourceType": "Indicates whether the resource is available to be re-used after a job completes. Can be one of:\n\n- `REPLENISHABLE`\n- `NON_REPLENISHABLE`", + "Tags": "The tags that you apply to the consumable resource to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see [Tagging your AWS Batch resources](https://docs.aws.amazon.com/batch/latest/userguide/using-tags.html) .", + "TotalQuantity": "The total amount of the consumable resource that is available." + }, "AWS::Batch::JobDefinition": { + "ConsumableResourceProperties": "Contains a list of consumable resources required by the job.", "ContainerProperties": "An object with properties specific to Amazon ECS-based jobs. When `containerProperties` is used in the job definition, it can't be used in addition to `eksProperties` , `ecsProperties` , or `nodeProperties` .", "EcsProperties": "An object that contains the properties for the Amazon ECS resources of a job. When `ecsProperties` is used in the job definition, it can't be used in addition to `containerProperties` , `eksProperties` , or `nodeProperties` .", "EksProperties": "An object with properties that are specific to Amazon EKS-based jobs. When `eksProperties` is used in the job definition, it can't be used in addition to `containerProperties` , `ecsProperties` , or `nodeProperties` .", @@ -4827,6 +4857,13 @@ "Timeout": "The timeout time for jobs that are submitted with this job definition. After the amount of time you specify passes, AWS Batch terminates your jobs if they aren't finished.", "Type": "The type of job definition.
For more information about multi-node parallel jobs, see [Creating a multi-node parallel job definition](https://docs.aws.amazon.com/batch/latest/userguide/multi-node-job-def.html) in the *AWS Batch User Guide* .\n\n- If the value is `container` , then one of the following is required: `containerProperties` , `ecsProperties` , or `eksProperties` .\n- If the value is `multinode` , then `nodeProperties` is required.\n\n> If the job is run on Fargate resources, then `multinode` isn't supported." }, + "AWS::Batch::JobDefinition ConsumableResourceProperties": { + "ConsumableResourceList": "The list of consumable resources required by a job." + }, + "AWS::Batch::JobDefinition ConsumableResourceRequirement": { + "ConsumableResource": "The name or ARN of the consumable resource.", + "Quantity": "The quantity of the consumable resource that is needed." + }, "AWS::Batch::JobDefinition ContainerProperties": { "Command": "The command that's passed to the container. This parameter maps to `Cmd` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `COMMAND` parameter to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) . For more information, see [https://docs.docker.com/engine/reference/builder/#cmd](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/builder/#cmd) .", "Environment": "The environment variables to pass to a container. This parameter maps to `Env` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `--env` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) .\n\n> We don't recommend using plaintext environment variables for sensitive information, such as credential data. > Environment variables cannot start with \" `AWS_BATCH` \". This naming convention is reserved for variables that AWS Batch sets.", @@ -5042,6 +5079,7 @@ "NumNodes": "The number of nodes that are associated with a multi-node parallel job." }, "AWS::Batch::JobDefinition NodeRangeProperty": { + "ConsumableResourceProperties": "Contains a list of consumable resources required by a job.", "Container": "The container details for the node range.", "EcsProperties": "This is an object that represents the properties of the node range for a multi-node parallel job.", "EksProperties": "This is an object that represents the properties of the node range for a multi-node parallel job.", @@ -5228,6 +5266,7 @@ "Type": "The data type of the parameter." }, "AWS::Bedrock::Agent PromptConfiguration": { + "AdditionalModelRequestFields": "If the Converse or ConverseStream operations support the model, `additionalModelRequestFields` contains additional inference parameters, beyond the base set of inference parameters in the `inferenceConfiguration` field.\n\nFor more information, see [Inference request parameters and response fields for foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html) .", "BasePromptTemplate": "Defines the prompt template with which to replace the default prompt template. You can use placeholder variables in the base prompt template to customize the prompt. 
For more information, see [Prompt template placeholder variables](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-placeholders.html) . For more information, see [Configure the prompt templates](https://docs.aws.amazon.com/bedrock/latest/userguide/advanced-prompts-configure.html) .", "FoundationModel": "The agent's foundation model.", "InferenceConfiguration": "Contains inference parameters to use when the agent invokes a foundation model in the part of the agent sequence defined by the `promptType` . For more information, see [Inference parameters for foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html) .", @@ -5275,8 +5314,138 @@ "CopyFrom": "The ARN of the model or system-defined inference profile that is the source for the inference profile." }, "AWS::Bedrock::ApplicationInferenceProfile Tag": { - "Key": "The tag's key.", - "Value": "The tag's value." + "Key": "The key associated with a tag.", + "Value": "The value associated with a tag." + }, + "AWS::Bedrock::Blueprint": { + "BlueprintName": "The blueprint's name.", + "KmsEncryptionContext": "Name-value pairs to include as an encryption context.", + "KmsKeyId": "The AWS KMS key to use for encryption.", + "Schema": "The blueprint's schema.", + "Tags": "", + "Type": "The blueprint's type." + }, + "AWS::Bedrock::Blueprint Tag": { + "Key": "The key associated with a tag.", + "Value": "The value associated with a tag." + }, + "AWS::Bedrock::DataAutomationProject": { + "CustomOutputConfiguration": "Blueprints to apply to objects processed by the project.", + "KmsEncryptionContext": "The AWS KMS encryption context to use for encryption.", + "KmsKeyId": "The AWS KMS key to use for encryption.", + "OverrideConfiguration": "Additional settings for the project.", + "ProjectDescription": "The project's description.", + "ProjectName": "The project's name.", + "StandardOutputConfiguration": "The project's standard output configuration.", + "Tags": "" + }, + "AWS::Bedrock::DataAutomationProject AudioExtractionCategory": { + "State": "Whether generating categorical data from audio is enabled.", + "Types": "The types of data to generate." + }, + "AWS::Bedrock::DataAutomationProject AudioStandardExtraction": { + "Category": "Settings for generating data from audio." + }, + "AWS::Bedrock::DataAutomationProject AudioStandardGenerativeField": { + "State": "Whether generating descriptions is enabled for audio.", + "Types": "The types of description to generate." + }, + "AWS::Bedrock::DataAutomationProject AudioStandardOutputConfiguration": { + "Extraction": "Settings for populating data fields that describe the audio.", + "GenerativeField": "Whether to generate descriptions of the data." + }, + "AWS::Bedrock::DataAutomationProject BlueprintItem": { + "BlueprintArn": "The blueprint's ARN.", + "BlueprintStage": "The blueprint's stage.", + "BlueprintVersion": "The blueprint's version." + }, + "AWS::Bedrock::DataAutomationProject CustomOutputConfiguration": { + "Blueprints": "A list of blueprints." + }, + "AWS::Bedrock::DataAutomationProject DocumentBoundingBox": { + "State": "Whether bounding boxes are enabled for documents." + }, + "AWS::Bedrock::DataAutomationProject DocumentExtractionGranularity": { + "Types": "Granularity settings for documents." + }, + "AWS::Bedrock::DataAutomationProject DocumentOutputAdditionalFileFormat": { + "State": "Whether additional file formats are enabled for a project." 
+ }, + "AWS::Bedrock::DataAutomationProject DocumentOutputFormat": { + "AdditionalFileFormat": "Output settings for additional file formats.", + "TextFormat": "An output text format." + }, + "AWS::Bedrock::DataAutomationProject DocumentOutputTextFormat": { + "Types": "The types of output text to generate." + }, + "AWS::Bedrock::DataAutomationProject DocumentOverrideConfiguration": { + "Splitter": "Whether document splitter is enabled for a project." + }, + "AWS::Bedrock::DataAutomationProject DocumentStandardExtraction": { + "BoundingBox": "Whether to generate bounding boxes.", + "Granularity": "Which granularities to generate data for." + }, + "AWS::Bedrock::DataAutomationProject DocumentStandardGenerativeField": { + "State": "Whether generating descriptions is enabled for documents." + }, + "AWS::Bedrock::DataAutomationProject DocumentStandardOutputConfiguration": { + "Extraction": "Settings for populating data fields that describe the document.", + "GenerativeField": "Whether to generate descriptions.", + "OutputFormat": "The output format to generate." + }, + "AWS::Bedrock::DataAutomationProject ImageBoundingBox": { + "State": "Bounding box settings for a project." + }, + "AWS::Bedrock::DataAutomationProject ImageExtractionCategory": { + "State": "Whether generating categorical data from images is enabled.", + "Types": "The types of data to generate." + }, + "AWS::Bedrock::DataAutomationProject ImageStandardExtraction": { + "BoundingBox": "Settings for generating bounding boxes.", + "Category": "Settings for generating categorical data." + }, + "AWS::Bedrock::DataAutomationProject ImageStandardGenerativeField": { + "State": "Whether generating descriptions is enabled for images.", + "Types": "Settings for generating descriptions of images." + }, + "AWS::Bedrock::DataAutomationProject ImageStandardOutputConfiguration": { + "Extraction": "Settings for populating data fields that describe the image.", + "GenerativeField": "Whether to generate descriptions of the data." + }, + "AWS::Bedrock::DataAutomationProject OverrideConfiguration": { + "Document": "Additional settings for a project." + }, + "AWS::Bedrock::DataAutomationProject SplitterConfiguration": { + "State": "Whether document splitter is enabled for a project." + }, + "AWS::Bedrock::DataAutomationProject StandardOutputConfiguration": { + "Audio": "Settings for processing audio.", + "Document": "Settings for processing documents.", + "Image": "Settings for processing images.", + "Video": "Settings for processing video." + }, + "AWS::Bedrock::DataAutomationProject Tag": { + "Key": "The key associated with a tag.", + "Value": "The value associated with a tag." + }, + "AWS::Bedrock::DataAutomationProject VideoBoundingBox": { + "State": "Whether bounding boxes are enabled for video." + }, + "AWS::Bedrock::DataAutomationProject VideoExtractionCategory": { + "State": "Whether generating categorical data from video is enabled.", + "Types": "The types of data to generate." + }, + "AWS::Bedrock::DataAutomationProject VideoStandardExtraction": { + "BoundingBox": "Settings for generating bounding boxes.", + "Category": "Settings for generating categorical data." + }, + "AWS::Bedrock::DataAutomationProject VideoStandardGenerativeField": { + "State": "Whether generating descriptions is enabled for video.", + "Types": "The types of description to generate." 
+ }, + "AWS::Bedrock::DataAutomationProject VideoStandardOutputConfiguration": { + "Extraction": "Settings for populating data fields that describe the video.", + "GenerativeField": "Whether to generate descriptions of the video." }, "AWS::Bedrock::DataSource": { "DataDeletionPolicy": "The data deletion policy for the data source.", @@ -5295,6 +5464,10 @@ "ParsingModality": "Specifies whether to enable parsing of multimodal data, including both text and/or images.", "ParsingPrompt": "Instructions for interpreting the contents of a document." }, + "AWS::Bedrock::DataSource BedrockFoundationModelContextEnrichmentConfiguration": { + "EnrichmentStrategyConfiguration": "The enrichment strategy used to provide additional context. For example, Neptune GraphRAG uses Amazon Bedrock foundation models to perform chunk entity extraction.", + "ModelArn": "The Amazon Resource Name (ARN) of the foundation model used for context enrichment." + }, "AWS::Bedrock::DataSource ChunkingConfiguration": { "ChunkingStrategy": "A knowledge base can split your source data into chunks. A *chunk* refers to an excerpt from a data source that is returned when the knowledge base that it belongs to is queried. You have the following options for chunking your data. If you opt for `NONE` , then you may want to pre-process your files by splitting them up such that each file corresponds to a chunk.\n\n- `FIXED_SIZE` \u2013 Amazon Bedrock splits your source data into chunks of the approximate size that you set in the `fixedSizeChunkingConfiguration` .\n- `HIERARCHICAL` \u2013 Split documents into layers of chunks where the first layer contains large chunks, and the second layer contains smaller chunks derived from the first layer.\n- `SEMANTIC` \u2013 Split documents into chunks based on groups of similar content derived with natural language processing.\n- `NONE` \u2013 Amazon Bedrock treats each file as one chunk. If you choose this option, you may want to pre-process your documents by splitting them into separate files.", "FixedSizeChunkingConfiguration": "Configurations for when you choose fixed-size chunking. If you set the `chunkingStrategy` as `NONE` , exclude this field.", @@ -5314,6 +5487,10 @@ "HostType": "The supported host type, whether online/cloud or server/on-premises.", "HostUrl": "The Confluence host URL or instance URL." }, + "AWS::Bedrock::DataSource ContextEnrichmentConfiguration": { + "BedrockFoundationModelConfiguration": "The configuration of the Amazon Bedrock foundation model used for context enrichment.", + "Type": "The method used for context enrichment. It must be Amazon Bedrock foundation models." + }, "AWS::Bedrock::DataSource CrawlFilterConfiguration": { "PatternObjectFilter": "The configuration of filtering certain objects or content types of the data source.", "Type": "The type of filtering that you want to apply to certain objects or content of the data source. For example, the `PATTERN` type is regular expression patterns you can apply to filter your content." @@ -5330,6 +5507,9 @@ "Type": "The type of data source.", "WebConfiguration": "The configuration of web URLs to crawl for your data source. You should be authorized to crawl the URLs.\n\n> Crawling web URLs as your data source is in preview release and is subject to change." }, + "AWS::Bedrock::DataSource EnrichmentStrategyConfiguration": { + "Method": "The method used for the context enrichment strategy."
+ }, "AWS::Bedrock::DataSource FixedSizeChunkingConfiguration": { "MaxTokens": "The maximum number of tokens to include in a chunk.", "OverlapPercentage": "The percentage of overlap between adjacent chunks of a data source." @@ -5421,6 +5601,7 @@ }, "AWS::Bedrock::DataSource VectorIngestionConfiguration": { "ChunkingConfiguration": "Details about how to chunk the documents in the data source. A *chunk* refers to an excerpt from a data source that is returned when the knowledge base that it belongs to is queried.", + "ContextEnrichmentConfiguration": "The context enrichment configuration used for ingestion of the data into the vector store.", "CustomTransformationConfiguration": "A custom document transformer for parsed data source documents.", "ParsingConfiguration": "Configurations for a parser to use for parsing documents in your data source. If you exclude this field, the default parser will be used." }, @@ -5428,9 +5609,12 @@ "CrawlerLimits": "The configuration of crawl limits for the web URLs.", "ExclusionFilters": "A list of one or more exclusion regular expression patterns to exclude certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn\u2019t crawled.", "InclusionFilters": "A list of one or more inclusion regular expression patterns to include certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn\u2019t crawled.", - "Scope": "The scope of what is crawled for your URLs.\n\nYou can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL \"https://docs.aws.amazon.com/bedrock/latest/userguide/\" and no other domains. You can choose to include sub domains in addition to the host or primary domain. For example, web pages that contain \"aws.amazon.com\" can also include sub domain \"docs.aws.amazon.com\"." + "Scope": "The scope of what is crawled for your URLs.\n\nYou can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL \"https://docs.aws.amazon.com/bedrock/latest/userguide/\" and no other domains. You can choose to include sub domains in addition to the host or primary domain. For example, web pages that contain \"aws.amazon.com\" can also include sub domain \"docs.aws.amazon.com\".", + "UserAgent": "Returns the user agent suffix for your web crawler.", + "UserAgentHeader": "A string used for identifying the crawler or bot when it accesses a web server. The user agent header value consists of the `bedrockbot` , UUID, and a user agent suffix for your crawler (if one is provided). By default, it is set to `bedrockbot_UUID` . You can optionally append a custom suffix to `bedrockbot_UUID` to allowlist a specific user agent permitted to access your source URLs." }, "AWS::Bedrock::DataSource WebCrawlerLimits": { + "MaxPages": "The max number of web pages crawled from your source URLs, up to 25,000 pages. If the web pages exceed this limit, the data source sync will fail and no web pages will be ingested.", "RateLimit": "The max rate at which pages are crawled, up to 300 per minute per host." }, "AWS::Bedrock::DataSource WebDataSourceConfiguration": { @@ -5784,8 +5968,8 @@ "RegexesConfig": "A list of regular expressions to configure to the guardrail." 
}, "AWS::Bedrock::Guardrail Tag": { - "Key": "The tag's key.", - "Value": "The tag's value." + "Key": "The key associated with a tag.", + "Value": "The value associated with a tag." }, "AWS::Bedrock::Guardrail TopicConfig": { "Definition": "A definition of the topic to deny.", @@ -5849,6 +6033,14 @@ "TextField": "The name of the field in which Amazon Bedrock stores the raw text from your data. The text is split according to the chunking strategy you choose.", "VectorField": "The name of the field in which Amazon Bedrock stores the vector embeddings for your data sources." }, + "AWS::Bedrock::KnowledgeBase NeptuneAnalyticsConfiguration": { + "FieldMapping": "Contains the names of the fields to which to map information about the vector store.", + "GraphArn": "The Amazon Resource Name (ARN) of the Neptune Analytics vector store." + }, + "AWS::Bedrock::KnowledgeBase NeptuneAnalyticsFieldMapping": { + "MetadataField": "The name of the field in which Amazon Bedrock stores metadata about the vector store.", + "TextField": "The name of the field in which Amazon Bedrock stores the raw text from your data. The text is split according to the chunking strategy you choose." + }, "AWS::Bedrock::KnowledgeBase OpenSearchServerlessConfiguration": { "CollectionArn": "The Amazon Resource Name (ARN) of the OpenSearch Service vector store.", "FieldMapping": "Contains the names of the fields to which to map information about the vector store.", @@ -5948,6 +6140,7 @@ }, "AWS::Bedrock::KnowledgeBase StorageConfiguration": { "MongoDbAtlasConfiguration": "Contains the storage configuration of the knowledge base in MongoDB Atlas.", + "NeptuneAnalyticsConfiguration": "Contains details about the Neptune Analytics configuration of the knowledge base in Amazon Neptune. For more information, see [Create a vector index in Amazon Neptune Analytics.](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup-neptune.html) .", "OpensearchServerlessConfiguration": "Contains the storage configuration of the knowledge base in Amazon OpenSearch Service.", "PineconeConfiguration": "Contains the storage configuration of the knowledge base in Pinecone.", "RdsConfiguration": "Contains details about the storage configuration of the knowledge base in Amazon RDS. For more information, see [Create a vector index in Amazon RDS](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup-rds.html) .", @@ -6051,7 +6244,7 @@ "AWS::Bedrock::Prompt ToolChoice": { "Any": "The model must request at least one tool (no text is generated).", "Auto": "(Default). The Model automatically decides if a tool should be called or whether to generate text instead.", - "Tool": "The Model must request the specified tool. Only supported by Anthropic Claude 3 models." + "Tool": "The Model must request the specified tool. Only supported by Amazon Nova models and Anthropic Claude 3 models." }, "AWS::Bedrock::Prompt ToolConfiguration": { "ToolChoice": "If supported by model, forces the model to request a tool.", @@ -6142,7 +6335,7 @@ "AWS::Bedrock::PromptVersion ToolChoice": { "Any": "The model must request at least one tool (no text is generated).", "Auto": "(Default). The Model automatically decides if a tool should be called or whether to generate text instead.", - "Tool": "The Model must request the specified tool. Only supported by Anthropic Claude 3 models." + "Tool": "The Model must request the specified tool. Only supported by Amazon Nova models and Anthropic Claude 3 models." 
}, "AWS::Bedrock::PromptVersion ToolConfiguration": { "ToolChoice": "If supported by model, forces the model to request a tool.", @@ -6382,7 +6575,12 @@ "Name": "The unique name of the Cost Category.", "RuleVersion": "The rule schema version in this particular Cost Category.", "Rules": "The array of CostCategoryRule in JSON array format.\n\n> Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.", - "SplitChargeRules": "The split charge rules that are used to allocate your charges between your Cost Category values." + "SplitChargeRules": "The split charge rules that are used to allocate your charges between your Cost Category values.", + "Tags": "" + }, + "AWS::CE::CostCategory ResourceTag": { + "Key": "The key that's associated with the tag.", + "Value": "The value that's associated with the tag." }, "AWS::CUR::ReportDefinition": { "AdditionalArtifacts": "A list of manifests that you want AWS to create for this report.", @@ -6400,7 +6598,7 @@ }, "AWS::Cassandra::Keyspace": { "ClientSideTimestampsEnabled": "Indicates whether client-side timestamps are enabled (true) or disabled (false) for all tables in the keyspace. To add a Region to a single-Region keyspace with at least one table, the value must be set to true. After you've enabled client-side timestamps for a table, you can\u2019t disable it again.", - "KeyspaceName": "The name of the keyspace to be created. The keyspace name is case sensitive. If you don't specify a name, AWS CloudFormation generates a unique ID and uses that ID for the keyspace name. For more information, see [Name type](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-name.html) .\n\n*Length constraints:* Minimum length of 3. Maximum length of 255.\n\n*Pattern:* `^[a-zA-Z0-9][a-zA-Z0-9_]{1,47}$`", + "KeyspaceName": "The name of the keyspace to be created. The keyspace name is case sensitive. If you don't specify a name, AWS CloudFormation generates a unique ID and uses that ID for the keyspace name. For more information, see [Name type](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-name.html) .\n\n*Length constraints:* Minimum length of 1. Maximum length of 48.", "ReplicationSpecification": "Specifies the `ReplicationStrategy` of a keyspace. The options are:\n\n- `SINGLE_REGION` for a single Region keyspace (optional) or\n- `MULTI_REGION` for a multi-Region keyspace\n\nIf no `ReplicationStrategy` is provided, the default is `SINGLE_REGION` . If you choose `MULTI_REGION` , you must also provide a `RegionList` with the AWS Regions that the keyspace is replicated in.", "Tags": "An array of key-value pairs to apply to this resource.\n\nFor more information, see [Tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html) ." }, @@ -6544,6 +6742,7 @@ "Tags": "The tags to add to the configuration.", "TeamId": "The ID of the Microsoft Team authorized with .\n\nTo get the team ID, you must perform the initial authorization flow with Microsoft Teams in the in chat applications console. Then you can copy and paste the team ID from the console. 
For more details, see steps 1-3 in [Tutorial: Get started with Microsoft Teams](https://docs.aws.amazon.com/chatbot/latest/adminguide/teams-setup.html) in the *in chat applications Administrator Guide* .", "TeamsChannelId": "The ID of the Microsoft Teams channel.\n\nTo get the channel ID, open Microsoft Teams, right click on the channel name in the left pane, then choose *Copy* . An example of the channel ID syntax is: `19%3ab6ef35dc342d56ba5654e6fc6d25a071%40thread.tacv2` .", + "TeamsChannelName": "", "TeamsTenantId": "The ID of the Microsoft Teams tenant.\n\nTo get the tenant ID, you must perform the initial authorization flow with Microsoft Teams in the in chat applications console. Then you can copy and paste the tenant ID from the console. For more details, see steps 1-3 in [Tutorial: Get started with Microsoft Teams](https://docs.aws.amazon.com/chatbot/latest/adminguide/teams-setup.html) in the *in chat applications Administrator Guide* .", "UserRoleRequired": "Enables use of a user role requirement in your chat configuration." }, @@ -6601,7 +6800,7 @@ "Description": "A description of the collaboration provided by the collaboration owner.", "Members": "A list of initial members, not including the creator. This list is immutable.", "Name": "A human-readable identifier provided by the collaboration owner. Display names are not unique.", - "QueryLogStatus": "An indicator as to whether query logging has been enabled or disabled for the collaboration.", + "QueryLogStatus": "An indicator as to whether query logging has been enabled or disabled for the collaboration.\n\nWhen `ENABLED` , AWS Clean Rooms logs details about queries run within this collaboration and those logs can be viewed in Amazon CloudWatch Logs. The default value is `DISABLED` .", "Tags": "An optional label that you can assign to a resource when you create it. Each tag consists of a key and an optional value, both of which you define. When you use tagging, you can also use tag-based access control in IAM policies to control access to this resource." }, "AWS::CleanRooms::Collaboration DataEncryptionMetadata": { @@ -6643,7 +6842,7 @@ }, "AWS::CleanRooms::ConfiguredTable": { "AllowedColumns": "The columns within the underlying AWS Glue table that can be utilized within collaborations.", - "AnalysisMethod": "The analysis method for the configured table. 
The only valid value is currently `DIRECT_QUERY`.", + "AnalysisMethod": "The analysis method for the configured table.\n\n`DIRECT_QUERY` allows SQL queries to be run directly on this table.\n\n`DIRECT_JOB` allows PySpark jobs to be run directly on this table.\n\n`MULTIPLE` allows both SQL queries and PySpark jobs to be run directly on this table.", "AnalysisRules": "The analysis rule that was created for the configured table.", "Description": "A description for the configured table.", "Name": "A name for the configured table.", @@ -6821,7 +7020,7 @@ "CollaborationIdentifier": "The unique ID for the associated collaboration.", "DefaultResultConfiguration": "The default protected query result configuration as specified by the member who can receive results.", "PaymentConfiguration": "The payment responsibilities accepted by the collaboration member.", - "QueryLogStatus": "An indicator as to whether query logging has been enabled or disabled for the membership.", + "QueryLogStatus": "An indicator as to whether query logging has been enabled or disabled for the membership.\n\nWhen `ENABLED` , AWS Clean Rooms logs details about queries run within this collaboration and those logs can be viewed in Amazon CloudWatch Logs. The default value is `DISABLED` .", "Tags": "An optional label that you can assign to a resource when you create it. Each tag consists of a key and an optional value, both of which you define. When you use tagging, you can also use tag-based access control in IAM policies to control access to this resource." }, "AWS::CleanRooms::Membership MembershipMLPaymentConfig": { @@ -7072,7 +7271,7 @@ "LastUpdateTime": "The time the stack was last updated. This field will only be returned if the stack has been updated at least once.", "NotificationARNs": "The Amazon SNS topic ARNs to publish stack related events. You can find your Amazon SNS topic ARNs using the Amazon SNS console or your Command Line Interface (CLI).", "Outputs": "A list of output structures.", - "Parameters": "The set value pairs that represent the parameters passed to CloudFormation when this nested stack is created. Each parameter has a name corresponding to a parameter defined in the embedded template and a value representing the value that you want to set for the parameter.\n\n> If you use the `Ref` function to pass a parameter value to a nested stack, comma-delimited list parameters must be of type `String` . In other words, you can't pass values that are of type `CommaDelimitedList` to nested stacks. \n\nConditional. Required if the nested stack requires input parameters.\n\nWhether an update causes interruptions depends on the resources that are being updated. An update never causes a nested stack to be replaced.", + "Parameters": "The set value pairs that represent the parameters passed to CloudFormation when this nested stack is created. Each parameter has a name corresponding to a parameter defined in the embedded template and a value representing the value that you want to set for the parameter.\n\n> If you use the `Ref` function to pass a parameter value to a nested stack, comma-delimited list parameters must be of type `String` . In other words, you can't pass values that are of type `CommaDelimitedList` to nested stacks. \n\nRequired if the nested stack requires input parameters.\n\nWhether an update causes interruptions depends on the resources that are being updated. 
An update never causes a nested stack to be replaced.", "ParentId": "For nested stacks--stacks created as resources for another stack--the stack ID of the direct parent of this stack. For the first level of nested stacks, the root stack is also the parent stack.\n\nFor more information, see [Embed stacks within other stacks using nested stacks](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html) in the *AWS CloudFormation User Guide* .", "RoleARN": "The Amazon Resource Name (ARN) of an IAM role that CloudFormation assumes to create the stack. CloudFormation uses the role's credentials to make calls on your behalf. CloudFormation always uses this role for all future operations on the stack. Provided that users have permission to operate on the stack, CloudFormation uses this role even if the users don't have permission to pass it. Ensure that the role grants least privilege.\n\nIf you don't specify a value, CloudFormation uses the role that was previously associated with the stack. If no role is available, CloudFormation uses a temporary session that's generated from your user credentials.", "RootId": "For nested stacks--stacks created as resources for another stack--the stack ID of the top-level stack to which the nested stack ultimately belongs.\n\nFor more information, see [Embed stacks within other stacks using nested stacks](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html) in the *AWS CloudFormation User Guide* .", @@ -7098,18 +7297,18 @@ "Value": "*Required* . A string containing the value for this tag. You can specify a maximum of 256 characters for a tag value." }, "AWS::CloudFormation::StackSet": { - "AdministrationRoleARN": "The Amazon Resource Number (ARN) of the IAM role to use to create this stack set. Specify an IAM role only if you are using customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account.\n\nUse customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account. For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the *AWS CloudFormation User Guide* .\n\n*Minimum* : `20`\n\n*Maximum* : `2048`", - "AutoDeployment": "[ `Service-managed` permissions] Describes whether StackSets automatically deploys to AWS Organizations accounts that are added to a target organization or organizational unit (OU).", - "CallAs": "[Service-managed permissions] Specifies whether you are acting as an account administrator in the organization's management account or as a delegated administrator in a member account.\n\nBy default, `SELF` is specified. Use `SELF` for stack sets with self-managed permissions.\n\n- To create a stack set with service-managed permissions while signed in to the management account, specify `SELF` .\n- To create a stack set with service-managed permissions while signed in to a delegated administrator account, specify `DELEGATED_ADMIN` .\n\nYour AWS account must be registered as a delegated admin in the management account. 
For more information, see [Register a delegated administrator](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-delegated-admin.html) in the *AWS CloudFormation User Guide* .\n\nStack sets with service-managed permissions are created in the management account, including stack sets that are created by delegated administrators.\n\n*Valid Values* : `SELF` | `DELEGATED_ADMIN`", + "AdministrationRoleARN": "The Amazon Resource Number (ARN) of the IAM role to use to create this stack set. Specify an IAM role only if you are using customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account.\n\nUse customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account. For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the *AWS CloudFormation User Guide* .\n\nValid only if the permissions model is `SELF_MANAGED` .", + "AutoDeployment": "Describes whether StackSets automatically deploys to AWS Organizations accounts that are added to a target organization or organizational unit (OU). For more information, see [Manage automatic deployments for CloudFormation StackSets that use service-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-manage-auto-deployment.html) in the *AWS CloudFormation User Guide* .\n\nRequired if the permissions model is `SERVICE_MANAGED` . (Not used with self-managed permissions.)", + "CallAs": "Specifies whether you are acting as an account administrator in the organization's management account or as a delegated administrator in a member account.\n\nBy default, `SELF` is specified. Use `SELF` for stack sets with self-managed permissions.\n\n- To create a stack set with service-managed permissions while signed in to the management account, specify `SELF` .\n- To create a stack set with service-managed permissions while signed in to a delegated administrator account, specify `DELEGATED_ADMIN` .\n\nYour AWS account must be registered as a delegated admin in the management account. For more information, see [Register a delegated administrator](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-delegated-admin.html) in the *AWS CloudFormation User Guide* .\n\nStack sets with service-managed permissions are created in the management account, including stack sets that are created by delegated administrators.\n\nValid only if the permissions model is `SERVICE_MANAGED` .", "Capabilities": "The capabilities that are allowed in the stack set. Some stack set templates might include resources that can affect permissions in your AWS account \u2014for example, by creating new IAM users. For more information, see [Acknowledging IAM resources in CloudFormation templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/control-access-with-iam.html#using-iam-capabilities) in the *AWS CloudFormation User Guide* .", - "Description": "A description of the stack set.\n\n*Minimum* : `1`\n\n*Maximum* : `1024`", - "ExecutionRoleName": "The name of the IAM execution role to use to create the stack set. 
If you don't specify an execution role, CloudFormation uses the `AWSCloudFormationStackSetExecutionRole` role for the stack set operation.\n\n*Minimum* : `1`\n\n*Maximum* : `64`\n\n*Pattern* : `[a-zA-Z_0-9+=,.@-]+`", + "Description": "A description of the stack set.", + "ExecutionRoleName": "The name of the IAM execution role to use to create the stack set. If you don't specify an execution role, CloudFormation uses the `AWSCloudFormationStackSetExecutionRole` role for the stack set operation.\n\nValid only if the permissions model is `SELF_MANAGED` .\n\n*Pattern* : `[a-zA-Z_0-9+=,.@-]+`", "ManagedExecution": "Describes whether StackSets performs non-conflicting operations concurrently and queues conflicting operations.\n\nWhen active, StackSets performs non-conflicting operations concurrently and queues conflicting operations. After conflicting operations finish, StackSets starts queued operations in request order.\n\n> If there are already running or queued operations, StackSets queues all incoming operations even if they are non-conflicting.\n> \n> You can't modify your stack set's execution configuration while there are running or queued operations for that stack set. \n\nWhen inactive (default), StackSets performs one operation at a time in request order.", "OperationPreferences": "The user-specified preferences for how CloudFormation performs a stack set operation.", "Parameters": "The input parameters for the stack set template.", "PermissionModel": "Describes how the IAM roles required for stack set operations are created.\n\n- With `SELF_MANAGED` permissions, you must create the administrator and execution roles required to deploy to target accounts. For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the *AWS CloudFormation User Guide* .\n- With `SERVICE_MANAGED` permissions, StackSets automatically creates the IAM roles required to deploy to accounts managed by AWS Organizations . For more information, see [Activate trusted access for stack sets with AWS Organizations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-activate-trusted-access.html) in the *AWS CloudFormation User Guide* .", "StackInstancesGroup": "A group of stack instances with parameters in some specific accounts and Regions.", - "StackSetName": "The name to associate with the stack set. The name must be unique in the Region where you create your stack set.\n\n> The `StackSetName` property is required.", + "StackSetName": "The name to associate with the stack set. The name must be unique in the Region where you create your stack set.", "Tags": "Key-value pairs to associate with this stack. CloudFormation also propagates these tags to supported resources in the stack. You can specify a maximum number of 50 tags.\n\nIf you don't specify this parameter, CloudFormation doesn't modify the stack's tags. If you specify an empty value, CloudFormation removes all associated tags.", "TemplateBody": "The structure that contains the template body, with a minimum length of 1 byte and a maximum length of 51,200 bytes.\n\nYou must include either `TemplateURL` or `TemplateBody` in a StackSet, but you can't use both. Dynamic references in the `TemplateBody` may not work correctly in all cases. It's recommended to pass templates containing dynamic references through `TemplateUrl` instead.", "TemplateURL": "The URL of a file containing the template body. 
The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket or a Systems Manager document. The location for an Amazon S3 bucket must start with `https://` .\n\nConditional: You must specify only one of the following parameters: `TemplateBody` , `TemplateURL` ." @@ -7134,7 +7333,7 @@ "MaxConcurrentCount": "The maximum number of accounts in which to perform this operation at one time. This is dependent on the value of `FailureToleranceCount` . `MaxConcurrentCount` is at most one more than the `FailureToleranceCount` .\n\nNote that this setting lets you specify the *maximum* for operations. For large deployments, under certain circumstances the actual number of accounts acted upon concurrently may be lower due to service throttling.\n\nConditional: You must specify either `MaxConcurrentCount` or `MaxConcurrentPercentage` , but not both.", "MaxConcurrentPercentage": "The maximum percentage of accounts in which to perform this operation at one time.\n\nWhen calculating the number of accounts based on the specified percentage, CloudFormation rounds down to the next whole number. This is true except in cases where rounding down would result is zero. In this case, CloudFormation sets the number as one instead.\n\nNote that this setting lets you specify the *maximum* for operations. For large deployments, under certain circumstances the actual number of accounts acted upon concurrently may be lower due to service throttling.\n\nConditional: You must specify either `MaxConcurrentCount` or `MaxConcurrentPercentage` , but not both.", "RegionConcurrencyType": "The concurrency type of deploying StackSets operations in Regions, could be in parallel or one Region at a time.", - "RegionOrder": "The order of the Regions where you want to perform the stack operation.\n\n> `RegionOrder` isn't followed if `AutoDeployment` is enabled." + "RegionOrder": "The order of the Regions where you want to perform the stack operation." }, "AWS::CloudFormation::StackSet Parameter": { "ParameterKey": "The key associated with the parameter. If you don't specify a key and value for a particular parameter, CloudFormation uses the default value that's specified in your template.", @@ -7754,7 +7953,7 @@ "AWS::CloudTrail::EventDataStore AdvancedFieldSelector": { "EndsWith": "An operator that includes events that match the last few characters of the event record field specified as the value of `Field` .", "Equals": "An operator that includes events that match the exact value of the event record field specified as the value of `Field` . This is the only valid operator that you can use with the `readOnly` , `eventCategory` , and `resources.type` fields.", - "Field": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . 
The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.\n\nThe following are valid values for network activity events:\n\n- `cloudtrail.amazonaws.com`\n- `ec2.amazonaws.com`\n- `kms.amazonaws.com`\n- `s3.amazonaws.com`\n- `secretsmanager.amazonaws.com`\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. 
For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor information about filtering data events on the `resources.ARN` field, see [Filtering data events by resources.ARN](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/filtering-data-events.html#filtering-data-events-resourcearn) in the *AWS CloudTrail User Guide* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", + "Field": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . 
The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.\n\nThe following are valid values for network activity events:\n\n- `cloudtrail.amazonaws.com`\n- `ec2.amazonaws.com`\n- `kms.amazonaws.com`\n- `s3.amazonaws.com`\n- `secretsmanager.amazonaws.com`\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. 
For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", "NotEndsWith": "An operator that excludes events that match the last few characters of the event record field specified as the value of `Field` .", "NotEquals": "An operator that excludes events that match the exact value of the event record field specified as the value of `Field` .", "NotStartsWith": "An operator that excludes events that match the first few characters of the event record field specified as the value of `Field` .", @@ -7785,7 +7984,7 @@ "KMSKeyId": "Specifies the AWS KMS key ID to use to encrypt the logs delivered by CloudTrail. 
The value can be an alias name prefixed by \"alias/\", a fully specified ARN to an alias, a fully specified ARN to a key, or a globally unique identifier.\n\nCloudTrail also supports AWS KMS multi-Region keys. For more information about multi-Region keys, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide* .\n\nExamples:\n\n- alias/MyAliasName\n- arn:aws:kms:us-east-2:123456789012:alias/MyAliasName\n- arn:aws:kms:us-east-2:123456789012:key/12345678-1234-1234-1234-123456789012\n- 12345678-1234-1234-1234-123456789012", "S3BucketName": "Specifies the name of the Amazon S3 bucket designated for publishing log files. See [Amazon S3 Bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html) .", "S3KeyPrefix": "Specifies the Amazon S3 key prefix that comes after the name of the bucket you have designated for log file delivery. For more information, see [Finding Your CloudTrail Log Files](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/get-and-view-cloudtrail-log-files.html#cloudtrail-find-log-files) . The maximum length is 200 characters.", - "SnsTopicName": "Specifies the name of the Amazon SNS topic defined for notification of log file delivery. The maximum length is 256 characters.", + "SnsTopicName": "Specifies the name or ARN of the Amazon SNS topic defined for notification of log file delivery. The maximum length is 256 characters.", "Tags": "A custom set of tags (key-value pairs) for this trail.", "TrailName": "Specifies the name of the trail. The name must meet the following requirements:\n\n- Contain only ASCII letters (a-z, A-Z), numbers (0-9), periods (.), underscores (_), or dashes (-)\n- Start with a letter or number, and end with a letter or number\n- Be between 3 and 128 characters\n- Have no adjacent periods, underscores or dashes. Names like `my-_namespace` and `my--namespace` are not valid.\n- Not be in IP address format (for example, 192.168.5.4)" }, @@ -7796,7 +7995,7 @@ "AWS::CloudTrail::Trail AdvancedFieldSelector": { "EndsWith": "An operator that includes events that match the last few characters of the event record field specified as the value of `Field` .", "Equals": "An operator that includes events that match the exact value of the event record field specified as the value of `Field` . This is the only valid operator that you can use with the `readOnly` , `eventCategory` , and `resources.type` fields.", - "Field": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . 
The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.\n\nThe following are valid values for network activity events:\n\n- `cloudtrail.amazonaws.com`\n- `ec2.amazonaws.com`\n- `kms.amazonaws.com`\n- `s3.amazonaws.com`\n- `secretsmanager.amazonaws.com`\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. 
For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor information about filtering data events on the `resources.ARN` field, see [Filtering data events by resources.ARN](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/filtering-data-events.html#filtering-data-events-resourcearn) in the *AWS CloudTrail User Guide* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", + "Field": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . 
The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.\n\nThe following are valid values for network activity events:\n\n- `cloudtrail.amazonaws.com`\n- `ec2.amazonaws.com`\n- `kms.amazonaws.com`\n- `s3.amazonaws.com`\n- `secretsmanager.amazonaws.com`\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. 
For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", "NotEndsWith": "An operator that excludes events that match the last few characters of the event record field specified as the value of `Field` .", "NotEquals": "An operator that excludes events that match the exact value of the event record field specified as the value of `Field` .", "NotStartsWith": "An operator that excludes events that match the first few characters of the event record field specified as the value of `Field` .", @@ -7997,19 +8196,19 @@ "DomainOwner": "The 12-digit account number of the AWS account that owns the domain. 
It does not include dashes or spaces.", "OriginConfiguration": "Details about the package origin configuration of a package group.", "Pattern": "The pattern of the package group. The pattern determines which packages are associated with the package group.", - "Tags": "A list of tags to be applied to the package group." + "Tags": "" }, "AWS::CodeArtifact::PackageGroup OriginConfiguration": { - "Restrictions": "The origin configuration settings that determine how package versions can enter repositories." + "Restrictions": "" }, "AWS::CodeArtifact::PackageGroup RestrictionType": { - "Repositories": "The repositories to add to the allowed repositories list. The allowed repositories list is used when the `RestrictionMode` is set to `ALLOW_SPECIFIC_REPOSITORIES` .", - "RestrictionMode": "The package group origin restriction setting. When the value is `INHERIT` , the value is set to the value of the first parent package group which does not have a value of `INHERIT` ." + "Repositories": "", + "RestrictionMode": "" }, "AWS::CodeArtifact::PackageGroup Restrictions": { - "ExternalUpstream": "The package group origin restriction setting for external, upstream repositories.", - "InternalUpstream": "The package group origin restriction setting for internal, upstream repositories.", - "Publish": "The package group origin restriction setting for publishing packages." + "ExternalUpstream": "", + "InternalUpstream": "", + "Publish": "" }, "AWS::CodeArtifact::PackageGroup Tag": { "Key": "The tag key.", @@ -8135,7 +8334,7 @@ "ImagePullCredentialsType": "The type of credentials AWS CodeBuild uses to pull images in your build. There are two valid values:\n\n- `CODEBUILD` specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild service principal.\n- `SERVICE_ROLE` specifies that AWS CodeBuild uses your build project's service role.\n\nWhen you use a cross-account or private registry image, you must use SERVICE_ROLE credentials. When you use an AWS CodeBuild curated image, you must use CODEBUILD credentials.", "PrivilegedMode": "Enables running the Docker daemon inside a Docker container. Set to true only if the build project is used to build Docker images. Otherwise, a build that attempts to interact with the Docker daemon fails. The default setting is `false` .\n\nYou can initialize the Docker daemon during the install phase of your build by adding one of the following sets of commands to the install phase of your buildspec file:\n\nIf the operating system's base image is Ubuntu Linux:\n\n`- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay&`\n\n`- timeout 15 sh -c \"until docker info; do echo .; sleep 1; done\"`\n\nIf the operating system's base image is Alpine Linux and the previous command does not work, add the `-t` argument to `timeout` :\n\n`- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay&`\n\n`- timeout -t 15 sh -c \"until docker info; do echo .; sleep 1; done\"`", "RegistryCredential": "`RegistryCredential` is a property of the [AWS::CodeBuild::Project Environment](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-codebuild-project.html#cfn-codebuild-project-environment) property that specifies information about credentials that provide access to a private Docker registry. 
When this is set:\n\n- `imagePullCredentialsType` must be set to `SERVICE_ROLE` .\n- images cannot be curated or an Amazon ECR image.", - "Type": "The type of build environment to use for related builds.\n\n- The environment type `ARM_CONTAINER` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), and EU (Frankfurt).\n- The environment type `LINUX_CONTAINER` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), China (Beijing), and China (Ningxia).\n- The environment type `LINUX_GPU_CONTAINER` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney) , China (Beijing), and China (Ningxia).\n\n- The environment types `ARM_LAMBDA_CONTAINER` and `LINUX_LAMBDA_CONTAINER` are available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), and South America (S\u00e3o Paulo).\n\n- The environment types `WINDOWS_CONTAINER` and `WINDOWS_SERVER_2019_CONTAINER` are available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland).\n\n> If you're using compute fleets during project creation, `type` will be ignored. \n\nFor more information, see [Build environment compute types](https://docs.aws.amazon.com//codebuild/latest/userguide/build-env-ref-compute-types.html) in the *AWS CodeBuild user guide* ." + "Type": "The type of build environment to use for related builds.\n\n> If you're using compute fleets during project creation, `type` will be ignored. \n\nFor more information, see [Build environment compute types](https://docs.aws.amazon.com//codebuild/latest/userguide/build-env-ref-compute-types.html) in the *AWS CodeBuild user guide* ." }, "AWS::CodeBuild::Project EnvironmentVariable": { "Name": "The name or key of the environment variable.", @@ -8221,7 +8420,7 @@ "AWS::CodeBuild::Project WebhookFilter": { "ExcludeMatchedPattern": "Used to indicate that the `pattern` determines which webhook events do not trigger a build. If true, then a webhook event that does not match the `pattern` triggers a build. If false, then a webhook event that matches the `pattern` triggers a build.", "Pattern": "For a `WebHookFilter` that uses `EVENT` type, a comma-separated string that specifies one or more events. For example, the webhook filter `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` allows all push, pull request created, and pull request updated events to trigger a build.\n\nFor a `WebHookFilter` that uses any of the other filter types, a regular expression pattern. For example, a `WebHookFilter` that uses `HEAD_REF` for its `type` and the pattern `^refs/heads/` triggers a build when the head reference is a branch with a reference name `refs/heads/branch-name` .", - "Type": "The type of webhook filter. 
There are nine webhook filter types: `EVENT` , `ACTOR_ACCOUNT_ID` , `HEAD_REF` , `BASE_REF` , `FILE_PATH` , `COMMIT_MESSAGE` , `TAG_NAME` , `RELEASE_NAME` , and `WORKFLOW_NAME` .\n\n- EVENT\n\n- A webhook event triggers a build when the provided `pattern` matches one of nine event types: `PUSH` , `PULL_REQUEST_CREATED` , `PULL_REQUEST_UPDATED` , `PULL_REQUEST_CLOSED` , `PULL_REQUEST_REOPENED` , `PULL_REQUEST_MERGED` , `RELEASED` , `PRERELEASED` , and `WORKFLOW_JOB_QUEUED` . The `EVENT` patterns are specified as a comma-separated string. For example, `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` filters all push, pull request created, and pull request updated events.\n\n> Types `PULL_REQUEST_REOPENED` and `WORKFLOW_JOB_QUEUED` work with GitHub and GitHub Enterprise only. Types `RELEASED` and `PRERELEASED` work with GitHub only.\n- ACTOR_ACCOUNT_ID\n\n- A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression `pattern` .\n- HEAD_REF\n\n- A webhook event triggers a build when the head reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` and `refs/tags/tag-name` .\n\n> Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.\n- BASE_REF\n\n- A webhook event triggers a build when the base reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` .\n\n> Works with pull request events only.\n- FILE_PATH\n\n- A webhook triggers a build when the path of a changed file matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- COMMIT_MESSAGE\n\n- A webhook triggers a build when the head commit message matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- TAG_NAME\n\n- A webhook triggers a build when the tag name of the release matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- RELEASE_NAME\n\n- A webhook triggers a build when the release name matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- REPOSITORY_NAME\n\n- A webhook triggers a build when the repository name matches the regular expression pattern.\n\n> Works with GitHub global or organization webhooks only.\n- WORKFLOW_NAME\n\n- A webhook triggers a build when the workflow name matches the regular expression `pattern` .\n\n> Works with `WORKFLOW_JOB_QUEUED` events only. > For CodeBuild-hosted Buildkite runner builds, WORKFLOW_NAME filters will filter by pipeline name." + "Type": "The type of webhook filter. There are 11 webhook filter types: `EVENT` , `ACTOR_ACCOUNT_ID` , `HEAD_REF` , `BASE_REF` , `FILE_PATH` , `COMMIT_MESSAGE` , `TAG_NAME` , `RELEASE_NAME` , `REPOSITORY_NAME` , `ORGANIZATION_NAME` , and `WORKFLOW_NAME` .\n\n- EVENT\n\n- A webhook event triggers a build when the provided `pattern` matches one of nine event types: `PUSH` , `PULL_REQUEST_CREATED` , `PULL_REQUEST_UPDATED` , `PULL_REQUEST_CLOSED` , `PULL_REQUEST_REOPENED` , `PULL_REQUEST_MERGED` , `RELEASED` , `PRERELEASED` , and `WORKFLOW_JOB_QUEUED` . The `EVENT` patterns are specified as a comma-separated string. 
For example, `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` filters all push, pull request created, and pull request updated events.\n\n> Types `PULL_REQUEST_REOPENED` and `WORKFLOW_JOB_QUEUED` work with GitHub and GitHub Enterprise only. Types `RELEASED` and `PRERELEASED` work with GitHub only.\n- ACTOR_ACCOUNT_ID\n\n- A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression `pattern` .\n- HEAD_REF\n\n- A webhook event triggers a build when the head reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` and `refs/tags/tag-name` .\n\n> Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.\n- BASE_REF\n\n- A webhook event triggers a build when the base reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` .\n\n> Works with pull request events only.\n- FILE_PATH\n\n- A webhook triggers a build when the path of a changed file matches the regular expression `pattern` .\n\n> Works with push and pull request events only.\n- COMMIT_MESSAGE\n\n- A webhook triggers a build when the head commit message matches the regular expression `pattern` .\n\n> Works with push and pull request events only.\n- TAG_NAME\n\n- A webhook triggers a build when the tag name of the release matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- RELEASE_NAME\n\n- A webhook triggers a build when the release name matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- REPOSITORY_NAME\n\n- A webhook triggers a build when the repository name matches the regular expression `pattern` .\n\n> Works with GitHub global or organization webhooks only.\n- ORGANIZATION_NAME\n\n- A webhook triggers a build when the organization name matches the regular expression `pattern` .\n\n> Works with GitHub global webhooks only.\n- WORKFLOW_NAME\n\n- A webhook triggers a build when the workflow name matches the regular expression `pattern` .\n\n> Works with `WORKFLOW_JOB_QUEUED` events only. > For CodeBuild-hosted Buildkite runner builds, WORKFLOW_NAME filters will filter by pipeline name." }, "AWS::CodeBuild::ReportGroup": { "DeleteReports": "When deleting a report group, specifies if reports within the report group should be deleted.\n\n- **true** - Deletes any reports that belong to the report group before deleting the report group.\n- **false** - You must delete any reports in the report group. This is the default value. If you delete a report group that contains one or more reports, an exception is thrown.", @@ -8849,7 +9048,7 @@ "AliasAttributes": "Attributes supported as an alias for this user pool. For more information about alias attributes, see [Customizing sign-in attributes](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html#user-pool-settings-aliases) .", "AutoVerifiedAttributes": "The attributes that you want your user pool to automatically verify. For more information, see [Verifying contact information at sign-up](https://docs.aws.amazon.com/cognito/latest/developerguide/signing-up-users-in-your-app.html#allowing-users-to-sign-up-and-confirm-themselves) .", "DeletionProtection": "When active, `DeletionProtection` prevents accidental deletion of your user\npool. 
Before you can delete a user pool that you have protected against deletion, you\nmust deactivate this feature.\n\nWhen you try to delete a protected user pool in a `DeleteUserPool` API request, Amazon Cognito returns an `InvalidParameterException` error. To delete a protected user pool, send a new `DeleteUserPool` request after you deactivate deletion protection in an `UpdateUserPool` API request.", - "DeviceConfiguration": "The device-remembering configuration for a user pool. Device remembering or device tracking is a \"Remember me on this device\" option for user pools that perform authentication with the device key of a trusted device in the back end, instead of a user-provided MFA code. For more information about device authentication, see [Working with user devices in your user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-device-tracking.html) . A null value indicates that you have deactivated device remembering in your user pool.\n\n> When you provide a value for any `DeviceConfiguration` field, you activate the Amazon Cognito device-remembering feature. For more infor", + "DeviceConfiguration": "The device-remembering configuration for a user pool. Device remembering or device tracking is a \"Remember me on this device\" option for user pools that perform authentication with the device key of a trusted device in the back end, instead of a user-provided MFA code. For more information about device authentication, see [Working with user devices in your user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-device-tracking.html) . A null value indicates that you have deactivated device remembering in your user pool.\n\n> When you provide a value for any `DeviceConfiguration` field, you activate the Amazon Cognito device-remembering feature. For more information, see [Working with devices](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-device-tracking.html) .", "EmailAuthenticationMessage": "", "EmailAuthenticationSubject": "", "EmailConfiguration": "The email configuration of your user pool. The email configuration type sets your preferred sending method, AWS Region, and sender for messages from your user pool.", @@ -10191,22 +10390,22 @@ "AWS::ControlTower::EnabledBaseline": { "BaselineIdentifier": "The specific `Baseline` enabled as part of the `EnabledBaseline` resource.", "BaselineVersion": "The enabled version of the `Baseline` .", - "Parameters": "Parameters that are applied when enabling this `Baseline` . These parameters configure the behavior of the baseline.", - "Tags": "Tags associated with input to `EnableBaseline` .", + "Parameters": "Shows the parameters that are applied when enabling this `Baseline` .", + "Tags": "", "TargetIdentifier": "The target on which to enable the `Baseline` ." }, "AWS::ControlTower::EnabledBaseline Parameter": { - "Key": "A string denoting the parameter key.", - "Value": "A low-level `Document` object of any type (for example, a Java Object)." + "Key": "", + "Value": "" }, "AWS::ControlTower::EnabledBaseline Tag": { - "Key": "A string that identifies a key-value pair.", - "Value": "A string parameter that describes an `EnabledBaseline` resource." + "Key": "", + "Value": "" }, "AWS::ControlTower::EnabledControl": { "ControlIdentifier": "The ARN of the control. Only *Strongly recommended* and *Elective* controls are permitted, with the exception of the *Region deny* control. 
For information on how to find the `controlIdentifier` , see [the overview page](https://docs.aws.amazon.com//controltower/latest/APIReference/Welcome.html) .", "Parameters": "Array of `EnabledControlParameter` objects.", - "Tags": "Tags to be applied to the enabled control.", + "Tags": "", "TargetIdentifier": "The ARN of the organizational unit. For information on how to find the `targetIdentifier` , see [the overview page](https://docs.aws.amazon.com//controltower/latest/APIReference/Welcome.html) ." }, "AWS::ControlTower::EnabledControl EnabledControlParameter": { @@ -10214,8 +10413,8 @@ "Value": "The value of a key/value pair. It can be of type `array` , `string` , `number` , `object` , or `boolean` . [Note: The *Type* field that follows may show a single type such as Number, which is only one possible type.]" }, "AWS::ControlTower::EnabledControl Tag": { - "Key": "The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with `aws` .", - "Value": "The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with `aws` ." + "Key": "", + "Value": "" }, "AWS::ControlTower::LandingZone": { "Manifest": "The landing zone manifest JSON text file that specifies the landing zone configurations.", @@ -10816,6 +11015,20 @@ "ServerName": "The name of the server on the DocumentDB source endpoint.", "SslMode": "" }, + "AWS::DMS::DataProvider IbmDb2LuwSettings": { + "CertificateArn": "", + "DatabaseName": "", + "Port": "", + "ServerName": "", + "SslMode": "" + }, + "AWS::DMS::DataProvider IbmDb2zOsSettings": { + "CertificateArn": "", + "DatabaseName": "", + "Port": "", + "ServerName": "", + "SslMode": "" + }, "AWS::DMS::DataProvider MariaDbSettings": { "CertificateArn": "", "Port": "", @@ -10871,6 +11084,8 @@ }, "AWS::DMS::DataProvider Settings": { "DocDbSettings": "", + "IbmDb2LuwSettings": "", + "IbmDb2zOsSettings": "", "MariaDbSettings": "", "MicrosoftSqlServerSettings": "", "MongoDbSettings": "", @@ -11913,7 +12128,7 @@ "AWS::DataSync::LocationNFS": { "MountOptions": "Specifies the options that DataSync can use to mount your NFS file server.", "OnPremConfig": "Specifies the Amazon Resource Name (ARN) of the DataSync agent that can connect to your NFS file server.\n\nYou can specify more than one agent. For more information, see [Using multiple DataSync agents](https://docs.aws.amazon.com/datasync/latest/userguide/do-i-need-datasync-agent.html#multiple-agents) .", - "ServerHostname": "Specifies the Domain Name System (DNS) name or IP version 4 address of the NFS file server that your DataSync agent connects to.", + "ServerHostname": "Specifies the DNS name or IP version 4 address of the NFS file server that your DataSync agent connects to.", "Subdirectory": "Specifies the export path in your NFS file server that you want DataSync to mount.\n\nThis path (or a subdirectory of the path) is where DataSync transfers data to or from. For information on configuring an export for DataSync, see [Accessing NFS file servers](https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html#accessing-nfs) .", "Tags": "Specifies labels that help you categorize, filter, and search for your AWS resources. We recommend creating at least a name tag for your location." 
}, @@ -11933,7 +12148,7 @@ "BucketName": "Specifies the name of the object storage bucket involved in the transfer.", "SecretKey": "Specifies the secret key (for example, a password) if credentials are required to authenticate with the object storage server.", "ServerCertificate": "Specifies a certificate chain for DataSync to authenticate with your object storage system if the system uses a private or self-signed certificate authority (CA). You must specify a single `.pem` file with a full certificate chain (for example, `file:///home/user/.ssh/object_storage_certificates.pem` ).\n\nThe certificate chain might include:\n\n- The object storage system's certificate\n- All intermediate certificates (if there are any)\n- The root certificate of the signing CA\n\nYou can concatenate your certificates into a `.pem` file (which can be up to 32768 bytes before base64 encoding). The following example `cat` command creates an `object_storage_certificates.pem` file that includes three certificates:\n\n`cat object_server_certificate.pem intermediate_certificate.pem ca_root_certificate.pem > object_storage_certificates.pem`\n\nTo use this parameter, configure `ServerProtocol` to `HTTPS` .", - "ServerHostname": "Specifies the domain name or IP address of the object storage server. A DataSync agent uses this hostname to mount the object storage server in a network.", + "ServerHostname": "Specifies the domain name or IP version 4 (IPv4) address of the object storage server that your DataSync agent connects to.", "ServerPort": "Specifies the port that your object storage server accepts inbound network traffic on (for example, port 443).", "ServerProtocol": "Specifies the protocol that your object storage server uses to communicate.", "Subdirectory": "Specifies the object prefix for your object storage server. If this is a source location, DataSync only copies objects with this prefix. If this is a destination location, DataSync writes all objects with this prefix.", @@ -11967,7 +12182,7 @@ "KerberosPrincipal": "Specifies a Kerberos prinicpal, which is an identity in your Kerberos realm that has permission to access the files, folders, and file metadata in your SMB file server.\n\nA Kerberos principal might look like `HOST/kerberosuser@MYDOMAIN.ORG` .\n\nPrincipal names are case sensitive. Your DataSync task execution will fail if the principal that you specify for this parameter doesn\u2019t exactly match the principal that you use to create the keytab file.", "MountOptions": "Specifies the version of the SMB protocol that DataSync uses to access your SMB file server.", "Password": "Specifies the password of the user who can mount your SMB file server and has permission to access the files and folders involved in your transfer. 
This parameter applies only if `AuthenticationType` is set to `NTLM` .", - "ServerHostname": "Specifies the domain name or IP address of the SMB file server that your DataSync agent will mount.\n\nRemember the following when configuring this parameter:\n\n- You can't specify an IP version 6 (IPv6) address.\n- If you're using Kerberos authentication, you must specify a domain name.", + "ServerHostname": "Specifies the domain name or IP address of the SMB file server that your DataSync agent connects to.\n\nRemember the following when configuring this parameter:\n\n- You can't specify an IP version 6 (IPv6) address.\n- If you're using Kerberos authentication, you must specify a domain name.", "Subdirectory": "Specifies the name of the share exported by your SMB file server where DataSync will read or write data. You can include a subdirectory in the share path (for example, `/path/to/subdirectory` ). Make sure that other SMB clients in your network can also mount this path.\n\nTo copy all data in the subdirectory, DataSync must be able to mount the SMB share and access all of its data. For more information, see [Providing DataSync access to SMB file servers](https://docs.aws.amazon.com/datasync/latest/userguide/create-smb-location.html#configuring-smb-permissions) .", "Tags": "Specifies labels that help you categorize, filter, and search for your AWS resources. We recommend creating at least a name tag for your location.", "User": "Specifies the user that can mount and access the files, folders, and file metadata in your SMB file server. This parameter applies only if `AuthenticationType` is set to `NTLM` .\n\nFor information about choosing a user with the right level of access for your transfer, see [Providing DataSync access to SMB file servers](https://docs.aws.amazon.com/datasync/latest/userguide/create-smb-location.html#configuring-smb-permissions) ." @@ -12090,6 +12305,144 @@ "AWS::DataSync::Task Verified": { "ReportLevel": "Specifies whether you want your task report to include only what went wrong with your transfer or a list of what succeeded and didn't.\n\n- `ERRORS_ONLY` : A report shows what DataSync was unable to verify.\n- `SUCCESSES_AND_ERRORS` : A report shows what DataSync was able and unable to verify." }, + "AWS::DataZone::Connection": { + "AwsLocation": "The location where the connection is created.", + "Description": "Connection description.", + "DomainIdentifier": "The ID of the domain where the connection is created.", + "EnvironmentIdentifier": "The ID of the environment where the connection is created.", + "Name": "The name of the connection.", + "Props": "Connection props." + }, + "AWS::DataZone::Connection AthenaPropertiesInput": { + "WorkgroupName": "The Amazon Athena workgroup name of a connection." + }, + "AWS::DataZone::Connection AuthenticationConfigurationInput": { + "AuthenticationType": "The authentication type of a connection.", + "BasicAuthenticationCredentials": "The basic authentication credentials of a connection.", + "CustomAuthenticationCredentials": "The custom authentication credentials of a connection.", + "KmsKeyArn": "The KMS key ARN of a connection.", + "OAuth2Properties": "The oAuth2 properties of a connection.", + "SecretArn": "The secret ARN of a connection." + }, + "AWS::DataZone::Connection AuthorizationCodeProperties": { + "AuthorizationCode": "The authorization code of a connection.", + "RedirectUri": "The redirect URI of a connection." 
+ }, + "AWS::DataZone::Connection AwsLocation": { + "AccessRole": "The access role of a connection.", + "AwsAccountId": "The account ID of a connection.", + "AwsRegion": "The Region of a connection.", + "IamConnectionId": "The IAM connection ID of a connection." + }, + "AWS::DataZone::Connection BasicAuthenticationCredentials": { + "Password": "The password for a connection.", + "UserName": "The user name for the connection." + }, + "AWS::DataZone::Connection ConnectionPropertiesInput": { + "AthenaProperties": "The Amazon Athena properties of a connection.", + "GlueProperties": "The AWS Glue properties of a connection.", + "HyperPodProperties": "The hyper pod properties of a connection.", + "IamProperties": "The IAM properties of a connection.", + "RedshiftProperties": "The Amazon Redshift properties of a connection.", + "SparkEmrProperties": "The Spark EMR properties of a connection.", + "SparkGlueProperties": "The Spark AWS Glue properties of a connection." + }, + "AWS::DataZone::Connection GlueConnectionInput": { + "AthenaProperties": "The Amazon Athena properties of the AWS Glue connection.", + "AuthenticationConfiguration": "The authentication configuration of the AWS Glue connection.", + "ConnectionProperties": "The connection properties of the AWS Glue connection.", + "ConnectionType": "The connection type of the AWS Glue connection.", + "Description": "The description of the AWS Glue connection.", + "MatchCriteria": "The match criteria of the AWS Glue connection.", + "Name": "The name of the AWS Glue connection.", + "PhysicalConnectionRequirements": "The physical connection requirements for the AWS Glue connection.", + "PythonProperties": "The Python properties of the AWS Glue connection.", + "SparkProperties": "The Spark properties of the AWS Glue connection.", + "ValidateCredentials": "Specifies whether to validate credentials of the AWS Glue connection.", + "ValidateForComputeEnvironments": "Specifies whether to validate for compute environments of the AWS Glue connection." + }, + "AWS::DataZone::Connection GlueOAuth2Credentials": { + "AccessToken": "The access token of a connection.", + "JwtToken": "The JWT token of the connection.", + "RefreshToken": "The refresh token of the connection.", + "UserManagedClientApplicationClientSecret": "The user managed client application client secret of the connection." + }, + "AWS::DataZone::Connection GluePropertiesInput": { + "GlueConnectionInput": "The AWS Glue connection." + }, + "AWS::DataZone::Connection HyperPodPropertiesInput": { + "ClusterName": "The cluster name of the hyper pod properties." + }, + "AWS::DataZone::Connection IamPropertiesInput": { + "GlueLineageSyncEnabled": "Specifies whether AWS Glue lineage sync is enabled for a connection." + }, + "AWS::DataZone::Connection LineageSyncSchedule": { + "Schedule": "The lineage sync schedule." + }, + "AWS::DataZone::Connection OAuth2ClientApplication": { + "AWSManagedClientApplicationReference": "The AWS managed client application reference in the OAuth2Client application.", + "UserManagedClientApplicationClientId": "The user managed client application client ID in the OAuth2Client application."
+ }, + "AWS::DataZone::Connection OAuth2Properties": { + "AuthorizationCodeProperties": "The authorization code properties of the OAuth2 properties.", + "OAuth2ClientApplication": "The OAuth2 client application of the OAuth2 properties.", + "OAuth2Credentials": "The OAuth2 credentials of the OAuth2 properties.", + "OAuth2GrantType": "The OAuth2 grant type of the OAuth2 properties.", + "TokenUrl": "The OAuth2 token URL of the OAuth2 properties.", + "TokenUrlParametersMap": "The OAuth2 token URL parameter map of the OAuth2 properties." + }, + "AWS::DataZone::Connection PhysicalConnectionRequirements": { + "AvailabilityZone": "The availability zone of the physical connection requirements of a connection.", + "SecurityGroupIdList": "The group ID list of the physical connection requirements of a connection.", + "SubnetId": "The subnet ID of the physical connection requirements of a connection.", + "SubnetIdList": "The subnet ID list of the physical connection requirements of a connection." + }, + "AWS::DataZone::Connection RedshiftCredentials": { + "SecretArn": "The secret ARN of the Amazon Redshift credentials of a connection.", + "UsernamePassword": "The username and password of the Amazon Redshift credentials of a connection." + }, + "AWS::DataZone::Connection RedshiftLineageSyncConfigurationInput": { + "Enabled": "Specifies whether the Amazon Redshift lineage sync configuration is enabled.", + "Schedule": "The schedule of the Amazon Redshift lineage sync configuration." + }, + "AWS::DataZone::Connection RedshiftPropertiesInput": { + "Credentials": "The Amazon Redshift credentials.", + "DatabaseName": "The Amazon Redshift database name.", + "Host": "The Amazon Redshift host.", + "LineageSync": "The lineage sync of Amazon Redshift.", + "Port": "The Amazon Redshift port.", + "Storage": "The Amazon Redshift storage." + }, + "AWS::DataZone::Connection RedshiftStorageProperties": { + "ClusterName": "The cluster name in the Amazon Redshift storage properties.", + "WorkgroupName": "The workgroup name in the Amazon Redshift storage properties." + }, + "AWS::DataZone::Connection SparkEmrPropertiesInput": { + "ComputeArn": "The compute ARN of Spark EMR.", + "InstanceProfileArn": "The instance profile ARN of Spark EMR.", + "JavaVirtualEnv": "The Java virtual env of the Spark EMR.", + "LogUri": "The log URI of the Spark EMR.", + "PythonVirtualEnv": "The Python virtual env of the Spark EMR.", + "RuntimeRole": "The runtime role of the Spark EMR.", + "TrustedCertificatesS3Uri": "The certificates S3 URI of the Spark EMR." + }, + "AWS::DataZone::Connection SparkGlueArgs": { + "Connection": "The connection in the Spark AWS Glue args." + }, + "AWS::DataZone::Connection SparkGluePropertiesInput": { + "AdditionalArgs": "The additional args in the Spark AWS Glue properties.", + "GlueConnectionName": "The AWS Glue connection name in the Spark AWS Glue properties.", + "GlueVersion": "The AWS Glue version in the Spark AWS Glue properties.", + "IdleTimeout": "The idle timeout in the Spark AWS Glue properties.", + "JavaVirtualEnv": "The Java virtual env in the Spark AWS Glue properties.", + "NumberOfWorkers": "The number of workers in the Spark AWS Glue properties.", + "PythonVirtualEnv": "The Python virtual env in the Spark AWS Glue properties.", + "WorkerType": "The worker type in the Spark AWS Glue properties." + }, + "AWS::DataZone::Connection UsernamePassword": { + "Password": "The password of a connection.", + "Username": "The username of a connection."
+ }, "AWS::DataZone::DataSource": { "AssetFormsInput": "The metadata forms attached to the assets that the data source works with.", "Configuration": "The configuration of the data source.", @@ -12103,7 +12456,7 @@ "PublishOnImport": "Specifies whether the assets that this data source creates in the inventory are to be also automatically published to the catalog.", "Recommendation": "Specifies whether the business name generation is to be enabled for this data source.", "Schedule": "The schedule of the data source runs.", - "Type": "The type of the data source." + "Type": "The type of the data source. In Amazon DataZone, you can use data sources to import technical metadata of assets (data) from the source databases or data warehouses into Amazon DataZone. In the current release of Amazon DataZone, you can create and run data sources for AWS Glue and Amazon Redshift." }, "AWS::DataZone::DataSource DataSourceConfigurationInput": { "GlueRunConfiguration": "The configuration of the AWS Glue data source.", @@ -12163,8 +12516,10 @@ "AWS::DataZone::Domain": { "Description": "The description of the Amazon DataZone domain.", "DomainExecutionRole": "The domain execution role that is created when an Amazon DataZone domain is created. The domain execution role is created in the AWS account that houses the Amazon DataZone domain.", + "DomainVersion": "The domain version.", "KmsKeyIdentifier": "The identifier of the AWS Key Management Service (KMS) key that is used to encrypt the Amazon DataZone domain, metadata, and reporting data.", "Name": "The name of the Amazon DataZone domain.", + "ServiceRole": "", "SingleSignOn": "The single sign-on details in Amazon DataZone.", "Tags": "The tags specified for the Amazon DataZone domain." }, @@ -12193,12 +12548,12 @@ "Value": "The value of the environment parameter." }, "AWS::DataZone::EnvironmentActions": { - "Description": "", + "Description": "The environment action description.", "DomainIdentifier": "The Amazon DataZone domain ID of the environment action.", "EnvironmentIdentifier": "The environment ID of the environment action.", "Identifier": "The ID of the environment action.", "Name": "The name of the environment action.", - "Parameters": "" + "Parameters": "The parameters of the environment action." }, "AWS::DataZone::EnvironmentActions AwsConsoleLinkParameters": { "Uri": "The URI of the console link specified as part of the environment action." @@ -12477,12 +12832,12 @@ "Type": "The type of file." }, "AWS::Detective::Graph": { - "AutoEnableMembers": "Indicates whether to automatically enable new organization accounts as member accounts in the organization behavior graph.\n\nBy default, this property is set to `false` . If you want to change the value of this property, you must be the Detective administrator for the organization. For more information on setting a Detective administrator account, see [AWS::Detective::OrganizationAdmin](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-detective-organizationadmin.html)", + "AutoEnableMembers": "Indicates whether to automatically enable new organization accounts as member accounts in the organization behavior graph.\n\nBy default, this property is set to `false` . If you want to change the value of this property, you must be the Detective administrator for the organization. 
For more information on setting a Detective administrator account, see [AWS::Detective::OrganizationAdmin](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-detective-organizationadmin.html) .", "Tags": "The tag values to assign to the new behavior graph." }, "AWS::Detective::Graph Tag": { - "Key": "", - "Value": "" + "Key": "One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.", + "Value": "The optional part of a key-value pair that makes up a tag. A value acts as a descriptor in a tag category (key)." }, "AWS::Detective::MemberInvitation": { "DisableEmailNotification": "Whether to send an invitation email to the member account. If set to true, the member account does not receive an invitation email.", @@ -12992,6 +13347,14 @@ "Tenancy": "Indicates the tenancy of the Capacity Reservation. A Capacity Reservation can have one of the following tenancy settings:\n\n- `default` - The Capacity Reservation is created on hardware that is shared with other AWS accounts .\n- `dedicated` - The Capacity Reservation is created on single-tenant hardware that is dedicated to a single AWS account .", "UnusedReservationBillingOwnerId": "The ID of the AWS account to which to assign billing of the unused capacity of the Capacity Reservation. A request will be sent to the specified account. That account must accept the request for the billing to be assigned to their account. For more information, see [Billing assignment for shared Amazon EC2 Capacity Reservations](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/assign-billing.html) .\n\nYou can assign billing only for shared Capacity Reservations. To share a Capacity Reservation, you must add it to a resource share. For more information, see [AWS::RAM::ResourceShare](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ram-resourceshare.html) ." }, + "AWS::EC2::CapacityReservation CapacityAllocation": { + "AllocationType": "The usage type. `used` indicates that the instance capacity is in use by instances that are running in the Capacity Reservation.", + "Count": "The amount of instance capacity associated with the usage. For example a value of `4` indicates that instance capacity for 4 instances is currently in use." + }, + "AWS::EC2::CapacityReservation CommitmentInfo": { + "CommitmentEndDate": "The date and time at which the commitment duration expires, in the ISO8601 format in the UTC time zone ( `YYYY-MM-DDThh:mm:ss.sssZ` ). You can't decrease the instance count or cancel the Capacity Reservation before this date and time.", + "CommittedInstanceCount": "The instance capacity that you committed to when you requested the future-dated Capacity Reservation." + }, "AWS::EC2::CapacityReservation Tag": { "Key": "The tag key.", "Value": "The tag value." 
@@ -13198,7 +13561,7 @@ "AcceleratorManufacturers": "Indicates whether instance types must have accelerators by specific manufacturers.\n\n- For instance types with AWS devices, specify `amazon-web-services` .\n- For instance types with AMD devices, specify `amd` .\n- For instance types with Habana devices, specify `habana` .\n- For instance types with NVIDIA devices, specify `nvidia` .\n- For instance types with Xilinx devices, specify `xilinx` .\n\nDefault: Any manufacturer", "AcceleratorNames": "The accelerators that must be on the instance type.\n\n- For instance types with NVIDIA A10G GPUs, specify `a10g` .\n- For instance types with NVIDIA A100 GPUs, specify `a100` .\n- For instance types with NVIDIA H100 GPUs, specify `h100` .\n- For instance types with AWS Inferentia chips, specify `inferentia` .\n- For instance types with NVIDIA GRID K520 GPUs, specify `k520` .\n- For instance types with NVIDIA K80 GPUs, specify `k80` .\n- For instance types with NVIDIA M60 GPUs, specify `m60` .\n- For instance types with AMD Radeon Pro V520 GPUs, specify `radeon-pro-v520` .\n- For instance types with NVIDIA T4 GPUs, specify `t4` .\n- For instance types with NVIDIA T4G GPUs, specify `t4g` .\n- For instance types with Xilinx VU9P FPGAs, specify `vu9p` .\n- For instance types with NVIDIA V100 GPUs, specify `v100` .\n\nDefault: Any accelerator", "AcceleratorTotalMemoryMiB": "The minimum and maximum amount of total accelerator memory, in MiB.\n\nDefault: No minimum or maximum limits", - "AcceleratorTypes": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", + "AcceleratorTypes": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", "AllowedInstanceTypes": "The instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes.\n\nYou can use strings with one or more wild cards, represented by an asterisk ( `*` ), to allow an instance type, size, or generation. The following are examples: `m5.8xlarge` , `c5*.*` , `m5a.*` , `r*` , `*3*` .\n\nFor example, if you specify `c5*` ,Amazon EC2 will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*` , Amazon EC2 will allow all the M5a instance types, but not the M5n instance types.\n\n> If you specify `AllowedInstanceTypes` , you can't specify `ExcludedInstanceTypes` . \n\nDefault: All instance types", "BareMetal": "Indicates whether bare metal instance types must be included, excluded, or required.\n\n- To include bare metal instance types, specify `included` .\n- To require only bare metal instance types, specify `required` .\n- To exclude bare metal instance types, specify `excluded` .\n\nDefault: `excluded`", "BaselineEbsBandwidthMbps": "The minimum and maximum baseline bandwidth to Amazon EBS, in Mbps. 
For more information, see [Amazon EBS\u2013optimized instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html) in the *Amazon EC2 User Guide* .\n\nDefault: No minimum or maximum limits", @@ -13358,7 +13721,7 @@ "OutpostArn": "The Amazon Resource Name (ARN) of the AWS Outpost on which the Dedicated Host is allocated." }, "AWS::EC2::IPAM": { - "DefaultResourceDiscoveryOrganizationalUnitExclusions": "", + "DefaultResourceDiscoveryOrganizationalUnitExclusions": "If your IPAM is integrated with AWS Organizations, you can exclude an [organizational unit (OU)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html#organizationalunit) from being managed by IPAM. When you exclude an OU, IPAM will not manage the IP addresses in accounts in that OU. For more information, see [Exclude organizational units from IPAM](https://docs.aws.amazon.com/vpc/latest/ipam/exclude-ous.html) in the *Amazon Virtual Private Cloud IP Address Manager User Guide* .", "Description": "The description for the IPAM.", "EnablePrivateGua": "Enable this option to use your own GUA ranges as private IPv6 addresses. This option is disabled by default.", "OperatingRegions": "The operating Regions for an IPAM. Operating Regions are AWS Regions where the IPAM is allowed to manage IP address CIDRs. IPAM only discovers and monitors resources in the AWS Regions you select as operating Regions.\n\nFor more information about operating Regions, see [Create an IPAM](https://docs.aws.amazon.com//vpc/latest/ipam/create-ipam.html) in the *Amazon VPC IPAM User Guide* .", @@ -13420,14 +13783,14 @@ "AWS::EC2::IPAMResourceDiscovery": { "Description": "The resource discovery description.", "OperatingRegions": "The operating Regions for the resource discovery. Operating Regions are AWS Regions where the IPAM is allowed to manage IP address CIDRs. IPAM only discovers and monitors resources in the AWS Regions you select as operating Regions.", - "OrganizationalUnitExclusions": "", + "OrganizationalUnitExclusions": "If your IPAM is integrated with AWS Organizations, you can exclude an [organizational unit (OU)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html#organizationalunit) from being managed by IPAM. When you exclude an OU, IPAM will not manage the IP addresses in accounts in that OU. For more information, see [Exclude organizational units from IPAM](https://docs.aws.amazon.com/vpc/latest/ipam/exclude-ous.html) in the *Amazon Virtual Private Cloud IP Address Manager User Guide* .", "Tags": "A tag is a label that you assign to an AWS resource. Each tag consists of a key and an optional value. You can use tags to search and filter your resources or track your AWS costs." }, "AWS::EC2::IPAMResourceDiscovery IpamOperatingRegion": { "RegionName": "The name of the operating Region." }, "AWS::EC2::IPAMResourceDiscovery IpamResourceDiscoveryOrganizationalUnitExclusion": { - "OrganizationsEntityPath": "" + "OrganizationsEntityPath": "An AWS Organizations entity path. For more information on the entity path, see [Understand the AWS Organizations entity path](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_last-accessed-view-data-orgs.html#access_policies_access-advisor-viewing-orgs-entity-path) in the *AWS Identity and Access Management User Guide* ." 
}, "AWS::EC2::IPAMResourceDiscovery Tag": { "Key": "The tag key.", @@ -13702,7 +14065,7 @@ "AcceleratorManufacturers": "Indicates whether instance types must have accelerators by specific manufacturers.\n\n- For instance types with AWS devices, specify `amazon-web-services` .\n- For instance types with AMD devices, specify `amd` .\n- For instance types with Habana devices, specify `habana` .\n- For instance types with NVIDIA devices, specify `nvidia` .\n- For instance types with Xilinx devices, specify `xilinx` .\n\nDefault: Any manufacturer", "AcceleratorNames": "The accelerators that must be on the instance type.\n\n- For instance types with NVIDIA A10G GPUs, specify `a10g` .\n- For instance types with NVIDIA A100 GPUs, specify `a100` .\n- For instance types with NVIDIA H100 GPUs, specify `h100` .\n- For instance types with AWS Inferentia chips, specify `inferentia` .\n- For instance types with NVIDIA GRID K520 GPUs, specify `k520` .\n- For instance types with NVIDIA K80 GPUs, specify `k80` .\n- For instance types with NVIDIA M60 GPUs, specify `m60` .\n- For instance types with AMD Radeon Pro V520 GPUs, specify `radeon-pro-v520` .\n- For instance types with NVIDIA T4 GPUs, specify `t4` .\n- For instance types with NVIDIA T4G GPUs, specify `t4g` .\n- For instance types with Xilinx VU9P FPGAs, specify `vu9p` .\n- For instance types with NVIDIA V100 GPUs, specify `v100` .\n\nDefault: Any accelerator", "AcceleratorTotalMemoryMiB": "The minimum and maximum amount of total accelerator memory, in MiB.\n\nDefault: No minimum or maximum limits", - "AcceleratorTypes": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", + "AcceleratorTypes": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", "AllowedInstanceTypes": "The instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes.\n\nYou can use strings with one or more wild cards, represented by an asterisk ( `*` ), to allow an instance type, size, or generation. The following are examples: `m5.8xlarge` , `c5*.*` , `m5a.*` , `r*` , `*3*` .\n\nFor example, if you specify `c5*` ,Amazon EC2 will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*` , Amazon EC2 will allow all the M5a instance types, but not the M5n instance types.\n\n> If you specify `AllowedInstanceTypes` , you can't specify `ExcludedInstanceTypes` . \n\nDefault: All instance types", "BareMetal": "Indicates whether bare metal instance types must be included, excluded, or required.\n\n- To include bare metal instance types, specify `included` .\n- To require only bare metal instance types, specify `required` .\n- To exclude bare metal instance types, specify `excluded` .\n\nDefault: `excluded`", "BaselineEbsBandwidthMbps": "The minimum and maximum baseline bandwidth to Amazon EBS, in Mbps. 
For more information, see [Amazon EBS\u2013optimized instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html) in the *Amazon EC2 User Guide* .\n\nDefault: No minimum or maximum limits", @@ -13741,8 +14104,8 @@ "DisableApiStop": "Indicates whether to enable the instance for stop protection. For more information, see [Enable stop protection for your EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-stop-protection.html) in the *Amazon EC2 User Guide* .", "DisableApiTermination": "Indicates whether termination protection is enabled for the instance. The default is `false` , which means that you can terminate the instance using the Amazon EC2 console, command line tools, or API. You can enable termination protection when you launch an instance, while the instance is running, or while the instance is stopped.", "EbsOptimized": "Indicates whether the instance is optimized for Amazon EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal Amazon EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS-optimized instance.", - "ElasticGpuSpecifications": "Deprecated.\n\n> Amazon Elastic Graphics reached end of life on January 8, 2024. For workloads that require graphics acceleration, we recommend that you use Amazon EC2 G4ad, G4dn, or G5 instances.", - "ElasticInferenceAccelerators": "> Amazon Elastic Inference is no longer available. \n\nAn elastic inference accelerator to associate with the instance. Elastic inference accelerators are a resource you can attach to your Amazon EC2 instances to accelerate your Deep Learning (DL) inference workloads.\n\nYou cannot specify accelerators from different generations in the same request.\n\n> Starting April 15, 2023, AWS will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.", + "ElasticGpuSpecifications": "Deprecated.\n\n> Amazon Elastic Graphics reached end of life on January 8, 2024.", + "ElasticInferenceAccelerators": "> Amazon Elastic Inference is no longer available. \n\nAn elastic inference accelerator to associate with the instance. Elastic inference accelerators are a resource you can attach to your Amazon EC2 instances to accelerate your Deep Learning (DL) inference workloads.\n\nYou cannot specify accelerators from different generations in the same request.", "EnclaveOptions": "Indicates whether the instance is enabled for AWS Nitro Enclaves. For more information, see [What is Nitro Enclaves?](https://docs.aws.amazon.com/enclaves/latest/user/nitro-enclave.html) in the *AWS Nitro Enclaves User Guide* .\n\nYou can't enable AWS Nitro Enclaves and hibernation on the same instance.", "HibernationOptions": "Indicates whether an instance is enabled for hibernation. This parameter is valid only if the instance meets the [hibernation prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/hibernating-prerequisites.html) . 
For more information, see [Hibernate your Amazon EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html) in the *Amazon EC2 User Guide* .", "IamInstanceProfile": "The name or Amazon Resource Name (ARN) of an IAM instance profile.", @@ -13758,7 +14121,7 @@ "MetadataOptions": "The metadata options for the instance. For more information, see [Configure the Instance Metadata Service options](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-options.html) in the *Amazon EC2 User Guide* .", "Monitoring": "The monitoring for the instance.", "NetworkInterfaces": "The network interfaces for the instance.", - "NetworkPerformanceOptions": "", + "NetworkPerformanceOptions": "The settings for the network performance options for the instance. For more information, see [EC2 instance bandwidth weighting configuration](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configure-bandwidth-weighting.html) .", "Placement": "The placement for the instance.", "PrivateDnsNameOptions": "The hostname type for EC2 instances launched into this subnet and how DNS A and AAAA record queries should be handled. For more information, see [Amazon EC2 instance hostname types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-naming.html) in the *Amazon Elastic Compute Cloud User Guide* .", "RamDiskId": "The ID of the RAM disk.\n\n> We recommend that you use PV-GRUB instead of kernels and RAM disks. For more information, see [User provided kernels](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UserProvidedkernels.html) in the *Amazon EC2 User Guide* .", @@ -13831,6 +14194,9 @@ "Max": "The maximum number of network interfaces. To specify no maximum limit, omit this parameter.", "Min": "The minimum number of network interfaces. To specify no minimum limit, omit this parameter." }, + "AWS::EC2::LaunchTemplate NetworkPerformanceOptions": { + "BandwidthWeighting": "Specify the bandwidth weighting option to boost the associated type of baseline bandwidth, as follows:\n\n- **default** - This option uses the standard bandwidth configuration for your instance type.\n- **vpc-1** - This option boosts your networking baseline bandwidth and reduces your EBS baseline bandwidth.\n- **ebs-1** - This option boosts your EBS baseline bandwidth and reduces your networking baseline bandwidth." + }, "AWS::EC2::LaunchTemplate Placement": { "Affinity": "The affinity setting for an instance on a Dedicated Host.", "AvailabilityZone": "The Availability Zone for the instance.", @@ -14292,7 +14658,7 @@ }, "AWS::EC2::SecurityGroup": { "GroupDescription": "A description for the security group.\n\nConstraints: Up to 255 characters in length\n\nValid characters: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*", - "GroupName": "The name of the security group.\n\nConstraints: Up to 255 characters in length. Cannot start with `sg-` .\n\nValid characters: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*", + "GroupName": "The name of the security group. Names are case-insensitive and must be unique within the VPC.\n\nConstraints: Up to 255 characters in length. Can't start with `sg-` .\n\nValid characters: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*", "SecurityGroupEgress": "The outbound rules associated with the security group. There is a short interruption during which you cannot connect to the security group.", "SecurityGroupIngress": "The inbound rules associated with the security group. 
There is a short interruption during which you cannot connect to the security group.", "Tags": "Any tags assigned to the security group.", @@ -14429,7 +14795,7 @@ "AcceleratorManufacturers": "Indicates whether instance types must have accelerators by specific manufacturers.\n\n- For instance types with AWS devices, specify `amazon-web-services` .\n- For instance types with AMD devices, specify `amd` .\n- For instance types with Habana devices, specify `habana` .\n- For instance types with NVIDIA devices, specify `nvidia` .\n- For instance types with Xilinx devices, specify `xilinx` .\n\nDefault: Any manufacturer", "AcceleratorNames": "The accelerators that must be on the instance type.\n\n- For instance types with NVIDIA A10G GPUs, specify `a10g` .\n- For instance types with NVIDIA A100 GPUs, specify `a100` .\n- For instance types with NVIDIA H100 GPUs, specify `h100` .\n- For instance types with AWS Inferentia chips, specify `inferentia` .\n- For instance types with NVIDIA GRID K520 GPUs, specify `k520` .\n- For instance types with NVIDIA K80 GPUs, specify `k80` .\n- For instance types with NVIDIA M60 GPUs, specify `m60` .\n- For instance types with AMD Radeon Pro V520 GPUs, specify `radeon-pro-v520` .\n- For instance types with NVIDIA T4 GPUs, specify `t4` .\n- For instance types with NVIDIA T4G GPUs, specify `t4g` .\n- For instance types with Xilinx VU9P FPGAs, specify `vu9p` .\n- For instance types with NVIDIA V100 GPUs, specify `v100` .\n\nDefault: Any accelerator", "AcceleratorTotalMemoryMiB": "The minimum and maximum amount of total accelerator memory, in MiB.\n\nDefault: No minimum or maximum limits", - "AcceleratorTypes": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", + "AcceleratorTypes": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", "AllowedInstanceTypes": "The instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes.\n\nYou can use strings with one or more wild cards, represented by an asterisk ( `*` ), to allow an instance type, size, or generation. The following are examples: `m5.8xlarge` , `c5*.*` , `m5a.*` , `r*` , `*3*` .\n\nFor example, if you specify `c5*` ,Amazon EC2 will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*` , Amazon EC2 will allow all the M5a instance types, but not the M5n instance types.\n\n> If you specify `AllowedInstanceTypes` , you can't specify `ExcludedInstanceTypes` . \n\nDefault: All instance types", "BareMetal": "Indicates whether bare metal instance types must be included, excluded, or required.\n\n- To include bare metal instance types, specify `included` .\n- To require only bare metal instance types, specify `required` .\n- To exclude bare metal instance types, specify `excluded` .\n\nDefault: `excluded`", "BaselineEbsBandwidthMbps": "The minimum and maximum baseline bandwidth to Amazon EBS, in Mbps. 
For more information, see [Amazon EBS\u2013optimized instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html) in the *Amazon EC2 User Guide* .\n\nDefault: No minimum or maximum limits", @@ -14880,6 +15246,8 @@ "GatewayLoadBalancerArns": "The Amazon Resource Names (ARNs) of the Gateway Load Balancers.", "NetworkLoadBalancerArns": "The Amazon Resource Names (ARNs) of the Network Load Balancers.", "PayerResponsibility": "The entity that is responsible for the endpoint costs. The default is the endpoint owner. If you set the payer responsibility to the service owner, you cannot set it back to the endpoint owner.", + "SupportedIpAddressTypes": "The supported IP address types. The possible values are `ipv4` and `ipv6` .", + "SupportedRegions": "The Regions from which service consumers can access the service.", "Tags": "The tags to associate with the service." }, "AWS::EC2::VPCEndpointService Tag": { @@ -15196,9 +15564,11 @@ }, "AWS::ECR::PullThroughCacheRule": { "CredentialArn": "The ARN of the Secrets Manager secret associated with the pull through cache rule.", + "CustomRoleArn": "The ARN of the IAM role associated with the pull through cache rule.", "EcrRepositoryPrefix": "The Amazon ECR repository prefix associated with the pull through cache rule.", "UpstreamRegistry": "The name of the upstream source registry associated with the pull through cache rule.", - "UpstreamRegistryUrl": "The upstream registry URL associated with the pull through cache rule." + "UpstreamRegistryUrl": "The upstream registry URL associated with the pull through cache rule.", + "UpstreamRepositoryPrefix": "The upstream repository prefix associated with the pull through cache rule." }, "AWS::ECR::RegistryPolicy": { "PolicyText": "The JSON policy text for your registry." @@ -15254,7 +15624,7 @@ "ImageTagMutability": "The tag mutability setting for the repository. If this parameter is omitted, the default setting of MUTABLE will be used which will allow image tags to be overwritten. If IMMUTABLE is specified, all image tags within the repository will be immutable which will prevent them from being overwritten.", "LifecyclePolicy": "The lifecycle policy to use for repositories created using the template.", "Prefix": "The repository namespace prefix associated with the repository creation template.", - "RepositoryPolicy": "he repository policy to apply to repositories created using the template. A repository policy is a permissions policy associated with a repository to control access permissions.", + "RepositoryPolicy": "The repository policy to apply to repositories created using the template. A repository policy is a permissions policy associated with a repository to control access permissions.", "ResourceTags": "The metadata to apply to the repository to help you categorize and organize. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters." }, "AWS::ECR::RepositoryCreationTemplate EncryptionConfiguration": { @@ -15375,7 +15745,7 @@ "VpcLatticeConfigurations": "The VPC Lattice configuration for the service being created." 
}, "AWS::ECS::Service AwsVpcConfiguration": { - "AssignPublicIp": "Whether the task's elastic network interface receives a public IP address.\n\nConsider the following when you set this value:\n\n- When you use `create-service` or `update-service` , the default is `DISABLED` .\n- When the service `deploymentController` is `ECS` , the value must be `DISABLED` .\n- When you use `create-service` or `update-service` , the default is `ENABLED` .", + "AssignPublicIp": "Whether the task's elastic network interface receives a public IP address.\n\nConsider the following when you set this value:\n\n- When you use `create-service` or `update-service` , the default is `DISABLED` .\n- When the service `deploymentController` is `ECS` , the value must be `DISABLED` .", "SecurityGroups": "The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.\n\n> All specified security groups must be from the same VPC.", "Subnets": "The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.\n\n> All specified subnets must be from the same VPC." }, @@ -15732,7 +16102,7 @@ "TaskDefinition": "The task definition for the tasks in the task set to use. If a revision isn't specified, the latest `ACTIVE` revision is used." }, "AWS::ECS::TaskSet AwsVpcConfiguration": { - "AssignPublicIp": "Whether the task's elastic network interface receives a public IP address.\n\nConsider the following when you set this value:\n\n- When you use `create-service` or `update-service` , the default is `DISABLED` .\n- When the service `deploymentController` is `ECS` , the value must be `DISABLED` .\n- When you use `create-service` or `update-service` , the default is `ENABLED` .", + "AssignPublicIp": "Whether the task's elastic network interface receives a public IP address.\n\nConsider the following when you set this value:\n\n- When you use `create-service` or `update-service` , the default is `DISABLED` .\n- When the service `deploymentController` is `ECS` , the value must be `DISABLED` .", "SecurityGroups": "The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.\n\n> All specified security groups must be from the same VPC.", "Subnets": "The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.\n\n> All specified subnets must be from the same VPC." }, @@ -16147,7 +16517,7 @@ }, "AWS::EMR::Cluster EbsConfiguration": { "EbsBlockDeviceConfigs": "An array of Amazon EBS volume specifications attached to a cluster instance.", - "EbsOptimized": "Indicates whether an Amazon EBS volume is EBS-optimized." + "EbsOptimized": "Indicates whether an Amazon EBS volume is EBS-optimized. The default is false. You should explicitly set this value to true to enable the Amazon EBS-optimized setting for an EC2 instance." }, "AWS::EMR::Cluster HadoopJarStepConfig": { "Args": "A list of command line arguments passed to the JAR file's main function when executed.", @@ -16324,7 +16694,7 @@ }, "AWS::EMR::InstanceFleetConfig EbsConfiguration": { "EbsBlockDeviceConfigs": "An array of Amazon EBS volume specifications attached to a cluster instance.", - "EbsOptimized": "Indicates whether an Amazon EBS volume is EBS-optimized." 
+ "EbsOptimized": "Indicates whether an Amazon EBS volume is EBS-optimized. The default is false. You should explicitly set this value to true to enable the Amazon EBS-optimized setting for an EC2 instance." }, "AWS::EMR::InstanceFleetConfig InstanceFleetProvisioningSpecifications": { "OnDemandSpecification": "The launch specification for On-Demand Instances in the instance fleet, which determines the allocation strategy and capacity reservation options.\n\n> The instance fleet configuration is available only in Amazon EMR releases 4.8.0 and later, excluding 5.0.x versions. On-Demand Instances allocation strategy is available in Amazon EMR releases 5.12.1 and later.", @@ -16413,7 +16783,7 @@ }, "AWS::EMR::InstanceGroupConfig EbsConfiguration": { "EbsBlockDeviceConfigs": "An array of Amazon EBS volume specifications attached to a cluster instance.", - "EbsOptimized": "Indicates whether an Amazon EBS volume is EBS-optimized." + "EbsOptimized": "Indicates whether an Amazon EBS volume is EBS-optimized. The default is false. You should explicitly set this value to true to enable the Amazon EBS-optimized setting for an EC2 instance." }, "AWS::EMR::InstanceGroupConfig MetricDimension": { "Key": "The dimension name.", @@ -16698,7 +17068,7 @@ "PreferredAvailabilityZones": "A list of preferred availability zones for the nodes in this cluster." }, "AWS::ElastiCache::ParameterGroup": { - "CacheParameterGroupFamily": "The name of the cache parameter group family that this cache parameter group is compatible with.\n\nValid values are: `memcached1.4` | `memcached1.5` | `memcached1.6` | `redis2.6` | `redis2.8` | `redis3.2` | `redis4.0` | `redis5.0` | `redis6.x` | `redis7`", + "CacheParameterGroupFamily": "The name of the cache parameter group family that this cache parameter group is compatible with.\n\nValid values are: `valkey8` | `valkey7` | `memcached1.4` | `memcached1.5` | `memcached1.6` | `redis2.6` | `redis2.8` | `redis3.2` | `redis4.0` | `redis5.0` | `redis6.x` | `redis7`", "Description": "The description for this cache parameter group.", "Properties": "A comma-delimited list of parameter name/value pairs.\n\nFor example:\n\n```\n\"Properties\" : { \"cas_disabled\" : \"1\", \"chunk_size_growth_factor\" : \"1.02\"\n}\n```", "Tags": "A tag that can be added to an ElastiCache parameter group. Tags are composed of a Key/Value pair. You can use tags to categorize and track all your parameter groups. A tag with a null Value is permitted." @@ -17072,9 +17442,9 @@ "Value": "The value of the attribute." }, "AWS::ElasticLoadBalancingV2::Listener MutualAuthentication": { - "AdvertiseTrustStoreCaNames": "Indicates whether trust store CA certificate names are advertised. The default value is `off` .", + "AdvertiseTrustStoreCaNames": "Indicates whether trust store CA certificate names are advertised.", "IgnoreClientCertificateExpiry": "Indicates whether expired client certificates are ignored.", - "Mode": "The client certificate handling method. The possible values are `off` , `passthrough` , and `verify` . The default value is `off` .", + "Mode": "The client certificate handling method. Options are `off` , `passthrough` or `verify` . The default value is `off` .", "TrustStoreArn": "The Amazon Resource Name (ARN) of the trust store." }, "AWS::ElasticLoadBalancingV2::Listener RedirectConfig": { @@ -17202,6 +17572,7 @@ "EnablePrefixForIpv6SourceNat": "[Network Load Balancers with UDP listeners] Indicates whether to use an IPv6 prefix from each subnet for source NAT. The IP address type must be `dualstack` . 
The default value is `off` .", "EnforceSecurityGroupInboundRulesOnPrivateLinkTraffic": "Indicates whether to evaluate inbound security group rules for traffic sent to a Network Load Balancer through AWS PrivateLink . The default is `on` .", "IpAddressType": "The IP address type. Internal load balancers must use `ipv4` .\n\n[Application Load Balancers] The possible values are `ipv4` (IPv4 addresses), `dualstack` (IPv4 and IPv6 addresses), and `dualstack-without-public-ipv4` (public IPv6 addresses and private IPv4 and IPv6 addresses).\n\nApplication Load Balancer authentication supports IPv4 addresses only when connecting to an Identity Provider (IdP) or Amazon Cognito endpoint. Without a public IPv4 address the load balancer can't complete the authentication process, resulting in HTTP 500 errors.\n\n[Network Load Balancers and Gateway Load Balancers] The possible values are `ipv4` (IPv4 addresses) and `dualstack` (IPv4 and IPv6 addresses).", + "Ipv4IpamPoolId": "", "LoadBalancerAttributes": "The load balancer attributes.", "MinimumLoadBalancerCapacity": "The minimum capacity for a load balancer.", "Name": "The name of the load balancer. This name must be unique per region per account, can have a maximum of 32 characters, must contain only alphanumeric characters or hyphens, must not begin or end with a hyphen, and must not begin with \"internal-\".\n\nIf you don't specify a name, AWS CloudFormation generates a unique physical ID for the load balancer. If you specify a name, you cannot perform updates that require replacement of this resource, but you can perform other updates. To replace the resource, specify a new name.", @@ -18364,7 +18735,7 @@ "VolumeStyle": "Use to specify the style of an ONTAP volume. FSx for ONTAP offers two styles of volumes that you can use for different purposes, FlexVol and FlexGroup volumes. For more information, see [Volume styles](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/managing-volumes.html#volume-styles) in the Amazon FSx for NetApp ONTAP User Guide." }, "AWS::FSx::Volume OpenZFSConfiguration": { - "CopyTagsToSnapshots": "A Boolean value indicating whether tags for the volume should be copied to snapshots. This value defaults to `false` . If it's set to `true` , all tags for the volume are copied to snapshots where the user doesn't specify tags. If this value is `true` , and you specify one or more tags, only the specified tags are copied to snapshots. If you specify one or more tags when creating the snapshot, no tags are copied from the volume, regardless of this value.", + "CopyTagsToSnapshots": "A Boolean value indicating whether tags for the volume should be copied to snapshots. This value defaults to `false` . If this value is set to `true` , and you do not specify any tags, all tags for the original volume are copied over to snapshots. If this value is\u00a0set to `true` , and you do specify one or more tags, only the specified tags for the original volume are copied over to snapshots. If you specify one or more tags when creating a new snapshot, no tags are copied over from the original volume, regardless of this value.", "DataCompressionType": "Specifies the method used to compress the data on the volume. The compression type is `NONE` by default.\n\n- `NONE` - Doesn't compress the data on the volume. `NONE` is the default.\n- `ZSTD` - Compresses the data in the volume using the Zstandard (ZSTD) compression algorithm. 
Compared to LZ4, Z-Standard provides a better compression ratio to minimize on-disk storage utilization.\n- `LZ4` - Compresses the data in the volume using the LZ4 compression algorithm. Compared to Z-Standard, LZ4 is less compute-intensive and delivers higher write throughput speeds.", "NfsExports": "The configuration object for mounting a Network File System (NFS) file system.", "Options": "To delete the volume's child volumes, snapshots, and clones, use the string `DELETE_CHILD_VOLUMES_AND_SNAPSHOTS` .", @@ -18662,9 +19033,9 @@ }, "AWS::GameLift::Build": { "Name": "A descriptive label that is associated with a build. Build names do not need to be unique.", - "OperatingSystem": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use Amazon GameLift server SDK 4.x., first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to Amazon GameLift server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", - "ServerSdkVersion": "A server SDK version you used when integrating your game server build with Amazon GameLift. For more information see [Integrate games with custom game servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/integration-custom-intro.html) . By default Amazon GameLift sets this value to `4.0.2` .", - "StorageLocation": "Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an Amazon S3 bucket that you own. The storage location must specify an Amazon S3 bucket name and key. The location must also specify a role ARN that you set up to allow Amazon GameLift to access your Amazon S3 bucket. The S3 bucket and your new build must be in the same Region.\n\nIf a `StorageLocation` is specified, the size of your file can be found in your Amazon S3 bucket. Amazon GameLift will report a `SizeOnDisk` of 0.", + "OperatingSystem": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", + "ServerSdkVersion": "A server SDK version you used when integrating your game server build with Amazon GameLift Servers. 
For more information see [Integrate games with custom game servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/integration-custom-intro.html) . By default Amazon GameLift Servers sets this value to `4.0.2` .", + "StorageLocation": "Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an Amazon S3 bucket that you own. The storage location must specify an Amazon S3 bucket name and key. The location must also specify a role ARN that you set up to allow Amazon GameLift Servers to access your Amazon S3 bucket. The S3 bucket and your new build must be in the same Region.\n\nIf a `StorageLocation` is specified, the size of your file can be found in your Amazon S3 bucket. Amazon GameLift Servers will report a `SizeOnDisk` of 0.", "Version": "Version information that is associated with this build. Version strings do not need to be unique." }, "AWS::GameLift::Build StorageLocation": { @@ -18677,7 +19048,7 @@ "BillingType": "Indicates whether the fleet uses On-Demand or Spot instances for this fleet. Learn more about when to use [On-Demand versus Spot Instances](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-ec2-instances.html#gamelift-ec2-instances-spot) . You can't update this fleet property.\n\nBy default, this property is set to `ON_DEMAND` .", "DeploymentConfiguration": "Set of rules for processing a deployment for a container fleet update.", "Description": "A meaningful description of the container fleet.", - "FleetRoleArn": "The unique identifier for an AWS Identity and Access Management (IAM) role with permissions to run your containers on resources that are managed by Amazon GameLift. See [Set up an IAM service role](https://docs.aws.amazon.com/gamelift/latest/developerguide/setting-up-role.html) . This fleet property can't be changed.", + "FleetRoleArn": "The unique identifier for an AWS Identity and Access Management (IAM) role with permissions to run your containers on resources that are managed by Amazon GameLift Servers. See [Set up an IAM service role](https://docs.aws.amazon.com/gamelift/latest/developerguide/setting-up-role.html) . This fleet property can't be changed.", "GameServerContainerGroupDefinitionName": "The name of the fleet's game server container group definition, which describes how to deploy containers with your game server build and support software onto each fleet instance.", "GameServerContainerGroupsPerInstance": "The number of times to replicate the game server container group on each fleet instance.", "GameSessionCreationLimitPolicy": "A policy that limits the number of game sessions that each individual player can create on instances in this fleet. The limit applies for a specified span of time.", @@ -18685,9 +19056,9 @@ "InstanceInboundPermissions": "The IP address ranges and port settings that allow inbound traffic to access game server processes and other processes on this fleet.", "InstanceType": "The Amazon EC2 instance type to use for all instances in the fleet. Instance type determines the computing resources and processing power that's available to host your game servers. This includes including CPU, memory, storage, and networking capacity. You can't update this fleet property.", "Locations": "", - "LogConfiguration": "The method that is used to collect container logs for the fleet. 
Amazon GameLift saves all standard output for each container in logs, including game session logs.\n\n- `CLOUDWATCH` -- Send logs to an Amazon CloudWatch log group that you define. Each container emits a log stream, which is organized in the log group.\n- `S3` -- Store logs in an Amazon S3 bucket that you define.\n- `NONE` -- Don't collect container logs.", + "LogConfiguration": "The method that is used to collect container logs for the fleet. Amazon GameLift Servers saves all standard output for each container in logs, including game session logs.\n\n- `CLOUDWATCH` -- Send logs to an Amazon CloudWatch log group that you define. Each container emits a log stream, which is organized in the log group.\n- `S3` -- Store logs in an Amazon S3 bucket that you define.\n- `NONE` -- Don't collect container logs.", "MetricGroups": "The name of an AWS CloudWatch metric group to add this fleet to. Metric groups aggregate metrics for multiple fleets.", - "NewGameSessionProtectionPolicy": "Determines whether Amazon GameLift can shut down game sessions on the fleet that are actively running and hosting players. Amazon GameLift might prompt an instance shutdown when scaling down fleet capacity or when retiring unhealthy instances. You can also set game session protection for individual game sessions using [UpdateGameSession](https://docs.aws.amazon.com/gamelift/latest/apireference/API_UpdateGameSession.html) .\n\n- *NoProtection* -- Game sessions can be shut down during active gameplay.\n- *FullProtection* -- Game sessions in `ACTIVE` status can't be shut down.", + "NewGameSessionProtectionPolicy": "Determines whether Amazon GameLift Servers can shut down game sessions on the fleet that are actively running and hosting players. Amazon GameLift Servers might prompt an instance shutdown when scaling down fleet capacity or when retiring unhealthy instances. You can also set game session protection for individual game sessions using [UpdateGameSession](https://docs.aws.amazon.com/gamelift/latest/apireference/API_UpdateGameSession.html) .\n\n- *NoProtection* -- Game sessions can be shut down during active gameplay.\n- *FullProtection* -- Game sessions in `ACTIVE` status can't be shut down.", "PerInstanceContainerGroupDefinitionName": "The name of the fleet's per-instance container group definition.", "ScalingPolicies": "", "Tags": "" @@ -18705,7 +19076,7 @@ "LatestDeploymentId": "A unique identifier for a fleet deployment." }, "AWS::GameLift::ContainerFleet GameSessionCreationLimitPolicy": { - "NewGameSessionsPerCreator": "A policy that puts limits on the number of game sessions that a player can create within a specified span of time. With this policy, you can control players' ability to consume available resources.\n\nThe policy evaluates when a player tries to create a new game session. On receiving a `CreateGameSession` request, Amazon GameLift checks that the player (identified by `CreatorId` ) has created fewer than game session limit in the specified time period.", + "NewGameSessionsPerCreator": "A policy that puts limits on the number of game sessions that a player can create within a specified span of time. With this policy, you can control players' ability to consume available resources.\n\nThe policy evaluates when a player tries to create a new game session. 
On receiving a `CreateGameSession` request, Amazon GameLift Servers checks that the player (identified by `CreatorId` ) has created fewer than game session limit in the specified time period.", "PolicyPeriodInMinutes": "The time span used in evaluating the resource creation limit policy." }, "AWS::GameLift::ContainerFleet IpPermission": { @@ -18720,18 +19091,18 @@ "MinSize": "" }, "AWS::GameLift::ContainerFleet LocationConfiguration": { - "Location": "An AWS Region code, such as `us-west-2` . For a list of supported Regions and Local Zones, see [Amazon GameLift service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", + "Location": "An AWS Region code, such as `us-west-2` . For a list of supported Regions and Local Zones, see [Amazon GameLift Servers service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", "LocationCapacity": "", "StoppedActions": "" }, "AWS::GameLift::ContainerFleet LogConfiguration": { - "LogDestination": "The type of log collection to use for a fleet.\n\n- `CLOUDWATCH` -- (default value) Send logs to an Amazon CloudWatch log group that you define. Each container emits a log stream, which is organized in the log group.\n- `S3` -- Store logs in an Amazon S3 bucket that you define.\n- `NONE` -- Don't collect container logs.", + "LogDestination": "The type of log collection to use for a fleet.\n\n- `CLOUDWATCH` -- (default value) Send logs to an Amazon CloudWatch log group that you define. Each container emits a log stream, which is organized in the log group.\n- `S3` -- Store logs in an Amazon S3 bucket that you define. This bucket must reside in the fleet's home AWS Region.\n- `NONE` -- Don't collect container logs.", "S3BucketName": "If log destination is `S3` , logs are sent to the specified Amazon S3 bucket name." }, "AWS::GameLift::ContainerFleet ScalingPolicy": { "ComparisonOperator": "Comparison operator to use when measuring a metric against the threshold value.", "EvaluationPeriods": "Length of time (in minutes) the metric must be at or beyond the threshold before a scaling event is triggered.", - "MetricName": "Name of the Amazon GameLift-defined metric that is used to trigger a scaling adjustment. For detailed descriptions of fleet metrics, see [Monitor Amazon GameLift with Amazon CloudWatch](https://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html) .\n\n- *ActivatingGameSessions* -- Game sessions in the process of being created.\n- *ActiveGameSessions* -- Game sessions that are currently running.\n- *ActiveInstances* -- Fleet instances that are currently running at least one game session.\n- *AvailableGameSessions* -- Additional game sessions that fleet could host simultaneously, given current capacity.\n- *AvailablePlayerSessions* -- Empty player slots in currently active game sessions. This includes game sessions that are not currently accepting players. Reserved player slots are not included.\n- *CurrentPlayerSessions* -- Player slots in active game sessions that are being used by a player or are reserved for a player.\n- *IdleInstances* -- Active instances that are currently hosting zero game sessions.\n- *PercentAvailableGameSessions* -- Unused percentage of the total number of game sessions that a fleet could host simultaneously, given current capacity. 
Use this metric for a target-based scaling policy.\n- *PercentIdleInstances* -- Percentage of the total number of active instances that are hosting zero game sessions.\n- *QueueDepth* -- Pending game session placement requests, in any queue, where the current fleet is the top-priority destination.\n- *WaitTime* -- Current wait time for pending game session placement requests, in any queue, where the current fleet is the top-priority destination.", + "MetricName": "Name of the Amazon GameLift Servers-defined metric that is used to trigger a scaling adjustment. For detailed descriptions of fleet metrics, see [Monitor Amazon GameLift Servers with Amazon CloudWatch](https://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html) .\n\n- *ActivatingGameSessions* -- Game sessions in the process of being created.\n- *ActiveGameSessions* -- Game sessions that are currently running.\n- *ActiveInstances* -- Fleet instances that are currently running at least one game session.\n- *AvailableGameSessions* -- Additional game sessions that fleet could host simultaneously, given current capacity.\n- *AvailablePlayerSessions* -- Empty player slots in currently active game sessions. This includes game sessions that are not currently accepting players. Reserved player slots are not included.\n- *CurrentPlayerSessions* -- Player slots in active game sessions that are being used by a player or are reserved for a player.\n- *IdleInstances* -- Active instances that are currently hosting zero game sessions.\n- *PercentAvailableGameSessions* -- Unused percentage of the total number of game sessions that a fleet could host simultaneously, given current capacity. Use this metric for a target-based scaling policy.\n- *PercentIdleInstances* -- Percentage of the total number of active instances that are hosting zero game sessions.\n- *QueueDepth* -- Pending game session placement requests, in any queue, where the current fleet is the top-priority destination.\n- *WaitTime* -- Current wait time for pending game session placement requests, in any queue, where the current fleet is the top-priority destination.", "Name": "A descriptive label that is associated with a fleet's scaling policy. Policy names do not need to be unique.", "PolicyType": "The type of scaling policy to create. For a target-based policy, set the parameter *MetricName* to 'PercentAvailableGameSessions' and specify a *TargetConfiguration* . For a rule-based policy set the following parameters: *MetricName* , *ComparisonOperator* , *Threshold* , *EvaluationPeriods* , *ScalingAdjustmentType* , and *ScalingAdjustment* .", "ScalingAdjustment": "Amount of adjustment to make, based on the scaling adjustment type.", @@ -18747,10 +19118,10 @@ "TargetValue": "Desired value to use with a target-based scaling policy. The value must be relevant for whatever metric the scaling policy is using. For example, in a policy using the metric PercentAvailableGameSessions, the target value should be the preferred size of the fleet's buffer (the percent of capacity that should be idle and ready for new game sessions)." }, "AWS::GameLift::ContainerGroupDefinition": { - "ContainerGroupType": "The type of container group. Container group type determines how Amazon GameLift deploys the container group on each fleet instance.", + "ContainerGroupType": "The type of container group. 
Container group type determines how Amazon GameLift Servers deploys the container group on each fleet instance.", "GameServerContainerDefinition": "The definition for the game server container in this group. This property is used only when the container group type is `GAME_SERVER` . This container definition specifies a container image with the game server build.", "Name": "A descriptive identifier for the container group definition. The name value is unique in an AWS Region.", - "OperatingSystem": "The platform that all containers in the container group definition run on.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use Amazon GameLift server SDK 4.x, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to Amazon GameLift server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", + "OperatingSystem": "The platform that all containers in the container group definition run on.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "SourceVersionNumber": "", "SupportContainerDefinitions": "The set of definitions for support containers in this group. A container group definition might have zero support container definitions. Support container can be used in any type of container group.", "Tags": "", @@ -18787,11 +19158,11 @@ "ContainerName": "The container definition identifier. Container names are unique within a container group definition.", "DependsOn": "Indicates that the container relies on the status of other containers in the same container group during startup and shutdown sequences. A container might have dependencies on multiple containers.", "EnvironmentOverride": "A set of environment variables that's passed to the container on startup. See the [ContainerDefinition::environment](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDefinition.html#ECS-Type-ContainerDefinition-environment) parameter in the *Amazon Elastic Container Service API Reference* .", - "ImageUri": "The URI to the image that Amazon GameLift uses when deploying this container to a container fleet. For a more specific identifier, see `ResolvedImageDigest` .", + "ImageUri": "The URI to the image that Amazon GameLift Servers uses when deploying this container to a container fleet. For a more specific identifier, see `ResolvedImageDigest` .", "MountPoints": "A mount point that binds a path inside the container to a file or directory on the host system and lets it access the file or directory.", - "PortConfiguration": "The set of ports that are available to bind to processes in the container. For example, a game server process requires a container port to allow game clients to connect to it. Container ports aren't directly accessed by inbound traffic. 
Amazon GameLift maps these container ports to externally accessible connection ports, which are assigned as needed from the container fleet's `ConnectionPortRange` .", + "PortConfiguration": "The set of ports that are available to bind to processes in the container. For example, a game server process requires a container port to allow game clients to connect to it. Container ports aren't directly accessed by inbound traffic. Amazon GameLift Servers maps these container ports to externally accessible connection ports, which are assigned as needed from the container fleet's `ConnectionPortRange` .", "ResolvedImageDigest": "A unique and immutable identifier for the container image. The digest is a SHA 256 hash of the container image manifest.", - "ServerSdkVersion": "The Amazon GameLift server SDK version that the game server is integrated with. Only game servers using 5.2.0 or higher are compatible with container fleets." + "ServerSdkVersion": "The Amazon GameLift Servers server SDK version that the game server is integrated with. Only game servers using 5.2.0 or higher are compatible with container fleets." }, "AWS::GameLift::ContainerGroupDefinition PortConfiguration": { "ContainerPortRanges": "" @@ -18802,10 +19173,10 @@ "EnvironmentOverride": "A set of environment variables that's passed to the container on startup. See the [ContainerDefinition::environment](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDefinition.html#ECS-Type-ContainerDefinition-environment) parameter in the *Amazon Elastic Container Service API Reference* .", "Essential": "Indicates whether the container is vital to the container group. If an essential container fails, the entire container group restarts.", "HealthCheck": "A configuration for a non-terminal health check. A support container automatically restarts if it stops functioning or if it fails this health check.", - "ImageUri": "The URI to the image that Amazon GameLift deploys to a container fleet. For a more specific identifier, see `ResolvedImageDigest` .", - "MemoryHardLimitMebibytes": "The amount of memory that Amazon GameLift makes available to the container. If memory limits aren't set for an individual container, the container shares the container group's total memory allocation.\n\n*Related data type:* [ContainerGroupDefinition TotalMemoryLimitMebibytes](https://docs.aws.amazon.com/gamelift/latest/apireference/API_ContainerGroupDefinition.html)", + "ImageUri": "The URI to the image that Amazon GameLift Servers deploys to a container fleet. For a more specific identifier, see `ResolvedImageDigest` .", + "MemoryHardLimitMebibytes": "The amount of memory that Amazon GameLift Servers makes available to the container. If memory limits aren't set for an individual container, the container shares the container group's total memory allocation.\n\n*Related data type:* [ContainerGroupDefinition TotalMemoryLimitMebibytes](https://docs.aws.amazon.com/gamelift/latest/apireference/API_ContainerGroupDefinition.html)", "MountPoints": "A mount point that binds a path inside the container to a file or directory on the host system and lets it access the file or directory.", - "PortConfiguration": "A set of ports that allow access to the container from external users. Processes running in the container can bind to a one of these ports. Container ports aren't directly accessed by inbound traffic. 
Amazon GameLift maps these container ports to externally accessible connection ports, which are assigned as needed from the container fleet's `ConnectionPortRange` .", + "PortConfiguration": "A set of ports that allow access to the container from external users. Processes running in the container can bind to a one of these ports. Container ports aren't directly accessed by inbound traffic. Amazon GameLift Servers maps these container ports to externally accessible connection ports, which are assigned as needed from the container fleet's `ConnectionPortRange` .", "ResolvedImageDigest": "A unique and immutable identifier for the container image. The digest is a SHA 256 hash of the container image manifest.", "Vcpu": "The number of vCPU units that are reserved for the container. If no resources are reserved, the container shares the total vCPU limit for the container group.\n\n*Related data type:* [ContainerGroupDefinition TotalVcpuLimit](https://docs.aws.amazon.com/gamelift/latest/apireference/API_ContainerGroupDefinition.html)" }, @@ -18814,33 +19185,33 @@ "Value": "The value for a developer-defined key value pair for tagging an AWS resource." }, "AWS::GameLift::Fleet": { - "AnywhereConfiguration": "Amazon GameLift Anywhere configuration options.", + "AnywhereConfiguration": "Amazon GameLift Servers Anywhere configuration options.", "ApplyCapacity": "Current resource capacity settings for managed EC2 fleets and managed container fleets. For multi-location fleets, location values might refer to a fleet's remote location or its home Region.\n\n*Returned by:* [DescribeFleetCapacity](https://docs.aws.amazon.com/gamelift/latest/apireference/API_DescribeFleetCapacity.html) , [DescribeFleetLocationCapacity](https://docs.aws.amazon.com/gamelift/latest/apireference/API_DescribeFleetLocationCapacity.html) , [UpdateFleetCapacity](https://docs.aws.amazon.com/gamelift/latest/apireference/API_UpdateFleetCapacity.html)", "BuildId": "A unique identifier for a build to be deployed on the new fleet. If you are deploying the fleet with a custom game build, you must specify this property. The build must have been successfully uploaded to Amazon GameLift and be in a `READY` status. This fleet setting cannot be changed once the fleet is created.", - "CertificateConfiguration": "Prompts Amazon GameLift to generate a TLS/SSL certificate for the fleet. Amazon GameLift uses the certificates to encrypt traffic between game clients and the game servers running on Amazon GameLift. By default, the `CertificateConfiguration` is `DISABLED` . You can't change this property after you create the fleet.\n\nAWS Certificate Manager (ACM) certificates expire after 13 months. Certificate expiration can cause fleets to fail, preventing players from connecting to instances in the fleet. We recommend you replace fleets before 13 months, consider using fleet aliases for a smooth transition.\n\n> ACM isn't available in all AWS regions. A fleet creation request with certificate generation enabled in an unsupported Region, fails with a 4xx error. For more information about the supported Regions, see [Supported Regions](https://docs.aws.amazon.com/acm/latest/userguide/acm-regions.html) in the *AWS Certificate Manager User Guide* .", + "CertificateConfiguration": "Prompts Amazon GameLift Servers to generate a TLS/SSL certificate for the fleet. Amazon GameLift Servers uses the certificates to encrypt traffic between game clients and the game servers running on Amazon GameLift Servers. 
By default, the `CertificateConfiguration` is `DISABLED` . You can't change this property after you create the fleet.\n\nAWS Certificate Manager (ACM) certificates expire after 13 months. Certificate expiration can cause fleets to fail, preventing players from connecting to instances in the fleet. We recommend you replace fleets before 13 months, consider using fleet aliases for a smooth transition.\n\n> ACM isn't available in all AWS regions. A fleet creation request with certificate generation enabled in an unsupported Region, fails with a 4xx error. For more information about the supported Regions, see [Supported Regions](https://docs.aws.amazon.com/acm/latest/userguide/acm-regions.html) in the *AWS Certificate Manager User Guide* .", "ComputeType": "The type of compute resource used to host your game servers.\n\n- `EC2` \u2013 The game server build is deployed to Amazon EC2 instances for cloud hosting. This is the default setting.\n- `ANYWHERE` \u2013 Game servers and supporting software are deployed to compute resources that you provide and manage. With this compute type, you can also set the `AnywhereConfiguration` parameter.", "Description": "A description for the fleet.", "DesiredEC2Instances": "The number of EC2 instances that you want this fleet to host. When creating a new fleet, GameLift automatically sets this value to \"1\" and initiates a single instance. Once the fleet is active, update this value to trigger GameLift to add or remove instances from the fleet.", - "EC2InboundPermissions": "The IP address ranges and port settings that allow inbound traffic to access game server processes and other processes on this fleet. Set this parameter for managed EC2 fleets. You can leave this parameter empty when creating the fleet, but you must call [](https://docs.aws.amazon.com/gamelift/latest/apireference/API_UpdateFleetPortSettings) to set it before players can connect to game sessions. As a best practice, we recommend opening ports for remote access only when you need them and closing them when you're finished. For Realtime Servers fleets, Amazon GameLift automatically sets TCP and UDP ranges.", - "EC2InstanceType": "The Amazon GameLift-supported Amazon EC2 instance type to use with managed EC2 fleets. Instance type determines the computing resources that will be used to host your game servers, including CPU, memory, storage, and networking capacity. See [Amazon Elastic Compute Cloud Instance Types](https://docs.aws.amazon.com/ec2/instance-types/) for detailed descriptions of Amazon EC2 instance types.", + "EC2InboundPermissions": "The IP address ranges and port settings that allow inbound traffic to access game server processes and other processes on this fleet. Set this parameter for managed EC2 fleets. You can leave this parameter empty when creating the fleet, but you must call [](https://docs.aws.amazon.com/gamelift/latest/apireference/API_UpdateFleetPortSettings) to set it before players can connect to game sessions. As a best practice, we recommend opening ports for remote access only when you need them and closing them when you're finished. For Amazon GameLift Servers Realtime fleets, Amazon GameLift Servers automatically sets TCP and UDP ranges.", + "EC2InstanceType": "The Amazon GameLift Servers-supported Amazon EC2 instance type to use with managed EC2 fleets. Instance type determines the computing resources that will be used to host your game servers, including CPU, memory, storage, and networking capacity. 
See [Amazon Elastic Compute Cloud Instance Types](https://docs.aws.amazon.com/ec2/instance-types/) for detailed descriptions of Amazon EC2 instance types.", "FleetType": "Indicates whether to use On-Demand or Spot instances for this fleet. By default, this property is set to `ON_DEMAND` . Learn more about when to use [On-Demand versus Spot Instances](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-ec2-instances.html#gamelift-ec2-instances-spot) . This fleet property can't be changed after the fleet is created.", "InstanceRoleARN": "A unique identifier for an IAM role that manages access to your AWS services. With an instance role ARN set, any application that runs on an instance in this fleet can assume the role, including install scripts, server processes, and daemons (background processes). Create a role or look up a role's ARN by using the [IAM dashboard](https://docs.aws.amazon.com/iam/) in the AWS Management Console . Learn more about using on-box credentials for your game servers at [Access external resources from a game server](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-sdk-server-resources.html) . This attribute is used with fleets where `ComputeType` is `EC2` .", "InstanceRoleCredentialsProvider": "Indicates that fleet instances maintain a shared credentials file for the IAM role defined in `InstanceRoleArn` . Shared credentials allow applications that are deployed with the game server executable to communicate with other AWS resources. This property is used only when the game server is integrated with the server SDK version 5.x. For more information about using shared credentials, see [Communicate with other AWS resources from your fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-sdk-server-resources.html) . This attribute is used with fleets where `ComputeType` is `EC2` .", - "Locations": "A set of remote locations to deploy additional instances to and manage as a multi-location fleet. Use this parameter when creating a fleet in AWS Regions that support multiple locations. You can add any AWS Region or Local Zone that's supported by Amazon GameLift. Provide a list of one or more AWS Region codes, such as `us-west-2` , or Local Zone names. When using this parameter, Amazon GameLift requires you to include your home location in the request. For a list of supported Regions and Local Zones, see [Amazon GameLift service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", + "Locations": "A set of remote locations to deploy additional instances to and manage as a multi-location fleet. Use this parameter when creating a fleet in AWS Regions that support multiple locations. You can add any AWS Region or Local Zone that's supported by Amazon GameLift Servers. Provide a list of one or more AWS Region codes, such as `us-west-2` , or Local Zone names. When using this parameter, Amazon GameLift Servers requires you to include your home location in the request. For a list of supported Regions and Local Zones, see [Amazon GameLift Servers service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", "MaxSize": "The maximum number of instances that are allowed in the specified fleet location. If this parameter is not set, the default is 1.", "MetricGroups": "The name of an AWS CloudWatch metric group to add this fleet to. A metric group is used to aggregate the metrics for multiple fleets. 
You can specify an existing metric group name or set a new name to create a new metric group. A fleet can be included in only one metric group at a time.", "MinSize": "The minimum number of instances that are allowed in the specified fleet location. If this parameter is not set, the default is 0.", "Name": "A descriptive label that is associated with a fleet. Fleet names do not need to be unique.", "NewGameSessionProtectionPolicy": "The status of termination protection for active game sessions on the fleet. By default, this property is set to `NoProtection` .\n\n- *NoProtection* - Game sessions can be terminated during active gameplay as a result of a scale-down event.\n- *FullProtection* - Game sessions in `ACTIVE` status cannot be terminated during a scale-down event.", - "PeerVpcAwsAccountId": "Used when peering your Amazon GameLift fleet with a VPC, the unique identifier for the AWS account that owns the VPC. You can find your account ID in the AWS Management Console under account settings.", - "PeerVpcId": "A unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same Region as your fleet. To look up a VPC ID, use the [VPC Dashboard](https://docs.aws.amazon.com/vpc/) in the AWS Management Console . Learn more about VPC peering in [VPC Peering with Amazon GameLift Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/vpc-peering.html) .", + "PeerVpcAwsAccountId": "Used when peering your Amazon GameLift Servers fleet with a VPC, the unique identifier for the AWS account that owns the VPC. You can find your account ID in the AWS Management Console under account settings.", + "PeerVpcId": "A unique identifier for a VPC with resources to be accessed by your Amazon GameLift Servers fleet. The VPC must be in the same Region as your fleet. To look up a VPC ID, use the [VPC Dashboard](https://docs.aws.amazon.com/vpc/) in the AWS Management Console . Learn more about VPC peering in [VPC Peering with Amazon GameLift Servers Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/vpc-peering.html) .", "ResourceCreationLimitPolicy": "A policy that limits the number of game sessions that an individual player can create on instances in this fleet within a specified span of time.", "RuntimeConfiguration": "Instructions for how to launch and maintain server processes on instances in the fleet. The runtime configuration defines one or more server process configurations, each identifying a build executable or Realtime script file and the number of processes of that type to run concurrently.\n\n> The `RuntimeConfiguration` parameter is required unless the fleet is being configured using the older parameters `ServerLaunchPath` and `ServerLaunchParameters` , which are still supported for backward compatibility.", "ScalingPolicies": "Rule that controls how a fleet is scaled. Scaling policies are uniquely identified by the combination of name and fleet ID.", "ScriptId": "The unique identifier for a Realtime configuration script to be deployed on fleet instances. You can use either the script ID or ARN. Scripts must be uploaded to Amazon GameLift prior to creating the fleet. This fleet property cannot be changed later.\n\n> You can't use the `!Ref` command to reference a script created with a CloudFormation template for the fleet property `ScriptId` . Instead, use `Fn::GetAtt Script.Arn` or `Fn::GetAtt Script.Id` to retrieve either of these properties as input for `ScriptId` . Alternatively, enter a `ScriptId` string manually." 
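The fleet properties described above map onto the `AWS::GameLift::Fleet` resource. The following is an illustrative sketch only, not taken from this schema: the build ID, port range, and instance sizing are placeholder values, and the launch path reuses the Linux example given in the `ServerProcess` description.

{
  "SampleManagedFleet": {
    "Type": "AWS::GameLift::Fleet",
    "Properties": {
      "Name": "sample-managed-ec2-fleet",
      "Description": "Managed EC2 fleet for a custom game build",
      "BuildId": "build-00000000-0000-0000-0000-000000000000",
      "ComputeType": "EC2",
      "EC2InstanceType": "c5.large",
      "FleetType": "ON_DEMAND",
      "DesiredEC2Instances": 1,
      "MinSize": 0,
      "MaxSize": 2,
      "NewGameSessionProtectionPolicy": "FullProtection",
      "EC2InboundPermissions": [
        {
          "FromPort": 33430,
          "ToPort": 33440,
          "IpRange": "0.0.0.0/0",
          "Protocol": "UDP"
        }
      ],
      "RuntimeConfiguration": {
        "ServerProcesses": [
          {
            "LaunchPath": "/local/game/MyGame/server.exe",
            "Parameters": "-port 33430",
            "ConcurrentExecutions": 1
          }
        ]
      }
    }
  }
}

As the `EC2InboundPermissions` description recommends, open ports only while you need them. The certificate, location, and creation-limit property types described next slot into the same `Properties` block; a fragment for those follows their descriptions.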
}, "AWS::GameLift::Fleet AnywhereConfiguration": { - "Cost": "The cost to run your fleet per hour. Amazon GameLift uses the provided cost of your fleet to balance usage in queues. For more information about queues, see [Setting up queues](https://docs.aws.amazon.com/gamelift/latest/developerguide/queues-intro.html) in the *Amazon GameLift Developer Guide* ." + "Cost": "The cost to run your fleet per hour. Amazon GameLift Servers uses the provided cost of your fleet to balance usage in queues. For more information about queues, see [Setting up queues](https://docs.aws.amazon.com/gamelift/latest/developerguide/queues-intro.html) in the *Amazon GameLift Servers Developer Guide* ." }, "AWS::GameLift::Fleet CertificateConfiguration": { "CertificateType": "Indicates whether a TLS/SSL certificate is generated for a fleet.\n\nValid values include:\n\n- *GENERATED* - Generate a TLS/SSL certificate for this fleet.\n- *DISABLED* - (default) Do not generate a TLS/SSL certificate for this fleet." @@ -18857,11 +19228,11 @@ "MinSize": "The minimum number of instances that are allowed in the specified fleet location. If this parameter is not set, the default is 0." }, "AWS::GameLift::Fleet LocationConfiguration": { - "Location": "An AWS Region code, such as `us-west-2` . For a list of supported Regions and Local Zones, see [Amazon GameLift service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", + "Location": "An AWS Region code, such as `us-west-2` . For a list of supported Regions and Local Zones, see [Amazon GameLift Servers service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", "LocationCapacity": "Current resource capacity settings for managed EC2 fleets and managed container fleets. For multi-location fleets, location values might refer to a fleet's remote location or its home Region.\n\n*Returned by:* [DescribeFleetCapacity](https://docs.aws.amazon.com/gamelift/latest/apireference/API_DescribeFleetCapacity.html) , [DescribeFleetLocationCapacity](https://docs.aws.amazon.com/gamelift/latest/apireference/API_DescribeFleetLocationCapacity.html) , [UpdateFleetCapacity](https://docs.aws.amazon.com/gamelift/latest/apireference/API_UpdateFleetCapacity.html)" }, "AWS::GameLift::Fleet ResourceCreationLimitPolicy": { - "NewGameSessionsPerCreator": "A policy that puts limits on the number of game sessions that a player can create within a specified span of time. With this policy, you can control players' ability to consume available resources.\n\nThe policy is evaluated when a player tries to create a new game session. On receiving a `CreateGameSession` request, Amazon GameLift checks that the player (identified by `CreatorId` ) has created fewer than game session limit in the specified time period.", + "NewGameSessionsPerCreator": "A policy that puts limits on the number of game sessions that a player can create within a specified span of time. With this policy, you can control players' ability to consume available resources.\n\nThe policy is evaluated when a player tries to create a new game session. On receiving a `CreateGameSession` request, Amazon GameLift Servers checks that the player (identified by `CreatorId` ) has created fewer than game session limit in the specified time period.", "PolicyPeriodInMinutes": "The time span used in evaluating the resource creation limit policy." 
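To illustrate the `CertificateConfiguration`, `LocationConfiguration`, and `ResourceCreationLimitPolicy` property types described above, one possible shape is the fragment below, merged into a fleet's `Properties` block. The Region codes and limits are placeholders, not values from this schema.

{
  "CertificateConfiguration": {
    "CertificateType": "GENERATED"
  },
  "Locations": [
    { "Location": "us-west-2" },
    { "Location": "us-east-1" }
  ],
  "ResourceCreationLimitPolicy": {
    "NewGameSessionsPerCreator": 3,
    "PolicyPeriodInMinutes": 15
  }
}

Per the `Locations` description, include the fleet's home Region in the list whenever remote locations are specified; `AnywhereConfiguration` applies only when `ComputeType` is `ANYWHERE` and is omitted here.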
}, "AWS::GameLift::Fleet RuntimeConfiguration": { @@ -18873,7 +19244,7 @@ "ComparisonOperator": "Comparison operator to use when measuring a metric against the threshold value.", "EvaluationPeriods": "Length of time (in minutes) the metric must be at or beyond the threshold before a scaling event is triggered.", "Location": "The fleet location.", - "MetricName": "Name of the Amazon GameLift-defined metric that is used to trigger a scaling adjustment. For detailed descriptions of fleet metrics, see [Monitor Amazon GameLift with Amazon CloudWatch](https://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html) .\n\n- *ActivatingGameSessions* -- Game sessions in the process of being created.\n- *ActiveGameSessions* -- Game sessions that are currently running.\n- *ActiveInstances* -- Fleet instances that are currently running at least one game session.\n- *AvailableGameSessions* -- Additional game sessions that fleet could host simultaneously, given current capacity.\n- *AvailablePlayerSessions* -- Empty player slots in currently active game sessions. This includes game sessions that are not currently accepting players. Reserved player slots are not included.\n- *CurrentPlayerSessions* -- Player slots in active game sessions that are being used by a player or are reserved for a player.\n- *IdleInstances* -- Active instances that are currently hosting zero game sessions.\n- *PercentAvailableGameSessions* -- Unused percentage of the total number of game sessions that a fleet could host simultaneously, given current capacity. Use this metric for a target-based scaling policy.\n- *PercentIdleInstances* -- Percentage of the total number of active instances that are hosting zero game sessions.\n- *QueueDepth* -- Pending game session placement requests, in any queue, where the current fleet is the top-priority destination.\n- *WaitTime* -- Current wait time for pending game session placement requests, in any queue, where the current fleet is the top-priority destination.", + "MetricName": "Name of the Amazon GameLift Servers-defined metric that is used to trigger a scaling adjustment. For detailed descriptions of fleet metrics, see [Monitor Amazon GameLift Servers with Amazon CloudWatch](https://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html) .\n\n- *ActivatingGameSessions* -- Game sessions in the process of being created.\n- *ActiveGameSessions* -- Game sessions that are currently running.\n- *ActiveInstances* -- Fleet instances that are currently running at least one game session.\n- *AvailableGameSessions* -- Additional game sessions that fleet could host simultaneously, given current capacity.\n- *AvailablePlayerSessions* -- Empty player slots in currently active game sessions. This includes game sessions that are not currently accepting players. Reserved player slots are not included.\n- *CurrentPlayerSessions* -- Player slots in active game sessions that are being used by a player or are reserved for a player.\n- *IdleInstances* -- Active instances that are currently hosting zero game sessions.\n- *PercentAvailableGameSessions* -- Unused percentage of the total number of game sessions that a fleet could host simultaneously, given current capacity. 
Use this metric for a target-based scaling policy.\n- *PercentIdleInstances* -- Percentage of the total number of active instances that are hosting zero game sessions.\n- *QueueDepth* -- Pending game session placement requests, in any queue, where the current fleet is the top-priority destination.\n- *WaitTime* -- Current wait time for pending game session placement requests, in any queue, where the current fleet is the top-priority destination.", "Name": "A descriptive label that is associated with a fleet's scaling policy. Policy names do not need to be unique.", "PolicyType": "The type of scaling policy to create. For a target-based policy, set the parameter *MetricName* to 'PercentAvailableGameSessions' and specify a *TargetConfiguration* . For a rule-based policy set the following parameters: *MetricName* , *ComparisonOperator* , *Threshold* , *EvaluationPeriods* , *ScalingAdjustmentType* , and *ScalingAdjustment* .", "ScalingAdjustment": "Amount of adjustment to make, based on the scaling adjustment type.", @@ -18885,7 +19256,7 @@ }, "AWS::GameLift::Fleet ServerProcess": { "ConcurrentExecutions": "The number of server processes using this configuration that run concurrently on each instance or compute.", - "LaunchPath": "The location of a game build executable or Realtime script. Game builds and Realtime scripts are installed on instances at the root:\n\n- Windows (custom game builds only): `C:\\game` . Example: \" `C:\\game\\MyGame\\server.exe` \"\n- Linux: `/local/game` . Examples: \" `/local/game/MyGame/server.exe` \" or \" `/local/game/MyRealtimeScript.js` \"\n\n> Amazon GameLift doesn't support the use of setup scripts that launch the game executable. For custom game builds, this parameter must indicate the executable that calls the server SDK operations `initSDK()` and `ProcessReady()` .", + "LaunchPath": "The location of a game build executable or Realtime script. Game builds and Realtime scripts are installed on instances at the root:\n\n- Windows (custom game builds only): `C:\\game` . Example: \" `C:\\game\\MyGame\\server.exe` \"\n- Linux: `/local/game` . Examples: \" `/local/game/MyGame/server.exe` \" or \" `/local/game/MyRealtimeScript.js` \"\n\n> Amazon GameLift Servers doesn't support the use of setup scripts that launch the game executable. For custom game builds, this parameter must indicate the executable that calls the server SDK operations `initSDK()` and `ProcessReady()` .", "Parameters": "An optional list of parameters to pass to the server executable or Realtime script on launch.\n\nLength Constraints: Minimum length of 1. Maximum length of 1024.\n\nPattern: [A-Za-z0-9_:.+\\/\\\\\\- =@{},?'\\[\\]\"]+" }, "AWS::GameLift::Fleet TargetConfiguration": { @@ -18893,25 +19264,25 @@ }, "AWS::GameLift::GameServerGroup": { "AutoScalingPolicy": "Configuration settings to define a scaling policy for the Auto Scaling group that is optimized for game hosting. The scaling policy uses the metric `\"PercentUtilizedGameServers\"` to maintain a buffer of idle game servers that can immediately accommodate new games and players. After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", - "BalancingStrategy": "Indicates how Amazon GameLift FleetIQ balances the use of Spot Instances and On-Demand Instances in the game server group. Method options include the following:\n\n- `SPOT_ONLY` - Only Spot Instances are used in the game server group. 
If Spot Instances are unavailable or not viable for game hosting, the game server group provides no hosting capacity until Spot Instances can again be used. Until then, no new instances are started, and the existing nonviable Spot Instances are terminated (after current gameplay ends) and are not replaced.\n- `SPOT_PREFERRED` - (default value) Spot Instances are used whenever available in the game server group. If Spot Instances are unavailable, the game server group continues to provide hosting capacity by falling back to On-Demand Instances. Existing nonviable Spot Instances are terminated (after current gameplay ends) and are replaced with new On-Demand Instances.\n- `ON_DEMAND_ONLY` - Only On-Demand Instances are used in the game server group. No Spot Instances are used, even when available, while this balancing strategy is in force.", + "BalancingStrategy": "Indicates how Amazon GameLift Servers FleetIQ balances the use of Spot Instances and On-Demand Instances in the game server group. Method options include the following:\n\n- `SPOT_ONLY` - Only Spot Instances are used in the game server group. If Spot Instances are unavailable or not viable for game hosting, the game server group provides no hosting capacity until Spot Instances can again be used. Until then, no new instances are started, and the existing nonviable Spot Instances are terminated (after current gameplay ends) and are not replaced.\n- `SPOT_PREFERRED` - (default value) Spot Instances are used whenever available in the game server group. If Spot Instances are unavailable, the game server group continues to provide hosting capacity by falling back to On-Demand Instances. Existing nonviable Spot Instances are terminated (after current gameplay ends) and are replaced with new On-Demand Instances.\n- `ON_DEMAND_ONLY` - Only On-Demand Instances are used in the game server group. No Spot Instances are used, even when available, while this balancing strategy is in force.", "DeleteOption": "The type of delete to perform. To delete a game server group, specify the `DeleteOption` . Options include the following:\n\n- `SAFE_DELETE` \u2013 (default) Terminates the game server group and Amazon EC2 Auto Scaling group only when it has no game servers that are in `UTILIZED` status.\n- `FORCE_DELETE` \u2013 Terminates the game server group, including all active game servers regardless of their utilization status, and the Amazon EC2 Auto Scaling group.\n- `RETAIN` \u2013 Does a safe delete of the game server group but retains the Amazon EC2 Auto Scaling group as is.", "GameServerGroupName": "A developer-defined identifier for the game server group. The name is unique for each Region in each AWS account.", "GameServerProtectionPolicy": "A flag that indicates whether instances in the game server group are protected from early termination. Unprotected instances that have active game servers running might be terminated during a scale-down event, causing players to be dropped from the game. Protected instances cannot be terminated while there are active game servers running except in the event of a forced game server group deletion (see ). 
An exception to this is with Spot Instances, which can be terminated by AWS regardless of protection status.", - "InstanceDefinitions": "The set of Amazon EC2 instance types that Amazon GameLift FleetIQ can use when balancing and automatically scaling instances in the corresponding Auto Scaling group.", - "LaunchTemplate": "The Amazon EC2 launch template that contains configuration settings and game server code to be deployed to all instances in the game server group. You can specify the template using either the template name or ID. For help with creating a launch template, see [Creating a Launch Template for an Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html) in the *Amazon Elastic Compute Cloud Auto Scaling User Guide* . After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.\n\n> If you specify network interfaces in your launch template, you must explicitly set the property `AssociatePublicIpAddress` to \"true\". If no network interface is specified in the launch template, Amazon GameLift FleetIQ uses your account's default VPC.", - "MaxSize": "The maximum number of instances allowed in the Amazon EC2 Auto Scaling group. During automatic scaling events, Amazon GameLift FleetIQ and EC2 do not scale up the group above this maximum. After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", - "MinSize": "The minimum number of instances allowed in the Amazon EC2 Auto Scaling group. During automatic scaling events, Amazon GameLift FleetIQ and Amazon EC2 do not scale down the group below this minimum. In production, this value should be set to at least 1. After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", - "RoleArn": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) for an IAM role that allows Amazon GameLift to access your Amazon EC2 Auto Scaling groups.", + "InstanceDefinitions": "The set of Amazon EC2 instance types that Amazon GameLift Servers FleetIQ can use when balancing and automatically scaling instances in the corresponding Auto Scaling group.", + "LaunchTemplate": "The Amazon EC2 launch template that contains configuration settings and game server code to be deployed to all instances in the game server group. You can specify the template using either the template name or ID. For help with creating a launch template, see [Creating a Launch Template for an Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html) in the *Amazon Elastic Compute Cloud Auto Scaling User Guide* . After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.\n\n> If you specify network interfaces in your launch template, you must explicitly set the property `AssociatePublicIpAddress` to \"true\". If no network interface is specified in the launch template, Amazon GameLift Servers FleetIQ uses your account's default VPC.", + "MaxSize": "The maximum number of instances allowed in the Amazon EC2 Auto Scaling group. During automatic scaling events, Amazon GameLift Servers FleetIQ and EC2 do not scale up the group above this maximum. 
After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", + "MinSize": "The minimum number of instances allowed in the Amazon EC2 Auto Scaling group. During automatic scaling events, Amazon GameLift Servers FleetIQ and Amazon EC2 do not scale down the group below this minimum. In production, this value should be set to at least 1. After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", + "RoleArn": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) for an IAM role that allows Amazon GameLift Servers to access your Amazon EC2 Auto Scaling groups.", "Tags": "A list of labels to assign to the new game server group resource. Tags are developer-defined key-value pairs. Tagging AWS resources is useful for resource management, access management, and cost allocation. For more information, see [Tagging AWS Resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) in the *AWS General Reference* . Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags, respectively. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.", - "VpcSubnets": "A list of virtual private cloud (VPC) subnets to use with instances in the game server group. By default, all Amazon GameLift FleetIQ-supported Availability Zones are used. You can use this parameter to specify VPCs that you've set up. This property cannot be updated after the game server group is created, and the corresponding Auto Scaling group will always use the property value that is set with this request, even if the Auto Scaling group is updated directly." + "VpcSubnets": "A list of virtual private cloud (VPC) subnets to use with instances in the game server group. By default, all Amazon GameLift Servers FleetIQ-supported Availability Zones are used. You can use this parameter to specify VPCs that you've set up. This property cannot be updated after the game server group is created, and the corresponding Auto Scaling group will always use the property value that is set with this request, even if the Auto Scaling group is updated directly." }, "AWS::GameLift::GameServerGroup AutoScalingPolicy": { - "EstimatedInstanceWarmup": "Length of time, in seconds, it takes for a new instance to start new game server processes and register with Amazon GameLift FleetIQ. Specifying a warm-up time can be useful, particularly with game servers that take a long time to start up, because it avoids prematurely starting new instances.", + "EstimatedInstanceWarmup": "Length of time, in seconds, it takes for a new instance to start new game server processes and register with Amazon GameLift Servers FleetIQ. Specifying a warm-up time can be useful, particularly with game servers that take a long time to start up, because it avoids prematurely starting new instances.", "TargetTrackingConfiguration": "Settings for a target-based scaling policy applied to Auto Scaling group. These settings are used to create a target-based policy that tracks the GameLift FleetIQ metric `PercentUtilizedGameServers` and specifies a target value for the metric. As player usage changes, the policy triggers to adjust the game server group capacity so that the metric returns to the target value." 
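The game server group and auto scaling settings described above combine into a single `AWS::GameLift::GameServerGroup` resource. The sketch below is illustrative only; the role ARN, launch template ID, subnet IDs, instance types, and target value are placeholders, not values from this schema.

{
  "SampleGameServerGroup": {
    "Type": "AWS::GameLift::GameServerGroup",
    "Properties": {
      "GameServerGroupName": "sample-game-server-group",
      "BalancingStrategy": "SPOT_PREFERRED",
      "RoleArn": "arn:aws:iam::111122223333:role/sample-gsg-role",
      "MinSize": 1,
      "MaxSize": 10,
      "LaunchTemplate": {
        "LaunchTemplateId": "lt-0123456789abcdef0"
      },
      "InstanceDefinitions": [
        { "InstanceType": "c5.large", "WeightedCapacity": "1" },
        { "InstanceType": "c5.xlarge", "WeightedCapacity": "2" }
      ],
      "AutoScalingPolicy": {
        "EstimatedInstanceWarmup": 60,
        "TargetTrackingConfiguration": {
          "TargetValue": 75
        }
      },
      "VpcSubnets": [
        "subnet-0123456789abcdef0",
        "subnet-0fedcba9876543210"
      ]
    }
  }
}

As noted in the descriptions above, keep `MinSize` at 1 or higher in production, and make later capacity or scaling changes directly on the generated Auto Scaling group rather than through this resource.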
}, "AWS::GameLift::GameServerGroup InstanceDefinition": { "InstanceType": "An Amazon EC2 instance type designation.", - "WeightedCapacity": "Instance weighting that indicates how much this instance type contributes to the total capacity of a game server group. Instance weights are used by Amazon GameLift FleetIQ to calculate the instance type's cost per unit hour and better identify the most cost-effective options. For detailed information on weighting instance capacity, see [Instance Weighting](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-weighting.html) in the *Amazon Elastic Compute Cloud Auto Scaling User Guide* . Default value is \"1\"." + "WeightedCapacity": "Instance weighting that indicates how much this instance type contributes to the total capacity of a game server group. Instance weights are used by Amazon GameLift Servers FleetIQ to calculate the instance type's cost per unit hour and better identify the most cost-effective options. For detailed information on weighting instance capacity, see [Instance Weighting](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-weighting.html) in the *Amazon Elastic Compute Cloud Auto Scaling User Guide* . Default value is \"1\"." }, "AWS::GameLift::GameServerGroup LaunchTemplate": { "LaunchTemplateId": "A unique identifier for an existing Amazon EC2 launch template.", @@ -18931,10 +19302,10 @@ "FilterConfiguration": "A list of locations where a queue is allowed to place new game sessions. Locations are specified in the form of AWS Region codes, such as `us-west-2` . If this parameter is not set, game sessions can be placed in any queue location.", "Name": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region.", "NotificationTarget": "An SNS topic ARN that is set up to receive game session placement notifications. See [Setting up notifications for game session placement](https://docs.aws.amazon.com/gamelift/latest/developerguide/queue-notification.html) .", - "PlayerLatencyPolicies": "A set of policies that enforce a sliding cap on player latency when processing game sessions placement requests. Use multiple policies to gradually relax the cap over time if Amazon GameLift can't make a placement. Policies are evaluated in order starting with the lowest maximum latency value.", + "PlayerLatencyPolicies": "A set of policies that enforce a sliding cap on player latency when processing game sessions placement requests. Use multiple policies to gradually relax the cap over time if Amazon GameLift Servers can't make a placement. Policies are evaluated in order starting with the lowest maximum latency value.", "PriorityConfiguration": "Custom settings to use when prioritizing destinations and locations for game session placements. This configuration replaces the FleetIQ default prioritization process. Priority types that are not explicitly named will be automatically applied at the end of the prioritization process.", "Tags": "A list of labels to assign to the new game session queue resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see [Tagging AWS Resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) in the *AWS General Reference* . Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. 
See the AWS General Reference for actual tagging limits.", - "TimeoutInSeconds": "The maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a `TIMED_OUT` status." + "TimeoutInSeconds": "The maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a `TIMED_OUT` status. If you don't specify a request timeout, the queue uses a default value." }, "AWS::GameLift::GameSessionQueue FilterConfiguration": { "AllowedLocations": "A list of locations to allow game session placement in, in the form of AWS Region codes such as `us-west-2` ." @@ -18947,8 +19318,8 @@ "PolicyDurationSeconds": "The length of time, in seconds, that the policy is enforced while placing a new game session. A null value for this property means that the policy is enforced until the queue times out." }, "AWS::GameLift::GameSessionQueue PriorityConfiguration": { - "LocationOrder": "The prioritization order to use for fleet locations, when the `PriorityOrder` property includes `LOCATION` . Locations can include AWS Region codes (such as `us-west-2` ), local zones, and custom locations (for Anywhere fleets). Each location must be listed only once. For details, see [Amazon GameLift service locations.](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html)", - "PriorityOrder": "A custom sequence to use when prioritizing where to place new game sessions. Each priority type is listed once.\n\n- `LATENCY` -- Amazon GameLift prioritizes locations where the average player latency is lowest. Player latency data is provided in each game session placement request.\n- `COST` -- Amazon GameLift prioritizes destinations with the lowest current hosting costs. Cost is evaluated based on the location, instance type, and fleet type (Spot or On-Demand) of each destination in the queue.\n- `DESTINATION` -- Amazon GameLift prioritizes based on the list order of destinations in the queue configuration.\n- `LOCATION` -- Amazon GameLift prioritizes based on the provided order of locations, as defined in `LocationOrder` ." + "LocationOrder": "The prioritization order to use for fleet locations, when the `PriorityOrder` property includes `LOCATION` . Locations can include AWS Region codes (such as `us-west-2` ), local zones, and custom locations (for Anywhere fleets). Each location must be listed only once. For details, see [Amazon GameLift Servers service locations.](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html)", + "PriorityOrder": "A custom sequence to use when prioritizing where to place new game sessions. Each priority type is listed once.\n\n- `LATENCY` -- Amazon GameLift Servers prioritizes locations where the average player latency is lowest. Player latency data is provided in each game session placement request.\n- `COST` -- Amazon GameLift Servers prioritizes queue destinations with the lowest current hosting costs. Cost is evaluated based on the destination's location, instance type, and fleet type (Spot or On-Demand).\n- `DESTINATION` -- Amazon GameLift Servers prioritizes based on the list order of destinations in the queue configuration.\n- `LOCATION` -- Amazon GameLift Servers prioritizes based on the provided order of locations, as defined in `LocationOrder` ." 
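The queue properties described above fit together on one `AWS::GameLift::GameSessionQueue` resource. A hedged sketch follows; the destination ARN, locations, and latency numbers are placeholders, and the destination and latency field names follow the CloudFormation resource reference rather than the descriptions shown in this section.

{
  "SampleSessionQueue": {
    "Type": "AWS::GameLift::GameSessionQueue",
    "Properties": {
      "Name": "sample-session-queue",
      "TimeoutInSeconds": 60,
      "Destinations": [
        { "DestinationArn": "arn:aws:gamelift:us-west-2:111122223333:fleet/fleet-00000000-0000-0000-0000-000000000000" }
      ],
      "FilterConfiguration": {
        "AllowedLocations": [ "us-west-2", "us-east-1" ]
      },
      "PriorityConfiguration": {
        "PriorityOrder": [ "LATENCY", "COST", "DESTINATION", "LOCATION" ],
        "LocationOrder": [ "us-west-2", "us-east-1" ]
      },
      "PlayerLatencyPolicies": [
        { "MaximumIndividualPlayerLatencyMilliseconds": 100, "PolicyDurationSeconds": 60 },
        { "MaximumIndividualPlayerLatencyMilliseconds": 200 }
      ]
    }
  }
}

Listing the latency policies from the lowest cap upward matches the evaluation order described for `PlayerLatencyPolicies`; the second policy omits a duration, so per `PolicyDurationSeconds` it is enforced until the queue times out.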
}, "AWS::GameLift::GameSessionQueue Tag": { "Key": "The key for a developer-defined key value pair for tagging an AWS resource.", @@ -18970,10 +19341,10 @@ "CreationTime": "A time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example `\"1469498468.057\"` ).", "CustomEventData": "Information to add to all events related to the matchmaking configuration.", "Description": "A description for the matchmaking configuration.", - "FlexMatchMode": "Indicates whether this matchmaking configuration is being used with Amazon GameLift hosting or as a standalone matchmaking solution.\n\n- *STANDALONE* - FlexMatch forms matches and returns match information, including players and team assignments, in a [MatchmakingSucceeded](https://docs.aws.amazon.com/gamelift/latest/flexmatchguide/match-events.html#match-events-matchmakingsucceeded) event.\n- *WITH_QUEUE* - FlexMatch forms matches and uses the specified Amazon GameLift queue to start a game session for the match.", + "FlexMatchMode": "Indicates whether this matchmaking configuration is being used with Amazon GameLift Servers hosting or as a standalone matchmaking solution.\n\n- *STANDALONE* - FlexMatch forms matches and returns match information, including players and team assignments, in a [MatchmakingSucceeded](https://docs.aws.amazon.com/gamelift/latest/flexmatchguide/match-events.html#match-events-matchmakingsucceeded) event.\n- *WITH_QUEUE* - FlexMatch forms matches and uses the specified Amazon GameLift Servers queue to start a game session for the match.", "GameProperties": "A set of custom properties for a game session, formatted as key-value pairs. These properties are passed to a game server process with a request to start a new game session. See [Start a Game Session](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-sdk-server-api.html#gamelift-sdk-server-startsession) . This parameter is not used if `FlexMatchMode` is set to `STANDALONE` .", "GameSessionData": "A set of custom game session properties, formatted as a single string value. This data is passed to a game server process with a request to start a new game session. See [Start a Game Session](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-sdk-server-api.html#gamelift-sdk-server-startsession) . This parameter is not used if `FlexMatchMode` is set to `STANDALONE` .", - "GameSessionQueueArns": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) that is assigned to a Amazon GameLift game session queue resource and uniquely identifies it. ARNs are unique across all Regions. Format is `arn:aws:gamelift:::gamesessionqueue/` . Queues can be located in any Region. Queues are used to start new Amazon GameLift-hosted game sessions for matches that are created with this matchmaking configuration. If `FlexMatchMode` is set to `STANDALONE` , do not set this parameter.", + "GameSessionQueueArns": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) that is assigned to a Amazon GameLift Servers game session queue resource and uniquely identifies it. ARNs are unique across all Regions. Format is `arn:aws:gamelift:::gamesessionqueue/` . Queues can be located in any Region. Queues are used to start new Amazon GameLift Servers-hosted game sessions for matches that are created with this matchmaking configuration. 
If `FlexMatchMode` is set to `STANDALONE` , do not set this parameter.", "Name": "A unique identifier for the matchmaking configuration. This name is used to identify the configuration associated with a matchmaking request or ticket.", "NotificationTarget": "An SNS topic ARN that is set up to receive matchmaking notifications. See [Setting up notifications for matchmaking](https://docs.aws.amazon.com/gamelift/latest/flexmatchguide/match-notification.html) for more information.", "RequestTimeoutSeconds": "The maximum duration, in seconds, that a matchmaking ticket can remain in process before timing out. Requests that fail due to timing out can be resubmitted as needed.", @@ -19000,20 +19371,49 @@ }, "AWS::GameLift::Script": { "Name": "A descriptive label that is associated with a script. Script names do not need to be unique.", - "StorageLocation": "The location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the \"key\"), and a role ARN that allows Amazon GameLift to access the Amazon S3 storage location. The S3 bucket must be in the same Region where you want to create a new script. By default, Amazon GameLift uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the `ObjectVersion` parameter to specify an earlier version.", + "StorageLocation": "The location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the \"key\"), and a role ARN that allows Amazon GameLift Servers to access the Amazon S3 storage location. The S3 bucket must be in the same Region where you want to create a new script. By default, Amazon GameLift Servers uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the `ObjectVersion` parameter to specify an earlier version.", "Tags": "A list of labels to assign to the new script resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see [Tagging AWS Resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) in the *AWS General Reference* . Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.", "Version": "The version that is associated with a build or script. Version strings do not need to be unique." }, "AWS::GameLift::Script S3Location": { - "Bucket": "An Amazon S3 bucket identifier. Thename of the S3 bucket.\n\n> Amazon GameLift doesn't support uploading from Amazon S3 buckets with names that contain a dot (.).", + "Bucket": "An Amazon S3 bucket identifier. Thename of the S3 bucket.\n\n> Amazon GameLift Servers doesn't support uploading from Amazon S3 buckets with names that contain a dot (.).", "Key": "The name of the zip file that contains the build files or script files.", - "ObjectVersion": "The version of the file, if object versioning is turned on for the bucket. Amazon GameLift uses this information when retrieving files from an S3 bucket that you own. Use this parameter to specify a specific version of the file. 
If not set, the latest version of the file is retrieved.", - "RoleArn": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) for an IAM role that allows Amazon GameLift to access the S3 bucket." + "ObjectVersion": "The version of the file, if object versioning is turned on for the bucket. Amazon GameLift Servers uses this information when retrieving files from an S3 bucket that you own. Use this parameter to specify a specific version of the file. If not set, the latest version of the file is retrieved.", + "RoleArn": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) for an IAM role that allows Amazon GameLift Servers to access the S3 bucket." }, "AWS::GameLift::Script Tag": { "Key": "The key for a developer-defined key value pair for tagging an AWS resource.", "Value": "The value for a developer-defined key value pair for tagging an AWS resource." }, + "AWS::GameLiftStreams::Application": { + "ApplicationLogOutputUri": "An Amazon S3 URI to a bucket where you would like Amazon GameLift Streams to save application logs. Required if you specify one or more `ApplicationLogPaths` .", + "ApplicationLogPaths": "Locations of log files that your content generates during a stream session. Enter path values that are relative to the `ApplicationSourceUri` location. You can specify up to 10 log paths. Amazon GameLift Streams uploads designated log files to the Amazon S3 bucket that you specify in `ApplicationLogOutputUri` at the end of a stream session. To retrieve stored log files, call [GetStreamSession](https://docs.aws.amazon.com/gameliftstreams/latest/apireference/API_GetStreamSession.html) and get the `LogFileLocationUri` .", + "ApplicationSourceUri": "The location of the content that you want to stream. Enter an Amazon S3 URI to a bucket that contains your game or other application. The location can have a multi-level prefix structure, but it must include all the files needed to run the content. Amazon GameLift Streams copies everything under the specified location.\n\nThis value is immutable. To designate a different content location, create a new application.\n\n> The Amazon S3 bucket and the Amazon GameLift Streams application must be in the same AWS Region.", + "Description": "A human-readable label for the application. You can update this value later.", + "ExecutablePath": "The path and file name of the executable file that launches the content for streaming. Enter a path value that is relative to the location set in `ApplicationSourceUri` .", + "RuntimeEnvironment": "A set of configuration settings to run the application on a stream group. This configures the operating system, and can include compatibility layers and other drivers.", + "Tags": "A list of labels to assign to the new application resource. Tags are developer-defined key-value pairs. Tagging AWS resources is useful for resource management, access management and cost allocation. See [Tagging AWS Resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) in the *AWS General Reference* ." + }, + "AWS::GameLiftStreams::Application RuntimeEnvironment": { + "Type": "The operating system and other drivers. For Proton, this also includes the Proton compatibility layer.", + "Version": "Versioned container environment for the application operating system." 
+ }, + "AWS::GameLiftStreams::StreamGroup": { + "DefaultApplication": "Object that identifies the Amazon GameLift Streams application to stream with this stream group.", + "Description": "A descriptive label for the stream group.", + "LocationConfigurations": "A set of one or more locations and the streaming capacity for each location. One of the locations MUST be your primary location, which is the AWS Region where you are specifying this resource.", + "StreamClass": "The target stream quality for sessions that are hosted in this stream group. Set a stream class that is appropriate to the type of content that you're streaming. Stream class determines the type of computing resources Amazon GameLift Streams uses and impacts the cost of streaming. The following options are available:\n\nA stream class can be one of the following:\n\n- *`gen5n_win2022` (NVIDIA, ultra)* Supports applications with extremely high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.4, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA A10G Tensor GPU.\n\n- Reference resolution: 1080p\n- Reference frame rate: 60 fps\n- Workload specifications: 8 vCPUs, 32 GB RAM, 24 GB VRAM\n- Tenancy: Supports 1 concurrent stream session\n- *`gen5n_high` (NVIDIA, high)* Supports applications with moderate to high 3D scene complexity. Uses NVIDIA A10G Tensor GPU.\n\n- Reference resolution: 1080p\n- Reference frame rate: 60 fps\n- Workload specifications: 4 vCPUs, 16 GB RAM, 12 GB VRAM\n- Tenancy: Supports up to 2 concurrent stream sessions\n- *`gen5n_ultra` (NVIDIA, ultra)* Supports applications with extremely high 3D scene complexity. Uses dedicated NVIDIA A10G Tensor GPU.\n\n- Reference resolution: 1080p\n- Reference frame rate: 60 fps\n- Workload specifications: 8 vCPUs, 32 GB RAM, 24 GB VRAM\n- Tenancy: Supports 1 concurrent stream session\n- *`gen4n_win2022` (NVIDIA, ultra)* Supports applications with extremely high 3D scene complexity. Runs applications on Microsoft Windows Server 2022 Base and supports DirectX 12. Compatible with Unreal Engine versions up through 5.4, 32 and 64-bit applications, and anti-cheat technology. Uses NVIDIA T4 Tensor GPU.\n\n- Reference resolution: 1080p\n- Reference frame rate: 60 fps\n- Workload specifications: 8 vCPUs, 32 GB RAM, 16 GB VRAM\n- Tenancy: Supports 1 concurrent stream session\n- *`gen4n_high` (NVIDIA, high)* Supports applications with moderate to high 3D scene complexity. Uses NVIDIA T4 Tensor GPU.\n\n- Reference resolution: 1080p\n- Reference frame rate: 60 fps\n- Workload specifications: 4 vCPUs, 16 GB RAM, 8 GB VRAM\n- Tenancy: Supports up to 2 concurrent stream sessions\n- *`gen4n_ultra` (NVIDIA, ultra)* Supports applications with high 3D scene complexity. Uses dedicated NVIDIA T4 Tensor GPU.\n\n- Reference resolution: 1080p\n- Reference frame rate: 60 fps\n- Workload specifications: 8 vCPUs, 32 GB RAM, 16 GB VRAM\n- Tenancy: Supports 1 concurrent stream session", + "Tags": "A list of labels to assign to the new stream group resource. Tags are developer-defined key-value pairs. Tagging AWS resources is useful for resource management, access management and cost allocation. See [Tagging AWS Resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) in the *AWS General Reference* ." 
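For illustration, a minimal CloudFormation sketch of the new AWS::GameLiftStreams resource types described above; the bucket URI, executable path, log path, RuntimeEnvironment Type/Version values, and capacity numbers are assumed placeholders, the DefaultApplication ARN reuses the format example from the documentation, and the DefaultApplication and LocationConfigurations shapes follow the property entries documented below.

    "StreamsApplication": {
      "Type": "AWS::GameLiftStreams::Application",
      "Properties": {
        "Description": "Sample streamed application",
        "ApplicationSourceUri": "s3://amzn-s3-demo-bucket/game-build/",
        "ExecutablePath": "game/launcher.exe",
        "ApplicationLogOutputUri": "s3://amzn-s3-demo-bucket/stream-logs/",
        "ApplicationLogPaths": ["logs/"],
        "RuntimeEnvironment": { "Type": "WINDOWS", "Version": "2022" }
      }
    },
    "StreamsGroup": {
      "Type": "AWS::GameLiftStreams::StreamGroup",
      "Properties": {
        "Description": "Primary stream group",
        "StreamClass": "gen5n_high",
        "DefaultApplication": { "Arn": "arn:aws:gameliftstreams:us-west-2:123456789012:application/a-9ZY8X7Wv6" },
        "LocationConfigurations": [
          { "LocationName": "us-west-2", "AlwaysOnCapacity": 1, "OnDemandCapacity": 2 }
        ]
      }
    }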
+ }, + "AWS::GameLiftStreams::StreamGroup DefaultApplication": { + "Arn": "An [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html) that uniquely identifies the application resource. Format example: `arn:aws:gameliftstreams:us-west-2:123456789012:application/a-9ZY8X7Wv6` .", + "Id": "An ID that uniquely identifies the application resource. For example: `a-9ZY8X7Wv6` ." + }, + "AWS::GameLiftStreams::StreamGroup LocationConfiguration": { + "AlwaysOnCapacity": "The streaming capacity that is allocated and ready to handle stream requests without delay. You pay for this capacity whether it's in use or not. Best for quickest time from streaming request to streaming session.", + "LocationName": "A location's name. For example, `us-east-1` . For a complete list of locations that Amazon GameLift Streams supports, refer to [Regions and quotas](https://docs.aws.amazon.com/gameliftstreams/latest/developerguide/regions-quotas.html) in the *Amazon GameLift Streams Developer Guide* .", + "OnDemandCapacity": "The streaming capacity that Amazon GameLift Streams can allocate in response to stream requests, and then de-allocate when the session has terminated. This offers a cost control measure at the expense of a greater startup time (typically under 5 minutes)." + }, "AWS::GlobalAccelerator::Accelerator": { "Enabled": "Indicates whether the accelerator is enabled. The value is true or false. The default value is true.\n\nIf the value is set to true, the accelerator cannot be deleted. If set to false, accelerator can be deleted.", "IpAddressType": "The IP address type that an accelerator supports. For a standard accelerator, the value can be IPV4 or DUAL_STACK.", @@ -20616,10 +21016,18 @@ "RoleName": "The name of the role to associate the policy with.\n\nThis parameter allows (through its [regex pattern](https://docs.aws.amazon.com/http://wikipedia.org/wiki/regex) ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-" }, "AWS::IAM::SAMLProvider": { + "AddPrivateKey": "Specifies the new private key from your external identity provider. The private key must be a .pem file that uses AES-GCM or AES-CBC encryption algorithm to decrypt SAML assertions.", + "AssertionEncryptionMode": "Specifies the encryption setting for the SAML provider.", "Name": "The name of the provider to create.\n\nThis parameter allows (through its [regex pattern](https://docs.aws.amazon.com/http://wikipedia.org/wiki/regex) ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-", + "PrivateKeyList": "The private key metadata for the SAML provider.", + "RemovePrivateKey": "The Key ID of the private key to remove.", "SamlMetadataDocument": "An XML document generated by an identity provider (IdP) that supports SAML 2.0. The document includes the issuer's name, expiration information, and keys that can be used to validate the SAML authentication response (assertions) that are received from the IdP. You must generate the metadata document using the identity management software that is used as your organization's IdP.\n\nFor more information, see [About SAML 2.0-based federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html) in the *IAM User Guide*", "Tags": "A list of tags that you want to attach to the new IAM SAML provider. 
Each tag consists of a key name and an associated value. For more information about tagging, see [Tagging IAM resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) in the *IAM User Guide* .\n\n> If any one of the tags is invalid or if you exceed the allowed maximum number of tags, then the entire request fails and the resource is not created." }, + "AWS::IAM::SAMLProvider SAMLPrivateKey": { + "KeyId": "The unique identifier for the SAML private key.", + "Timestamp": "The date and time, in [ISO 8601 date-time](https://docs.aws.amazon.com/http://www.iso.org/iso/iso8601) format, when the private key was uploaded." + }, "AWS::IAM::SAMLProvider Tag": { "Key": "The key name that can be used to look up or retrieve the associated value. For example, `Department` or `Cost Center` are common choices.", "Value": "The value associated with this tag. For example, tags with a key name of `Department` could have values such as `Human Resources` , `Accounting` , and `Support` . Tags with a key name of `Cost Center` might have values that consist of the number associated with the different cost centers in your company. Typically, many resources have tags with the same key name but with different values.\n\n> AWS always interprets the tag `Value` as a single string. If you need to store an array, you can store comma-separated values in the string. However, you must interpret the value in your code." @@ -21412,7 +21820,8 @@ "CaCertificateExpiringCheck": "Checks if a CA certificate is expiring. This check applies to CA certificates expiring within 30 days or that have expired.", "CaCertificateKeyQualityCheck": "Checks the quality of the CA certificate key. The quality checks if the key is in a valid format, not expired, and if the key meets a minimum required size. This check applies to CA certificates that are `ACTIVE` or `PENDING_TRANSFER` .", "ConflictingClientIdsCheck": "Checks if multiple devices connect using the same client ID.", - "DeviceCertificateExpiringCheck": "Checks if a device certificate is expiring. This check applies to device certificates expiring within 30 days or that have expired.", + "DeviceCertificateAgeCheck": "Checks when a device certificate has been active for a number of days greater than or equal to the number you specify.", + "DeviceCertificateExpiringCheck": "Checks if a device certificate is expiring. By default, this check applies to device certificates expiring within 30 days or that have expired. You can modify this threshold by configuring the DeviceCertExpirationAuditCheckConfiguration.", "DeviceCertificateKeyQualityCheck": "Checks the quality of the device certificate key. The quality checks if the key is in a valid format, not expired, signed by a registered certificate authority, and if the key meets a minimum required size.", "DeviceCertificateSharedCheck": "Checks if multiple concurrent connections use the same X.509 certificate to authenticate with AWS IoT .", "IntermediateCaRevokedForActiveDeviceCertificatesCheck": "Checks if device certificates are still active despite being revoked by an intermediate CA.", @@ -21433,6 +21842,20 @@ "AWS::IoT::AccountAuditConfiguration AuditNotificationTargetConfigurations": { "Sns": "The `Sns` notification target." }, + "AWS::IoT::AccountAuditConfiguration CertAgeCheckCustomConfiguration": { + "CertAgeThresholdInDays": "The number of days that defines when a device certificate is considered to have aged. 
The check will report a finding if a certificate has been active for a number of days greater than or equal to this threshold value." + }, + "AWS::IoT::AccountAuditConfiguration CertExpirationCheckCustomConfiguration": { + "CertExpirationThresholdInDays": "The number of days before expiration that defines when a device certificate is considered to be approaching expiration. The check will report a finding if a certificate will expire within this number of days." + }, + "AWS::IoT::AccountAuditConfiguration DeviceCertAgeAuditCheckConfiguration": { + "Configuration": "Configuration settings for the device certificate age check, including the threshold in days for certificate age. This configuration is of type `CertAgeCheckCustomConfiguration` .", + "Enabled": "True if this audit check is enabled for this account." + }, + "AWS::IoT::AccountAuditConfiguration DeviceCertExpirationAuditCheckConfiguration": { + "Configuration": "Configuration settings for the device certificate expiration check, including the threshold in days before expiration. This configuration is of type `CertExpirationCheckCustomConfiguration`", + "Enabled": "True if this audit check is enabled for this account." + }, "AWS::IoT::Authorizer": { "AuthorizerFunctionArn": "The authorizer's Lambda function ARN.", "AuthorizerName": "The authorizer name.", @@ -21834,21 +22257,35 @@ "Value": "The tag's value." }, "AWS::IoT::SoftwarePackage": { - "Description": "", - "PackageName": "", - "Tags": "" + "Description": "A summary of the package being created. This can be used to outline the package's contents or purpose.", + "PackageName": "The name of the new software package.", + "Tags": "Metadata that can be used to manage the package." }, "AWS::IoT::SoftwarePackage Tag": { "Key": "The tag's key.", "Value": "The tag's value." }, "AWS::IoT::SoftwarePackageVersion": { + "Artifact": "", "Attributes": "Metadata that can be used to define a package version\u2019s configuration. For example, the S3 file location, configuration options that are being sent to the device or fleet.\n\nThe combined size of all the attributes on a package version is limited to 3KB.", "Description": "A summary of the package version being created. This can be used to outline the package's contents or purpose.", "PackageName": "The name of the associated software package.", + "Recipe": "", + "Sbom": "", "Tags": "Metadata that can be used to manage the package version.", "VersionName": "The name of the new package version." }, + "AWS::IoT::SoftwarePackageVersion PackageVersionArtifact": { + "S3Location": "" + }, + "AWS::IoT::SoftwarePackageVersion S3Location": { + "Bucket": "", + "Key": "", + "Version": "" + }, + "AWS::IoT::SoftwarePackageVersion Sbom": { + "S3Location": "" + }, "AWS::IoT::SoftwarePackageVersion Tag": { "Key": "The tag's key.", "Value": "The tag's value." @@ -21883,7 +22320,7 @@ "ThingName": "The name of the AWS IoT thing." }, "AWS::IoT::ThingType": { - "DeprecateThingType": "Deprecates a thing type. You can not associate new things with deprecated thing type. You cannot update `ThingTypeProperties` if the thing type is deprecated.\n\nRequires permission to access the [DeprecateThingType](https://docs.aws.amazon.com//service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions) action.", + "DeprecateThingType": "Deprecates a thing type. 
You can not associate new things with deprecated thing type.\n\nRequires permission to access the [DeprecateThingType](https://docs.aws.amazon.com//service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions) action.", "Tags": "Metadata which can be used to manage the thing type.", "ThingTypeName": "The name of the thing type.", "ThingTypeProperties": "The thing type properties for the thing type to create. It contains information about the new thing type including a description, a list of searchable thing attribute names, and a list of propagating attributes. After a thing type is created, you can only update `Mqtt5Configuration` ." @@ -21906,7 +22343,7 @@ "ThingTypeDescription": "The description of the thing type." }, "AWS::IoT::TopicRule": { - "RuleName": "The name of the rule.\n\n*Pattern* : `^[a-zA-Z0-9_]+$`", + "RuleName": "The name of the rule.", "Tags": "Metadata which can be used to manage the topic rule.\n\n> For URI Request parameters use format: ...key1=value1&key2=value2...\n> \n> For the CLI command-line parameter use format: --tags \"key1=value1&key2=value2...\"\n> \n> For the cli-input-json file use format: \"tags\": \"key1=value1&key2=value2...\"", "TopicRulePayload": "The rule payload." }, @@ -22853,6 +23290,7 @@ "MessageId": "The ID of the message.", "Name": "The name of the signal.", "Offset": "The offset used to calculate the signal value. Combined with factor, the calculation is `value = raw_value * factor + offset` .", + "SignalValueType": "The value type of the signal. The default value is `INTEGER` .", "StartBit": "Indicates the beginning of the CAN message." }, "AWS::IoTFleetWise::DecoderManifest CanSignalDecoder": { @@ -22901,11 +23339,13 @@ "BitMaskLength": "The number of bits to mask in a message.", "BitRightShift": "The number of positions to shift bits in the message.", "ByteLength": "The length of a message.", + "IsSigned": "Determines whether the message is signed ( `true` ) or not ( `false` ). If it's signed, the message can represent both positive and negative numbers. The `isSigned` parameter only applies to the `INTEGER` raw signal type, and it doesn't affect the `FLOATING_POINT` raw signal type. The default value is `false` .", "Offset": "The offset used to calculate the signal value. Combined with scaling, the calculation is `value = raw_value * scaling + offset` .", "Pid": "The diagnostic code used to request data from a vehicle for this signal.", "PidResponseLength": "The length of the requested data.", "Scaling": "A multiplier used to decode the message.", "ServiceMode": "The mode of operation (diagnostic service) in a message.", + "SignalValueType": "The value type of the signal. The default value is `INTEGER` .", "StartByte": "Indicates the beginning of the message." }, "AWS::IoTFleetWise::DecoderManifest ObdSignalDecoder": { @@ -23198,7 +23638,7 @@ "GatewayCapabilitySummaries": "A list of gateway capability summaries that each contain a namespace and status. Each gateway capability defines data sources for the gateway. To retrieve a capability configuration's definition, use [DescribeGatewayCapabilityConfiguration](https://docs.aws.amazon.com/iot-sitewise/latest/APIReference/API_DescribeGatewayCapabilityConfiguration.html) .", "GatewayName": "A unique name for the gateway.", "GatewayPlatform": "The gateway's platform. You can only specify one platform in a gateway.", - "GatewayVersion": "", + "GatewayVersion": "The version of the gateway. 
A value of `3` indicates an MQTT-enabled, V3 gateway, while `2` indicates a Classic streams, V2 gateway.", "Tags": "A list of key-value pairs that contain metadata for the gateway. For more information, see [Tagging your AWS IoT SiteWise resources](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/tag-resources.html) in the *AWS IoT SiteWise User Guide* ." }, "AWS::IoTSiteWise::Gateway GatewayCapabilitySummary": { @@ -23206,9 +23646,8 @@ "CapabilityNamespace": "The namespace of the capability configuration. For example, if you configure OPC-UA sources from the AWS IoT SiteWise console, your OPC-UA capability configuration has the namespace `iotsitewise:opcuacollector:version` , where `version` is a number such as `1` ." }, "AWS::IoTSiteWise::Gateway GatewayPlatform": { - "Greengrass": "A gateway that runs on AWS IoT Greengrass .", "GreengrassV2": "A gateway that runs on AWS IoT Greengrass V2 .", - "SiemensIE": "A AWS IoT SiteWise Edge gateway that runs on a Siemens Industrial Edge Device." + "SiemensIE": "An AWS IoT SiteWise Edge gateway that runs on a Siemens Industrial Edge Device." }, "AWS::IoTSiteWise::Gateway GreengrassV2": { "CoreDeviceOperatingSystem": "", @@ -23238,7 +23677,7 @@ "NotificationLambdaArn": "The [ARN](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) of the Lambda function that manages alarm notifications. For more information, see [Managing alarm notifications](https://docs.aws.amazon.com/iotevents/latest/developerguide/lambda-support.html) in the *AWS IoT Events Developer Guide* ." }, "AWS::IoTSiteWise::Portal PortalTypeEntry": { - "PortalTools": "" + "PortalTools": "The array of tools associated with the specified portal type. The possible values are `ASSISTANT` and `DASHBOARD` ." }, "AWS::IoTSiteWise::Portal Tag": { "Key": "The key or name that identifies the tag.", @@ -23745,7 +24184,7 @@ "Capacity": "The connector's compute capacity settings.", "ConnectorConfiguration": "The configuration of the connector.", "ConnectorDescription": "The description of the connector.", - "ConnectorName": "The name of the connector.", + "ConnectorName": "The name of the connector.\n\nThe connector name must be unique and can include up to 128 characters. Valid characters you can include in a connector name are: a-z, A-Z, 0-9, and -.", "KafkaCluster": "The details of the Apache Kafka cluster to which the connector is connected.", "KafkaClusterClientAuthentication": "The type of client authentication used to connect to the Apache Kafka cluster. The value is NONE when no client authentication is used.", "KafkaClusterEncryptionInTransit": "Details of encryption in transit to the Apache Kafka cluster.", @@ -25007,7 +25446,7 @@ "KeyPassphrase": "Passphrase to decrypt the private key when the key is encrypted. For information, see [Using Key Pair Authentication & Key Rotation](https://docs.aws.amazon.com/https://docs.snowflake.com/en/user-guide/data-load-snowpipe-streaming-configuration#using-key-pair-authentication-key-rotation) .", "MetaDataColumnName": "Specify a column name in the table, where the metadata information has to be loaded. 
When you enable this field, you will see the following column in the snowflake table, which differs based on the source type.\n\nFor Direct PUT as source\n\n`{ \"firehoseDeliveryStreamName\" : \"streamname\", \"IngestionTime\" : \"timestamp\" }`\n\nFor Kinesis Data Stream as source\n\n`\"kinesisStreamName\" : \"streamname\", \"kinesisShardId\" : \"Id\", \"kinesisPartitionKey\" : \"key\", \"kinesisSequenceNumber\" : \"1234\", \"subsequenceNumber\" : \"2334\", \"IngestionTime\" : \"timestamp\" }`", "PrivateKey": "The private key used to encrypt your Snowflake client. For information, see [Using Key Pair Authentication & Key Rotation](https://docs.aws.amazon.com/https://docs.snowflake.com/en/user-guide/data-load-snowpipe-streaming-configuration#using-key-pair-authentication-key-rotation) .", - "ProcessingConfiguration": "Specifies configuration for Snowflake.", + "ProcessingConfiguration": "", "RetryOptions": "The time period where Firehose will retry sending data to the chosen HTTP endpoint.", "RoleARN": "The Amazon Resource Name (ARN) of the Snowflake role", "S3BackupMode": "Choose an S3 backup mode", @@ -25105,7 +25544,7 @@ "CreateTableDefaultPermissions": "Specifies whether access control on a newly created table is managed by Lake Formation permissions or exclusively by IAM permissions.\n\nA null value indicates that the access is controlled by Lake Formation permissions. `ALL` permissions assigned to `IAM_ALLOWED_PRINCIPALS` group indicate that the user's IAM permissions determine the access to the table. This is referred to as the setting \"Use only IAM access control,\" and is to support the backward compatibility with the AWS Glue permission model implemented by IAM permissions.\n\nThe only permitted values are an empty array or an array that contains a single JSON object that grants `ALL` permissions to `IAM_ALLOWED_PRINCIPALS` .\n\nFor more information, see [Changing the default security settings for your data lake](https://docs.aws.amazon.com/lake-formation/latest/dg/change-settings.html) .", "ExternalDataFilteringAllowList": "A list of the account IDs of AWS accounts with Amazon EMR clusters or third-party engines that are allwed to perform data filtering.", "MutationType": "Specifies whether the data lake settings are updated by adding new values to the current settings ( `APPEND` ) or by replacing the current settings with new settings ( `REPLACE` ).\n\n> If you choose `REPLACE` , your current data lake settings will be replaced with the new values in your template.", - "Parameters": "A key-value map that provides an additional configuration on your data lake. `CrossAccountVersion` is the key you can configure in the `Parameters` field. Accepted values for the `CrossAccountVersion` key are 1, 2, and 3.", + "Parameters": "A key-value map that provides an additional configuration on your data lake. `CrossAccountVersion` is the key you can configure in the `Parameters` field. Accepted values for the `CrossAccountVersion` key are 1, 2, 3, and 4.", "TrustedResourceOwners": "An array of UTF-8 strings.\n\nA list of the resource-owning account IDs that the caller's account can use to share their user access details (user ARNs). The user ARNs can be logged in the resource owner's CloudTrail log. You may want to specify this property when you are in a high-trust boundary, such as the same team or company." 
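For illustration, a minimal AWS::LakeFormation::DataLakeSettings fragment using the MutationType and Parameters fields described above; whether CrossAccountVersion is supplied as a string or a number is an assumption here, so treat the value format as a sketch.

    "DataLakeSettings": {
      "Type": "AWS::LakeFormation::DataLakeSettings",
      "Properties": {
        "MutationType": "APPEND",
        "Parameters": { "CrossAccountVersion": "4" }
      }
    }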
}, "AWS::LakeFormation::DataLakeSettings DataLakePrincipal": { @@ -25303,8 +25742,8 @@ "Qualifier": "The identifier of a version or alias.\n\n- *Version* - A version number.\n- *Alias* - An alias name.\n- *Latest* - To specify the unpublished version, use `$LATEST` ." }, "AWS::Lambda::EventInvokeConfig DestinationConfig": { - "OnFailure": "The destination configuration for failed invocations.", - "OnSuccess": "The destination configuration for successful invocations." + "OnFailure": "The destination configuration for failed invocations.\n\n> When using an Amazon SQS queue as a destination, FIFO queues cannot be used.", + "OnSuccess": "The destination configuration for successful invocations.\n\n> When using an Amazon SQS queue as a destination, FIFO queues cannot be used." }, "AWS::Lambda::EventInvokeConfig OnFailure": { "Destination": "The Amazon Resource Name (ARN) of the destination resource.\n\nTo retain records of unsuccessful [asynchronous invocations](https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html#invocation-async-destinations) , you can configure an Amazon SNS topic, Amazon SQS queue, Amazon S3 bucket, Lambda function, or Amazon EventBridge event bus as the destination.\n\nTo retain records of failed invocations from [Kinesis](https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html) , [DynamoDB](https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html) , [self-managed Kafka](https://docs.aws.amazon.com/lambda/latest/dg/with-kafka.html#services-smaa-onfailure-destination) or [Amazon MSK](https://docs.aws.amazon.com/lambda/latest/dg/with-msk.html#services-msk-onfailure-destination) , you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination." @@ -25315,7 +25754,7 @@ "AWS::Lambda::EventSourceMapping": { "AmazonManagedKafkaEventSourceConfig": "Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.", "BatchSize": "The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).\n\n- *Amazon Kinesis* \u2013 Default 100. Max 10,000.\n- *Amazon DynamoDB Streams* \u2013 Default 100. Max 10,000.\n- *Amazon Simple Queue Service* \u2013 Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.\n- *Amazon Managed Streaming for Apache Kafka* \u2013 Default 100. Max 10,000.\n- *Self-managed Apache Kafka* \u2013 Default 100. Max 10,000.\n- *Amazon MQ (ActiveMQ and RabbitMQ)* \u2013 Default 100. Max 10,000.\n- *DocumentDB* \u2013 Default 100. Max 10,000.", - "BisectBatchOnFunctionError": "(Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.", + "BisectBatchOnFunctionError": "(Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.\n\n> When using `BisectBatchOnFunctionError` , check the `BatchSize` parameter in the `OnFailure` destination message's metadata. 
The `BatchSize` could be greater than 1 since Lambda consolidates failed messages metadata when writing to the `OnFailure` destination.", "DestinationConfig": "(Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.", "DocumentDBEventSourceConfig": "Specific configuration settings for a DocumentDB event source.", "Enabled": "When true, the event source mapping is active. When false, Lambda pauses polling and invocation.\n\nDefault: True", @@ -26399,7 +26838,7 @@ "Tags": "Applies one or more tags to the map resource. A tag is a key-value pair that helps manage, identify, search, and filter your resources by labelling them." }, "AWS::Location::APIKey ApiKeyRestrictions": { - "AllowActions": "A list of allowed actions that an API key resource grants permissions to perform. You must have at least one action for each type of resource. For example, if you have a place resource, you must include at least one place action.\n\nThe following are valid values for the actions.\n\n- *Map actions*\n\n- `geo:GetMap*` - Allows all actions needed for map rendering.\n- *Place actions*\n\n- `geo:SearchPlaceIndexForText` - Allows geocoding.\n- `geo:SearchPlaceIndexForPosition` - Allows reverse geocoding.\n- `geo:SearchPlaceIndexForSuggestions` - Allows generating suggestions from text.\n- `geo:GetPlace` - Allows finding a place by place ID.\n- *Route actions*\n\n- `geo:CalculateRoute` - Allows point to point routing.\n- `geo:CalculateRouteMatrix` - Allows calculating a matrix of routes.\n\n> You must use these strings exactly. For example, to provide access to map rendering, the only valid action is `geo:GetMap*` as an input to the list. `[\"geo:GetMap*\"]` is valid but `[\"geo:GetMapTile\"]` is not. Similarly, you cannot use `[\"geo:SearchPlaceIndexFor*\"]` - you must list each of the Place actions separately.", + "AllowActions": "A list of allowed actions that an API key resource grants permissions to perform. You must have at least one action for each type of resource. 
For example, if you have a place resource, you must include at least one place action.\n\nThe following are valid values for the actions.\n\n- *Map actions*\n\n- `geo:GetMap*` - Allows all actions needed for map rendering.\n- *Enhanced Maps actions*\n\n- `geo-maps:GetTile` - Allows getting map tiles for rendering.\n- `geo-maps:GetStaticMap` - Allows getting static map images.\n- *Place actions*\n\n- `geo:SearchPlaceIndexForText` - Allows finding geo coordinates of a known place.\n- `geo:SearchPlaceIndexForPosition` - Allows getting nearest address to geo coordinates.\n- `geo:SearchPlaceIndexForSuggestions` - Allows suggestions based on an incomplete or misspelled query.\n- `geo:GetPlace` - Allows getting details of a place.\n- *Enhanced Places actions*\n\n- `geo-places:Autcomplete` - Allows auto-completion of search text.\n- `geo-places:Geocode` - Allows finding geo coordinates of a known place.\n- `geo-places:GetPlace` - Allows getting details of a place.\n- `geo-places:ReverseGeocode` - Allows getting nearest address to geo coordinates.\n- `geo-places:SearchNearby` - Allows category based places search around geo coordinates.\n- `geo-places:SearchText` - Allows place or address search based on free-form text.\n- `geo-places:Suggest` - Allows suggestions based on an incomplete or misspelled query.\n- *Route actions*\n\n- `geo:CalculateRoute` - Allows point to point routing.\n- `geo:CalculateRouteMatrix` - Allows matrix routing.\n- *Enhanced Routes actions*\n\n- `geo-routes:CalculateIsolines` - Allows isoline calculation.\n- `geo-routes:CalculateRoutes` - Allows point to point routing.\n- `geo-routes:CalculateRouteMatrix` - Allows matrix routing.\n- `geo-routes:OptimizeWaypoints` - Allows computing the best sequence of waypoints.\n- `geo-routes:SnapToRoads` - Allows snapping GPS points to a likely route.\n\n> You must use these strings exactly. For example, to provide access to map rendering, the only valid action is `geo:GetMap*` as an input to the list. `[\"geo:GetMap*\"]` is valid but `[\"geo:GetTile\"]` is not. Similarly, you cannot use `[\"geo:SearchPlaceIndexFor*\"]` - you must list each of the Place actions separately.", "AllowReferers": "An optional list of allowed HTTP referers for which requests must originate from. Requests using this API key from other domains will not be allowed.\n\nRequirements:\n\n- Contain only alphanumeric characters (A\u2013Z, a\u2013z, 0\u20139) or any symbols in this list `$\\-._+!*`(),;/?:@=&`\n- May contain a percent (%) if followed by 2 hexadecimal digits (A-F, a-f, 0-9); this is used for URL encoding purposes.\n- May contain wildcard characters question mark (?) and asterisk (*).\n\nQuestion mark (?) will replace any single character (including hexadecimal digits).\n\nAsterisk (*) will replace any multiple characters (including multiple hexadecimal digits).\n- No spaces allowed. For example, `https://example.com` .", "AllowResources": "A list of allowed resource ARNs that a API key bearer can perform actions on.\n\n- The ARN must be the correct ARN for a map, place, or route ARN. You may include wildcards in the resource-id to match multiple resources of the same type.\n- The resources must be in the same `partition` , `region` , and `account-id` as the key that is being created.\n- Other than wildcards, you must include the full ARN, including the `arn` , `partition` , `service` , `region` , `account-id` and `resource-id` delimited by colons (:).\n- No spaces allowed, even with wildcards. 
For example, `arn:aws:geo:region: *account-id* :map/ExampleMap*` .\n\nFor more information about ARN format, see [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) ." }, @@ -26949,23 +27388,23 @@ "Fsx": "Defines the storage configuration for an Amazon FSx file system." }, "AWS::MSK::BatchScramSecret": { - "ClusterArn": "", - "SecretArnList": "" + "ClusterArn": "The Amazon Resource Name (ARN) that uniquely identifies the cluster.", + "SecretArnList": "List of Amazon Resource Name (ARN)s of Secrets Manager secrets." }, "AWS::MSK::Cluster": { - "BrokerNodeGroupInfo": "", - "ClientAuthentication": "", - "ClusterName": "", - "ConfigurationInfo": "", - "CurrentVersion": "", - "EncryptionInfo": "", - "EnhancedMonitoring": "", - "KafkaVersion": "", - "LoggingInfo": "", - "NumberOfBrokerNodes": "", - "OpenMonitoring": "", - "StorageMode": "", - "Tags": "" + "BrokerNodeGroupInfo": "Information about the broker nodes in the cluster.", + "ClientAuthentication": "Includes all client authentication related information.", + "ClusterName": "The name of the cluster.", + "ConfigurationInfo": "Represents the configuration that you want MSK to use for the cluster.", + "CurrentVersion": "The version of the cluster that you want to update.", + "EncryptionInfo": "Includes all encryption-related information.", + "EnhancedMonitoring": "Specifies the level of monitoring for the MSK cluster.", + "KafkaVersion": "The version of Apache Kafka. You can use Amazon MSK to create clusters that use [supported Apache Kafka versions](https://docs.aws.amazon.com/msk/latest/developerguide/supported-kafka-versions.html) .", + "LoggingInfo": "Logging info details for the cluster.", + "NumberOfBrokerNodes": "The number of broker nodes in the cluster.", + "OpenMonitoring": "The settings for open monitoring.", + "StorageMode": "This controls storage mode for supported storage tiers.", + "Tags": "An arbitrary set of tags (key-value pairs) for the cluster." }, "AWS::MSK::Cluster BrokerLogs": { "CloudWatchLogs": "", @@ -26973,12 +27412,12 @@ "S3": "" }, "AWS::MSK::Cluster BrokerNodeGroupInfo": { - "BrokerAZDistribution": "", - "ClientSubnets": "", - "ConnectivityInfo": "", + "BrokerAZDistribution": "This parameter is currently not in use.", + "ClientSubnets": "The list of subnets to connect to in the client virtual private cloud (VPC). Amazon creates elastic network interfaces (ENIs) inside these subnets. Client applications use ENIs to produce and consume data.\n\nIf you use the US West (N. California) Region, specify exactly two subnets. For other Regions where Amazon MSK is available, you can specify either two or three subnets. The subnets that you specify must be in distinct Availability Zones. When you create a cluster, Amazon MSK distributes the broker nodes evenly across the subnets that you specify.\n\nClient subnets can't occupy the Availability Zone with ID `use1-az3` .", + "ConnectivityInfo": "Information about the cluster's connectivity setting.", "InstanceType": "The type of Amazon EC2 instances to use for brokers. The following instance types are allowed: kafka.m5.large, kafka.m5.xlarge, kafka.m5.2xlarge, kafka.m5.4xlarge, kafka.m5.8xlarge, kafka.m5.12xlarge, kafka.m5.16xlarge, kafka.m5.24xlarge, and kafka.t3.small.", - "SecurityGroups": "", - "StorageInfo": "" + "SecurityGroups": "The security groups to associate with the ENIs in order to specify who can connect to and communicate with the Amazon MSK cluster. 
If you don't specify a security group, Amazon MSK uses the default security group associated with the VPC. If you specify security groups that were shared with you, you must ensure that you have permissions to them. Specifically, you need the `ec2:DescribeSecurityGroups` permission.", + "StorageInfo": "Contains information about storage volumes attached to Amazon MSK broker nodes." }, "AWS::MSK::Cluster ClientAuthentication": { "Sasl": "", @@ -27005,12 +27444,12 @@ "DataVolumeKMSKeyId": "" }, "AWS::MSK::Cluster EncryptionInTransit": { - "ClientBroker": "", - "InCluster": "" + "ClientBroker": "Indicates the encryption setting for data in transit between clients and brokers. You must set it to one of the following values.\n\n- `TLS` : Indicates that client-broker communication is enabled with TLS only.\n- `TLS_PLAINTEXT` : Indicates that client-broker communication is enabled for both TLS-encrypted, as well as plaintext data.\n- `PLAINTEXT` : Indicates that client-broker communication is enabled in plaintext only.\n\nThe default value is `TLS` .", + "InCluster": "When set to true, it indicates that data communication among the broker nodes of the cluster is encrypted. When set to false, the communication happens in plaintext.\n\nThe default value is true." }, "AWS::MSK::Cluster EncryptionInfo": { "EncryptionAtRest": "", - "EncryptionInTransit": "" + "EncryptionInTransit": "The details for encryption in transit." }, "AWS::MSK::Cluster Firehose": { "DeliveryStream": "", @@ -27089,19 +27528,18 @@ "Policy": "Resource policy for the cluster." }, "AWS::MSK::Configuration": { - "Description": "", - "KafkaVersionsList": "", - "LatestRevision": "", - "Name": "", - "ServerProperties": "" + "Description": "The description of the configuration.", + "KafkaVersionsList": "The [versions of Apache Kafka](https://docs.aws.amazon.com/msk/latest/developerguide/supported-kafka-versions.html) with which you can use this MSK configuration.\n\nWhen you update the `KafkaVersionsList` property, AWS CloudFormation recreates a new configuration with the updated property before deleting the old configuration. Such an update requires a [resource replacement](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html#update-replacement) . To successfully update `KafkaVersionsList` , you must also update the `Name` property in the same operation.\n\nIf your configuration is attached with any clusters created using the AWS Management Console or AWS CLI , you'll need to manually delete the old configuration from the console after the update completes.\n\nFor more information, see [Can\u2019t update KafkaVersionsList in MSK configuration](https://docs.aws.amazon.com/msk/latest/developerguide/troubleshooting.html#troubleshoot-kafkaversionslist-cfn-update-failure) in the *Amazon MSK Developer Guide* .", + "LatestRevision": "Latest revision of the MSK configuration.", + "Name": "The name of the configuration. Configuration names are strings that match the regex \"^[0-9A-Za-z][0-9A-Za-z-]{0,}$\".", + "ServerProperties": "Contents of the `server.properties` file. When using this property, you must ensure that the contents of the file are base64 encoded. When using the console, the SDK, or the AWS CLI , the contents of `server.properties` can be in plaintext." 
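For illustration, a minimal AWS::MSK::Configuration sketch reflecting the base64 requirement called out for ServerProperties above; the configuration name, Kafka version, and the base64 payload (left as a placeholder) are illustrative.

    "MskConfiguration": {
      "Type": "AWS::MSK::Configuration",
      "Properties": {
        "Name": "example-msk-configuration",
        "Description": "Base broker settings",
        "KafkaVersionsList": ["3.6.0"],
        "ServerProperties": "<base64-encoded contents of server.properties>"
      }
    }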
}, "AWS::MSK::Configuration LatestRevision": { - "CreationTime": "", - "Description": "", - "Revision": "" + "CreationTime": "The time when the configuration revision was created.", + "Description": "The description of the configuration revision.", + "Revision": "The revision number." }, "AWS::MSK::Replicator": { - "CurrentVersion": "The current version number of the replicator.", "Description": "A summary description of the replicator.", "KafkaClusters": "Kafka Clusters to use in setting up sources / targets for replication.", "ReplicationInfoList": "A list of replication configurations, where each configuration targets a given source cluster to target cluster replication flow.", @@ -27153,10 +27591,10 @@ "TopicsToReplicate": "List of regular expression patterns indicating the topics to copy." }, "AWS::MSK::ServerlessCluster": { - "ClientAuthentication": "", - "ClusterName": "", - "Tags": "", - "VpcConfigs": "" + "ClientAuthentication": "Includes all client authentication related information.", + "ClusterName": "The name of the cluster.", + "Tags": "An arbitrary set of tags (key-value pairs) for the cluster.", + "VpcConfigs": "VPC configuration information for the serverless cluster." }, "AWS::MSK::ServerlessCluster ClientAuthentication": { "Sasl": "" @@ -27173,11 +27611,11 @@ }, "AWS::MSK::VpcConnection": { "Authentication": "The type of private link authentication.", - "ClientSubnets": "", - "SecurityGroups": "", - "Tags": "", - "TargetClusterArn": "", - "VpcId": "" + "ClientSubnets": "The list of subnets in the client VPC to connect to.", + "SecurityGroups": "The security groups to attach to the ENIs for the broker nodes.", + "Tags": "An arbitrary set of tags (key-value pairs) you specify while creating the VPC connection.", + "TargetClusterArn": "The Amazon Resource Name (ARN) of the cluster.", + "VpcId": "The VPC ID of the remote client." }, "AWS::MWAA::Environment": { "AirflowConfigurationOptions": "A list of key-value pairs containing the Airflow configuration options for your environment. For example, `core.default_timezone: utc` . To learn more, see [Apache Airflow configuration options](https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-env-variables.html) .", @@ -29888,7 +30326,7 @@ "AWS::NeptuneGraph::Graph": { "DeletionProtection": "A value that indicates whether the graph has deletion protection enabled. The graph can't be deleted when deletion protection is enabled.", "GraphName": "The graph name. For example: `my-graph-1` .\n\nThe name must contain from 1 to 63 letters, numbers, or hyphens, and its first character must be a letter. It cannot end with a hyphen or contain two consecutive hyphens.\n\nIf you don't specify a graph name, a unique graph name is generated for you using the prefix `graph-for` , followed by a combination of `Stack Name` and a `UUID` .", - "ProvisionedMemory": "The provisioned memory-optimized Neptune Capacity Units (m-NCUs) to use for the graph.\n\nMin = 128", + "ProvisionedMemory": "The provisioned memory-optimized Neptune Capacity Units (m-NCUs) to use for the graph.\n\nMin = 16", "PublicConnectivity": "Specifies whether or not the graph can be reachable over the internet. All access to graphs is IAM authenticated.\n\nWhen the graph is publicly available, its domain name system (DNS) endpoint resolves to the public IP address from the internet. 
When the graph isn't publicly available, you need to create a `PrivateGraphEndpoint` in a given VPC to ensure the DNS name resolves to a private IP address that is reachable from the VPC.\n\nDefault: If not specified, the default value is false.\n\n> If enabling public connectivity for the first time, there will be a delay while it is enabled.", "ReplicaCount": "The number of replicas in other AZs.\n\nDefault: If not specified, the default value is 1.", "Tags": "Adds metadata tags to the new graph. These tags can also be used with cost allocation reporting, or used in a Condition statement in an IAM policy.", @@ -30035,10 +30473,10 @@ "ReferenceArn": "The Amazon Resource Name (ARN) of the resource to include in the `RuleGroup.IPSetReference` ." }, "AWS::NetworkFirewall::RuleGroup MatchAttributes": { - "DestinationPorts": "The destination ports to inspect for. If not specified, this matches with any destination port. This setting is only used for protocols 6 (TCP) and 17 (UDP).\n\nYou can specify individual ports, for example `1994` and you can specify port ranges, for example `1990:1994` .", + "DestinationPorts": "The destination port to inspect for. You can specify an individual port, for example `1994` and you can specify a port range, for example `1990:1994` . To match with any port, specify `ANY` .\n\nThis setting is only used for protocols 6 (TCP) and 17 (UDP).", "Destinations": "The destination IP addresses and address ranges to inspect for, in CIDR notation. If not specified, this matches with any destination address.", - "Protocols": "The protocols to inspect for, specified using each protocol's assigned internet protocol number (IANA). If not specified, this matches with any protocol.", - "SourcePorts": "The source ports to inspect for. If not specified, this matches with any source port. This setting is only used for protocols 6 (TCP) and 17 (UDP).\n\nYou can specify individual ports, for example `1994` and you can specify port ranges, for example `1990:1994` .", + "Protocols": "The protocols to inspect for, specified using the assigned internet protocol number (IANA) for each protocol. If not specified, this matches with any protocol.", + "SourcePorts": "The source port to inspect for. You can specify an individual port, for example `1994` and you can specify a port range, for example `1990:1994` . To match with any port, specify `ANY` .\n\nIf not specified, this matches with any source port.\n\nThis setting is only used for protocols 6 (TCP) and 17 (UDP).", "Sources": "The source IP addresses and address ranges to inspect for, in CIDR notation. If not specified, this matches with any source address.", "TCPFlags": "The TCP flags and masks to inspect for. If not specified, this matches with any settings. This setting is only used for protocol 6 (TCP)." }, @@ -30137,7 +30575,7 @@ "AWS::NetworkFirewall::TLSInspectionConfiguration ServerCertificateScope": { "DestinationPorts": "The destination ports to decrypt for inspection, in Transmission Control Protocol (TCP) format. If not specified, this matches with any destination port.\n\nYou can specify individual ports, for example `1994` , and you can specify port ranges, such as `1990:1994` .", "Destinations": "The destination IP addresses and address ranges to decrypt for inspection, in CIDR notation. If not specified, this\nmatches with any destination address.", - "Protocols": "The protocols to decrypt for inspection, specified using each protocol's assigned internet protocol number\n(IANA). 
Network Firewall currently supports only TCP.", + "Protocols": "The protocols to inspect for, specified using the assigned internet protocol number (IANA) for each protocol. If not specified, this matches with any protocol.\n\nNetwork Firewall currently supports only TCP.", "SourcePorts": "The source ports to decrypt for inspection, in Transmission Control Protocol (TCP) format. If not specified, this matches with any source port.\n\nYou can specify individual ports, for example `1994` , and you can specify port ranges, such as `1990:1994` .", "Sources": "The source IP addresses and address ranges to decrypt for inspection, in CIDR notation. If not specified, this\nmatches with any source address." }, @@ -30635,16 +31073,16 @@ "AWS::Oam::Link": { "LabelTemplate": "Specify a friendly human-readable name to use to identify this source account when you are viewing data from it in the monitoring account.\n\nYou can include the following variables in your template:\n\n- `$AccountName` is the name of the account\n- `$AccountEmail` is a globally-unique email address, which includes the email domain, such as `mariagarcia@example.com`\n- `$AccountEmailNoDomain` is an email address without the domain name, such as `mariagarcia`", "LinkConfiguration": "Use this structure to optionally create filters that specify that only some metric namespaces or log groups are to be shared from the source account to the monitoring account.", - "ResourceTypes": "An array of strings that define which types of data that the source account shares with the monitoring account. Valid values are `AWS::CloudWatch::Metric | AWS::Logs::LogGroup | AWS::XRay::Trace | AWS::ApplicationInsights::Application | AWS::InternetMonitor::Monitor` .", + "ResourceTypes": "An array of strings that define which types of data that the source account shares with the monitoring account. Valid values are `AWS::CloudWatch::Metric | AWS::Logs::LogGroup | AWS::XRay::Trace | AWS::ApplicationInsights::Application | AWS::InternetMonitor::Monitor | AWS::ApplicationSignals::Service | AWS::ApplicationSignals::ServiceLevelObjective` .", "SinkIdentifier": "The ARN of the sink in the monitoring account that you want to link to. You can use [ListSinks](https://docs.aws.amazon.com/OAM/latest/APIReference/API_ListSinks.html) to find the ARNs of sinks.", "Tags": "An array of key-value pairs to apply to the link.\n\nFor more information, see [Tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html) ." }, "AWS::Oam::Link LinkConfiguration": { - "LogGroupConfiguration": "Use this structure to filter which log groups are to share log events from this source account to the monitoring account.", + "LogGroupConfiguration": "Use this structure to filter which log groups are to send log events from the source account to the monitoring account.", "MetricConfiguration": "Use this structure to filter which metric namespaces are to be shared from the source account to the monitoring account." }, "AWS::Oam::Link LinkFilter": { - "Filter": "When used in `MetricConfiguration` this field specifies which metric namespaces are to be shared with the monitoring account\n\nWhen used in `LogGroupConfiguration` this field specifies which log groups are to share their log events with the monitoring account. Use the term `LogGroupName` and one or more of the following operands.\n\nUse single quotation marks (') around log group names and metric namespaces.\n\nThe matching of log group names and metric namespaces is case sensitive. 
Each filter has a limit of five conditional operands. Conditional operands are `AND` and `OR` .\n\n- `=` and `!=`\n- `AND`\n- `OR`\n- `LIKE` and `NOT LIKE` . These can be used only as prefix searches. Include a `%` at the end of the string that you want to search for and include.\n- `IN` and `NOT IN` , using parentheses `( )`\n\nExamples:\n\n- `Namespace NOT LIKE 'AWS/%'` includes only namespaces that don't start with `AWS/` , such as custom namespaces.\n- `Namespace IN ('AWS/EC2', 'AWS/ELB', 'AWS/S3')` includes only the metrics in the EC2, Elastic Load Balancing , and Amazon S3 namespaces.\n- `Namespace = 'AWS/EC2' OR Namespace NOT LIKE 'AWS/%'` includes only the EC2 namespace and your custom namespaces.\n- `LogGroupName IN ('This-Log-Group', 'Other-Log-Group')` includes only the log groups with names `This-Log-Group` and `Other-Log-Group` .\n- `LogGroupName NOT IN ('Private-Log-Group', 'Private-Log-Group-2')` includes all log groups except the log groups with names `Private-Log-Group` and `Private-Log-Group-2` .\n- `LogGroupName LIKE 'aws/lambda/%' OR LogGroupName LIKE 'AWSLogs%'` includes all log groups that have names that start with `aws/lambda/` or `AWSLogs` .\n\n> If you are updating a link that uses filters, you can specify `*` as the only value for the `filter` parameter to delete the filter and share all log groups with the monitoring account." + "Filter": "" }, "AWS::Oam::Sink": { "Name": "A name for the sink.", @@ -30750,6 +31188,41 @@ "Key": "The key to use in the tag.", "Value": "The value of the tag." }, + "AWS::OpenSearchServerless::Index": { + "CollectionEndpoint": "", + "IndexName": "", + "Mappings": "", + "Settings": "" + }, + "AWS::OpenSearchServerless::Index Index": { + "Knn": "", + "KnnAlgoParamEfSearch": "", + "RefreshInterval": "" + }, + "AWS::OpenSearchServerless::Index IndexSettings": { + "Index": "" + }, + "AWS::OpenSearchServerless::Index Mappings": { + "Properties": "" + }, + "AWS::OpenSearchServerless::Index Method": { + "Engine": "", + "Name": "", + "Parameters": "", + "SpaceType": "" + }, + "AWS::OpenSearchServerless::Index Parameters": { + "EfConstruction": "", + "M": "" + }, + "AWS::OpenSearchServerless::Index PropertyMapping": { + "Dimension": "", + "Index": "", + "Method": "", + "Properties": "", + "Type": "", + "Value": "" + }, "AWS::OpenSearchServerless::LifecyclePolicy": { "Description": "The description of the lifecycle policy.", "Name": "The name of the lifecycle policy.", @@ -31270,6 +31743,7 @@ "VpcInformation": "Information of the VPC and security group(s) used with the connector." }, "AWS::PCAConnectorAD::Connector VpcInformation": { + "IpAddressType": "", "SecurityGroupIds": "The security groups used with the connector. You can use a maximum of 4 security groups with a connector." }, "AWS::PCAConnectorAD::DirectoryRegistration": { @@ -31564,7 +32038,7 @@ "Tags": "1 or more tags added to the resource. Each tag consists of a tag key and tag value. The tag value is optional and can be an empty string." }, "AWS::PCS::ComputeNodeGroup CustomLaunchTemplate": { - "Id": "The ID of the EC2 launch template to use to provision instances.", + "TemplateId": "The ID of the EC2 launch template to use to provision instances.", "Version": "The version of the EC2 launch template to use to provision instances." 
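For illustration, the renamed key in context: a CustomLaunchTemplate fragment for an AWS::PCS::ComputeNodeGroup using TemplateId as documented above; the launch template ID and version are placeholders, and the surrounding ComputeNodeGroup properties are omitted.

    "CustomLaunchTemplate": {
      "TemplateId": "lt-0123456789abcdef0",
      "Version": "1"
    }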
}, "AWS::PCS::ComputeNodeGroup ErrorInfo": { @@ -32489,7 +32963,7 @@ }, "AWS::Pipes::Pipe PipeTargetCloudWatchLogsParameters": { "LogStreamName": "The name of the log stream.", - "Timestamp": "The time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC." + "Timestamp": "A [dynamic path parameter](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html) to a field in the payload containing the time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.\n\nThe value cannot be a static timestamp as the provided timestamp would be applied to all events delivered by the Pipe, regardless of when they are actually delivered.\n\nIf no dynamic path parameter is provided, the default value is the time the invocation is processed by the Pipe." }, "AWS::Pipes::Pipe PipeTargetEcsTaskParameters": { "CapacityProviderStrategy": "The capacity provider strategy to use for the task.\n\nIf a `capacityProviderStrategy` is specified, the `launchType` parameter must be omitted. If no `capacityProviderStrategy` or launchType is specified, the `defaultCapacityProviderStrategy` for the cluster is used.", @@ -33083,6 +33557,7 @@ "ContributionAnalysisDefaults": "The contribution analysis (anomaly configuration) setup of the visual.", "DataLabels": "The options that determine if visual data labels are displayed.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "Orientation": "The orientation of the bars in a bar chart visual. There are two valid values in this structure:\n\n- `HORIZONTAL` : Used for charts that have horizontal bars. Visuals that use this value are horizontal bar charts, horizontal stacked bar charts, and horizontal stacked 100% bar charts.\n- `VERTICAL` : Used for charts that have vertical bars. Visuals that use this value are vertical bar charts, vertical stacked bar charts, and vertical stacked 100% bar charts.", "ReferenceLines": "The reference line setup of the visual.", @@ -33123,12 +33598,35 @@ "AWS::QuickSight::Analysis BodySectionConfiguration": { "Content": "The configuration of content in a body section.", "PageBreakConfiguration": "The configuration of a page break for a section.", + "RepeatConfiguration": "Describes the configurations that are required to declare a section as repeating.", "SectionId": "The unique identifier of a body section.", "Style": "The style options of a body section." }, "AWS::QuickSight::Analysis BodySectionContent": { "Layout": "The layout configuration of a body section." }, + "AWS::QuickSight::Analysis BodySectionDynamicCategoryDimensionConfiguration": { + "Column": "", + "Limit": "Number of values to use from the column for repetition.", + "SortByMetrics": "Sort criteria on the column values that you use for repetition." + }, + "AWS::QuickSight::Analysis BodySectionDynamicNumericDimensionConfiguration": { + "Column": "", + "Limit": "Number of values to use from the column for repetition.", + "SortByMetrics": "Sort criteria on the column values that you use for repetition." + }, + "AWS::QuickSight::Analysis BodySectionRepeatConfiguration": { + "DimensionConfigurations": "List of `BodySectionRepeatDimensionConfiguration` values that describe the dataset column and constraints for the column used to repeat the contents of a section.", + "NonRepeatingVisuals": "List of visuals to exclude from repetition in repeating sections. 
The visuals will render identically, and ignore the repeating configurations in all repeating instances.", + "PageBreakConfiguration": "Page break configuration to apply for each repeating instance." + }, + "AWS::QuickSight::Analysis BodySectionRepeatDimensionConfiguration": { + "DynamicCategoryDimensionConfiguration": "Describes the *Category* dataset column and constraints around the dynamic values that will be used in repeating the section contents.", + "DynamicNumericDimensionConfiguration": "Describes the *Numeric* dataset column and constraints around the dynamic values used to repeat the contents of a section." + }, + "AWS::QuickSight::Analysis BodySectionRepeatPageBreakConfiguration": { + "After": "" + }, "AWS::QuickSight::Analysis BoxPlotAggregatedFieldWells": { "GroupBy": "The group by field well of a box plot chart. Values are grouped based on group by fields.", "Values": "The value field well of a box plot chart. Values are aggregated based on group by fields." @@ -33138,6 +33636,7 @@ "CategoryAxis": "The label display options (grid line, range, scale, axis step) of a box plot category.", "CategoryLabelOptions": "The label options (label text, label visibility and sort Icon visibility) of a box plot category.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "", "PrimaryYAxisDisplayOptions": "The label display options (grid line, range, scale, axis step) of a box plot category.", "PrimaryYAxisLabelOptions": "The label options (label text, label visibility and sort icon visibility) of a box plot value.", @@ -33277,6 +33776,7 @@ "CategoryLabelOptions": "The label options (label text, label visibility, and sort icon visibility) of a combo chart category (group/color) field well.", "ColorLabelOptions": "The label options (label text, label visibility, and sort icon visibility) of a combo chart's color field well.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "LineDataLabels": "The options that determine if visual data labels are displayed.\n\nThe data label options for a line in a combo chart.", "PrimaryYAxisDisplayOptions": "The label display options (grid line, range, scale, and axis step) of a combo chart's primary y-axis (bar) field well.", @@ -33360,6 +33860,9 @@ "Color": "Determines the color.", "Expression": "The expression that determines the formatting configuration for solid color." }, + "AWS::QuickSight::Analysis ContextMenuOption": { + "AvailabilityStatus": "The availability status of the context menu options. If the value of this property is set to `ENABLED` , dashboard readers can interact with the context menu." + }, "AWS::QuickSight::Analysis ContributionAnalysisDefault": { "ContributorDimensions": "The dimensions columns that are used in the contribution analysis, usually a list of `ColumnIdentifiers` .", "MeasureFieldId": "The measure field that is used in the contribution analysis." @@ -33396,7 +33899,8 @@ "AWS::QuickSight::Analysis CustomContentConfiguration": { "ContentType": "The content type of the custom content visual. You can use this to have the visual render as an image.", "ContentUrl": "The input URL that links to the custom content that you want in the custom visual.", - "ImageScaling": "The sizing options for the size of the custom content visual. This structure is required when the `ContentType` of the visual is `'IMAGE'` ." 
+ "ImageScaling": "The sizing options for the size of the custom content visual. This structure is required when the `ContentType` of the visual is `'IMAGE'` .", + "Interactions": "The general visual interactions setup for a visual." }, "AWS::QuickSight::Analysis CustomContentVisual": { "Actions": "The list of custom actions that are configured for a visual.", @@ -33539,7 +34043,9 @@ "ValueWhenUnset": "The configuration that defines the default value of a `DateTime` parameter when a value has not been set." }, "AWS::QuickSight::Analysis DateTimePickerControlDisplayOptions": { + "DateIconVisibility": "The date icon visibility of the `DateTimePickerControlDisplayOptions` .", "DateTimeFormat": "Customize how dates are formatted in controls.", + "HelperTextVisibility": "The helper text visibility of the `DateTimePickerControlDisplayOptions` .", "InfoIconLabelOptions": "The configuration of info icon label options.", "TitleOptions": "The options to configure the title visibility, name, and font size." }, @@ -33729,6 +34235,7 @@ }, "AWS::QuickSight::Analysis FilledMapConfiguration": { "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "MapStyleOptions": "The map style options of the filled map visual.", "SortConfiguration": "The sort configuration of a `FilledMapVisual` .", @@ -33868,6 +34375,7 @@ "AWS::QuickSight::Analysis FontConfiguration": { "FontColor": "Determines the color of the text.", "FontDecoration": "Determines the appearance of decorative lines on the text.", + "FontFamily": "The font family that you want to use.", "FontSize": "The option that determines the text display size.", "FontStyle": "Determines the text display face that is inherited by the given font family.", "FontWeight": "The option that determines the text display weight, or boldness." @@ -33948,6 +34456,7 @@ "CategoryLabelOptions": "The label options of the categories that are displayed in a `FunnelChartVisual` .", "DataLabelOptions": "The options that determine the presentation of the data labels.", "FieldWells": "The field well configuration of a `FunnelChartVisual` .", + "Interactions": "The general visual interactions setup for a visual.", "SortConfiguration": "The sort configuration of a `FunnelChartVisual` .", "Tooltip": "The tooltip configuration of a `FunnelChartVisual` .", "ValueLabelOptions": "The label options for the values that are displayed in a `FunnelChartVisual` .", @@ -33981,6 +34490,10 @@ "AWS::QuickSight::Analysis GaugeChartArcConditionalFormatting": { "ForegroundColor": "The conditional formatting of the arc foreground color." }, + "AWS::QuickSight::Analysis GaugeChartColorConfiguration": { + "BackgroundColor": "The background color configuration of a `GaugeChartVisual` .", + "ForegroundColor": "The foreground color configuration of a `GaugeChartVisual` ." + }, "AWS::QuickSight::Analysis GaugeChartConditionalFormatting": { "ConditionalFormattingOptions": "Conditional formatting options of a `GaugeChartVisual` ." }, @@ -33989,9 +34502,11 @@ "PrimaryValue": "The conditional formatting for the primary value of a `GaugeChartVisual` ." 
}, "AWS::QuickSight::Analysis GaugeChartConfiguration": { + "ColorConfiguration": "The color configuration of a `GaugeChartVisual` .", "DataLabels": "The data label configuration of a `GaugeChartVisual` .", "FieldWells": "The field well configuration of a `GaugeChartVisual` .", "GaugeChartOptions": "The options that determine the presentation of the `GaugeChartVisual` .", + "Interactions": "The general visual interactions setup for a visual.", "TooltipOptions": "The tooltip configuration of a `GaugeChartVisual` .", "VisualPalette": "The visual palette configuration of a `GaugeChartVisual` ." }, @@ -34249,6 +34764,7 @@ "ColumnLabelOptions": "The label options of the column that is displayed in a heat map.", "DataLabels": "The options that determine if visual data labels are displayed.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "RowLabelOptions": "The label options of the row that is displayed in a `heat map` .", "SortConfiguration": "The sort configuration of a heat map.", @@ -34285,6 +34801,7 @@ "BinOptions": "The options that determine the presentation of histogram bins.", "DataLabels": "The data label configuration of a histogram.", "FieldWells": "The field well configuration of a histogram.", + "Interactions": "The general visual interactions setup for a visual.", "Tooltip": "The tooltip configuration of a histogram.", "VisualPalette": "The visual palette configuration of a histogram.", "XAxisDisplayOptions": "The options that determine the presentation of the x-axis.", @@ -34329,7 +34846,8 @@ }, "AWS::QuickSight::Analysis InsightConfiguration": { "Computations": "The computations configurations of the insight visual", - "CustomNarrative": "The custom narrative of the insight visual." + "CustomNarrative": "The custom narrative of the insight visual.", + "Interactions": "The general visual interactions setup for a visual." }, "AWS::QuickSight::Analysis InsightVisual": { "Actions": "The list of custom actions that are configured for a visual.", @@ -34382,6 +34900,7 @@ }, "AWS::QuickSight::Analysis KPIConfiguration": { "FieldWells": "The field well configuration of a KPI visual.", + "Interactions": "The general visual interactions setup for a visual.", "KPIOptions": "The options that determine the presentation of a KPI visual.", "SortConfiguration": "The sort configuration of a KPI visual." }, @@ -34487,6 +35006,7 @@ "DefaultSeriesSettings": "The options that determine the default presentation of all line series in `LineChartVisual` .", "FieldWells": "The field well configuration of a line chart.", "ForecastConfigurations": "The forecast configuration of a line chart.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend configuration of a line chart.", "PrimaryYAxisDisplayOptions": "The series axis configuration of a line chart.", "PrimaryYAxisLabelOptions": "The options that determine the presentation of the y-axis label.", @@ -34816,6 +35336,7 @@ "DataLabels": "The options that determine if visual data labels are displayed.", "DonutOptions": "The options that determine the shape of the chart. 
This option determines whether the chart is a pie chart or a donut chart.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "SmallMultiplesOptions": "The small multiples setup for the visual.", "SortConfiguration": "The sort configuration of a pie chart.", @@ -34868,6 +35389,7 @@ "AWS::QuickSight::Analysis PivotTableConfiguration": { "FieldOptions": "The field options for a pivot table visual.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "PaginatedReportOptions": "The paginated report options for a pivot table visual.", "SortConfiguration": "The sort configuration for a `PivotTableVisual` .", "TableOptions": "The table options for a pivot table visual.", @@ -35023,6 +35545,7 @@ "ColorAxis": "The color axis of a radar chart.", "ColorLabelOptions": "The color label options of a radar chart.", "FieldWells": "The field well configuration of a `RadarChartVisual` .", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "Shape": "The shape of the radar chart.", "SortConfiguration": "The sort configuration of a `RadarChartVisual` .", @@ -35135,6 +35658,7 @@ "AWS::QuickSight::Analysis SankeyDiagramChartConfiguration": { "DataLabels": "The data label configuration of a sankey diagram.", "FieldWells": "The field well configuration of a sankey diagram.", + "Interactions": "The general visual interactions setup for a visual.", "SortConfiguration": "The sort configuration of a sankey diagram." }, "AWS::QuickSight::Analysis SankeyDiagramFieldWells": { @@ -35163,7 +35687,9 @@ "AWS::QuickSight::Analysis ScatterPlotConfiguration": { "DataLabels": "The options that determine if visual data labels are displayed.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", + "SortConfiguration": "The sort configuration of a scatter plot.", "Tooltip": "The legend display setup of the visual.", "VisualPalette": "The palette (chart color) display setup of the visual.", "XAxisDisplayOptions": "The label display options (grid line, range, scale, and axis step) of the scatter plot's x-axis.", @@ -35175,6 +35701,9 @@ "ScatterPlotCategoricallyAggregatedFieldWells": "The aggregated field wells of a scatter plot. The x and y-axes of scatter plots with aggregated field wells are aggregated by category, label, or both.", "ScatterPlotUnaggregatedFieldWells": "The unaggregated field wells of a scatter plot. The x and y-axes of these scatter plots are unaggregated." }, + "AWS::QuickSight::Analysis ScatterPlotSortConfiguration": { + "ScatterPlotLimitConfiguration": "" + }, "AWS::QuickSight::Analysis ScatterPlotUnaggregatedFieldWells": { "Category": "The category field well of a scatter plot.", "Label": "The label field well of a scatter plot.", @@ -35240,7 +35769,6 @@ "BackgroundColor": "The conditional formatting for the shape background color of a filled map visual." }, "AWS::QuickSight::Analysis Sheet": { - "Images": "A list of images on a sheet.", "Name": "The name of a sheet. This name is displayed on the sheet's tab in the Amazon QuickSight console.", "SheetId": "The unique identifier associated with a sheet." 
}, @@ -35429,6 +35957,7 @@ "AWS::QuickSight::Analysis TableConfiguration": { "FieldOptions": "The field options for a table visual.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "PaginatedReportOptions": "The paginated report options for a table visual.", "SortConfiguration": "The sort configuration for a `TableVisual` .", "TableInlineVisualizations": "A collection of inline visualizations to display within a chart.", @@ -35543,6 +36072,7 @@ "TitleOptions": "The options to configure the title visibility, name, and font size." }, "AWS::QuickSight::Analysis ThousandSeparatorOptions": { + "GroupingStyle": "Determines the way numbers are styled to accommodate different readability standards. The `DEFAULT` value uses the standard international grouping system and groups numbers by the thousands. The `LAKHS` value uses the Indian numbering system and groups numbers by lakhs and crores.", "Symbol": "Determines the thousands separator symbol.", "Visibility": "Determines the visibility of the thousands separator." }, @@ -35653,6 +36183,7 @@ "DataLabels": "The options that determine if visual data labels are displayed.", "FieldWells": "The field wells of the visual.", "GroupLabelOptions": "The label options (label text, label visibility) of the groups that are displayed in a tree map.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "SizeLabelOptions": "The label options (label text, label visibility) of the sizes that are displayed in a tree map.", "SortConfiguration": "The sort configuration of a tree map.", @@ -35733,6 +36264,13 @@ "SetParametersOperation": "The set parameter operation that sets parameters in custom action.", "URLOperation": "The URL operation that opens a link to another webpage." }, + "AWS::QuickSight::Analysis VisualInteractionOptions": { + "ContextMenuOption": "The context menu options for a visual.", + "VisualMenuOption": "The on-visual menu options for a visual." + }, + "AWS::QuickSight::Analysis VisualMenuOption": { + "AvailabilityStatus": "The availaiblity status of a visual's menu options." + }, "AWS::QuickSight::Analysis VisualPalette": { "ChartColor": "The chart color options for the visual palette.", "ColorMap": "The color map options for the visual palette." @@ -35759,6 +36297,7 @@ "ColorConfiguration": "The color configuration of a waterfall visual.", "DataLabels": "The data label configuration of a waterfall visual.", "FieldWells": "The field well configuration of a waterfall visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend configuration of a waterfall visual.", "PrimaryYAxisDisplayOptions": "The options that determine the presentation of the y-axis.", "PrimaryYAxisLabelOptions": "The options that determine the presentation of the y-axis label.", @@ -35806,6 +36345,7 @@ "AWS::QuickSight::Analysis WordCloudChartConfiguration": { "CategoryLabelOptions": "The label options (label text, label visibility, and sort icon visibility) for the word cloud category.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "SortConfiguration": "The sort configuration of a word cloud visual.", "WordCloudOptions": "The options for a word cloud visual." 
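Note on the many `Interactions` additions in the QuickSight hunks above: each one takes the `VisualInteractionOptions` structure documented in this diff (`ContextMenuOption` and `VisualMenuOption`, each with an `AvailabilityStatus`). A minimal sketch, assuming every chart configuration that gained `Interactions` accepts the same shape:

```yaml
# Sketch only: disables the context menu while keeping the on-visual menu.
Interactions:
  VisualMenuOption:
    AvailabilityStatus: ENABLED     # keep the on-visual menu
  ContextMenuOption:
    AvailabilityStatus: DISABLED    # hide the context menu from dashboard readers
```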
}, @@ -35984,6 +36524,7 @@ "ContributionAnalysisDefaults": "The contribution analysis (anomaly configuration) setup of the visual.", "DataLabels": "The options that determine if visual data labels are displayed.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "Orientation": "The orientation of the bars in a bar chart visual. There are two valid values in this structure:\n\n- `HORIZONTAL` : Used for charts that have horizontal bars. Visuals that use this value are horizontal bar charts, horizontal stacked bar charts, and horizontal stacked 100% bar charts.\n- `VERTICAL` : Used for charts that have vertical bars. Visuals that use this value are vertical bar charts, vertical stacked bar charts, and vertical stacked 100% bar charts.", "ReferenceLines": "The reference line setup of the visual.", @@ -36024,12 +36565,35 @@ "AWS::QuickSight::Dashboard BodySectionConfiguration": { "Content": "The configuration of content in a body section.", "PageBreakConfiguration": "The configuration of a page break for a section.", + "RepeatConfiguration": "Describes the configurations that are required to declare a section as repeating.", "SectionId": "The unique identifier of a body section.", "Style": "The style options of a body section." }, "AWS::QuickSight::Dashboard BodySectionContent": { "Layout": "The layout configuration of a body section." }, + "AWS::QuickSight::Dashboard BodySectionDynamicCategoryDimensionConfiguration": { + "Column": "", + "Limit": "Number of values to use from the column for repetition.", + "SortByMetrics": "Sort criteria on the column values that you use for repetition." + }, + "AWS::QuickSight::Dashboard BodySectionDynamicNumericDimensionConfiguration": { + "Column": "", + "Limit": "Number of values to use from the column for repetition.", + "SortByMetrics": "Sort criteria on the column values that you use for repetition." + }, + "AWS::QuickSight::Dashboard BodySectionRepeatConfiguration": { + "DimensionConfigurations": "List of `BodySectionRepeatDimensionConfiguration` values that describe the dataset column and constraints for the column used to repeat the contents of a section.", + "NonRepeatingVisuals": "List of visuals to exclude from repetition in repeating sections. The visuals will render identically, and ignore the repeating configurations in all repeating instances.", + "PageBreakConfiguration": "Page break configuration to apply for each repeating instance." + }, + "AWS::QuickSight::Dashboard BodySectionRepeatDimensionConfiguration": { + "DynamicCategoryDimensionConfiguration": "Describes the *Category* dataset column and constraints around the dynamic values that will be used in repeating the section contents.", + "DynamicNumericDimensionConfiguration": "Describes the *Numeric* dataset column and constraints around the dynamic values used to repeat the contents of a section." + }, + "AWS::QuickSight::Dashboard BodySectionRepeatPageBreakConfiguration": { + "After": "" + }, "AWS::QuickSight::Dashboard BoxPlotAggregatedFieldWells": { "GroupBy": "The group by field well of a box plot chart. Values are grouped based on group by fields.", "Values": "The value field well of a box plot chart. Values are aggregated based on group by fields." 
@@ -36039,6 +36603,7 @@ "CategoryAxis": "The label display options (grid line, range, scale, axis step) of a box plot category.", "CategoryLabelOptions": "The label options (label text, label visibility and sort Icon visibility) of a box plot category.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "", "PrimaryYAxisDisplayOptions": "The label display options (grid line, range, scale, axis step) of a box plot category.", "PrimaryYAxisLabelOptions": "The label options (label text, label visibility and sort icon visibility) of a box plot value.", @@ -36178,6 +36743,7 @@ "CategoryLabelOptions": "The label options (label text, label visibility, and sort icon visibility) of a combo chart category (group/color) field well.", "ColorLabelOptions": "The label options (label text, label visibility, and sort icon visibility) of a combo chart's color field well.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "LineDataLabels": "The options that determine if visual data labels are displayed.\n\nThe data label options for a line in a combo chart.", "PrimaryYAxisDisplayOptions": "The label display options (grid line, range, scale, and axis step) of a combo chart's primary y-axis (bar) field well.", @@ -36261,6 +36827,9 @@ "Color": "Determines the color.", "Expression": "The expression that determines the formatting configuration for solid color." }, + "AWS::QuickSight::Dashboard ContextMenuOption": { + "AvailabilityStatus": "The availability status of the context menu options. If the value of this property is set to `ENABLED` , dashboard readers can interact with the context menu." + }, "AWS::QuickSight::Dashboard ContributionAnalysisDefault": { "ContributorDimensions": "The dimensions columns that are used in the contribution analysis, usually a list of `ColumnIdentifiers` .", "MeasureFieldId": "The measure field that is used in the contribution analysis." @@ -36297,7 +36866,8 @@ "AWS::QuickSight::Dashboard CustomContentConfiguration": { "ContentType": "The content type of the custom content visual. You can use this to have the visual render as an image.", "ContentUrl": "The input URL that links to the custom content that you want in the custom visual.", - "ImageScaling": "The sizing options for the size of the custom content visual. This structure is required when the `ContentType` of the visual is `'IMAGE'` ." + "ImageScaling": "The sizing options for the size of the custom content visual. This structure is required when the `ContentType` of the visual is `'IMAGE'` .", + "Interactions": "The general visual interactions setup for a visual." }, "AWS::QuickSight::Dashboard CustomContentVisual": { "Actions": "The list of custom actions that are configured for a visual.", @@ -36500,7 +37070,9 @@ "ValueWhenUnset": "The configuration that defines the default value of a `DateTime` parameter when a value has not been set." 
}, "AWS::QuickSight::Dashboard DateTimePickerControlDisplayOptions": { + "DateIconVisibility": "The date icon visibility of the `DateTimePickerControlDisplayOptions` .", "DateTimeFormat": "Customize how dates are formatted in controls.", + "HelperTextVisibility": "The helper text visibility of the `DateTimePickerControlDisplayOptions` .", "InfoIconLabelOptions": "The configuration of info icon label options.", "TitleOptions": "The options to configure the title visibility, name, and font size." }, @@ -36699,6 +37271,7 @@ }, "AWS::QuickSight::Dashboard FilledMapConfiguration": { "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "MapStyleOptions": "The map style options of the filled map visual.", "SortConfiguration": "The sort configuration of a `FilledMapVisual` .", @@ -36838,6 +37411,7 @@ "AWS::QuickSight::Dashboard FontConfiguration": { "FontColor": "Determines the color of the text.", "FontDecoration": "Determines the appearance of decorative lines on the text.", + "FontFamily": "The font family that you want to use.", "FontSize": "The option that determines the text display size.", "FontStyle": "Determines the text display face that is inherited by the given font family.", "FontWeight": "The option that determines the text display weight, or boldness." @@ -36918,6 +37492,7 @@ "CategoryLabelOptions": "The label options of the categories that are displayed in a `FunnelChartVisual` .", "DataLabelOptions": "The options that determine the presentation of the data labels.", "FieldWells": "The field well configuration of a `FunnelChartVisual` .", + "Interactions": "The general visual interactions setup for a visual.", "SortConfiguration": "The sort configuration of a `FunnelChartVisual` .", "Tooltip": "The tooltip configuration of a `FunnelChartVisual` .", "ValueLabelOptions": "The label options for the values that are displayed in a `FunnelChartVisual` .", @@ -36951,6 +37526,10 @@ "AWS::QuickSight::Dashboard GaugeChartArcConditionalFormatting": { "ForegroundColor": "The conditional formatting of the arc foreground color." }, + "AWS::QuickSight::Dashboard GaugeChartColorConfiguration": { + "BackgroundColor": "The background color configuration of a `GaugeChartVisual` .", + "ForegroundColor": "The foreground color configuration of a `GaugeChartVisual` ." + }, "AWS::QuickSight::Dashboard GaugeChartConditionalFormatting": { "ConditionalFormattingOptions": "Conditional formatting options of a `GaugeChartVisual` ." }, @@ -36959,9 +37538,11 @@ "PrimaryValue": "The conditional formatting for the primary value of a `GaugeChartVisual` ." }, "AWS::QuickSight::Dashboard GaugeChartConfiguration": { + "ColorConfiguration": "The color configuration of a `GaugeChartVisual` .", "DataLabels": "The data label configuration of a `GaugeChartVisual` .", "FieldWells": "The field well configuration of a `GaugeChartVisual` .", "GaugeChartOptions": "The options that determine the presentation of the `GaugeChartVisual` .", + "Interactions": "The general visual interactions setup for a visual.", "TooltipOptions": "The tooltip configuration of a `GaugeChartVisual` .", "VisualPalette": "The visual palette configuration of a `GaugeChartVisual` ." 
}, @@ -37219,6 +37800,7 @@ "ColumnLabelOptions": "The label options of the column that is displayed in a heat map.", "DataLabels": "The options that determine if visual data labels are displayed.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "RowLabelOptions": "The label options of the row that is displayed in a `heat map` .", "SortConfiguration": "The sort configuration of a heat map.", @@ -37255,6 +37837,7 @@ "BinOptions": "The options that determine the presentation of histogram bins.", "DataLabels": "The data label configuration of a histogram.", "FieldWells": "The field well configuration of a histogram.", + "Interactions": "The general visual interactions setup for a visual.", "Tooltip": "The tooltip configuration of a histogram.", "VisualPalette": "The visual palette configuration of a histogram.", "XAxisDisplayOptions": "The options that determine the presentation of the x-axis.", @@ -37299,7 +37882,8 @@ }, "AWS::QuickSight::Dashboard InsightConfiguration": { "Computations": "The computations configurations of the insight visual", - "CustomNarrative": "The custom narrative of the insight visual." + "CustomNarrative": "The custom narrative of the insight visual.", + "Interactions": "The general visual interactions setup for a visual." }, "AWS::QuickSight::Dashboard InsightVisual": { "Actions": "The list of custom actions that are configured for a visual.", @@ -37352,6 +37936,7 @@ }, "AWS::QuickSight::Dashboard KPIConfiguration": { "FieldWells": "The field well configuration of a KPI visual.", + "Interactions": "The general visual interactions setup for a visual.", "KPIOptions": "The options that determine the presentation of a KPI visual.", "SortConfiguration": "The sort configuration of a KPI visual." }, @@ -37457,6 +38042,7 @@ "DefaultSeriesSettings": "The options that determine the default presentation of all line series in `LineChartVisual` .", "FieldWells": "The field well configuration of a line chart.", "ForecastConfigurations": "The forecast configuration of a line chart.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend configuration of a line chart.", "PrimaryYAxisDisplayOptions": "The series axis configuration of a line chart.", "PrimaryYAxisLabelOptions": "The options that determine the presentation of the y-axis label.", @@ -37789,6 +38375,7 @@ "DataLabels": "The options that determine if visual data labels are displayed.", "DonutOptions": "The options that determine the shape of the chart. 
This option determines whether the chart is a pie chart or a donut chart.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "SmallMultiplesOptions": "The small multiples setup for the visual.", "SortConfiguration": "The sort configuration of a pie chart.", @@ -37841,6 +38428,7 @@ "AWS::QuickSight::Dashboard PivotTableConfiguration": { "FieldOptions": "The field options for a pivot table visual.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "PaginatedReportOptions": "The paginated report options for a pivot table visual.", "SortConfiguration": "The sort configuration for a `PivotTableVisual` .", "TableOptions": "The table options for a pivot table visual.", @@ -37993,6 +38581,7 @@ "ColorAxis": "The color axis of a radar chart.", "ColorLabelOptions": "The color label options of a radar chart.", "FieldWells": "The field well configuration of a `RadarChartVisual` .", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "Shape": "The shape of the radar chart.", "SortConfiguration": "The sort configuration of a `RadarChartVisual` .", @@ -38105,6 +38694,7 @@ "AWS::QuickSight::Dashboard SankeyDiagramChartConfiguration": { "DataLabels": "The data label configuration of a sankey diagram.", "FieldWells": "The field well configuration of a sankey diagram.", + "Interactions": "The general visual interactions setup for a visual.", "SortConfiguration": "The sort configuration of a sankey diagram." }, "AWS::QuickSight::Dashboard SankeyDiagramFieldWells": { @@ -38133,7 +38723,9 @@ "AWS::QuickSight::Dashboard ScatterPlotConfiguration": { "DataLabels": "The options that determine if visual data labels are displayed.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", + "SortConfiguration": "The sort configuration of a scatter plot.", "Tooltip": "The legend display setup of the visual.", "VisualPalette": "The palette (chart color) display setup of the visual.", "XAxisDisplayOptions": "The label display options (grid line, range, scale, and axis step) of the scatter plot's x-axis.", @@ -38145,6 +38737,9 @@ "ScatterPlotCategoricallyAggregatedFieldWells": "The aggregated field wells of a scatter plot. The x and y-axes of scatter plots with aggregated field wells are aggregated by category, label, or both.", "ScatterPlotUnaggregatedFieldWells": "The unaggregated field wells of a scatter plot. The x and y-axes of these scatter plots are unaggregated." }, + "AWS::QuickSight::Dashboard ScatterPlotSortConfiguration": { + "ScatterPlotLimitConfiguration": "" + }, "AWS::QuickSight::Dashboard ScatterPlotUnaggregatedFieldWells": { "Category": "The category field well of a scatter plot.", "Label": "The label field well of a scatter plot.", @@ -38210,7 +38805,6 @@ "BackgroundColor": "The conditional formatting for the shape background color of a filled map visual." }, "AWS::QuickSight::Dashboard Sheet": { - "Images": "A list of images on a sheet.", "Name": "The name of a sheet. This name is displayed on the sheet's tab in the Amazon QuickSight console.", "SheetId": "The unique identifier associated with a sheet." 
}, @@ -38405,6 +38999,7 @@ "AWS::QuickSight::Dashboard TableConfiguration": { "FieldOptions": "The field options for a table visual.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "PaginatedReportOptions": "The paginated report options for a table visual.", "SortConfiguration": "The sort configuration for a `TableVisual` .", "TableInlineVisualizations": "A collection of inline visualizations to display within a chart.", @@ -38519,6 +39114,7 @@ "TitleOptions": "The options to configure the title visibility, name, and font size." }, "AWS::QuickSight::Dashboard ThousandSeparatorOptions": { + "GroupingStyle": "Determines the way numbers are styled to accommodate different readability standards. The `DEFAULT` value uses the standard international grouping system and groups numbers by the thousands. The `LAKHS` value uses the Indian numbering system and groups numbers by lakhs and crores.", "Symbol": "Determines the thousands separator symbol.", "Visibility": "Determines the visibility of the thousands separator." }, @@ -38629,6 +39225,7 @@ "DataLabels": "The options that determine if visual data labels are displayed.", "FieldWells": "The field wells of the visual.", "GroupLabelOptions": "The label options (label text, label visibility) of the groups that are displayed in a tree map.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "SizeLabelOptions": "The label options (label text, label visibility) of the sizes that are displayed in a tree map.", "SortConfiguration": "The sort configuration of a tree map.", @@ -38712,6 +39309,13 @@ "SetParametersOperation": "The set parameter operation that sets parameters in custom action.", "URLOperation": "The URL operation that opens a link to another webpage." }, + "AWS::QuickSight::Dashboard VisualInteractionOptions": { + "ContextMenuOption": "The context menu options for a visual.", + "VisualMenuOption": "The on-visual menu options for a visual." + }, + "AWS::QuickSight::Dashboard VisualMenuOption": { + "AvailabilityStatus": "The availaiblity status of a visual's menu options." + }, "AWS::QuickSight::Dashboard VisualPalette": { "ChartColor": "The chart color options for the visual palette.", "ColorMap": "The color map options for the visual palette." @@ -38738,6 +39342,7 @@ "ColorConfiguration": "The color configuration of a waterfall visual.", "DataLabels": "The data label configuration of a waterfall visual.", "FieldWells": "The field well configuration of a waterfall visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend configuration of a waterfall visual.", "PrimaryYAxisDisplayOptions": "The options that determine the presentation of the y-axis.", "PrimaryYAxisLabelOptions": "The options that determine the presentation of the y-axis label.", @@ -38785,6 +39390,7 @@ "AWS::QuickSight::Dashboard WordCloudChartConfiguration": { "CategoryLabelOptions": "The label options (label text, label visibility, and sort icon visibility) for the word cloud category.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "SortConfiguration": "The sort configuration of a word cloud visual.", "WordCloudOptions": "The options for a word cloud visual." 
}, @@ -39400,6 +40006,7 @@ "ContributionAnalysisDefaults": "The contribution analysis (anomaly configuration) setup of the visual.", "DataLabels": "The options that determine if visual data labels are displayed.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "Orientation": "The orientation of the bars in a bar chart visual. There are two valid values in this structure:\n\n- `HORIZONTAL` : Used for charts that have horizontal bars. Visuals that use this value are horizontal bar charts, horizontal stacked bar charts, and horizontal stacked 100% bar charts.\n- `VERTICAL` : Used for charts that have vertical bars. Visuals that use this value are vertical bar charts, vertical stacked bar charts, and vertical stacked 100% bar charts.", "ReferenceLines": "The reference line setup of the visual.", @@ -39440,12 +40047,35 @@ "AWS::QuickSight::Template BodySectionConfiguration": { "Content": "The configuration of content in a body section.", "PageBreakConfiguration": "The configuration of a page break for a section.", + "RepeatConfiguration": "Describes the configurations that are required to declare a section as repeating.", "SectionId": "The unique identifier of a body section.", "Style": "The style options of a body section." }, "AWS::QuickSight::Template BodySectionContent": { "Layout": "The layout configuration of a body section." }, + "AWS::QuickSight::Template BodySectionDynamicCategoryDimensionConfiguration": { + "Column": "", + "Limit": "Number of values to use from the column for repetition.", + "SortByMetrics": "Sort criteria on the column values that you use for repetition." + }, + "AWS::QuickSight::Template BodySectionDynamicNumericDimensionConfiguration": { + "Column": "", + "Limit": "Number of values to use from the column for repetition.", + "SortByMetrics": "Sort criteria on the column values that you use for repetition." + }, + "AWS::QuickSight::Template BodySectionRepeatConfiguration": { + "DimensionConfigurations": "List of `BodySectionRepeatDimensionConfiguration` values that describe the dataset column and constraints for the column used to repeat the contents of a section.", + "NonRepeatingVisuals": "List of visuals to exclude from repetition in repeating sections. The visuals will render identically, and ignore the repeating configurations in all repeating instances.", + "PageBreakConfiguration": "Page break configuration to apply for each repeating instance." + }, + "AWS::QuickSight::Template BodySectionRepeatDimensionConfiguration": { + "DynamicCategoryDimensionConfiguration": "Describes the *Category* dataset column and constraints around the dynamic values that will be used in repeating the section contents.", + "DynamicNumericDimensionConfiguration": "Describes the *Numeric* dataset column and constraints around the dynamic values used to repeat the contents of a section." + }, + "AWS::QuickSight::Template BodySectionRepeatPageBreakConfiguration": { + "After": "" + }, "AWS::QuickSight::Template BoxPlotAggregatedFieldWells": { "GroupBy": "The group by field well of a box plot chart. Values are grouped based on group by fields.", "Values": "The value field well of a box plot chart. Values are aggregated based on group by fields." 
@@ -39455,6 +40085,7 @@ "CategoryAxis": "The label display options (grid line, range, scale, axis step) of a box plot category.", "CategoryLabelOptions": "The label options (label text, label visibility and sort Icon visibility) of a box plot category.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "", "PrimaryYAxisDisplayOptions": "The label display options (grid line, range, scale, axis step) of a box plot category.", "PrimaryYAxisLabelOptions": "The label options (label text, label visibility and sort icon visibility) of a box plot value.", @@ -39606,6 +40237,7 @@ "CategoryLabelOptions": "The label options (label text, label visibility, and sort icon visibility) of a combo chart category (group/color) field well.", "ColorLabelOptions": "The label options (label text, label visibility, and sort icon visibility) of a combo chart's color field well.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "LineDataLabels": "The options that determine if visual data labels are displayed.\n\nThe data label options for a line in a combo chart.", "PrimaryYAxisDisplayOptions": "The label display options (grid line, range, scale, and axis step) of a combo chart's primary y-axis (bar) field well.", @@ -39689,6 +40321,9 @@ "Color": "Determines the color.", "Expression": "The expression that determines the formatting configuration for solid color." }, + "AWS::QuickSight::Template ContextMenuOption": { + "AvailabilityStatus": "The availability status of the context menu options. If the value of this property is set to `ENABLED` , dashboard readers can interact with the context menu." + }, "AWS::QuickSight::Template ContributionAnalysisDefault": { "ContributorDimensions": "The dimensions columns that are used in the contribution analysis, usually a list of `ColumnIdentifiers` .", "MeasureFieldId": "The measure field that is used in the contribution analysis." @@ -39725,7 +40360,8 @@ "AWS::QuickSight::Template CustomContentConfiguration": { "ContentType": "The content type of the custom content visual. You can use this to have the visual render as an image.", "ContentUrl": "The input URL that links to the custom content that you want in the custom visual.", - "ImageScaling": "The sizing options for the size of the custom content visual. This structure is required when the `ContentType` of the visual is `'IMAGE'` ." + "ImageScaling": "The sizing options for the size of the custom content visual. This structure is required when the `ContentType` of the visual is `'IMAGE'` .", + "Interactions": "The general visual interactions setup for a visual." }, "AWS::QuickSight::Template CustomContentVisual": { "Actions": "The list of custom actions that are configured for a visual.", @@ -39868,7 +40504,9 @@ "ValueWhenUnset": "The configuration that defines the default value of a `DateTime` parameter when a value has not been set." 
}, "AWS::QuickSight::Template DateTimePickerControlDisplayOptions": { + "DateIconVisibility": "The date icon visibility of the `DateTimePickerControlDisplayOptions` .", "DateTimeFormat": "Customize how dates are formatted in controls.", + "HelperTextVisibility": "The helper text visibility of the `DateTimePickerControlDisplayOptions` .", "InfoIconLabelOptions": "The configuration of info icon label options.", "TitleOptions": "The options to configure the title visibility, name, and font size." }, @@ -40054,6 +40692,7 @@ }, "AWS::QuickSight::Template FilledMapConfiguration": { "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "MapStyleOptions": "The map style options of the filled map visual.", "SortConfiguration": "The sort configuration of a `FilledMapVisual` .", @@ -40193,11 +40832,13 @@ "AWS::QuickSight::Template FontConfiguration": { "FontColor": "Determines the color of the text.", "FontDecoration": "Determines the appearance of decorative lines on the text.", + "FontFamily": "The font family that you want to use.", "FontSize": "The option that determines the text display size.", "FontStyle": "Determines the text display face that is inherited by the given font family.", "FontWeight": "The option that determines the text display weight, or boldness." }, "AWS::QuickSight::Template FontSize": { + "Absolute": "The font size that you want to use in px.", "Relative": "The lexical name for the text size, proportional to its surrounding context." }, "AWS::QuickSight::Template FontWeight": { @@ -40272,6 +40913,7 @@ "CategoryLabelOptions": "The label options of the categories that are displayed in a `FunnelChartVisual` .", "DataLabelOptions": "The options that determine the presentation of the data labels.", "FieldWells": "The field well configuration of a `FunnelChartVisual` .", + "Interactions": "The general visual interactions setup for a visual.", "SortConfiguration": "The sort configuration of a `FunnelChartVisual` .", "Tooltip": "The tooltip configuration of a `FunnelChartVisual` .", "ValueLabelOptions": "The label options for the values that are displayed in a `FunnelChartVisual` .", @@ -40305,6 +40947,10 @@ "AWS::QuickSight::Template GaugeChartArcConditionalFormatting": { "ForegroundColor": "The conditional formatting of the arc foreground color." }, + "AWS::QuickSight::Template GaugeChartColorConfiguration": { + "BackgroundColor": "The background color configuration of a `GaugeChartVisual` .", + "ForegroundColor": "The foreground color configuration of a `GaugeChartVisual` ." + }, "AWS::QuickSight::Template GaugeChartConditionalFormatting": { "ConditionalFormattingOptions": "Conditional formatting options of a `GaugeChartVisual` ." }, @@ -40313,9 +40959,11 @@ "PrimaryValue": "The conditional formatting for the primary value of a `GaugeChartVisual` ." }, "AWS::QuickSight::Template GaugeChartConfiguration": { + "ColorConfiguration": "The color configuration of a `GaugeChartVisual` .", "DataLabels": "The data label configuration of a `GaugeChartVisual` .", "FieldWells": "The field well configuration of a `GaugeChartVisual` .", "GaugeChartOptions": "The options that determine the presentation of the `GaugeChartVisual` .", + "Interactions": "The general visual interactions setup for a visual.", "TooltipOptions": "The tooltip configuration of a `GaugeChartVisual` .", "VisualPalette": "The visual palette configuration of a `GaugeChartVisual` ." 
}, @@ -40449,6 +41097,7 @@ "ColumnLabelOptions": "The label options of the column that is displayed in a heat map.", "DataLabels": "The options that determine if visual data labels are displayed.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "RowLabelOptions": "The label options of the row that is displayed in a `heat map` .", "SortConfiguration": "The sort configuration of a heat map.", @@ -40485,6 +41134,7 @@ "BinOptions": "The options that determine the presentation of histogram bins.", "DataLabels": "The data label configuration of a histogram.", "FieldWells": "The field well configuration of a histogram.", + "Interactions": "The general visual interactions setup for a visual.", "Tooltip": "The tooltip configuration of a histogram.", "VisualPalette": "The visual palette configuration of a histogram.", "XAxisDisplayOptions": "The options that determine the presentation of the x-axis.", @@ -40525,7 +41175,8 @@ }, "AWS::QuickSight::Template InsightConfiguration": { "Computations": "The computations configurations of the insight visual", - "CustomNarrative": "The custom narrative of the insight visual." + "CustomNarrative": "The custom narrative of the insight visual.", + "Interactions": "The general visual interactions setup for a visual." }, "AWS::QuickSight::Template InsightVisual": { "Actions": "The list of custom actions that are configured for a visual.", @@ -40574,6 +41225,7 @@ }, "AWS::QuickSight::Template KPIConfiguration": { "FieldWells": "The field well configuration of a KPI visual.", + "Interactions": "The general visual interactions setup for a visual.", "KPIOptions": "The options that determine the presentation of a KPI visual.", "SortConfiguration": "The sort configuration of a KPI visual." }, @@ -40658,6 +41310,7 @@ "DefaultSeriesSettings": "The options that determine the default presentation of all line series in `LineChartVisual` .", "FieldWells": "The field well configuration of a line chart.", "ForecastConfigurations": "The forecast configuration of a line chart.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend configuration of a line chart.", "PrimaryYAxisDisplayOptions": "The series axis configuration of a line chart.", "PrimaryYAxisLabelOptions": "The options that determine the presentation of the y-axis label.", @@ -40981,6 +41634,7 @@ "DataLabels": "The options that determine if visual data labels are displayed.", "DonutOptions": "The options that determine the shape of the chart. 
This option determines whether the chart is a pie chart or a donut chart.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "SmallMultiplesOptions": "The small multiples setup for the visual.", "SortConfiguration": "The sort configuration of a pie chart.", @@ -41033,6 +41687,7 @@ "AWS::QuickSight::Template PivotTableConfiguration": { "FieldOptions": "The field options for a pivot table visual.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "PaginatedReportOptions": "The paginated report options for a pivot table visual.", "SortConfiguration": "The sort configuration for a `PivotTableVisual` .", "TableOptions": "The table options for a pivot table visual.", @@ -41188,6 +41843,7 @@ "ColorAxis": "The color axis of a radar chart.", "ColorLabelOptions": "The color label options of a radar chart.", "FieldWells": "The field well configuration of a `RadarChartVisual` .", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "Shape": "The shape of the radar chart.", "SortConfiguration": "The sort configuration of a `RadarChartVisual` .", @@ -41300,6 +41956,7 @@ "AWS::QuickSight::Template SankeyDiagramChartConfiguration": { "DataLabels": "The data label configuration of a sankey diagram.", "FieldWells": "The field well configuration of a sankey diagram.", + "Interactions": "The general visual interactions setup for a visual.", "SortConfiguration": "The sort configuration of a sankey diagram." }, "AWS::QuickSight::Template SankeyDiagramFieldWells": { @@ -41328,7 +41985,9 @@ "AWS::QuickSight::Template ScatterPlotConfiguration": { "DataLabels": "The options that determine if visual data labels are displayed.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", + "SortConfiguration": "The sort configuration of a scatter plot.", "Tooltip": "The legend display setup of the visual.", "VisualPalette": "The palette (chart color) display setup of the visual.", "XAxisDisplayOptions": "The label display options (grid line, range, scale, and axis step) of the scatter plot's x-axis.", @@ -41340,6 +41999,9 @@ "ScatterPlotCategoricallyAggregatedFieldWells": "The aggregated field wells of a scatter plot. The x and y-axes of scatter plots with aggregated field wells are aggregated by category, label, or both.", "ScatterPlotUnaggregatedFieldWells": "The unaggregated field wells of a scatter plot. The x and y-axes of these scatter plots are unaggregated." }, + "AWS::QuickSight::Template ScatterPlotSortConfiguration": { + "ScatterPlotLimitConfiguration": "" + }, "AWS::QuickSight::Template ScatterPlotUnaggregatedFieldWells": { "Category": "The category field well of a scatter plot.", "Label": "The label field well of a scatter plot.", @@ -41405,7 +42067,6 @@ "BackgroundColor": "The conditional formatting for the shape background color of a filled map visual." }, "AWS::QuickSight::Template Sheet": { - "Images": "A list of images on a sheet.", "Name": "The name of a sheet. This name is displayed on the sheet's tab in the Amazon QuickSight console.", "SheetId": "The unique identifier associated with a sheet." 
}, @@ -41570,6 +42231,7 @@ "AWS::QuickSight::Template TableConfiguration": { "FieldOptions": "The field options for a table visual.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "PaginatedReportOptions": "The paginated report options for a table visual.", "SortConfiguration": "The sort configuration for a `TableVisual` .", "TableInlineVisualizations": "A collection of inline visualizations to display within a chart.", @@ -41722,6 +42384,7 @@ "TitleOptions": "The options to configure the title visibility, name, and font size." }, "AWS::QuickSight::Template ThousandSeparatorOptions": { + "GroupingStyle": "Determines the way numbers are styled to accommodate different readability standards. The `DEFAULT` value uses the standard international grouping system and groups numbers by the thousands. The `LAKHS` value uses the Indian numbering system and groups numbers by lakhs and crores.", "Symbol": "Determines the thousands separator symbol.", "Visibility": "Determines the visibility of the thousands separator." }, @@ -41832,6 +42495,7 @@ "DataLabels": "The options that determine if visual data labels are displayed.", "FieldWells": "The field wells of the visual.", "GroupLabelOptions": "The label options (label text, label visibility) of the groups that are displayed in a tree map.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend display setup of the visual.", "SizeLabelOptions": "The label options (label text, label visibility) of the sizes that are displayed in a tree map.", "SortConfiguration": "The sort configuration of a tree map.", @@ -41911,6 +42575,13 @@ "SetParametersOperation": "The set parameter operation that sets parameters in custom action.", "URLOperation": "The URL operation that opens a link to another webpage." }, + "AWS::QuickSight::Template VisualInteractionOptions": { + "ContextMenuOption": "The context menu options for a visual.", + "VisualMenuOption": "The on-visual menu options for a visual." + }, + "AWS::QuickSight::Template VisualMenuOption": { + "AvailabilityStatus": "The availaiblity status of a visual's menu options." + }, "AWS::QuickSight::Template VisualPalette": { "ChartColor": "The chart color options for the visual palette.", "ColorMap": "The color map options for the visual palette." @@ -41937,6 +42608,7 @@ "ColorConfiguration": "The color configuration of a waterfall visual.", "DataLabels": "The data label configuration of a waterfall visual.", "FieldWells": "The field well configuration of a waterfall visual.", + "Interactions": "The general visual interactions setup for a visual.", "Legend": "The legend configuration of a waterfall visual.", "PrimaryYAxisDisplayOptions": "The options that determine the presentation of the y-axis.", "PrimaryYAxisLabelOptions": "The options that determine the presentation of the y-axis label.", @@ -41984,6 +42656,7 @@ "AWS::QuickSight::Template WordCloudChartConfiguration": { "CategoryLabelOptions": "The label options (label text, label visibility, and sort icon visibility) for the word cloud category.", "FieldWells": "The field wells of the visual.", + "Interactions": "The general visual interactions setup for a visual.", "SortConfiguration": "The sort configuration of a word cloud visual.", "WordCloudOptions": "The options for a word cloud visual." 
}, @@ -42364,7 +43037,7 @@ "DBInstanceParameterGroupName": "The name of the DB parameter group to apply to all instances of the DB cluster.\n\n> When you apply a parameter group using the `DBInstanceParameterGroupName` parameter, the DB cluster isn't rebooted automatically. Also, parameter changes are applied immediately rather than during the next maintenance window. \n\nValid for Cluster Type: Aurora DB clusters only\n\nDefault: The existing name setting\n\nConstraints:\n\n- The DB parameter group must be in the same DB parameter group family as this DB cluster.\n- The `DBInstanceParameterGroupName` parameter is valid in combination with the `AllowMajorVersionUpgrade` parameter for a major version upgrade only.", "DBSubnetGroupName": "A DB subnet group that you want to associate with this DB cluster.\n\nIf you are restoring a DB cluster to a point in time with `RestoreType` set to `copy-on-write` , and don't specify a DB subnet group name, then the DB cluster is restored with a default DB subnet group.\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", "DBSystemId": "Reserved for future use.", - "DatabaseInsightsMode": "The mode of Database Insights to enable for the DB cluster.\n\nIf you set this value to `advanced` , you must also set the `PerformanceInsightsEnabled` parameter to `true` and the `PerformanceInsightsRetentionPeriod` parameter to 465.\n\nValid for Cluster Type: Aurora DB clusters only", + "DatabaseInsightsMode": "The mode of Database Insights to enable for the DB cluster.\n\nIf you set this value to `advanced` , you must also set the `PerformanceInsightsEnabled` parameter to `true` and the `PerformanceInsightsRetentionPeriod` parameter to 465.\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB clusters", "DatabaseName": "The name of your database. If you don't provide a name, then Amazon RDS won't create a database in this DB cluster. For naming constraints, see [Naming Constraints](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_Limits.html#RDS_Limits.Constraints) in the *Amazon Aurora User Guide* .\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", "DeletionProtection": "A value that indicates whether the DB cluster has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection is disabled.\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", "Domain": "Indicates the directory ID of the Active Directory to create the DB cluster.\n\nFor Amazon Aurora DB clusters, Amazon RDS can use Kerberos authentication to authenticate users that connect to the DB cluster.\n\nFor more information, see [Kerberos authentication](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/kerberos-authentication.html) in the *Amazon Aurora User Guide* .\n\nValid for: Aurora DB clusters only", @@ -42390,7 +43063,7 @@ "NetworkType": "The network type of the DB cluster.\n\nValid values:\n\n- `IPV4`\n- `DUAL`\n\nThe network type is determined by the `DBSubnetGroup` specified for the DB cluster. 
A `DBSubnetGroup` can support only the IPv4 protocol or the IPv4 and IPv6 protocols ( `DUAL` ).\n\nFor more information, see [Working with a DB instance in a VPC](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html) in the *Amazon Aurora User Guide.*\n\nValid for: Aurora DB clusters only", "PerformanceInsightsEnabled": "Specifies whether to turn on Performance Insights for the DB cluster.\n\nFor more information, see [Using Amazon Performance Insights](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html) in the *Amazon RDS User Guide* .\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB clusters", "PerformanceInsightsKmsKeyId": "The AWS KMS key identifier for encryption of Performance Insights data.\n\nThe AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key.\n\nIf you don't specify a value for `PerformanceInsightsKMSKeyId` , then Amazon RDS uses your default KMS key. There is a default KMS key for your AWS account . Your AWS account has a different default KMS key for each AWS Region .\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB clusters", - "PerformanceInsightsRetentionPeriod": "The number of days to retain Performance Insights data.\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB clusters\n\nValid Values:\n\n- `7`\n- *month* * 31, where *month* is a number of months from 1-23. Examples: `93` (3 months * 31), `341` (11 months * 31), `589` (19 months * 31)\n- `731`\n\nDefault: `7` days\n\nIf you specify a retention period that isn't valid, such as `94` , Amazon RDS issues an error.", + "PerformanceInsightsRetentionPeriod": "The number of days to retain Performance Insights data. When creating a DB cluster without enabling Performance Insights, you can't specify the parameter `PerformanceInsightsRetentionPeriod` .\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB clusters\n\nValid Values:\n\n- `7`\n- *month* * 31, where *month* is a number of months from 1-23. Examples: `93` (3 months * 31), `341` (11 months * 31), `589` (19 months * 31)\n- `731`\n\nDefault: `7` days\n\nIf you specify a retention period that isn't valid, such as `94` , Amazon RDS issues an error.", "Port": "The port number on which the DB instances in the DB cluster accept connections.\n\nDefault:\n\n- When `EngineMode` is `provisioned` , `3306` (for both Aurora MySQL and Aurora PostgreSQL)\n- When `EngineMode` is `serverless` :\n\n- `3306` when `Engine` is `aurora` or `aurora-mysql`\n- `5432` when `Engine` is `aurora-postgresql`\n\n> The `No interruption` on update behavior only applies to DB clusters. If you are updating a DB instance, see [Port](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-port) for the AWS::RDS::DBInstance resource. \n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", "PreferredBackupWindow": "The daily time range during which automated backups are created. 
For more information, see [Backup Window](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html#Aurora.Managing.Backups.BackupWindow) in the *Amazon Aurora User Guide.*\n\nConstraints:\n\n- Must be in the format `hh24:mi-hh24:mi` .\n- Must be in Universal Coordinated Time (UTC).\n- Must not conflict with the preferred maintenance window.\n- Must be at least 30 minutes.\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", "PreferredMaintenanceWindow": "The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).\n\nFormat: `ddd:hh24:mi-ddd:hh24:mi`\n\nThe default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week. To see the time blocks available, see [Maintaining an Amazon Aurora DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.Maintenance.html#AdjustingTheMaintenanceWindow.Aurora) in the *Amazon Aurora User Guide.*\n\nValid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun.\n\nConstraints: Minimum 30-minute window.\n\nValid for: Aurora DB clusters and Multi-AZ DB clusters", @@ -42477,7 +43150,7 @@ "DBParameterGroupName": "The name of an existing DB parameter group or a reference to an [AWS::RDS::DBParameterGroup](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-dbparametergroup.html) resource created in the template.\n\nTo list all of the available DB parameter group names, use the following command:\n\n`aws rds describe-db-parameter-groups --query \"DBParameterGroups[].DBParameterGroupName\" --output text`\n\n> If any of the data members of the referenced parameter group are changed during an update, the DB instance might need to be restarted, which causes some interruption. If the parameter group contains static parameters, whether they were changed or not, an update triggers a reboot. \n\nIf you don't specify a value for `DBParameterGroupName` property, the default DB parameter group for the specified engine and engine version is used.", "DBSecurityGroups": "A list of the DB security groups to assign to the DB instance. The list can include both the name of existing DB security groups or references to AWS::RDS::DBSecurityGroup resources created in the template.\n\nIf you set DBSecurityGroups, you must not set VPCSecurityGroups, and vice versa. Also, note that the DBSecurityGroups property exists only for backwards compatibility with older regions and is no longer recommended for providing security information to an RDS DB instance. Instead, use VPCSecurityGroups.\n\n> If you specify this property, AWS CloudFormation sends only the following properties (if specified) to Amazon RDS during create operations:\n> \n> - `AllocatedStorage`\n> - `AutoMinorVersionUpgrade`\n> - `AvailabilityZone`\n> - `BackupRetentionPeriod`\n> - `CharacterSetName`\n> - `DBInstanceClass`\n> - `DBName`\n> - `DBParameterGroupName`\n> - `DBSecurityGroups`\n> - `DBSubnetGroupName`\n> - `Engine`\n> - `EngineVersion`\n> - `Iops`\n> - `LicenseModel`\n> - `MasterUsername`\n> - `MasterUserPassword`\n> - `MultiAZ`\n> - `OptionGroupName`\n> - `PreferredBackupWindow`\n> - `PreferredMaintenanceWindow`\n> \n> All other properties are ignored. Specify a virtual private cloud (VPC) security group if you want to submit other properties, such as `StorageType` , `StorageEncrypted` , or `KmsKeyId` . 
If you're already using the `DBSecurityGroups` property, you can't use these other properties by updating your DB instance to use a VPC security group. You must recreate the DB instance.", "DBSnapshotIdentifier": "The name or Amazon Resource Name (ARN) of the DB snapshot that's used to restore the DB instance. If you're restoring from a shared manual DB snapshot, you must specify the ARN of the snapshot.\n\nBy specifying this property, you can create a DB instance from the specified DB snapshot. If the `DBSnapshotIdentifier` property is an empty string or the `AWS::RDS::DBInstance` declaration has no `DBSnapshotIdentifier` property, AWS CloudFormation creates a new database. If the property contains a value (other than an empty string), AWS CloudFormation creates a database from the specified snapshot. If a snapshot with the specified name doesn't exist, AWS CloudFormation can't create the database and it rolls back the stack.\n\nSome DB instance properties aren't valid when you restore from a snapshot, such as the `MasterUsername` and `MasterUserPassword` properties, and the point-in-time recovery properties `RestoreTime` and `UseLatestRestorableTime` . For information about the properties that you can specify, see the [`RestoreDBInstanceFromDBSnapshot`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html) action in the *Amazon RDS API Reference* .\n\nAfter you restore a DB instance with a `DBSnapshotIdentifier` property, you must specify the same `DBSnapshotIdentifier` property for any future updates to the DB instance. When you specify this property for an update, the DB instance is not restored from the DB snapshot again, and the data in the database is not changed. However, if you don't specify the `DBSnapshotIdentifier` property, an empty DB instance is created, and the original DB instance is deleted. If you specify a property that is different from the previous snapshot restore property, a new DB instance is restored from the specified `DBSnapshotIdentifier` property, and the original DB instance is deleted.\n\nIf you specify the `DBSnapshotIdentifier` property to restore a DB instance (as opposed to specifying it for DB instance updates), then don't specify the following properties:\n\n- `CharacterSetName`\n- `DBClusterIdentifier`\n- `DBName`\n- `KmsKeyId`\n- `MasterUsername`\n- `MasterUserPassword`\n- `PromotionTier`\n- `SourceDBInstanceIdentifier`\n- `SourceRegion`\n- `StorageEncrypted` (for an unencrypted snapshot)\n- `Timezone`\n\n*Amazon Aurora*\n\nNot applicable. Snapshot restore is managed by the DB cluster.", - "DBSubnetGroupName": "A DB subnet group to associate with the DB instance. If you update this value, the new subnet group must be a subnet group in a new VPC.\n\nIf there's no DB subnet group, then the DB instance isn't a VPC DB instance.\n\nFor more information about using Amazon RDS in a VPC, see [Amazon VPC and Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html) in the *Amazon RDS User Guide* .\n\nThis setting doesn't apply to Amazon Aurora DB instances. The DB subnet group is managed by the DB cluster. If specified, the setting must match the DB cluster setting.", + "DBSubnetGroupName": "A DB subnet group to associate with the DB instance. If you update this value, the new subnet group must be a subnet group in a new VPC.\n\nIf you don't specify a DB subnet group, RDS uses the default DB subnet group if one exists. 
If a default DB subnet group does not exist, and you don't specify a `DBSubnetGroupName` , the DB instance fails to launch.\n\nFor more information about using Amazon RDS in a VPC, see [Amazon VPC and Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html) in the *Amazon RDS User Guide* .\n\nThis setting doesn't apply to Amazon Aurora DB instances. The DB subnet group is managed by the DB cluster. If specified, the setting must match the DB cluster setting.", "DBSystemId": "The Oracle system identifier (SID), which is the name of the Oracle database instance that manages your database files. In this context, the term \"Oracle database instance\" refers exclusively to the system global area (SGA) and Oracle background processes. If you don't specify a SID, the value defaults to `RDSCDB` . The Oracle SID is also the name of your CDB.", "DedicatedLogVolume": "Indicates whether the DB instance has a dedicated log volume (DLV) enabled.", "DeleteAutomatedBackups": "A value that indicates whether to remove automated backups immediately after the DB instance is deleted. This parameter isn't case-sensitive. The default is to remove automated backups immediately after the DB instance is deleted.\n\n*Amazon Aurora*\n\nNot applicable. When you delete a DB cluster, all automated backups for that DB cluster are deleted and can't be recovered. Manual DB cluster snapshots of the DB cluster are not deleted.", @@ -42510,7 +43183,7 @@ "NetworkType": "The network type of the DB instance.\n\nValid values:\n\n- `IPV4`\n- `DUAL`\n\nThe network type is determined by the `DBSubnetGroup` specified for the DB instance. A `DBSubnetGroup` can support only the IPv4 protocol or the IPv4 and IPv6 protocols ( `DUAL` ).\n\nFor more information, see [Working with a DB instance in a VPC](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html) in the *Amazon RDS User Guide.*", "OptionGroupName": "Indicates that the DB instance should be associated with the specified option group.\n\nPermanent options, such as the TDE option for Oracle Advanced Security TDE, can't be removed from an option group. Also, that option group can't be removed from a DB instance once it is associated with a DB instance.", "PerformanceInsightsKMSKeyId": "The AWS KMS key identifier for encryption of Performance Insights data.\n\nThe KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key.\n\nIf you do not specify a value for `PerformanceInsightsKMSKeyId` , then Amazon RDS uses your default KMS key. There is a default KMS key for your AWS account. Your AWS account has a different default KMS key for each AWS Region.\n\nFor information about enabling Performance Insights, see [EnablePerformanceInsights](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html#cfn-rds-dbinstance-enableperformanceinsights) .", - "PerformanceInsightsRetentionPeriod": "The number of days to retain Performance Insights data.\n\nThis setting doesn't apply to RDS Custom DB instances.\n\nValid Values:\n\n- `7`\n- *month* * 31, where *month* is a number of months from 1-23. Examples: `93` (3 months * 31), `341` (11 months * 31), `589` (19 months * 31)\n- `731`\n\nDefault: `7` days\n\nIf you specify a retention period that isn't valid, such as `94` , Amazon RDS returns an error.", + "PerformanceInsightsRetentionPeriod": "The number of days to retain Performance Insights data. 
When creating a DB instance without enabling Performance Insights, you can't specify the parameter `PerformanceInsightsRetentionPeriod` .\n\nThis setting doesn't apply to RDS Custom DB instances.\n\nValid Values:\n\n- `7`\n- *month* * 31, where *month* is a number of months from 1-23. Examples: `93` (3 months * 31), `341` (11 months * 31), `589` (19 months * 31)\n- `731`\n\nDefault: `7` days\n\nIf you specify a retention period that isn't valid, such as `94` , Amazon RDS returns an error.", "Port": "The port number on which the database accepts connections.\n\nThis setting doesn't apply to Aurora DB instances. The port number is managed by the cluster.\n\nValid Values: `1150-65535`\n\nDefault:\n\n- RDS for Db2 - `50000`\n- RDS for MariaDB - `3306`\n- RDS for Microsoft SQL Server - `1433`\n- RDS for MySQL - `3306`\n- RDS for Oracle - `1521`\n- RDS for PostgreSQL - `5432`\n\nConstraints:\n\n- For RDS for Microsoft SQL Server, the value can't be `1234` , `1434` , `3260` , `3343` , `3389` , `47001` , or `49152-49156` .", "PreferredBackupWindow": "The daily time range during which automated backups are created if automated backups are enabled, using the `BackupRetentionPeriod` parameter. For more information, see [Backup Window](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html#USER_WorkingWithAutomatedBackups.BackupWindow) in the *Amazon RDS User Guide.*\n\nConstraints:\n\n- Must be in the format `hh24:mi-hh24:mi` .\n- Must be in Universal Coordinated Time (UTC).\n- Must not conflict with the preferred maintenance window.\n- Must be at least 30 minutes.\n\n*Amazon Aurora*\n\nNot applicable. The daily time range for creating automated backups is managed by the DB cluster.", "PreferredMaintenanceWindow": "The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).\n\nFormat: `ddd:hh24:mi-ddd:hh24:mi`\n\nThe default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week. To see the time blocks available, see [Maintaining a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html#AdjustingTheMaintenanceWindow) in the *Amazon RDS User Guide.*\n\n> This property applies when AWS CloudFormation initially creates the DB instance. If you use AWS CloudFormation to update the DB instance, those updates are applied immediately. \n\nConstraints: Minimum 30-minute window.", @@ -42613,7 +43286,7 @@ }, "AWS::RDS::DBProxyTargetGroup ConnectionPoolConfigurationInfoFormat": { "ConnectionBorrowTimeout": "The number of seconds for a proxy to wait for a connection to become available in the connection pool. This setting only applies when the proxy has opened its maximum number of connections and all connections are busy with client sessions.\n\nDefault: `120`\n\nConstraints:\n\n- Must be between 0 and 3600.", - "InitQuery": "One or more SQL statements for the proxy to run when opening each new database connection. Typically used with `SET` statements to make sure that each connection has identical settings such as time zone and character set. For multiple statements, use semicolons as the separator. You can also include multiple variables in a single `SET` statement, such as `SET x=1, y=2` .\n\nDefault: no initialization query", + "InitQuery": "Add an initialization query, or modify the current one. You can specify one or more SQL statements for the proxy to run when opening each new database connection. 
The setting is typically used with `SET` statements to make sure that each connection has identical settings. Make sure that the query you add is valid. To include multiple variables in a single `SET` statement, use comma separators.\n\nFor example: `SET variable1=value1, variable2=value2`\n\nFor multiple statements, use semicolons as the separator.\n\nDefault: no initialization query", "MaxConnectionsPercent": "The maximum size of the connection pool for each target in a target group. The value is expressed as a percentage of the `max_connections` setting for the RDS DB instance or Aurora DB cluster used by the target group.\n\nIf you specify `MaxIdleConnectionsPercent` , then you must also include a value for this parameter.\n\nDefault: `10` for RDS for Microsoft SQL Server, and `100` for all other engines\n\nConstraints:\n\n- Must be between 1 and 100.", "MaxIdleConnectionsPercent": "A value that controls how actively the proxy closes idle database connections in the connection pool. The value is expressed as a percentage of the `max_connections` setting for the RDS DB instance or Aurora DB cluster used by the target group. With a high value, the proxy leaves a high percentage of idle database connections open. A low value causes the proxy to close more idle connections and return them to the database.\n\nIf you specify this parameter, then you must also include a value for `MaxConnectionsPercent` .\n\nDefault: The default value is half of the value of `MaxConnectionsPercent` . For example, if `MaxConnectionsPercent` is 80, then the default value of `MaxIdleConnectionsPercent` is 40. If the value of `MaxConnectionsPercent` isn't specified, then for SQL Server, `MaxIdleConnectionsPercent` is `5` , and for all other engines, the default is `50` .\n\nConstraints:\n\n- Must be between 0 and the value of `MaxConnectionsPercent` .", "SessionPinningFilters": "Each item in the list represents a class of SQL operations that normally cause all later statements in a session using a proxy to be pinned to the same underlying database connection. Including an item in the list exempts that class of SQL operations from the pinning behavior.\n\nDefault: no session pinning filters" @@ -42683,7 +43356,6 @@ "EngineLifecycleSupport": "The life cycle type for this global database cluster.\n\n> By default, this value is set to `open-source-rds-extended-support` , which enrolls your global cluster into Amazon RDS Extended Support. At the end of standard support, you can avoid charges for Extended Support by setting the value to `open-source-rds-extended-support-disabled` . In this case, creating the global cluster will fail if the DB major version is past its end of standard support date. \n\nThis setting only applies to Aurora PostgreSQL-based global databases.\n\nYou can use this setting to enroll your global cluster into Amazon RDS Extended Support. With RDS Extended Support, you can run the selected major engine version on your global cluster past the end of standard support for that engine version. For more information, see [Using Amazon RDS Extended Support](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/extended-support.html) in the *Amazon Aurora User Guide* .\n\nValid Values: `open-source-rds-extended-support | open-source-rds-extended-support-disabled`\n\nDefault: `open-source-rds-extended-support`", "EngineVersion": "The engine version to use for this global database cluster.\n\nConstraints:\n\n- Can't be specified if `SourceDBClusterIdentifier` is specified. 
In this case, Amazon Aurora uses the engine version of the source DB cluster.", "GlobalClusterIdentifier": "The cluster identifier for this global database cluster. This parameter is stored as a lowercase string.", - "GlobalEndpoint": "The writer endpoint for the new global database cluster. This endpoint always points to the writer DB instance in the current primary cluster.", "SourceDBClusterIdentifier": "The Amazon Resource Name (ARN) to use as the primary cluster of the global database.\n\nIf you provide a value for this parameter, don't specify values for the following settings because Amazon Aurora uses the values from the specified source DB cluster:\n\n- `DatabaseName`\n- `Engine`\n- `EngineVersion`\n- `StorageEncrypted`", "StorageEncrypted": "Specifies whether to enable storage encryption for the new global database cluster.\n\nConstraints:\n\n- Can't be specified if `SourceDBClusterIdentifier` is specified. In this case, Amazon Aurora uses the setting from the source DB cluster.", "Tags": "Metadata assigned to an Amazon RDS resource consisting of a key-value pair.\n\nFor more information, see [Tagging Amazon RDS resources](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) in the *Amazon RDS User Guide* or [Tagging Amazon Aurora and Amazon RDS resources](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_Tagging.html) in the *Amazon Aurora User Guide* ." @@ -42739,6 +43411,7 @@ "CwLogEnabled": "Data collected by CloudWatch RUM is kept by RUM for 30 days and then deleted. This parameter specifies whether CloudWatch RUM sends a copy of this telemetry data to Amazon CloudWatch Logs in your account. This enables you to keep the telemetry data for more than 30 days, but it does incur Amazon CloudWatch Logs charges.\n\nIf you omit this parameter, the default is `false` .", "Domain": "The top-level internet domain name for which your application has administrative authority. This parameter is required.", "Name": "A name for the app monitor. This parameter is required.", + "ResourcePolicy": "Use this structure to assign a resource-based policy to a CloudWatch RUM app monitor to control access to it. Each app monitor can have one resource-based policy. The maximum size of the policy is 4 KB. To learn more about using resource policies with RUM, see [Using resource-based policies with CloudWatch RUM](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-RUM-resource-policies.html) .", "Tags": "Assigns one or more tags (key-value pairs) to the app monitor.\n\nTags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values.\n\nTags don't have any semantic meaning to AWS and are interpreted strictly as strings of characters.\n\nYou can associate as many as 50 tags with an app monitor.\n\nFor more information, see [Tagging AWS resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) ." }, "AWS::RUM::AppMonitor AppMonitorConfiguration": { @@ -42770,6 +43443,10 @@ "IamRoleArn": "This parameter is required if `Destination` is `Evidently` . If `Destination` is `CloudWatch` , do not use this parameter.\n\nThis parameter specifies the ARN of an IAM role that RUM will assume to write to the Evidently experiment that you are sending metrics to. This role must have permission to write to that experiment.", "MetricDefinitions": "An array of structures which define the metrics that you want to send." 
}, + "AWS::RUM::AppMonitor ResourcePolicy": { + "PolicyDocument": "The JSON to use as the resource policy. The document can be up to 4 KB in size. For more information about the contents and syntax for this policy, see [Using resource-based policies with CloudWatch RUM](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-RUM-resource-policies.html) .", + "PolicyRevisionId": "A string value that you can use to conditionally update your policy. You can provide the revision ID of your existing policy to make mutating requests against that policy.\n\nWhen you assign a policy revision ID, then later requests about that policy will be rejected with an `InvalidPolicyRevisionIdException` error if they don't provide the correct current revision ID." + }, "AWS::RUM::AppMonitor Tag": { "Key": "", "Value": "" @@ -42949,9 +43626,9 @@ "Value": "The value for the resource tag." }, "AWS::Redshift::Integration": { - "AdditionalEncryptionContext": "The encryption context for the integration. For more information, see [Encryption context](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context) in the *AWS Key Management Service Developer Guide* .", + "AdditionalEncryptionContext": "The encryption context for the integration. For more information, see [Encryption context](https://docs.aws.amazon.com/) in the *AWS Key Management Service Developer Guide* .", "IntegrationName": "The name of the integration.", - "KMSKeyId": "The AWS Key Management Service ( AWS KMS ) key identifier for the key used to encrypt the integration.", + "KMSKeyId": "The AWS Key Management Service ( AWS KMS) key identifier for the key used to encrypt the integration.", "SourceArn": "The Amazon Resource Name (ARN) of the database used as the source for replication.", "Tags": "The list of tags associated with the integration.", "TargetArn": "The Amazon Resource Name (ARN) of the Amazon Redshift data warehouse to use as the target for replication." @@ -43042,6 +43719,8 @@ "SecurityGroupIds": "A list of security group IDs to associate with the workgroup.", "SubnetIds": "A list of subnet IDs the workgroup is associated with.", "Tags": "The map of the key-value pairs used to tag the workgroup.", + "TrackName": "An optional parameter for the name of the track for the workgroup. If you don't provide a track name, the workgroup is assigned to the current track.", + "Workgroup": "The collection of computing resources from which an endpoint is created.", "WorkgroupName": "The name of the workgroup." }, "AWS::RedshiftServerless::Workgroup ConfigParameter": { @@ -43085,6 +43764,7 @@ "SecurityGroupIds": "An array of security group IDs to associate with the workgroup.", "Status": "The status of the workgroup.", "SubnetIds": "An array of subnet IDs the workgroup is associated with.", + "TrackName": "The name of the track for the workgroup.", "WorkgroupArn": "The Amazon Resource Name (ARN) that links to the workgroup.", "WorkgroupId": "The unique identifier of the workgroup.", "WorkgroupName": "The name of the workgroup." @@ -43652,8 +44332,8 @@ "Region": "The AWS Region for a cluster endpoint." }, "AWS::Route53RecoveryControl::Cluster Tag": { - "Key": "The key for a tag.", - "Value": "The value for a tag." + "Key": "", + "Value": "" }, "AWS::Route53RecoveryControl::ControlPanel": { "ClusterArn": "The Amazon Resource Name (ARN) of the cluster for the control panel.", @@ -43661,8 +44341,8 @@ "Tags": "The tags associated with the control panel." 
}, "AWS::Route53RecoveryControl::ControlPanel Tag": { - "Key": "The key for a tag.", - "Value": "The value for a tag." + "Key": "", + "Value": "" }, "AWS::Route53RecoveryControl::RoutingControl": { "ClusterArn": "The Amazon Resource Name (ARN) of the cluster that hosts the routing control.", @@ -43692,8 +44372,8 @@ "Type": "A rule can be one of the following: `ATLEAST` , `AND` , or `OR` ." }, "AWS::Route53RecoveryControl::SafetyRule Tag": { - "Key": "The key for a tag.", - "Value": "The value for a tag." + "Key": "", + "Value": "" }, "AWS::Route53RecoveryReadiness::Cell": { "CellName": "The name of the cell to create.", @@ -45482,7 +46162,7 @@ "AWS::SSMQuickSetup::ConfigurationManager ConfigurationDefinition": { "LocalDeploymentAdministrationRoleArn": "The ARN of the IAM role used to administrate local configuration deployments.", "LocalDeploymentExecutionRoleName": "The name of the IAM role used to deploy local configurations.", - "Parameters": "The parameters for the configuration definition type. Parameters for configuration definitions vary based the configuration type. The following lists outline the parameters for each configuration type.\n\n- **AWS Config Recording (Type: AWS QuickSetupType-CFGRecording)** - - `RecordAllResources`\n\n- Description: (Optional) A boolean value that determines whether all supported resources are recorded. The default value is \" `true` \".\n- `ResourceTypesToRecord`\n\n- Description: (Optional) A comma separated list of resource types you want to record.\n- `RecordGlobalResourceTypes`\n\n- Description: (Optional) A boolean value that determines whether global resources are recorded with all resource configurations. The default value is \" `false` \".\n- `GlobalResourceTypesRegion`\n\n- Description: (Optional) Determines the AWS Region where global resources are recorded.\n- `UseCustomBucket`\n\n- Description: (Optional) A boolean value that determines whether a custom Amazon S3 bucket is used for delivery. The default value is \" `false` \".\n- `DeliveryBucketName`\n\n- Description: (Optional) The name of the Amazon S3 bucket you want AWS Config to deliver configuration snapshots and configuration history files to.\n- `DeliveryBucketPrefix`\n\n- Description: (Optional) The key prefix you want to use in the custom Amazon S3 bucket.\n- `NotificationOptions`\n\n- Description: (Optional) Determines the notification configuration for the recorder. The valid values are `NoStreaming` , `UseExistingTopic` , and `CreateTopic` . The default value is `NoStreaming` .\n- `CustomDeliveryTopicAccountId`\n\n- Description: (Optional) The ID of the AWS account where the Amazon SNS topic you want to use for notifications resides. You must specify a value for this parameter if you use the `UseExistingTopic` notification option.\n- `CustomDeliveryTopicName`\n\n- Description: (Optional) The name of the Amazon SNS topic you want to use for notifications. You must specify a value for this parameter if you use the `UseExistingTopic` notification option.\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(7 days)` , `rate(1 days)` , and `none` . The default value is \" `none` \".\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. 
A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) The ID of the root of your Organization. This configuration type doesn't currently support choosing specific OUs. The configuration will be deployed to all the OUs in the Organization.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Change Manager (Type: AWS QuickSetupType-SSMChangeMgr)** - - `DelegatedAccountId`\n\n- Description: (Required) The ID of the delegated administrator account.\n- `JobFunction`\n\n- Description: (Required) The name for the Change Manager job function.\n- `PermissionType`\n\n- Description: (Optional) Specifies whether you want to use default administrator permissions for the job function role, or provide a custom IAM policy. The valid values are `CustomPermissions` and `AdminPermissions` . The default value for the parameter is `CustomerPermissions` .\n- `CustomPermissions`\n\n- Description: (Optional) A JSON string containing the IAM policy you want your job function to use. You must provide a value for this parameter if you specify `CustomPermissions` for the `PermissionType` parameter.\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Conformance Packs (Type: AWS QuickSetupType-CFGCPacks)** - - `DelegatedAccountId`\n\n- Description: (Optional) The ID of the delegated administrator account. This parameter is required for Organization deployments.\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(14 days)` , `rate(2 days)` , and `none` . The default value is \" `none` \".\n- `CPackNames`\n\n- Description: (Required) A comma separated list of AWS Config conformance packs.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) The ID of the root of your Organization. This configuration type doesn't currently support choosing specific OUs. The configuration will be deployed to all the OUs in the Organization.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Default Host Management Configuration (Type: AWS QuickSetupType-DHMC)** - - `UpdateSSMAgent`\n\n- Description: (Optional) A boolean value that determines whether the SSM Agent is updated on the target instances every 2 weeks. 
The default value is \" `true` \".\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **DevOps\u00a0Guru (Type: AWS QuickSetupType-DevOpsGuru)** - - `AnalyseAllResources`\n\n- Description: (Optional) A boolean value that determines whether DevOps\u00a0Guru analyzes all AWS CloudFormation stacks in the account. The default value is \" `false` \".\n- `EnableSnsNotifications`\n\n- Description: (Optional) A boolean value that determines whether DevOps\u00a0Guru sends notifications when an insight is created. The default value is \" `true` \".\n- `EnableSsmOpsItems`\n\n- Description: (Optional) A boolean value that determines whether DevOps\u00a0Guru creates an OpsCenter OpsItem when an insight is created. The default value is \" `true` \".\n- `EnableDriftRemediation`\n\n- Description: (Optional) A boolean value that determines whether a drift remediation schedule is used. The default value is \" `false` \".\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(14 days)` , `rate(1 days)` , and `none` . The default value is \" `none` \".\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Distributor (Type: AWS QuickSetupType-Distributor)** - - `PackagesToInstall`\n\n- Description: (Required) A comma separated list of packages you want to install on the target instances. The valid values are `AWSEFSTools` , `AWSCWAgent` , and `AWSEC2LaunchAgent` .\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(14 days)` , `rate(2 days)` , and `none` . The default value is \" `rate(30 days)` \".\n- `IsPolicyAttachAllowed`\n\n- Description: (Optional) A boolean value that determines whether Quick Setup attaches policies to instances profiles already associated with the target instances. The default value is \" `false` \".\n- `TargetType`\n\n- Description: (Optional) Determines how instances are targeted for local account deployments. Don't specify a value for this parameter if you're deploying to OUs. The valid values are `*` , `InstanceIds` , `ResourceGroups` , and `Tags` . Use `*` to target all instances in the account.\n- `TargetInstances`\n\n- Description: (Optional) A comma separated list of instance IDs. You must provide a value for this parameter if you specify `InstanceIds` for the `TargetType` parameter.\n- `TargetTagKey`\n\n- Description: (Required) The tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `TargetTagValue`\n\n- Description: (Required) The value of the tag key assigned to the instances you want to target. 
You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `ResourceGroupName`\n\n- Description: (Required) The name of the resource group associated with the instances you want to target. You must provide a value for this parameter if you specify `ResourceGroups` for the `TargetType` parameter.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Host Management (Type: AWS QuickSetupType-SSMHostMgmt)** - - `UpdateSSMAgent`\n\n- Description: (Optional) A boolean value that determines whether the SSM Agent is updated on the target instances every 2 weeks. The default value is \" `true` \".\n- `UpdateEc2LaunchAgent`\n\n- Description: (Optional) A boolean value that determines whether the EC2 Launch agent is updated on the target instances every month. The default value is \" `false` \".\n- `CollectInventory`\n\n- Description: (Optional) A boolean value that determines whether instance metadata is collected on the target instances every 30 minutes. The default value is \" `true` \".\n- `ScanInstances`\n\n- Description: (Optional) A boolean value that determines whether the target instances are scanned daily for available patches. The default value is \" `true` \".\n- `InstallCloudWatchAgent`\n\n- Description: (Optional) A boolean value that determines whether the Amazon CloudWatch agent is installed on the target instances. The default value is \" `false` \".\n- `UpdateCloudWatchAgent`\n\n- Description: (Optional) A boolean value that determines whether the Amazon CloudWatch agent is updated on the target instances every month. The default value is \" `false` \".\n- `IsPolicyAttachAllowed`\n\n- Description: (Optional) A boolean value that determines whether Quick Setup attaches policies to instances profiles already associated with the target instances. The default value is \" `false` \".\n- `TargetType`\n\n- Description: (Optional) Determines how instances are targeted for local account deployments. Don't specify a value for this parameter if you're deploying to OUs. The valid values are `*` , `InstanceIds` , `ResourceGroups` , and `Tags` . Use `*` to target all instances in the account.\n- `TargetInstances`\n\n- Description: (Optional) A comma separated list of instance IDs. You must provide a value for this parameter if you specify `InstanceIds` for the `TargetType` parameter.\n- `TargetTagKey`\n\n- Description: (Optional) The tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `TargetTagValue`\n\n- Description: (Optional) The value of the tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `ResourceGroupName`\n\n- Description: (Optional) The name of the resource group associated with the instances you want to target. 
You must provide a value for this parameter if you specify `ResourceGroups` for the `TargetType` parameter.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **OpsCenter (Type: AWS QuickSetupType-SSMOpsCenter)** - - `DelegatedAccountId`\n\n- Description: (Required) The ID of the delegated administrator account.\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Patch Policy (Type: AWS QuickSetupType-PatchPolicy)** - - `PatchPolicyName`\n\n- Description: (Required) A name for the patch policy. The value you provide is applied to target Amazon EC2 instances as a tag.\n- `SelectedPatchBaselines`\n\n- Description: (Required) An array of JSON objects containing the information for the patch baselines to include in your patch policy.\n- `PatchBaselineUseDefault`\n\n- Description: (Optional) A boolean value that determines whether the selected patch baselines are all AWS provided.\n- `ConfigurationOptionsPatchOperation`\n\n- Description: (Optional) Determines whether target instances scan for available patches, or scan and install available patches. The valid values are `Scan` and `ScanAndInstall` . The default value for the parameter is `Scan` .\n- `ConfigurationOptionsScanValue`\n\n- Description: (Optional) A cron expression that is used as the schedule for when instances scan for available patches.\n- `ConfigurationOptionsInstallValue`\n\n- Description: (Optional) A cron expression that is used as the schedule for when instances install available patches.\n- `ConfigurationOptionsScanNextInterval`\n\n- Description: (Optional) A boolean value that determines whether instances should scan for available patches at the next cron interval. The default value is \" `false` \".\n- `ConfigurationOptionsInstallNextInterval`\n\n- Description: (Optional) A boolean value that determines whether instances should scan for available patches at the next cron interval. The default value is \" `false` \".\n- `RebootOption`\n\n- Description: (Optional) Determines whether instances are rebooted after patches are installed. Valid values are `RebootIfNeeded` and `NoReboot` .\n- `IsPolicyAttachAllowed`\n\n- Description: (Optional) A boolean value that determines whether Quick Setup attaches policies to instances profiles already associated with the target instances. 
The default value is \" `false` \".\n- `OutputLogEnableS3`\n\n- Description: (Optional) A boolean value that determines whether command output logs are sent to Amazon S3.\n- `OutputS3Location`\n\n- Description: (Optional) A JSON string containing information about the Amazon S3 bucket where you want to store the output details of the request.\n\n- `OutputS3BucketRegion`\n\n- Description: (Optional) The AWS Region where the Amazon S3 bucket you want to deliver command output to is located.\n- `OutputS3BucketName`\n\n- Description: (Optional) The name of the Amazon S3 bucket you want to deliver command output to.\n- `OutputS3KeyPrefix`\n\n- Description: (Optional) The key prefix you want to use in the custom Amazon S3 bucket.\n- `TargetType`\n\n- Description: (Optional) Determines how instances are targeted for local account deployments. Don't specify a value for this parameter if you're deploying to OUs. The valid values are `*` , `InstanceIds` , `ResourceGroups` , and `Tags` . Use `*` to target all instances in the account.\n- `TargetInstances`\n\n- Description: (Optional) A comma separated list of instance IDs. You must provide a value for this parameter if you specify `InstanceIds` for the `TargetType` parameter.\n- `TargetTagKey`\n\n- Description: (Required) The tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `TargetTagValue`\n\n- Description: (Required) The value of the tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `ResourceGroupName`\n\n- Description: (Required) The name of the resource group associated with the instances you want to target. You must provide a value for this parameter if you specify `ResourceGroups` for the `TargetType` parameter.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. 
A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Resource Explorer (Type: AWS QuickSetupType-ResourceExplorer)** - - `SelectedAggregatorRegion`\n\n- Description: (Required) The AWS Region where you want to create the aggregator index.\n- `ReplaceExistingAggregator`\n\n- Description: (Required) A boolean value that determines whether to demote an existing aggregator if it is in a Region that differs from the value you specify for the `SelectedAggregatorRegion` .\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Resource Scheduler (Type: AWS QuickSetupType-Scheduler)** - - `TargetTagKey`\n\n- Description: (Required) The tag key assigned to the instances you want to target.\n- `TargetTagValue`\n\n- Description: (Required) The value of the tag key assigned to the instances you want to target.\n- `ICalendarString`\n\n- Description: (Required) An iCalendar formatted string containing the schedule you want Change Manager to use.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.", + "Parameters": "The parameters for the configuration definition type. Parameters for configuration definitions vary based the configuration type. The following lists outline the parameters for each configuration type.\n\n- **AWS Config Recording (Type: AWS QuickSetupType-CFGRecording)** - - `RecordAllResources`\n\n- Description: (Optional) A boolean value that determines whether all supported resources are recorded. The default value is \" `true` \".\n- `ResourceTypesToRecord`\n\n- Description: (Optional) A comma separated list of resource types you want to record.\n- `RecordGlobalResourceTypes`\n\n- Description: (Optional) A boolean value that determines whether global resources are recorded with all resource configurations. The default value is \" `false` \".\n- `GlobalResourceTypesRegion`\n\n- Description: (Optional) Determines the AWS Region where global resources are recorded.\n- `UseCustomBucket`\n\n- Description: (Optional) A boolean value that determines whether a custom Amazon S3 bucket is used for delivery. 
The default value is \" `false` \".\n- `DeliveryBucketName`\n\n- Description: (Optional) The name of the Amazon S3 bucket you want AWS Config to deliver configuration snapshots and configuration history files to.\n- `DeliveryBucketPrefix`\n\n- Description: (Optional) The key prefix you want to use in the custom Amazon S3 bucket.\n- `NotificationOptions`\n\n- Description: (Optional) Determines the notification configuration for the recorder. The valid values are `NoStreaming` , `UseExistingTopic` , and `CreateTopic` . The default value is `NoStreaming` .\n- `CustomDeliveryTopicAccountId`\n\n- Description: (Optional) The ID of the AWS account where the Amazon SNS topic you want to use for notifications resides. You must specify a value for this parameter if you use the `UseExistingTopic` notification option.\n- `CustomDeliveryTopicName`\n\n- Description: (Optional) The name of the Amazon SNS topic you want to use for notifications. You must specify a value for this parameter if you use the `UseExistingTopic` notification option.\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(7 days)` , `rate(1 days)` , and `none` . The default value is \" `none` \".\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) The ID of the root of your Organization. This configuration type doesn't currently support choosing specific OUs. The configuration will be deployed to all the OUs in the Organization.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Change Manager (Type: AWS QuickSetupType-SSMChangeMgr)** - - `DelegatedAccountId`\n\n- Description: (Required) The ID of the delegated administrator account.\n- `JobFunction`\n\n- Description: (Required) The name for the Change Manager job function.\n- `PermissionType`\n\n- Description: (Optional) Specifies whether you want to use default administrator permissions for the job function role, or provide a custom IAM policy. The valid values are `CustomPermissions` and `AdminPermissions` . The default value for the parameter is `CustomerPermissions` .\n- `CustomPermissions`\n\n- Description: (Optional) A JSON string containing the IAM policy you want your job function to use. You must provide a value for this parameter if you specify `CustomPermissions` for the `PermissionType` parameter.\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Conformance Packs (Type: AWS QuickSetupType-CFGCPacks)** - - `DelegatedAccountId`\n\n- Description: (Optional) The ID of the delegated administrator account. This parameter is required for Organization deployments.\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(14 days)` , `rate(2 days)` , and `none` . 
The default value is \" `none` \".\n- `CPackNames`\n\n- Description: (Required) A comma separated list of AWS Config conformance packs.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) The ID of the root of your Organization. This configuration type doesn't currently support choosing specific OUs. The configuration will be deployed to all the OUs in the Organization.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Default Host Management Configuration (Type: AWS QuickSetupType-DHMC)** - - `UpdateSSMAgent`\n\n- Description: (Optional) A boolean value that determines whether the SSM Agent is updated on the target instances every 2 weeks. The default value is \" `true` \".\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) The AWS Regions to deploy the configuration to. For this type, the parameter only accepts a value of `AllRegions` .\n- **DevOps\u00a0Guru (Type: AWS QuickSetupType-DevOpsGuru)** - - `AnalyseAllResources`\n\n- Description: (Optional) A boolean value that determines whether DevOps\u00a0Guru analyzes all AWS CloudFormation stacks in the account. The default value is \" `false` \".\n- `EnableSnsNotifications`\n\n- Description: (Optional) A boolean value that determines whether DevOps\u00a0Guru sends notifications when an insight is created. The default value is \" `true` \".\n- `EnableSsmOpsItems`\n\n- Description: (Optional) A boolean value that determines whether DevOps\u00a0Guru creates an OpsCenter OpsItem when an insight is created. The default value is \" `true` \".\n- `EnableDriftRemediation`\n\n- Description: (Optional) A boolean value that determines whether a drift remediation schedule is used. The default value is \" `false` \".\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(14 days)` , `rate(1 days)` , and `none` . The default value is \" `none` \".\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Distributor (Type: AWS QuickSetupType-Distributor)** - - `PackagesToInstall`\n\n- Description: (Required) A comma separated list of packages you want to install on the target instances. The valid values are `AWSEFSTools` , `AWSCWAgent` , and `AWSEC2LaunchAgent` .\n- `RemediationSchedule`\n\n- Description: (Optional) A rate expression that defines the schedule for drift remediation. The valid values are `rate(30 days)` , `rate(14 days)` , `rate(2 days)` , and `none` . 
The default value is \" `rate(30 days)` \".\n- `IsPolicyAttachAllowed`\n\n- Description: (Optional) A boolean value that determines whether Quick Setup attaches policies to instances profiles already associated with the target instances. The default value is \" `false` \".\n- `TargetType`\n\n- Description: (Optional) Determines how instances are targeted for local account deployments. Don't specify a value for this parameter if you're deploying to OUs. The valid values are `*` , `InstanceIds` , `ResourceGroups` , and `Tags` . Use `*` to target all instances in the account.\n- `TargetInstances`\n\n- Description: (Optional) A comma separated list of instance IDs. You must provide a value for this parameter if you specify `InstanceIds` for the `TargetType` parameter.\n- `TargetTagKey`\n\n- Description: (Required) The tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `TargetTagValue`\n\n- Description: (Required) The value of the tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `ResourceGroupName`\n\n- Description: (Required) The name of the resource group associated with the instances you want to target. You must provide a value for this parameter if you specify `ResourceGroups` for the `TargetType` parameter.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Host Management (Type: AWS QuickSetupType-SSMHostMgmt)** - - `UpdateSSMAgent`\n\n- Description: (Optional) A boolean value that determines whether the SSM Agent is updated on the target instances every 2 weeks. The default value is \" `true` \".\n- `UpdateEc2LaunchAgent`\n\n- Description: (Optional) A boolean value that determines whether the EC2 Launch agent is updated on the target instances every month. The default value is \" `false` \".\n- `CollectInventory`\n\n- Description: (Optional) A boolean value that determines whether instance metadata is collected on the target instances every 30 minutes. The default value is \" `true` \".\n- `ScanInstances`\n\n- Description: (Optional) A boolean value that determines whether the target instances are scanned daily for available patches. The default value is \" `true` \".\n- `InstallCloudWatchAgent`\n\n- Description: (Optional) A boolean value that determines whether the Amazon CloudWatch agent is installed on the target instances. The default value is \" `false` \".\n- `UpdateCloudWatchAgent`\n\n- Description: (Optional) A boolean value that determines whether the Amazon CloudWatch agent is updated on the target instances every month. The default value is \" `false` \".\n- `IsPolicyAttachAllowed`\n\n- Description: (Optional) A boolean value that determines whether Quick Setup attaches policies to instances profiles already associated with the target instances. 
The default value is \" `false` \".\n- `TargetType`\n\n- Description: (Optional) Determines how instances are targeted for local account deployments. Don't specify a value for this parameter if you're deploying to OUs. The valid values are `*` , `InstanceIds` , `ResourceGroups` , and `Tags` . Use `*` to target all instances in the account.\n- `TargetInstances`\n\n- Description: (Optional) A comma separated list of instance IDs. You must provide a value for this parameter if you specify `InstanceIds` for the `TargetType` parameter.\n- `TargetTagKey`\n\n- Description: (Optional) The tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `TargetTagValue`\n\n- Description: (Optional) The value of the tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `ResourceGroupName`\n\n- Description: (Optional) The name of the resource group associated with the instances you want to target. You must provide a value for this parameter if you specify `ResourceGroups` for the `TargetType` parameter.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **OpsCenter (Type: AWS QuickSetupType-SSMOpsCenter)** - - `DelegatedAccountId`\n\n- Description: (Required) The ID of the delegated administrator account.\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Patch Policy (Type: AWS QuickSetupType-PatchPolicy)** - - `PatchPolicyName`\n\n- Description: (Required) A name for the patch policy. The value you provide is applied to target Amazon EC2 instances as a tag.\n- `SelectedPatchBaselines`\n\n- Description: (Required) An array of JSON objects containing the information for the patch baselines to include in your patch policy.\n- `PatchBaselineUseDefault`\n\n- Description: (Optional) A value that determines whether the selected patch baselines are all AWS provided. Supported values are `default` and `custom` .\n- `PatchBaselineRegion`\n\n- Description: (Required) The AWS Region where the patch baseline exist.\n- `ConfigurationOptionsPatchOperation`\n\n- Description: (Optional) Determines whether target instances scan for available patches, or scan and install available patches. The valid values are `Scan` and `ScanAndInstall` . 
The default value for the parameter is `Scan` .\n- `ConfigurationOptionsScanValue`\n\n- Description: (Optional) A cron expression that is used as the schedule for when instances scan for available patches.\n- `ConfigurationOptionsInstallValue`\n\n- Description: (Optional) A cron expression that is used as the schedule for when instances install available patches.\n- `ConfigurationOptionsScanNextInterval`\n\n- Description: (Optional) A boolean value that determines whether instances should scan for available patches at the next cron interval. The default value is \" `false` \".\n- `ConfigurationOptionsInstallNextInterval`\n\n- Description: (Optional) A boolean value that determines whether instances should scan for available patches at the next cron interval. The default value is \" `false` \".\n- `RebootOption`\n\n- Description: (Optional) Determines whether instances are rebooted after patches are installed. Valid values are `RebootIfNeeded` and `NoReboot` .\n- `IsPolicyAttachAllowed`\n\n- Description: (Optional) A boolean value that determines whether Quick Setup attaches policies to instances profiles already associated with the target instances. The default value is \" `false` \".\n- `OutputLogEnableS3`\n\n- Description: (Optional) A boolean value that determines whether command output logs are sent to Amazon S3.\n- `OutputS3Location`\n\n- Description: (Optional) A JSON string containing information about the Amazon S3 bucket where you want to store the output details of the request.\n\n- `OutputS3BucketRegion`\n\n- Description: (Optional) The AWS Region where the Amazon S3 bucket you want to deliver command output to is located.\n- `OutputS3BucketName`\n\n- Description: (Optional) The name of the Amazon S3 bucket you want to deliver command output to.\n- `OutputS3KeyPrefix`\n\n- Description: (Optional) The key prefix you want to use in the custom Amazon S3 bucket.\n- `TargetType`\n\n- Description: (Optional) Determines how instances are targeted for local account deployments. Don't specify a value for this parameter if you're deploying to OUs. The valid values are `*` , `InstanceIds` , `ResourceGroups` , and `Tags` . Use `*` to target all instances in the account.\n- `TargetInstances`\n\n- Description: (Optional) A comma separated list of instance IDs. You must provide a value for this parameter if you specify `InstanceIds` for the `TargetType` parameter.\n- `TargetTagKey`\n\n- Description: (Required) The tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `TargetTagValue`\n\n- Description: (Required) The value of the tag key assigned to the instances you want to target. You must provide a value for this parameter if you specify `Tags` for the `TargetType` parameter.\n- `ResourceGroupName`\n\n- Description: (Required) The name of the resource group associated with the instances you want to target. You must provide a value for this parameter if you specify `ResourceGroups` for the `TargetType` parameter.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. 
A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Resource Explorer (Type: AWS QuickSetupType-ResourceExplorer)** - - `SelectedAggregatorRegion`\n\n- Description: (Required) The AWS Region where you want to create the aggregator index.\n- `ReplaceExistingAggregator`\n\n- Description: (Required) A boolean value that determines whether to demote an existing aggregator if it is in a Region that differs from the value you specify for the `SelectedAggregatorRegion` .\n- `TargetOrganizationalUnits`\n\n- Description: (Required) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.\n- **Resource Scheduler (Type: AWS QuickSetupType-Scheduler)** - - `TargetTagKey`\n\n- Description: (Required) The tag key assigned to the instances you want to target.\n- `TargetTagValue`\n\n- Description: (Required) The value of the tag key assigned to the instances you want to target.\n- `ICalendarString`\n\n- Description: (Required) An iCalendar formatted string containing the schedule you want Change Manager to use.\n- `TargetAccounts`\n\n- Description: (Optional) The ID of the AWS account initiating the configuration deployment. You only need to provide a value for this parameter if you want to deploy the configuration locally. A value must be provided for either `TargetAccounts` or `TargetOrganizationalUnits` .\n- `TargetOrganizationalUnits`\n\n- Description: (Optional) A comma separated list of organizational units (OUs) you want to deploy the configuration to.\n- `TargetRegions`\n\n- Description: (Required) A comma separated list of AWS Regions you want to deploy the configuration to.", "Type": "The type of the Quick Setup configuration.", "TypeVersion": "The version of the Quick Setup type used.", "id": "The ID of the configuration definition." @@ -45906,7 +46586,7 @@ "LifecycleConfigArns": "The Amazon Resource Name (ARN) of the Lifecycle Configurations attached to the JupyterServerApp. If you use this parameter, the `DefaultResourceSpec` parameter is also required.\n\n> To remove a Lifecycle Config, you must set `LifecycleConfigArns` to an empty list." }, "AWS::SageMaker::Domain KernelGatewayAppSettings": { - "CustomImages": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.", + "CustomImages": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.\n\nThe maximum number of custom images are as follows.\n\n- On a domain level: 200\n- On a space level: 5\n- On a user profile level: 5", "DefaultResourceSpec": "The default instance type and the Amazon Resource Name (ARN) of the default SageMaker AI image used by the KernelGateway app.\n\n> The Amazon SageMaker AI Studio UI does not use the default instance type value set here. 
The default instance type set here is used when Apps are created using the AWS CLI or AWS CloudFormation and the instance type parameter value is not passed.", "LifecycleConfigArns": "The Amazon Resource Name (ARN) of the Lifecycle Configurations attached to the user profile or domain.\n\n> To remove a Lifecycle Config, you must set `LifecycleConfigArns` to an empty list." }, @@ -45963,7 +46643,7 @@ "AWS::SageMaker::Endpoint": { "DeploymentConfig": "The deployment configuration for an endpoint, which contains the desired deployment strategy and rollback configurations.", "EndpointConfigName": "The name of the [AWS::SageMaker::EndpointConfig](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-endpointconfig.html) resource that specifies the configuration for the endpoint. For more information, see [CreateEndpointConfig](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateEndpointConfig.html) .", - "EndpointName": "The name of the endpoint.The name must be unique within an AWS Region in your AWS account. The name is case-insensitive in `CreateEndpoint` , but the case is preserved and must be matched in [](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_runtime_InvokeEndpoint.html) .", + "EndpointName": "The name of the endpoint. The name must be unique within an AWS Region in your AWS account. The name is case-insensitive in `CreateEndpoint` , but the case is preserved and must be matched in [](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_runtime_InvokeEndpoint.html) .", "ExcludeRetainedVariantProperties": "When you are updating endpoint resources with [RetainAllVariantProperties](https://docs.aws.amazon.com/sagemaker/latest/dg/API_UpdateEndpoint.html#SageMaker-UpdateEndpoint-request-RetainAllVariantProperties) whose value is set to `true` , `ExcludeRetainedVariantProperties` specifies the list of type [VariantProperty](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-endpoint-variantproperty.html) to override with the values provided by `EndpointConfig` . If you don't specify a value for `ExcludeAllVariantProperties` , no variant properties are overridden. Don't use this property when creating new endpoint resources or when `RetainAllVariantProperties` is set to `false` .", "RetainAllVariantProperties": "When updating endpoint resources, enables or disables the retention of variant properties, such as the instance count or the variant weight. To retain the variant properties of an endpoint when updating it, set `RetainAllVariantProperties` to `true` . To use the variant properties specified in a new `EndpointConfig` call when updating an endpoint, set `RetainAllVariantProperties` to `false` . Use this property only when updating endpoint resources, not when creating new endpoint resources.", "RetainDeploymentConfig": "Specifies whether to reuse the last deployment configuration. 
The default value is false (the configuration is not reused).", @@ -46188,6 +46868,7 @@ "VendorGuidance": "" }, "AWS::SageMaker::InferenceComponent": { + "DeploymentConfig": "The deployment configuration for an endpoint, which contains the desired deployment strategy and rollback configurations.", "EndpointArn": "The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.", "EndpointName": "The name of the endpoint that hosts the inference component.", "InferenceComponentName": "The name of the inference component.", @@ -46196,11 +46877,21 @@ "Tags": "", "VariantName": "The name of the production variant that hosts the inference component." }, + "AWS::SageMaker::InferenceComponent Alarm": { + "AlarmName": "The name of a CloudWatch alarm in your account." + }, + "AWS::SageMaker::InferenceComponent AutoRollbackConfiguration": { + "Alarms": "" + }, "AWS::SageMaker::InferenceComponent DeployedImage": { "ResolutionTime": "The date and time when the image path for the model resolved to the `ResolvedImage`", "ResolvedImage": "The specific digest path of the image hosted in this `ProductionVariant` .", "SpecifiedImage": "The image path you specified when you created the model." }, + "AWS::SageMaker::InferenceComponent InferenceComponentCapacitySize": { + "Type": "Specifies the endpoint capacity type.\n\n- **COPY_COUNT** - The endpoint activates based on the number of inference component copies.\n- **CAPACITY_PERCENT** - The endpoint activates based on the specified percentage of capacity.", + "Value": "Defines the capacity size, either as a number of inference component copies or a capacity percentage." + }, "AWS::SageMaker::InferenceComponent InferenceComponentComputeResourceRequirements": { "MaxMemoryRequiredInMb": "The maximum MB of memory to allocate to run a model that you assign to an inference component.", "MinMemoryRequiredInMb": "The minimum MB of memory to allocate to run a model that you assign to an inference component.", @@ -46213,6 +46904,16 @@ "Environment": "The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.", "Image": "The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored." }, + "AWS::SageMaker::InferenceComponent InferenceComponentDeploymentConfig": { + "AutoRollbackConfiguration": "", + "RollingUpdatePolicy": "Specifies a rolling deployment strategy for updating a SageMaker AI endpoint." + }, + "AWS::SageMaker::InferenceComponent InferenceComponentRollingUpdatePolicy": { + "MaximumBatchSize": "The batch size for each rolling step in the deployment process. For each step, SageMaker AI provisions capacity on the new endpoint fleet, routes traffic to that fleet, and terminates capacity on the old endpoint fleet. The value must be between 5% to 50% of the copy count of the inference component.", + "MaximumExecutionTimeoutInSeconds": "The time limit for the total deployment. Exceeding this limit causes a timeout.", + "RollbackMaximumBatchSize": "The batch size for a rollback to the old endpoint fleet. If this field is absent, the value is set to the default, which is 100% of the total capacity. When the default is used, SageMaker AI provisions the entire capacity of the old fleet at once during rollback.", + "WaitIntervalInSeconds": "The length of the baking period, during which SageMaker AI monitors alarms for each batch on the new fleet." 
+ }, "AWS::SageMaker::InferenceComponent InferenceComponentRuntimeConfig": { "CopyCount": "The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.", "CurrentCopyCount": "", @@ -46328,7 +47029,7 @@ "MultiModelConfig": "Specifies additional configuration for multi-model endpoints." }, "AWS::SageMaker::Model HubAccessConfig": { - "HubContentArn": "" + "HubContentArn": "The ARN of your private model hub content. This should be a `ModelReference` resource type that points to a SageMaker JumpStart public hub model." }, "AWS::SageMaker::Model ImageConfig": { "RepositoryAccessMode": "Set this to one of the following values:\n\n- `Platform` - The model image is hosted in Amazon ECR.\n- `Vpc` - The model image is hosted in a private Docker registry in your VPC.", @@ -46351,7 +47052,7 @@ }, "AWS::SageMaker::Model S3DataSource": { "CompressionType": "", - "HubAccessConfig": "", + "HubAccessConfig": "The configuration for a private hub model reference that points to a SageMaker JumpStart public hub model.", "ModelAccessConfig": "", "S3DataType": "If you choose `S3Prefix` , `S3Uri` identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training.\n\nIf you choose `ManifestFile` , `S3Uri` identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training.\n\nIf you choose `AugmentedManifestFile` , `S3Uri` identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. `AugmentedManifestFile` can only be used if the Channel's input mode is `Pipe` .", "S3Uri": "Depending on the value specified for the `S3DataType` , identifies either a key name prefix or a manifest. For example:\n\n- A key name prefix might look like this: `s3://bucketname/exampleprefix/`\n- A manifest might look like this: `s3://bucketname/example.manifest`\n\nA manifest is an S3 object which is a JSON file consisting of an array of elements. The first element is a prefix which is followed by one or more suffixes. SageMaker appends the suffix elements to the prefix to get a full set of `S3Uri` . Note that the prefix must be a valid non-empty `S3Uri` that precludes users from specifying a manifest whose individual `S3Uri` is sourced from different S3 buckets.\n\nThe following code example shows a valid manifest format:\n\n`[ {\"prefix\": \"s3://customer_bucket/some/prefix/\"},`\n\n`\"relative/path/to/custdata-1\",`\n\n`\"relative/path/custdata-2\",`\n\n`...`\n\n`\"relative/path/custdata-N\"`\n\n`]`\n\nThis JSON is equivalent to the following `S3Uri` list:\n\n`s3://customer_bucket/some/prefix/relative/path/to/custdata-1`\n\n`s3://customer_bucket/some/prefix/relative/path/custdata-2`\n\n`...`\n\n`s3://customer_bucket/some/prefix/relative/path/custdata-N`\n\nThe complete set of `S3Uri` in this manifest is the input data for the channel for this data source. The object that each `S3Uri` points to must be readable by the IAM role that SageMaker uses to perform tasks on your behalf.\n\nYour input bucket must be located in same AWS region as your training job." @@ -47300,7 +48001,7 @@ "LifecycleConfigArns": "The Amazon Resource Name (ARN) of the Lifecycle Configurations attached to the JupyterServerApp. If you use this parameter, the `DefaultResourceSpec` parameter is also required.\n\n> To remove a Lifecycle Config, you must set `LifecycleConfigArns` to an empty list." 
}, "AWS::SageMaker::Space KernelGatewayAppSettings": { - "CustomImages": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.", + "CustomImages": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.\n\nThe maximum number of custom images are as follows.\n\n- On a domain level: 200\n- On a space level: 5\n- On a user profile level: 5", "DefaultResourceSpec": "The default instance type and the Amazon Resource Name (ARN) of the default SageMaker AI image used by the KernelGateway app.\n\n> The Amazon SageMaker AI Studio UI does not use the default instance type value set here. The default instance type set here is used when Apps are created using the AWS CLI or AWS CloudFormation and the instance type parameter value is not passed.", "LifecycleConfigArns": "The Amazon Resource Name (ARN) of the Lifecycle Configurations attached to the user profile or domain.\n\n> To remove a Lifecycle Config, you must set `LifecycleConfigArns` to an empty list." }, @@ -47423,7 +48124,7 @@ "LifecycleConfigArns": "The Amazon Resource Name (ARN) of the Lifecycle Configurations attached to the JupyterServerApp. If you use this parameter, the `DefaultResourceSpec` parameter is also required.\n\n> To remove a Lifecycle Config, you must set `LifecycleConfigArns` to an empty list." }, "AWS::SageMaker::UserProfile KernelGatewayAppSettings": { - "CustomImages": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.", + "CustomImages": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.\n\nThe maximum number of custom images are as follows.\n\n- On a domain level: 200\n- On a space level: 5\n- On a user profile level: 5", "DefaultResourceSpec": "The default instance type and the Amazon Resource Name (ARN) of the default SageMaker AI image used by the KernelGateway app.\n\n> The Amazon SageMaker AI Studio UI does not use the default instance type value set here. The default instance type set here is used when Apps are created using the AWS CLI or AWS CloudFormation and the instance type parameter value is not passed.", "LifecycleConfigArns": "The Amazon Resource Name (ARN) of the Lifecycle Configurations attached to the user profile or domain.\n\n> To remove a Lifecycle Config, you must set `LifecycleConfigArns` to an empty list." }, @@ -48707,7 +49408,7 @@ }, "AWS::Timestream::Table": { "DatabaseName": "The name of the Timestream database that contains this table.\n\n*Length Constraints* : Minimum length of 3 bytes. Maximum length of 256 bytes.", - "MagneticStoreWriteProperties": "Contains properties to set on the table when enabling magnetic store writes.\n\nThis object has the following attributes:\n\n- *EnableMagneticStoreWrites* : A `boolean` flag to enable magnetic store writes.\n- *MagneticStoreRejectedDataLocation* : The location to write error reports for records rejected, asynchronously, during magnetic store writes. Only `S3Configuration` objects are allowed. The `S3Configuration` object has the following attributes:\n\n- *BucketName* : The name of the S3 bucket.\n- *EncryptionOption* : The encryption option for the S3 location. 
Valid values are S3 server-side encryption with an S3 managed key ( `SSE_S3` ) or AWS managed key ( `SSE_KMS` ).\n- *KmsKeyId* : The AWS KMS key ID to use when encrypting with an AWS managed key.\n- *ObjectKeyPrefix* : The prefix to use option for the objects stored in S3.\n\nBoth `BucketName` and `EncryptionOption` are *required* when `S3Configuration` is specified. If you specify `SSE_KMS` as your `EncryptionOption` then `KmsKeyId` is *required* .\n\n`EnableMagneticStoreWrites` attribute is *required* when `MagneticStoreWriteProperties` is specified. `MagneticStoreRejectedDataLocation` attribute is *required* when `EnableMagneticStoreWrites` is set to `true` .\n\nSee the following examples:\n\n*JSON*\n\n```json\n{ \"Type\" : AWS::Timestream::Table\", \"Properties\":{ \"DatabaseName\":\"TestDatabase\", \"TableName\":\"TestTable\", \"MagneticStoreWriteProperties\":{ \"EnableMagneticStoreWrites\":true, \"MagneticStoreRejectedDataLocation\":{ \"S3Configuration\":{ \"BucketName\":\"testbucket\", \"EncryptionOption\":\"SSE_KMS\", \"KmsKeyId\":\"1234abcd-12ab-34cd-56ef-1234567890ab\", \"ObjectKeyPrefix\":\"prefix\" } } } }\n}\n```\n\n*YAML*\n\n```\nType: AWS::Timestream::Table\nDependsOn: TestDatabase\nProperties: TableName: \"TestTable\" DatabaseName: \"TestDatabase\" MagneticStoreWriteProperties: EnableMagneticStoreWrites: true MagneticStoreRejectedDataLocation: S3Configuration: BucketName: \"testbucket\" EncryptionOption: \"SSE_KMS\" KmsKeyId: \"1234abcd-12ab-34cd-56ef-1234567890ab\" ObjectKeyPrefix: \"prefix\"\n```", + "MagneticStoreWriteProperties": "Contains properties to set on the table when enabling magnetic store writes.\n\nThis object has the following attributes:\n\n- *EnableMagneticStoreWrites* : A `boolean` flag to enable magnetic store writes.\n- *MagneticStoreRejectedDataLocation* : The location to write error reports for records rejected, asynchronously, during magnetic store writes. Only `S3Configuration` objects are allowed. The `S3Configuration` object has the following attributes:\n\n- *BucketName* : The name of the S3 bucket.\n- *EncryptionOption* : The encryption option for the S3 location. Valid values are S3 server-side encryption with an S3 managed key ( `SSE_S3` ) or AWS managed key ( `SSE_KMS` ).\n- *KmsKeyId* : The AWS KMS key ID to use when encrypting with an AWS managed key.\n- *ObjectKeyPrefix* : The prefix to use option for the objects stored in S3.\n\nBoth `BucketName` and `EncryptionOption` are *required* when `S3Configuration` is specified. If you specify `SSE_KMS` as your `EncryptionOption` then `KmsKeyId` is *required* .\n\n`EnableMagneticStoreWrites` attribute is *required* when `MagneticStoreWriteProperties` is specified. 
`MagneticStoreRejectedDataLocation` attribute is *required* when `EnableMagneticStoreWrites` is set to `true` .\n\nSee the following examples:\n\n*JSON*\n\n```json\n{ \"Type\" : \"AWS::Timestream::Table\", \"Properties\":{ \"DatabaseName\":\"TestDatabase\", \"TableName\":\"TestTable\", \"MagneticStoreWriteProperties\":{ \"EnableMagneticStoreWrites\":true, \"MagneticStoreRejectedDataLocation\":{ \"S3Configuration\":{ \"BucketName\":\"amzn-s3-demo-bucket\", \"EncryptionOption\":\"SSE_KMS\", \"KmsKeyId\":\"1234abcd-12ab-34cd-56ef-1234567890ab\", \"ObjectKeyPrefix\":\"prefix\" } } } }\n}\n```\n\n*YAML*\n\n```\nType: AWS::Timestream::Table\nDependsOn: TestDatabase\nProperties: TableName: \"TestTable\" DatabaseName: \"TestDatabase\" MagneticStoreWriteProperties: EnableMagneticStoreWrites: true MagneticStoreRejectedDataLocation: S3Configuration: BucketName: \"amzn-s3-demo-bucket\" EncryptionOption: \"SSE_KMS\" KmsKeyId: \"1234abcd-12ab-34cd-56ef-1234567890ab\" ObjectKeyPrefix: \"prefix\"\n```", "RetentionProperties": "The retention duration for the memory store and magnetic store. This object has the following attributes:\n\n- *MemoryStoreRetentionPeriodInHours* : Retention duration for memory store, in hours.\n- *MagneticStoreRetentionPeriodInDays* : Retention duration for magnetic store, in days.\n\nBoth attributes are of type `string` . Both attributes are *required* when `RetentionProperties` is specified.\n\nSee the following examples:\n\n*JSON*\n\n`{ \"Type\" : \"AWS::Timestream::Table\", \"Properties\" : { \"DatabaseName\" : \"TestDatabase\", \"TableName\" : \"TestTable\", \"RetentionProperties\" : { \"MemoryStoreRetentionPeriodInHours\": \"24\", \"MagneticStoreRetentionPeriodInDays\": \"7\" } } }` \n\n*YAML*\n\n```\nType: AWS::Timestream::Table\nDependsOn: TestDatabase\nProperties: TableName: \"TestTable\" DatabaseName: \"TestDatabase\" RetentionProperties: MemoryStoreRetentionPeriodInHours: \"24\" MagneticStoreRetentionPeriodInDays: \"7\"\n```", "Schema": "The schema of the table.", "TableName": "The name of the Timestream table.\n\n*Length Constraints* : Minimum length of 3 bytes. Maximum length of 256 bytes.", @@ -48745,6 +49446,7 @@ "AWS::Transfer::Agreement": { "AccessRole": "Connectors are used to send files using either the AS2 or SFTP protocol. For the access role, provide the Amazon Resource Name (ARN) of the AWS Identity and Access Management role to use.\n\n*For AS2 connectors*\n\nWith AS2, you can send files by calling `StartFileTransfer` and specifying the file paths in the request parameter, `SendFilePaths` . We use the file\u2019s parent directory (for example, for `--send-file-paths /bucket/dir/file.txt` , parent directory is `/bucket/dir/` ) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the `AccessRole` needs to provide read and write access to the parent directory of the file location used in the `StartFileTransfer` request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with `StartFileTransfer` .\n\nIf you are using Basic authentication for your AS2 connector, the access role requires the `secretsmanager:GetSecretValue` permission for the secret. 
If the secret is encrypted using a customer-managed key instead of the AWS managed key in Secrets Manager, then the role also needs the `kms:Decrypt` permission for that key.\n\n*For SFTP connectors*\n\nMake sure that the access role provides read and write access to the parent directory of the file location that's used in the `StartFileTransfer` request. Additionally, make sure that the role provides `secretsmanager:GetSecretValue` permission to AWS Secrets Manager .", "BaseDirectory": "The landing directory (folder) for files that are transferred by using the AS2 protocol.", + "CustomDirectories": "A `CustomDirectoriesType` structure. This structure specifies custom directories for storing various AS2 message files. You can specify directories for the following types of files.\n\n- Failed files\n- MDN files\n- Payload files\n- Status files\n- Temporary files", "Description": "The name or short description that's used to identify the agreement.", "EnforceMessageSigning": "Determines whether or not unsigned messages from your trading partners will be accepted.\n\n- `ENABLED` : Transfer Family rejects unsigned messages from your trading partner.\n- `DISABLED` (default value): Transfer Family accepts unsigned messages from your trading partner.", "LocalProfileId": "A unique identifier for the AS2 local profile.", @@ -48754,6 +49456,13 @@ "Status": "The current status of the agreement, either `ACTIVE` or `INACTIVE` .", "Tags": "Key-value pairs that can be used to group and search for agreements." }, + "AWS::Transfer::Agreement CustomDirectories": { + "FailedFilesDirectory": "", + "MdnFilesDirectory": "", + "PayloadFilesDirectory": "", + "StatusFilesDirectory": "", + "TemporaryFilesDirectory": "" + }, "AWS::Transfer::Agreement Tag": { "Key": "The name assigned to the tag that you create.", "Value": "Contains one or more values that you assigned to the key name you create." @@ -49636,6 +50345,7 @@ "Cookies": "Inspect the request cookies. You must configure scope and pattern matching filters in the `Cookies` object, to define the set of cookies and the parts of the cookies that AWS WAF inspects.\n\nOnly the first 8 KB (8192 bytes) of a request's cookies and only the first 200 cookies are forwarded to AWS WAF for inspection by the underlying host service. You must configure how to handle any oversize cookie content in the `Cookies` object. AWS WAF applies the pattern matching filters to the cookies that it receives from the underlying host service.", "Headers": "Inspect the request headers. You must configure scope and pattern matching filters in the `Headers` object, to define the set of headers to and the parts of the headers that AWS WAF inspects.\n\nOnly the first 8 KB (8192 bytes) of a request's headers and only the first 200 headers are forwarded to AWS WAF for inspection by the underlying host service. You must configure how to handle any oversize header content in the `Headers` object. AWS WAF applies the pattern matching filters to the headers that it receives from the underlying host service.", "JA3Fingerprint": "Available for use with Amazon CloudFront distributions and Application Load Balancers. Match against the request's JA3 fingerprint. The JA3 fingerprint is a 32-character hash derived from the TLS Client Hello of an incoming request. This fingerprint serves as a unique identifier for the client's TLS configuration. AWS WAF calculates and logs this fingerprint for each request that has enough TLS Client Hello information for the calculation. 
Almost all web requests include this information.\n\n> You can use this choice only with a string match `ByteMatchStatement` with the `PositionalConstraint` set to `EXACTLY` . \n\nYou can obtain the JA3 fingerprint for client requests from the web ACL logs. If AWS WAF is able to calculate the fingerprint, it includes it in the logs. For information about the logging fields, see [Log fields](https://docs.aws.amazon.com/waf/latest/developerguide/logging-fields.html) in the *AWS WAF Developer Guide* .\n\nProvide the JA3 fingerprint string from the logs in your string match statement specification, to match with any future requests that have the same TLS configuration.", + "JA4Fingerprint": "Available for use with Amazon CloudFront distributions and Application Load Balancers. Match against the request's JA4 fingerprint. The JA4 fingerprint is a 36-character hash derived from the TLS Client Hello of an incoming request. This fingerprint serves as a unique identifier for the client's TLS configuration. AWS WAF calculates and logs this fingerprint for each request that has enough TLS Client Hello information for the calculation. Almost all web requests include this information.\n\n> You can use this choice only with a string match `ByteMatchStatement` with the `PositionalConstraint` set to `EXACTLY` . \n\nYou can obtain the JA4 fingerprint for client requests from the web ACL logs. If AWS WAF is able to calculate the fingerprint, it includes it in the logs. For information about the logging fields, see [Log fields](https://docs.aws.amazon.com/waf/latest/developerguide/logging-fields.html) in the *AWS WAF Developer Guide* .\n\nProvide the JA4 fingerprint string from the logs in your string match statement specification, to match with any future requests that have the same TLS configuration.", "JsonBody": "Inspect the request body as JSON. The request body immediately follows the request headers. This is the part of a request that contains any additional data that you want to send to your web server as the HTTP request body, such as data from a form.\n\nAWS WAF does not support inspecting the entire contents of the web request body if the body exceeds the limit for the resource type. When a web request body is larger than the limit, the underlying host service only forwards the contents that are within the limit to AWS WAF for inspection.\n\n- For Application Load Balancer and AWS AppSync , the limit is fixed at 8 KB (8,192 bytes).\n- For CloudFront, API Gateway, Amazon Cognito, App Runner, and Verified Access, the default limit is 16 KB (16,384 bytes), and you can increase the limit for each resource type in the web ACL `AssociationConfig` , for additional processing fees.\n\nFor information about how to handle oversized request bodies, see the `JsonBody` object configuration.", "Method": "Inspect the HTTP method. The method indicates the type of operation that the request is asking the origin to perform.", "QueryString": "Inspect the query string. This is the part of a URL that appears after a `?` character, if any.", @@ -49676,6 +50386,9 @@ "AWS::WAFv2::RuleGroup JA3Fingerprint": { "FallbackBehavior": "The match status to assign to the web request if the request doesn't have a JA3 fingerprint.\n\nYou can specify the following fallback behaviors:\n\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement." 
}, + "AWS::WAFv2::RuleGroup JA4Fingerprint": { + "FallbackBehavior": "The match status to assign to the web request if the request doesn't have a JA4 fingerprint.\n\nYou can specify the following fallback behaviors:\n\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement." + }, "AWS::WAFv2::RuleGroup JsonBody": { "InvalidFallbackBehavior": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\n> AWS WAF parsing doesn't fully validate the input JSON string, so parsing can succeed even for invalid JSON. When parsing succeeds, AWS WAF doesn't apply the fallback behavior. For more information, see [JSON body](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-fields-list.html#waf-rule-statement-request-component-json-body) in the *AWS WAF Developer Guide* .", "MatchPattern": "The patterns to look for in the JSON body. AWS WAF inspects the results of these pattern matches against the rule inspection criteria.", @@ -49716,6 +50429,8 @@ "HTTPMethod": "Use the request's HTTP method as an aggregate key. Each distinct HTTP method contributes to the aggregation instance. If you use just the HTTP method as your custom key, then each method fully defines an aggregation instance.", "Header": "Use the value of a header in the request as an aggregate key. Each distinct value in the header contributes to the aggregation instance. If you use a single header as your custom key, then each value fully defines an aggregation instance.", "IP": "Use the request's originating IP address as an aggregate key. Each distinct IP address contributes to the aggregation instance.\n\nWhen you specify an IP or forwarded IP in the custom key settings, you must also specify at least one other key to use. You can aggregate on only the IP address by specifying `IP` in your rate-based statement's `AggregateKeyType` .", + "JA3Fingerprint": "Use the request's JA3 fingerprint as an aggregate key. If you use a single JA3 fingerprint as your custom key, then each value fully defines an aggregation instance.", + "JA4Fingerprint": "Use the request's JA4 fingerprint as an aggregate key. If you use a single JA4 fingerprint as your custom key, then each value fully defines an aggregation instance.", "LabelNamespace": "Use the specified label namespace as an aggregate key. Each distinct fully qualified label name that has the specified label namespace contributes to the aggregation instance. 
If you use just one label namespace as your custom key, then each label name fully defines an aggregation instance.\n\nThis uses only labels that have been added to the request by rules that are evaluated before this rate-based rule in the web ACL.\n\nFor information about label namespaces and names, see [Label syntax and naming requirements](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-label-requirements.html) in the *AWS WAF Developer Guide* .", "QueryArgument": "Use the specified query argument as an aggregate key. Each distinct value for the named query argument contributes to the aggregation instance. If you use a single query argument as your custom key, then each value fully defines an aggregation instance.", "QueryString": "Use the request's query string as an aggregate key. Each distinct string contributes to the aggregation instance. If you use just the query string as your custom key, then each string fully defines an aggregation instance.", @@ -49729,6 +50444,12 @@ "Name": "The name of the header to use.", "TextTransformations": "Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. Text transformations are used in rule match statements, to transform the `FieldToMatch` request component before inspecting it, and they're used in rate-based rule statements, to transform request components before using them as custom aggregation keys. If you specify one or more transformations to apply, AWS WAF performs all transformations on the specified content, starting from the lowest priority setting, and then uses the transformed component contents." }, + "AWS::WAFv2::RuleGroup RateLimitJA3Fingerprint": { + "FallbackBehavior": "The match status to assign to the web request if there is insufficient TLS Client Hello information to compute the JA3 fingerprint.\n\nYou can specify the following fallback behaviors:\n\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement." + }, + "AWS::WAFv2::RuleGroup RateLimitJA4Fingerprint": { + "FallbackBehavior": "The match status to assign to the web request if there is insufficient TLS Client Hello information to compute the JA4 fingerprint.\n\nYou can specify the following fallback behaviors:\n\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement." + }, "AWS::WAFv2::RuleGroup RateLimitLabelNamespace": { "Namespace": "The namespace to use for aggregation." }, @@ -49823,6 +50544,7 @@ "CaptchaConfig": "Specifies how AWS WAF should handle `CAPTCHA` evaluations for rules that don't have their own `CaptchaConfig` settings. If you don't specify this, AWS WAF uses its default settings for `CaptchaConfig` .", "ChallengeConfig": "Specifies how AWS WAF should handle challenge evaluations for rules that don't have their own `ChallengeConfig` settings. If you don't specify this, AWS WAF uses its default settings for `ChallengeConfig` .", "CustomResponseBodies": "A map of custom response keys and content bodies. When you create a rule with a block action, you can send a custom response to the web request. 
You define these for the web ACL, and then use them in the rules and default actions that you define in the web ACL.\n\nFor information about customizing web requests and responses, see [Customizing web requests and responses in AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/waf-custom-request-response.html) in the *AWS WAF Developer Guide* .\n\nFor information about the limits on count and size for custom request and response settings, see [AWS WAF quotas](https://docs.aws.amazon.com/waf/latest/developerguide/limits.html) in the *AWS WAF Developer Guide* .", + "DataProtectionConfig": "Specifies data protection to apply to the web request data for the web ACL. This is a web ACL level data protection option.\n\nThe data protection that you configure for the web ACL alters the data that's available for any other data collection activity, including your AWS WAF logging destinations, web ACL request sampling, and Amazon Security Lake data collection and management. Your other option for data protection is in the logging configuration, which only affects logging.", "DefaultAction": "The action to perform if none of the `Rules` contained in the `WebACL` match.", "Description": "A description of the web ACL that helps with identification.", "Name": "The name of the web ACL. You cannot change the name of a web ACL after you create it.", @@ -49912,6 +50634,15 @@ "Content": "The payload of the custom response.\n\nYou can use JSON escape strings in JSON content. To do this, you must specify JSON content in the `ContentType` setting.\n\nFor information about the limits on count and size for custom request and response settings, see [AWS WAF quotas](https://docs.aws.amazon.com/waf/latest/developerguide/limits.html) in the *AWS WAF Developer Guide* .", "ContentType": "The type of content in the payload that you are defining in the `Content` string." }, + "AWS::WAFv2::WebACL DataProtect": { + "Action": "", + "ExcludeRateBasedDetails": "", + "ExcludeRuleMatchDetails": "", + "Field": "" + }, + "AWS::WAFv2::WebACL DataProtectionConfig": { + "DataProtections": "An array of data protection configurations for specific web request field types. This is defined for each web ACL. AWS WAF applies the specified protection to all web requests that the web ACL inspects." + }, "AWS::WAFv2::WebACL DefaultAction": { "Allow": "Specifies that AWS WAF should allow requests by default.", "Block": "Specifies that AWS WAF should block requests by default." @@ -49928,6 +50659,7 @@ "Cookies": "Inspect the request cookies. You must configure scope and pattern matching filters in the `Cookies` object, to define the set of cookies and the parts of the cookies that AWS WAF inspects.\n\nOnly the first 8 KB (8192 bytes) of a request's cookies and only the first 200 cookies are forwarded to AWS WAF for inspection by the underlying host service. You must configure how to handle any oversize cookie content in the `Cookies` object. AWS WAF applies the pattern matching filters to the cookies that it receives from the underlying host service.", "Headers": "Inspect the request headers. You must configure scope and pattern matching filters in the `Headers` object, to define the set of headers to and the parts of the headers that AWS WAF inspects.\n\nOnly the first 8 KB (8192 bytes) of a request's headers and only the first 200 headers are forwarded to AWS WAF for inspection by the underlying host service. You must configure how to handle any oversize header content in the `Headers` object. 
AWS WAF applies the pattern matching filters to the headers that it receives from the underlying host service.", "JA3Fingerprint": "Available for use with Amazon CloudFront distributions and Application Load Balancers. Match against the request's JA3 fingerprint. The JA3 fingerprint is a 32-character hash derived from the TLS Client Hello of an incoming request. This fingerprint serves as a unique identifier for the client's TLS configuration. AWS WAF calculates and logs this fingerprint for each request that has enough TLS Client Hello information for the calculation. Almost all web requests include this information.\n\n> You can use this choice only with a string match `ByteMatchStatement` with the `PositionalConstraint` set to `EXACTLY` . \n\nYou can obtain the JA3 fingerprint for client requests from the web ACL logs. If AWS WAF is able to calculate the fingerprint, it includes it in the logs. For information about the logging fields, see [Log fields](https://docs.aws.amazon.com/waf/latest/developerguide/logging-fields.html) in the *AWS WAF Developer Guide* .\n\nProvide the JA3 fingerprint string from the logs in your string match statement specification, to match with any future requests that have the same TLS configuration.", + "JA4Fingerprint": "Available for use with Amazon CloudFront distributions and Application Load Balancers. Match against the request's JA4 fingerprint. The JA4 fingerprint is a 36-character hash derived from the TLS Client Hello of an incoming request. This fingerprint serves as a unique identifier for the client's TLS configuration. AWS WAF calculates and logs this fingerprint for each request that has enough TLS Client Hello information for the calculation. Almost all web requests include this information.\n\n> You can use this choice only with a string match `ByteMatchStatement` with the `PositionalConstraint` set to `EXACTLY` . \n\nYou can obtain the JA4 fingerprint for client requests from the web ACL logs. If AWS WAF is able to calculate the fingerprint, it includes it in the logs. For information about the logging fields, see [Log fields](https://docs.aws.amazon.com/waf/latest/developerguide/logging-fields.html) in the *AWS WAF Developer Guide* .\n\nProvide the JA4 fingerprint string from the logs in your string match statement specification, to match with any future requests that have the same TLS configuration.", "JsonBody": "Inspect the request body as JSON. The request body immediately follows the request headers. This is the part of a request that contains any additional data that you want to send to your web server as the HTTP request body, such as data from a form.\n\nAWS WAF does not support inspecting the entire contents of the web request body if the body exceeds the limit for the resource type. When a web request body is larger than the limit, the underlying host service only forwards the contents that are within the limit to AWS WAF for inspection.\n\n- For Application Load Balancer and AWS AppSync , the limit is fixed at 8 KB (8,192 bytes).\n- For CloudFront, API Gateway, Amazon Cognito, App Runner, and Verified Access, the default limit is 16 KB (16,384 bytes), and you can increase the limit for each resource type in the web ACL `AssociationConfig` , for additional processing fees.\n\nFor information about how to handle oversized request bodies, see the `JsonBody` object configuration.", "Method": "Inspect the HTTP method. 
The method indicates the type of operation that the request is asking the origin to perform.", "QueryString": "Inspect the query string. This is the part of a URL that appears after a `?` character, if any.", @@ -49935,6 +50667,10 @@ "SingleQueryArgument": "Inspect a single query argument. Provide the name of the query argument to inspect, such as *UserName* or *SalesRegion* . The name can be up to 30 characters long and isn't case sensitive.\n\nExample JSON: `\"SingleQueryArgument\": { \"Name\": \"myArgument\" }`", "UriPath": "Inspect the request URI path. This is the part of the web request that identifies a resource, for example, `/images/daily-ad.jpg` ." }, + "AWS::WAFv2::WebACL FieldToProtect": { + "FieldKeys": "Specifies the keys to protect for the specified field type. If you don't specify any key, then all keys for the field type are protected.", + "FieldType": "Specifies the web request component type to protect." + }, "AWS::WAFv2::WebACL ForwardedIPConfiguration": { "FallbackBehavior": "The match status to assign to the web request if the request doesn't have a valid IP address in the specified position.\n\n> If the specified header isn't present in the request, AWS WAF doesn't apply the rule to the web request at all. \n\nYou can specify the following fallback behaviors:\n\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.", "HeaderName": "The name of the HTTP header to use for the IP address. For example, to use the X-Forwarded-For (XFF) header, set this to `X-Forwarded-For` .\n\n> If the specified header isn't present in the request, AWS WAF doesn't apply the rule to the web request at all." @@ -49968,6 +50704,9 @@ "AWS::WAFv2::WebACL JA3Fingerprint": { "FallbackBehavior": "The match status to assign to the web request if the request doesn't have a JA3 fingerprint.\n\nYou can specify the following fallback behaviors:\n\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement." }, + "AWS::WAFv2::WebACL JA4Fingerprint": { + "FallbackBehavior": "The match status to assign to the web request if the request doesn't have a JA4 fingerprint.\n\nYou can specify the following fallback behaviors:\n\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement." + }, "AWS::WAFv2::WebACL JsonBody": { "InvalidFallbackBehavior": "What AWS WAF should do if it fails to completely parse the JSON body. The options are the following:\n\n- `EVALUATE_AS_STRING` - Inspect the body as plain text. AWS WAF applies the text transformations and inspection criteria that you defined for the JSON inspection to the body text string.\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement.\n\nIf you don't provide this setting, AWS WAF parses and evaluates the content only up to the first parsing failure that it encounters.\n\n> AWS WAF parsing doesn't fully validate the input JSON string, so parsing can succeed even for invalid JSON. When parsing succeeds, AWS WAF doesn't apply the fallback behavior. 
For more information, see [JSON body](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-fields-list.html#waf-rule-statement-request-component-json-body) in the *AWS WAF Developer Guide* .", "MatchPattern": "The patterns to look for in the JSON body. AWS WAF inspects the results of these pattern matches against the rule inspection criteria.", @@ -50027,6 +50766,8 @@ "HTTPMethod": "Use the request's HTTP method as an aggregate key. Each distinct HTTP method contributes to the aggregation instance. If you use just the HTTP method as your custom key, then each method fully defines an aggregation instance.", "Header": "Use the value of a header in the request as an aggregate key. Each distinct value in the header contributes to the aggregation instance. If you use a single header as your custom key, then each value fully defines an aggregation instance.", "IP": "Use the request's originating IP address as an aggregate key. Each distinct IP address contributes to the aggregation instance.\n\nWhen you specify an IP or forwarded IP in the custom key settings, you must also specify at least one other key to use. You can aggregate on only the IP address by specifying `IP` in your rate-based statement's `AggregateKeyType` .", + "JA3Fingerprint": "Use the request's JA3 fingerprint as an aggregate key. If you use a single JA3 fingerprint as your custom key, then each value fully defines an aggregation instance.", + "JA4Fingerprint": "Use the request's JA4 fingerprint as an aggregate key. If you use a single JA4 fingerprint as your custom key, then each value fully defines an aggregation instance.", "LabelNamespace": "Use the specified label namespace as an aggregate key. Each distinct fully qualified label name that has the specified label namespace contributes to the aggregation instance. If you use just one label namespace as your custom key, then each label name fully defines an aggregation instance.\n\nThis uses only labels that have been added to the request by rules that are evaluated before this rate-based rule in the web ACL.\n\nFor information about label namespaces and names, see [Label syntax and naming requirements](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-label-requirements.html) in the *AWS WAF Developer Guide* .", "QueryArgument": "Use the specified query argument as an aggregate key. Each distinct value for the named query argument contributes to the aggregation instance. If you use a single query argument as your custom key, then each value fully defines an aggregation instance.", "QueryString": "Use the request's query string as an aggregate key. Each distinct string contributes to the aggregation instance. If you use just the query string as your custom key, then each string fully defines an aggregation instance.", @@ -50040,6 +50781,12 @@ "Name": "The name of the header to use.", "TextTransformations": "Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. Text transformations are used in rule match statements, to transform the `FieldToMatch` request component before inspecting it, and they're used in rate-based rule statements, to transform request components before using them as custom aggregation keys. If you specify one or more transformations to apply, AWS WAF performs all transformations on the specified content, starting from the lowest priority setting, and then uses the transformed component contents." 
}, + "AWS::WAFv2::WebACL RateLimitJA3Fingerprint": { + "FallbackBehavior": "The match status to assign to the web request if there is insufficient TSL Client Hello information to compute the JA3 fingerprint.\n\nYou can specify the following fallback behaviors:\n\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement." + }, + "AWS::WAFv2::WebACL RateLimitJA4Fingerprint": { + "FallbackBehavior": "The match status to assign to the web request if there is insufficient TSL Client Hello information to compute the JA4 fingerprint.\n\nYou can specify the following fallback behaviors:\n\n- `MATCH` - Treat the web request as matching the rule statement. AWS WAF applies the rule action to the request.\n- `NO_MATCH` - Treat the web request as not matching the rule statement." + }, "AWS::WAFv2::WebACL RateLimitLabelNamespace": { "Namespace": "The namespace to use for aggregation." }, @@ -50196,8 +50943,7 @@ "AWS::Wisdom::AIAgent AIAgentConfiguration": { "AnswerRecommendationAIAgentConfiguration": "The configuration for AI Agents of type `ANSWER_RECOMMENDATION` .", "ManualSearchAIAgentConfiguration": "The configuration for AI Agents of type `MANUAL_SEARCH` .", - "SelfServiceAIAgentConfiguration": "The self-service AI agent configuration.", - "SessionSummarizationAIAgentConfiguration": "" + "SelfServiceAIAgentConfiguration": "The self-service AI agent configuration." }, "AWS::Wisdom::AIAgent AnswerRecommendationAIAgentConfiguration": { "AnswerGenerationAIGuardrailId": "The ID of the answer generation AI guardrail.", @@ -50236,10 +50982,6 @@ "SelfServiceAnswerGenerationAIPromptId": "The ID of the self-service answer generation AI prompt.", "SelfServicePreProcessingAIPromptId": "The ID of the self-service preprocessing AI prompt." }, - "AWS::Wisdom::AIAgent SessionSummarizationAIAgentConfiguration": { - "Locale": "", - "SessionSummarizationAIPromptId": "" - }, "AWS::Wisdom::AIAgent TagCondition": { "Key": "The tag key in the tag condition.", "Value": "The tag value in the tag condition." @@ -50640,7 +51382,7 @@ "DesiredSoftwareSetId": "The ID of the software set to apply.", "DesktopArn": "The Amazon Resource Name (ARN) of the desktop to stream from Amazon WorkSpaces, WorkSpaces Secure Browser, or AppStream 2.0.", "DesktopEndpoint": "The URL for the identity provider login (only for environments that use AppStream 2.0).", - "DeviceCreationTags": "The tag keys and optional values for the newly created devices for this environment.", + "DeviceCreationTags": "An array of key-value pairs to apply to the newly created devices for this environment.", "KmsKeyArn": "The Amazon Resource Name (ARN) of the AWS Key Management Service key used to encrypt the environment.", "MaintenanceWindow": "A specification for a time window to apply software updates.", "Name": "The name of the environment.", @@ -50856,6 +51598,9 @@ "Key": "A tag key, such as `Stage` or `Name` . A tag key cannot be empty. The key can be a maximum of 128 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: `+ - = . _ : /`", "Value": "An optional tag value, such as `Production` or `test-only` . The value can be a maximum of 255 characters, and contain only Unicode letters, numbers, or separators, or the following special characters: `+ - = . 
_ : /`" }, + "AWS::XRay::TransactionSearchConfig": { + "IndexingPercentage": "" + }, "Alexa::ASK::Skill": { "AuthenticationConfiguration": "Login with Amazon (LWA) configuration used to authenticate with the Alexa service. Only Login with Amazon clients created through the are supported. The client ID, client secret, and refresh token are required.", "SkillPackage": "Configuration for the skill package that contains the components of the Alexa skill. Skill packages are retrieved from an Amazon S3 bucket and key and used to create and update the skill. For more information about the skill package format, see the .", diff --git a/schema_source/cloudformation.schema.json b/schema_source/cloudformation.schema.json index c332c3ea1..43bdb7d8b 100644 --- a/schema_source/cloudformation.schema.json +++ b/schema_source/cloudformation.schema.json @@ -9028,7 +9028,7 @@ "type": "string" }, "RetrievalRoleArn": { - "markdownDescription": "The ARN of an IAM role with permission to access the configuration at the specified `LocationUri` .\n\n> A retrieval role ARN is not required for configurations stored in the AWS AppConfig hosted configuration store. It is required for all other sources that store your configuration.", + "markdownDescription": "The ARN of an IAM role with permission to access the configuration at the specified `LocationUri` .\n\n> A retrieval role ARN is not required for configurations stored in AWS CodePipeline or the AWS AppConfig hosted configuration store. It is required for all other sources that store your configuration.", "title": "RetrievalRoleArn", "type": "string" }, @@ -26615,7 +26615,7 @@ "type": "string" }, "ScheduleExpression": { - "markdownDescription": "A CRON expression in specified timezone when a restore testing plan is executed.", + "markdownDescription": "A CRON expression in specified timezone when a restore testing plan is executed. When no CRON expression is provided, AWS Backup will use the default expression `cron(0 5 ? * * *)` .", "title": "ScheduleExpression", "type": "string" }, @@ -27014,7 +27014,7 @@ "title": "EksConfiguration" }, "ReplaceComputeEnvironment": { - "markdownDescription": "Specifies whether the compute environment is replaced if an update is made that requires replacing the instances in the compute environment. The default value is `true` . To enable more properties to be updated, set this property to `false` . When changing the value of this property to `false` , do not change any other properties at the same time. If other properties are changed at the same time, and the change needs to be rolled back but it can't, it's possible for the stack to go into the `UPDATE_ROLLBACK_FAILED` state. You can't update a stack that is in the `UPDATE_ROLLBACK_FAILED` state. However, if you can continue to roll it back, you can return the stack to its original settings and then try to update it again. 
For more information, see [Continue rolling back an update](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html) in the *AWS CloudFormation User Guide* .\n\nThe properties that can't be changed without replacing the compute environment are in the [`ComputeResources`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html) property type: [`AllocationStrategy`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-allocationstrategy) , [`BidPercentage`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-bidpercentage) , [`Ec2Configuration`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2configuration) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`ImageId`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-imageid) , [`InstanceRole`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancerole) , [`InstanceTypes`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancetypes) , [`LaunchTemplate`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-launchtemplate) , [`MaxvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-maxvcpus) , [`MinvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-minvcpus) , [`PlacementGroup`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-placementgroup) , [`SecurityGroupIds`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-securitygroupids) , [`Subnets`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-subnets) , [Tags](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-tags) , 
[`Type`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-type) , and [`UpdateToLatestImageVersion`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-updatetolatestimageversion) .", + "markdownDescription": "Specifies whether the compute environment is replaced if an update is made that requires replacing the instances in the compute environment. The default value is `true` . To enable more properties to be updated, set this property to `false` . When changing the value of this property to `false` , do not change any other properties at the same time. If other properties are changed at the same time, and the change needs to be rolled back but it can't, it's possible for the stack to go into the `UPDATE_ROLLBACK_FAILED` state. You can't update a stack that is in the `UPDATE_ROLLBACK_FAILED` state. However, if you can continue to roll it back, you can return the stack to its original settings and then try to update it again. For more information, see [Continue rolling back an update](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html) in the *AWS CloudFormation User Guide* .\n\n`ReplaceComputeEnvironment` is not applicable for Fargate compute environments. Fargate compute environments are always updated without interruption.\n\nThe properties that can't be changed without replacing the compute environment are in the [`ComputeResources`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html) property type: [`AllocationStrategy`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-allocationstrategy) , [`BidPercentage`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-bidpercentage) , [`Ec2Configuration`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2configuration) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`ImageId`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-imageid) , [`InstanceRole`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancerole) , [`InstanceTypes`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancetypes) , 
[`LaunchTemplate`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-launchtemplate) , [`MaxvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-maxvcpus) , [`MinvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-minvcpus) , [`PlacementGroup`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-placementgroup) , [`SecurityGroupIds`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-securitygroupids) , [`Subnets`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-subnets) , [Tags](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-tags) , [`Type`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-type) , and [`UpdateToLatestImageVersion`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-updatetolatestimageversion) .", "title": "ReplaceComputeEnvironment", "type": "boolean" }, @@ -31879,7 +31879,7 @@ "additionalProperties": false, "properties": { "KeyspaceName": { - "markdownDescription": "The name of the keyspace to be created. The keyspace name is case sensitive. If you don't specify a name, AWS CloudFormation generates a unique ID and uses that ID for the keyspace name. For more information, see [Name type](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-name.html) .\n\n*Length constraints:* Minimum length of 3. Maximum length of 255.\n\n*Pattern:* `^[a-zA-Z0-9][a-zA-Z0-9_]{1,47}$`", + "markdownDescription": "The name of the keyspace to be created. The keyspace name is case sensitive. If you don't specify a name, AWS CloudFormation generates a unique ID and uses that ID for the keyspace name. For more information, see [Name type](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-name.html) .\n\n*Length constraints:* Minimum length of 1. Maximum length of 48.", "title": "KeyspaceName", "type": "string" }, @@ -32964,7 +32964,7 @@ "type": "string" }, "QueryLogStatus": { - "markdownDescription": "An indicator as to whether query logging has been enabled or disabled for the collaboration.", + "markdownDescription": "An indicator as to whether query logging has been enabled or disabled for the collaboration.\n\nWhen `ENABLED` , AWS Clean Rooms logs details about queries run within this collaboration and those logs can be viewed in Amazon CloudWatch Logs. 
The default value is `DISABLED` .", "title": "QueryLogStatus", "type": "string" }, @@ -33146,7 +33146,7 @@ "type": "array" }, "AnalysisMethod": { - "markdownDescription": "The analysis method for the configured table. The only valid value is currently `DIRECT_QUERY`.", + "markdownDescription": "The analysis method for the configured table.\n\n`DIRECT_QUERY` allows SQL queries to be run directly on this table.\n\n`DIRECT_JOB` allows PySpark jobs to be run directly on this table.\n\n`MULTIPLE` allows both SQL queries and PySpark jobs to be run directly on this table.", "title": "AnalysisMethod", "type": "string" }, @@ -33659,7 +33659,7 @@ "title": "PaymentConfiguration" }, "QueryLogStatus": { - "markdownDescription": "An indicator as to whether query logging has been enabled or disabled for the membership.", + "markdownDescription": "An indicator as to whether query logging has been enabled or disabled for the membership.\n\nWhen `ENABLED` , AWS Clean Rooms logs details about queries run within this collaboration and those logs can be viewed in Amazon CloudWatch Logs. The default value is `DISABLED` .", "title": "QueryLogStatus", "type": "string" }, @@ -35164,7 +35164,7 @@ }, "Parameters": { "additionalProperties": true, - "markdownDescription": "The set value pairs that represent the parameters passed to CloudFormation when this nested stack is created. Each parameter has a name corresponding to a parameter defined in the embedded template and a value representing the value that you want to set for the parameter.\n\n> If you use the `Ref` function to pass a parameter value to a nested stack, comma-delimited list parameters must be of type `String` . In other words, you can't pass values that are of type `CommaDelimitedList` to nested stacks. \n\nConditional. Required if the nested stack requires input parameters.\n\nWhether an update causes interruptions depends on the resources that are being updated. An update never causes a nested stack to be replaced.", + "markdownDescription": "The set value pairs that represent the parameters passed to CloudFormation when this nested stack is created. Each parameter has a name corresponding to a parameter defined in the embedded template and a value representing the value that you want to set for the parameter.\n\n> If you use the `Ref` function to pass a parameter value to a nested stack, comma-delimited list parameters must be of type `String` . In other words, you can't pass values that are of type `CommaDelimitedList` to nested stacks. \n\nRequired if the nested stack requires input parameters.\n\nWhether an update causes interruptions depends on the resources that are being updated. An update never causes a nested stack to be replaced.", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -35254,17 +35254,17 @@ "additionalProperties": false, "properties": { "AdministrationRoleARN": { - "markdownDescription": "The Amazon Resource Number (ARN) of the IAM role to use to create this stack set. Specify an IAM role only if you are using customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account.\n\nUse customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account. 
For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the *AWS CloudFormation User Guide* .\n\n*Minimum* : `20`\n\n*Maximum* : `2048`", + "markdownDescription": "The Amazon Resource Number (ARN) of the IAM role to use to create this stack set. Specify an IAM role only if you are using customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account.\n\nUse customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account. For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the *AWS CloudFormation User Guide* .\n\nValid only if the permissions model is `SELF_MANAGED` .", "title": "AdministrationRoleARN", "type": "string" }, "AutoDeployment": { "$ref": "#/definitions/AWS::CloudFormation::StackSet.AutoDeployment", - "markdownDescription": "[ `Service-managed` permissions] Describes whether StackSets automatically deploys to AWS Organizations accounts that are added to a target organization or organizational unit (OU).", + "markdownDescription": "Describes whether StackSets automatically deploys to AWS Organizations accounts that are added to a target organization or organizational unit (OU). For more information, see [Manage automatic deployments for CloudFormation StackSets that use service-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-manage-auto-deployment.html) in the *AWS CloudFormation User Guide* .\n\nRequired if the permissions model is `SERVICE_MANAGED` . (Not used with self-managed permissions.)", "title": "AutoDeployment" }, "CallAs": { - "markdownDescription": "[Service-managed permissions] Specifies whether you are acting as an account administrator in the organization's management account or as a delegated administrator in a member account.\n\nBy default, `SELF` is specified. Use `SELF` for stack sets with self-managed permissions.\n\n- To create a stack set with service-managed permissions while signed in to the management account, specify `SELF` .\n- To create a stack set with service-managed permissions while signed in to a delegated administrator account, specify `DELEGATED_ADMIN` .\n\nYour AWS account must be registered as a delegated admin in the management account. For more information, see [Register a delegated administrator](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-delegated-admin.html) in the *AWS CloudFormation User Guide* .\n\nStack sets with service-managed permissions are created in the management account, including stack sets that are created by delegated administrators.\n\n*Valid Values* : `SELF` | `DELEGATED_ADMIN`", + "markdownDescription": "Specifies whether you are acting as an account administrator in the organization's management account or as a delegated administrator in a member account.\n\nBy default, `SELF` is specified. Use `SELF` for stack sets with self-managed permissions.\n\n- To create a stack set with service-managed permissions while signed in to the management account, specify `SELF` .\n- To create a stack set with service-managed permissions while signed in to a delegated administrator account, specify `DELEGATED_ADMIN` .\n\nYour AWS account must be registered as a delegated admin in the management account. 
For more information, see [Register a delegated administrator](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-delegated-admin.html) in the *AWS CloudFormation User Guide* .\n\nStack sets with service-managed permissions are created in the management account, including stack sets that are created by delegated administrators.\n\nValid only if the permissions model is `SERVICE_MANAGED` .", "title": "CallAs", "type": "string" }, @@ -35277,12 +35277,12 @@ "type": "array" }, "Description": { - "markdownDescription": "A description of the stack set.\n\n*Minimum* : `1`\n\n*Maximum* : `1024`", + "markdownDescription": "A description of the stack set.", "title": "Description", "type": "string" }, "ExecutionRoleName": { - "markdownDescription": "The name of the IAM execution role to use to create the stack set. If you don't specify an execution role, CloudFormation uses the `AWSCloudFormationStackSetExecutionRole` role for the stack set operation.\n\n*Minimum* : `1`\n\n*Maximum* : `64`\n\n*Pattern* : `[a-zA-Z_0-9+=,.@-]+`", + "markdownDescription": "The name of the IAM execution role to use to create the stack set. If you don't specify an execution role, CloudFormation uses the `AWSCloudFormationStackSetExecutionRole` role for the stack set operation.\n\nValid only if the permissions model is `SELF_MANAGED` .\n\n*Pattern* : `[a-zA-Z_0-9+=,.@-]+`", "title": "ExecutionRoleName", "type": "string" }, @@ -35318,7 +35318,7 @@ "type": "array" }, "StackSetName": { - "markdownDescription": "The name to associate with the stack set. The name must be unique in the Region where you create your stack set.\n\n> The `StackSetName` property is required.", + "markdownDescription": "The name to associate with the stack set. The name must be unique in the Region where you create your stack set.", "title": "StackSetName", "type": "string" }, @@ -35459,7 +35459,7 @@ "items": { "type": "string" }, - "markdownDescription": "The order of the Regions where you want to perform the stack operation.\n\n> `RegionOrder` isn't followed if `AutoDeployment` is enabled.", + "markdownDescription": "The order of the Regions where you want to perform the stack operation.", "title": "RegionOrder", "type": "array" } @@ -39205,7 +39205,7 @@ "type": "array" }, "Field": { - "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . 
The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.\n\nThe following are valid values for network activity events:\n\n- `cloudtrail.amazonaws.com`\n- `ec2.amazonaws.com`\n- `kms.amazonaws.com`\n- `s3.amazonaws.com`\n- `secretsmanager.amazonaws.com`\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. 
For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor information about filtering data events on the `resources.ARN` field, see [Filtering data events by resources.ARN](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/filtering-data-events.html#filtering-data-events-resourcearn) in the *AWS CloudTrail User Guide* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", + "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . 
The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.\n\nThe following are valid values for network activity events:\n\n- `cloudtrail.amazonaws.com`\n- `ec2.amazonaws.com`\n- `kms.amazonaws.com`\n- `s3.amazonaws.com`\n- `secretsmanager.amazonaws.com`\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. 
For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", "title": "Field", "type": "string" }, @@ -39441,7 +39441,7 @@ "type": "string" }, "SnsTopicName": { - "markdownDescription": "Specifies the name of the Amazon SNS topic defined for notification of log file delivery. The maximum length is 256 characters.", + "markdownDescription": "Specifies the name or ARN of the Amazon SNS topic defined for notification of log file delivery. The maximum length is 256 characters.", "title": "SnsTopicName", "type": "string" }, @@ -39528,7 +39528,7 @@ "type": "array" }, "Field": { - "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. 
For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.\n\nThe following are valid values for network activity events:\n\n- `cloudtrail.amazonaws.com`\n- `ec2.amazonaws.com`\n- `kms.amazonaws.com`\n- `s3.amazonaws.com`\n- `secretsmanager.amazonaws.com`\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. 
You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor information about filtering data events on the `resources.ARN` field, see [Filtering data events by resources.ARN](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/filtering-data-events.html#filtering-data-events-resourcearn) in the *AWS CloudTrail User Guide* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. 
This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", + "markdownDescription": "A field in a CloudTrail event record on which to filter events to be logged. For event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the field is used only for selecting events as filtering is not supported.\n\nFor CloudTrail management events, supported fields include `eventCategory` (required), `eventSource` , and `readOnly` . The following additional fields are available for event data stores: `eventName` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail data events, supported fields include `eventCategory` (required), `resources.type` (required), `eventName` , `readOnly` , and `resources.ARN` . The following additional fields are available for event data stores: `eventSource` , `eventType` , `sessionCredentialFromConsole` , and `userIdentity.arn` .\n\nFor CloudTrail network activity events, supported fields include `eventCategory` (required), `eventSource` (required), `eventName` , `errorCode` , and `vpcEndpointId` .\n\nFor event data stores for CloudTrail Insights events, AWS Config configuration items, Audit Manager evidence, or events outside of AWS , the only supported field is `eventCategory` .\n\n> Selectors don't support the use of wildcards like `*` . To match multiple values with a single condition, you may use `StartsWith` , `EndsWith` , `NotStartsWith` , or `NotEndsWith` to explicitly match the beginning or end of the event field. \n\n- *`readOnly`* - This is an optional field that is only used for management events and data events. This field can be set to `Equals` with a value of `true` or `false` . If you do not add this field, CloudTrail logs both `read` and `write` events. A value of `true` logs only `read` events. A value of `false` logs only `write` events.\n- *`eventSource`* - This field is only used for management events, data events (for event data stores only), and network activity events.\n\nFor management events for trails, this is an optional field that can be set to `NotEquals` `kms.amazonaws.com` to exclude KMS management events, or `NotEquals` `rdsdata.amazonaws.com` to exclude RDS management events.\n\nFor management and data events for event data stores, you can use it to include or exclude any event source and can use any operator.\n\nFor network activity events, this is a required field that only uses the `Equals` operator. Set this field to the event source for which you want to log network activity events. If you want to log network activity events for multiple event sources, you must create a separate field selector for each event source.\n\nThe following are valid values for network activity events:\n\n- `cloudtrail.amazonaws.com`\n- `ec2.amazonaws.com`\n- `kms.amazonaws.com`\n- `s3.amazonaws.com`\n- `secretsmanager.amazonaws.com`\n- *`eventName`* - This is an optional field that is only used for data events, management events (for event data stores only), and network activity events. You can use any operator with `eventName` . You can use it to \ufb01lter in or \ufb01lter out specific events. 
You can have multiple values for this \ufb01eld, separated by commas.\n- *`eventCategory`* - This field is required and must be set to `Equals` .\n\n- For CloudTrail management events, the value must be `Management` .\n- For CloudTrail data events, the value must be `Data` .\n- For CloudTrail network activity events, the value must be `NetworkActivity` .\n\nThe following are used only for event data stores:\n\n- For CloudTrail Insights events, the value must be `Insight` .\n- For AWS Config configuration items, the value must be `ConfigurationItem` .\n- For Audit Manager evidence, the value must be `Evidence` .\n- For events outside of AWS , the value must be `ActivityAuditLog` .\n- *`eventType`* - This is an optional field available only for event data stores, which is used to filter management and data events on the event type. For information about available event types, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html#ct-event-type) in the *AWS CloudTrail user guide* .\n- *`errorCode`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This is the error code to filter on. Currently, the only valid `errorCode` is `VpceAccessDenied` . `errorCode` can only use the `Equals` operator.\n- *`sessionCredentialFromConsole`* - This is an optional field available only for event data stores, which is used to filter management and data events based on whether the events originated from an AWS Management Console session. `sessionCredentialFromConsole` can only use the `Equals` and `NotEquals` operators.\n- *`resources.type`* - This \ufb01eld is required for CloudTrail data events. `resources.type` can only use the `Equals` operator.\n\nFor a list of available resource types for data events, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *AWS CloudTrail User Guide* .\n\nYou can have only one `resources.type` \ufb01eld per selector. To log events on more than one resource type, add another selector.\n- *`resources.ARN`* - The `resources.ARN` is an optional field for data events. You can use any operator with `resources.ARN` , but if you use `Equals` or `NotEquals` , the value must exactly match the ARN of a valid resource of the type you've speci\ufb01ed in the template as the value of resources.type. To log all data events for all objects in a specific S3 bucket, use the `StartsWith` operator, and include only the bucket ARN as the matching value.\n\nFor more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference* .\n\n> You can't use the `resources.ARN` field to filter resource types that do not have ARNs.\n- *`userIdentity.arn`* - This is an optional field available only for event data stores, which is used to filter management and data events on the userIdentity ARN. You can use any operator with `userIdentity.arn` . 
For more information on the userIdentity element, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *AWS CloudTrail User Guide* .\n- *`vpcEndpointId`* - This \ufb01eld is only used to filter CloudTrail network activity events and is optional. This field identifies the VPC endpoint that the request passed through. You can use any operator with `vpcEndpointId` .", "title": "Field", "type": "string" }, @@ -40905,7 +40905,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "A list of tags to be applied to the package group.", + "markdownDescription": "", "title": "Tags", "type": "array" } @@ -40942,7 +40942,7 @@ "properties": { "Restrictions": { "$ref": "#/definitions/AWS::CodeArtifact::PackageGroup.Restrictions", - "markdownDescription": "The origin configuration settings that determine how package versions can enter repositories.", + "markdownDescription": "", "title": "Restrictions" } }, @@ -40958,12 +40958,12 @@ "items": { "type": "string" }, - "markdownDescription": "The repositories to add to the allowed repositories list. The allowed repositories list is used when the `RestrictionMode` is set to `ALLOW_SPECIFIC_REPOSITORIES` .", + "markdownDescription": "", "title": "Repositories", "type": "array" }, "RestrictionMode": { - "markdownDescription": "The package group origin restriction setting. When the value is `INHERIT` , the value is set to the value of the first parent package group which does not have a value of `INHERIT` .", + "markdownDescription": "", "title": "RestrictionMode", "type": "string" } @@ -40978,17 +40978,17 @@ "properties": { "ExternalUpstream": { "$ref": "#/definitions/AWS::CodeArtifact::PackageGroup.RestrictionType", - "markdownDescription": "The package group origin restriction setting for external, upstream repositories.", + "markdownDescription": "", "title": "ExternalUpstream" }, "InternalUpstream": { "$ref": "#/definitions/AWS::CodeArtifact::PackageGroup.RestrictionType", - "markdownDescription": "The package group origin restriction setting for internal, upstream repositories.", + "markdownDescription": "", "title": "InternalUpstream" }, "Publish": { "$ref": "#/definitions/AWS::CodeArtifact::PackageGroup.RestrictionType", - "markdownDescription": "The package group origin restriction setting for publishing packages.", + "markdownDescription": "", "title": "Publish" } }, @@ -41550,7 +41550,7 @@ "title": "RegistryCredential" }, "Type": { - "markdownDescription": "The type of build environment to use for related builds.\n\n- The environment type `ARM_CONTAINER` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), and EU (Frankfurt).\n- The environment type `LINUX_CONTAINER` is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), China (Beijing), and China (Ningxia).\n- The environment type `LINUX_GPU_CONTAINER` is available only in regions US East (N. 
Virginia), US East (Ohio), US West (Oregon), Canada (Central), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney) , China (Beijing), and China (Ningxia).\n\n- The environment types `ARM_LAMBDA_CONTAINER` and `LINUX_LAMBDA_CONTAINER` are available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), and South America (S\u00e3o Paulo).\n\n- The environment types `WINDOWS_CONTAINER` and `WINDOWS_SERVER_2019_CONTAINER` are available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland).\n\n> If you're using compute fleets during project creation, `type` will be ignored. \n\nFor more information, see [Build environment compute types](https://docs.aws.amazon.com//codebuild/latest/userguide/build-env-ref-compute-types.html) in the *AWS CodeBuild user guide* .", + "markdownDescription": "The type of build environment to use for related builds.\n\n> If you're using compute fleets during project creation, `type` will be ignored. \n\nFor more information, see [Build environment compute types](https://docs.aws.amazon.com//codebuild/latest/userguide/build-env-ref-compute-types.html) in the *AWS CodeBuild user guide* .", "title": "Type", "type": "string" } @@ -41934,7 +41934,7 @@ "type": "string" }, "Type": { - "markdownDescription": "The type of webhook filter. There are nine webhook filter types: `EVENT` , `ACTOR_ACCOUNT_ID` , `HEAD_REF` , `BASE_REF` , `FILE_PATH` , `COMMIT_MESSAGE` , `TAG_NAME` , `RELEASE_NAME` , and `WORKFLOW_NAME` .\n\n- EVENT\n\n- A webhook event triggers a build when the provided `pattern` matches one of nine event types: `PUSH` , `PULL_REQUEST_CREATED` , `PULL_REQUEST_UPDATED` , `PULL_REQUEST_CLOSED` , `PULL_REQUEST_REOPENED` , `PULL_REQUEST_MERGED` , `RELEASED` , `PRERELEASED` , and `WORKFLOW_JOB_QUEUED` . The `EVENT` patterns are specified as a comma-separated string. For example, `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` filters all push, pull request created, and pull request updated events.\n\n> Types `PULL_REQUEST_REOPENED` and `WORKFLOW_JOB_QUEUED` work with GitHub and GitHub Enterprise only. Types `RELEASED` and `PRERELEASED` work with GitHub only.\n- ACTOR_ACCOUNT_ID\n\n- A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression `pattern` .\n- HEAD_REF\n\n- A webhook event triggers a build when the head reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` and `refs/tags/tag-name` .\n\n> Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.\n- BASE_REF\n\n- A webhook event triggers a build when the base reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` .\n\n> Works with pull request events only.\n- FILE_PATH\n\n- A webhook triggers a build when the path of a changed file matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. 
Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- COMMIT_MESSAGE\n\n- A webhook triggers a build when the head commit message matches the regular expression `pattern` .\n\n> Works with GitHub and Bitbucket events push and pull requests events. Also works with GitHub Enterprise push events, but does not work with GitHub Enterprise pull request events.\n- TAG_NAME\n\n- A webhook triggers a build when the tag name of the release matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- RELEASE_NAME\n\n- A webhook triggers a build when the release name matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- REPOSITORY_NAME\n\n- A webhook triggers a build when the repository name matches the regular expression pattern.\n\n> Works with GitHub global or organization webhooks only.\n- WORKFLOW_NAME\n\n- A webhook triggers a build when the workflow name matches the regular expression `pattern` .\n\n> Works with `WORKFLOW_JOB_QUEUED` events only. > For CodeBuild-hosted Buildkite runner builds, WORKFLOW_NAME filters will filter by pipeline name.", + "markdownDescription": "The type of webhook filter. There are 11 webhook filter types: `EVENT` , `ACTOR_ACCOUNT_ID` , `HEAD_REF` , `BASE_REF` , `FILE_PATH` , `COMMIT_MESSAGE` , `TAG_NAME` , `RELEASE_NAME` , `REPOSITORY_NAME` , `ORGANIZATION_NAME` , and `WORKFLOW_NAME` .\n\n- EVENT\n\n- A webhook event triggers a build when the provided `pattern` matches one of nine event types: `PUSH` , `PULL_REQUEST_CREATED` , `PULL_REQUEST_UPDATED` , `PULL_REQUEST_CLOSED` , `PULL_REQUEST_REOPENED` , `PULL_REQUEST_MERGED` , `RELEASED` , `PRERELEASED` , and `WORKFLOW_JOB_QUEUED` . The `EVENT` patterns are specified as a comma-separated string. For example, `PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED` filters all push, pull request created, and pull request updated events.\n\n> Types `PULL_REQUEST_REOPENED` and `WORKFLOW_JOB_QUEUED` work with GitHub and GitHub Enterprise only. Types `RELEASED` and `PRERELEASED` work with GitHub only.\n- ACTOR_ACCOUNT_ID\n\n- A webhook event triggers a build when a GitHub, GitHub Enterprise, or Bitbucket account ID matches the regular expression `pattern` .\n- HEAD_REF\n\n- A webhook event triggers a build when the head reference matches the regular expression `pattern` . For example, `refs/heads/branch-name` and `refs/tags/tag-name` .\n\n> Works with GitHub and GitHub Enterprise push, GitHub and GitHub Enterprise pull request, Bitbucket push, and Bitbucket pull request events.\n- BASE_REF\n\n- A webhook event triggers a build when the base reference matches the regular expression `pattern` . 
For example, `refs/heads/branch-name` .\n\n> Works with pull request events only.\n- FILE_PATH\n\n- A webhook triggers a build when the path of a changed file matches the regular expression `pattern` .\n\n> Works with push and pull request events only.\n- COMMIT_MESSAGE\n\n- A webhook triggers a build when the head commit message matches the regular expression `pattern` .\n\n> Works with push and pull request events only.\n- TAG_NAME\n\n- A webhook triggers a build when the tag name of the release matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- RELEASE_NAME\n\n- A webhook triggers a build when the release name matches the regular expression `pattern` .\n\n> Works with `RELEASED` and `PRERELEASED` events only.\n- REPOSITORY_NAME\n\n- A webhook triggers a build when the repository name matches the regular expression `pattern` .\n\n> Works with GitHub global or organization webhooks only.\n- ORGANIZATION_NAME\n\n- A webhook triggers a build when the organization name matches the regular expression `pattern` .\n\n> Works with GitHub global webhooks only.\n- WORKFLOW_NAME\n\n- A webhook triggers a build when the workflow name matches the regular expression `pattern` .\n\n> Works with `WORKFLOW_JOB_QUEUED` events only. > For CodeBuild-hosted Buildkite runner builds, WORKFLOW_NAME filters will filter by pipeline name.", "title": "Type", "type": "string" } @@ -45730,7 +45730,7 @@ }, "DeviceConfiguration": { "$ref": "#/definitions/AWS::Cognito::UserPool.DeviceConfiguration", - "markdownDescription": "The device-remembering configuration for a user pool. Device remembering or device tracking is a \"Remember me on this device\" option for user pools that perform authentication with the device key of a trusted device in the back end, instead of a user-provided MFA code. For more information about device authentication, see [Working with user devices in your user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-device-tracking.html) . A null value indicates that you have deactivated device remembering in your user pool.\n\n> When you provide a value for any `DeviceConfiguration` field, you activate the Amazon Cognito device-remembering feature. For more infor", + "markdownDescription": "The device-remembering configuration for a user pool. Device remembering or device tracking is a \"Remember me on this device\" option for user pools that perform authentication with the device key of a trusted device in the back end, instead of a user-provided MFA code. For more information about device authentication, see [Working with user devices in your user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-device-tracking.html) . A null value indicates that you have deactivated device remembering in your user pool.\n\n> When you provide a value for any `DeviceConfiguration` field, you activate the Amazon Cognito device-remembering feature. For more information, see [Working with devices](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-device-tracking.html) .", "title": "DeviceConfiguration" }, "EmailConfiguration": { @@ -53400,7 +53400,7 @@ "items": { "$ref": "#/definitions/AWS::ControlTower::EnabledBaseline.Parameter" }, - "markdownDescription": "Parameters that are applied when enabling this `Baseline` . 
These parameters configure the behavior of the baseline.", + "markdownDescription": "Shows the parameters that are applied when enabling this `Baseline` .", "title": "Parameters", "type": "array" }, @@ -53408,7 +53408,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "Tags associated with input to `EnableBaseline` .", + "markdownDescription": "", "title": "Tags", "type": "array" }, @@ -53450,12 +53450,12 @@ "additionalProperties": false, "properties": { "Key": { - "markdownDescription": "A string denoting the parameter key.", + "markdownDescription": "", "title": "Key", "type": "string" }, "Value": { - "markdownDescription": "A low-level `Document` object of any type (for example, a Java Object).", + "markdownDescription": "", "title": "Value", "type": "object" } @@ -53514,7 +53514,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "Tags to be applied to the enabled control.", + "markdownDescription": "", "title": "Tags", "type": "array" }, @@ -62435,7 +62435,7 @@ "title": "OnPremConfig" }, "ServerHostname": { - "markdownDescription": "Specifies the Domain Name System (DNS) name or IP version 4 address of the NFS file server that your DataSync agent connects to.", + "markdownDescription": "Specifies the DNS name or IP version 4 address of the NFS file server that your DataSync agent connects to.", "title": "ServerHostname", "type": "string" }, @@ -62571,7 +62571,7 @@ "type": "string" }, "ServerHostname": { - "markdownDescription": "Specifies the domain name or IP address of the object storage server. A DataSync agent uses this hostname to mount the object storage server in a network.", + "markdownDescription": "Specifies the domain name or IP version 4 (IPv4) address of the object storage server that your DataSync agent connects to.", "title": "ServerHostname", "type": "string" }, @@ -62788,7 +62788,7 @@ "type": "string" }, "ServerHostname": { - "markdownDescription": "Specifies the domain name or IP address of the SMB file server that your DataSync agent will mount.\n\nRemember the following when configuring this parameter:\n\n- You can't specify an IP version 6 (IPv6) address.\n- If you're using Kerberos authentication, you must specify a domain name.", + "markdownDescription": "Specifies the domain name or IP address of the SMB file server that your DataSync agent connects to.\n\nRemember the following when configuring this parameter:\n\n- You can't specify an IP version 6 (IPv6) address.\n- If you're using Kerberos authentication, you must specify a domain name.", "title": "ServerHostname", "type": "string" }, @@ -63525,7 +63525,7 @@ "title": "Schedule" }, "Type": { - "markdownDescription": "The type of the data source.", + "markdownDescription": "The type of the data source. In Amazon DataZone, you can use data sources to import technical metadata of assets (data) from the source databases or data warehouses into Amazon DataZone. In the current release of Amazon DataZone, you can create and run data sources for AWS Glue and Amazon Redshift.", "title": "Type", "type": "string" } @@ -65740,7 +65740,7 @@ "additionalProperties": false, "properties": { "AutoEnableMembers": { - "markdownDescription": "Indicates whether to automatically enable new organization accounts as member accounts in the organization behavior graph.\n\nBy default, this property is set to `false` . If you want to change the value of this property, you must be the Detective administrator for the organization. 
For more information on setting a Detective administrator account, see [AWS::Detective::OrganizationAdmin](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-detective-organizationadmin.html)", + "markdownDescription": "Indicates whether to automatically enable new organization accounts as member accounts in the organization behavior graph.\n\nBy default, this property is set to `false` . If you want to change the value of this property, you must be the Detective administrator for the organization. For more information on setting a Detective administrator account, see [AWS::Detective::OrganizationAdmin](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-detective-organizationadmin.html) .", "title": "AutoEnableMembers", "type": "boolean" }, @@ -69786,7 +69786,7 @@ "items": { "type": "string" }, - "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", + "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", "title": "AcceleratorTypes", "type": "array" }, @@ -72887,7 +72887,7 @@ "items": { "type": "string" }, - "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", + "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", "title": "AcceleratorTypes", "type": "array" }, @@ -73082,7 +73082,7 @@ "items": { "$ref": "#/definitions/AWS::EC2::LaunchTemplate.ElasticGpuSpecification" }, - "markdownDescription": "Deprecated.\n\n> Amazon Elastic Graphics reached end of life on January 8, 2024. For workloads that require graphics acceleration, we recommend that you use Amazon EC2 G4ad, G4dn, or G5 instances.", + "markdownDescription": "Deprecated.\n\n> Amazon Elastic Graphics reached end of life on January 8, 2024.", "title": "ElasticGpuSpecifications", "type": "array" }, @@ -73090,7 +73090,7 @@ "items": { "$ref": "#/definitions/AWS::EC2::LaunchTemplate.LaunchTemplateElasticInferenceAccelerator" }, - "markdownDescription": "> Amazon Elastic Inference is no longer available. \n\nAn elastic inference accelerator to associate with the instance. Elastic inference accelerators are a resource you can attach to your Amazon EC2 instances to accelerate your Deep Learning (DL) inference workloads.\n\nYou cannot specify accelerators from different generations in the same request.\n\n> Starting April 15, 2023, AWS will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, Amazon ECS, or Amazon EC2. 
However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.", + "markdownDescription": "> Amazon Elastic Inference is no longer available. \n\nAn elastic inference accelerator to associate with the instance. Elastic inference accelerators are a resource you can attach to your Amazon EC2 instances to accelerate your Deep Learning (DL) inference workloads.\n\nYou cannot specify accelerators from different generations in the same request.", "title": "ElasticInferenceAccelerators", "type": "array" }, @@ -76565,7 +76565,7 @@ "type": "string" }, "GroupName": { - "markdownDescription": "The name of the security group.\n\nConstraints: Up to 255 characters in length. Cannot start with `sg-` .\n\nValid characters: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*", + "markdownDescription": "The name of the security group. Names are case-insensitive and must be unique within the VPC.\n\nConstraints: Up to 255 characters in length. Can't start with `sg-` .\n\nValid characters: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*", "title": "GroupName", "type": "string" }, @@ -77407,7 +77407,7 @@ "items": { "type": "string" }, - "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n- For instance types with Inference accelerators, specify `inference` .\n\nDefault: Any accelerator type", + "markdownDescription": "The accelerator types that must be on the instance type.\n\n- For instance types with FPGA accelerators, specify `fpga` .\n- For instance types with GPU accelerators, specify `gpu` .\n\nDefault: Any accelerator type", "title": "AcceleratorTypes", "type": "array" }, @@ -82815,7 +82815,7 @@ "type": "string" }, "RepositoryPolicy": { - "markdownDescription": "he repository policy to apply to repositories created using the template. A repository policy is a permissions policy associated with a repository to control access permissions.", + "markdownDescription": "The repository policy to apply to repositories created using the template. 
A repository policy is a permissions policy associated with a repository to control access permissions.", "title": "RepositoryPolicy", "type": "string" }, @@ -83616,7 +83616,7 @@ "additionalProperties": false, "properties": { "AssignPublicIp": { - "markdownDescription": "Whether the task's elastic network interface receives a public IP address.\n\nConsider the following when you set this value:\n\n- When you use `create-service` or `update-service` , the default is `DISABLED` .\n- When the service `deploymentController` is `ECS` , the value must be `DISABLED` .\n- When you use `create-service` or `update-service` , the default is `ENABLED` .", + "markdownDescription": "Whether the task's elastic network interface receives a public IP address.\n\nConsider the following when you set this value:\n\n- When you use `create-service` or `update-service` , the default is `DISABLED` .\n- When the service `deploymentController` is `ECS` , the value must be `DISABLED` .", "title": "AssignPublicIp", "type": "string" }, @@ -85425,7 +85425,7 @@ "additionalProperties": false, "properties": { "AssignPublicIp": { - "markdownDescription": "Whether the task's elastic network interface receives a public IP address.\n\nConsider the following when you set this value:\n\n- When you use `create-service` or `update-service` , the default is `DISABLED` .\n- When the service `deploymentController` is `ECS` , the value must be `DISABLED` .\n- When you use `create-service` or `update-service` , the default is `ENABLED` .", + "markdownDescription": "Whether the task's elastic network interface receives a public IP address.\n\nConsider the following when you set this value:\n\n- When you use `create-service` or `update-service` , the default is `DISABLED` .\n- When the service `deploymentController` is `ECS` , the value must be `DISABLED` .", "title": "AssignPublicIp", "type": "string" }, @@ -87719,7 +87719,7 @@ "type": "array" }, "EbsOptimized": { - "markdownDescription": "Indicates whether an Amazon EBS volume is EBS-optimized.", + "markdownDescription": "Indicates whether an Amazon EBS volume is EBS-optimized. The default is false. You should explicitly set this value to true to enable the Amazon EBS-optimized setting for an EC2 instance.", "title": "EbsOptimized", "type": "boolean" } @@ -88538,7 +88538,7 @@ "type": "array" }, "EbsOptimized": { - "markdownDescription": "Indicates whether an Amazon EBS volume is EBS-optimized.", + "markdownDescription": "Indicates whether an Amazon EBS volume is EBS-optimized. The default is false. You should explicitly set this value to true to enable the Amazon EBS-optimized setting for an EC2 instance.", "title": "EbsOptimized", "type": "boolean" } @@ -88949,7 +88949,7 @@ "type": "array" }, "EbsOptimized": { - "markdownDescription": "Indicates whether an Amazon EBS volume is EBS-optimized.", + "markdownDescription": "Indicates whether an Amazon EBS volume is EBS-optimized. The default is false. 
You should explicitly set this value to true to enable the Amazon EBS-optimized setting for an EC2 instance.", "title": "EbsOptimized", "type": "boolean" } @@ -90721,7 +90721,7 @@ "additionalProperties": false, "properties": { "CacheParameterGroupFamily": { - "markdownDescription": "The name of the cache parameter group family that this cache parameter group is compatible with.\n\nValid values are: `memcached1.4` | `memcached1.5` | `memcached1.6` | `redis2.6` | `redis2.8` | `redis3.2` | `redis4.0` | `redis5.0` | `redis6.x` | `redis7`", + "markdownDescription": "The name of the cache parameter group family that this cache parameter group is compatible with.\n\nValid values are: `valkey8` | `valkey7` | `memcached1.4` | `memcached1.5` | `memcached1.6` | `redis2.6` | `redis2.8` | `redis3.2` | `redis4.0` | `redis5.0` | `redis6.x` | `redis7`", "title": "CacheParameterGroupFamily", "type": "string" }, @@ -93188,7 +93188,7 @@ "type": "boolean" }, "Mode": { - "markdownDescription": "The client certificate handling method. The possible values are `off` , `passthrough` , and `verify` . The default value is `off` .", + "markdownDescription": "The client certificate handling method. Options are `off` , `passthrough` or `verify` . The default value is `off` .", "title": "Mode", "type": "string" }, @@ -100813,7 +100813,7 @@ "additionalProperties": false, "properties": { "CopyTagsToSnapshots": { - "markdownDescription": "A Boolean value indicating whether tags for the volume should be copied to snapshots. This value defaults to `false` . If it's set to `true` , all tags for the volume are copied to snapshots where the user doesn't specify tags. If this value is `true` , and you specify one or more tags, only the specified tags are copied to snapshots. If you specify one or more tags when creating the snapshot, no tags are copied from the volume, regardless of this value.", + "markdownDescription": "A Boolean value indicating whether tags for the volume should be copied to snapshots. This value defaults to `false` . If this value is set to `true` , and you do not specify any tags, all tags for the original volume are copied over to snapshots. If this value is\u00a0set to `true` , and you do specify one or more tags, only the specified tags for the original volume are copied over to snapshots. If you specify one or more tags when creating a new snapshot, no tags are copied over from the original volume, regardless of this value.", "title": "CopyTagsToSnapshots", "type": "boolean" }, @@ -102755,18 +102755,18 @@ "type": "string" }, "OperatingSystem": { - "markdownDescription": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use Amazon GameLift server SDK 4.x., first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. 
See [Migrate to Amazon GameLift server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", + "markdownDescription": "The operating system that your game server binaries run on. This value determines the type of fleet resources that you use for this build. If your game build contains multiple executables, they all must run on the same operating system. You must specify a valid operating system in this request. There is no default value. You can't change a build's operating system later.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "title": "OperatingSystem", "type": "string" }, "ServerSdkVersion": { - "markdownDescription": "A server SDK version you used when integrating your game server build with Amazon GameLift. For more information see [Integrate games with custom game servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/integration-custom-intro.html) . By default Amazon GameLift sets this value to `4.0.2` .", + "markdownDescription": "A server SDK version you used when integrating your game server build with Amazon GameLift Servers. For more information see [Integrate games with custom game servers](https://docs.aws.amazon.com/gamelift/latest/developerguide/integration-custom-intro.html) . By default Amazon GameLift Servers sets this value to `4.0.2` .", "title": "ServerSdkVersion", "type": "string" }, "StorageLocation": { "$ref": "#/definitions/AWS::GameLift::Build.StorageLocation", - "markdownDescription": "Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an Amazon S3 bucket that you own. The storage location must specify an Amazon S3 bucket name and key. The location must also specify a role ARN that you set up to allow Amazon GameLift to access your Amazon S3 bucket. The S3 bucket and your new build must be in the same Region.\n\nIf a `StorageLocation` is specified, the size of your file can be found in your Amazon S3 bucket. Amazon GameLift will report a `SizeOnDisk` of 0.", + "markdownDescription": "Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an Amazon S3 bucket that you own. The storage location must specify an Amazon S3 bucket name and key. The location must also specify a role ARN that you set up to allow Amazon GameLift Servers to access your Amazon S3 bucket. The S3 bucket and your new build must be in the same Region.\n\nIf a `StorageLocation` is specified, the size of your file can be found in your Amazon S3 bucket. Amazon GameLift Servers will report a `SizeOnDisk` of 0.", "title": "StorageLocation" }, "Version": { @@ -102875,7 +102875,7 @@ "type": "string" }, "OperatingSystem": { - "markdownDescription": "The platform that all containers in the container group definition run on.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . 
For game servers that are hosted on AL2 and use Amazon GameLift server SDK 4.x, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to Amazon GameLift server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", + "markdownDescription": "The platform that all containers in the container group definition run on.\n\n> Amazon Linux 2 (AL2) will reach end of support on 6/30/2025. See more details in the [Amazon Linux 2 FAQs](https://docs.aws.amazon.com/https://aws.amazon.com/amazon-linux-2/faqs/) . For game servers that are hosted on AL2 and use server SDK version 4.x for Amazon GameLift Servers, first update the game server build to server SDK 5.x, and then deploy to AL2023 instances. See [Migrate to server SDK version 5.](https://docs.aws.amazon.com/gamelift/latest/developerguide/reference-serversdk5-migration.html)", "title": "OperatingSystem", "type": "string" }, @@ -103157,7 +103157,7 @@ "properties": { "AnywhereConfiguration": { "$ref": "#/definitions/AWS::GameLift::Fleet.AnywhereConfiguration", - "markdownDescription": "Amazon GameLift Anywhere configuration options.", + "markdownDescription": "Amazon GameLift Servers Anywhere configuration options.", "title": "AnywhereConfiguration" }, "ApplyCapacity": { @@ -103172,7 +103172,7 @@ }, "CertificateConfiguration": { "$ref": "#/definitions/AWS::GameLift::Fleet.CertificateConfiguration", - "markdownDescription": "Prompts Amazon GameLift to generate a TLS/SSL certificate for the fleet. Amazon GameLift uses the certificates to encrypt traffic between game clients and the game servers running on Amazon GameLift. By default, the `CertificateConfiguration` is `DISABLED` . You can't change this property after you create the fleet.\n\nAWS Certificate Manager (ACM) certificates expire after 13 months. Certificate expiration can cause fleets to fail, preventing players from connecting to instances in the fleet. We recommend you replace fleets before 13 months, consider using fleet aliases for a smooth transition.\n\n> ACM isn't available in all AWS regions. A fleet creation request with certificate generation enabled in an unsupported Region, fails with a 4xx error. For more information about the supported Regions, see [Supported Regions](https://docs.aws.amazon.com/acm/latest/userguide/acm-regions.html) in the *AWS Certificate Manager User Guide* .", + "markdownDescription": "Prompts Amazon GameLift Servers to generate a TLS/SSL certificate for the fleet. Amazon GameLift Servers uses the certificates to encrypt traffic between game clients and the game servers running on Amazon GameLift Servers. By default, the `CertificateConfiguration` is `DISABLED` . You can't change this property after you create the fleet.\n\nAWS Certificate Manager (ACM) certificates expire after 13 months. Certificate expiration can cause fleets to fail, preventing players from connecting to instances in the fleet. We recommend you replace fleets before 13 months, consider using fleet aliases for a smooth transition.\n\n> ACM isn't available in all AWS regions. A fleet creation request with certificate generation enabled in an unsupported Region, fails with a 4xx error. 
For more information about the supported Regions, see [Supported Regions](https://docs.aws.amazon.com/acm/latest/userguide/acm-regions.html) in the *AWS Certificate Manager User Guide* .", "title": "CertificateConfiguration" }, "ComputeType": { @@ -103197,12 +103197,12 @@ "items": { "$ref": "#/definitions/AWS::GameLift::Fleet.IpPermission" }, - "markdownDescription": "The IP address ranges and port settings that allow inbound traffic to access game server processes and other processes on this fleet. Set this parameter for managed EC2 fleets. You can leave this parameter empty when creating the fleet, but you must call [](https://docs.aws.amazon.com/gamelift/latest/apireference/API_UpdateFleetPortSettings) to set it before players can connect to game sessions. As a best practice, we recommend opening ports for remote access only when you need them and closing them when you're finished. For Realtime Servers fleets, Amazon GameLift automatically sets TCP and UDP ranges.", + "markdownDescription": "The IP address ranges and port settings that allow inbound traffic to access game server processes and other processes on this fleet. Set this parameter for managed EC2 fleets. You can leave this parameter empty when creating the fleet, but you must call [](https://docs.aws.amazon.com/gamelift/latest/apireference/API_UpdateFleetPortSettings) to set it before players can connect to game sessions. As a best practice, we recommend opening ports for remote access only when you need them and closing them when you're finished. For Amazon GameLift Servers Realtime fleets, Amazon GameLift Servers automatically sets TCP and UDP ranges.", "title": "EC2InboundPermissions", "type": "array" }, "EC2InstanceType": { - "markdownDescription": "The Amazon GameLift-supported Amazon EC2 instance type to use with managed EC2 fleets. Instance type determines the computing resources that will be used to host your game servers, including CPU, memory, storage, and networking capacity. See [Amazon Elastic Compute Cloud Instance Types](https://docs.aws.amazon.com/ec2/instance-types/) for detailed descriptions of Amazon EC2 instance types.", + "markdownDescription": "The Amazon GameLift Servers-supported Amazon EC2 instance type to use with managed EC2 fleets. Instance type determines the computing resources that will be used to host your game servers, including CPU, memory, storage, and networking capacity. See [Amazon Elastic Compute Cloud Instance Types](https://docs.aws.amazon.com/ec2/instance-types/) for detailed descriptions of Amazon EC2 instance types.", "title": "EC2InstanceType", "type": "string" }, @@ -103225,7 +103225,7 @@ "items": { "$ref": "#/definitions/AWS::GameLift::Fleet.LocationConfiguration" }, - "markdownDescription": "A set of remote locations to deploy additional instances to and manage as a multi-location fleet. Use this parameter when creating a fleet in AWS Regions that support multiple locations. You can add any AWS Region or Local Zone that's supported by Amazon GameLift. Provide a list of one or more AWS Region codes, such as `us-west-2` , or Local Zone names. When using this parameter, Amazon GameLift requires you to include your home location in the request. For a list of supported Regions and Local Zones, see [Amazon GameLift service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", + "markdownDescription": "A set of remote locations to deploy additional instances to and manage as a multi-location fleet. 
Use this parameter when creating a fleet in AWS Regions that support multiple locations. You can add any AWS Region or Local Zone that's supported by Amazon GameLift Servers. Provide a list of one or more AWS Region codes, such as `us-west-2` , or Local Zone names. When using this parameter, Amazon GameLift Servers requires you to include your home location in the request. For a list of supported Regions and Local Zones, see [Amazon GameLift Servers service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", "title": "Locations", "type": "array" }, @@ -103258,12 +103258,12 @@ "type": "string" }, "PeerVpcAwsAccountId": { - "markdownDescription": "Used when peering your Amazon GameLift fleet with a VPC, the unique identifier for the AWS account that owns the VPC. You can find your account ID in the AWS Management Console under account settings.", + "markdownDescription": "Used when peering your Amazon GameLift Servers fleet with a VPC, the unique identifier for the AWS account that owns the VPC. You can find your account ID in the AWS Management Console under account settings.", "title": "PeerVpcAwsAccountId", "type": "string" }, "PeerVpcId": { - "markdownDescription": "A unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same Region as your fleet. To look up a VPC ID, use the [VPC Dashboard](https://docs.aws.amazon.com/vpc/) in the AWS Management Console . Learn more about VPC peering in [VPC Peering with Amazon GameLift Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/vpc-peering.html) .", + "markdownDescription": "A unique identifier for a VPC with resources to be accessed by your Amazon GameLift Servers fleet. The VPC must be in the same Region as your fleet. To look up a VPC ID, use the [VPC Dashboard](https://docs.aws.amazon.com/vpc/) in the AWS Management Console . Learn more about VPC peering in [VPC Peering with Amazon GameLift Servers Fleets](https://docs.aws.amazon.com/gamelift/latest/developerguide/vpc-peering.html) .", "title": "PeerVpcId", "type": "string" }, @@ -103321,7 +103321,7 @@ "additionalProperties": false, "properties": { "Cost": { - "markdownDescription": "The cost to run your fleet per hour. Amazon GameLift uses the provided cost of your fleet to balance usage in queues. For more information about queues, see [Setting up queues](https://docs.aws.amazon.com/gamelift/latest/developerguide/queues-intro.html) in the *Amazon GameLift Developer Guide* .", + "markdownDescription": "The cost to run your fleet per hour. Amazon GameLift Servers uses the provided cost of your fleet to balance usage in queues. For more information about queues, see [Setting up queues](https://docs.aws.amazon.com/gamelift/latest/developerguide/queues-intro.html) in the *Amazon GameLift Servers Developer Guide* .", "title": "Cost", "type": "string" } @@ -103457,7 +103457,7 @@ "additionalProperties": false, "properties": { "Location": { - "markdownDescription": "An AWS Region code, such as `us-west-2` . For a list of supported Regions and Local Zones, see [Amazon GameLift service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", + "markdownDescription": "An AWS Region code, such as `us-west-2` . 
For a list of supported Regions and Local Zones, see [Amazon GameLift Servers service locations](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html) for managed hosting.", "title": "Location", "type": "string" }, @@ -103476,7 +103476,7 @@ "additionalProperties": false, "properties": { "NewGameSessionsPerCreator": { - "markdownDescription": "A policy that puts limits on the number of game sessions that a player can create within a specified span of time. With this policy, you can control players' ability to consume available resources.\n\nThe policy is evaluated when a player tries to create a new game session. On receiving a `CreateGameSession` request, Amazon GameLift checks that the player (identified by `CreatorId` ) has created fewer than game session limit in the specified time period.", + "markdownDescription": "A policy that puts limits on the number of game sessions that a player can create within a specified span of time. With this policy, you can control players' ability to consume available resources.\n\nThe policy is evaluated when a player tries to create a new game session. On receiving a `CreateGameSession` request, Amazon GameLift Servers checks that the player (identified by `CreatorId` ) has created fewer than game session limit in the specified time period.", "title": "NewGameSessionsPerCreator", "type": "number" }, @@ -103531,7 +103531,7 @@ "type": "string" }, "MetricName": { - "markdownDescription": "Name of the Amazon GameLift-defined metric that is used to trigger a scaling adjustment. For detailed descriptions of fleet metrics, see [Monitor Amazon GameLift with Amazon CloudWatch](https://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html) .\n\n- *ActivatingGameSessions* -- Game sessions in the process of being created.\n- *ActiveGameSessions* -- Game sessions that are currently running.\n- *ActiveInstances* -- Fleet instances that are currently running at least one game session.\n- *AvailableGameSessions* -- Additional game sessions that fleet could host simultaneously, given current capacity.\n- *AvailablePlayerSessions* -- Empty player slots in currently active game sessions. This includes game sessions that are not currently accepting players. Reserved player slots are not included.\n- *CurrentPlayerSessions* -- Player slots in active game sessions that are being used by a player or are reserved for a player.\n- *IdleInstances* -- Active instances that are currently hosting zero game sessions.\n- *PercentAvailableGameSessions* -- Unused percentage of the total number of game sessions that a fleet could host simultaneously, given current capacity. Use this metric for a target-based scaling policy.\n- *PercentIdleInstances* -- Percentage of the total number of active instances that are hosting zero game sessions.\n- *QueueDepth* -- Pending game session placement requests, in any queue, where the current fleet is the top-priority destination.\n- *WaitTime* -- Current wait time for pending game session placement requests, in any queue, where the current fleet is the top-priority destination.", + "markdownDescription": "Name of the Amazon GameLift Servers-defined metric that is used to trigger a scaling adjustment. 
For detailed descriptions of fleet metrics, see [Monitor Amazon GameLift Servers with Amazon CloudWatch](https://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html) .\n\n- *ActivatingGameSessions* -- Game sessions in the process of being created.\n- *ActiveGameSessions* -- Game sessions that are currently running.\n- *ActiveInstances* -- Fleet instances that are currently running at least one game session.\n- *AvailableGameSessions* -- Additional game sessions that fleet could host simultaneously, given current capacity.\n- *AvailablePlayerSessions* -- Empty player slots in currently active game sessions. This includes game sessions that are not currently accepting players. Reserved player slots are not included.\n- *CurrentPlayerSessions* -- Player slots in active game sessions that are being used by a player or are reserved for a player.\n- *IdleInstances* -- Active instances that are currently hosting zero game sessions.\n- *PercentAvailableGameSessions* -- Unused percentage of the total number of game sessions that a fleet could host simultaneously, given current capacity. Use this metric for a target-based scaling policy.\n- *PercentIdleInstances* -- Percentage of the total number of active instances that are hosting zero game sessions.\n- *QueueDepth* -- Pending game session placement requests, in any queue, where the current fleet is the top-priority destination.\n- *WaitTime* -- Current wait time for pending game session placement requests, in any queue, where the current fleet is the top-priority destination.", "title": "MetricName", "type": "string" }, @@ -103591,7 +103591,7 @@ "type": "number" }, "LaunchPath": { - "markdownDescription": "The location of a game build executable or Realtime script. Game builds and Realtime scripts are installed on instances at the root:\n\n- Windows (custom game builds only): `C:\\game` . Example: \" `C:\\game\\MyGame\\server.exe` \"\n- Linux: `/local/game` . Examples: \" `/local/game/MyGame/server.exe` \" or \" `/local/game/MyRealtimeScript.js` \"\n\n> Amazon GameLift doesn't support the use of setup scripts that launch the game executable. For custom game builds, this parameter must indicate the executable that calls the server SDK operations `initSDK()` and `ProcessReady()` .", + "markdownDescription": "The location of a game build executable or Realtime script. Game builds and Realtime scripts are installed on instances at the root:\n\n- Windows (custom game builds only): `C:\\game` . Example: \" `C:\\game\\MyGame\\server.exe` \"\n- Linux: `/local/game` . Examples: \" `/local/game/MyGame/server.exe` \" or \" `/local/game/MyRealtimeScript.js` \"\n\n> Amazon GameLift Servers doesn't support the use of setup scripts that launch the game executable. For custom game builds, this parameter must indicate the executable that calls the server SDK operations `initSDK()` and `ProcessReady()` .", "title": "LaunchPath", "type": "string" }, @@ -103662,7 +103662,7 @@ "title": "AutoScalingPolicy" }, "BalancingStrategy": { - "markdownDescription": "Indicates how Amazon GameLift FleetIQ balances the use of Spot Instances and On-Demand Instances in the game server group. Method options include the following:\n\n- `SPOT_ONLY` - Only Spot Instances are used in the game server group. If Spot Instances are unavailable or not viable for game hosting, the game server group provides no hosting capacity until Spot Instances can again be used. 
Until then, no new instances are started, and the existing nonviable Spot Instances are terminated (after current gameplay ends) and are not replaced.\n- `SPOT_PREFERRED` - (default value) Spot Instances are used whenever available in the game server group. If Spot Instances are unavailable, the game server group continues to provide hosting capacity by falling back to On-Demand Instances. Existing nonviable Spot Instances are terminated (after current gameplay ends) and are replaced with new On-Demand Instances.\n- `ON_DEMAND_ONLY` - Only On-Demand Instances are used in the game server group. No Spot Instances are used, even when available, while this balancing strategy is in force.", + "markdownDescription": "Indicates how Amazon GameLift Servers FleetIQ balances the use of Spot Instances and On-Demand Instances in the game server group. Method options include the following:\n\n- `SPOT_ONLY` - Only Spot Instances are used in the game server group. If Spot Instances are unavailable or not viable for game hosting, the game server group provides no hosting capacity until Spot Instances can again be used. Until then, no new instances are started, and the existing nonviable Spot Instances are terminated (after current gameplay ends) and are not replaced.\n- `SPOT_PREFERRED` - (default value) Spot Instances are used whenever available in the game server group. If Spot Instances are unavailable, the game server group continues to provide hosting capacity by falling back to On-Demand Instances. Existing nonviable Spot Instances are terminated (after current gameplay ends) and are replaced with new On-Demand Instances.\n- `ON_DEMAND_ONLY` - Only On-Demand Instances are used in the game server group. No Spot Instances are used, even when available, while this balancing strategy is in force.", "title": "BalancingStrategy", "type": "string" }, @@ -103685,27 +103685,27 @@ "items": { "$ref": "#/definitions/AWS::GameLift::GameServerGroup.InstanceDefinition" }, - "markdownDescription": "The set of Amazon EC2 instance types that Amazon GameLift FleetIQ can use when balancing and automatically scaling instances in the corresponding Auto Scaling group.", + "markdownDescription": "The set of Amazon EC2 instance types that Amazon GameLift Servers FleetIQ can use when balancing and automatically scaling instances in the corresponding Auto Scaling group.", "title": "InstanceDefinitions", "type": "array" }, "LaunchTemplate": { "$ref": "#/definitions/AWS::GameLift::GameServerGroup.LaunchTemplate", - "markdownDescription": "The Amazon EC2 launch template that contains configuration settings and game server code to be deployed to all instances in the game server group. You can specify the template using either the template name or ID. For help with creating a launch template, see [Creating a Launch Template for an Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html) in the *Amazon Elastic Compute Cloud Auto Scaling User Guide* . After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.\n\n> If you specify network interfaces in your launch template, you must explicitly set the property `AssociatePublicIpAddress` to \"true\". 
If no network interface is specified in the launch template, Amazon GameLift FleetIQ uses your account's default VPC.", + "markdownDescription": "The Amazon EC2 launch template that contains configuration settings and game server code to be deployed to all instances in the game server group. You can specify the template using either the template name or ID. For help with creating a launch template, see [Creating a Launch Template for an Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html) in the *Amazon Elastic Compute Cloud Auto Scaling User Guide* . After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.\n\n> If you specify network interfaces in your launch template, you must explicitly set the property `AssociatePublicIpAddress` to \"true\". If no network interface is specified in the launch template, Amazon GameLift Servers FleetIQ uses your account's default VPC.", "title": "LaunchTemplate" }, "MaxSize": { - "markdownDescription": "The maximum number of instances allowed in the Amazon EC2 Auto Scaling group. During automatic scaling events, Amazon GameLift FleetIQ and EC2 do not scale up the group above this maximum. After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", + "markdownDescription": "The maximum number of instances allowed in the Amazon EC2 Auto Scaling group. During automatic scaling events, Amazon GameLift Servers FleetIQ and EC2 do not scale up the group above this maximum. After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", "title": "MaxSize", "type": "number" }, "MinSize": { - "markdownDescription": "The minimum number of instances allowed in the Amazon EC2 Auto Scaling group. During automatic scaling events, Amazon GameLift FleetIQ and Amazon EC2 do not scale down the group below this minimum. In production, this value should be set to at least 1. After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", + "markdownDescription": "The minimum number of instances allowed in the Amazon EC2 Auto Scaling group. During automatic scaling events, Amazon GameLift Servers FleetIQ and Amazon EC2 do not scale down the group below this minimum. In production, this value should be set to at least 1. After the Auto Scaling group is created, update this value directly in the Auto Scaling group using the AWS console or APIs.", "title": "MinSize", "type": "number" }, "RoleArn": { - "markdownDescription": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) for an IAM role that allows Amazon GameLift to access your Amazon EC2 Auto Scaling groups.", + "markdownDescription": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) for an IAM role that allows Amazon GameLift Servers to access your Amazon EC2 Auto Scaling groups.", "title": "RoleArn", "type": "string" }, @@ -103721,7 +103721,7 @@ "items": { "type": "string" }, - "markdownDescription": "A list of virtual private cloud (VPC) subnets to use with instances in the game server group. By default, all Amazon GameLift FleetIQ-supported Availability Zones are used. You can use this parameter to specify VPCs that you've set up. 
This property cannot be updated after the game server group is created, and the corresponding Auto Scaling group will always use the property value that is set with this request, even if the Auto Scaling group is updated directly.", + "markdownDescription": "A list of virtual private cloud (VPC) subnets to use with instances in the game server group. By default, all Amazon GameLift Servers FleetIQ-supported Availability Zones are used. You can use this parameter to specify VPCs that you've set up. This property cannot be updated after the game server group is created, and the corresponding Auto Scaling group will always use the property value that is set with this request, even if the Auto Scaling group is updated directly.", "title": "VpcSubnets", "type": "array" } @@ -103758,7 +103758,7 @@ "additionalProperties": false, "properties": { "EstimatedInstanceWarmup": { - "markdownDescription": "Length of time, in seconds, it takes for a new instance to start new game server processes and register with Amazon GameLift FleetIQ. Specifying a warm-up time can be useful, particularly with game servers that take a long time to start up, because it avoids prematurely starting new instances.", + "markdownDescription": "Length of time, in seconds, it takes for a new instance to start new game server processes and register with Amazon GameLift Servers FleetIQ. Specifying a warm-up time can be useful, particularly with game servers that take a long time to start up, because it avoids prematurely starting new instances.", "title": "EstimatedInstanceWarmup", "type": "number" }, @@ -103782,7 +103782,7 @@ "type": "string" }, "WeightedCapacity": { - "markdownDescription": "Instance weighting that indicates how much this instance type contributes to the total capacity of a game server group. Instance weights are used by Amazon GameLift FleetIQ to calculate the instance type's cost per unit hour and better identify the most cost-effective options. For detailed information on weighting instance capacity, see [Instance Weighting](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-weighting.html) in the *Amazon Elastic Compute Cloud Auto Scaling User Guide* . Default value is \"1\".", + "markdownDescription": "Instance weighting that indicates how much this instance type contributes to the total capacity of a game server group. Instance weights are used by Amazon GameLift Servers FleetIQ to calculate the instance type's cost per unit hour and better identify the most cost-effective options. For detailed information on weighting instance capacity, see [Instance Weighting](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-weighting.html) in the *Amazon Elastic Compute Cloud Auto Scaling User Guide* . Default value is \"1\".", "title": "WeightedCapacity", "type": "string" } @@ -103894,7 +103894,7 @@ "items": { "$ref": "#/definitions/AWS::GameLift::GameSessionQueue.PlayerLatencyPolicy" }, - "markdownDescription": "A set of policies that enforce a sliding cap on player latency when processing game sessions placement requests. Use multiple policies to gradually relax the cap over time if Amazon GameLift can't make a placement. Policies are evaluated in order starting with the lowest maximum latency value.", + "markdownDescription": "A set of policies that enforce a sliding cap on player latency when processing game sessions placement requests. Use multiple policies to gradually relax the cap over time if Amazon GameLift Servers can't make a placement. 
Policies are evaluated in order starting with the lowest maximum latency value.", "title": "PlayerLatencyPolicies", "type": "array" }, @@ -103912,7 +103912,7 @@ "type": "array" }, "TimeoutInSeconds": { - "markdownDescription": "The maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a `TIMED_OUT` status.", + "markdownDescription": "The maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a `TIMED_OUT` status. If you don't specify a request timeout, the queue uses a default value.", "title": "TimeoutInSeconds", "type": "number" } @@ -103991,7 +103991,7 @@ "items": { "type": "string" }, - "markdownDescription": "The prioritization order to use for fleet locations, when the `PriorityOrder` property includes `LOCATION` . Locations can include AWS Region codes (such as `us-west-2` ), local zones, and custom locations (for Anywhere fleets). Each location must be listed only once. For details, see [Amazon GameLift service locations.](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html)", + "markdownDescription": "The prioritization order to use for fleet locations, when the `PriorityOrder` property includes `LOCATION` . Locations can include AWS Region codes (such as `us-west-2` ), local zones, and custom locations (for Anywhere fleets). Each location must be listed only once. For details, see [Amazon GameLift Servers service locations.](https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-regions.html)", "title": "LocationOrder", "type": "array" }, @@ -103999,7 +103999,7 @@ "items": { "type": "string" }, - "markdownDescription": "A custom sequence to use when prioritizing where to place new game sessions. Each priority type is listed once.\n\n- `LATENCY` -- Amazon GameLift prioritizes locations where the average player latency is lowest. Player latency data is provided in each game session placement request.\n- `COST` -- Amazon GameLift prioritizes destinations with the lowest current hosting costs. Cost is evaluated based on the location, instance type, and fleet type (Spot or On-Demand) of each destination in the queue.\n- `DESTINATION` -- Amazon GameLift prioritizes based on the list order of destinations in the queue configuration.\n- `LOCATION` -- Amazon GameLift prioritizes based on the provided order of locations, as defined in `LocationOrder` .", + "markdownDescription": "A custom sequence to use when prioritizing where to place new game sessions. Each priority type is listed once.\n\n- `LATENCY` -- Amazon GameLift Servers prioritizes locations where the average player latency is lowest. Player latency data is provided in each game session placement request.\n- `COST` -- Amazon GameLift Servers prioritizes queue destinations with the lowest current hosting costs. 
Cost is evaluated based on the destination's location, instance type, and fleet type (Spot or On-Demand).\n- `DESTINATION` -- Amazon GameLift Servers prioritizes based on the list order of destinations in the queue configuration.\n- `LOCATION` -- Amazon GameLift Servers prioritizes based on the provided order of locations, as defined in `LocationOrder` .", "title": "PriorityOrder", "type": "array" } @@ -104152,7 +104152,7 @@ "type": "string" }, "FlexMatchMode": { - "markdownDescription": "Indicates whether this matchmaking configuration is being used with Amazon GameLift hosting or as a standalone matchmaking solution.\n\n- *STANDALONE* - FlexMatch forms matches and returns match information, including players and team assignments, in a [MatchmakingSucceeded](https://docs.aws.amazon.com/gamelift/latest/flexmatchguide/match-events.html#match-events-matchmakingsucceeded) event.\n- *WITH_QUEUE* - FlexMatch forms matches and uses the specified Amazon GameLift queue to start a game session for the match.", + "markdownDescription": "Indicates whether this matchmaking configuration is being used with Amazon GameLift Servers hosting or as a standalone matchmaking solution.\n\n- *STANDALONE* - FlexMatch forms matches and returns match information, including players and team assignments, in a [MatchmakingSucceeded](https://docs.aws.amazon.com/gamelift/latest/flexmatchguide/match-events.html#match-events-matchmakingsucceeded) event.\n- *WITH_QUEUE* - FlexMatch forms matches and uses the specified Amazon GameLift Servers queue to start a game session for the match.", "title": "FlexMatchMode", "type": "string" }, @@ -104173,7 +104173,7 @@ "items": { "type": "string" }, - "markdownDescription": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) that is assigned to a Amazon GameLift game session queue resource and uniquely identifies it. ARNs are unique across all Regions. Format is `arn:aws:gamelift:::gamesessionqueue/` . Queues can be located in any Region. Queues are used to start new Amazon GameLift-hosted game sessions for matches that are created with this matchmaking configuration. If `FlexMatchMode` is set to `STANDALONE` , do not set this parameter.", + "markdownDescription": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) that is assigned to a Amazon GameLift Servers game session queue resource and uniquely identifies it. ARNs are unique across all Regions. Format is `arn:aws:gamelift:::gamesessionqueue/` . Queues can be located in any Region. Queues are used to start new Amazon GameLift Servers-hosted game sessions for matches that are created with this matchmaking configuration. If `FlexMatchMode` is set to `STANDALONE` , do not set this parameter.", "title": "GameSessionQueueArns", "type": "array" }, @@ -104383,7 +104383,7 @@ }, "StorageLocation": { "$ref": "#/definitions/AWS::GameLift::Script.S3Location", - "markdownDescription": "The location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the \"key\"), and a role ARN that allows Amazon GameLift to access the Amazon S3 storage location. The S3 bucket must be in the same Region where you want to create a new script. 
By default, Amazon GameLift uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the `ObjectVersion` parameter to specify an earlier version.", + "markdownDescription": "The location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the \"key\"), and a role ARN that allows Amazon GameLift Servers to access the Amazon S3 storage location. The S3 bucket must be in the same Region where you want to create a new script. By default, Amazon GameLift Servers uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the `ObjectVersion` parameter to specify an earlier version.", "title": "StorageLocation" }, "Tags": { @@ -104430,7 +104430,7 @@ "additionalProperties": false, "properties": { "Bucket": { - "markdownDescription": "An Amazon S3 bucket identifier. Thename of the S3 bucket.\n\n> Amazon GameLift doesn't support uploading from Amazon S3 buckets with names that contain a dot (.).", + "markdownDescription": "An Amazon S3 bucket identifier. Thename of the S3 bucket.\n\n> Amazon GameLift Servers doesn't support uploading from Amazon S3 buckets with names that contain a dot (.).", "title": "Bucket", "type": "string" }, @@ -104440,12 +104440,12 @@ "type": "string" }, "ObjectVersion": { - "markdownDescription": "The version of the file, if object versioning is turned on for the bucket. Amazon GameLift uses this information when retrieving files from an S3 bucket that you own. Use this parameter to specify a specific version of the file. If not set, the latest version of the file is retrieved.", + "markdownDescription": "The version of the file, if object versioning is turned on for the bucket. Amazon GameLift Servers uses this information when retrieving files from an S3 bucket that you own. Use this parameter to specify a specific version of the file. If not set, the latest version of the file is retrieved.", "title": "ObjectVersion", "type": "string" }, "RoleArn": { - "markdownDescription": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) for an IAM role that allows Amazon GameLift to access the S3 bucket.", + "markdownDescription": "The Amazon Resource Name ( [ARN](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-arn-format.html) ) for an IAM role that allows Amazon GameLift Servers to access the S3 bucket.", "title": "RoleArn", "type": "string" } @@ -120313,7 +120313,7 @@ }, "DeviceCertificateExpiringCheck": { "$ref": "#/definitions/AWS::IoT::AccountAuditConfiguration.AuditCheckConfiguration", - "markdownDescription": "Checks if a device certificate is expiring. This check applies to device certificates expiring within 30 days or that have expired.", + "markdownDescription": "Checks if a device certificate is expiring. By default, this check applies to device certificates expiring within 30 days or that have expired. You can modify this threshold by configuring the DeviceCertExpirationAuditCheckConfiguration.", "title": "DeviceCertificateExpiringCheck" }, "DeviceCertificateKeyQualityCheck": { @@ -122937,12 +122937,12 @@ "additionalProperties": false, "properties": { "Description": { - "markdownDescription": "", + "markdownDescription": "A summary of the package being created. 
This can be used to outline the package's contents or purpose.", "title": "Description", "type": "string" }, "PackageName": { - "markdownDescription": "", + "markdownDescription": "The name of the new software package.", "title": "PackageName", "type": "string" }, @@ -122950,7 +122950,7 @@ "items": { "$ref": "#/definitions/Tag" }, - "markdownDescription": "", + "markdownDescription": "Metadata that can be used to manage the package.", "title": "Tags", "type": "array" } @@ -123386,7 +123386,7 @@ "additionalProperties": false, "properties": { "DeprecateThingType": { - "markdownDescription": "Deprecates a thing type. You can not associate new things with deprecated thing type. You cannot update `ThingTypeProperties` if the thing type is deprecated.\n\nRequires permission to access the [DeprecateThingType](https://docs.aws.amazon.com//service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions) action.", + "markdownDescription": "Deprecates a thing type. You can not associate new things with deprecated thing type.\n\nRequires permission to access the [DeprecateThingType](https://docs.aws.amazon.com//service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions) action.", "title": "DeprecateThingType", "type": "boolean" }, @@ -123486,7 +123486,7 @@ "additionalProperties": false, "properties": { "RuleName": { - "markdownDescription": "The name of the rule.\n\n*Pattern* : `^[a-zA-Z0-9_]+$`", + "markdownDescription": "The name of the rule.", "title": "RuleName", "type": "string" }, @@ -130141,9 +130141,7 @@ "additionalProperties": false, "properties": { "Greengrass": { - "$ref": "#/definitions/AWS::IoTSiteWise::Gateway.Greengrass", - "markdownDescription": "A gateway that runs on AWS IoT Greengrass .", - "title": "Greengrass" + "$ref": "#/definitions/AWS::IoTSiteWise::Gateway.Greengrass" }, "GreengrassV2": { "$ref": "#/definitions/AWS::IoTSiteWise::Gateway.GreengrassV2", @@ -130152,7 +130150,7 @@ }, "SiemensIE": { "$ref": "#/definitions/AWS::IoTSiteWise::Gateway.SiemensIE", - "markdownDescription": "A AWS IoT SiteWise Edge gateway that runs on a Siemens Industrial Edge Device.", + "markdownDescription": "An AWS IoT SiteWise Edge gateway that runs on a Siemens Industrial Edge Device.", "title": "SiemensIE" } }, @@ -133778,7 +133776,7 @@ "type": "string" }, "ConnectorName": { - "markdownDescription": "The name of the connector.", + "markdownDescription": "The name of the connector.\n\nThe connector name must be unique and can include up to 128 characters. Valid characters you can include in a connector name are: a-z, A-Z, 0-9, and -.", "title": "ConnectorName", "type": "string" }, @@ -140334,7 +140332,7 @@ }, "ProcessingConfiguration": { "$ref": "#/definitions/AWS::KinesisFirehose::DeliveryStream.ProcessingConfiguration", - "markdownDescription": "Specifies configuration for Snowflake.", + "markdownDescription": "", "title": "ProcessingConfiguration" }, "RetryOptions": { @@ -140941,7 +140939,7 @@ "type": "string" }, "Parameters": { - "markdownDescription": "A key-value map that provides an additional configuration on your data lake. `CrossAccountVersion` is the key you can configure in the `Parameters` field. Accepted values for the `CrossAccountVersion` key are 1, 2, and 3.", + "markdownDescription": "A key-value map that provides an additional configuration on your data lake. `CrossAccountVersion` is the key you can configure in the `Parameters` field. 
Accepted values for the `CrossAccountVersion` key are 1, 2, 3, and 4.", "title": "Parameters", "type": "object" }, @@ -142386,12 +142384,12 @@ "properties": { "OnFailure": { "$ref": "#/definitions/AWS::Lambda::EventInvokeConfig.OnFailure", - "markdownDescription": "The destination configuration for failed invocations.", + "markdownDescription": "The destination configuration for failed invocations.\n\n> When using an Amazon SQS queue as a destination, FIFO queues cannot be used.", "title": "OnFailure" }, "OnSuccess": { "$ref": "#/definitions/AWS::Lambda::EventInvokeConfig.OnSuccess", - "markdownDescription": "The destination configuration for successful invocations.", + "markdownDescription": "The destination configuration for successful invocations.\n\n> When using an Amazon SQS queue as a destination, FIFO queues cannot be used.", "title": "OnSuccess" } }, @@ -142471,7 +142469,7 @@ "type": "number" }, "BisectBatchOnFunctionError": { - "markdownDescription": "(Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.", + "markdownDescription": "(Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.\n\n> When using `BisectBatchOnFunctionError` , check the `BatchSize` parameter in the `OnFailure` destination message's metadata. The `BatchSize` could be greater than 1 since Lambda consolidates failed messages metadata when writing to the `OnFailure` destination.", "title": "BisectBatchOnFunctionError", "type": "boolean" }, @@ -148796,7 +148794,7 @@ "items": { "type": "string" }, - "markdownDescription": "A list of allowed actions that an API key resource grants permissions to perform. You must have at least one action for each type of resource. For example, if you have a place resource, you must include at least one place action.\n\nThe following are valid values for the actions.\n\n- *Map actions*\n\n- `geo:GetMap*` - Allows all actions needed for map rendering.\n- *Place actions*\n\n- `geo:SearchPlaceIndexForText` - Allows geocoding.\n- `geo:SearchPlaceIndexForPosition` - Allows reverse geocoding.\n- `geo:SearchPlaceIndexForSuggestions` - Allows generating suggestions from text.\n- `geo:GetPlace` - Allows finding a place by place ID.\n- *Route actions*\n\n- `geo:CalculateRoute` - Allows point to point routing.\n- `geo:CalculateRouteMatrix` - Allows calculating a matrix of routes.\n\n> You must use these strings exactly. For example, to provide access to map rendering, the only valid action is `geo:GetMap*` as an input to the list. `[\"geo:GetMap*\"]` is valid but `[\"geo:GetMapTile\"]` is not. Similarly, you cannot use `[\"geo:SearchPlaceIndexFor*\"]` - you must list each of the Place actions separately.", + "markdownDescription": "A list of allowed actions that an API key resource grants permissions to perform. You must have at least one action for each type of resource. 
For example, if you have a place resource, you must include at least one place action.\n\nThe following are valid values for the actions.\n\n- *Map actions*\n\n- `geo:GetMap*` - Allows all actions needed for map rendering.\n- *Enhanced Maps actions*\n\n- `geo-maps:GetTile` - Allows getting map tiles for rendering.\n- `geo-maps:GetStaticMap` - Allows getting static map images.\n- *Place actions*\n\n- `geo:SearchPlaceIndexForText` - Allows finding geo coordinates of a known place.\n- `geo:SearchPlaceIndexForPosition` - Allows getting nearest address to geo coordinates.\n- `geo:SearchPlaceIndexForSuggestions` - Allows suggestions based on an incomplete or misspelled query.\n- `geo:GetPlace` - Allows getting details of a place.\n- *Enhanced Places actions*\n\n- `geo-places:Autcomplete` - Allows auto-completion of search text.\n- `geo-places:Geocode` - Allows finding geo coordinates of a known place.\n- `geo-places:GetPlace` - Allows getting details of a place.\n- `geo-places:ReverseGeocode` - Allows getting nearest address to geo coordinates.\n- `geo-places:SearchNearby` - Allows category based places search around geo coordinates.\n- `geo-places:SearchText` - Allows place or address search based on free-form text.\n- `geo-places:Suggest` - Allows suggestions based on an incomplete or misspelled query.\n- *Route actions*\n\n- `geo:CalculateRoute` - Allows point to point routing.\n- `geo:CalculateRouteMatrix` - Allows matrix routing.\n- *Enhanced Routes actions*\n\n- `geo-routes:CalculateIsolines` - Allows isoline calculation.\n- `geo-routes:CalculateRoutes` - Allows point to point routing.\n- `geo-routes:CalculateRouteMatrix` - Allows matrix routing.\n- `geo-routes:OptimizeWaypoints` - Allows computing the best sequence of waypoints.\n- `geo-routes:SnapToRoads` - Allows snapping GPS points to a likely route.\n\n> You must use these strings exactly. For example, to provide access to map rendering, the only valid action is `geo:GetMap*` as an input to the list. `[\"geo:GetMap*\"]` is valid but `[\"geo:GetTile\"]` is not. 
Similarly, you cannot use `[\"geo:SearchPlaceIndexFor*\"]` - you must list each of the Place actions separately.", "title": "AllowActions", "type": "array" }, @@ -151791,7 +151789,7 @@ "additionalProperties": false, "properties": { "ClusterArn": { - "markdownDescription": "", + "markdownDescription": "The Amazon Resource Name (ARN) that uniquely identifies the cluster.", "title": "ClusterArn", "type": "string" }, @@ -151799,7 +151797,7 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "List of Amazon Resource Name (ARN)s of Secrets Manager secrets.", "title": "SecretArnList", "type": "array" } @@ -151867,67 +151865,67 @@ "properties": { "BrokerNodeGroupInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.BrokerNodeGroupInfo", - "markdownDescription": "", + "markdownDescription": "Information about the broker nodes in the cluster.", "title": "BrokerNodeGroupInfo" }, "ClientAuthentication": { "$ref": "#/definitions/AWS::MSK::Cluster.ClientAuthentication", - "markdownDescription": "", + "markdownDescription": "Includes all client authentication related information.", "title": "ClientAuthentication" }, "ClusterName": { - "markdownDescription": "", + "markdownDescription": "The name of the cluster.", "title": "ClusterName", "type": "string" }, "ConfigurationInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.ConfigurationInfo", - "markdownDescription": "", + "markdownDescription": "Represents the configuration that you want MSK to use for the cluster.", "title": "ConfigurationInfo" }, "CurrentVersion": { - "markdownDescription": "", + "markdownDescription": "The version of the cluster that you want to update.", "title": "CurrentVersion", "type": "string" }, "EncryptionInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.EncryptionInfo", - "markdownDescription": "", + "markdownDescription": "Includes all encryption-related information.", "title": "EncryptionInfo" }, "EnhancedMonitoring": { - "markdownDescription": "", + "markdownDescription": "Specifies the level of monitoring for the MSK cluster.", "title": "EnhancedMonitoring", "type": "string" }, "KafkaVersion": { - "markdownDescription": "", + "markdownDescription": "The version of Apache Kafka. 
You can use Amazon MSK to create clusters that use [supported Apache Kafka versions](https://docs.aws.amazon.com/msk/latest/developerguide/supported-kafka-versions.html) .", "title": "KafkaVersion", "type": "string" }, "LoggingInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.LoggingInfo", - "markdownDescription": "", + "markdownDescription": "Logging info details for the cluster.", "title": "LoggingInfo" }, "NumberOfBrokerNodes": { - "markdownDescription": "", + "markdownDescription": "The number of broker nodes in the cluster.", "title": "NumberOfBrokerNodes", "type": "number" }, "OpenMonitoring": { "$ref": "#/definitions/AWS::MSK::Cluster.OpenMonitoring", - "markdownDescription": "", + "markdownDescription": "The settings for open monitoring.", "title": "OpenMonitoring" }, "StorageMode": { - "markdownDescription": "", + "markdownDescription": "This controls storage mode for supported storage tiers.", "title": "StorageMode", "type": "string" }, "Tags": { "additionalProperties": true, - "markdownDescription": "", + "markdownDescription": "An arbitrary set of tags (key-value pairs) for the cluster.", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -151991,7 +151989,7 @@ "additionalProperties": false, "properties": { "BrokerAZDistribution": { - "markdownDescription": "", + "markdownDescription": "This parameter is currently not in use.", "title": "BrokerAZDistribution", "type": "string" }, @@ -151999,13 +151997,13 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "The list of subnets to connect to in the client virtual private cloud (VPC). Amazon creates elastic network interfaces (ENIs) inside these subnets. Client applications use ENIs to produce and consume data.\n\nIf you use the US West (N. California) Region, specify exactly two subnets. For other Regions where Amazon MSK is available, you can specify either two or three subnets. The subnets that you specify must be in distinct Availability Zones. When you create a cluster, Amazon MSK distributes the broker nodes evenly across the subnets that you specify.\n\nClient subnets can't occupy the Availability Zone with ID `use1-az3` .", "title": "ClientSubnets", "type": "array" }, "ConnectivityInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.ConnectivityInfo", - "markdownDescription": "", + "markdownDescription": "Information about the cluster's connectivity setting.", "title": "ConnectivityInfo" }, "InstanceType": { @@ -152017,13 +152015,13 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "The security groups to associate with the ENIs in order to specify who can connect to and communicate with the Amazon MSK cluster. If you don't specify a security group, Amazon MSK uses the default security group associated with the VPC. If you specify security groups that were shared with you, you must ensure that you have permissions to them. Specifically, you need the `ec2:DescribeSecurityGroups` permission.", "title": "SecurityGroups", "type": "array" }, "StorageInfo": { "$ref": "#/definitions/AWS::MSK::Cluster.StorageInfo", - "markdownDescription": "", + "markdownDescription": "Contains information about storage volumes attached to Amazon MSK broker nodes.", "title": "StorageInfo" } }, @@ -152143,12 +152141,12 @@ "additionalProperties": false, "properties": { "ClientBroker": { - "markdownDescription": "", + "markdownDescription": "Indicates the encryption setting for data in transit between clients and brokers. 
You must set it to one of the following values.\n\n- `TLS` : Indicates that client-broker communication is enabled with TLS only.\n- `TLS_PLAINTEXT` : Indicates that client-broker communication is enabled for both TLS-encrypted, as well as plaintext data.\n- `PLAINTEXT` : Indicates that client-broker communication is enabled in plaintext only.\n\nThe default value is `TLS` .", "title": "ClientBroker", "type": "string" }, "InCluster": { - "markdownDescription": "", + "markdownDescription": "When set to true, it indicates that data communication among the broker nodes of the cluster is encrypted. When set to false, the communication happens in plaintext.\n\nThe default value is true.", "title": "InCluster", "type": "boolean" } @@ -152165,7 +152163,7 @@ }, "EncryptionInTransit": { "$ref": "#/definitions/AWS::MSK::Cluster.EncryptionInTransit", - "markdownDescription": "", + "markdownDescription": "The details for encryption in transit.", "title": "EncryptionInTransit" } }, @@ -152595,7 +152593,7 @@ "additionalProperties": false, "properties": { "Description": { - "markdownDescription": "", + "markdownDescription": "The description of the configuration.", "title": "Description", "type": "string" }, @@ -152603,22 +152601,22 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "The [versions of Apache Kafka](https://docs.aws.amazon.com/msk/latest/developerguide/supported-kafka-versions.html) with which you can use this MSK configuration.\n\nWhen you update the `KafkaVersionsList` property, AWS CloudFormation recreates a new configuration with the updated property before deleting the old configuration. Such an update requires a [resource replacement](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html#update-replacement) . To successfully update `KafkaVersionsList` , you must also update the `Name` property in the same operation.\n\nIf your configuration is attached with any clusters created using the AWS Management Console or AWS CLI , you'll need to manually delete the old configuration from the console after the update completes.\n\nFor more information, see [Can\u2019t update KafkaVersionsList in MSK configuration](https://docs.aws.amazon.com/msk/latest/developerguide/troubleshooting.html#troubleshoot-kafkaversionslist-cfn-update-failure) in the *Amazon MSK Developer Guide* .", "title": "KafkaVersionsList", "type": "array" }, "LatestRevision": { "$ref": "#/definitions/AWS::MSK::Configuration.LatestRevision", - "markdownDescription": "", + "markdownDescription": "Latest revision of the MSK configuration.", "title": "LatestRevision" }, "Name": { - "markdownDescription": "", + "markdownDescription": "The name of the configuration. Configuration names are strings that match the regex \"^[0-9A-Za-z][0-9A-Za-z-]{0,}$\".", "title": "Name", "type": "string" }, "ServerProperties": { - "markdownDescription": "", + "markdownDescription": "Contents of the `server.properties` file. When using this property, you must ensure that the contents of the file are base64 encoded. 
When using the console, the SDK, or the AWS CLI , the contents of `server.properties` can be in plaintext.", "title": "ServerProperties", "type": "string" } @@ -152654,17 +152652,17 @@ "additionalProperties": false, "properties": { "CreationTime": { - "markdownDescription": "", + "markdownDescription": "The time when the configuration revision was created.", "title": "CreationTime", "type": "string" }, "Description": { - "markdownDescription": "", + "markdownDescription": "The description of the configuration revision.", "title": "Description", "type": "string" }, "Revision": { - "markdownDescription": "", + "markdownDescription": "The revision number.", "title": "Revision", "type": "number" } @@ -152707,8 +152705,6 @@ "additionalProperties": false, "properties": { "CurrentVersion": { - "markdownDescription": "The current version number of the replicator.", - "title": "CurrentVersion", "type": "string" }, "Description": { @@ -153005,17 +153001,17 @@ "properties": { "ClientAuthentication": { "$ref": "#/definitions/AWS::MSK::ServerlessCluster.ClientAuthentication", - "markdownDescription": "", + "markdownDescription": "Includes all client authentication related information.", "title": "ClientAuthentication" }, "ClusterName": { - "markdownDescription": "", + "markdownDescription": "The name of the cluster.", "title": "ClusterName", "type": "string" }, "Tags": { "additionalProperties": true, - "markdownDescription": "", + "markdownDescription": "An arbitrary set of tags (key-value pairs) for the cluster.", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -153028,7 +153024,7 @@ "items": { "$ref": "#/definitions/AWS::MSK::ServerlessCluster.VpcConfig" }, - "markdownDescription": "", + "markdownDescription": "VPC configuration information for the serverless cluster.", "title": "VpcConfigs", "type": "array" } @@ -153172,7 +153168,7 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "The list of subnets in the client VPC to connect to.", "title": "ClientSubnets", "type": "array" }, @@ -153180,13 +153176,13 @@ "items": { "type": "string" }, - "markdownDescription": "", + "markdownDescription": "The security groups to attach to the ENIs for the broker nodes.", "title": "SecurityGroups", "type": "array" }, "Tags": { "additionalProperties": true, - "markdownDescription": "", + "markdownDescription": "An arbitrary set of tags (key-value pairs) you specify while creating the VPC connection.", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -153196,12 +153192,12 @@ "type": "object" }, "TargetClusterArn": { - "markdownDescription": "", + "markdownDescription": "The Amazon Resource Name (ARN) of the cluster.", "title": "TargetClusterArn", "type": "string" }, "VpcId": { - "markdownDescription": "", + "markdownDescription": "The VPC ID of the remote client.", "title": "VpcId", "type": "string" } @@ -166498,7 +166494,7 @@ "type": "string" }, "ProvisionedMemory": { - "markdownDescription": "The provisioned memory-optimized Neptune Capacity Units (m-NCUs) to use for the graph.\n\nMin = 128", + "markdownDescription": "The provisioned memory-optimized Neptune Capacity Units (m-NCUs) to use for the graph.\n\nMin = 16", "title": "ProvisionedMemory", "type": "number" }, @@ -167472,7 +167468,7 @@ "items": { "$ref": "#/definitions/AWS::NetworkFirewall::RuleGroup.PortRange" }, - "markdownDescription": "The destination ports to inspect for. If not specified, this matches with any destination port. 
This setting is only used for protocols 6 (TCP) and 17 (UDP).\n\nYou can specify individual ports, for example `1994` and you can specify port ranges, for example `1990:1994` .", + "markdownDescription": "The destination port to inspect for. You can specify an individual port, for example `1994` and you can specify a port range, for example `1990:1994` . To match with any port, specify `ANY` .\n\nThis setting is only used for protocols 6 (TCP) and 17 (UDP).", "title": "DestinationPorts", "type": "array" }, @@ -167488,7 +167484,7 @@ "items": { "type": "number" }, - "markdownDescription": "The protocols to inspect for, specified using each protocol's assigned internet protocol number (IANA). If not specified, this matches with any protocol.", + "markdownDescription": "The protocols to inspect for, specified using the assigned internet protocol number (IANA) for each protocol. If not specified, this matches with any protocol.", "title": "Protocols", "type": "array" }, @@ -167496,7 +167492,7 @@ "items": { "$ref": "#/definitions/AWS::NetworkFirewall::RuleGroup.PortRange" }, - "markdownDescription": "The source ports to inspect for. If not specified, this matches with any source port. This setting is only used for protocols 6 (TCP) and 17 (UDP).\n\nYou can specify individual ports, for example `1994` and you can specify port ranges, for example `1990:1994` .", + "markdownDescription": "The source port to inspect for. You can specify an individual port, for example `1994` and you can specify a port range, for example `1990:1994` . To match with any port, specify `ANY` .\n\nIf not specified, this matches with any source port.\n\nThis setting is only used for protocols 6 (TCP) and 17 (UDP).", "title": "SourcePorts", "type": "array" }, @@ -168062,7 +168058,7 @@ "items": { "type": "number" }, - "markdownDescription": "The protocols to decrypt for inspection, specified using each protocol's assigned internet protocol number\n(IANA). Network Firewall currently supports only TCP.", + "markdownDescription": "The protocols to inspect for, specified using the assigned internet protocol number (IANA) for each protocol. If not specified, this matches with any protocol.\n\nNetwork Firewall currently supports only TCP.", "title": "Protocols", "type": "array" }, @@ -170739,7 +170735,7 @@ "items": { "type": "string" }, - "markdownDescription": "An array of strings that define which types of data that the source account shares with the monitoring account. Valid values are `AWS::CloudWatch::Metric | AWS::Logs::LogGroup | AWS::XRay::Trace | AWS::ApplicationInsights::Application | AWS::InternetMonitor::Monitor` .", + "markdownDescription": "An array of strings that define which types of data that the source account shares with the monitoring account. 
Valid values are `AWS::CloudWatch::Metric | AWS::Logs::LogGroup | AWS::XRay::Trace | AWS::ApplicationInsights::Application | AWS::InternetMonitor::Monitor | AWS::ApplicationSignals::Service | AWS::ApplicationSignals::ServiceLevelObjective` .", "title": "ResourceTypes", "type": "array" }, @@ -170792,7 +170788,7 @@ "properties": { "LogGroupConfiguration": { "$ref": "#/definitions/AWS::Oam::Link.LinkFilter", - "markdownDescription": "Use this structure to filter which log groups are to share log events from this source account to the monitoring account.", + "markdownDescription": "Use this structure to filter which log groups are to send log events from the source account to the monitoring account.", "title": "LogGroupConfiguration" }, "MetricConfiguration": { @@ -170807,7 +170803,7 @@ "additionalProperties": false, "properties": { "Filter": { - "markdownDescription": "When used in `MetricConfiguration` this field specifies which metric namespaces are to be shared with the monitoring account\n\nWhen used in `LogGroupConfiguration` this field specifies which log groups are to share their log events with the monitoring account. Use the term `LogGroupName` and one or more of the following operands.\n\nUse single quotation marks (') around log group names and metric namespaces.\n\nThe matching of log group names and metric namespaces is case sensitive. Each filter has a limit of five conditional operands. Conditional operands are `AND` and `OR` .\n\n- `=` and `!=`\n- `AND`\n- `OR`\n- `LIKE` and `NOT LIKE` . These can be used only as prefix searches. Include a `%` at the end of the string that you want to search for and include.\n- `IN` and `NOT IN` , using parentheses `( )`\n\nExamples:\n\n- `Namespace NOT LIKE 'AWS/%'` includes only namespaces that don't start with `AWS/` , such as custom namespaces.\n- `Namespace IN ('AWS/EC2', 'AWS/ELB', 'AWS/S3')` includes only the metrics in the EC2, Elastic Load Balancing , and Amazon S3 namespaces.\n- `Namespace = 'AWS/EC2' OR Namespace NOT LIKE 'AWS/%'` includes only the EC2 namespace and your custom namespaces.\n- `LogGroupName IN ('This-Log-Group', 'Other-Log-Group')` includes only the log groups with names `This-Log-Group` and `Other-Log-Group` .\n- `LogGroupName NOT IN ('Private-Log-Group', 'Private-Log-Group-2')` includes all log groups except the log groups with names `Private-Log-Group` and `Private-Log-Group-2` .\n- `LogGroupName LIKE 'aws/lambda/%' OR LogGroupName LIKE 'AWSLogs%'` includes all log groups that have names that start with `aws/lambda/` or `AWSLogs` .\n\n> If you are updating a link that uses filters, you can specify `*` as the only value for the `filter` parameter to delete the filter and share all log groups with the monitoring account.", + "markdownDescription": "", "title": "Filter", "type": "string" } @@ -182167,7 +182163,7 @@ "type": "string" }, "Timestamp": { - "markdownDescription": "The time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.", + "markdownDescription": "A [dynamic path parameter](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html) to a field in the payload containing the time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.\n\nThe value cannot be a static timestamp as the provided timestamp would be applied to all events delivered by the Pipe, regardless of when they are actually delivered.\n\nIf no dynamic path parameter is provided, the default value is the time the invocation is processed by 
the Pipe.", "title": "Timestamp", "type": "string" } @@ -207574,6 +207570,8 @@ "additionalProperties": false, "properties": { "AvailabilityStatus": { + "markdownDescription": "The availaiblity status of a visual's menu options.", + "title": "AvailabilityStatus", "type": "string" } }, @@ -224597,7 +224595,7 @@ "type": "string" }, "PerformanceInsightsRetentionPeriod": { - "markdownDescription": "The number of days to retain Performance Insights data.\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB clusters\n\nValid Values:\n\n- `7`\n- *month* * 31, where *month* is a number of months from 1-23. Examples: `93` (3 months * 31), `341` (11 months * 31), `589` (19 months * 31)\n- `731`\n\nDefault: `7` days\n\nIf you specify a retention period that isn't valid, such as `94` , Amazon RDS issues an error.", + "markdownDescription": "The number of days to retain Performance Insights data. When creating a DB cluster without enabling Performance Insights, you can't specify the parameter `PerformanceInsightsRetentionPeriod` .\n\nValid for Cluster Type: Aurora DB clusters and Multi-AZ DB clusters\n\nValid Values:\n\n- `7`\n- *month* * 31, where *month* is a number of months from 1-23. Examples: `93` (3 months * 31), `341` (11 months * 31), `589` (19 months * 31)\n- `731`\n\nDefault: `7` days\n\nIf you specify a retention period that isn't valid, such as `94` , Amazon RDS issues an error.", "title": "PerformanceInsightsRetentionPeriod", "type": "number" }, @@ -225073,7 +225071,7 @@ "type": "string" }, "DBSubnetGroupName": { - "markdownDescription": "A DB subnet group to associate with the DB instance. If you update this value, the new subnet group must be a subnet group in a new VPC.\n\nIf there's no DB subnet group, then the DB instance isn't a VPC DB instance.\n\nFor more information about using Amazon RDS in a VPC, see [Amazon VPC and Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html) in the *Amazon RDS User Guide* .\n\nThis setting doesn't apply to Amazon Aurora DB instances. The DB subnet group is managed by the DB cluster. If specified, the setting must match the DB cluster setting.", + "markdownDescription": "A DB subnet group to associate with the DB instance. If you update this value, the new subnet group must be a subnet group in a new VPC.\n\nIf you don't specify a DB subnet group, RDS uses the default DB subnet group if one exists. If a default DB subnet group does not exist, and you don't specify a `DBSubnetGroupName` , the DB instance fails to launch.\n\nFor more information about using Amazon RDS in a VPC, see [Amazon VPC and Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html) in the *Amazon RDS User Guide* .\n\nThis setting doesn't apply to Amazon Aurora DB instances. The DB subnet group is managed by the DB cluster. If specified, the setting must match the DB cluster setting.", "title": "DBSubnetGroupName", "type": "string" }, @@ -225234,7 +225232,7 @@ "type": "string" }, "PerformanceInsightsRetentionPeriod": { - "markdownDescription": "The number of days to retain Performance Insights data.\n\nThis setting doesn't apply to RDS Custom DB instances.\n\nValid Values:\n\n- `7`\n- *month* * 31, where *month* is a number of months from 1-23. 
Examples: `93` (3 months * 31), `341` (11 months * 31), `589` (19 months * 31)\n- `731`\n\nDefault: `7` days\n\nIf you specify a retention period that isn't valid, such as `94` , Amazon RDS returns an error.", + "markdownDescription": "The number of days to retain Performance Insights data. When creating a DB instance without enabling Performance Insights, you can't specify the parameter `PerformanceInsightsRetentionPeriod` .\n\nThis setting doesn't apply to RDS Custom DB instances.\n\nValid Values:\n\n- `7`\n- *month* * 31, where *month* is a number of months from 1-23. Examples: `93` (3 months * 31), `341` (11 months * 31), `589` (19 months * 31)\n- `731`\n\nDefault: `7` days\n\nIf you specify a retention period that isn't valid, such as `94` , Amazon RDS returns an error.", "title": "PerformanceInsightsRetentionPeriod", "type": "number" }, @@ -225952,7 +225950,7 @@ "type": "number" }, "InitQuery": { - "markdownDescription": "One or more SQL statements for the proxy to run when opening each new database connection. Typically used with `SET` statements to make sure that each connection has identical settings such as time zone and character set. For multiple statements, use semicolons as the separator. You can also include multiple variables in a single `SET` statement, such as `SET x=1, y=2` .\n\nDefault: no initialization query", + "markdownDescription": "Add an initialization query, or modify the current one. You can specify one or more SQL statements for the proxy to run when opening each new database connection. The setting is typically used with `SET` statements to make sure that each connection has identical settings. Make sure that the query you add is valid. To include multiple variables in a single `SET` statement, use comma separators.\n\nFor example: `SET variable1=value1, variable2=value2`\n\nFor multiple statements, use semicolons as the separator.\n\nDefault: no initialization query", "title": "InitQuery", "type": "string" }, @@ -245661,7 +245659,7 @@ "items": { "$ref": "#/definitions/AWS::SageMaker::Domain.CustomImage" }, - "markdownDescription": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.", + "markdownDescription": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.\n\nThe maximum number of custom images are as follows.\n\n- On a domain level: 200\n- On a space level: 5\n- On a user profile level: 5", "title": "CustomImages", "type": "array" }, @@ -245915,7 +245913,7 @@ "type": "string" }, "EndpointName": { - "markdownDescription": "The name of the endpoint.The name must be unique within an AWS Region in your AWS account. The name is case-insensitive in `CreateEndpoint` , but the case is preserved and must be matched in [](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_runtime_InvokeEndpoint.html) .", + "markdownDescription": "The name of the endpoint. The name must be unique within an AWS Region in your AWS account. 
The name is case-insensitive in `CreateEndpoint` , but the case is preserved and must be matched in [](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_runtime_InvokeEndpoint.html) .", "title": "EndpointName", "type": "string" }, @@ -252963,7 +252961,7 @@ "items": { "$ref": "#/definitions/AWS::SageMaker::Space.CustomImage" }, - "markdownDescription": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.", + "markdownDescription": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.\n\nThe maximum number of custom images are as follows.\n\n- On a domain level: 200\n- On a space level: 5\n- On a user profile level: 5", "title": "CustomImages", "type": "array" }, @@ -253405,7 +253403,7 @@ "items": { "$ref": "#/definitions/AWS::SageMaker::UserProfile.CustomImage" }, - "markdownDescription": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.", + "markdownDescription": "A list of custom SageMaker AI images that are configured to run as a KernelGateway app.\n\nThe maximum number of custom images are as follows.\n\n- On a domain level: 200\n- On a space level: 5\n- On a user profile level: 5", "title": "CustomImages", "type": "array" }, @@ -262455,7 +262453,7 @@ }, "MagneticStoreWriteProperties": { "$ref": "#/definitions/AWS::Timestream::Table.MagneticStoreWriteProperties", - "markdownDescription": "Contains properties to set on the table when enabling magnetic store writes.\n\nThis object has the following attributes:\n\n- *EnableMagneticStoreWrites* : A `boolean` flag to enable magnetic store writes.\n- *MagneticStoreRejectedDataLocation* : The location to write error reports for records rejected, asynchronously, during magnetic store writes. Only `S3Configuration` objects are allowed. The `S3Configuration` object has the following attributes:\n\n- *BucketName* : The name of the S3 bucket.\n- *EncryptionOption* : The encryption option for the S3 location. Valid values are S3 server-side encryption with an S3 managed key ( `SSE_S3` ) or AWS managed key ( `SSE_KMS` ).\n- *KmsKeyId* : The AWS KMS key ID to use when encrypting with an AWS managed key.\n- *ObjectKeyPrefix* : The prefix to use option for the objects stored in S3.\n\nBoth `BucketName` and `EncryptionOption` are *required* when `S3Configuration` is specified. If you specify `SSE_KMS` as your `EncryptionOption` then `KmsKeyId` is *required* .\n\n`EnableMagneticStoreWrites` attribute is *required* when `MagneticStoreWriteProperties` is specified. 
`MagneticStoreRejectedDataLocation` attribute is *required* when `EnableMagneticStoreWrites` is set to `true` .\n\nSee the following examples:\n\n*JSON*\n\n```json\n{ \"Type\" : AWS::Timestream::Table\", \"Properties\":{ \"DatabaseName\":\"TestDatabase\", \"TableName\":\"TestTable\", \"MagneticStoreWriteProperties\":{ \"EnableMagneticStoreWrites\":true, \"MagneticStoreRejectedDataLocation\":{ \"S3Configuration\":{ \"BucketName\":\"testbucket\", \"EncryptionOption\":\"SSE_KMS\", \"KmsKeyId\":\"1234abcd-12ab-34cd-56ef-1234567890ab\", \"ObjectKeyPrefix\":\"prefix\" } } } }\n}\n```\n\n*YAML*\n\n```\nType: AWS::Timestream::Table\nDependsOn: TestDatabase\nProperties: TableName: \"TestTable\" DatabaseName: \"TestDatabase\" MagneticStoreWriteProperties: EnableMagneticStoreWrites: true MagneticStoreRejectedDataLocation: S3Configuration: BucketName: \"testbucket\" EncryptionOption: \"SSE_KMS\" KmsKeyId: \"1234abcd-12ab-34cd-56ef-1234567890ab\" ObjectKeyPrefix: \"prefix\"\n```", + "markdownDescription": "Contains properties to set on the table when enabling magnetic store writes.\n\nThis object has the following attributes:\n\n- *EnableMagneticStoreWrites* : A `boolean` flag to enable magnetic store writes.\n- *MagneticStoreRejectedDataLocation* : The location to write error reports for records rejected, asynchronously, during magnetic store writes. Only `S3Configuration` objects are allowed. The `S3Configuration` object has the following attributes:\n\n- *BucketName* : The name of the S3 bucket.\n- *EncryptionOption* : The encryption option for the S3 location. Valid values are S3 server-side encryption with an S3 managed key ( `SSE_S3` ) or AWS managed key ( `SSE_KMS` ).\n- *KmsKeyId* : The AWS KMS key ID to use when encrypting with an AWS managed key.\n- *ObjectKeyPrefix* : The prefix to use option for the objects stored in S3.\n\nBoth `BucketName` and `EncryptionOption` are *required* when `S3Configuration` is specified. If you specify `SSE_KMS` as your `EncryptionOption` then `KmsKeyId` is *required* .\n\n`EnableMagneticStoreWrites` attribute is *required* when `MagneticStoreWriteProperties` is specified. `MagneticStoreRejectedDataLocation` attribute is *required* when `EnableMagneticStoreWrites` is set to `true` .\n\nSee the following examples:\n\n*JSON*\n\n```json\n{ \"Type\" : AWS::Timestream::Table\", \"Properties\":{ \"DatabaseName\":\"TestDatabase\", \"TableName\":\"TestTable\", \"MagneticStoreWriteProperties\":{ \"EnableMagneticStoreWrites\":true, \"MagneticStoreRejectedDataLocation\":{ \"S3Configuration\":{ \"BucketName\":\" amzn-s3-demo-bucket \", \"EncryptionOption\":\"SSE_KMS\", \"KmsKeyId\":\"1234abcd-12ab-34cd-56ef-1234567890ab\", \"ObjectKeyPrefix\":\"prefix\" } } } }\n}\n```\n\n*YAML*\n\n```\nType: AWS::Timestream::Table\nDependsOn: TestDatabase\nProperties: TableName: \"TestTable\" DatabaseName: \"TestDatabase\" MagneticStoreWriteProperties: EnableMagneticStoreWrites: true MagneticStoreRejectedDataLocation: S3Configuration: BucketName: \" amzn-s3-demo-bucket \" EncryptionOption: \"SSE_KMS\" KmsKeyId: \"1234abcd-12ab-34cd-56ef-1234567890ab\" ObjectKeyPrefix: \"prefix\"\n```", "title": "MagneticStoreWriteProperties" }, "RetentionProperties": {