diff --git a/docs/observability/aws/deploy-use-aws-observability/deploy-with-aws-cloudformation/index.md b/docs/observability/aws/deploy-use-aws-observability/deploy-with-aws-cloudformation/index.md
index 190a413729..1e2164b9cd 100644
--- a/docs/observability/aws/deploy-use-aws-observability/deploy-with-aws-cloudformation/index.md
+++ b/docs/observability/aws/deploy-use-aws-observability/deploy-with-aws-cloudformation/index.md
@@ -89,10 +89,11 @@ You should only install the AWS Observability apps and alerts the first time you
The table below describes the expected response for each text box in this section.
| Prompt | Guideline |
-|:--|:--|
+| :-- | :-- |
| Select the kind of CloudWatch Metrics Source to create | **Note:** Switching from one type of Metrics Source to another can result in re-computation of your Root Cause Explorer anomaly detection models. This re-computation can take a couple of days to finish, during which you will not receive new Events of Interest (EOIs).
- **CloudWatch Metrics Source** - Creates Sumo Logic AWS CloudWatch Metrics Sources.
- **Kinesis Firehose Metrics Source (Recommended)** - Creates a Sumo Logic AWS Kinesis Firehose for Metrics Source.
**Note:** This new source has cost and performance benefits over the CloudWatch Metrics Source and is therefore recommended. - **None** - Skips the installation of both Sumo Logic Sources
|
-| Sumo Logic AWS Metrics Namespaces | Enter a comma-delimited list of the namespaces which will be used for both AWS CloudWatch Metrics and Inventory Sources.
The default will be AWS/ApplicationELB, AWS/ApiGateway, AWS/DynamoDB, AWS/Lambda, AWS/RDS, AWS/ECS, AWS/ElastiCache, AWS/ELB, AWS/NetworkELB, AWS/SQS, AWS/SNS, and AWS/EC2.
AWS/AutoScaling will be appended to Namespaces for Inventory Sources.
Supported namespaces are based on the type of CloudWatch Metrics Source you have selected above. See the relevant docs for the [Kinesis Firehose Metrics Source](/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source.md) and the [CloudWatch Metrics Source](/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics.md) for details on which namespaces they support. |
-| Existing Sumo Logic Metrics Source API URL | You must supply this URL if you are already collecting CloudWatch Metrics. Provide the existing Sumo Logic Metrics Source API URL. The account field will be added to the Source. For information on how to determine the URL, see [View or Download Source JSON Configuration](/docs/send-data/use-json-configure-sources/local-configuration-file-management/view-download-source-json-configuration.md). |
+| Sumo Logic AWS Metrics Namespaces | Enter a comma-delimited list of namespaces to use for both the AWS CloudWatch Metrics and Inventory Sources.
The default is AWS/ApplicationELB, AWS/ApiGateway, AWS/DynamoDB, AWS/Lambda, AWS/RDS, AWS/ECS, AWS/ElastiCache, AWS/ELB, AWS/NetworkELB, AWS/SQS, AWS/SNS, and AWS/EC2. You can provide both AWS and custom namespaces.
AWS/AutoScaling will be appended to Namespaces for Inventory Sources.
Supported namespaces are based on the type of CloudWatch Metrics Source you have selected above. See the relevant docs for the [Kinesis Firehose Metrics Source](/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source.md) and the [CloudWatch Metrics Source](/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics.md) for details on which namespaces they support. |
+| Existing Sumo Logic Metrics Source API URL | You must supply this URL if you are already collecting CloudWatch Metrics. Provide the existing Sumo Logic Metrics Source API URL. The account field will be added to the Source. For information on how to determine the URL, see [View or Download Source JSON Configuration](/docs/send-data/use-json-configure-sources/local-configuration-file-management/view-download-source-json-configuration.md).|
+| Sumo Logic AWS Metrics Tag Filters | Provide the namespaces and their tag values in JSON format to add filters to your metrics. Use semicolons to separate multiple values for the same tag key. AWS Tag Filters will be added to the Source. JSON format example: ```json {"AWS/ELB":{"tags":["env=prod;dev"]},"AWS/EC2":{"tags":["env=dev","creator=john"]},"AWS/RDS":{"tags":["env=prod;dev","creator=himan"]},"All":{"tags":["env=dev"]}}```
Filters are not supported for custom metrics. |
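To make the semicolon convention concrete, the following is a minimal Python sketch (illustration only, not part of the CloudFormation template) that expands a tag-filter JSON like the one above into individual `key=value` pairs per namespace:

```python
import json

# Example tag-filter JSON in the format described above. Semicolons separate
# multiple values for the same tag key: "env=prod;dev" means env=prod OR env=dev.
raw = '{"AWS/ELB":{"tags":["env=prod;dev"]},"AWS/EC2":{"tags":["env=dev","creator=john"]}}'

def expand_tag_filters(raw_json):
    """Expand each "key=v1;v2" entry into individual key=value pairs per namespace."""
    expanded = {}
    for namespace, spec in json.loads(raw_json).items():
        pairs = []
        for tag in spec.get("tags", []):
            key, _, values = tag.partition("=")
            pairs.extend(f"{key}={v}" for v in values.split(";"))
        expanded[namespace] = pairs
    return expanded

print(expand_tag_filters(raw))
# {'AWS/ELB': ['env=prod', 'env=dev'], 'AWS/EC2': ['env=dev', 'creator=john']}
```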
## Step 6: Sumo Logic AWS ALB Log Source
@@ -129,6 +130,7 @@ The below tables displays the response for each text box in this section.
| Existing Sumo Logic Lambda CloudWatch Logs Source API URL | Required if you already collect AWS Lambda CloudWatch logs. Provide the existing Sumo Logic AWS Lambda CloudWatch Source API URL. The account, region and namespace fields will be added to the Source. For information on how to determine the URL, see [View or Download Source JSON Configuration](/docs/send-data/use-json-configure-sources/local-configuration-file-management/view-download-source-json-configuration.md). |
| Subscribe log groups to destination (lambda or kinesis firehose delivery stream) | - **New** - Automatically subscribes new AWS Lambda log groups to Lambda, to send logs to Sumo Logic.
- **Existing** - Automatically subscribes existing log groups to Lambda, to send logs to Sumo Logic.
- **Both** - Automatically subscribes new and existing log groups.
- **None** - Skips automatic subscription of log groups.
|
| Regex for AWS Log Groups | Default Value: **aws/(lambda\|apigateway\|rds)**
With the default value, log groups whose names match lambda, apigateway, or rds are subscribed and their CloudWatch logs are ingested into Sumo Logic.
Enter a regex for matching log group names. For more information, see [Configuring parameters](/docs/send-data/collect-from-other-data-sources/autosubscribe-arn-destination/#configuringparameters) in the *Auto-Subscribe ARN (Amazon Resource Name) Destination* topic.
+| Tags for filtering CloudWatch Log Groups | Enter comma-separated key-value pairs for filtering logGroups using tags. For example, KeyName1=string,KeyName2=string. This parameter is optional; leave it blank if tag-based filtering is not needed. For more information, see [Configuring parameters](/docs/send-data/collect-from-other-data-sources/autosubscribe-arn-destination/#configuringparameters). |
:::note
* Don't use forward slashes (`/`) to encapsulate the regex. While they're normally needed for raw regex, they're not necessary here.
diff --git a/docs/observability/aws/deploy-use-aws-observability/deploy-with-terraform.md b/docs/observability/aws/deploy-use-aws-observability/deploy-with-terraform.md
index 07f40136bd..afc9704f56 100644
--- a/docs/observability/aws/deploy-use-aws-observability/deploy-with-terraform.md
+++ b/docs/observability/aws/deploy-use-aws-observability/deploy-with-terraform.md
@@ -64,7 +64,7 @@ System Files:
Before you run the Terraform script, perform the following actions on a server machine of your choice:
-1. Install [Terraform](https://www.terraform.io/) version [0.13.0](https://releases.hashicorp.com/terraform/) or later. To check the installed Terraform version, run the following command:
+1. Install [Terraform](https://www.terraform.io/) version [1.6.0](https://releases.hashicorp.com/terraform/) or later. To check the installed Terraform version, run the following command:
```bash
$ terraform --version
```
@@ -647,7 +647,7 @@ The following table provides a list of all source parameters and their default v
### Configure collection of CloudWatch metrics
:::note
-To migrate CloudWatch Metrics Source to Kinesis Firehose Metrics Source using Terraform, refer to [Migration Strategy using Terraform](/docs/observability/aws/deploy-use-aws-observability/migration-strategy-using-terraform).
+To migrate from legacy CloudWatch Metrics Source to Kinesis Firehose Metrics Source using Terraform, refer to [Migration Strategy using Terraform](/docs/observability/aws/deploy-use-aws-observability/migration-strategy-using-terraform).
:::
#### collect_cloudwatch_metrics
@@ -676,7 +676,7 @@ collect_cloudwatch_metrics = "Kinesis Firehose Metrics Source"
Provide details for the Sumo Logic CloudWatch Metrics source. If not provided, then defaults will be used.
-* `limit_to_namespaces`. Enter a comma-delimited list of the namespaces which will be used for both AWS CloudWatch Metrics Source.
+* `limit_to_namespaces`. Enter a comma-delimited list of the namespaces to use for the AWS CloudWatch Metrics Source. You can provide both AWS and custom namespaces.
Supported namespaces are based on the type of CloudWatch Metrics Source you have selected above. See the relevant docs for the [Kinesis Firehose Metrics Source](/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source) and the [CloudWatch Metrics Source](/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics) for details on which namespaces they support.
@@ -703,7 +703,8 @@ Supported namespaces are based on the type of CloudWatch Metrics Source you have
"AWS/NetworkELB",
"AWS/SQS",
"AWS/SNS"
- ],
+ ],
+ "tag_filters": [],
"source_category": "aws/observability/cloudwatch/metrics",
"source_name": "CloudWatch Metrics (Region)"
}
@@ -713,8 +714,8 @@ Supported namespaces are based on the type of CloudWatch Metrics Source you have
The following override example collects only DynamoDB and Lambda namespaces with source_category set to `"aws/observability/cloudwatch/metrics/us-east-1"`:
-```json
-Cloudwatch_metrics_source_details = {
+```json title="cloudwatch_metrics_source_details"
+cloudwatch_metrics_source_details = {
"bucket_details": {
"bucket_name": "",
"create_bucket": true,
@@ -724,13 +725,27 @@ Cloudwatch_metrics_source_details = {
"fields": {},
"limit_to_namespaces": [
"AWS/DynamoDB",
- "AWS/Lambda"
- ],
+ "AWS/Lambda",
+ "CWAgent"
+ ],
+ "tag_filters": [{
+ "type":"TagFilters",
+ "namespace" : "AWS/DynamoDB",
+ "tags": ["env=prod;dev"]
+ },{
+ "type": "TagFilters",
+ "namespace": "AWS/Lambda",
+ "tags": ["env=prod"]
+ }],
"source_category": "aws/observability/cloudwatch/metrics/us-east-1",
"source_name": "CloudWatch Metrics us-east-1"
}
```
+:::note
+All namespaces specified in `tag_filters` must be included in `limit_to_namespaces`. Filters are not supported for custom metrics.
+:::
+
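As a pre-flight sanity check, the constraint above can be verified with a short script. This is a hypothetical helper for illustration, not part of the Terraform module:

```python
# Hypothetical pre-flight check: every namespace referenced in tag_filters
# must also appear in limit_to_namespaces (see the note above). The "All"
# exemption is an assumption, mirroring the CloudFormation JSON format.
def validate_tag_filters(source_details):
    namespaces = set(source_details["limit_to_namespaces"])
    missing = [f["namespace"] for f in source_details.get("tag_filters", [])
               if f["namespace"] not in namespaces and f["namespace"] != "All"]
    if missing:
        raise ValueError(f"tag_filters namespaces not in limit_to_namespaces: {missing}")
    return True

details = {
    "limit_to_namespaces": ["AWS/DynamoDB", "AWS/Lambda", "CWAgent"],
    "tag_filters": [
        {"type": "TagFilters", "namespace": "AWS/DynamoDB", "tags": ["env=prod;dev"]},
        {"type": "TagFilters", "namespace": "AWS/Lambda", "tags": ["env=prod"]},
    ],
}
validate_tag_filters(details)  # passes; raises ValueError if a namespace is missing
```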
#### cloudwatch_metrics_source_url
Use this parameter if you are already collecting CloudWatch Metrics and want to use an existing Sumo Logic Collector Source. You need to provide the URL of the existing Sumo Logic CloudWatch Metrics Source. If the URL is for an AWS CloudWatch Metrics source, the "account" and "accountid" metadata fields will be added to the Source. If the URL is for the Kinesis Firehose for Metrics source, the "account" field will be added to the Source. For information on how to determine the URL, see [View or Download Source JSON Configuration](/docs/send-data/use-json-configure-sources/local-configuration-file-management/view-download-source-json-configuration).
@@ -1243,23 +1258,26 @@ auto_enable_logs_subscription="New"
### auto_enable_logs_subscription_options
-`filter`. Enter regex for matching logGroups for AWS Lambda only. The regex will check the name. See [Configuring Parameters](/docs/send-data/collect-from-other-data-sources/autosubscribe-arn-destination).
+* `filter`. Enter regex for matching logGroups for AWS Lambda only. The regex will check the name. See [Configuring Parameters](/docs/send-data/collect-from-other-data-sources/autosubscribe-arn-destination/#configuringparameters).
+* `tags_filter`. Enter comma-separated key-value pairs for filtering logGroups using tags, for example, KeyName1=string,KeyName2=string. This parameter is optional; leave it blank if tag-based filtering is not needed. See [Configuring Parameters](/docs/send-data/collect-from-other-data-sources/autosubscribe-arn-destination/#configuringparameters).
**Default value:**
```json
{
- "filter": "apigateway|lambda|rds"
+ "filter": "apigateway|lambda|rds",
+ "tags_filter": ""
}
```
-**Default JSON:**
+**Override Example JSON:**
The following example includes all log groups that match `"lambda-cloudwatch-logs"` and carry the specified tags:
```
auto_enable_logs_subscription_options = {
  "filter": "lambda-cloudwatch-logs",
+ "tags_filter": "Environment=Production,Application=MyApp"
}
```
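To illustrate the `tags_filter` string format, here is a small Python sketch. The parsing and match-any semantics shown are assumptions for illustration only, not the function's actual implementation:

```python
# Sketch of how a tags_filter string like "Environment=Production,Application=MyApp"
# can be parsed into key-value pairs. Match-any semantics (a log group is included
# if ANY pair matches its tags) are assumed here for illustration.
def parse_tags_filter(tags_filter):
    """Split 'K1=v1,K2=v2' into a list of (key, value) tuples; empty string -> []."""
    if not tags_filter:
        return []
    return [tuple(pair.split("=", 1)) for pair in tags_filter.split(",")]

def log_group_matches(log_group_tags, tags_filter):
    """A log group matches if the filter is empty or any filter pair is in its tags."""
    pairs = parse_tags_filter(tags_filter)
    return not pairs or any(log_group_tags.get(k) == v for k, v in pairs)

print(log_group_matches({"Environment": "Production"},
                        "Environment=Production,Application=MyApp"))  # True
print(log_group_matches({"Environment": "Dev"}, "Environment=Production"))  # False
```

An empty `tags_filter` matches every log group, which is consistent with "leave it blank if tag-based filtering is not needed."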
diff --git a/docs/observability/aws/integrations/amazon-rds.md b/docs/observability/aws/integrations/amazon-rds.md
index 0bfdc6100f..b50247936a 100644
--- a/docs/observability/aws/integrations/amazon-rds.md
+++ b/docs/observability/aws/integrations/amazon-rds.md
@@ -11,7 +11,7 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
[Amazon Relational Database Service (Amazon RDS)](https://aws.amazon.com/rds/) is a managed database service, optimized to run in the cloud. The RDS Amazon Web Service (AWS) simplifies the setup, operation, and scaling of relational database instances for use in applications throughout your infrastructure.
-The Sumo Logic Amazon RDS app dashboards provide visibility into the performance and operations of your Amazon Relational Database Service (RDS). Preconfigured dashboards allow you to monitor critical metrics of your RDS instance(s) or cluster(s) including CPU, memory, storage, network transmits and receive throughput, read and write operations, database connection count, disk queue depth, and more. CloudTrail Audit dashboards help you monitor activities performed on your RDS infrastructure. MySQL Logs dashboards helps you monitor database errors, slow queries, audit sql queries and generic activities. PostgreSQL logs dashboard help you to monitor database errors, slow queries, database security, and query execution timings. MSSQL Logs dashboards helps you monitor error logs and basic infrastructure details.
+The Sumo Logic Amazon RDS app dashboards provide visibility into the performance and operations of your Amazon Relational Database Service (RDS). Preconfigured dashboards allow you to monitor critical metrics of your RDS instance(s) or cluster(s), including CPU, memory, storage, network transmit and receive throughput, read and write operations, database connection count, disk queue depth, and more. CloudTrail Audit dashboards help you monitor activities performed on your RDS infrastructure. MySQL Logs dashboards help you monitor database errors, slow queries, audit SQL queries, and generic activities. PostgreSQL Logs dashboards help you monitor database errors, slow queries, database security, and query execution timings. MSSQL Logs dashboards help you monitor error logs and basic infrastructure details. Oracle CloudTrail and CloudWatch Logs dashboards provide monitoring for error logs and essential infrastructure details.
## Log and metrics types
@@ -21,6 +21,7 @@ The Amazon RDS app uses the following logs and metrics:
* [Publishing RDS CloudWatch Logs, RDS Database logs for Aurora MySQL, RDS MySQL, MariaDB](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.MySQLDB.PublishtoCloudWatchLogs.html).
* [Publishing RDS CloudWatch logs, RDS Database logs for Aurora PostgreSQL, and RDS PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.PostgreSQL.html#USER_LogAccess.Concepts.PostgreSQL.PublishtoCloudWatchLogs)
* [Publishing RDS CloudWatch logs, RDS Database logs for RDS MSSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.SQLServer.html#USER_LogAccess.SQLServer.PublishtoCloudWatchLogs)
+* [Publishing RDS CloudWatch logs, RDS Database logs for RDS Oracle](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.Oracle.html#USER_LogAccess.Oracle.PublishtoCloudWatchLogs)
### Sample CloudTrail log message
@@ -271,6 +272,35 @@ account=* region=* namespace=aws/rds dbidentifier=* _sourceHost=/aws/rds/*Error
| sort by _timeslice
```
+```sql title="Engine and Its DB Instance (Oracle CloudTrail log based)"
+account=* region=* namespace=aws/rds "\"eventSource\":\"rds.amazonaws.com\"" !errorCode
+| json "eventTime", "eventName", "eventSource", "awsRegion", "userAgent", "recipientAccountId", "userIdentity", "requestParameters", "responseElements", "errorCode", "errorMessage", "requestID", "sourceIPAddress" as eventTime, event_name, event_source, Region, user_agent, accountId1, userIdentity, requestParameters, responseElements, error_code, error_message, requestID, src_ip nodrop
+| where event_source = "rds.amazonaws.com"
+| json "requestParameters.engine", "responseElements.engine" as engine1, engine2 nodrop
+| if (!isEmpty(engine1), engine1, engine2) as engine
+| where !isEmpty(engine) and engine contains "oracle"
+| json field=userIdentity "accountId", "arn", "userName", "type" as accountId, arn, username, type nodrop
+| parse field=arn ":assumed-role/*" as user nodrop | parse field=arn "arn:aws:iam::*:*" as accountId, user nodrop
+| json field=requestParameters "dBInstanceIdentifier", "resourceName", "dBClusterIdentifier" as dBInstanceIdentifier1, resourceName, dBClusterIdentifier1 nodrop
+| json field=responseElements "dBInstanceIdentifier" as dBInstanceIdentifier3 nodrop
+| parse field=resourceName "arn:aws:rds:*:db:*" as f1, dBInstanceIdentifier2 nodrop
+| if (resourceName matches "arn:aws:rds:*:db:*", dBInstanceIdentifier2, if (!isEmpty(dBInstanceIdentifier1), dBInstanceIdentifier1, dBInstanceIdentifier3) ) as dBInstanceIdentifier
+| where !isEmpty(dBInstanceIdentifier)
+| count as freq by engine, dBInstanceIdentifier
+| sort by dBInstanceIdentifier, engine asc
+| fields -freq
+```
+
+
+```sql title="ORA Messages Over Time (Oracle CloudWatch log based)"
+account=* region=* namespace=aws/rds dbidentifier=* _sourceHost=/aws/rds/*alert ORA-*
+| json "message" nodrop | if (_raw matches "{*", message, _raw) as message
+| parse regex field=message "(?<oraerr>ORA-\d{5}): (?<oramsg>.*)" multi
+| timeslice 1s
+| count as eventCount by oraerr, _timeslice
+| transpose row _timeslice column oraerr
+```
+
## Viewing the RDS dashboards
import FilterDashboards from '../../../reuse/filter-dashboards.md';
@@ -542,3 +572,36 @@ Use this dashboard to:
* Track recent terminations of SQL Server instances and monitor the creation of new databases.
+
+
+### 20. Oracle Logs - Alert Logs Analysis
+
+The **Amazon RDS - Oracle Logs - Alert Logs Analysis** dashboard provides details on Oracle errors, including counts of various error types, ORA messages, Oracle instance states, and other data derived from the Oracle Alert log.
+
+Use this dashboard to:
+* Monitor Amazon Oracle RDS errors through CloudWatch Events.
+* Monitor ORA and TNS message events.
+* Monitor log switch activities, archival errors, tablespace extension issues, failures, warnings, and errors occurring on the Oracle RDS instance.
+
+
+
+### 21. Oracle Logs - Audit Logs Analysis
+
+The **Amazon RDS - Oracle Logs - Audit Logs Analysis** dashboard provides details on syslog audit trail, including successful and failed activities, and top usage by client, database user, and privileges used.
+
+Use this dashboard to:
+* Monitor successful and failed Amazon Oracle RDS events.
+* Monitor top usage by client, database user, and privileges on Oracle RDS instance.
+
+
+
+
+### 22. Oracle Logs - Listener Troubleshooting
+
+The **Amazon RDS - Oracle Logs - Listener Troubleshooting** dashboard provides insights into Oracle listener process activity, including database connections by host and application, connection failures, command execution statuses and trends, and additional data from the Oracle Listener log.
+
+Use this dashboard to:
+* Monitor listener process activity on Oracle RDS instance.
+* Monitor database connections by host and application, track connection failures, analyze command execution statuses and trends, and gather insights from the Oracle Listener log.
+
+
diff --git a/docs/send-data/collect-from-other-data-sources/autosubscribe-arn-destination.md b/docs/send-data/collect-from-other-data-sources/autosubscribe-arn-destination.md
index 7247d2c855..9fc6fcbace 100644
--- a/docs/send-data/collect-from-other-data-sources/autosubscribe-arn-destination.md
+++ b/docs/send-data/collect-from-other-data-sources/autosubscribe-arn-destination.md
@@ -58,7 +58,7 @@ This section describes the parameters you can configure for the Lambda function.
* **UseExistingLogs**—Controls whether this function will be used to create subscription filters for existing log groups. Select "True" if you want to use the function for subscribing to existing log groups.
-* **LogGroupTags**: Enter comma-separated key-value pairs for filtering logGroups using tags. For Example, KeyName1=string,KeyName2=string. Only log groups that match any one of the key-value pairs will be subscribed by this Lambda function. Supported only when UseExistingLogs is set to false which means it works only for new log groups, not existing log groups.
+* **LogGroupTags**: Enter comma-separated key-value pairs for filtering logGroups using tags. For example, KeyName1=string,KeyName2=string. Supported only when UseExistingLogs is set to false.
* **RoleArn:** Provide the AWS Role ARN which has permission to put data into the provided Kinesis Firehose data delivery stream. Keep the value empty, when the destination type is Lambda.
diff --git a/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics.md b/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics.md
index 443c198cf7..e3afcc2568 100644
--- a/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics.md
+++ b/docs/send-data/hosted-collectors/amazon-aws/amazon-cloudwatch-source-metrics.md
@@ -73,7 +73,8 @@ AWS tag filtering is supported for the following AWS namespaces.
* AWS/ES
* AWS/Firehose
* AWS/Inspector
-* AWS/Kinesis AWS/KinesisAnalytics
+* AWS/Kinesis
+* AWS/KinesisAnalytics
* AWS/KinesisVideo
* AWS/KMS
* AWS/Lambda
diff --git a/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source.md b/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source.md
index 343169ecde..fb4acb16d8 100644
--- a/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source.md
+++ b/docs/send-data/hosted-collectors/amazon-aws/aws-kinesis-firehose-metrics-source.md
@@ -54,8 +54,9 @@ In this step, you create the AWS Kinesis Firehose for Metrics source.
1. Enter a **Name** for the source.
1. (Optional) Enter a **Description**.
1. For **Source Category**, enter any string to tag the output collected from this Source. Category metadata is stored in a searchable field called `_sourceCategory`.
-1. For **AWS Tag Filters**, enter keys and values to add filters to your metrics. AWS Tag filters are supported for AWS namespaces but not for custom namespaces.
-2. For **AWS Access** of a Kinesis Metric source, the role requires `tag:GetResources` permission. The Kinesis Log source does not require permissions.
+1. For **AWS Tag Filters** (optional), enter keys and values to add filters to your metrics. AWS Tag filters are supported for AWS namespaces but not for custom namespaces.
+**Example**<br/>![AWS Tag Filters example](/img/send-data/kinesis-aws-tag-filters.png)
+1. For **AWS Access** of a Kinesis Metric source, the role requires `tag:GetResources` permission. The Kinesis Log source does not require permissions.
1. Click **Save**.
## Step 2: Set up AWS Metric Streams
diff --git a/static/img/send-data/kinesis-aws-tag-filters.png b/static/img/send-data/kinesis-aws-tag-filters.png
new file mode 100644
index 0000000000..1e9ce4fe30
Binary files /dev/null and b/static/img/send-data/kinesis-aws-tag-filters.png differ