
Google Search Console recommended fixes for SEO and performance #4228


Merged — 8 commits, Jun 20, 2024
This page has information about Sumo Logic’s AWS Kinesis Firehose for Metrics

You can use the AWS Kinesis Firehose for Metrics source to ingest CloudWatch metrics from the [Amazon Kinesis Data Firehose](https://aws.amazon.com/kinesis/data-firehose/?kinesis-blogs.sort-by=item.additionalFields.createdDate&kinesis-blogs.sort-order=desc). AWS CloudWatch Metrics can be streamed using AWS Metric Streams, a managed service that exports CloudWatch metrics data with low latency, and without management overhead or custom integration. With Metric Streams, you can create dedicated, continuous streams of metric data that can be delivered to Sumo Logic by Kinesis Data Firehose.

<img src={useBaseUrl('img/send-data/aws-kinesis-firehose-metrics.png')} alt="icon" width="50"/>

## How it works

The diagram below illustrates the metrics collection pipeline.
The key difference between the sources is how they get metrics.

The benefits of a streaming source over a polling source include:

* **No API throttling**. The Kinesis Firehose for Metrics source doesn’t consume your AWS quota by making calls to the AWS CloudWatch APIs. This offers both efficiency and cost benefits.
* **Automatic retry mechanism**. Kinesis Firehose has an automatic retry mechanism for delivering metrics to the Kinesis Firehose for Metrics source. In the event of a glitch, the metrics are re-sent after the service is restored. If that fails, Firehose stores all failed messages in a customer-owned S3 bucket for later recovery.
* **Consistent speed**. Latency is the same for all metrics, whether new, old, sparse, or continuous. This is a benefit over the AWS CloudWatch Metrics source, which doesn’t reliably ingest old or sparsely published metrics.
* **High resolution**. Kinesis Firehose streams all metrics at a 1-minute resolution. The AWS CloudWatch Metrics source supports scans as low as 1 minute, but that resolution can result in AWS account throttling and higher AWS bills.

:::note
The AWS CloudWatch Metrics source uses AWS’s [GetMetricStatistics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricStatistics.html) API and, as a result, supports the `Unit` parameter. When a request includes the `Unit` parameter, only metrics with the specified unit (for example, Bytes or Microseconds) are reported. The Kinesis Firehose for Metrics source does not currently support the `Unit` parameter.
:::
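To make the distinction concrete, here is a hedged sketch of where `Unit` fits in a GetMetricStatistics request. The helper name and defaults are illustrative, not part of any Sumo Logic or AWS SDK API; the request keys match the real CloudWatch API.

```python
from datetime import datetime, timedelta, timezone

def build_get_metric_statistics_request(namespace, metric_name, unit=None, minutes=60):
    """Build a CloudWatch GetMetricStatistics request dict.

    The optional Unit parameter restricts results to data points reported
    with that unit. The AWS CloudWatch Metrics source supports it; the
    Kinesis Firehose for Metrics source does not.
    """
    end = datetime.now(timezone.utc)
    request = {
        "Namespace": namespace,
        "MetricName": metric_name,
        "StartTime": end - timedelta(minutes=minutes),
        "EndTime": end,
        "Period": 60,                 # 1-minute resolution
        "Statistics": ["Average"],
    }
    if unit is not None:
        request["Unit"] = unit        # e.g. "Bytes", "Microseconds"
    return request

# With boto3 the call would then be:
#   boto3.client("cloudwatch").get_metric_statistics(**request)
```

Omitting `Unit` returns data points regardless of unit, which mirrors the streaming source’s behavior.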

## Step 1: Set up the source
In this step, you create the AWS Kinesis Firehose for Metrics source.
1. <!--Kanso [**Classic UI**](/docs/get-started/sumo-logic-ui/). Kanso--> In the main Sumo Logic menu, select **Manage Data > Collection > Collection**. <!--Kanso <br/>[**New UI**](/docs/get-started/sumo-logic-ui-new/). In the Sumo Logic top menu select **Configuration**, and then under **Data Collection** select **Collection**. You can also click the **Go To...** menu at the top of the screen and select **Collection**. Kanso-->
1. Click **Add Source** next to a Hosted Collector. 
1. Select **AWS Kinesis Firehose for Metrics**.

1. Enter a **Name** for the source.
1. (Optional) Enter a **Description**.
1. For **Source Category**, enter any string to tag the output collected from this Source. Category metadata is stored in a searchable field called `_sourceCategory`.<br/><img src={useBaseUrl('img/send-data/kinesis-aws-source.png')} alt="kinesis-aws-source.png" width="500"/>
1. For **AWS Access**, the role for a Kinesis Metrics source requires the `tag:GetResources` permission. A Kinesis Logs source does not require permissions.
1. Click **Save**.
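The `tag:GetResources` requirement above could be granted with an IAM policy along these lines. This is a sketch, not an official policy: the statement ID is arbitrary, and the Resource Groups Tagging API actions generally only accept `*` as the resource.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SumoKinesisMetricsTagLookup",
      "Effect": "Allow",
      "Action": "tag:GetResources",
      "Resource": "*"
    }
  ]
}
```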

In this step, you set up the AWS Metric Streams service to stream metrics to Kinesis Data Firehose using a [CloudFormation template](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-whatis-concepts.html#w2ab1b5c15b7):

1. Go to **Services > CloudFormation** in the AWS console.
1. On the **CloudFormation > Stack** page, click **Create stack**.<br/> ![create-stack-icon.png](/img/send-data/create-stack-icon.png)
1. On the **Create stack** page:

1. Click **Template is ready**.
1. Click **Amazon S3 URL** and paste this URL into the URL field: `https://sumologic-appdev-aws-sam-apps.s3.amazonaws.com/KinesisFirehoseCWMetrics.template.yaml`
1. Click **Next**.<br/> ![step4a.png](/img/send-data/step4a.png)
1. On the **Specify stack details** page:

* **Stack name**. Enter a name for the stack. 
* **Sumo Logic Kinesis Firehose Metrics Configuration.** (Required) Enter the URL of the AWS Kinesis Firehose for Metrics source.
* **Select Namespaces to collect AWS CloudWatch Metrics**. Enter a comma-delimited list of the namespaces from which you want to collect AWS CloudWatch metrics.
* **Failed Data AWS S3 Bucket Configuration**. Enter **Yes** to create a new bucket, or **No** to use an existing bucket.
* **AWS S3 Bucket Name for Failed Data**. Enter the name of the Amazon S3 bucket to create, or the name of an existing bucket in the current AWS account.
* Click **Next**.<br/> ![stack.png](/img/send-data/stack.png)
1. Click **Create stack**.<br/> ![final-create-icon.png](/img/send-data/final-create-icon.png)
1. The AWS console displays the resources in the newly created stack.<br/> ![resources-in-stack.png](/img/send-data/resources-in-stack.png)
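The console steps above can also be driven from code. The sketch below builds the equivalent CreateStack request; the `ParameterKey` names are placeholders — check the template itself for the actual keys before using this, and the source URL argument is whatever your Kinesis Firehose for Metrics source reports.

```python
TEMPLATE_URL = ("https://sumologic-appdev-aws-sam-apps.s3.amazonaws.com/"
                "KinesisFirehoseCWMetrics.template.yaml")

def build_create_stack_request(stack_name, source_url, namespaces):
    """Build a CloudFormation CreateStack request mirroring the console steps.

    ParameterKey names below are hypothetical stand-ins for the keys
    defined in the template; verify them against the template file.
    """
    return {
        "StackName": stack_name,
        "TemplateURL": TEMPLATE_URL,
        "Parameters": [
            # Hypothetical key: the URL of the Kinesis Firehose for Metrics source.
            {"ParameterKey": "SumoLogicKinesisMetricsURL", "ParameterValue": source_url},
            # Hypothetical key: comma-delimited CloudWatch namespaces to stream.
            {"ParameterKey": "NamespaceFilter", "ParameterValue": ",".join(namespaces)},
        ],
        "Capabilities": ["CAPABILITY_IAM"],  # the template creates IAM roles
    }

# With boto3 the call would then be:
#   boto3.client("cloudformation").create_stack(**build_create_stack_request(...))
```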

## Filter CloudWatch metrics during ingestion

Inclusive and exclusive filters can’t be combined.

### Include metrics by namespace

1. Open the [CloudWatch console](https://console.aws.amazon.com/cloudwatch).
1. In the navigation pane, choose **Metrics**.
1. Under **Metrics**, select **Streams**.<br/> ![metric_stream_1.png](/img/send-data/metric_stream_1.png)
1. Select the metric stream and click **Edit**.<br/> ![metric-stream_2.png](/img/send-data/metric-stream_2.png)
1. Click **Selected namespaces**.<br/> ![metric-stream_4.png](/img/send-data/metric-stream_4.png)
1. From the list of AWS namespaces, select the namespaces whose metrics you want to receive. In the screenshot below, “S3” and “Billing” are selected.<br/> ![metric-stream-5.png](/img/send-data/metric-stream-5.png)
1. Click **Save changes** at the bottom of the page.<br/> ![metric-stream-6.png](/img/send-data/metric-stream-6.png)

### Exclude metrics by namespace

1. Open the [CloudWatch console](https://console.aws.amazon.com/cloudwatch).
1. In the navigation pane, choose **Metrics**.
1. Under **Metrics**, select **Streams**.<br/> ![metric_stream_1.png](/img/send-data/metric_stream_1.png)
1. Select the metric stream and click **Edit**.<br/> ![metric-stream_2.png](/img/send-data/metric-stream_2.png)
1. Click **All metrics** and select the **Exclude metric namespaces** option.
1. From the list of AWS namespaces, select the namespaces whose metrics you do not want to receive.
1. Click **Save changes** at the bottom of the page.
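The same include/exclude choice can be expressed through CloudWatch’s PutMetricStream API. The helper below is a sketch of just the filter portion: a full request also needs `FirehoseArn`, `RoleArn`, and `OutputFormat`, and the function name is our own, not an SDK method. Note that, as stated above, the two filter kinds cannot be combined.

```python
def build_metric_stream_filter_update(stream_name, include=None, exclude=None):
    """Build the namespace-filter portion of a PutMetricStream request.

    Pass either `include` or `exclude` (lists of namespace strings),
    never both -- inclusive and exclusive filters can't be combined.
    """
    if include and exclude:
        raise ValueError("IncludeFilters and ExcludeFilters cannot be combined")
    request = {"Name": stream_name}
    if include:
        request["IncludeFilters"] = [{"Namespace": ns} for ns in include]
    if exclude:
        request["ExcludeFilters"] = [{"Namespace": ns} for ns in exclude]
    # A real call also requires FirehoseArn, RoleArn, and OutputFormat:
    #   boto3.client("cloudwatch").put_metric_stream(**request, ...)
    return request
```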
`docs/send-data/reference-information/time-reference.md` (8 additions, 8 deletions)
Our Collectors can automatically parse most timestamps without any issues.
* To edit the timestamp settings for an existing Source, navigate to the [**Collection**](/docs/send-data/collection/) page. Then click **Edit** to the right of the Source name and go to step 2.<br/><img src={useBaseUrl('img/send-data/source-edit.png')} alt="source-edit" width="600"/>
1. Navigate to the **Advanced Options for Logs (Optional)** section.<br/><img src={useBaseUrl('img/send-data/advanced-options-logs.png')} alt="advanced-options-logs" />
1. Under **Timestamp Format**, select **Specify a format** > **+ Add Timestamp Format**.<br/><img src={useBaseUrl('img/send-data/specify-timestamp-format.png')} alt="specify-timestamp-format" width="300"/>
1. In the **Format** field, enter the timestamp format the Collector should use to parse timestamps in your log.<br/><img src={useBaseUrl('img/send-data/timestamp-format-highlighted.png')} alt="timestamp-format-highlighted" width="600"/><br/>
:::note
If the timestamp format is in epoch time, enter **epoch** in the **Format** field.
:::
:::note
* When providing multiple custom formats, specify the most common format first. The Collector will process each custom format in the order provided. Once a timestamp is located, no further timestamp checking is done.
* If no timestamps are located that match your custom formats, the Collector will still attempt to automatically locate the log's timestamp.
:::
1. The **Timestamp locator** is a regular expression with a capture group matching the timestamp in your log messages.<br/><img src={useBaseUrl('/img/send-data/timestamp-locator-highlighted.png')} alt="timestamp-locator-highlighted.png" width="500"/><br/>The timestamp locator must:
* be provided for 16-digit epoch or 19-digit epoch timestamps. Otherwise, this field is not necessary.
* be a valid Java regular expression. Otherwise, this error message will be displayed: `Unable to validate timestamp formats. The timestamp locator regex your-regex is invalid. The timestamp locator regex your-regex uses matching features which are not supported.`
* be an [RE2-compliant](https://github.com/google/re2/wiki/Syntax) regular expression, for example: `\[time=(.*?)\]`. Otherwise, this error message will be displayed: `Unable to validate timestamp formats. The timestamp locator regex your-regex uses matching features which are not supported.`
:::note
If you use quotes in the timestamp locator regular expression, you may see issues in the display after you save. The regular expression is not actually changed and can still be used to locate your timestamp.
:::
1. If you have more than one custom timestamp format that you want to add, click **+ Add**. The ordering of formats is significant. Each provided timestamp format is tested, in the order specified, until a matching format is found. The first matching format determines the final message timestamp. If none of the provided formats match a particular message, the Collector will attempt to automatically determine the message's timestamp.
1. Next, we recommend testing a few log lines from your data against your specified formats and locators. Enter sample log messages to test the timestamp formats you want to extract.<br/> <img src={useBaseUrl('img/send-data/timestamp-format-test-examples.png')} alt="timestamp format test examples.png" width="600"/>
1. Click **Test** once your log lines are entered. The results display with the timestamp parsed and format matches (if any).<br/><img src={useBaseUrl('img/send-data/timestamp-format-test-results.png')} alt="timestamp format test results.png" width="500"/><br/>
You should see one of the following messages:
* **Format matched**. In this example, the format of `yyyy/MM/dd HH:mm:ss` was matched and highlighted in green. This was the first format provided, so it returns as `1(format: yyyy/MM/dd HH:mm:ss locator: \[time=(.*?)\])`. The **Effective message time** would be `2017-01-15 02:12.000 +0000`.
* **None of the custom timestamp format was matched**. While the custom formats were not found in the log, there's still an auto-detected timestamp highlighted in orange, `2017-06-01 02:12:12.259667`, that we can use. The **Effective message time** will be `2017-06-01 02:12:12.259 +0000`.
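The locator-plus-format pairing described above can be sketched in plain Python, using the example locator `\[time=(.*?)\]` from earlier and the Python equivalent of the Java-style format `yyyy/MM/dd HH:mm:ss`. This is an illustration of the matching logic, not the Collector's actual implementation.

```python
import re
from datetime import datetime

# RE2-compliant locator with a single capture group, as in the example above.
LOCATOR = re.compile(r"\[time=(.*?)\]")
# Python strptime equivalent of the Java-style format yyyy/MM/dd HH:mm:ss.
FORMAT = "%Y/%m/%d %H:%M:%S"

def extract_timestamp(line):
    """Return the parsed timestamp, or None if the locator does not match."""
    match = LOCATOR.search(line)
    if not match:
        return None
    return datetime.strptime(match.group(1), FORMAT)

ts = extract_timestamp("INFO [time=2017/01/15 02:12:00] request handled")
# ts -> datetime(2017, 1, 15, 2, 12)
```

If the locator matches but the captured text doesn’t fit the format, `strptime` raises `ValueError` — which mirrors why format order matters when several custom formats are configured.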
```
_sourceCategory=PaloAltoNetworks
| _format as timestampformat
```

The result would look like this: <br/><img src={useBaseUrl('img/send-data/format.png')} alt="format.png" width="600"/>


### Large time between message time and receipt time
Changing the **Default Timezone** setting affects how the UI displays messages.

For example, the following screenshot shows the time zone set to **PST** in the UI, in the **Time** column. The logs were collected from a system that was also configured to use the **PST** time zone, which is displayed in the timestamp of the **Message** column. The timestamps in both columns match as they are set to the same time zone.

<img src={useBaseUrl('img/send-data/timezone_PST.png')} alt="timezone_PST.png" width="500" />

The next screenshot shows the same search result after changing the Default Timezone setting to UTC. Now the Time column is displayed in UTC, while the Message column retains the original timestamp, in PST.

<img src={useBaseUrl('img/send-data/timezone_UTC.png')} alt="timezone_UTC.png" width="500"/>

In another example, if your time zone is set to **UTC**, and you share a Dashboard with another user who has their time zone set to **PST**, what will they see?

They will see the same data, just displayed using their custom set time zone. For example, if you have a Panel that uses a time series, the timeline on the X axis of your chart is displayed in your time zone, **UTC**. The other user will see the timeline on the X axis displayed in their time zone, **PST**. But the data displayed in the chart is exactly the same.

<img src={useBaseUrl('img/send-data/timezone_dashboards_compare.png')} alt="timezone_dashboards_compare.png" width="600"/>
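The "same data, different display" behavior can be sketched with Python's standard `zoneinfo` module. One caveat for this illustrative date: a June timestamp in `America/Los_Angeles` renders as PDT rather than PST.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One instant, rendered in two display time zones: the underlying data is
# identical, only the wall-clock presentation changes.
instant = datetime(2024, 6, 20, 9, 30, tzinfo=timezone.utc)
pst_view = instant.astimezone(ZoneInfo("America/Los_Angeles"))

print(instant.strftime("%H:%M %Z"))   # 09:30 UTC
print(pst_view.strftime("%H:%M %Z"))  # 02:30 PDT
assert instant == pst_view            # same instant, different display
```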

## Time ranges
