diff --git a/blog-service/2025-01-08-otel-remote-management.md b/blog-service/2025-01-08-otel-remote-management.md
new file mode 100644
index 0000000000..8d02fb8058
--- /dev/null
+++ b/blog-service/2025-01-08-otel-remote-management.md
@@ -0,0 +1,23 @@
+---
+title: Remote Management for OpenTelemetry Collector (Collection)
+image: https://help.sumologic.com/img/sumo-square.png
+keywords:
+ - collection
+ - opentelemetry
+ - otel
+ - remote-management
+hide_table_of_contents: true
+---
+
+import useBaseUrl from '@docusaurus/useBaseUrl';
+
+
+
+The Sumo Logic Distribution for OpenTelemetry Collector now supports remote management, enabling you to configure and manage data collection directly from the Sumo Logic UI. With this feature, you can:
+
+* **Simplify configuration**. Set up and manage data collection for multiple collectors without server access.
+* **Streamline workflows**. Use tags to group collectors and apply centralized data source templates, reducing redundancy and manual effort.
+* **Enhance automation**. Automatically monitor new servers by tagging them during setup.
+* **Accelerate time to value**. Start collecting data in minutes with an intuitive UI and no need to manage configuration files.
+
+This release provides a faster, more efficient way to manage large-scale data collection, supporting scalable and automated operations. [Learn more](/docs/send-data/opentelemetry-collector/remote-management).
diff --git a/cid-redirects.json b/cid-redirects.json
index 2f4be5c425..8c65bf3a27 100644
--- a/cid-redirects.json
+++ b/cid-redirects.json
@@ -2674,7 +2674,7 @@
"/cid/9008": "/docs/alerts/webhook-connections/new-relic",
"/cid/10333": "/docs/send-data/opentelemetry-collector/remote-management/processing-rules",
"/cid/10334": "/docs/send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules",
- "/cid/10335": "/docs/send-data/opentelemetry-collector/remote-management/processing-rules/metrics-include-and-exclude-rules",
+ "/cid/10335": "/docs/send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules",
"/cid/10336": "/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules",
"/cid/9010": "/docs/send-data/opentelemetry-collector",
"/cid/9011": "/docs/send-data/opentelemetry-collector/install-collector/linux",
diff --git a/docs/reuse/apps/opentelemetry/collector-installation.md b/docs/reuse/apps/opentelemetry/collector-installation.md
index 15ecf2dc81..fd9e3fbd44 100644
--- a/docs/reuse/apps/opentelemetry/collector-installation.md
+++ b/docs/reuse/apps/opentelemetry/collector-installation.md
@@ -1,17 +1,14 @@
-import useBaseUrl from '@docusaurus/useBaseUrl';
+In this step, we'll install the collectors and add a uniquely identifiable tag to them.
-In this step, we will install the collector and add a uniquely identifiable tag to these collectors.
+1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > OpenTelemetry Collection**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the Sumo Logic top menu select **Configuration**, and then under **Data Collection** select **OpenTelemetry Collection**. You can also click the **Go To...** menu at the top of the screen and select **OpenTelemetry Collection**.
+1. On the **OpenTelemetry Collection** page, click **+ Add Collector**.
+1. In the **Set up Collector** step:
+ 1. Choose your platform (for example, Linux).
+ 1. Enter your **Installation Token**.
+ 1. Under **Tag data on Collector level**, add a new tag to identify these collectors as having Apache running on them (for example, `application = Apache`).
+ 1. Leave the **Collector Settings** at their default values to configure collectors as remotely managed.
+ 1. Under **Generate and run the command to install the collector**, copy the installation command and run it in a terminal on the system where the collector needs to be installed.
+1. After installation is complete, click **Next** to proceed.
+1. Select a source template (for example, the Apache source template) to start collecting logs from all linked collectors, then proceed with the data configuration.
-1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > OpenTelemetry Collection**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the Sumo Logic top menu select **Configuration**, and then under **Data Collection** select **OpenTelemetry Collection**. You can also click the **Go To...** menu at the top of the screen and select **OpenTelemetry Collection**.
-1. On the **OpenTelemetry Collection** page, click **Add Collector**.
-1. Select the platform of the remote host in the **Set up Collector** section.
-1. Enter your **Installation Token**.
-1. Under **Tag data on Collector level**, add a new key-value pair by clicking **+New tags**, for example, `application = Apache` to identify these collectors as having Apache running on them.
-1. For **Collector Settings**, leave it as default.
-1. Under **Generate and run the command to install the collector**, copy the command and execute it in your system terminal, where the collector needs to be installed.
-1. Wait for the installation process to complete, then click **Next** to proceed.
-1. On the next screen, you will see a list of available Source Templates.
-
-:::note
-If you close this Source template creation screen, you can navigate back. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Source Template**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the Sumo Logic top menu select **Configuration**, and then under **Data Collection** select **Source Template**. You can also click the **Go To...** menu at the top of the screen and select **Source Template**.
-:::
+To revisit this screen later: From the [**Classic UI**](/docs/get-started/sumo-logic-ui-classic), select **Manage Data > Collection > Source Template**. From the [**New UI**](/docs/get-started/sumo-logic-ui), select **Configuration** > **Source Template**.
diff --git a/docs/reuse/apps/opentelemetry/data-configuration.md b/docs/reuse/apps/opentelemetry/data-configuration.md
index 9393340469..c6223807bd 100644
--- a/docs/reuse/apps/opentelemetry/data-configuration.md
+++ b/docs/reuse/apps/opentelemetry/data-configuration.md
@@ -1,9 +1,48 @@
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-In this step, we'll create a data collection configuration and link them to all the collectors that have particular tags.
-
-1. Complete the Source Template form by providing the necessary details, then click **Next**.
-1. On the **Link Collectors** step, you will have the option to link the collectors using the Collector name or by adding tags to find the group of collectors. For our scenario, we will add the tag `application = Apache`.
-1. Click **Preview Collector(s)** to see the list of collectors that will be linked to the newly created source template.
-1. Click **Next** to complete the Source Template creation. In the background, the system will apply the configuration to all the linked collectors and will start collecting the respective telemetry data from the remote host.
-1. Click the **Log Search** or **Metric Search** icon to search for data collected for this Source Template.
+:::info
+A new source template is always created from the latest version of the source template.
+:::
+
+Follow the steps below to create a data collection configuration that gathers the required logs and link it to the relevant collectors using collector tags.
+
+1. Complete the source template form with the name and file path for your logs (for example, error logs or access logs), then click **Next**.
+1. Under **Link Collectors**, you can link collectors by collector name or by adding tags that select a group of collectors (for example, `application = Apache`).
+1. Preview and confirm the collectors (fetched automatically) that will be linked to the newly created source template.

+1. Click **Next** to complete source template creation. In the background, the system applies the configuration to all linked collectors and starts collecting the respective telemetry data from the remote hosts (in this example, Apache error logs).
+1. Click the **Log Search** or **Metrics Search** icons to search for and analyze the data collected for this source template.
+
+
diff --git a/docs/reuse/apps/opentelemetry/logs-advance-option-otel.md b/docs/reuse/apps/opentelemetry/logs-advance-option-otel.md
index e8dd282991..b0e689e798 100644
--- a/docs/reuse/apps/opentelemetry/logs-advance-option-otel.md
+++ b/docs/reuse/apps/opentelemetry/logs-advance-option-otel.md
@@ -1,6 +1,5 @@
-**Advance options** for log collection can be used as follows :
-
- **Timestamp Format**. By default, Sumo Logic will automatically detect the timestamp format of your logs. However, you can manually specify a timestamp format for a source by specifying the following:
- - **Timestamp locator**. This will be a [Go regular expression](https://github.com/google/re2/wiki/Syntax) which should have timestamp matched using a named capture group. Name of this captured group should be “timestamp_field”.
- - **Layout**. The exact layout of the timestamp to be parsed for example - %Y-%m-%dT%H:%M:%S.%LZ. To learn more about the rules, [refer here](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/internal/coreinternal/timeutils/internal/ctimefmt/ctimefmt.go#L68).
- - **Location (Time zone)** : The geographic location (timezone) to use when parsing a timestamp that does not include a timezone. The available locations depend on the local IANA Time Zone database. [This page](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) contains many examples, such as America/New_York.
\ No newline at end of file
+**Advance options** for log collection can be used as follows:
+ * **Timestamp Format**. By default, Sumo Logic will automatically detect the timestamp format of your logs. However, you can manually specify a timestamp format for a source by configuring the following (a hypothetical example follows this list):
+ - **Timestamp locator**. Use a [Go regular expression](https://github.com/google/re2/wiki/Syntax) to match the timestamp in your logs. Ensure the regular expression includes a named capture group called `timestamp_field`.
+ - **Layout**. Specify the exact layout of the timestamp to be parsed. For example, `%Y-%m-%dT%H:%M:%S.%LZ`. To learn more about the formatting rules, refer to [this guide](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/internal/coreinternal/timeutils/internal/ctimefmt/ctimefmt.go#L68).
+ - **Location (Time zone)**. Define the geographic location (timezone) to use when parsing a timestamp that does not include a timezone. The available locations depend on the local IANA Time Zone database. For example, `America/New_York`. See more examples [here](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).
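+
+As a hypothetical illustration, suppose your log lines begin with a timestamp such as `2025-01-08 09:43:39.607` that carries no timezone information. The three settings might then look like this (values are examples only):
+
+```
+Timestamp locator:  (?P<timestamp_field>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})
+Layout:             %Y-%m-%d %H:%M:%S.%L
+Location:           America/New_York
+```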
diff --git a/docs/send-data/opentelemetry-collector/remote-management/index.md b/docs/send-data/opentelemetry-collector/remote-management/index.md
index 5505637e43..5bbe7912cf 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/index.md
@@ -1,62 +1,53 @@
---
slug: /send-data/opentelemetry-collector/remote-management
-title: OpenTelemetry Remote Management
+title: OpenTelemetry Collector Remote Management
sidebar_label: Remote Management
---
import useBaseUrl from '@docusaurus/useBaseUrl';
-
-
-
-
-Beta
-
-This feature is in Beta. To participate, contact your Sumo Logic account executive.
-
-The Sumo Logic Distribution for OpenTelemetry Collector simplifies remote management of data collection, allowing setup from the Sumo Logic UI and deployment to multiple collectors.
-
-## Remote Management features
-
-### Collector tags
-
-With remote management, you can tag your [OpenTelemetry Collectors](/docs/send-data/opentelemetry-collector) to categorize and group them. These tags are also enriched in your data, enabling you to use them in your dashboards and searches.
-
-### Source templates
-
-Remote management data configuration for OpenTelemetry collectors is handled using Source templates. This feature extends the [Installed Collector Source](/docs/send-data/installed-collectors/sources) template, allowing association with multiple collectors.
-
-Use collector tags to group collectors and associate Source templates to these groups, reducing redundancy in data collection setup. This process, known as *Collector Linking*, streamlines configuration management.
-
-## How it works
-
-To illustrate the setup and configuration process, let's walk through an example scenario where you'd need to monitor Apache error logs from 50 Linux servers.
-
-### Step 1: Install collectors
-
-First, you'll need to install the OpenTelemetry collectors on each of the 50 servers and tag them to indicate that they are running Apache.
-
-1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > OpenTelemetry Collection**.
[**New UI**](/docs/get-started/sumo-logic-ui). In the Sumo Logic top menu, select **Configuration**, and then under **Data Collection** select **OpenTelemetry Collection**.
-1. Click **Add Collector**.
-1. In the **Set up Collector** step, choose **Linux** as the platform.
-1. Enter your **Installation Token**.
-1. Under **Tag data on Collector level**, add a new tag, `“application = Apache”`.
-1. Leave the **Collector Settings** at their default values.
-1. Under **Generate and run the command to install the collector**, copy and run the installation command in your system terminal where the collector needs to be installed.
-1. After installation is complete, click **Next** to proceed.
-1. On the next screen, you will see a list of available Source Templates. Select the **Apache Source Template** to apply the source template to start collecting logs from all linked collectors.
-
-To revisit this screen later, you can navigate back ([**Classic UI**](/docs/get-started/sumo-logic-ui-classic). In the main Sumo Logic menu, select **Manage Data > Collection > Source Template**. [**New UI**](/docs/get-started/sumo-logic-ui). In the Sumo Logic top menu select **Configuration**, and then under **Data Collection** select **Source Template**).
-
-### Step 2: Configure data collection
-
-Next, you'll create a data collection configuration to gather Apache error logs and link it to all collectors tagged `"application = Apache"`.
-
-1. Complete the Source Template form with the **Name** and **File Path** for your error logs, then click **Next**.
-1. Under **Link Collectors**, add the tag `"application = Apache"`.
-1. Click **Preview Collector(s)** to see the list of collectors that will be linked to the newly created Source Template.
-1. Click **Next** to complete Source Template creation. The system will apply the configuration to all linked collectors and start collecting Apache error logs.
-
-### Step 3: Monitor logs
-
-After configuring data collection, you can monitor the collected Apache error logs using the [Log Search](/docs/search). Additionally, use our [Dashboards](/docs/dashboards) to analyze the logs and gain insights from your Apache servers.
+The [Sumo Logic Distribution for OpenTelemetry Collector](/docs/send-data/opentelemetry-collector) simplifies remote management of data collection by enabling setup and configuration from the Sumo Logic UI, deployment to multiple collectors at once, and centralized management of data configurations using source templates.
+
+You can tag your OpenTelemetry collectors to categorize and group them. These tags are enriched in your data so you can use them in dashboards and searches.
+
+Remote management data configuration is handled using source templates. This feature extends our [Installed Collector](/docs/send-data/installed-collectors/sources) source template to support multiple collectors.
+
+By associating source templates with collector tags—a process called *collector linking*—you reduce redundancy in data collection setup and streamline configuration management across your environment.
+
+**Key benefits of remote management**
+
+* Simplified setup and configuration via the Sumo Logic UI
+* Tag-based collector grouping for efficient data collection
+* Centralized configuration using source templates
+* No server access required after installation
+* Faster time to value and reduced manual errors
+
+**Common use cases**
+
+* Grouping collectors by environment (for example, production, staging)
+* Expanding data collection for new services with minimal effort
+* Simplifying migration from legacy monitoring solutions
+* Monitoring error logs across multiple Apache servers
+
+In this section, we'll introduce the following concepts:
+
+* **Source Templates**. Learn how to create and modify your OpenTelemetry Remote Management source templates.
+* **Processing Rules**. Use processing rules for an OpenTelemetry agent with remote management source templates.
+
diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules.md
index 1693bafc2d..32a41e47a9 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules.md
@@ -1,28 +1,27 @@
---
id: include-and-exclude-rules
-title: Include and Exclude Rules for OpenTelemetry (Beta)
-description: Use include and exclude processing rules to specify what kind of data is sent to Sumo Logic using OpenTelemetry Collector.
+title: OpenTelemetry Remote Management Include and Exclude Rules
+sidebar_label: Include and Exclude Rules
+description: Use include and exclude processing rules to specify what kind of data is sent to Sumo Logic using OpenTelemetry remote management.
---
-
-
-
+import useBaseUrl from '@docusaurus/useBaseUrl';
-Beta
+You can use include and exclude processing rules to define which data is sent to Sumo Logic using the OpenTelemetry Collector. These rules internally utilize the [filter processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/filterprocessor) to filter data for logs and metrics.
-import useBaseUrl from '@docusaurus/useBaseUrl';
+* An exclude rule functions as a denylist filter, ensuring that matching data is not sent to Sumo Logic.
+* An include rule functions as an allowlist filter, ensuring that only matching data is sent to Sumo Logic.
-You can use include and exclude processing rules to specify what data is sent to Sumo Logic using OpenTelemetry Collector. Internally these will use [filter processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/filterprocessor) to get the data filtered.
+As a best practice, configure these rules to filter the smaller volume of data for optimal performance:
-* An exclude rule functions as a denylist filter where the matching data is not sent to Sumo Logic.
-* An include rule functions as an allowlist filter where only matching data is sent to Sumo Logic.
+* If you want to collect the majority of data from a source template, use exclude rules to match (filter out) the smaller volume of data.
+* If you want to collect a small set of data from a source template, use include rules to match (filter in) the smaller volume of data.
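+
+For illustration only, an include rule and an exclude rule on log bodies map roughly onto filter processor settings like the following. The patterns are placeholders, and the configuration a source template actually generates may differ.
+
+```yaml
+processors:
+  filter/log-rules:
+    logs:
+      include:                # allowlist: only matching log records are kept
+        match_type: regexp
+        bodies:
+          - .*ERROR.*         # placeholder include pattern
+      exclude:                # denylist: matching log records are dropped
+        match_type: regexp
+        bodies:
+          - .*healthcheck.*   # placeholder exclude pattern
+```
+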
-As a best practice, specify these rules to match the lesser volume of data.
+## Logs: Include and exclude rules
-* If you want to **collect the majority of data** from a source template, provide **exclude** rules to match (filter out) the lesser volume of data.
-* If you want to **collect a small set of data** from a source template, provide **include** rules to match (filter in) the lesser volume of data.
+### Examples
-For example, to include only messages coming from a Windows Event log with ID `8015`, you can add a Logs Filter to the source template and select the **Type** of the filter as "Include message that match", and can use the following filter regular expression:
+To include only messages from a Windows Event log with ID `8015`, you can add a Logs Filter to the source template. Select the **Type** of the filter as "Include messages that match" and use the following filter regular expression:
```
.*"id":8015.*
@@ -30,12 +29,61 @@ For example, to include only messages coming from a Windows Event log with ID `8
+## Metrics: Include and exclude rules
+
+### Examples
+
+Metrics filters can be configured in a source template by specifying:
+
+* Filter by metrics name
+* Filter by dimension
+* Filter by metric name and dimension
+
+Specify the filter name, **Type** (include or exclude), and **Filter By** criteria.
+
+### Filter by metrics name
+
+To filter by the name of a metric, select this option and provide a regex that matches the metric name.
+
+For example, to collect only network metrics while collecting host metrics, specify `network` as the metric name.
+
+
+
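+As a rough sketch (not the exact configuration the source template generates), an include rule on the metric name maps onto filter processor settings along these lines:
+
+```yaml
+processors:
+  filter/network-metrics:
+    metrics:
+      include:                # allowlist: keep only matching metrics
+        match_type: regexp
+        metric_names:
+          - .*network.*
+```
+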
+### Filter by dimension
+
+To filter by metric dimensions, select this option and specify key-value pairs in the dimension table.
+
+* The key must match the exact dimension name.
+* The value can be a regex matching the corresponding value.
+* Multiple key-value pairs are evaluated using an `AND` condition.
+
+For example, when collecting host metrics, you can filter CPU metrics for a specific CPU (say, `cpu0`) by specifying the corresponding key-value pair in the dimension table.
+
+
+
+### Filter by metrics name and dimension
+
+To filter by both metric name and dimensions, specify a regex for the metric name along with key-value pairs for dimensions.
+
+* The key must match the exact dimension name.
+* The value can be a regex matching the corresponding value for the given key.
+* The metric name and all key-value pairs are evaluated using an `AND` condition.
+
+For example, when collecting host metrics, you can filter network metrics for a specific device and direction by specifying the following (a configuration sketch follows this list):
+
+* Metric name regex: `network`
+* Dimension key-value pairs: `device=lo`, `direction=transmit`
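+
+A rough sketch of an exclude rule with these criteria, assuming OTTL-style filter conditions (the configuration the source template actually generates may differ):
+
+```yaml
+processors:
+  filter/drop-lo-transmit:
+    error_mode: ignore
+    metrics:
+      datapoint:
+        # drop data points whose metric name matches "network" and whose
+        # dimensions are device=lo and direction=transmit
+        - IsMatch(metric.name, ".*network.*") and attributes["device"] == "lo" and attributes["direction"] == "transmit"
+```
+
+An include rule with the same criteria would instead keep only the matching data points.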
+
+
+
+
## Rules and limitations
-When writing regular expression rules, you must follow these rules:
+When creating regular expression rules, adhere to the following guidelines:
-* Your rule must be [RE2 compliant](https://github.com/google/re2/wiki/Syntax).
-* If your rule matches *only a section* of the log line, the full log line will be matched.
-* For *single line messages*, it is not mandatory to prefix and suffix the regex expression with `.\*`.
-* Exclude rules take priority over include rules. Include rules are processed first. However, if an exclude rule matches data that matched the include rule filter, the data is excluded.
-* If two or more rules are listed, the assumed Boolean operator is `OR`.
+- Rules must comply with [RE2 syntax](https://github.com/google/re2/wiki/Syntax).
+- Exclude rules take precedence over include rules. Include rules are processed first, but if an exclude rule matches data that also matches the include rule, the data will be excluded.
+- When multiple rules are listed, the assumed Boolean operator is `OR`.
+- To filter on a single dimension key that can have multiple possible values, use the `|` operator. For example, to match `cpu0` and `cpu1`, specify the dimension value as `cpu0|cpu1`.
+- If your rule matches any part of a log line, the entire log line will be matched.
+- For single-line messages, it is not necessary to prefix or suffix the regex with `.*`.
diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md
index 6cc6375788..8a2a2c64bc 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/index.md
@@ -1,58 +1,32 @@
---
slug: /send-data/opentelemetry-collector/remote-management/processing-rules
-title: Processing Rules for OpenTelemetry (Beta)
-description: Use Sumo Logic processing rules for an OpenTelemetry agent with an OpenTelemetry remote management (OTRM) source template.
+title: OpenTelemetry Remote Management Processing Rules
+description: Use Sumo Logic processing rules for an OpenTelemetry agent with an OpenTelemetry remote management source template.
---
-
-
-
-
-Beta
import useBaseUrl from '@docusaurus/useBaseUrl';
-Processing rules can be used with OpenTelemetry Collector for different source templates in OTRM (OpenTelemetry remote management). These processing rules can filter and can mask the data sent to Sumo Logic from OpenTelemetry Collector which is remotely managed by Sumo Logic. The rules affect only the data sent to Sumo Logic; logs and metrics on your end remain intact and unchanged. Data filtered by OpenTelemetry Collector using processing rules does not count towards your daily data volume quota.
-
-Processing rules for logs collection support the following rule types:
-
-* [Exclude messages that match](include-and-exclude-rules.md). Remove messages that you do not want to send to Sumo Logic at all ("denylist" filter). These messages are skipped by OpenTelemetry Collector and are not uploaded to Sumo Logic.
-* [Include messages that match](include-and-exclude-rules.md). Send only the data you'd like in your Sumo Logic account (an "allowlist" filter). This type of rule can be useful, for example, if you only want to include messages coming from a firewall.
-* [Mask messages that match](mask-rules.md). Replace an expression with a mask string that you can customize. This is another way to your protect data, such as passwords, that you do not normally track.
-
-
-Processing Rules for metrics collection support the following rule types:
-
-* [Exclude metrics that match](metrics-include-and-exclude-rules.md). Remove metrics that you do not want to send to Sumo Logic at all ("denylist" filter).
-* [Include metrics that match](metrics-include-and-exclude-rules.md). Send only selected metrics to your Sumo Logic account (an "allowlist" filter).
-
-## Limitations
-
-* Regular expressions must be [RE2 compliant](https://github.com/google/re2/wiki/Syntax).
-* Processing Rules are tested with maximum of 20 rules.
-
-## How do processing rules work together?
-
-You can create one or more processing rules for a Source Template, combining the different types of filters to generate the exact data set you want sent to Sumo Logic.
-
-When a Source has multiple rules they are processed in the following order: includes, excludes, masks.
-
-Exclude rules take priority over include rules. Include rules are processed first, however, if an exclude rule matches data that matched the include rule filter, the data is excluded.
-
-## Guide contents
+Use our OpenTelemetry Collector remote management (OTRM) source template processing rules to filter and mask data sent to Sumo Logic from OpenTelemetry Collector. The collector itself is remotely managed by Sumo Logic.
In this section, we'll introduce the following concepts:
+
+* **OTRM Include and Exclude Rules**. Use include and exclude processing rules to specify what data is sent to Sumo Logic.
+* **OTRM Mask Rules**. Create an OTRM mask rule to replace an expression with a mask string.
diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules-windows.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules-windows.md
index f65d060163..0b9db633f4 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules-windows.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules-windows.md
@@ -1,36 +1,40 @@
---
id: mask-rules-windows
-title: Mask Rules for Windows Source Template (Beta)
-sidebar_label: Mask Rules for Windows
-description: Create a mask rule to replace an expression with a mask string.
+title: OpenTelemetry Remote Management Windows Source Template Mask Rules
+sidebar_label: Mask Rules - Windows Source Template
+description: Create an OpenTelemetry remote management Windows source template mask rule to replace an expression with a mask string.
---
-
-
-
-
-Beta
:::note
-This document only support masking logs for Windows source template. Refer to [Mask Rules](mask-rules.md) to mask logs for other source template.
+This document covers masking logs only for our [Windows source template](/docs/send-data/opentelemetry-collector/remote-management/source-templates/windows). For other source templates, refer to [Mask Rules](mask-rules.md).
:::
-A mask rule is a type of processing rule that hides irrelevant or sensitive information from logs before they are ingested. When you create a mask rule, the selected key will have its value matched against a regex pattern, which will then be replaced with a mask string before being sent to Sumo Logic. You can provide a custom mask string or use the default string, `"#####"`.
+A mask rule is a type of processing rule that hides irrelevant or sensitive information from logs before they are ingested. When you create a mask rule:
+
+* The selected key’s value is matched against a regular expression (regex).
+* The matching portion is replaced with a mask string before being sent to Sumo Logic.
+* You can provide a custom mask string or use the default mask string: `"#####"`.
+
+Masking is an effective method for reducing overall ingestion volume. Ingestion volume is calculated after applying the mask filter. If masking reduces the log size, the smaller size will be considered against the ingestion limits.
-Ingestion volume is calculated after applying the mask filter. If masking reduces the log size, the smaller size will be considered against the ingestion limits. Masking is an effective method for reducing overall ingestion volume.
+## Examples
-To mask specific fields in the Windows Event Log, the following inputs are required:
-- **Key**. This should point to the key in the Windows Event Log for which the value needs to be masked. This key can be nested, with each level separated by a dot(.). For example, `provider.guid`.
-- **Regex**. This identifies the part of the string value that needs to be masked.
-- ** Replacement **. This is to get the string that will be substituted in place of the string that was selected through the regex expression.
+### Masking inputs
+
+To mask specific fields in a Windows Event Log, the following inputs are required:
+- **Key**. This should point to the key in the Windows Event Log for which the value needs to be masked. This key can be nested, with each level separated by a dot (`.`). For example, `provider.guid`.
+- **Regex**. This pattern identifies the part of the string value that needs to be masked.
+- **Replacement**. The string to substitute for the matching portion identified by the regex.
:::important
Any masking expression should be tested and verified with a sample source file before applying it to your production logs.
:::
+### Masking numbers in a nested field
+
For example, to mask numbers inside `guid` under `provider` field from this log:
-```
-{
+```
+{
"record_id": 163054,
"channel": "Security",
"event_data": {
@@ -79,8 +83,7 @@ For example, to mask numbers inside `guid` under `provider` field from this log:
"id": 4798
},
"level": "Information"
-}
-```
+}
+```
You could use the following masking expression input:
1. Key as `provider.guid`.
@@ -89,8 +92,7 @@ You could use the following masking expression input:
Using the above masking options would provide the following result:
-```
-{
+```
+{
"record_id": 163054,
"channel": "Security",
"event_data": {
@@ -139,17 +141,13 @@ Using the above masking options would provide the following result:
"id": 4798
},
"level": "Information"
-}
-```
+}
+```
-:::note
-- For masking, we use the [replace_pattern](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/ottl/ottlfuncs/README.md#replace_pattern) OTTL function. In this function:
- - $ must be escaped as $$ to bypass environment variable substitution logic.
- - To input a literal $, use $$$.
-- When masking strings containing special characters like double quotes (`"`) and backslashes (`\`), these characters will be escaped by a backslash when masking the logs.
-:::
-
-## Limitations
+## Rules and limitations
+- Masking utilizes the [replace_pattern](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/ottl/ottlfuncs/README.md#replace_pattern) OTTL function. In this function:
+ - Escape `$` as `$$` to bypass environment variable substitution logic.
+ - Use `$$$` to include a literal `$`.
+- When masking strings containing special characters like double quotes (`"`) and backslashes (`\`), these characters will be escaped by a backslash when masking the logs.
- You can *only* mask the data which is a string in the Windows event log JSON.
-- You cannot mask a value which is nested inside any array.
+- You cannot mask a value that is nested inside any array.
diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md
index d3d6b53544..c4362748c6 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules.md
@@ -1,28 +1,25 @@
---
id: mask-rules
-title: Mask Rules (Beta)
+title: OpenTelemetry Remote Management Mask Rules
sidebar_label: Mask Rules
-description: Create a mask rule to replace an expression with a mask string.
+description: Create an OpenTelemetry collector remote management mask rule to replace an expression with a mask string.
---
-
-
-
-
-Beta
:::note
-This document do not support masking logs for Windows source template. Refer to [Mask Rules for Windows Source Template](mask-rules-windows.md) to mask logs for Windows source template.
+This document does not cover masking logs for Windows source templates. For details on masking logs for Windows, refer to [Mask Rules for the Windows Source Template](mask-rules-windows.md).
:::
-A mask rule is a type of processing rule that hides irrelevant or sensitive information from logs before ingestion. When you create a mask rule, whatever expression you choose to mask will be replaced with a mask string before it is sent to Sumo Logic. You can provide a mask string, or use the default `"#####"`.
+A mask rule is a processing rule that hides irrelevant or sensitive information from logs before they are ingested. When you create a mask rule, the selected expression will be replaced with a mask string before the data is sent to Sumo Logic. You can either specify a custom mask string or use the default `"#####"`.
+
+Ingestion volume is calculated after applying the mask filter. If the mask reduces the size of the log, the smaller size will be measured against ingestion limits. Masking is an effective method to reduce overall ingestion volume.
+
+## Examples
-Ingestion volume is calculated after applying the mask filter. If the mask reduces the size of the log, the smaller size will be measured against ingestion limits. Masking is a good method for reducing overall ingestion volume.
+### Mask an email address
For example, to mask the email address `dan@demo.com` from this log:
-```
-2018-05-16 09:43:39,607 -0700 DEBUG [hostId=prod-cass-raw-8] [module=RAW] [logger=scala.raw.InboundRawProtocolHandler] [auth=User:dan@demo.com] [remote_ip=98.248.40.103] [web_session=19zefhqy...] [session=80F1BD83AEBDF4FB] [customer=0000000000000005] [call=InboundRawProtocol.getMessages]
-```
+`2018-05-16 09:43:39,607 -0700 DEBUG [hostId=prod-cass-raw-8] [module=RAW] [logger=scala.raw.InboundRawProtocolHandler] [auth=User:dan@demo.com] [remote_ip=98.248.40.103] [web_session=19zefhqy...] [session=80F1BD83AEBDF4FB] [customer=0000000000000005] [call=InboundRawProtocol.getMessages]`
You could use the following filter expression:
@@ -30,43 +27,64 @@ You could use the following filter expression:
auth=User:.*\.com
```
-Using the masking string `auth=User:AAA` would provide the following result:
+Using the masking string `auth=User:AAA` would produce the following result:
+
+`2018-05-16 09:43:39,607 -0700 DEBUG [hostId=prod-cass-raw-8] [module=RAW] [logger=scala.raw.InboundRawProtocolHandler] [auth=User:AAA] [remote_ip=98.248.40.103] [web_session=19zefhqy...] [session=80F1BD83AEBDF4FB] [customer=0000000000000005] [call=InboundRawProtocol.getMessages]`
+
+
+:::important
+Any masking expression should be tested and verified with a sample source file before applying it to your production logs.
+:::
+
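+Masking relies on the OTTL [replace_pattern](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/ottl/ottlfuncs/README.md#replace_pattern) function (see the note later on this page). Purely as an illustration of how that function behaves, the email mask above corresponds roughly to a transform processor statement like the one below. The source template generates the actual collector configuration for you, so treat this as a sketch rather than the exact config; the processor name is made up.
+
+```yaml
+processors:
+  transform/mask-email:          # hypothetical processor name
+    log_statements:
+      - context: log
+        statements:
+          # replace the span of the body matched by the regex with the mask string;
+          # backslashes inside the OTTL string literal are escaped, hence "\\."
+          - replace_pattern(body, "auth=User:.*\\.com", "auth=User:AAA")
+```
+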
+### Mask credit card numbers
+
+You can mask credit card numbers from log messages using a regular expression within a mask rule. Once masked with a known string, you can then perform a search for that string within your logs to detect if credit card numbers may be leaking into your log files.
+
+The following regular expression can be used within a masking filter to mask American Express, Visa (16-digit only), Mastercard, and Discover credit card numbers:
```
-2018-05-16 09:43:39,607 -0700 DEBUG [hostId=prod-cass-raw-8] [module=RAW] [logger=scala.raw.InboundRawProtocolHandler] [auth=User:AAA] [remote_ip=98.248.40.103] [web_session=19zefhqy...] [session=80F1BD83AEBDF4FB] [customer=0000000000000005] [call=InboundRawProtocol.getMessages]
+((?:(?:4\d{3})|(?:5[1-5]\d{2})|6(?:011|5[0-9]{2}))(?:-?|\040?)(?:\d{4}(?:-?|\040?)){3}|(?:3[4,7]\d{2})(?:-?|\040?)\d{6}(?:-?|\040?)\d{5})
```
-## Rules
+This regular expression covers instances where the number includes dashes, spaces, or is a solid string of numbers.
+
+Samples include:
+
+* **American Express**. 3711-078176-01234 \| 371107817601234 \| 3711 078176 01234
+* **Visa**. 4123-5123-6123-7123 \| 4123512361237123 \| 4123 5123 6123 7123
+* **Master Card**. 5123-4123-6123-7123 \| 5123412361237123 \| 5123 4123 6123 7123
+* **Discover**. 6011-0009-9013-9424 \| 6500000000000002 \| 6011 0009 9013 9424
+
+
+## Rules and limitations
* The expressions you want masked must be matched by the regular expression you provide, and the masking string will replace the entire string matched by the regular expression.
- For example, this log message:
+ For example, for this log message:
- ```
- {
+ ```
+ {
"reqHdr":{
"auth":"Basic ksoe9wudkej2lfj*jshd6sl.cmei=",
"cookie":"$Version=0; JSESSIONID=6C1BR5DAB897346B70FD2CA7SD4639.localhost_bc; $Path=/"
}
- }
- ```
+ }
+ ```
You would use the following as a mask expression to mask the auth parameter's token:
```
"auth"\s*:\s*"Basic\s*[^"]+"
```
-
- If the masking string given here is `"auth":"#####"`, then the log output will be:
- ```
- {
+ Applying the masking string `"auth":"#####"` produces the following log output:
+
+ ```
+ {
"reqHdr": {
"auth":"#####",
"cookie":"$Version=0; JSESSIONID=6C1BR5DAB897346B70FD2CA7SD4639.localhost_bc; $Path=/"
}
- }
- ```
+ }
+ ```
* Do not unnecessarily match on more of the log than needed. As seen in the previous example, avoid using overly broad expressions that could mask the entire log. This ensures that only the sensitive information is masked, not the whole log entry.
@@ -74,42 +92,17 @@ Using the masking string `auth=User:AAA` would provide the following result:
(?s).*auth"\s*:\s*"Basic\s*([^"]+)".*(?s)
```
-* Make sure you do not specify a regular expression that matches a full log line. Doing so will result in the entire log line being masked.
+* Avoid regular expressions that match an entire log line, as this will result in the entire line being masked.
-* If you need to mask values on multiple lines, use single-line modifiers (?s). For example:
+* To mask values spanning multiple lines, use the single-line modifier `(?s)`. For example:
```
auth=User\:(.*(?s).*session=.*?)\]
```
-:::note
-- For masking, we use the [replace_pattern](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/ottl/ottlfuncs/README.md#replace_pattern) OTTL function. In this function:
- - $ must be escaped as $$ to bypass environment variable substitution logic.
- - To input a literal $, use $$$.
+:::note
+- Masking utilizes the [replace_pattern](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/ottl/ottlfuncs/README.md#replace_pattern) OTTL function. In this function:
+ - Escape `$` as `$$` to bypass environment variable substitution logic.
+ - Use `$$$` to include a literal `$`.
- When masking strings containing special characters like double quotes (`"`) and backslashes (`\`), these characters will be escaped by a backslash when masking the logs.
:::
-
-## Examples
-
-:::important
-Any masking expression should be tested and verified with a sample source file before applying it to your production logs.
-:::
-
-### Mask credit card numbers
-
-You can mask credit card numbers from log messages using a regular expression within a mask rule. Once masked with a known string, you can then perform a search for that string within your logs to detect if credit card numbers may be leaking into your log files.
-
-The following regular expression can be used within a masking filter to mask American Express, Visa (16 digit only), Mastercard, and Discover credit card numbers:
-
-```
-((?:(?:4\d{3})|(?:5[1-5]\d{2})|6(?:011|5[0-9]{2}))(?:-?|\040?)(?:\d{4}(?:-?|\040?)){3}|(?:3[4,7]\d{2})(?:-?|\040?)\d{6}(?:-?|\040?)\d{5})
-```
-
-This regular expression covers instances where the number includes dashes, spaces, or is a solid string of numbers.
-
-Samples include:
-
-* **American Express:** 3711-078176-01234 \| 371107817601234 \| 3711 078176 01234
-* **Visa:** 4123-5123-6123-7123 \| 4123512361237123 \| 4123 5123 6123 7123
-* **Master Card:** 5123-4123-6123-7123 \| 5123412361237123 \| 5123 4123 6123 7123
-* **Discover:** 6011-0009-9013-9424 \| 6500000000000002 \| 6011 0009 9013 9424
diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/metrics-include-and-exclude-rules.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/metrics-include-and-exclude-rules.md
deleted file mode 100644
index 8ddfd4f46f..0000000000
--- a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/metrics-include-and-exclude-rules.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-id: metrics-include-and-exclude-rules
-title: Metrics Include and Exclude Rules for OpenTelemetry (Beta)
-description: You can use metrics processing rules to specify what metrics are sent to Sumo Logic using OpenTelemetry Collector.
----
-
-
-
-
-Beta
-
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-You can use include and exclude processing rules to specify what metrics is sent to Sumo Logic using OpenTelemetry Collector. Internally these will use [filter processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/filterprocessor) to get the metrics filtered.
-
-* An exclude rule functions as a denylist filter where all data is sent except matching data to Sumo Logic.
-* An include rule functions as an allowlist filter where only matching data is sent to Sumo Logic.
-
-As a best practice, specify these rules to match the lesser volume of data.
-
-* If you want to collect the majority of data from a source template, provide exclude rules to match (filter out) the lesser volume of data
-* If you want to collect a small set of data from a source template, provide include rules to match (filter in) the lesser volume of data.
-
-## Metric filter examples
-
-For filtering metrics data in source template you can add a metrics filter to the source template. You can then provide the name of the filter followed by **Type** (filter to include or exclude) and **Filter by**.
-
-There are three ways to use metrics filter in source template:
-* Filter by metrics name
-* Filter by dimension
-* Filter by metrics name and dimension
-
-### Filter by metrics name
-
-If you need to filter by name of the metrics, then you can select this option and provide the regex which matched with the metric name.
-
-For example when collecting host metrics, if you need to collect only network metrics, then you can give `network` in the metric name.
-
-
-
-### Filter by dimension
-
-If you need to filter by dimension of the metrics, then you can select this option and provide the list of keys and values in the dimension table. Key needs to be the exact dimension name and value can be a regex which matches against the value for the key given. All of these key value pairs will have the `AND` condition between them.
-
-For example, when collecting host metrics you can filter CPU metrics data for a specific CPU (say `cpu0`), and you can mention the respective key value pair in the dimension table.
-
-
-
-### Filter by metrics name and dimension
-
-If you need to filter by metrics name and dimension, then you can select this option and provide the metric name regex and dimension key and value. Key needs to be the exact dimension name and value can be a regex which matches against the value for the key given. All inputs here (that is, metric name) and all key value pairs will have the `AND` condition between them.
-
-For example, when collecting host metrics, you can filter network metrics for a specific device and direction by giving metric name regex as `network`, and in the dimension table key value pair you can specify `device=lo` and `direction=transmit`.
-
-
-
-## Rules and Limitations
-
-* Your rule must be [RE2 compliant](https://github.com/google/re2/wiki/Syntax).
-* Exclude rules take priority over include rules. Include rules are processed first, however, if an exclude rule matches data that matched the include rule filter, the data is excluded.
-* If two or more rules are listed, the assumed Boolean operator is OR.
-* If data needs to get filtered for single dimension key which can have multiple possible values then we can use a `|` operator. For example if we need to monitor cpu metrics for only cpu0 and cpu1 then we can form the dimension value expression as `cpu0|cpu1`.
diff --git a/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md
new file mode 100644
index 0000000000..66171f4454
--- /dev/null
+++ b/docs/send-data/opentelemetry-collector/remote-management/processing-rules/overview.md
@@ -0,0 +1,37 @@
+---
+id: overview
+title: OpenTelemetry Remote Management Processing Rules
+sidebar_label: Overview
+---
+
+import useBaseUrl from '@docusaurus/useBaseUrl';
+
+Processing rules affect only the data sent to Sumo Logic; logs and metrics on your end remain intact and unchanged. Data filtered by OpenTelemetry Collector using processing rules will not count towards your daily data volume quota.
+
+## Logs collection
+
+Processing rules for logs collection support the following rule types:
+
+* [Exclude messages that match](include-and-exclude-rules.md). Remove messages that you do not want to send to Sumo Logic at all ("denylist" filter). These messages are skipped by OpenTelemetry Collector and are not uploaded to Sumo Logic.
+* [Include messages that match](include-and-exclude-rules.md). Send only the data you'd like in your Sumo Logic account (an "allowlist" filter). This type of rule can be useful, for example, if you only want to include messages coming from a firewall.
+* [Mask messages that match](mask-rules.md). Replace an expression with a mask string that you can customize. This is another way to protect data, such as passwords, that you do not normally track.
+
+## Metrics collection
+
+Processing rules for metrics collection support the following rule types:
+
+* [Exclude metrics that match](/docs/send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules). Remove metrics that you do not want to send to Sumo Logic at all ("denylist" filter).
+* [Include metrics that match](/docs/send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules). Send only selected metrics to your Sumo Logic account (an "allowlist" filter).
+
+## How do processing rules work together?
+
+You can create one or more processing rules for a source template, combining the different types of filters to generate the exact data set you want sent to Sumo Logic.
+
+When a source has multiple rules, they are processed in the following order: includes, excludes, masks.
+
+Exclude rules take priority over include rules. Include rules are processed first; however, if an exclude rule matches data that also matched an include rule filter, the data is excluded.
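+
+As a hypothetical illustration of this ordering, suppose a source template defines three rules:
+
+```
+Include messages that match:  .*ERROR.*
+Exclude messages that match:  .*healthcheck.*
+Mask messages that match:     password=\S+    (mask string: password=#####)
+```
+
+A log line containing both `ERROR` and `healthcheck` is dropped, because the exclude rule overrides the include match. An `ERROR` line without `healthcheck` is kept, and any `password=...` value in it is masked before the data is sent to Sumo Logic.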
+
+## Limitations
+
+* Regular expressions must be [RE2 compliant](https://github.com/google/re2/wiki/Syntax).
+* Processing rules are tested with a maximum of 20 rules.
diff --git a/docs/send-data/opentelemetry-collector/remote-management/remote-management-v2.md b/docs/send-data/opentelemetry-collector/remote-management/remote-management-v2.md
deleted file mode 100644
index 46972ac70b..0000000000
--- a/docs/send-data/opentelemetry-collector/remote-management/remote-management-v2.md
+++ /dev/null
@@ -1,107 +0,0 @@
----
-id: remote-management-v2
-title: OpenTelemetry Remote Management
-sidebar_label: OpenTelemetry Remote Management
----
-
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-
-
-
-
-Beta
-
-:::info
-This feature is in Beta. To participate, contact your Sumo Logic account executive.
-:::
-
-The Sumo Logic Distribution for OpenTelemetry Collector facilitates remote management of data collection configurations, enabling seamless setup from the Sumo Logic UI and deployment to one or more collectors.
-
-## Remote management features
-
-### Collector tags
-
-With OpenTelemetry (OTel) remote management, you can tag your [OpenTelemetry Collectors](/docs/send-data/opentelemetry-collector) and use those tags to categorize and group them. These tags are enriched in your data, so you can use them in your dashboards and searches as well.
-
-### Source templates
-
-:::note
-Source template feature is not available for locally managed collectors.
-:::
-
-With remote management, data configuration setup for OTel collectors is done using source templates. This feature extends our existing [Installed Collector Source](/docs/send-data/installed-collectors/sources), allowing attachment to multiple collectors.
-
-Utilize collector tags for grouping collectors, and associate source templates to these collector groups, reducing redundancy in data collection setup. This process, termed *Collector Linking*, streamlines configuration management.
-
-## Install a Collector and configure the source template
-
-### Step 1: Collector installation
-
-Follow the below steps to install the collector and add uniquely identifiable tags:
-
-1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic/). In the main Sumo Logic menu, select **Manage Data > Collection > OpenTelemetry Collection**.
[**New UI**](/docs/get-started/sumo-logic-ui/). In the Sumo Logic top menu select **Configuration**, and then under **Data Collection** select **OpenTelemetry Collection**. You can also click the **Go To...** menu at the top of the screen and select **OpenTelemetry Collection**.
-1. On the **OpenTelemetry Collection** page, click **Add Collector**.
-1. In the **Set up Collector** step, select **Linux** as the platform.
-1. Enter your **Installation Token**.
-1. Under **Tag data on Collector level**, add a new tag (for example, `application = Apache` as seen in the screenshot below to identify these collectors as having Apache running on them).
-1. For **Collector Settings**, leave them as default to configure collectors as remotely managed.
-1. Under **Generate and run the command to install the collector**, copy the command and execute it in your system terminal where the collector needs to be installed.
-1. Wait for the installation process to complete, then click **Next** to proceed.
-1. On the next screen, you will see a list of available source templates. Select the required source template and proceed with the data configuration.
-
-### Step 2: Data configuration
-
-:::info
-A new source template will always be created with the latest version of the source template.
-:::
-
-Follow the below steps to create a data collection configuration to collect the required logs and link them to all the collectors with the help of tags:
-
-1. [**Classic UI**](/docs/get-started/sumo-logic-ui-classic/). In the main Sumo Logic menu, select **Manage Data > Collection > Source Template**.
[**New UI**](/docs/get-started/sumo-logic-ui/). In the Sumo Logic top menu select **Configuration**, and then under **Data Collection** select **Source Template**.
-1. On the **Source Template** page, click **Add Source Template** and search for the required Source Template.
-1. Complete the source template form by providing the mandatory fields, then click **Next**.
-1. On the **Link Collectors** page, you will have the option to link the collectors using the **Collector Name** or by adding **Collector Tags** to find the group of collectors.
-1. Navigate to **Preview Collector(s)** to view the details about the compatibility of the collectors and list of collectors that will be linked to the newly created source template. If we have mapped the collectors using both the **Collector Name** and **Collector Tags**, you will get a separate preview sections for the collectors identified by collector name and collector tags.
- :::note
- Incompatible version conflict will be found if your collectors cannot be linked to the source template due to version incompatibility or unsupported operating system. To move to next step, make sure you update the collect version of the incompatible collector.
- :::
-1. Click **Next** to complete Source Template creation. In the background, the system will apply the configuration to all the compatible linked collectors and starts collecting the required files.
-
-## Edit a source template
-
-To edit a source template:
-
-1. In the main Sumo Logic menu, select **Manage Data > Collection > Source Template**.
-1. Select the Source Template that you need to edit, and click **Edit**. Or, click the kebab menu against the selected source template and click **Edit** from the dropdown.
-1. Change the required configuration in the source template configuration page, and click **Next**.
-1. If required, update the collectors on the **Link Collectors** page.
-1. Click **Next** to complete editing the source template.
-
-## Upgrade the source template
-
-:::note
-Source template update will not available if there are any incompatible collector version.
-:::
-
-Follow the below steps to upgrade the source template:
-
-1. In the main Sumo Logic menu, select **Manage Data > Collection > Source Template**.
-1. Select the source template that you need to upgrade, and click **Upgrade** button.
-1. To upgrade the compatible collectors:
- 1. Update the required configuration for the new source template version.
- :::info
- To know about the changes in the latest source template version, click the **Learn more** button in the warning.
- :::
- 1. Click **Next** to finish the upgrade.
-1. To upgrade the incompatible collectors. Navigate to the **Preview Collector(s)** section to view the list of collectors that are compatible and incompatible to the new version of the source template. Follow any one of the below steps:
- - Create a new source template and link the compatible collectors by collector name and collector tags.
- - Or, unlink the collectors added in the new source template to the existing source template.
-
-## Delete a source template
-
-To delete a source template:
-
-1. In the main Sumo Logic menu, select **Manage Data > Collection > Source Template**.
-1. Select the source template that you need to delete, and click the **Delete** button. Or, click the kebab menu against the selected source template and click **Delete** from the dropdown.
-1. The source template will be deleted and removed from the **Source Template** page.
diff --git a/docs/send-data/opentelemetry-collector/remote-management/source-templates/apache/index.md b/docs/send-data/opentelemetry-collector/remote-management/source-templates/apache/index.md
index 8572b39465..76ad098ba7 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/source-templates/apache/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/source-templates/apache/index.md
@@ -9,19 +9,13 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-
-
-
+})
-Beta
+The Apache source template creates an OpenTelemetry configuration that can be pushed to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and pushing the config to the appropriate OpenTelemetry agent, you can collect Apache logs and metrics to send to Sumo Logic.
- })
+## Fields created by the source template
-Apache source template creates an OpenTelemetry configuration that can be pushed to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and pushing the config to the appropriate OpenTelemetry agent you can ensure collection of logs and metrics of Apache to Sumo Logic.
-
-## Fields creation in Sumo Logic for Apache
-
-If not already present, the following [Fields](/docs/manage/fields/) are created as part of Source template creation.
+When you create a source template, the following [fields](/docs/manage/fields/) are automatically added (if they don’t already exist):
- **`sumo.datasource`**. Fixed value of **apache**.
- **`webengine.system`**. Fixed value of **apache**.
@@ -62,9 +56,9 @@ import OtelWindowsLogPrereq from '../../../../../reuse/apps/opentelemetry/log-co
-## Source template configuration
+## Configuring the Apache source template
-You can follow the below steps to set a remotely managed OpenTelemetry collector and push the source template to it.
+Follow these steps to set up and deploy the source template to a remotely managed OpenTelemetry collector.
### Step 1: Set up remotely managed OpenTelemetry collector
diff --git a/docs/send-data/opentelemetry-collector/remote-management/source-templates/docker/index.md b/docs/send-data/opentelemetry-collector/remote-management/source-templates/docker/index.md
index 97e7334019..289b99a274 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/source-templates/docker/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/source-templates/docker/index.md
@@ -9,19 +9,13 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-
-
-
+})
-Beta
+The Docker source template creates an OpenTelemetry configuration that can be pushed to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and pushing the config to the appropriate OpenTelemetry agent, you can collect Docker logs and metrics and send them to Sumo Logic.
- })
+## Fields created by the source template
-The Docker source template creates an OpenTelemetry configuration that can be pushed to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and pushing the config to the appropriate OpenTelemetry agent, you can ensure collection of Docker logs and metrics to Sumo Logic.
-
-## Fields creation in Sumo Logic for Docker
-
-If not already present, the following [Fields](/docs/manage/fields/) are created as part of Source template creation.
+When you create a source template, the following [fields](/docs/manage/fields/) are automatically added (if they don’t already exist):
- **`sumo.datasource`**. Fixed value of **docker**.
- **`deployment.environment`**. This is a user-configured field set at the time of collector installation. It identifies the environment where the docker env resides, such as `dev`, `prod`, or `qa`.
@@ -30,7 +24,7 @@ If not already present, the following [Fields](/docs/manage/fields/) are created
This section provides instructions for configuring metrics and log collection for the Sumo Logic Docker app.
-#### For metrics collection
+### For metrics collection
Metrics are collected through the [Docker Stats Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/dockerstatsreceiver/README.md) of OpenTelemetry. This requires Docker API version 1.22+ and only Linux is supported.
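Because the Docker Stats Receiver requires Docker Engine API 1.22 or later, it can be worth confirming the API version the daemon reports before linking collectors to this source template. This is an optional check, not part of the documented steps; it assumes the Docker CLI is available on the host.

```sh
# Print the API version reported by the local Docker daemon.
# The Docker Stats Receiver needs 1.22 or later; anything lower means the
# host requires a newer Docker Engine before metrics collection will work.
docker version --format '{{.Server.APIVersion}}'
```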
@@ -45,8 +39,7 @@ After installing Sumo OpenTelemetry collector to docker host machine, you need t
sudo usermod -aG docker otelcol-sumo
```
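As an optional follow-up (not part of the documented steps), you can verify that the `otelcol-sumo` user actually picked up the `docker` group, for example:

```sh
# Confirm that the otelcol-sumo service user is now a member of the docker
# group; restart the collector service afterwards so the new group
# membership takes effect.
id -nG otelcol-sumo | grep -w docker
```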
-
-#### For logs collection
+### For logs collection
To collect Docker container event logs, execute the following command on the host machine and keep it running to monitor all Docker container-related events. The command requires a JSON file path where these container events will be stored.
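The exact command is given in the template documentation and is not reproduced in this hunk. Purely as an illustration of the idea, a `docker events` pipeline that appends JSON lines to a file could look like the sketch below; the output path is a placeholder, so use the path you configure in the source template.

```sh
# Illustrative sketch only: stream Docker container events as JSON lines
# into a file for the collector's log source to read. Replace the path
# with the JSON file path configured in the source template.
docker events --format '{{json .}}' >> /var/log/docker-events.json
```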
@@ -63,9 +56,9 @@ sudo setfacl -R -m d:u:otelcol-sumo:r-x,u:otelcol-sumo:r-x,g:otelcol-sumo:r-x
-
-
-
-
Beta
-
-This feature is in Beta. To participate, contact your Sumo Logic account executive.
-
Source templates in Sumo Logic provide efficient, scalable data collection management by applying consistent setups across multiple collectors. They reduce redundancy and simplify configuration, making it easier to manage and scale your data collection efforts.
-## Benefits
-
-* **Efficiency**. Create a template once and apply it to multiple collectors.
-* **Consistency**. Ensure uniform data collection across your environment.
-* **Scalability**. Easily manage configurations for large numbers of collectors.
-
-## Common use cases
-
-Source templates are useful for managing data collection in scenarios like:
-
-* Monitoring application logs across multiple servers
-* Collecting metrics from a fleet of containers
-* Aggregating error logs from distributed services
-
-## Creating and managing Source templates
-
-To create a source template:
-
-1. **Navigate to Source Templates**.
- * [**Classic UI**](/docs/get-started/sumo-logic-ui-classic). Go to **Manage Data > Collection > Source Template**.
- * [**New UI**](/docs/get-started/sumo-logic-ui). In the Sumo Logic top menu, select **Configuration > Collection > Source Template**.
-2. **Create a new Source Template**. Click **Create Source Template** and fill in the required details, such as name and configuration settings.
-3. **Link Collectors**. Use tags or collector names to link the appropriate collectors to your source template.
-4. **Apply and manage**. Apply the source template to the linked collectors and manage or update it as needed.
-
-
-## Example Scenario: Monitoring Nginx access logs
-
-Monitoring Nginx Access Logs from a group of web servers:
-
-1. **Create Source Template**. Name it `"Nginx Access Logs"` and specify the log file path.
-2. **Link Collectors**. Tag your web servers with `"application=nginx"` and link these collectors to the source template.
-3. **Deploy**. Apply the source template to start collecting logs from all linked collectors.
-4. **Monitor**. Use our [log search](/docs/search) and [dashboard](/docs/dashboards) features to monitor and analyze the collected Nginx access logs.
-
-:::tip
-For more details on source templates, see [Installed Collector Source Documentation](/docs/send-data/installed-collectors/sources).
-:::
+In this section, we'll show you how to set up source templates for the following sources:
+
+* **Apache**. Learn how to configure our OTel Apache source template.
+* **Docker**. Learn how to configure our OTel Docker source template.
+* **Kafka**. Learn how to configure our OTel Kafka source template.
+* **Linux**. Learn how to configure our OTel Linux source template.
+* **Local File**. Learn how to configure our OTel Local File source template.
+* **Mac**. Learn how to configure our OTel Mac source template.
+* **Nginx**. Learn how to configure our OTel Nginx source template.
+* **RabbitMQ**. Learn how to configure our OTel RabbitMQ source template.
+* **Redis**. Learn how to configure our OTel Redis source template.
+* **Syslog**. Learn how to configure our OTel Syslog source template.
+* **Windows**. Learn how to configure our OTel Windows source template.
+
diff --git a/docs/send-data/opentelemetry-collector/remote-management/source-templates/kafka/index.md b/docs/send-data/opentelemetry-collector/remote-management/source-templates/kafka/index.md
index af04cac97a..866963a5b4 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/source-templates/kafka/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/source-templates/kafka/index.md
@@ -9,19 +9,13 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-
-
-
-
-Beta
-
- })
+})
The Kafka source template generates an OpenTelemetry configuration that can be sent to a remotely managed OpenTelemetry collector (otelcol). By creating this source template and pushing the configuration to the appropriate OpenTelemetry agent, you can ensure the collection of Kafka logs and metrics in Sumo Logic.
-## Fields Creation in Sumo Logic for Kafka
+## Fields created by the source template
-If not already present, the following [Fields](/docs/manage/fields/) are created as part of Source template creation.
+When you create a source template, the following [fields](/docs/manage/fields/) are automatically added (if they don’t already exist):
- **`sumo.datasource`**. Fixed value of **kafka**.
- **`messaging.system`**. Fixed value of **kafka**.
@@ -53,9 +47,9 @@ import OtelWindowsLogPrereq from '../../../../../reuse/apps/opentelemetry/log-co
-## Source template configuration
+## Configuring the Kafka source template
-You can follow the below steps to set a remotely managed OpenTelemetry collector and push the source template to it.
+Follow these steps to set up and deploy the source template to a remotely managed OpenTelemetry collector.
### Step 1: Set up remotely managed OpenTelemetry collector
diff --git a/docs/send-data/opentelemetry-collector/remote-management/source-templates/linux/index.md b/docs/send-data/opentelemetry-collector/remote-management/source-templates/linux/index.md
index 797861d2f7..e0a9aa549f 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/source-templates/linux/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/source-templates/linux/index.md
@@ -9,19 +9,13 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-
-
-
-
-Beta
-
- })
+})
The Linux source template generates an OpenTelemetry configuration that can be pushed to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and pushing the configuration to the appropriate OpenTelemetry agent, you can ensure the collection of Linux logs and host metrics for Sumo Logic.
-## Fields creation in Sumo Logic for Linux
+## Fields created by the source template
-If not already present, the following [Fields](/docs/manage/fields/) are created as part of source template creation.
+When you create a source template, the following [fields](/docs/manage/fields/) are automatically added (if they don’t already exist):
- **`sumo.datasource`**. Fixed value of **linux**.
- **`deployment.environment`**. This is a user-configured field set at the time of collector installation. It identifies the environment where the Linux system resides, such as `dev`, `prod`, or `qa`.
@@ -48,7 +42,7 @@ import LogsCollectionPrereqisites from '../../../../../reuse/apps/logs-collectio
-## Source template configuration
+## Configuring the Linux source template
Follow the below steps to set a remotely managed OpenTelemetry collector and push the source template to it.
@@ -66,14 +60,14 @@ In this step, you will configure the yaml required for Linux Collection. Below a
- **Name**. Name of the source template.
- **Description**. Description for the source template.
-#### Logs Collection
+#### Logs collection
- **Fields/Metadata**. You can provide any customer fields to be tagged with the data collected. By default, sumo tags `_sourceCategory` with the value otel/linux.
- **Logs**. The following fields are pre-populated with default paths, for common log files that are used in different Linux distributions. Not all paths might be relevant for your operating system. Modify the list of files as required or leave the default values.
-#### Metrics Collection
+#### Metrics collection
- **Metrics**. Select the metric scrappers you want to enable. By default, metric collection for CPU, memory, disk, load, file system, network, and paging are enabled and process metric collection is disabled.
-##### Enable process metric collection (Optional)
+##### Enable process metric collection (optional)
import ProcMetrics from '../../../../../reuse/apps/opentelemetry/process-metric-collection.md';
diff --git a/docs/send-data/opentelemetry-collector/remote-management/source-templates/localfile/index.md b/docs/send-data/opentelemetry-collector/remote-management/source-templates/localfile/index.md
index b72851b2d7..e5eeff21dc 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/source-templates/localfile/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/source-templates/localfile/index.md
@@ -5,12 +5,6 @@ sidebar_label: Local File
description: Learn about the Sumo Logic Local File source template for OpenTelemetry.
---
-
-
-
-
-Beta
-
import useBaseUrl from '@docusaurus/useBaseUrl';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
@@ -19,9 +13,9 @@ import TabItem from '@theme/TabItem';
The Local File source template generates an OpenTelemetry configuration that can be sent to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and deploying the configuration to the appropriate OpenTelemetry agent, you can ensure your logs are collected and sent to Sumo Logic.
-## Fields creation in Sumo Logic for Local File
+## Fields created by the source template
-If not already present, the following [Fields](/docs/manage/fields/) are created as part of source template creation.
+When you create a source template, the following [fields](/docs/manage/fields/) are automatically added (if they don’t already exist):
- **`sumo.datasource`**. Fixed value of **localfile**.
- **`deployment.environment`**. User configured field at the time of collector installation. This identifies the environment where the host resides. For example: `dev`, `prod`, or `qa`.
@@ -39,9 +33,9 @@ import OtelWindowsLogPrereq from '../../../../../reuse/apps/opentelemetry/log-co
-## Source template configuration
+## Configuring the Local File source template
-You can follow the below steps to set a remotely managed OpenTelemetry collector and push the source template to it.
+Follow these steps to set up and deploy the source template to a remotely managed OpenTelemetry collector.
### Step 1: Set up remotely managed OpenTelemetry collector
@@ -51,7 +45,7 @@ import CollectorInstallation from '../../../../../reuse/apps/opentelemetry/colle
### Step 2: Configure the source template
-In this step, you will configure the yaml required for Local File Collection. Below are the inputs required for configuration:
+In this step, you will configure the YAML required for Local File collection. Below are the inputs required for configuration:
- **Name**. Name of the source template.
- **Description**. Description for the source template.
diff --git a/docs/send-data/opentelemetry-collector/remote-management/source-templates/mac/index.md b/docs/send-data/opentelemetry-collector/remote-management/source-templates/mac/index.md
index b9ea84ede9..7e2a2a9bdb 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/source-templates/mac/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/source-templates/mac/index.md
@@ -9,19 +9,13 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-
-
-
-
-Beta
-
- })
+})
The Mac source template generates an OpenTelemetry configuration that can be sent to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and deploying the configuration to the appropriate OpenTelemetry agent, you can ensure the collection of Mac logs and host metrics to Sumo Logic.
-## Fields creation in Sumo Logic for Mac
+## Fields created by the source template
-If not already present, the following [Fields](/docs/manage/fields/) are created as part of source template creation.
+When you create a source template, the following [fields](/docs/manage/fields/) are automatically added (if they don’t already exist):
- **`sumo.datasource`**. Fixed value of **mac**.
- **`deployment.environment`**. This is a user-configured field set at the time of collector installation. It identifies the environment where the Mac system resides, such as `dev`, `prod`, or `qa`.
@@ -36,7 +30,7 @@ Log collection is pre-populated with default paths for common mac system log fil
### For metrics collection
Host metrics for CPU and disk are not supported by otel as of now.
-## Source template configuration
+## Configuring the Mac source template
Follow the below steps to set a remotely managed OpenTelemetry collector and push the source template to it.
@@ -53,11 +47,11 @@ In this step, you will configure the yaml required for Mac Collection. Below are
- **Name**. Name of the source template.
- **Description**. Description for the source template.
-#### Logs Collection
+#### Logs collection
- **Fields/Metadata**. You can provide any customer fields to be tagged with the data collected. By default, sumo tags `_sourceCategory` with the value otel/mac.
- **Logs**. The following fields are pre-populated with default paths for common log files that are used in different Mac distributions. Not all paths might be relevant for your operating system. Modify the list of files as required or leave the default values.
-#### Metrics Collection
+#### Metrics collection
- **Metrics**. Select the metric scrappers you want to enable. By default, metric collection for memory, load, file system, network and paging are enabled and process metric collection is disabled.
##### Enable process metric collection (Optional)
diff --git a/docs/send-data/opentelemetry-collector/remote-management/source-templates/manage-source-templates.md b/docs/send-data/opentelemetry-collector/remote-management/source-templates/manage-source-templates.md
new file mode 100644
index 0000000000..a35d46c4a7
--- /dev/null
+++ b/docs/send-data/opentelemetry-collector/remote-management/source-templates/manage-source-templates.md
@@ -0,0 +1,98 @@
+---
+id: manage-source-templates
+title: Managing OpenTelemetry Remote Management Source Templates
+sidebar_label: Managing Source Templates
+description: Learn how to create, edit, and delete OpenTelemetry remote management source templates.
+---
+
+import useBaseUrl from '@docusaurus/useBaseUrl';
+
+Source templates provide a powerful mechanism to simplify and standardize data collection configurations across multiple collectors.
+
+:::note
+Source templates are not available for locally managed collectors.
+:::
+
+## Benefits of source templates
+
+* **Efficiency**. Create a template once and apply it to multiple collectors.
+* **Consistency**. Ensure uniform data collection across your environment.
+* **Scalability**. Easily manage configurations for large numbers of collectors.
+
+## Common use cases
+
+Source templates are useful for managing data collection in scenarios like:
+
+* Monitoring application logs across multiple servers
+* Collecting metrics from a fleet of containers
+* Aggregating error logs from distributed services
+
+## Create a new source template
+
+1. From the [**Classic UI**](/docs/get-started/sumo-logic-ui-classic), go to **Manage Data > Collection > Source Template**. Or, from the [**New UI**](/docs/get-started/sumo-logic-ui), go to the Sumo Logic top menu and select **Configuration > Collection > Source Template**.
+1. Click **Create Source Template** > **Add Source Template** and fill in the required details, such as name and configuration settings. When you're done, click **Next**.
+1. On the **Link Collectors** page, link collectors either by **Collector Name** or by adding **Collector Tags** to select a group of collectors.
+1. Navigate to **Preview Collector(s)** to view collector compatibility details and the list of collectors that will be linked to the newly created source template. If you mapped collectors using both **Collector Name** and **Collector Tags**, you will see separate preview sections for the collectors identified by each method.


+ :::note
+ A version conflict is reported if your collectors cannot be linked to the source template because of an incompatible collector version or an unsupported operating system. To move to the next step, make sure you update the collector version of the incompatible collectors.
+ :::
+1. Click **Next** to complete source template creation. In the background, the system applies the configuration to all compatible linked collectors and starts collecting the required data.
+1. After the configuration is applied to the linked collectors, manage or update the source template as needed.
+
+Use our [Log Search](/docs/search) and [dashboards](/docs/dashboards) to monitor and analyze your collected logs.
+
+### Example: Apache error logs
+
+To illustrate the setup and configuration process, we'll use an example scenario where you'd need to monitor Apache error logs from 50 Linux servers.
+
+First, you'll need to install the OpenTelemetry collectors on each of the 50 Linux servers and add a uniquely identifiable tag to indicate that they are running Apache.
+
+1. **Create source template**. Name it `Apache Error Logs` and specify the log file path.
+2. **Link collectors**. Under **Collector Tags**, tag your web servers with `application = Apache` and link these collectors to the source template.
+3. **Deploy**. Apply the source template to start collecting logs from all linked collectors.
+4. **Monitor**. Use Log Search, Metrics, and Dashboards to look at your collected error logs and gain insights from your Apache servers.
+
+### Example: Nginx access logs
+
+To monitor Nginx access logs from a group of web servers:
+
+1. **Create source template**. Name it `Nginx Access Logs` and specify the log file path.
+2. **Link collectors**. Under **Collector Tags**, tag your web servers with `application=nginx` and link these collectors to the source template.
+3. **Deploy**. Apply the source template to start collecting logs from all linked collectors.
+4. **Monitor**. Use Log Search, Metrics, and Dashboards to monitor and analyze the collected Nginx access logs.
+
+:::tip
+For more details on source templates, see [Installed Collector Sources](/docs/send-data/installed-collectors/sources).
+:::
+
+
+## Edit a source template
+
+To edit a source template:
+
+1. In the main Sumo Logic menu, select **Manage Data > Collection > Source Template**.
+1. Select the source template that you need to edit, and click **Edit**. Or, click the kebab menu against the selected source template and click **Edit** from the dropdown.
+1. Change the required configuration in the source template configuration page, and click **Next**.
+1. If required, update the collectors on the **Link Collectors** page.
+1. Click **Next** to complete editing the source template.
+
+
+## Upgrade a source template
+
+You cannot upgrade a source template if any linked collectors have an incompatible version. Make sure you update those collectors first.
+
+1. From the **Source Template** page, select the source template you need to upgrade and click **Upgrade**.
+1. Update the configuration for the new source template version.
+ :::info
+ To see what changes are included in the latest source template version, click **Learn more** in the warning.
+ :::
+1. Click **Next** to finish the upgrade.
+1. Navigate to the **Preview Collector(s)** section to check which collectors are compatible or incompatible with the new version of the source template, then do one of the following:
+ - Create a new source template and link the compatible collectors by collector name and collector tags.
+ - Unlink the collectors added to the new source template from the existing source template.
+
+## Delete a source template
+
+1. From the **Source Template** page, select the source template that you need to delete.
+1. Click the **Delete** button, or click the kebab menu against the selected source template and select **Delete** from the dropdown.
+1. Confirm the deletion. The source template will be removed from the **Source Template** page and unlinked from all collectors.
diff --git a/docs/send-data/opentelemetry-collector/remote-management/source-templates/nginx/index.md b/docs/send-data/opentelemetry-collector/remote-management/source-templates/nginx/index.md
index cd2b5c4a5d..06fb335cff 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/source-templates/nginx/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/source-templates/nginx/index.md
@@ -9,19 +9,13 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-
-
-
-
-Beta
-
-})
+})
The Nginx source template generates an OpenTelemetry configuration that can be sent to a remotely managed OpenTelemetry collector (otelcol). By creating this source template and pushing the configuration to the appropriate OpenTelemetry agent, you can ensure the collection of Nginx logs and metrics in Sumo Logic.
-## Fields creation in Sumo Logic for Nginx
+## Fields created by the source template
-If not already present, the following [Fields](/docs/manage/fields/) are created as part of Source template creation.
+When you create a source template, the following [fields](/docs/manage/fields/) are automatically added (if they don’t already exist):
- **`sumo.datasource`**. Fixed value of **nginx**.
- **`webengine.system`**. Fixed value of **nginx**.
@@ -61,9 +55,9 @@ import OtelWindowsLogPrereq from '../../../../../reuse/apps/opentelemetry/log-co
-## Source template configuration
+## Configuring the Nginx source template
-You can follow the below steps to set a remotely managed OpenTelemetry collector and push the source template to it.
+Follow these steps to set up and deploy the source template to a remotely managed OpenTelemetry collector.
### Step 1: Set up remotely managed OpenTelemetry collector
diff --git a/docs/send-data/opentelemetry-collector/remote-management/source-templates/rabbitmq/index.md b/docs/send-data/opentelemetry-collector/remote-management/source-templates/rabbitmq/index.md
index 412a5ac322..cd9b44e3e3 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/source-templates/rabbitmq/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/source-templates/rabbitmq/index.md
@@ -5,25 +5,19 @@ sidebar_label: RabbitMQ
description: Learn about the Sumo Logic RabbitMQ source template for OpenTelemetry.
---
-
-
-
-
-Beta
-
import useBaseUrl from '@docusaurus/useBaseUrl';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
- })
+})
-The RabbitMQ source template creates an OpenTelemetry configuration that can be pushed to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and pushing the config to the appropriate OpenTelemetry agent, you can ensure collection of your RabbitMQ logs to Sumo Logic.
+The RabbitMQ source template creates an OpenTelemetry configuration that can be pushed to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and pushing the config to the appropriate OpenTelemetry agent, you can collect your RabbitMQ logs and send them to Sumo Logic.
-## Fields creation in Sumo Logic for Local File
+## Fields created by the source template
-If not already present, the following [Fields](/docs/manage/fields/) are created as part of source template creation.
+When you create a source template, the following [fields](/docs/manage/fields/) are automatically added (if they don’t already exist):
-- **`sumo.datasource`**. Fixed value of **localfile**.
+- **`sumo.datasource`**. Fixed value of **rabbitmq**.
- **`deployment.environment`**. This is a user-configured field set at the time of collector installation. It identifies the environment where the host resides, such as `dev`, `prod`, or `qa`.
- **`messaging.cluster.name`**. User configured. Enter a uniquely identifiable name for your RabbitMQ server cluster to show in the Sumo Logic dashboards.
- **`messaging.node.name`**. Includes the value of the hostname of the machine which is being monitored.
@@ -39,7 +33,7 @@ import OtelWindowsLogPrereq from '../../../../../reuse/apps/opentelemetry/log-co
-## Source template configuration
+## Configuring the RabbitMQ source template
Follow the below steps to set a remotely managed OpenTelemetry collector and push the source template to it.
diff --git a/docs/send-data/opentelemetry-collector/remote-management/source-templates/redis/index.md b/docs/send-data/opentelemetry-collector/remote-management/source-templates/redis/index.md
index fd71c23c1a..186d146add 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/source-templates/redis/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/source-templates/redis/index.md
@@ -5,25 +5,19 @@ sidebar_label: Redis
description: Learn about the Sumo Logic Redis source template for OpenTelemetry.
---
-
-
-
-
-Beta
-
import useBaseUrl from '@docusaurus/useBaseUrl';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
})
-The Redis source template creates an OpenTelemetry configuration that can be pushed to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and pushing the config to the appropriate OpenTelemetry agent, you can ensure collection of your redis logs to Sumo Logic.
+The Redis source template creates an OpenTelemetry configuration that can be pushed to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and pushing the config to the appropriate OpenTelemetry agent, you can collect your Redis logs and send them to Sumo Logic.
-## Fields creation in Sumo Logic for Local File
+## Fields created by the source template
-If not already present, the following [Fields](/docs/manage/fields/) are created as part of source template creation.
+When you create a source template, the following [fields](/docs/manage/fields/) are automatically added (if they don’t already exist):
-- **`sumo.datasource`**. Fixed value of **localfile**.
+- **`sumo.datasource`**. Fixed value of **redis**.
- **`deployment.environment`**. This is a user-configured field set at the time of collector installation. It identifies the environment where the host resides, such as `dev`, `prod`, or `qa`.
- **`db.cluster.name`**. User configured. Enter a uniquely identifiable name for your redis server cluster to show in the Sumo Logic dashboards.
- **`db.node.name`**. Includes the value of the hostname of the machine which is being monitored.
@@ -35,7 +29,7 @@ import LogsCollectionPrereqisites from '../../../../../reuse/apps/logs-collectio
-## Source template configuration
+## Configuring the Redis source template
Follow the below steps to set a remotely managed OpenTelemetry collector and push the source template to it.
diff --git a/docs/send-data/opentelemetry-collector/remote-management/source-templates/syslog/index.md b/docs/send-data/opentelemetry-collector/remote-management/source-templates/syslog/index.md
index b50ac44d32..25c87d35d8 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/source-templates/syslog/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/source-templates/syslog/index.md
@@ -5,25 +5,20 @@ sidebar_label: Syslog
description: Learn about the Sumo Logic Syslog source template for OpenTelemetry.
---
-
-
-
-
-Beta
-
import useBaseUrl from '@docusaurus/useBaseUrl';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-
+})
The Syslog source template creates an OpenTelemetry configuration that can be pushed to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and pushing the config to the appropriate OpenTelemetry agent, the agent will start listening on the configured port for syslogs and send them to Sumo Logic.
-## Fields creation in Sumo Logic for Syslog
+## Fields created by the source template
-If not already present, the following [Fields](/docs/manage/fields/) are created as part of source template creation.
+When you create a source template, the following [fields](/docs/manage/fields/) are automatically added (if they don’t already exist):
-- **`sumo.datasource`**. Fixed value of **localfile**.
+
+- **`sumo.datasource`**. Fixed value of **syslog**.
- **`deployment.environment`**. This is a user-configured field set at the time of collector installation. It identifies the environment where the host resides, such as `dev`, `prod`, or `qa`.
- **`host.group`**. This is a collector level field and is user configured (at the time of collector installation). This identifies the group of hosts.
- **`host.name`**. This is tagged through the resourcedetection processor. It holds the value of the host name where the OTel collector is installed.
@@ -31,9 +26,9 @@ If not already present, the following [Fields](/docs/manage/fields/) are created
## Prerequisite
Ensure that the syslogs conform to the [RFC 5424](https://datatracker.ietf.org/doc/html/rfc5424) protocol. Since we use the OpenTelemetry [syslog receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/syslogreceiver) with this protocol, this will ensure proper parsing of the syslog metadata when ingested into Sumo Logic.
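To sanity-check that messages reach the listener in RFC 5424 format, you can send a test message with util-linux `logger`. This is an optional, illustrative step; the address and port below are placeholders for whatever endpoint you configure in the source template.

```sh
# Send one RFC 5424-formatted test message over TCP to the (placeholder)
# host and port where the OpenTelemetry syslog receiver is listening.
logger --rfc5424 --tcp --server 127.0.0.1 --port 2514 -t test-app "RFC 5424 test message"
```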
-## Source template configuration
+## Configuring the Syslog source template
-You can follow the below steps to set a remotely managed OpenTelemetry collector and push the source template to it.
+Follow these steps to set up and deploy the source template to a remotely managed OpenTelemetry collector.
### Step 1: Set up remotely managed OpenTelemetry collector
diff --git a/docs/send-data/opentelemetry-collector/remote-management/source-templates/windows/index.md b/docs/send-data/opentelemetry-collector/remote-management/source-templates/windows/index.md
index 1edc9d5a66..611e6b300f 100644
--- a/docs/send-data/opentelemetry-collector/remote-management/source-templates/windows/index.md
+++ b/docs/send-data/opentelemetry-collector/remote-management/source-templates/windows/index.md
@@ -9,19 +9,13 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-
-
-
-
-Beta
-
})
-The Windows source template creates an OpenTelemetry configuration that can be pushed to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and pushing the config to the appropriate OpenTelemetry agent, you can ensure collection of Windows event log and metrics of Windows to Sumo Logic.
+The Windows source template creates an OpenTelemetry configuration that can be pushed to a remotely managed OpenTelemetry collector (abbreviated as otelcol). By creating this source template and pushing the config to the appropriate OpenTelemetry agent, you can collect Windows event logs and metrics from Windows systems and send them to Sumo Logic.
-## Fields creation in Sumo Logic for Windows
+## Fields created by the source template
-If not already present, the following [fields](/docs/manage/fields/) are created as part of source template creation.
+When you create a source template, the following [fields](/docs/manage/fields/) are automatically added (if they don’t already exist):
- **`sumo.datasource`**. Fixed value of **windows**.
- **`deployment.environment`**. User configured field at the time of collector installation. This identifies the environment where the Windows system resides. For example: `dev`, `prod`, or `qa`.
@@ -33,7 +27,7 @@ If not already present, the following [fields](/docs/manage/fields/) are created
### For logs collection
Ensure that the channel for collecting Windows event logs is installed and enabled on the monitored Windows machine.
-## Source template configuration
+## Configuring the Windows source template
Follow the below steps to set a remotely managed OpenTelemetry collector and push the source template to it.
@@ -50,12 +44,12 @@ In this step, you will configure the YAML required for Windows collection. Below
- **Name**. Name of the source template.
- **Description**. Description for the source template.
-#### Logs Collection
+#### Logs collection
- **Fields/Metadata**. You can provide any customer fields to be tagged with the data collected. By default, Sumo Logic tags `_sourceCategory` with the value `otel/windows`.
- **Windows Event**. In this section you can select choose among the most widely used Windows event channel for which Windows event log collection will be enabled. You can also provide **Custom Event Channels** providing any customer event channel for which event logs are to be collected.
- **Forward to SIEM**. Check the checkbox to forward your data to [Cloud SIEM](/docs/cse).
-#### Metrics Collection
+#### Metrics collection
- **Metrics**. Select the metric scrappers you want to enable. By default, metric collection for CPU, memory, disk, load, file system, network and paging are enabled, and process metric collection is disabled.
##### Enable process metric collection (optional)
@@ -65,7 +59,7 @@ import ProcMetrics from '../../../../../reuse/apps/opentelemetry/process-metric-
- **Scan Interval**. The frequency at which the source is scanned.
-- **Processing Rules**. You can add processing rules for logs/metrics collected. To learn more, refer to [Processing Rules](/docs/send-data/opentelemetry-collector/remote-management/processing-rules/). For masking windows event logs, refer to [Mask Rules for Windows Source Template](/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules-windows).
+- **Processing Rules**. You can add processing rules for the logs and metrics collected. To learn more, refer to [Processing Rules](/docs/send-data/opentelemetry-collector/remote-management/processing-rules/). For masking Windows event logs, refer to [Mask Rules for Windows Source Template](/docs/send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules-windows).
### Step 3: Push the source template to the desired remotely managed collectors
diff --git a/sidebars.ts b/sidebars.ts
index 7b94540be1..0bd9418bff 100644
--- a/sidebars.ts
+++ b/sidebars.ts
@@ -118,146 +118,147 @@ module.exports = {
'send-data/opentelemetry-collector/data-source-configurations/additional-configurations-reference',
]
},
-// {
-// type: 'category',
-// label: 'Remote Management',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/index'},
-// items:[
-// {
-// type: 'category',
-// label: 'Processing Rules',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/processing-rules/index'},
-// items:[
-// 'send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules',
-// 'send-data/opentelemetry-collector/remote-management/processing-rules/metrics-include-and-exclude-rules',
-// 'send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules',
-// ],
-// },
-// {
-// type: 'category',
-// label: 'Source Templates',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/index'},
-// items:[
-// {
-// type: 'category',
-// label: 'Apache',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/apache/index'},
-// items:[
-// 'send-data/opentelemetry-collector/remote-management/source-templates/apache/changelog',
-// ]
-// },
-// {
-// type: 'category',
-// label: 'Docker',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/docker/index'},
-// items:[
-// 'send-data/opentelemetry-collector/remote-management/source-templates/docker/changelog',
-// ]
-// },
-// {
-// type: 'category',
-// label: 'Kafka',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/kafka/index'},
-// items:[
-// 'send-data/opentelemetry-collector/remote-management/source-templates/kafka/changelog',
-// ]
-// },
-// {
-// type: 'category',
-// label: 'Linux',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/linux/index'},
-// items:[
-// 'send-data/opentelemetry-collector/remote-management/source-templates/linux/changelog',
-// ]
-// },
-// {
-// type: 'category',
-// label: 'Localfile',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/localfile/index'},
-// items:[
-// 'send-data/opentelemetry-collector/remote-management/source-templates/localfile/changelog',
-// ]
-// },
-// {
-// type: 'category',
-// label: 'Mac',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/mac/index'},
-// items:[
-// 'send-data/opentelemetry-collector/remote-management/source-templates/mac/changelog',
-// ]
-// },
-// {
-// type: 'category',
-// label: 'Nginx',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/nginx/index'},
-// items:[
-// 'send-data/opentelemetry-collector/remote-management/source-templates/nginx/changelog',
-// ]
-// },
-// {
-// type: 'category',
-// label: 'RabbitMQ',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/rabbitmq/index'},
-// items:[
-// 'send-data/opentelemetry-collector/remote-management/source-templates/rabbitmq/changelog',
-// ]
-// },
-// {
-// type: 'category',
-// label: 'Redis',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/redis/index'},
-// items:[
-// 'send-data/opentelemetry-collector/remote-management/source-templates/redis/changelog',
-// ]
-// },
-// {
-// type: 'category',
-// label: 'Syslog',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/syslog/index'},
-// items:[
-// 'send-data/opentelemetry-collector/remote-management/source-templates/syslog/changelog',
-// ]
-// },
-// {
-// type: 'category',
-// label: 'Windows',
-// collapsible: true,
-// collapsed: true,
-// link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/windows/index'},
-// items:[
-// 'send-data/opentelemetry-collector/remote-management/source-templates/windows/changelog',
-// ]
-// },
-// ],
-// },
-// ],
-// },
+ {
+ type: 'category',
+ label: 'Remote Management',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/index'},
+ items:[
+ {
+ type: 'category',
+ label: 'Source Templates',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/source-templates/manage-source-templates',
+ {
+ type: 'category',
+ label: 'Apache',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/apache/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/source-templates/apache/changelog',
+ ]
+ },
+ {
+ type: 'category',
+ label: 'Docker',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/docker/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/source-templates/docker/changelog',
+ ]
+ },
+ {
+ type: 'category',
+ label: 'Kafka',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/kafka/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/source-templates/kafka/changelog',
+ ]
+ },
+ {
+ type: 'category',
+ label: 'Linux',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/linux/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/source-templates/linux/changelog',
+ ]
+ },
+ {
+ type: 'category',
+ label: 'Localfile',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/localfile/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/source-templates/localfile/changelog',
+ ]
+ },
+ {
+ type: 'category',
+ label: 'Mac',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/mac/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/source-templates/mac/changelog',
+ ]
+ },
+ {
+ type: 'category',
+ label: 'Nginx',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/nginx/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/source-templates/nginx/changelog',
+ ]
+ },
+ {
+ type: 'category',
+ label: 'RabbitMQ',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/rabbitmq/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/source-templates/rabbitmq/changelog',
+ ]
+ },
+ {
+ type: 'category',
+ label: 'Redis',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/redis/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/source-templates/redis/changelog',
+ ]
+ },
+ {
+ type: 'category',
+ label: 'Syslog',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/syslog/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/source-templates/syslog/changelog',
+ ]
+ },
+ {
+ type: 'category',
+ label: 'Windows',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/source-templates/windows/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/source-templates/windows/changelog',
+ ]
+ },
+ ],
+ },
+ {
+ type: 'category',
+ label: 'Processing Rules',
+ collapsible: true,
+ collapsed: true,
+ link: {type: 'doc', id: 'send-data/opentelemetry-collector/remote-management/processing-rules/index'},
+ items:[
+ 'send-data/opentelemetry-collector/remote-management/processing-rules/include-and-exclude-rules',
+ 'send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules',
+ 'send-data/opentelemetry-collector/remote-management/processing-rules/mask-rules-windows',
+ ],
+ },
+ ],
+ },
'send-data/opentelemetry-collector/auto-discovery',
'send-data/opentelemetry-collector/performance-benchmarks',
'send-data/opentelemetry-collector/data-transformations',
diff --git a/static/img/send-data/link-collectors.png b/static/img/send-data/link-collectors.png
index 11cfcb0e8d..3ae849c6a5 100644
Binary files a/static/img/send-data/link-collectors.png and b/static/img/send-data/link-collectors.png differ
diff --git a/static/img/send-data/linux-install.png b/static/img/send-data/linux-install.png
deleted file mode 100644
index 180dcdceb0..0000000000
Binary files a/static/img/send-data/linux-install.png and /dev/null differ
diff --git a/static/img/send-data/linux-terminal-installation.png b/static/img/send-data/linux-terminal-installation.png
deleted file mode 100644
index 0efce772ae..0000000000
Binary files a/static/img/send-data/linux-terminal-installation.png and /dev/null differ
diff --git a/static/img/send-data/preview-collectors1.png b/static/img/send-data/preview-collectors1.png
new file mode 100644
index 0000000000..5d5dae25c5
Binary files /dev/null and b/static/img/send-data/preview-collectors1.png differ
diff --git a/static/img/send-data/preview-collectors2.png b/static/img/send-data/preview-collectors2.png
new file mode 100644
index 0000000000..37c277f49e
Binary files /dev/null and b/static/img/send-data/preview-collectors2.png differ
diff --git a/static/img/send-data/source-template.png b/static/img/send-data/source-template.png
new file mode 100644
index 0000000000..ca90ce00b2
Binary files /dev/null and b/static/img/send-data/source-template.png differ