Commit 1efcccf (parent: ba05cab)

gwynnemonahan authored and openshift-cherrypick-robot committed

OSDOCS-15280 [NETOBSERV] Line break for configuring-operator.adoc assembly and its includes

5 files changed: +8 −5 lines

modules/network-observability-configuring-FLP-sampling.adoc

Lines changed: 1 addition & 0 deletions

@@ -6,6 +6,7 @@
 [id="network-observability-config-FLP-sampling_{context}"]

 = Updating the Flow Collector resource
+
 As an alternative to editing YAML in the {product-title} web console, you can configure specifications, such as eBPF sampling, by patching the `flowcollector` custom resource (CR):

 .Procedure
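As a sketch of what the patch in this module targets, the relevant portion of the `flowcollector` CR is the eBPF sampling field. The `apiVersion`, resource name `cluster`, and sampling value below are illustrative assumptions, not taken from this commit:

```yaml
# Illustrative flowcollector fragment; apiVersion, resource name, and the
# sampling value are assumptions, not from this commit.
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    ebpf:
      sampling: 50  # hypothetical: sample roughly 1 of every 50 packets
```

A JSON patch against the path `/spec/agent/ebpf/sampling` updates this one value in place, without editing the full YAML in the web console.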

modules/network-observability-enriched-flows.adoc

Lines changed: 3 additions & 4 deletions

@@ -6,14 +6,13 @@
 [id="network-observability-enriched-flows_{context}"]
 = Export enriched network flow data

-You can send network flows to Kafka, IPFIX, the Red{nbsp}Hat build of OpenTelemetry, or all three at the same time. For Kafka or IPFIX, any processor or storage that supports those inputs, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data. For OpenTelemetry, network flow data and metrics can be exported to a compatible OpenTelemetry endpoint, such as Red{nbsp}Hat build of OpenTelemetry, Jaeger, or Prometheus.
+You can send network flows to Kafka, IPFIX, the Red{nbsp}Hat build of OpenTelemetry, or all three at the same time. For Kafka or IPFIX, any processor or storage that supports those inputs, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data. For OpenTelemetry, network flow data and metrics can be exported to a compatible OpenTelemetry endpoint, such as Red{nbsp}Hat build of OpenTelemetry, Jaeger, or Prometheus.

 .Prerequisites
 * Your Kafka, IPFIX, or OpenTelemetry collector endpoints are available from Network Observability `flowlogs-pipeline` pods.


 .Procedure
-
 . In the web console, navigate to *Operators* -> *Installed Operators*.
 . Under the *Provided APIs* heading for the *NetObserv Operator*, select *Flow Collector*.
 . Select *cluster* and then select the *YAML* tab.

@@ -56,10 +55,10 @@ spec:
 ----
 <1> You can export flows to IPFIX, OpenTelemetry, and Kafka individually or concurrently.
 <2> The Network Observability Operator exports all flows to the configured Kafka topic.
-<3> You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the `flowlogs-pipeline` processor component is deployed (default: netobserv). It must be referenced with `spec.exporters.tls.caCert`. When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with `spec.exporters.tls.userCert`.
+<3> You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the `flowlogs-pipeline` processor component is deployed (default: netobserv). It must be referenced with `spec.exporters.tls.caCert`. When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with `spec.exporters.tls.userCert`.
 <4> You have the option to specify transport. The default value is `tcp` but you can also specify `udp`.
 <5> The protocol of OpenTelemetry connection. The available options are `http` and `grpc`.
-<6> OpenTelemetry configuration for exporting logs, which are the same as the logs created for Loki.
+<6> OpenTelemetry configuration for exporting logs, which are the same as the logs created for Loki.
 <7> OpenTelemetry configuration for exporting metrics, which are the same as the metrics created for Prometheus. These configurations are specified in the `spec.processor.metrics.includeList` parameter of the `FlowCollector` custom resource, along with any custom metrics you defined using the `FlowMetrics` custom resource.
 <8> The time interval that metrics are sent to the OpenTelemetry collector.
 <9> *Optional*: Network Observability network flows formats get automatically renamed to an OpenTelemetry compliant format. The `fieldsMapping` specification gives you the ability to customize the OpenTelemetry format output. For example in the YAML sample, `SrcAddr` is the Network Observability input field, and it is being renamed `source.address` in OpenTelemetry output. You can see both Network Observability and OpenTelemetry formats in the "Network flows format reference".
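To make the callouts above concrete, a minimal `spec.exporters` sketch might look as follows. The addresses, topic, ConfigMap name, and endpoint are hypothetical; only the fields named in the callouts (`caCert`, `userCert` namespace rules, `protocol`, `fieldsMapping`, and the `SrcAddr` to `source.address` rename) are grounded in the text above:

```yaml
# Sketch of an exporters section; hostnames, ports, and names are hypothetical.
spec:
  exporters:
    - type: Kafka
      kafka:
        address: kafka-bootstrap.netobserv:9093   # hypothetical broker address
        topic: network-flows                      # hypothetical dedicated topic
        tls:
          enable: true
          caCert:                                 # CA cert as ConfigMap or Secret (callout 3)
            type: configmap
            name: kafka-ca
            certFile: ca.crt
    - type: OpenTelemetry
      openTelemetry:
        targetHost: otel-collector.netobserv.svc  # hypothetical OTel endpoint
        targetPort: 4317
        protocol: grpc                            # `http` or `grpc` (callout 5)
        fieldsMapping:                            # rename fields for OTel output (callout 9)
          - input: SrcAddr
            output: source.address
```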

modules/network-observability-flowcollector-kafka-config.adoc

Lines changed: 1 addition & 0 deletions

@@ -5,6 +5,7 @@
 :_mod-docs-content-type: PROCEDURE
 [id="network-observability-flowcollector-kafka-config_{context}"]
 = Configuring the Flow Collector resource with Kafka
+
 You can configure the `FlowCollector` resource to use Kafka for high-throughput and low-latency data feeds. A Kafka instance needs to be running, and a Kafka topic dedicated to {product-title} Network Observability must be created in that instance. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.7/html/using_amq_streams_on_openshift/using-the-topic-operator-str[Kafka documentation with AMQ Streams].

 .Prerequisites
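As a hedged sketch of the Kafka deployment model this module configures (the field names and values below are assumptions, not shown in this commit):

```yaml
# Illustrative fragment: route flows through Kafka instead of sending them
# directly to the processor. Address and topic are hypothetical.
spec:
  deploymentModel: Kafka
  kafka:
    address: kafka-bootstrap.netobserv:9092   # hypothetical bootstrap address
    topic: network-flows                      # hypothetical dedicated topic
```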

modules/network-observability-flowcollector-view.adoc

Lines changed: 2 additions & 1 deletion

@@ -5,6 +5,7 @@
 :_mod-docs-content-type: CONCEPT
 [id="network-observability-flowcollector-view_{context}"]
 = View the FlowCollector resource
+
 You can view and edit YAML directly in the {product-title} web console.

 .Procedure

@@ -44,7 +45,7 @@ spec:
 cpu: 100m
 limits:
 memory: 800Mi
-logTypes: Flows
+logTypes: Flows
 advanced:
 conversationEndTimeout: 10s
 conversationHeartbeatInterval: 30s

modules/network-observability-resources-table.adoc

Lines changed: 1 addition & 0 deletions

@@ -4,6 +4,7 @@
 :_mod-docs-content-type: REFERENCE
 [id="network-observability-resources-table_{context}"]
 = Resource considerations
+
 The following table outlines examples of resource considerations for clusters with certain workload sizes.

 [IMPORTANT]
