Add descriptions #755


Merged 2 commits on Sep 18, 2024
1 change: 1 addition & 0 deletions docs/modules/kafka/pages/getting_started/first_steps.adoc
@@ -1,4 +1,5 @@
= First steps
:description: Deploy and verify a Kafka cluster on Kubernetes with Stackable Operators, including ZooKeeper setup and data testing using kcat.

After going through the xref:getting_started/installation.adoc[] section and having installed all the operators, you will now deploy a Kafka cluster and the required dependencies. Afterwards you can <<_verify_that_it_works, verify that it works>> by producing test data into a topic and consuming it.

4 changes: 3 additions & 1 deletion docs/modules/kafka/pages/getting_started/index.adoc
@@ -1,6 +1,8 @@
= Getting started
:description: Start with Apache Kafka using Stackable Operator: Install, set up Kafka, and manage topics in a Kubernetes cluster.

This guide will get you started with Apache Kafka using the Stackable Operator. It will guide you through the installation of the Operator and its dependencies, setting up your first Kafka instance and create, write to and read from a topic.
This guide will get you started with Apache Kafka using the Stackable Operator.
It will guide you through the installation of the Operator and its dependencies, setting up your first Kafka instance, and creating, writing to, and reading from a topic.

== Prerequisites

1 change: 1 addition & 0 deletions docs/modules/kafka/pages/getting_started/installation.adoc
@@ -1,4 +1,5 @@
= Installation
:description: Install Stackable Operator for Apache Kafka using stackablectl or Helm, including dependencies like ZooKeeper and required operators for Kubernetes.

On this page you will install the Stackable Operator for Apache Kafka and the operator for its dependency (ZooKeeper),
as well as the commons, secret and listener operators, which are required by all Stackable Operators.
2 changes: 1 addition & 1 deletion docs/modules/kafka/pages/index.adoc
@@ -1,5 +1,5 @@
= Stackable Operator for Apache Kafka
:description: The Stackable operator for Apache Superset is a Kubernetes operator that can manage Apache Kafka clusters. Learn about its features, resources, dependencies and demos, and see the list of supported Kafka versions.
:description: Deploy and manage Apache Kafka clusters on Kubernetes using Stackable Operator.
:keywords: Stackable operator, Apache Kafka, Kubernetes, operator, SQL, engineer, broker, big data, CRD, StatefulSet, ConfigMap, Service, Druid, ZooKeeper, NiFi, S3, demo, version
:kafka: https://kafka.apache.org/
:github: https://github.com/stackabletech/kafka-operator/
@@ -86,3 +8,8 @@ servers:
default:
replicas: 1
----

== Pod overrides

The Kafka operator also supports Pod overrides, allowing you to override any property that you can set on a Kubernetes Pod.
Read the xref:concepts:overrides.adoc#pod-overrides[Pod overrides documentation] to learn more about this feature.
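The linked documentation has the full details; as a minimal hedged sketch (assuming the `podOverrides` key sits on the broker role, as in other Stackable operators, and using a hypothetical taint), adding a toleration to the broker Pods could look like this:

[source,yaml]
----
spec:
  brokers:
    podOverrides:
      spec:
        tolerations:
          - key: dedicated     # hypothetical taint key
            operator: Equal
            value: kafka
            effect: NoSchedule
----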
7 changes: 3 additions & 4 deletions docs/modules/kafka/pages/usage-guide/logging.adoc
@@ -1,7 +1,7 @@
= Log aggregation
:description: The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent

The logs can be forwarded to a Vector log aggregator by providing a discovery
ConfigMap for the aggregator and by enabling the log agent:
The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent:

[source,yaml]
----
@@ -14,5 +14,4 @@ spec:
enableVectorAgent: true
----

Further information on how to configure logging, can be found in
xref:concepts:logging.adoc[].
Further information on how to configure logging can be found in xref:concepts:logging.adoc[].
5 changes: 3 additions & 2 deletions docs/modules/kafka/pages/usage-guide/monitoring.adoc
@@ -1,4 +1,5 @@
= Monitoring
:description: The managed Kafka instances are automatically configured to export Prometheus metrics.

The managed Kafka instances are automatically configured to export Prometheus metrics. See
xref:operators:monitoring.adoc[] for more details.
The managed Kafka instances are automatically configured to export Prometheus metrics.
See xref:operators:monitoring.adoc[] for more details.
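As a hedged sketch (not taken from the linked page), scraping these metrics with the Prometheus Operator could look like the following ServiceMonitor; the label selector and the port name are assumptions and need to match the Services in your cluster:

[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kafka  # assumption: match the labels on the metrics Service
  endpoints:
    - port: metrics                  # assumption: name of the port exposing the metrics
----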
18 changes: 8 additions & 10 deletions docs/modules/kafka/pages/usage-guide/security.adoc
@@ -1,9 +1,10 @@
= Security
:description: Configure TLS encryption, authentication, and Open Policy Agent (OPA) authorization for Kafka with the Stackable Operator.

== Encryption

The internal and client communication can be encrypted TLS. This requires the xref:secret-operator:index.adoc[Secret
Operator] to be present in order to provide certificates. The utilized certificates can be changed in a top-level config.
The internal and client communication can be encrypted using TLS. This requires the xref:secret-operator:index.adoc[Secret Operator] to be present in order to provide certificates.
The utilized certificates can be changed in a top-level config.

[source,yaml]
----
@@ -47,14 +48,12 @@ spec:
autoGenerate: true
----

You can create your own secrets and reference them e.g. in the `spec.clusterConfig.tls.serverSecretClass` or
`spec.clusterConfig.tls.internalSecretClass` to use different certificates.
You can create your own secrets and reference them e.g. in the `spec.clusterConfig.tls.serverSecretClass` or `spec.clusterConfig.tls.internalSecretClass` to use different certificates.
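For illustration, referencing custom SecretClasses at those two fields could look like this (the SecretClass names are hypothetical):

[source,yaml]
----
spec:
  clusterConfig:
    tls:
      serverSecretClass: my-server-tls      # hypothetical custom SecretClass
      internalSecretClass: my-internal-tls  # hypothetical custom SecretClass
----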

== Authentication

The internal or broker-to-broker communication is authenticated via TLS. In order to enforce TLS authentication for
client-to-server communication, you can set an `AuthenticationClass` reference in the custom resource provided by the
xref:commons-operator:index.adoc[Commons Operator].
The internal or broker-to-broker communication is authenticated via TLS.
In order to enforce TLS authentication for client-to-server communication, you can set an `AuthenticationClass` reference in the custom resource provided by the xref:commons-operator:index.adoc[Commons Operator].

[source,yaml]
----
@@ -105,9 +104,8 @@ spec:

== [[authorization]]Authorization

If you wish to include integration with xref:opa:index.adoc[Open Policy Agent] and already have an OPA cluster, then you
can include an `opa` field pointing to the OPA cluster discovery `ConfigMap` and the required package. The package is
optional and will default to the `metadata.name` field:
If you wish to include integration with xref:opa:index.adoc[Open Policy Agent] and already have an OPA cluster, then you can include an `opa` field pointing to the OPA cluster discovery `ConfigMap` and the required package.
The package is optional and will default to the `metadata.name` field:

[source,yaml]
----
@@ -1,4 +1,5 @@
= Storage and resource configuration
:description: Configure storage and resource allocation for Kafka brokers using Stackable Operator, including PersistentVolumeClaims, CPU, memory, and storage defaults.

== Storage for data volumes
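As a hedged sketch of the general Stackable resources pattern (the `logDirs` storage key and the default values shown are assumptions, not taken from this page), storage and resource requests for the brokers could be declared like this:

[source,yaml]
----
brokers:
  config:
    resources:
      storage:
        logDirs:
          capacity: 2Gi  # assumption: size of the PersistentVolumeClaim per broker
      cpu:
        min: 250m
        max: "1"
      memory:
        limit: 1Gi
----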
