
Commit 684ab56

fhennig and razvan authored
docs: pod overrides, better install instructions, improved wording (#569)
* ~
* Update docs/modules/hbase/pages/getting_started/installation.adoc
* Update docs/modules/hbase/pages/usage-guide/operations/graceful-shutdown.adoc
* Update docs/modules/hbase/pages/usage-guide/operations/pod-placement.adoc
* Update docs/modules/hbase/pages/usage-guide/security.adoc
* fix: spelling

---------

Co-authored-by: Razvan-Daniel Mihai <84674+razvan@users.noreply.github.com>
1 parent b118eb4 commit 684ab56

File tree

11 files changed: +46 -44 lines changed

docs/modules/hbase/pages/getting_started/first_steps.adoc

Lines changed: 9 additions & 9 deletions

@@ -2,8 +2,8 @@
 :description: Deploy and verify an HBase cluster using ZooKeeper, HDFS, and HBase configurations. Test with REST API and Apache Phoenix for table creation and data querying.
 :phoenix: https://phoenix.apache.org/index.html

-Once you have followed the steps in the xref:getting_started/installation.adoc[] section to install the operator and its dependencies, you will now deploy an HBase cluster and its dependencies.
-Afterwards you can <<_verify_that_it_works, verify that it works>> by creating tables and data in HBase using the REST API and Apache Phoenix (an SQL layer used to interact with HBase).
+Once you have followed the steps in the xref:getting_started/installation.adoc[] section to install the operator and its dependencies, you deploy an HBase cluster and its dependencies.
+Afterward you can <<_verify_that_it_works, verify that it works>> by creating tables and data in HBase using the REST API and Apache Phoenix (an SQL layer used to interact with HBase).

 == Setup

@@ -14,7 +14,7 @@ To deploy a ZooKeeper cluster create one file called `zk.yaml`:
 [source,yaml]
 include::example$getting_started/zk.yaml[]

-We also need to define a ZNode that will be used by the HDFS and HBase clusters to reference ZooKeeper.
+We also need to define a ZNode that is used by the HDFS and HBase clusters to reference ZooKeeper.
 Create another file called `znode.yaml` and define a separate ZNode for each service:

 [source,yaml]
@@ -73,16 +73,16 @@ include::example$getting_started/hbase.yaml[]

 == Verify that it works

-To test the cluster you will use the REST API to check its version and status, and to create and inspect a new table.
-You will also use Phoenix to create, populate and query a second new table, before listing all non-system tables in HBase.
+To test the cluster, use the REST API to check its version and status, and to create and inspect a new table.
+Use Phoenix to create, populate and query a second new table, before listing all non-system tables in HBase.
 These actions will be carried out from one of the HBase components, the REST server.

 First, check the cluster version with this callout:

 [source]
 include::example$getting_started/getting_started.sh[tag=cluster-version]

-This will return the version that was specified in the HBase cluster definition:
+This returns the version that was specified in the HBase cluster definition:

 [source,json]
 {"Version":"2.4.18"}
@@ -92,7 +92,7 @@ The cluster status can be checked and formatted like this:
 [source]
 include::example$getting_started/getting_started.sh[tag=cluster-status]

-which will display cluster metadata that looks like this (only the first region is included for the sake of readability):
+which displays cluster metadata that looks like this (only the first region is included for the sake of readability):

 [source,json]
 {
@@ -134,7 +134,7 @@ You can now create a table like this:
 [source]
 include::example$getting_started/getting_started.sh[tag=create-table]

-This will create a table `users` with a single column family `cf`.
+This creates a table `users` with a single column family `cf`.
 Its creation can be verified by listing it:

 [source]
@@ -155,7 +155,7 @@ Use the Python utility `psql.py` (found in /stackable/phoenix/bin) to create, po
 [source]
 include::example$getting_started/getting_started.sh[tag=phoenix-table]

-The final command will display some grouped data like this:
+The final command displays some grouped data like this:

 [source]
 HO TOTAL_ACTIVE_VISITORS

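For orientation, the `cluster-version` tag referenced in this file resolves to a plain REST call. A minimal sketch, assuming a REST server Pod named `simple-hbase-restserver-default-0` and the default REST port 8080 (both assumptions, not taken from this diff):

[source,bash]
----
# Query the HBase REST server for the cluster version; pod name and port
# are assumptions based on typical Stackable examples.
kubectl exec simple-hbase-restserver-default-0 -- \
  curl -s -H "Accept: application/json" http://localhost:8080/version/cluster
# Expected output, matching the docs above: {"Version":"2.4.18"}
----
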
docs/modules/hbase/pages/getting_started/index.adoc

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 = Getting started

-This guide will get you started with HBase using the Stackable operator.
+This guide gets you started with HBase using the Stackable operator.
 It guides you through the installation of the operator and its dependencies, setting up your first HBase cluster and verifying its operation.

 == Prerequisites

docs/modules/hbase/pages/getting_started/installation.adoc

Lines changed: 18 additions & 16 deletions

@@ -2,17 +2,17 @@
 :description: Install Stackable HBase and required operators using stackablectl or Helm on Kubernetes. Follow setup and verification steps for a complete installation.
 :kind: https://kind.sigs.k8s.io/

-On this page you will install the Stackable HBase operator and its dependencies, the ZooKeeper and HDFS operators, as well as the commons, secret and listener operators which are required by all Stackable operators.
+Install the Stackable HBase operator and its dependencies, the ZooKeeper and HDFS operators, as well as the commons, secret and listener operators which are required by all Stackable operators.

-== Stackable Operators
-
-There are 2 ways to run Stackable operators
-
-. Using xref:management:stackablectl:index.adoc[]
-. Using Helm
-
-=== stackablectl
+There are multiple ways to install the Stackable Operator for Apache HBase.
+xref:management:stackablectl:index.adoc[] is the preferred way, but Helm is also supported.
+OpenShift users may prefer installing the operator from the RedHat Certified Operator catalog using the OpenShift web console.

+[tabs]
+====
+stackablectl::
++
+--
 `stackablectl` is the command line tool to interact with Stackable operators and our recommended way to install operators.
 Follow the xref:management:stackablectl:installation.adoc[installation steps] for your platform.

@@ -23,32 +23,34 @@ After you have installed stackablectl run the following command to install all o
 include::example$getting_started/getting_started.sh[tag=stackablectl-install-operators]
 ----

-The tool will show
+The tool shows

 [source]
 include::example$getting_started/install_output.txt[]


 TIP: Consult the xref:management:stackablectl:quickstart.adoc[] to learn more about how to use `stackablectl`.
 For example, you can use the `--cluster kind` flag to create a Kubernetes cluster with {kind}[kind].
+--

-=== Helm
-
-You can also use Helm to install the operators.
+Helm::
++
+--
 Add the Stackable Helm repository:
 [source,bash]
 ----
 include::example$getting_started/getting_started.sh[tag=helm-add-repo]
 ----

-Then install the Stackable Operators:
+Install the Stackable operators:
 [source,bash]
 ----
 include::example$getting_started/getting_started.sh[tag=helm-install-operators]
 ----

-Helm will deploy the operators in a Kubernetes Deployment and apply the CRDs for the HBase cluster (as well as the CRDs for the required operators).
-You are now ready to deploy HBase in Kubernetes.
+Helm deploys the operators in a Kubernetes Deployment and applies the CRDs for the HBase cluster (as well as the CRDs for the required operators).
+--
+====

 == What's next

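For readers skimming this diff, the two tabs boil down to commands along these lines. A sketch only: the operator list is inferred from the dependencies named on the page, and the exact arguments live in the included script tags, not in this diff.

[source,bash]
----
# stackablectl (the preferred way): install the HBase operator plus its
# dependencies; the list is an assumption based on the page's prose.
stackablectl operator install commons secret listener zookeeper hdfs hbase

# Helm: add the Stackable repository, then install the operator.
helm repo add stackable-stable https://repo.stackable.tech/repository/helm-stable/
helm install hbase-operator stackable-stable/hbase-operator
----
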
docs/modules/hbase/pages/index.adoc

Lines changed: 2 additions & 2 deletions

@@ -18,7 +18,7 @@ Apache HBase is an open-source, distributed, non-relational database that runs o
 == Getting started

 Follow the xref:getting_started/index.adoc[] guide to learn how to xref:getting_started/installation.adoc[install] the Stackable operator for Apache HBase as well as the dependencies.
-The guide will also show you how to xref:getting_started/first_steps.adoc[interact] with HBase running on Kubernetes by creating tables and some data using the REST API or Apache Phoenix.
+The guide shows you how to xref:getting_started/first_steps.adoc[interact] with HBase running on Kubernetes by creating tables and some data using the REST API or Apache Phoenix.

 The xref:usage-guide/index.adoc[] contains more information on xref:usage-guide/phoenix.adoc[] as well as other topics
 such as xref:usage-guide/resource-requests.adoc[CPU and memory configuration], xref:usage-guide/monitoring.adoc[] and
@@ -55,7 +55,7 @@ The xref:demos:hbase-hdfs-load-cycling-data.adoc[] demo shows how you can use HB
 == Supported versions

 The Stackable operator for Apache HBase currently supports the HBase versions listed below.
-To use a specific HBase version in your HBaseCluster, you have to specify an image - this is explained in the xref:concepts:product-image-selection.adoc[] documentation.
+To use a specific HBase version in your HBaseCluster, you have to specify an image -- this is explained in the xref:concepts:product-image-selection.adoc[] documentation.
 The operator also supports running images from a custom registry or running entirely customized images; both of these cases are explained under xref:concepts:product-image-selection.adoc[] as well.

 include::partial$supported-versions.adoc[]

docs/modules/hbase/pages/reference/commandline-parameters.adoc

Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@ stackable-hbase-operator run --product-config /foo/bar/properties.yaml

 *Multiple values:* false

-The operator will **only** watch for resources in the provided namespace `test`:
+The operator **only** watches for resources in the provided namespace `test`:

 [source]
 ----

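The source block is truncated in this view; a plausible invocation, assuming the namespace option is spelled `--watch-namespace` (an assumption based on other Stackable operator references, not shown in this diff):

[source,bash]
----
# Run the operator against a custom product config, watching only the
# namespace "test"; the flag name is an assumption.
stackable-hbase-operator run \
  --product-config /foo/bar/properties.yaml \
  --watch-namespace test
----
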
docs/modules/hbase/pages/reference/environment-variables.adoc

Lines changed: 1 addition & 1 deletion

@@ -36,7 +36,7 @@ docker run \

 *Multiple values:* false

-The operator will **only** watch for resources in the provided namespace `test`:
+The operator **only** watches for resources in the provided namespace `test`:

 [source]
 ----

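Here too the block is truncated; the equivalent container invocation would look roughly like this, with the image reference left as a placeholder rather than guessed:

[source,bash]
----
# Pass the namespace to watch via the environment; replace the
# placeholder with your actual operator image reference.
docker run \
  -e WATCH_NAMESPACE=test \
  <hbase-operator-image>
----
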
docs/modules/hbase/pages/usage-guide/operations/graceful-shutdown.adoc

Lines changed: 2 additions & 2 deletions

@@ -6,8 +6,8 @@ You can configure the graceful shutdown as described in xref:concepts:operations

 As a default, masters have `20 minutes` to shut down gracefully.

-The HBase master process will receive a `SIGTERM` signal when Kubernetes wants to terminate the Pod.
-After the graceful shutdown timeout runs out, and the process still didn't exit, Kubernetes will issue a `SIGKILL` signal.
+The HBase master process receives a `SIGTERM` signal when Kubernetes wants to terminate the Pod.
+After the graceful shutdown timeout runs out, and the process is still running, Kubernetes issues a `SIGKILL` signal.

 This is equivalent to executing the `bin/hbase-daemon.sh stop master` command, which internally executes `kill <master-pid>` (https://github.com/apache/hbase/blob/8382f55b15be6ae190f8d202a5e6a40af177ec76/bin/hbase-daemon.sh#L338[code]), waits for a configurable period of time (defaults to 20 minutes), and finally executes `kill -9 <master-pid>` to `SIGKILL` the master (https://github.com/apache/hbase/blob/8382f55b15be6ae190f8d202a5e6a40af177ec76/bin/hbase-common.sh#L20-L41[code]).

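The 20-minute default can be tuned on the role's `config`. A sketch of such a fragment, assuming the field is named `gracefulShutdownTimeout` as in the generic Stackable operations concept linked above (an assumption, not shown in this diff):

[source,bash]
----
# Write a fragment to merge into the HBaseCluster definition; the field
# name follows the generic Stackable graceful-shutdown concept.
cat > masters-shutdown.yaml <<'EOF'
spec:
  masters:
    config:
      gracefulShutdownTimeout: 20m  # time between SIGTERM and SIGKILL
EOF
----
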
docs/modules/hbase/pages/usage-guide/operations/pod-placement.adoc

Lines changed: 2 additions & 2 deletions

@@ -106,7 +106,7 @@ In the examples above `cluster-name` is the name of the HBase custom resource th
 The `hdfs-cluster-name` is the name of the HDFS cluster that was configured in the `hdfsConfigMapName` property.

 NOTE: It is important that the `hdfsConfigMapName` property contains the name of the HDFS cluster.
-You could instead configure ConfigMaps of specific name or data roles, but for the purpose of pod placement, this will lead to faulty behavior.
+You could instead configure ConfigMaps of specific name or data roles, but for the purpose of Pod placement, this leads to faulty behavior.

 == Use custom pod placement
 For general configuration of Pod placement, see the xref:concepts:operations/pod_placement.adoc[Pod placement concepts] page.
@@ -131,4 +131,4 @@ spec:
 replicas: 2
 ----

-WARNING: Please note that the Pods will be stuck in `Pending`, when your Kubernetes cluster does not have any node without a masters already running on it and sufficient compute resources.
+WARNING: The Pods remain in the `Pending` phase until the masters are up and running and there are sufficient compute resources available.

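One quick way to spot the situation the rewritten WARNING describes (the label selector is an assumption; adjust it to the labels the operator sets in your cluster):

[source,bash]
----
# List HBase Pods that are stuck in the Pending phase.
kubectl get pods -l app.kubernetes.io/name=hbase \
  --field-selector=status.phase=Pending
----
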
docs/modules/hbase/pages/usage-guide/overrides.adoc

Lines changed: 6 additions & 6 deletions

@@ -4,15 +4,15 @@

 The HBase xref:concepts:stacklet.adoc[Stacklet] definition also supports overriding configuration properties, environment variables and Pod specs, either per role or per role group, where the more specific override (role group) has precedence over the less specific one (role).

-IMPORTANT: Overriding certain properties which are set by operator can interfere with the operator and can lead to problems.
+IMPORTANT: Overriding operator-set properties can interfere with the operator and can lead to problems.

 == Configuration properties

 For a role or role group, at the same level of `config`, you can specify: `configOverrides` for the following files:

-- `hbase-site.xml`
-- `hbase-env.sh`
-- `security.properties`
+* `hbase-site.xml`
+* `hbase-env.sh`
+* `security.properties`

 NOTE: `hdfs-site.xml` is not listed here, the file is always taken from the referenced HDFS cluster.
 If you want to modify it, take a look at xref:hdfs:usage-guide/configuration-environment-overrides.adoc[HDFS configuration overrides].
@@ -33,7 +33,7 @@ restServers:
 replicas: 1
 ----

-Just as for the `config`, it is possible to specify this at role level as well:
+Just as for the `config`, you can specify this at role level as well:

 [source,yaml]
 ----
@@ -50,7 +50,7 @@ restServers:
 ----

 All override property values must be strings.
-The properties will be formatted and escaped correctly into the XML file, respectively inserted as is into the `hbase-env.sh` file.
+The properties are formatted and escaped correctly into the XML file, respectively inserted as is into the `hbase-env.sh` file.

 For a full list of configuration options we refer to the HBase https://hbase.apache.org/book.html#config.files[configuration documentation].

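To make the file list above concrete, a role-group override for `hbase-site.xml` could look like the fragment below. The property and value are illustrative assumptions, and, per the page, all values must be quoted strings.

[source,bash]
----
# Fragment to merge into the restServers role of an HBaseCluster;
# hbase.rest.threads.max is an illustrative property, quoted as a string.
cat > rest-overrides.yaml <<'EOF'
restServers:
  roleGroups:
    default:
      configOverrides:
        hbase-site.xml:
          hbase.rest.threads.max: "150"
      replicas: 1
EOF
----
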
docs/modules/hbase/pages/usage-guide/phoenix.adoc

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@
 :sqlline-github: https://github.com/julianhyde/sqlline

 Apache Phoenix allows you to interact with HBase using a familiar SQL-syntax via a JDBC driver.
-The Phoenix dependencies are bundled with the Stackable HBase image and do not need to be installed separately (client components will need to ensure that they have the correct client-side libraries available).
+The Phoenix dependencies are bundled with the Stackable HBase image and do not need to be installed separately (client components need to ensure that they have the correct client-side libraries available).
 Information about client-side installation can be found {phoenix-installation}[here].

 Apache Phoenix comes bundled with a few simple scripts to verify a correct server-side installation.

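As a pointer for trying this out: the bundled utilities live under `/stackable/phoenix/bin` (per the getting-started page in this same commit), so a server-side smoke test could be run roughly like this. The Pod name and SQL file are hypothetical:

[source,bash]
----
# Run the Phoenix psql.py utility from inside an HBase Pod;
# pod name and SQL file path are hypothetical placeholders.
kubectl exec -it simple-hbase-restserver-default-0 -- \
  /stackable/phoenix/bin/psql.py /tmp/example.sql
----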