
Commit 74259e4

docs(getting-started): update stackablectl op install output (#553)
* chore(getting-started): add warning about editing the script
* chore(ruff): apply formatting
* chore(markdownlint): ignore heading rule for template partials
* chore(readme): render readme
* docs(getting-started): update stackablectl op install output

Co-authored-by: Nick <NickLarsenNZ@users.noreply.github.com>
1 parent b1b3907 commit 74259e4

File tree

10 files changed (+80 −42 lines)


.readme/README.md.j2

Lines changed: 1 addition & 0 deletions

@@ -1,3 +1,4 @@
+<!-- markdownlint-disable MD041 -->
 {%- set title="Stackable Operator for Apache Hadoop" -%}
 {%- set operator_name="hdfs" -%}
 {%- set operator_docs_slug="hdfs" -%}

.readme/partials/main.md.j2

Lines changed: 1 addition & 0 deletions

@@ -1,3 +1,4 @@
+<!-- markdownlint-disable MD041 -->
 This is a Kubernetes operator to manage [Apache Hadoop](https://hadoop.apache.org/).
 
 {% filter trim %}

README.md

Lines changed: 2 additions & 1 deletion

@@ -1,4 +1,4 @@
-<!-- markdownlint-disable MD041 -->
+<!-- markdownlint-disable MD041 --><!-- markdownlint-disable MD041 -->
 <p align="center">
   <img width="150" src="./.readme/static/borrowed/Icon_Stackable.svg" alt="Stackable Logo"/>
 </p>
@@ -13,6 +13,7 @@
 
 [Documentation](https://docs.stackable.tech/home/stable/hdfs) | [Stackable Data Platform](https://stackable.tech/) | [Platform Docs](https://docs.stackable.tech/) | [Discussions](https://github.com/orgs/stackabletech/discussions) | [Discord](https://discord.gg/7kZ3BNnCAF)
 
+<!-- markdownlint-disable MD041 -->
 This is a Kubernetes operator to manage [Apache Hadoop](https://hadoop.apache.org/).
 
 <!-- markdownlint-disable MD041 MD051 -->

docs/modules/hdfs/examples/getting_started/getting_started.sh

Lines changed: 14 additions & 0 deletions

@@ -1,6 +1,20 @@
 #!/usr/bin/env bash
 set -euo pipefail
 
+# DO NOT EDIT THE SCRIPT
+# Instead, update the j2 template, and regenerate it for dev:
+# cat <<EOF | jinja2 --format yaml getting_started.sh.j2 -o getting_started.sh
+# helm:
+#   repo_name: stackable-dev
+#   repo_url: https://repo.stackable.tech/repository/helm-dev/
+# versions:
+#   commons: 0.0.0-dev
+#   hdfs: 0.0.0-dev
+#   listener: 0.0.0-dev
+#   secret: 0.0.0-dev
+#   zookeeper: 0.0.0-dev
+# EOF
+
 # This script contains all the code snippets from the guide, as well as some assert tests
 # to test if the instructions in the guide work. The user *could* use it, but it is intended
 # for testing only.

docs/modules/hdfs/examples/getting_started/getting_started.sh.j2

Lines changed: 14 additions & 0 deletions

@@ -1,6 +1,20 @@
 #!/usr/bin/env bash
 set -euo pipefail
 
+# DO NOT EDIT THE SCRIPT
+# Instead, update the j2 template, and regenerate it for dev:
+# cat <<EOF | jinja2 --format yaml getting_started.sh.j2 -o getting_started.sh
+# helm:
+#   repo_name: stackable-dev
+#   repo_url: https://repo.stackable.tech/repository/helm-dev/
+# versions:
+#   commons: 0.0.0-dev
+#   hdfs: 0.0.0-dev
+#   listener: 0.0.0-dev
+#   secret: 0.0.0-dev
+#   zookeeper: 0.0.0-dev
+# EOF
+
 # This script contains all the code snippets from the guide, as well as some assert tests
 # to test if the instructions in the guide work. The user *could* use it, but it is intended
 # for testing only.
docs/modules/hdfs/examples/getting_started/install_output.txt

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+Installed commons=0.0.0-dev operator
+Installed secret=0.0.0-dev operator
+Installed listener=0.0.0-dev operator
+Installed zookeeper=0.0.0-dev operator
+Installed hdfs=0.0.0-dev operator
docs/modules/hdfs/examples/getting_started/install_output.txt.j2

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+Installed commons={{ versions.commons }} operator
+Installed secret={{ versions.secret }} operator
+Installed listener={{ versions.listener }} operator
+Installed zookeeper={{ versions.zookeeper }} operator
+Installed hdfs={{ versions.hdfs }} operator
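The `.j2` partial above only substitutes `versions.*` placeholders. As a minimal stdlib sketch (a hypothetical helper, not the jinja2-cli invocation this commit actually documents), the expansion amounts to:

```python
import re


def render_versions(template: str, versions: dict) -> str:
    # Expand `{{ versions.<name> }}` placeholders, mimicking what a
    # Jinja2 render of this simple template produces (sketch only).
    return re.sub(
        r"\{\{\s*versions\.(\w+)\s*\}\}",
        lambda match: versions[match.group(1)],
        template,
    )


line = "Installed hdfs={{ versions.hdfs }} operator"
print(render_versions(line, {"hdfs": "0.0.0-dev"}))
# → Installed hdfs=0.0.0-dev operator
```

Rendering with the dev versions from the script header yields exactly the committed `install_output.txt`.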

docs/modules/hdfs/pages/getting_started/installation.adoc

Lines changed: 2 additions & 7 deletions

@@ -25,13 +25,8 @@ include::example$getting_started/getting_started.sh[tag=stackablectl-install-ope
 
 The tool will show
 
-----
-[INFO ] Installing commons operator
-[INFO ] Installing secret operator
-[INFO ] Installing listener operator
-[INFO ] Installing zookeeper operator
-[INFO ] Installing hdfs operator
-----
+[source]
+include::example$getting_started/install_output.txt[]
 
 TIP: Consult the xref:management:stackablectl:quickstart.adoc[] to learn more about how to use `stackablectl`. For
 example, you can use the `--cluster kind` flag to create a Kubernetes cluster with link:https://kind.sigs.k8s.io/[kind].

tests/templates/kuttl/logging/test_log_aggregation.py

Lines changed: 18 additions & 17 deletions

@@ -4,9 +4,9 @@
 
 def check_sent_events():
     response = requests.post(
-        'http://hdfs-vector-aggregator:8686/graphql',
+        "http://hdfs-vector-aggregator:8686/graphql",
         json={
-            'query': """
+            "query": """
             {
               transforms(first:100) {
                 nodes {
@@ -20,29 +20,30 @@ def check_sent_events():
               }
             }
             """
-        }
+        },
     )
 
-    assert response.status_code == 200, \
-        'Cannot access the API of the vector aggregator.'
+    assert (
+        response.status_code == 200
+    ), "Cannot access the API of the vector aggregator."
 
     result = response.json()
 
-    transforms = result['data']['transforms']['nodes']
+    transforms = result["data"]["transforms"]["nodes"]
     for transform in transforms:
-        sentEvents = transform['metrics']['sentEventsTotal']
-        componentId = transform['componentId']
+        sentEvents = transform["metrics"]["sentEventsTotal"]
+        componentId = transform["componentId"]
 
-        if componentId == 'filteredInvalidEvents':
-            assert sentEvents is None or \
-                sentEvents['sentEventsTotal'] == 0, \
-                'Invalid log events were sent.'
+        if componentId == "filteredInvalidEvents":
+            assert (
+                sentEvents is None or sentEvents["sentEventsTotal"] == 0
+            ), "Invalid log events were sent."
         else:
-            assert sentEvents is not None and \
-                sentEvents['sentEventsTotal'] > 0, \
-                f'No events were sent in "{componentId}".'
+            assert (
+                sentEvents is not None and sentEvents["sentEventsTotal"] > 0
+            ), f'No events were sent in "{componentId}".'
 
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     check_sent_events()
-    print('Test successful!')
+    print("Test successful!")
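The kuttl test above POSTs a GraphQL query to Vector's API with `requests`. A stdlib-only sketch of the same request shape (the aggregator host comes from the test and is only resolvable inside the cluster, so it is built here but not sent):

```python
import json
from urllib import request

# Endpoint used by the kuttl test; unreachable outside the test cluster.
GRAPHQL_URL = "http://hdfs-vector-aggregator:8686/graphql"


def build_graphql_request(query: str) -> request.Request:
    # GraphQL over HTTP: the query travels as JSON under the "query" key.
    payload = json.dumps({"query": query}).encode()
    return request.Request(
        GRAPHQL_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_graphql_request("{ transforms(first:100) { nodes { componentId } } }")
print(req.get_method(), req.full_url)
# → POST http://hdfs-vector-aggregator:8686/graphql
```

Sending it with `request.urlopen(req)` from inside the cluster would return the same JSON body the test inspects.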

tests/templates/kuttl/profiling/run-profiler.py

Lines changed: 18 additions & 17 deletions

@@ -9,29 +9,32 @@
 def start_profiling_and_get_refresh_header(service_url):
     prof_page = requests.get(
         f"{service_url}/prof"
-        f"?event={EVENT_TYPE}&duration={PROFILING_DURATION_IN_SEC}")
+        f"?event={EVENT_TYPE}&duration={PROFILING_DURATION_IN_SEC}"
+    )
 
-    assert prof_page.ok, \
-        f"""Profiling could not be started.
+    assert prof_page.ok, f"""Profiling could not be started.
         URL: {prof_page.request.url}
         Status Code: {prof_page.status_code}"""
 
-    return prof_page.headers['Refresh']
+    return prof_page.headers["Refresh"]
 
 
 def parse_refresh_header(refresh_header):
-    refresh_time_in_sec, refresh_path = refresh_header.split(';', 1)
+    refresh_time_in_sec, refresh_path = refresh_header.split(";", 1)
     refresh_time_in_sec = int(refresh_time_in_sec)
 
-    assert refresh_time_in_sec == PROFILING_DURATION_IN_SEC, \
-        f"""Profiling duration and refresh time should be equal.
+    assert (
+        refresh_time_in_sec == PROFILING_DURATION_IN_SEC
+    ), f"""Profiling duration and refresh time should be equal.
         expected: {PROFILING_DURATION_IN_SEC}
         actual: {refresh_time_in_sec}"""
 
-    expected_refresh_path_pattern = \
-        r'/prof-output-hadoop/async-prof-pid-\d+-itimer-\d+.html'
-    assert re.fullmatch(expected_refresh_path_pattern, refresh_path), \
-        f"""The path to the flamegraph contains an unexpected pattern.
+    expected_refresh_path_pattern = (
+        r"/prof-output-hadoop/async-prof-pid-\d+-itimer-\d+.html"
+    )
+    assert re.fullmatch(
+        expected_refresh_path_pattern, refresh_path
+    ), f"""The path to the flamegraph contains an unexpected pattern.
         expected pattern: {expected_refresh_path_pattern}"
         actual path: {refresh_path}"""
 
@@ -46,23 +49,21 @@ def wait_for_profiling_to_finish(refresh_time_in_sec):
 def fetch_flamegraph(service_url, refresh_path):
     flamegraph_page = requests.get(f"{service_url}{refresh_path}")
 
-    assert flamegraph_page.ok, \
-        f"""The flamegraph could not be fetched.
+    assert flamegraph_page.ok, f"""The flamegraph could not be fetched.
         URL: {flamegraph_page.request.url}
         Status Code: {flamegraph_page.status_code}"""
 
 
 def test_profiling(role, port):
     service_url = (
-        f"http://test-hdfs-{role}-default-0.test-hdfs-{role}-default"
-        f":{port}")
+        f"http://test-hdfs-{role}-default-0.test-hdfs-{role}-default" f":{port}"
+    )
 
     print(f"Test profiling on {service_url}")
 
     refresh_header = start_profiling_and_get_refresh_header(service_url)
 
-    refresh_time_in_sec, refresh_path = \
-        parse_refresh_header(refresh_header)
+    refresh_time_in_sec, refresh_path = parse_refresh_header(refresh_header)
 
     wait_for_profiling_to_finish(refresh_time_in_sec)
 

0 commit comments
