
Commit 5e572af

xeniape, siegfriedweber, Techassi, razvan, and NickLarsenNZ authored
Chore: update demos for release (#60)

* upgrade postgres and redis versions, upgrade airflow-scheduled-job demo versions
* update versions in hbase-hdfs-load-cycling-data and airflow-scheduled-job demos
* Update logging demo for the next release
* Update signal-processing demo and container image
* upgrade minio version and nifi-kafka-druid-superset-s3 versions and druid db credentials
* anomaly-detection: update trino, superset, spark products
* docs(demos/data-lakehouse-iceberg-trino-spark): update requirements to 12 nodes
* docs(demos/data-lakehouse-iceberg-trino-spark): be less specific about how many files appear in minio
* docs(demos/data-lakehouse-iceberg-trino-spark): update the link for tpch
* docs(demos/data-lakehouse-iceberg-trino-spark): upgrade the NOTE about bug to IMPORTANT, and move it above the image.
* bump versions for jupyterhub-pyspark demo
* bump opa version for the trino-superset-s3 stack
* bump testing-tools in nifi-kafka-druid demos and revert druid version bump
* bump end-2-end-security versions
* consolidate stackable versions
* more version bumps (untested)
* bump to 24.7.0
* adapt to release 24.7
* Apply suggestions from code review

---------

Co-authored-by: Siegfried Weber <mail@siegfriedweber.net>
Co-authored-by: Techassi <sascha.lautenschlaeger@stackable.tech>
Co-authored-by: Razvan-Daniel Mihai <84674+razvan@users.noreply.github.com>
Co-authored-by: Nick Larsen <nick.larsen@stackable.tech>
Co-authored-by: Malte Sander <malte.sander.it@gmail.com>
Co-authored-by: Nick <10092581+NickLarsenNZ@users.noreply.github.com>
1 parent e513db2 commit 5e572af
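
All of the image bumps below follow the same tag scheme, <product-version>-stackable<platform-release>, so "consolidate stackable versions" mostly means rewriting the suffix (24.3.0, and in one straggler 23.11.0, to 24.7.0), while product versions such as hbase 2.4.17 to 2.4.18 and hadoop 3.3.6 to 3.4.0 change by hand. As a rough sketch of how such a mechanical suffix bump could be applied, assuming GNU sed and that every affected tag lives under demos/ (this helper is illustrative, not part of the commit):

    # Hypothetical helper, not part of this commit: rewrite the Stackable
    # release suffix on every image tag under demos/ (GNU sed assumed).
    for old in stackable24.3.0 stackable23.11.0; do
      grep -rl "$old" demos/ | while read -r f; do
        sed -i "s/${old}/stackable24.7.0/g" "$f"
      done
    done
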

File tree

75 files changed: +141 −124 lines changed


demos/airflow-scheduled-job/03-enable-and-run-spark-dag.yaml

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ spec:
     spec:
       containers:
         - name: start-pyspark-job
-          image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.3.0
+          image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
           # N.B. it is possible for the scheduler to report that a DAG exists, only for the worker task to fail if a pod is unexpectedly
           # restarted. Additionally, the db-init job takes a few minutes to complete before the cluster is deployed. The wait/watch steps
           # below are not "water-tight" but add a layer of stability by at least ensuring that the db is initialized and ready and that
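
The comment above describes the stabilization pattern these demo Jobs use throughout: a tools container blocks on kubectl wait until its dependencies are ready before kicking off the real work. A minimal sketch of that pattern in isolation, assuming a service account allowed to watch the target resources (the Job name and label here are illustrative, not taken from this commit):

    # Illustrative only: block until the dependency is initialized and
    # ready before proceeding; --timeout bounds the wait instead of
    # hanging forever if the dependency never comes up.
    kubectl wait --for=condition=complete --timeout=30m job/airflow-db-init
    kubectl wait --for=condition=ready --timeout=30m pod \
      -l app.kubernetes.io/name=airflow
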

demos/airflow-scheduled-job/04-enable-and-run-date-dag.yaml

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ spec:
     spec:
       containers:
         - name: start-date-job
-          image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.3.0
+          image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
           # N.B. it is possible for the scheduler to report that a DAG exists, only for the worker task to fail if a pod is unexpectedly
           # restarted. Additionally, the db-init job takes a few minutes to complete before the cluster is deployed. The wait/watch steps
           # below are not "water-tight" but add a layer of stability by at least ensuring that the db is initialized and ready and that

demos/data-lakehouse-iceberg-trino-spark/create-nifi-ingestion-job.yaml

Lines changed: 2 additions & 2 deletions
@@ -9,11 +9,11 @@ spec:
     serviceAccountName: demo-serviceaccount
     initContainers:
       - name: wait-for-kafka
-        image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.3.0
+        image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
         command: ["bash", "-c", "echo 'Waiting for all kafka brokers to be ready' && kubectl wait --for=condition=ready --timeout=30m pod -l app.kubernetes.io/instance=kafka -l app.kubernetes.io/name=kafka"]
     containers:
       - name: create-nifi-ingestion-job
-        image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.3.0
+        image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.7.0
         command: ["bash", "-c", "curl -O https://raw.githubusercontent.com/stackabletech/demos/main/demos/data-lakehouse-iceberg-trino-spark/LakehouseKafkaIngest.xml && python -u /tmp/script/script.py"]
         volumeMounts:
           - name: script

demos/data-lakehouse-iceberg-trino-spark/create-spark-ingestion-job.yaml

Lines changed: 2 additions & 2 deletions
@@ -12,11 +12,11 @@ spec:
     serviceAccountName: demo-serviceaccount
     initContainers:
      - name: wait-for-kafka
-        image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.3.0
+        image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
         command: ["bash", "-c", "echo 'Waiting for all kafka brokers to be ready' && kubectl wait --for=condition=ready --timeout=30m pod -l app.kubernetes.io/name=kafka -l app.kubernetes.io/instance=kafka"]
     containers:
       - name: create-spark-ingestion-job
-        image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.3.0
+        image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
         command: ["bash", "-c", "echo 'Submitting Spark job' && kubectl apply -f /tmp/manifest/spark-ingestion-job.yaml"]
         volumeMounts:
           - name: manifest
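
Here the Spark job definition is not baked into the image: the container applies a manifest mounted at /tmp/manifest. A sketch of how such a mount is typically wired up, assuming the manifest ships as a ConfigMap (the ConfigMap name is an assumption for illustration, not taken from this commit):

    # Assumption for illustration: the mounted manifest is backed by a
    # ConfigMap created from the job definition on disk.
    kubectl create configmap spark-ingestion-job-manifest \
      --from-file=spark-ingestion-job.yaml
    # Inside the container, the file then appears under the volume mount
    # and is applied verbatim:
    kubectl apply -f /tmp/manifest/spark-ingestion-job.yaml
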

demos/data-lakehouse-iceberg-trino-spark/create-trino-tables.yaml

Lines changed: 2 additions & 2 deletions
@@ -9,11 +9,11 @@ spec:
     serviceAccountName: demo-serviceaccount
     initContainers:
       - name: wait-for-testdata
-        image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.3.0
+        image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
         command: ["bash", "-c", "echo 'Waiting for job load-test-data to finish' && kubectl wait --for=condition=complete --timeout=30m job/load-test-data"]
     containers:
       - name: create-tables-in-trino
-        image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.3.0
+        image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.7.0
         command: ["bash", "-c", "python -u /tmp/script/script.py"]
         volumeMounts:
           - name: script

demos/data-lakehouse-iceberg-trino-spark/setup-superset.yaml

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ spec:
     spec:
       containers:
         - name: setup-superset
-          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.3.0
+          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.7.0
           command: ["bash", "-c", "curl -o superset-assets.zip https://raw.githubusercontent.com/stackabletech/demos/main/demos/data-lakehouse-iceberg-trino-spark/superset-assets.zip && python -u /tmp/script/script.py"]
           volumeMounts:
             - name: script

demos/end-to-end-security/create-spark-report.yaml

Lines changed: 2 additions & 2 deletions
@@ -12,7 +12,7 @@ spec:
     serviceAccountName: demo-serviceaccount
     initContainers:
       - name: wait-for-trino-tables
-        image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.3.0
+        image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
         command:
           - bash
           - -euo
@@ -23,7 +23,7 @@ spec:
           kubectl wait --timeout=30m --for=condition=complete job/create-tables-in-trino
     containers:
       - name: create-spark-report
-        image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.3.0
+        image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
         command:
           - bash
           - -euo

demos/end-to-end-security/create-trino-tables.yaml

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ spec:
     spec:
       containers:
         - name: create-tables-in-trino
-          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable23.11.0
+          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.7.0
           command: ["bash", "-c", "python -u /tmp/script/script.py"]
           volumeMounts:
             - name: script

demos/hbase-hdfs-load-cycling-data/create-hfile-and-import-to-hbase.yaml

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ spec:
     spec:
       containers:
         - name: create-hfile-and-import-to-hbase
-          image: docker.stackable.tech/stackable/hbase:2.4.17-stackable24.3.0
+          image: docker.stackable.tech/stackable/hbase:2.4.18-stackable24.7.0
           env:
             - name: HADOOP_USER_NAME
               value: stackable

demos/hbase-hdfs-load-cycling-data/distcp-cycling-data.yaml

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ spec:
     spec:
      containers:
        - name: distcp-cycling-data
-         image: docker.stackable.tech/stackable/hadoop:3.3.6-stackable24.3.0
+         image: docker.stackable.tech/stackable/hadoop:3.4.0-stackable24.7.0
          env:
            - name: HADOOP_USER_NAME
              value: stackable
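
Both hbase-hdfs-load-cycling-data Jobs above run with HADOOP_USER_NAME=stackable so that writes into HDFS are attributed to that user rather than root. For context, a distcp container of this shape runs something along these lines; the source and destination URIs below are illustrative assumptions, not taken from this commit:

    # Illustrative distcp invocation; both URIs are assumptions.
    export HADOOP_USER_NAME=stackable
    hadoop distcp \
      s3a://demo-bucket/cycling-tripdata \
      hdfs://hdfs/data/raw/cycling-tripdata
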
