Commit d4baa48

adwk67, Techassi, and NickLarsenNZ authored
chore(release): Backport relevant release-24.11 changes (#146)
* chore(release): Update stackableRelease to 24.11
* chore(release): Update image references with stackable24.11.0
* chore(release): Replace githubusercontent references main->release-24.11
* chore: Change docs version from nightly to 24.11
* chore: Remove superfluous page-aliases
* chore: Explicitly bump 24.11.0 to 24.11.1

  These changes should be pulled into `main`, but this script needs more work.

* chore(release): Update image references with stackable24.11.1
* fix(stack/end-to-end-security): Skip DB restore if the DB exists (#139)

  Otherwise it breaks with:

  ```
  ERROR [flask_migrate] Error: Requested revision 17fcea065655 overlaps with other requested revisions b7851ee5522f
  ```

  The latter revision being the one that exists in the uploaded dump (see the sketch below).

* change references back to main etc.
* prepare update_refs for next release
* typo
* revert changes for products
* set images back to dev except 24.3.0 exceptions
* Update .scripts/update_refs.sh

  Co-authored-by: Nick <10092581+NickLarsenNZ@users.noreply.github.com>

---------

Co-authored-by: Techassi <sascha.lautenschlaeger@stackable.tech>
Co-authored-by: Techassi <git@techassi.dev>
Co-authored-by: Nick Larsen <nick.larsen@stackable.tech>
Co-authored-by: Nick <10092581+NickLarsenNZ@users.noreply.github.com>
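The end-to-end-security fix referenced above restores the database dump only when the target database is not already present, so a re-run does not replay Alembic revisions that the uploaded dump already contains. The following is a hedged sketch of that guard, not the actual manifest change; the host, user, database name (`superset`) and dump path are illustrative assumptions.

```bash
#!/usr/bin/env bash
# Hedged sketch of "skip DB restore if the DB exists": load the dump only when
# the target database is missing, so repeated runs do not replay Alembic
# revisions that are already applied. Host, user, database name and dump path
# are illustrative assumptions, not the values used in the stack.
set -euo pipefail

HOST="${POSTGRES_HOST:-postgresql}"

# `psql -lqt` lists databases; the first pipe-separated column is the name.
if psql -h "$HOST" -U superset -lqt | cut -d '|' -f 1 | grep -qw superset; then
  echo "Database 'superset' already exists, skipping restore"
else
  createdb -h "$HOST" -U superset superset
  psql -h "$HOST" -U superset -d superset -f /tmp/superset-db-dump.sql
fi
```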
1 parent de005da commit d4baa48

42 files changed (+42 additions, -53 deletions)


.scripts/update_refs.sh

Lines changed: 3 additions & 3 deletions
@@ -35,10 +35,10 @@ function prepend {
 function maybe_commit {
   [ "$COMMIT" == "true" ] || return 0
   local MESSAGE="$1"
-  PATCH=$(mktemp)
+  PATCH=$(mktemp --suffix=.diff)
   git add -u
   git diff --staged > "$PATCH"
-  git commit -S -m "$MESSAGE" --no-verify
+  git diff-index --quiet HEAD -- || git commit -S -m "$MESSAGE" --no-verify
   echo "patch written to: $PATCH" | prepend "\t"
 }
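The changed commit line keeps the script from aborting when a release step produces no changes: `git diff-index --quiet HEAD --` exits 0 when the index and working tree match HEAD, so `git commit` only runs when there is actually something to commit. A minimal standalone illustration of the same guard (the message string is just an example):

```bash
# `git diff-index --quiet HEAD --` exits 0 when the index and working tree
# match HEAD, so the commit is skipped instead of `git commit` failing with
# "nothing to commit" and taking the script down with it.
git add -u
if git diff-index --quiet HEAD --; then
  echo "nothing to commit, skipping"
else
  git commit -S -m "example message" --no-verify
fi
```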

@@ -55,7 +55,7 @@ if [[ "$CURRENT_BRANCH" == release-* ]]; then

   # Replace 0.0.0-dev refs with ${STACKABLE_RELEASE}.0
   # TODO (@NickLarsenNZ): handle patches later, and what about release-candidates?
-  SEARCH='stackable(0\.0\.0-dev|24\.7\.[0-9]+)' # TODO (@NickLarsenNZ): After https://github.com/stackabletech/stackable-cockpit/issues/310, only search for 0.0.0-dev
+  SEARCH='stackable(0\.0\.0-dev)'
   REPLACEMENT="stackable${STACKABLE_RELEASE}.0" # TODO (@NickLarsenNZ): Be a bit smarter about patch releases.
   MESSAGE="Update image references with $REPLACEMENT"
   echo "$MESSAGE"
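Narrowing SEARCH to `stackable(0\.0\.0-dev)` means only dev image tags get rewritten on a release branch, while already-pinned 24.7.x tags are left alone. This hunk does not show how the pattern is applied to files, so the following is only a sketch of how such a substitution could be run over the manifests; the find paths and the GNU sed `-i` flag are assumptions, not the script's actual invocation.

```bash
# Hedged sketch only: the exact find/sed invocation in update_refs.sh is not
# part of this hunk. Rewrites dev image tags to the release tag in the demo
# and stack manifests (GNU sed; use `sed -E -i ''` on BSD/macOS).
STACKABLE_RELEASE="24.11"
SEARCH='stackable(0\.0\.0-dev)'
REPLACEMENT="stackable${STACKABLE_RELEASE}.0"
find demos stacks -type f -name '*.yaml' \
  -exec sed -E -i "s/${SEARCH}/${REPLACEMENT}/g" {} +
```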

demos/airflow-scheduled-job/03-enable-and-run-spark-dag.yaml

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ spec:
     spec:
       containers:
         - name: start-pyspark-job
-          image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
+          image: docker.stackable.tech/stackable/tools:1.0.0-stackable0.0.0-dev
           # N.B. it is possible for the scheduler to report that a DAG exists, only for the worker task to fail if a pod is unexpectedly
           # restarted. Additionally, the db-init job takes a few minutes to complete before the cluster is deployed. The wait/watch steps
           # below are not "water-tight" but add a layer of stability by at least ensuring that the db is initialized and ready and that

demos/airflow-scheduled-job/04-enable-and-run-date-dag.yaml

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ spec:
     spec:
       containers:
         - name: start-date-job
-          image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
+          image: docker.stackable.tech/stackable/tools:1.0.0-stackable0.0.0-dev
           # N.B. it is possible for the scheduler to report that a DAG exists, only for the worker task to fail if a pod is unexpectedly
           # restarted. Additionally, the db-init job takes a few minutes to complete before the cluster is deployed. The wait/watch steps
           # below are not "water-tight" but add a layer of stability by at least ensuring that the db is initialized and ready and that

demos/data-lakehouse-iceberg-trino-spark/create-nifi-ingestion-job.yaml

Lines changed: 2 additions & 2 deletions
@@ -9,11 +9,11 @@ spec:
       serviceAccountName: demo-serviceaccount
       initContainers:
         - name: wait-for-kafka
-          image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
+          image: docker.stackable.tech/stackable/tools:1.0.0-stackable0.0.0-dev
           command: ["bash", "-c", "echo 'Waiting for all kafka brokers to be ready' && kubectl wait --for=condition=ready --timeout=30m pod -l app.kubernetes.io/instance=kafka -l app.kubernetes.io/name=kafka"]
       containers:
         - name: create-nifi-ingestion-job
-          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.7.0
+          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable0.0.0-dev
           command: ["bash", "-c", "curl -O https://raw.githubusercontent.com/stackabletech/demos/main/demos/data-lakehouse-iceberg-trino-spark/LakehouseKafkaIngest.xml && python -u /tmp/script/script.py"]
           volumeMounts:
             - name: script

demos/data-lakehouse-iceberg-trino-spark/create-spark-ingestion-job.yaml

Lines changed: 2 additions & 2 deletions
@@ -12,11 +12,11 @@ spec:
       serviceAccountName: demo-serviceaccount
       initContainers:
         - name: wait-for-kafka
-          image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
+          image: docker.stackable.tech/stackable/tools:1.0.0-stackable0.0.0-dev
           command: ["bash", "-c", "echo 'Waiting for all kafka brokers to be ready' && kubectl wait --for=condition=ready --timeout=30m pod -l app.kubernetes.io/name=kafka -l app.kubernetes.io/instance=kafka"]
       containers:
         - name: create-spark-ingestion-job
-          image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
+          image: docker.stackable.tech/stackable/tools:1.0.0-stackable0.0.0-dev
           command: ["bash", "-c", "echo 'Submitting Spark job' && kubectl apply -f /tmp/manifest/spark-ingestion-job.yaml"]
           volumeMounts:
             - name: manifest

demos/data-lakehouse-iceberg-trino-spark/create-trino-tables.yaml

Lines changed: 2 additions & 2 deletions
@@ -9,11 +9,11 @@ spec:
       serviceAccountName: demo-serviceaccount
       initContainers:
         - name: wait-for-testdata
-          image: docker.stackable.tech/stackable/tools:1.0.0-stackable24.7.0
+          image: docker.stackable.tech/stackable/tools:1.0.0-stackable0.0.0-dev
           command: ["bash", "-c", "echo 'Waiting for job load-test-data to finish' && kubectl wait --for=condition=complete --timeout=30m job/load-test-data"]
       containers:
         - name: create-tables-in-trino
-          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.7.0
+          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable0.0.0-dev
           command: ["bash", "-c", "python -u /tmp/script/script.py"]
           volumeMounts:
             - name: script

demos/data-lakehouse-iceberg-trino-spark/setup-superset.yaml

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ spec:
     spec:
       containers:
         - name: setup-superset
-          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.7.0
+          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable0.0.0-dev
           command: ["bash", "-c", "curl -o superset-assets.zip https://raw.githubusercontent.com/stackabletech/demos/main/demos/data-lakehouse-iceberg-trino-spark/superset-assets.zip && python -u /tmp/script/script.py"]
           volumeMounts:
             - name: script

demos/end-to-end-security/create-spark-report.yaml

Lines changed: 2 additions & 2 deletions
@@ -12,7 +12,7 @@ spec:
       serviceAccountName: demo-serviceaccount
       initContainers:
         - name: wait-for-trino-tables
-          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.7.0
+          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable0.0.0-dev
           command:
             - bash
             - -euo
@@ -23,7 +23,7 @@ spec:
             kubectl wait --timeout=30m --for=condition=complete job/create-tables-in-trino
       containers:
         - name: create-spark-report
-          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.7.0
+          image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable0.0.0-dev
           command:
             - bash
             - -euo

demos/end-to-end-security/create-trino-tables.yaml

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ spec:
     spec:
      containers:
        - name: create-tables-in-trino
-         image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable24.7.0
+         image: docker.stackable.tech/stackable/testing-tools:0.2.0-stackable0.0.0-dev
          command: ["bash", "-c", "python -u /tmp/script/script.py"]
          volumeMounts:
            - name: script

demos/hbase-hdfs-load-cycling-data/create-hfile-and-import-to-hbase.yaml

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ spec:
     spec:
       containers:
         - name: create-hfile-and-import-to-hbase
-          image: docker.stackable.tech/stackable/hbase:2.4.18-stackable24.7.0
+          image: docker.stackable.tech/stackable/hbase:2.4.18-stackable0.0.0-dev
           env:
             - name: HADOOP_USER_NAME
               value: stackable
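The bulk of this backport sets the demo images back to `0.0.0-dev` tags, with the 24.3.0 exceptions noted in the commit message. A quick grep is one way to confirm that no other release-pinned tags slipped through; the search paths and the excluded tag below are assumptions based on that message, not part of the commit.

```bash
# Hedged check: list image references still pinned to a stackable24.x tag,
# ignoring the 24.3.0 exceptions mentioned in the commit message.
grep -rnE 'stackable24\.[0-9]+\.[0-9]+' demos/ stacks/ \
  | grep -v 'stackable24\.3\.0' \
  || echo "no unexpected release-pinned image tags found"
```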
