Commit d669fe2

fix: base deployable architecture catalog onboarding (#158)
1 parent: c166acc

31 files changed: +816 −388 lines

.catalog-onboard-pipeline.yaml

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ offerings:
     install_type: fullstack # ensure value matches what is in ibm_catalog.json (fullstack or extension)
     destroy_resources_on_failure: false # defaults to false if not specified so resources can be inspected to debug failures during validation
     destroy_workspace_on_failure: false # defaults to false if not specified so schematics workspace can be inspected to debug failures during validation
-    import_only: false # defaults to false - set to true if you do not want to do any validation, but be aware offering can't be publish if not validated
+    import_only: true # defaults to false - set to true if you do not want to do any validation, but be aware offering can't be publish if not validated
     validation_rg: validation # the resource group in which to do validation in. Will be created if does not exist. If not specified, default value is 'validation'
     # scc details needed if your offering is claiming any compliance controls
     scc:

.tekton/README.md

Lines changed: 2 additions & 2 deletions
@@ -27,14 +27,14 @@ https://cloud.ibm.com/devops/getting-started?env_id=ibm:yp:eu-de
 ## Actions on the OnePipeline
 
 1. When PR raised to the develop branch from feature branch, pipeline with trigger and it will run PR_TEST related testcases on top of feature branch
-2. When Commit/Push happens to develop branch, pipeline will trigger and it will run all PR_TEST and OTHER_TEST testcases
+2. When Commit/Push happens to develop branch, pipeline will trigger and it will run all the PR_TEST and OTHER_TEST testcases
 
 ### Setup required parameters to run pipeline
 
 1. ibmcloud-api
 2. ssh_keys
 3. cluster_prefix
-4. zones
+4. zone
 5. resource_group
 6. cluster_id
 7. reservation_id

.tekton/git-pr-status/listener-git-pr-status.yaml

Lines changed: 34 additions & 4 deletions
@@ -31,9 +31,9 @@ spec:
     - name: ssh_keys
       default: ""
       description: List of names of the SSH keys that is configured in your IBM Cloud account, used to establish a connection to the IBM Cloud HPC bastion and login node. Ensure that the SSH key is present in the same resource group and region where the cluster is being provisioned. If you do not have an SSH key in your IBM Cloud account, create one by according to [SSH Keys](https://cloud.ibm.com/docs/vpc?topic=vpc-ssh-keys).
-    - name: zones
+    - name: zone
       default: ""
-      description: IBM Cloud zone names within the selected region where the IBM Cloud HPC cluster should be deployed. Two zone names are required as input value and supported zones for eu-de are eu-de-2, eu-de-3 and for us-east us-east-1, us-east-3. The management nodes and file storage shares will be deployed to the first zone in the list. Compute nodes will be deployed across both first and second zones, where the first zone in the list will be considered as the most preferred zone for compute nodes deployment. [Learn more](https://cloud.ibm.com/docs/vpc?topic=vpc-creating-a-vpc-in-a-different-region#get-zones-using-the-cli).
+      description: The IBM Cloud zone name within the selected region where the IBM Cloud HPC cluster should be deployed and requires a single zone input value. Supported zones are eu-de-2 and eu-de-3 for eu-de, us-east-1 and us-east-3 for us-east, and us-south-1 for us-south. The management nodes, file storage shares, and compute nodes will be deployed in the same zone.[Learn more](https://cloud.ibm.com/docs/vpc?topic=vpc-creating-a-vpc-in-a-different-region#get-zones-using-the-cli).
     - name: cluster_prefix
       description: Prefix that is used to name the IBM Cloud HPC cluster and IBM Cloud resources that are provisioned to build the IBM Cloud HPC cluster instance. You cannot create more than one instance of the IBM Cloud HPC cluster with the same name. Ensure that the name is unique.
       default: cicd-wes
@@ -58,12 +58,30 @@ spec:
     - name: reservation_id
       description: Ensure that you have received the reservation ID from IBM technical sales. Reservation ID is a unique identifier to distinguish different IBM Cloud HPC service agreements. It must start with a letter and can only contain letters, numbers, hyphens (-), or underscores (_).
       default: ""
+    - name: us_east_zone
+      default: ""
+      description: The IBM Cloud zone name within the selected region where the IBM Cloud HPC cluster should be deployed and requires a single zone input value. Supported zones are eu-de-2 and eu-de-3 for eu-de, us-east-1 and us-east-3 for us-east, and us-south-1 for us-south. The management nodes, file storage shares, and compute nodes will be deployed in the same zone.[Learn more](https://cloud.ibm.com/docs/vpc?topic=vpc-creating-a-vpc-in-a-different-region#get-zones-using-the-cli).
+    - name: us_east_cluster_id
+      description: Ensure that you have received the cluster ID from IBM technical sales. A unique identifer for HPC cluster used by IBM Cloud HPC to differentiate different HPC clusters within the same reservation. This can be up to 39 alphanumeric characters including the underscore (_), the hyphen (-), and the period (.) characters. You cannot change the cluster ID after deployment.
+      default: ""
     - name: us_east_reservation_id
       description: Ensure that you have received the reservation ID from IBM technical sales. Reservation ID is a unique identifier to distinguish different IBM Cloud HPC service agreements. It must start with a letter and can only contain letters, numbers, hyphens (-), or underscores (_).
       default: ""
+    - name: eu_de_zone
+      default: ""
+      description: The IBM Cloud zone name within the selected region where the IBM Cloud HPC cluster should be deployed and requires a single zone input value. Supported zones are eu-de-2 and eu-de-3 for eu-de, us-east-1 and us-east-3 for us-east, and us-south-1 for us-south. The management nodes, file storage shares, and compute nodes will be deployed in the same zone.[Learn more](https://cloud.ibm.com/docs/vpc?topic=vpc-creating-a-vpc-in-a-different-region#get-zones-using-the-cli).
+    - name: eu_de_cluster_id
+      description: Ensure that you have received the cluster ID from IBM technical sales. A unique identifer for HPC cluster used by IBM Cloud HPC to differentiate different HPC clusters within the same reservation. This can be up to 39 alphanumeric characters including the underscore (_), the hyphen (-), and the period (.) characters. You cannot change the cluster ID after deployment.
+      default: ""
     - name: eu_de_reservation_id
       description: Ensure that you have received the reservation ID from IBM technical sales. Reservation ID is a unique identifier to distinguish different IBM Cloud HPC service agreements. It must start with a letter and can only contain letters, numbers, hyphens (-), or underscores (_).
       default: ""
+    - name: us_south_zone
+      default: ""
+      description: The IBM Cloud zone name within the selected region where the IBM Cloud HPC cluster should be deployed and requires a single zone input value. Supported zones are eu-de-2 and eu-de-3 for eu-de, us-east-1 and us-east-3 for us-east, and us-south-1 for us-south. The management nodes, file storage shares, and compute nodes will be deployed in the same zone.[Learn more](https://cloud.ibm.com/docs/vpc?topic=vpc-creating-a-vpc-in-a-different-region#get-zones-using-the-cli).
+    - name: us_south_cluster_id
+      description: Ensure that you have received the cluster ID from IBM technical sales. A unique identifer for HPC cluster used by IBM Cloud HPC to differentiate different HPC clusters within the same reservation. This can be up to 39 alphanumeric characters including the underscore (_), the hyphen (-), and the period (.) characters. You cannot change the cluster ID after deployment.
+      default: ""
     - name: us_south_reservation_id
       description: Ensure that you have received the reservation ID from IBM technical sales. Reservation ID is a unique identifier to distinguish different IBM Cloud HPC service agreements. It must start with a letter and can only contain letters, numbers, hyphens (-), or underscores (_).
       default: ""
@@ -105,8 +123,8 @@ spec:
           value: $(params.directory-name)
         - name: ssh_keys
           value: $(params.ssh_keys)
-        - name: zones
-          value: $(params.zones)
+        - name: zone
+          value: $(params.zone)
         - name: cluster_prefix
           value: $(params.cluster_prefix)
         - name: resource_group
@@ -123,10 +141,22 @@ spec:
           value: $(params.cluster_id)
         - name: reservation_id
           value: $(params.reservation_id)
+        - name: us_east_zone
+          value: $(params.us_east_zone)
+        - name: us_east_cluster_id
+          value: $(params.us_east_cluster_id)
         - name: us_east_reservation_id
           value: $(params.us_east_reservation_id)
+        - name: eu_de_zone
+          value: $(params.eu_de_zone)
+        - name: eu_de_cluster_id
+          value: $(params.eu_de_cluster_id)
         - name: eu_de_reservation_id
           value: $(params.eu_de_reservation_id)
+        - name: us_south_zone
+          value: $(params.us_south_zone)
+        - name: us_south_cluster_id
+          value: $(params.us_south_cluster_id)
         - name: us_south_reservation_id
           value: $(params.us_south_reservation_id)
       workspaces:
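The single-`zone` parameter introduced above replaces the old two-entry `zones` list, and its description enumerates the supported zones per region. A minimal shell sketch of that constraint, as a hypothetical pre-flight check (not part of this commit; the function name is an assumption):

```shell
#!/bin/bash
# Hypothetical pre-flight check (not part of the commit): verify that the
# single `zone` value is one of the supported zones listed in the parameter
# description before triggering the pipeline.
validate_zone() {
  case "$1" in
    eu-de-2|eu-de-3|us-east-1|us-east-3|us-south-1)
      echo "zone '$1' is supported"
      ;;
    *)
      echo "zone '$1' is not supported"
      return 1
      ;;
  esac
}

validate_zone "eu-de-1" || true   # prints: zone 'eu-de-1' is not supported
validate_zone "us-south-1"        # prints: zone 'us-south-1' is supported
```

A check like this fails fast on a typo such as `eu-de-1` instead of letting the Terraform apply fail much later during validation.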

.tekton/git-pr-status/pipeline-git-pr-status.yaml

Lines changed: 83 additions & 26 deletions
@@ -36,9 +36,9 @@ spec:
     - name: ssh_keys
       default: ""
       description: List of names of the SSH keys that is configured in your IBM Cloud account, used to establish a connection to the IBM Cloud HPC bastion and login node. Ensure that the SSH key is present in the same resource group and region where the cluster is being provisioned. If you do not have an SSH key in your IBM Cloud account, create one by according to [SSH Keys](https://cloud.ibm.com/docs/vpc?topic=vpc-ssh-keys).
-    - name: zones
+    - name: zone
       default: ""
-      description: IBM Cloud zone names within the selected region where the IBM Cloud HPC cluster should be deployed. Two zone names are required as input value and supported zones for eu-de are eu-de-2, eu-de-3 and for us-east us-east-1, us-east-3. The management nodes and file storage shares will be deployed to the first zone in the list. Compute nodes will be deployed across both first and second zones, where the first zone in the list will be considered as the most preferred zone for compute nodes deployment. [Learn more](https://cloud.ibm.com/docs/vpc?topic=vpc-creating-a-vpc-in-a-different-region#get-zones-using-the-cli).
+      description: The IBM Cloud zone name within the selected region where the IBM Cloud HPC cluster should be deployed and requires a single zone input value. Supported zones are eu-de-2 and eu-de-3 for eu-de, us-east-1 and us-east-3 for us-east, and us-south-1 for us-south. The management nodes, file storage shares, and compute nodes will be deployed in the same zone.[Learn more](https://cloud.ibm.com/docs/vpc?topic=vpc-creating-a-vpc-in-a-different-region#get-zones-using-the-cli).
     - name: cluster_prefix
       description: Prefix that is used to name the IBM Cloud HPC cluster and IBM Cloud resources that are provisioned to build the IBM Cloud HPC cluster instance. You cannot create more than one instance of the IBM Cloud HPC cluster with the same name. Ensure that the name is unique.
       default: cicd-wes
@@ -143,6 +143,8 @@ spec:
       params:
         - name: pipeline-debug
           value: $(params.pipeline-debug)
+        - name: repository
+          value: $(params.repository)
     - name: ssh-key-creation
       runAfter: [git-clone, pre-requisites-install]
       taskRef:
@@ -155,6 +157,8 @@ spec:
           value: $(params.pipeline-debug)
         - name: resource_group
           value: $(params.resource_group)
+        - name: pr-revision
+          value: $(params.pr-revision)
     - name: wes-hpc-da-rhel-pr
       runAfter: [git-clone, pre-requisites-install, ssh-key-creation]
       taskRef:
@@ -169,8 +173,8 @@ spec:
           value: $(params.pipeline-debug)
         - name: ssh_keys
           value: $(params.ssh_keys)
-        - name: zones
-          value: $(params.zones)
+        - name: zone
+          value: $(params.zone)
         - name: cluster_prefix
           value: $(params.cluster_prefix)
         - name: resource_group
@@ -187,6 +191,8 @@ spec:
           value: $(params.cluster_id)
         - name: reservation_id
           value: $(params.reservation_id)
+        - name: pr-revision
+          value: $(params.pr-revision)
     - name: wes-hpc-da-ubuntu-pr
       runAfter: [git-clone, pre-requisites-install, ssh-key-creation]
       taskRef:
@@ -201,8 +207,8 @@ spec:
           value: $(params.pipeline-debug)
         - name: ssh_keys
           value: $(params.ssh_keys)
-        - name: zones
-          value: $(params.zones)
+        - name: zone
+          value: $(params.zone)
         - name: cluster_prefix
           value: $(params.cluster_prefix)
         - name: resource_group
@@ -219,6 +225,8 @@ spec:
           value: $(params.cluster_id)
         - name: reservation_id
           value: $(params.reservation_id)
+        - name: pr-revision
+          value: $(params.pr-revision)
     - name: ssh-key-deletion
       runAfter: [wes-hpc-da-rhel-pr, wes-hpc-da-ubuntu-pr]
       taskRef:
@@ -229,6 +237,64 @@ spec:
       params:
         - name: pipeline-debug
           value: $(params.pipeline-debug)
+        - name: pr-revision
+          value: $(params.pr-revision)
+    - name: display-test-run-pr-output-log
+      runAfter: [git-clone, set-git-pr-running, wes-hpc-da-rhel-pr, wes-hpc-da-ubuntu-pr]
+      workspaces:
+        - name: workspace
+          workspace: pipeline-ws
+      taskSpec:
+        workspaces:
+          - name: workspace
+            description: The git repo will be cloned onto the volume backing this workspace
+            mountPath: /artifacts
+        steps:
+          - name: test-run-output-pr-log-rhel-ubuntu
+            image: icr.io/continuous-delivery/pipeline/pipeline-base-ubi:latest
+            workingDir: "/artifacts"
+            command: ["/bin/bash", "-c"]
+            args:
+              - |
+                #!/bin/bash
+                DIRECTORY="/artifacts/tests"
+                if [ -d "$DIRECTORY" ]; then
+                  echo "*******************************************************"
+                  count=`ls -1 $DIRECTORY/test_output/log* 2>/dev/null | wc -l`
+                  if [ $count == 0 ]; then
+                    echo "Test Suite have not initated and log file not created, check with packages or binaries installation"
+                    exit 1
+                  else
+                    cat $DIRECTORY/test_output/log*
+                  fi
+                  echo "*******************************************************"
+                else
+                  echo "$DIRECTORY does not exits"
+                  exit 1
+                fi
+          - name: test-run-output-pr-log-rhel-ubuntu-error-check
+            image: icr.io/continuous-delivery/pipeline/pipeline-base-ubi:latest
+            workingDir: "/artifacts"
+            command: ["/bin/bash", "-c"]
+            args:
+              - |
+                #!/bin/bash
+                DIRECTORY="/artifacts/tests"
+                if [ -d "$DIRECTORY" ]; then
+                  echo "*******************************************************"
+                  if [ -d "$DIRECTORY" ]; then
+                    # Check any error message in the test run output log
+                    error_check=$(eval "grep -E -w 'FAIL|Error|ERROR' $DIRECTORY/test_output/log*")
+                    if [[ "$error_check" ]]; then
+                      echo "$error_check"
+                      echo "Found Error/FAIL/ERROR in the test run output log. Please check log."
+                    fi
+                  fi
+                  echo "*******************************************************"
+                else
+                  echo "$DIRECTORY does not exits"
+                  exit 1
+                fi
     - name: inspect-wes-hpc-infra-log
       runAfter: [git-clone, set-git-pr-running, wes-hpc-da-rhel-pr, wes-hpc-da-ubuntu-pr]
       workspaces:
@@ -241,14 +307,14 @@ spec:
             mountPath: /artifacts
         steps:
           - name: inspect-infra-error-rhel-pr
+            onError: continue
             image: icr.io/continuous-delivery/pipeline/pipeline-base-ubi:latest
             workingDir: "/artifacts"
             command: ["/bin/bash", "-c"]
             args:
               - |
                 #!/bin/bash
-                pwd
-                LOG_FILE="pipeline-TestRunBasic-rhel*"
+                LOG_FILE="pipeline-testrunbasic-rhel*"
                 DIRECTORY="/artifacts/tests"
                 if [ -d "$DIRECTORY" ]; then
                   # Check any error message on the plan/apply log
@@ -260,22 +326,18 @@ spec:
                 else
                   count=`ls -1 $DIRECTORY/test_output/log* 2>/dev/null | wc -l`
                   if [ $count == 0 ]; then
-                    echo "Test Suite have not initated and log file not created, check with packages or binaries installation"
-                    exit 1
-                  else
-                    echo "*******************************************************"
-                    cat $DIRECTORY/test_output/log*
-                    echo "*******************************************************"
-                    echo "No Error Found, infra got SUCCESS"
+                    echo "Test Suite have not initated and log file not created, check with packages or binaries installation"
+                    exit 1
                   fi
                 fi
               else
                 echo "$DIRECTORY does not exits"
                 exit 1
               fi
+
                 count=`ls -1 $DIRECTORY/*.cicd 2>/dev/null | wc -l`
                 if [ $count == 0 ]; then
-                  echo "Test Suite have not initated, check with packages or binaries installation"
+                  echo "Test Suite have not initated, check with packages or binaries installations"
                   exit 1
                 fi
           - name: inspect-infra-error-ubuntu-pr
@@ -285,8 +347,7 @@ spec:
             args:
               - |
                 #!/bin/bash
-                pwd
-                LOG_FILE="pipeline-TestRunBasic-ubuntu*"
+                LOG_FILE="pipeline-testrunbasic-ubuntu*"
                 DIRECTORY="/artifacts/tests"
                 if [ -d "$DIRECTORY" ]; then
                   # Check any error message on the plan/apply log
@@ -298,22 +359,18 @@ spec:
                 else
                   count=`ls -1 $DIRECTORY/test_output/log* 2>/dev/null | wc -l`
                   if [ $count == 0 ]; then
-                    echo "Test Suite have not initated and log file not created, check with packages or binaries installation"
-                    exit 1
-                  else
-                    echo "*******************************************************"
-                    cat $DIRECTORY/test_output/log*
-                    echo "*******************************************************"
-                    echo "No Error Found, infra got SUCCESS"
+                    echo "Test Suite have not initated and log file not created, check with packages or binaries installation"
+                    exit 1
                   fi
                 fi
               else
                 echo "$DIRECTORY does not exits"
                 exit 1
               fi
+
                 count=`ls -1 $DIRECTORY/*.cicd 2>/dev/null | wc -l`
                 if [ $count == 0 ]; then
-                  echo "Test Suite have not initated, check with packages or binaries installation"
+                  echo "Test Suite have not initated, check with packages or binaries installations"
                   exit 1
                 fi
   finally:
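The commit's new error-check step reduces to a `grep` over the test-run logs for failure markers. A standalone sketch of that logic, runnable outside Tekton (the temp directory here is a stand-in for the pipeline's `/artifacts/tests/test_output`):

```shell
#!/bin/bash
# Standalone sketch of the pipeline's log error check. The temp directory is
# a stand-in for /artifacts/tests/test_output; the markers match the commit.
LOG_DIR=$(mktemp -d)
printf 'step 1 ok\nstep 2 Error: apply failed\n' > "$LOG_DIR/log-demo"

# -E enables extended regex, -w matches whole words only, so a token like
# "Errors" would not match while "Error:" does.
error_check=$(grep -E -w 'FAIL|Error|ERROR' "$LOG_DIR"/log* || true)
if [ -n "$error_check" ]; then
  echo "$error_check"
  echo "Found Error/FAIL/ERROR in the test run output log."
fi
```

Note the `|| true`: `grep` exits non-zero when nothing matches, so without it a clean log would abort a script running under `set -e`, which is one reason the committed step only reports rather than failing the task.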