
Commit 05cc517

feat: remove requirement tests (#354)
### Description

Remove code related to the requirement tests, as they are no longer used.

### Checklist

- [x] `README.md` has been updated or is not required
- [ ] push trigger tests
- [ ] manual release test
- [ ] automated release test
- [ ] pull request trigger tests
- [ ] schedule trigger tests
- [ ] workflow errors/warnings reviewed and addressed

### Testing done

(for each selected checkbox, the corresponding test results link should be listed here)

Co-authored-by: mkolasinski-splunk <mkolasinski@splunk.com>
1 parent c023e69 commit 05cc517

File tree

3 files changed: +2 −317 lines


.github/workflows/reusable-build-test-release.yml

Lines changed: 1 addition & 289 deletions
@@ -127,7 +127,6 @@ jobs:
 execute-modinput-labeled: ${{ steps.configure-tests-on-labels.outputs.execute_modinput_functional_labeled }}
 execute-ucc-modinput-labeled: ${{ steps.configure-tests-on-labels.outputs.execute_ucc_modinput_functional_labeled }}
 execute-scripted_inputs-labeled: ${{ steps.configure-tests-on-labels.outputs.execute_scripted_inputs_labeled }}
-execute-requirement-labeled: ${{ steps.configure-tests-on-labels.outputs.execute_requirement_test_labeled }}
 execute-upgrade-labeled: ${{ steps.configure-tests-on-labels.outputs.execute_upgrade_test_labeled }}
 exit-first: ${{ steps.configure-tests-on-labels.outputs.exit-first }}
 s3_bucket_k8s: ${{ steps.k8s-environment.outputs.s3_bucket }}
@@ -158,7 +157,7 @@ jobs:
 run: |
 set +e
 declare -A EXECUTE_LABELED
-TESTSET=("execute_knowledge" "execute_spl2" "execute_ui" "execute_modinput_functional" "execute_ucc_modinput_functional" "execute_scripted_inputs" "execute_requirement_test" "execute_upgrade")
+TESTSET=("execute_knowledge" "execute_spl2" "execute_ui" "execute_modinput_functional" "execute_ucc_modinput_functional" "execute_scripted_inputs" "execute_upgrade")
 for test_type in "${TESTSET[@]}"; do
 EXECUTE_LABELED["$test_type"]="false"
 done
@@ -380,7 +379,6 @@ jobs:
 knowledge: ${{ steps.testset.outputs.knowledge }}
 ui: ${{ steps.testset.outputs.ui }}
 modinput_functional: ${{ steps.testset.outputs.modinput_functional }}
-requirement_test: ${{ steps.testset.outputs.requirement_test }}
 scripted_inputs: ${{ steps.testset.outputs.scripted_inputs }}
 ucc_modinput_functional: ${{ steps.testset.outputs.ucc_modinput_functional }}
 upgrade: ${{ steps.testset.outputs.upgrade }}
@@ -826,37 +824,6 @@ jobs:
 with:
 version: ${{ steps.BuildVersion.outputs.VERSION }}

-run-requirements-unit-tests:
-runs-on: ubuntu-22.04
-needs:
-- build
-- test-inventory
-if: ${{ !cancelled() && needs.build.result == 'success' && needs.test-inventory.outputs.requirement_test == 'true' }}
-permissions:
-actions: read
-deployments: read
-contents: read
-packages: read
-statuses: read
-checks: write
-steps:
-- uses: actions/checkout@v4
-- name: Install Python 3
-uses: actions/setup-python@v5
-with:
-python-version: 3.7
-- name: run-tests
-uses: splunk/addonfactory-workflow-requirement-files-unit-tests@v1.4
-with:
-input-files: tests/requirement_test/logs
-- name: Archive production artifacts
-if: ${{ !cancelled() }}
-uses: actions/upload-artifact@v4
-with:
-name: test-results
-path: |
-test_*.txt
-
 appinspect:
 name: quality-appinspect-${{ matrix.tags }}
 needs: build
@@ -1582,261 +1549,6 @@ jobs:
 name: |
 summary-ko*

-run-requirement-tests:
-if: ${{ !cancelled() && needs.build.result == 'success' && needs.test-inventory.outputs.requirement_test == 'true' && needs.setup-workflow.outputs.execute-requirement-labeled == 'true' }}
-needs:
-- build
-- test-inventory
-- setup
-- meta
-- setup-workflow
-runs-on: ubuntu-latest
-strategy:
-fail-fast: false
-matrix:
-splunk: ${{ fromJson(needs.meta.outputs.matrix_Splunk) }}
-sc4s: ${{ fromJson(needs.meta.outputs.matrix_supportedSC4S) }}
-container:
-image: ghcr.io/splunk/workflow-engine-base:4.1.0
-env:
-ARGO_SERVER: ${{ needs.setup.outputs.argo-server }}
-ARGO_HTTP1: ${{ needs.setup.outputs.argo-http1 }}
-ARGO_SECURE: ${{ needs.setup.outputs.argo-secure }}
-ARGO_BASE_HREF: ${{ needs.setup.outputs.argo-href }}
-ARGO_NAMESPACE: ${{ needs.setup.outputs.argo-namespace }}
-TEST_TYPE: "requirement_test"
-TEST_ARGS: ""
-permissions:
-actions: read
-deployments: read
-contents: read
-packages: read
-statuses: read
-checks: write
-steps:
-- uses: actions/checkout@v4
-with:
-submodules: recursive
-- name: configure git # This step configures git to omit "dubious git ownership error" in later test-reporter stage
-id: configure-git
-run: |
-git --version
-git_path="$(pwd)"
-echo "$git_path"
-git config --global --add safe.directory "$git_path"
-- name: capture start time
-id: capture-start-time
-run: |
-echo "start_time=$(date +%s)" >> "$GITHUB_OUTPUT"
-- name: Configure AWS credentials
-uses: aws-actions/configure-aws-credentials@v4
-with:
-aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
-aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
-aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
-- name: Read secrets from AWS Secrets Manager into environment variables
-id: get-argo-token
-run: |
-ARGO_TOKEN=$(aws secretsmanager get-secret-value --secret-id "${{ needs.setup-workflow.outputs.argo_token_secret_id_k8s }}" | jq -r '.SecretString')
-echo "argo-token=$ARGO_TOKEN" >> "$GITHUB_OUTPUT"
-- name: create job name
-id: create-job-name
-shell: bash
-run: |
-RANDOM_STRING=$(head -3 /dev/urandom | tr -cd '[:lower:]' | cut -c -4)
-JOB_NAME=${{ needs.setup.outputs.job-name }}-${RANDOM_STRING}
-JOB_NAME=${JOB_NAME//TEST-TYPE/${{ env.TEST_TYPE }}}
-JOB_NAME=${JOB_NAME//[_.]/-}
-JOB_NAME=$(echo "$JOB_NAME" | tr '[:upper:]' '[:lower:]')
-echo "job-name=$JOB_NAME" >> "$GITHUB_OUTPUT"
-- name: run-tests
-id: run-tests
-timeout-minutes: 340
-continue-on-error: true
-env:
-ARGO_TOKEN: ${{ steps.get-argo-token.outputs.argo-token }}
-uses: splunk/wfe-test-runner-action@v5.1
-with:
-splunk: ${{ matrix.splunk.version }}
-test-type: ${{ env.TEST_TYPE }}
-test-args: ${{ needs.setup-workflow.outputs.exit-first }}
-job-name: ${{ steps.create-job-name.outputs.job-name }}
-labels: ${{ needs.setup.outputs.labels }}
-workflow-tmpl-name: ${{ needs.setup.outputs.argo-workflow-tmpl-name }}
-workflow-template-ns: ${{ needs.setup.outputs.argo-namespace }}
-addon-url: ${{ needs.setup.outputs.addon-upload-path }}
-addon-name: ${{ needs.setup.outputs.addon-name }}
-sc4s-version: ${{ matrix.sc4s.version }}
-sc4s-docker-registry: ${{ matrix.sc4s.docker_registry }}
-k8s-manifests-branch: ${{ needs.setup.outputs.k8s-manifests-branch }}
-- name: calculate timeout
-id: calculate-timeout
-run: |
-start_time=${{ steps.capture-start-time.outputs.start_time }}
-current_time=$(date +%s)
-remaining_time_minutes=$(( 350-((current_time-start_time)/60) ))
-echo "remaining_time_minutes=$remaining_time_minutes" >> "$GITHUB_OUTPUT"
-- name: Check if pod was deleted
-id: is-pod-deleted
-timeout-minutes: ${{ fromJson(steps.calculate-timeout.outputs.remaining_time_minutes) }}
-if: ${{ !cancelled() }}
-shell: bash
-env:
-ARGO_TOKEN: ${{ steps.get-argo-token.outputs.argo-token }}
-run: |
-set -o xtrace
-if argo watch ${{ steps.run-tests.outputs.workflow-name }} -n workflows | grep "pod deleted"; then
-echo "retry-workflow=true" >> "$GITHUB_OUTPUT"
-fi
-- name: Cancel workflow
-env:
-ARGO_TOKEN: ${{ steps.get-argo-token.outputs.argo-token }}
-if: ${{ cancelled() || steps.is-pod-deleted.outcome != 'success' }}
-run: |
-cancel_response=$(argo submit -v -o json --from wftmpl/${{ needs.setup.outputs.argo-cancel-workflow-tmpl-name }} -l workflows.argoproj.io/workflow-template=${{ needs.setup.outputs.argo-cancel-workflow-tmpl-name }} --argo-base-href '' -p workflow-to-cancel=${{ steps.run-tests.outputs.workflow-name }})
-cancel_workflow_name=$( echo "$cancel_response" |jq -r '.metadata.name' )
-cancel_logs=$(argo logs --follow "$cancel_workflow_name" -n workflows)
-if echo "$cancel_logs" | grep -q "workflow ${{ steps.run-tests.outputs.workflow-name }} stopped"; then
-echo "Workflow ${{ steps.run-tests.outputs.workflow-name }} stopped"
-else
-echo "Workflow ${{ steps.run-tests.outputs.workflow-name }} didn't stop"
-exit 1
-fi
-- name: Retrying workflow
-id: retry-wf
-shell: bash
-env:
-ARGO_TOKEN: ${{ steps.get-argo-token.outputs.argo-token }}
-if: ${{ !cancelled() }}
-run: |
-set -o xtrace
-set +e
-if [[ "${{ steps.is-pod-deleted.outputs.retry-workflow }}" == "true" ]]
-then
-WORKFLOW_NAME=$(argo resubmit -v -o json -n workflows "${{ steps.run-tests.outputs.workflow-name }}" | jq -r .metadata.name)
-echo "workflow-name=$WORKFLOW_NAME" >> "$GITHUB_OUTPUT"
-argo logs --follow "${WORKFLOW_NAME}" -n workflows || echo "... there was an error fetching logs, the workflow is still in progress. please wait for the workflow to complete ..."
-else
-echo "No retry required"
-argo wait "${{ steps.run-tests.outputs.workflow-name }}" -n workflows
-argo watch "${{ steps.run-tests.outputs.workflow-name }}" -n workflows | grep "test-addon"
-fi
-- name: check if workflow completed
-env:
-ARGO_TOKEN: ${{ steps.get-argo-token.outputs.argo-token }}
-shell: bash
-if: ${{ !cancelled() }}
-run: |
-set +e
-# shellcheck disable=SC2157
-if [ -z "${{ steps.retry-wf.outputs.workflow-name }}" ]; then
-WORKFLOW_NAME=${{ steps.run-tests.outputs.workflow-name }}
-else
-WORKFLOW_NAME="${{ steps.retry-wf.outputs.workflow-name }}"
-fi
-ARGO_STATUS=$(argo get "${WORKFLOW_NAME}" -n workflows -o json | jq -r '.status.phase')
-echo "Status of workflow:" "$ARGO_STATUS"
-while [ "$ARGO_STATUS" == "Running" ] || [ "$ARGO_STATUS" == "Pending" ]
-do
-echo "... argo Workflow ${WORKFLOW_NAME} is running, waiting for it to complete."
-argo wait "${WORKFLOW_NAME}" -n workflows || true
-ARGO_STATUS=$(argo get "${WORKFLOW_NAME}" -n workflows -o json | jq -r '.status.phase')
-done
-- name: pull artifacts from s3 bucket
-if: ${{ !cancelled() }}
-run: |
-echo "pulling artifacts"
-aws s3 cp s3://${{ needs.setup.outputs.s3-bucket }}/artifacts-${{ steps.create-job-name.outputs.job-name }}/${{ steps.create-job-name.outputs.job-name }}.tgz ${{ needs.setup.outputs.directory-path }}/
-tar -xf ${{ needs.setup.outputs.directory-path }}/${{ steps.create-job-name.outputs.job-name }}.tgz -C ${{ needs.setup.outputs.directory-path }}
-- name: pull logs from s3 bucket
-if: ${{ !cancelled() }}
-run: |
-# shellcheck disable=SC2157
-if [ -z "${{ steps.retry-wf.outputs.workflow-name }}" ]; then
-WORKFLOW_NAME=${{ steps.run-tests.outputs.workflow-name }}
-else
-WORKFLOW_NAME="${{ steps.retry-wf.outputs.workflow-name }}"
-fi
-echo "pulling logs"
-mkdir -p ${{ needs.setup.outputs.directory-path }}/argo-logs
-aws s3 cp s3://${{ needs.setup.outputs.s3-bucket }}/workflows/${WORKFLOW_NAME}/ ${{ needs.setup.outputs.directory-path }}/argo-logs/ --recursive
-- uses: actions/upload-artifact@v4
-if: ${{ !cancelled() }}
-with:
-name: archive splunk ${{ matrix.splunk.version }} ${{ env.TEST_TYPE }} tests artifacts
-path: |
-${{ needs.setup.outputs.directory-path }}/test-results
-- uses: actions/upload-artifact@v4
-if: ${{ !cancelled() }}
-with:
-name: archive splunk ${{ matrix.splunk.version }} ${{ env.TEST_TYPE }} tests logs
-path: |
-${{ needs.setup.outputs.directory-path }}/argo-logs
-- name: Test Report
-id: test_report
-uses: dorny/test-reporter@v1.9.1
-if: ${{ !cancelled() }}
-with:
-name: splunk ${{ matrix.splunk.version }} ${{ env.TEST_TYPE }} test report
-path: "${{ needs.setup.outputs.directory-path }}/test-results/*.xml"
-reporter: java-junit
-- name: Parse JUnit XML
-if: ${{ !cancelled() }}
-run: |
-apt-get update
-apt-get install -y libxml2-utils
-junit_xml_path="${{ needs.setup.outputs.directory-path }}/test-results"
-junit_xml_file=$(find "$junit_xml_path" -name "*.xml" -type f 2>/dev/null | head -n 1)
-if [ -n "$junit_xml_file" ]; then
-total_tests=$(xmllint --xpath "count(//testcase)" "$junit_xml_file")
-failures=$(xmllint --xpath "count(//testcase[failure])" "$junit_xml_file")
-errors=$(xmllint --xpath "count(//testcase[error])" "$junit_xml_file")
-skipped=$(xmllint --xpath "count(//testcase[skipped])" "$junit_xml_file")
-passed=$((total_tests - failures - errors - skipped))
-echo "splunk ${{ matrix.splunk.version }}${{ secrets.OTHER_TA_REQUIRED_CONFIGS }} |$total_tests |$passed |$failures |$errors |$skipped |${{steps.test_report.outputs.url_html}}" > job_summary.txt
-else
-echo "no XML File found, exiting"
-exit 1
-fi
-- name: Upload-artifact-for-github-summary
-uses: actions/upload-artifact@v4
-if: ${{ !cancelled() }}
-with:
-name: summary-requirement-${{ matrix.splunk.version }}-${{ secrets.OTHER_TA_REQUIRED_CONFIGS }}
-path: job_summary.txt
-- name: pull diag from s3 bucket
-if: ${{ failure() && steps.test_report.outputs.conclusion == 'failure' }}
-run: |
-echo "pulling diag"
-aws s3 cp s3://${{ needs.setup.outputs.s3-bucket }}/diag-${{ steps.create-job-name.outputs.job-name }}/diag-${{ steps.create-job-name.outputs.job-name }}.tgz ${{ needs.setup.outputs.directory-path }}/
-- uses: actions/upload-artifact@v4
-if: ${{ failure() && steps.test_report.outputs.conclusion == 'failure' }}
-with:
-name: archive splunk ${{ matrix.splunk.version }} ${{ env.TEST_TYPE }} tests diag
-path: |
-${{ needs.setup.outputs.directory-path }}/diag*
-
-Requirement-input-tests-report:
-needs: run-requirement-tests
-runs-on: ubuntu-latest
-if: ${{ !cancelled() && needs.run-requirement-tests.result != 'skipped' }}
-steps:
-- name: Download all summaries
-uses: actions/download-artifact@v4
-with:
-pattern: summary-requirement*
-- name: Combine summaries into a table
-run: |
-echo "| Job | Total Tests | Passed Tests | Failed Tests | Errored Tests | Skipped Tests | Report Link" >> "$GITHUB_STEP_SUMMARY"
-echo "| ---------- | ----------- | ------ | ------ | ------ | ------- | ------ |" >> "$GITHUB_STEP_SUMMARY"
-for file in summary-requirement*/job_summary.txt; do
-cat "$file" >> "$GITHUB_STEP_SUMMARY"
-done
-- uses: geekyeggo/delete-artifact@v5
-with:
-name: |
-summary-requirement*
-
 run-ui-tests:
 if: ${{ !cancelled() && needs.build.result == 'success' && needs.test-inventory.outputs.ui == 'true' && needs.setup-workflow.outputs.execute-ui-labeled == 'true' }}
 needs:
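The only behavioral edit to the label-handling step is the shorter `TESTSET` shown in the second hunk above. As a condensed, illustrative sketch of what that step now seeds (the step name is approximated and the later label-matching logic that flips entries to "true" is omitted; only the `TESTSET`/`EXECUTE_LABELED` seeding is taken from the diff):

```
# Condensed sketch, not the full step from the workflow.
- name: configure-tests-on-labels (sketch)
  shell: bash
  run: |
    set +e
    # Seed every remaining test type to "false"; requirement_test is gone.
    declare -A EXECUTE_LABELED
    TESTSET=("execute_knowledge" "execute_spl2" "execute_ui" "execute_modinput_functional" "execute_ucc_modinput_functional" "execute_scripted_inputs" "execute_upgrade")
    for test_type in "${TESTSET[@]}"; do
      EXECUTE_LABELED["$test_type"]="false"
    done
```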

README.md

Lines changed: 1 addition & 28 deletions
@@ -258,7 +258,6 @@ test-inventory
 **Output:**

 ```
-requirement_test::true
 ui_local::true
 knowledge::true
 unit::true
@@ -432,31 +431,6 @@ appinspect-api-html-report-self-service

 - Junit Test result xml file

-# run-requirements-unit-tests
-**Description**
-
-- This action provides unit tests for Splunk TA's requirement logs. test_lib contains tests for XML format checking, schema validating and CIM model mapping.
-
-**Action used** https://github.com/splunk/addonfactory-workflow-requirement-files-unit-tests
-
-**Pass/fail behaviour:**
-
-- The stage is expected to fail only if there are any test-case failures observed related to logs, CIM fields related issue or XML file does not matches the schema defined https://github.com/splunk/requirement-files-unit-tests/blob/main/test_lib/schema.xsd .
-
-**Troubleshooting steps for failures if any**
-
-- Check for failure logs and update the log/XML files accordingly to match the schema defined in the repo.
-
-**Artifacts:**
-
-```
-test_validation_output.txt
-test_transport_params_output.txt
-test_format_output.txt
-test_cim_output.txt
-test_check_unicode_output.txt
-```
-
 # run-btool-check

 **Description:**
@@ -663,8 +637,7 @@ pre-publish

 **Troubleshooting steps for failures if any**

-- In the logs it outputs a json with the info of stages and their pass/fail status. <br />
-<img src="images/requirement-tests/stage-logs.png" alt="stage-logs" style="width:200px;"/>
+- In the logs it outputs a json with the info of stages and their pass/fail status. <br />

 **Artifacts:**
 - No additional artifacts
Binary file not shown (−744 KB).
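For context on what remains after this commit: each surviving test job keeps the same gating pattern the removed requirement jobs used, combining the build result, a `test-inventory` output, and a `setup-workflow` label output. A minimal sketch of that pattern, reusing the `run-ui-tests` condition visible in the workflow diff above (the `needs` list, runner, and steps below are abbreviated and only illustrative, not the complete job definition):

```
# Illustrative gating sketch; only the if: expression is taken verbatim
# from the diff context, the rest is abbreviated.
run-ui-tests:
  needs:
    - build
    - test-inventory
    - setup-workflow
  if: ${{ !cancelled() && needs.build.result == 'success' && needs.test-inventory.outputs.ui == 'true' && needs.setup-workflow.outputs.execute-ui-labeled == 'true' }}
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
```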
