This repository accompanies the CADET-Core simulator and includes a comprehensive suite of tests that extends beyond the scope of a typical CI pipeline. CADET-Verification verifies the models and numerical methods implemented in CADET-Core via order-of-convergence tests and the recreation of validated case studies. CADET-Verification is part of the deployment pipeline of CADET-Core and must additionally be run on demand, i.e. whenever critical changes are made to the simulator.
The research data generated by CADET-Verification is automatically managed using CADET-RDM. The results of the verification studies can be accessed in the CADET-Verification-Output repository.
CADET-Verification must be executed as part of the CADET-Core release process and on demand, particularly when critical changes are introduced to the simulator.
To trigger a verification run, manually dispatch the `verify.yml` GitHub Actions workflow. During this process, you must specify the appropriate pytest fixtures. For guidance on the available parameters and their default values, refer to the `conftest.py` file.
The workflow input for a release verification run might look like this:

```
--branch-name=release/cadet-core_v504 --small-test=false --rdm-debug-mode=false --rdm-push=true
```
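Such parameters are typically declared in `conftest.py` via pytest's `pytest_addoption` hook and exposed to the tests as fixtures. The snippet below is only an illustrative sketch of that pattern: the option names mirror the example above, while the defaults, help texts, and fixture names are assumptions; the authoritative definitions are those in the repository's `conftest.py`.

```python
# Illustrative sketch only: option names mirror the example above, but the
# defaults, help texts, and fixtures are assumptions; see the repository's
# conftest.py for the authoritative definitions.
import pytest


def pytest_addoption(parser):
    parser.addoption("--branch-name", action="store", default="master",
                     help="CADET-Core branch to verify")
    parser.addoption("--small-test", action="store", default="true",
                     help="If 'true', run the reduced verification suite")
    parser.addoption("--rdm-debug-mode", action="store", default="true",
                     help="If 'true', run CADET-RDM in debug mode")
    parser.addoption("--rdm-push", action="store", default="false",
                     help="If 'true', push results to the output repository")


@pytest.fixture
def small_test(request):
    # Expose the command-line option to tests as a boolean fixture.
    return request.config.getoption("--small-test").lower() == "true"
```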
For release verification runs, ensure the following options are explicitly set:

- `--rdm-debug-mode=false`
- `--small-test=false`
Once the workflow completes, the generated verification data will be published to the CADET-Verification-Output repository.
⚠️ Note: There is currently no automated validation of the verification results. Final evaluation must be performed manually by comparing the new output with results from the previous run.
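A small script can support this manual comparison. The sketch below assumes that both output revisions contain CSV result tables with matching relative paths; the actual layout of the CADET-Verification-Output repository may differ, and all paths, file patterns, and tolerances are placeholders.

```python
# Hypothetical comparison helper: assumes CSV result tables with matching
# relative paths in both output revisions; adjust patterns and tolerances
# to the actual structure of the output repository.
from pathlib import Path

import pandas as pd


def compare_runs(previous_dir, current_dir, atol=1e-8):
    """Print files whose numeric contents deviate between two verification runs."""
    previous_dir, current_dir = Path(previous_dir), Path(current_dir)
    for prev_file in sorted(previous_dir.rglob("*.csv")):
        curr_file = current_dir / prev_file.relative_to(previous_dir)
        if not curr_file.exists():
            print(f"missing in new run: {curr_file}")
            continue
        prev, curr = pd.read_csv(prev_file), pd.read_csv(curr_file)
        if prev.shape != curr.shape:
            print(f"shape changed in {prev_file.name}: {prev.shape} -> {curr.shape}")
            continue
        # Compare numeric columns only; non-numeric columns are ignored here.
        numeric_cols = prev.select_dtypes("number").columns
        deviation = (prev[numeric_cols] - curr[numeric_cols]).abs().max().max()
        if deviation > atol:
            print(f"max absolute deviation {deviation:.3e} in {prev_file.name}")


# Example usage (paths are placeholders):
# compare_runs("output/previous_release", "output/new_release")
```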
We welcome and appreciate all contributions!
If you are a CADET-Core developer adding a new feature, you must also include an appropriate set of verification tests or case studies in this repository. This ensures that your contribution meets the quality standards and helps maintain the long-term reliability and maintainability of the project.
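For orientation, an order-of-convergence test typically computes the experimental order of convergence (EOC) from errors on successively refined grids and compares it to the expected order of the discretization. The sketch below illustrates only that computation; it does not call CADET-Core, and all error values and the expected order are placeholders.

```python
# Hypothetical EOC check: errors would come from CADET-Core simulations on
# successively refined grids, compared against a reference solution.
import numpy as np


def experimental_order_of_convergence(errors, refinement_factor=2.0):
    """EOC between successive refinements: log(e_i / e_{i+1}) / log(r)."""
    errors = np.asarray(errors, dtype=float)
    return np.log(errors[:-1] / errors[1:]) / np.log(refinement_factor)


def test_column_model_convergence():
    # Placeholder errors for grids refined by a factor of 2.
    errors = [1.2e-2, 3.1e-3, 7.9e-4, 2.0e-4]
    eoc = experimental_order_of_convergence(errors)
    expected_order = 2  # hypothetical spatial order of the discretization
    assert eoc[-1] > expected_order - 0.2
```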
Furthermore, contributions aimed at improving or extending the evaluation functions are highly encouraged.

The output repository can be found at: output_repo