Added initial testing documentation #88
Open: etpeterson wants to merge 2 commits into main from testing/documentation
This document describes the testing done in the OSIPI TF2.4 IVIM repository.

-- Outline --
1. Testing philosophy
2. Testing structure
3. Testing results
-- Testing philosophy --
Testing is integral to the repository.
There are many different contributions from many different people, and only through diligent testing can they all be ensured to work correctly together.
Automated testing happens on different platforms and versions.
There are 3 major types of tests we conduct.
1. Requirements
- Runs on each commit
- Must pass to merge
- All algorithms have the same requirements
- E.g. bounds honored, code runs reasonably, properly integrated
- Would prevent a merge if not passing
- Flexible input/output
- Categories for testing
-- Contributions - some aspects currently tested as unit_test
--- Initial bounds are respected
---- Needs implementing
--- Initial guess is respected
---- Needs implementing - may not be possible
--- Runs reasonably
---- Needs implementing: reduced data size, broadened limits
--- Contains information about the algorithm
-- Wrapper
--- Initial guess is in bounds
--- Reasonable default bounds - f: [0 1], D >= 0 & D < D*, D* >= 0
--- Input size is respected - result is same size as input
--- Dictionary is returned - worth explicit testing?
-- Phantom - lower priority
--- Data can be pulled
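The requirement checks above (bounds respected, output size matching input) could be sketched as ordinary pytest-style assertions. Below is a minimal illustration; `toy_fit` is a hypothetical stand-in for a wrapped algorithm, not the repository's actual API, and the parameter names and bounds are only examples.

```python
# Requirement-style check sketch: every fitted parameter must lie inside
# its bounds. `toy_fit` is a hypothetical stand-in for a wrapped algorithm.

def toy_fit(signal, bounds):
    """Hypothetical fit: clamp naive placeholder estimates into the bounds."""
    est = {"f": 0.1, "D": 0.001, "Dstar": 0.01}  # placeholder estimates
    return {k: min(max(v, lo), hi)
            for (k, v), (lo, hi) in zip(est.items(), bounds.values())}

def check_bounds_respected(fit, signal, bounds):
    """Requirement: every returned parameter lies within its stated bounds."""
    result = fit(signal, bounds)
    for name, value in result.items():
        lo, hi = bounds[name]
        assert lo <= value <= hi, f"{name}={value} outside [{lo}, {hi}]"
    return result

# Example bounds mirroring the defaults listed above: f in [0, 1], D >= 0,
# D < D*, D* >= 0 (upper limits here are illustrative).
bounds = {"f": (0.0, 1.0), "D": (0.0, 0.003), "Dstar": (0.003, 0.1)}
check_bounds_respected(toy_fit, signal=[1.0, 0.8, 0.6], bounds=bounds)
```

In the real suite the same helper would be parametrized over every wrapped algorithm so all contributions face identical requirements.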
2. Expectations
- Run on each merge
- Considered warnings
- Should not necessarily prevent a merge
- Categories for testing
-- Determine performance changes from a reference run
--- Currently implemented but could be made easier to interact with
--- Could be made easier and faster
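An expectation-style comparison against a reference run could look like the sketch below: regressions raise warnings rather than failures, matching the "should not necessarily prevent a merge" policy. The metric names and tolerance are illustrative assumptions, not the repository's actual values.

```python
# Expectation-style check sketch: compare current metrics to a stored
# reference run and warn (not fail) on regressions beyond a tolerance.
import warnings

def compare_to_reference(current, reference, rel_tol=0.1):
    """Return metrics that worsened by more than rel_tol, warning on each."""
    regressions = []
    for name, ref_value in reference.items():
        cur_value = current.get(name)
        if cur_value is not None and cur_value > ref_value * (1 + rel_tol):
            regressions.append(name)
            warnings.warn(f"{name} regressed: {cur_value:.4g} vs {ref_value:.4g}")
    return regressions

# Illustrative error metrics: lower is better, so larger values regress.
reference = {"f_rmse": 0.05, "D_rmse": 0.0002}
current = {"f_rmse": 0.08, "D_rmse": 0.0002}
regs = compare_to_reference(current, reference)  # flags "f_rmse" only
```

A CI job could collect the warnings and post them on the merge without blocking it.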
3. Characterization
- Run on demand
- Performance of the algorithms
- The accuracy and precision of the results
- The speed of the generated results
- Human-readable report of the wrapped algorithms
- Categories for testing
-- Simulations
--- Simulate voxels from tissue and characterize the algorithms
--- Visualize parameter maps
-- True data
--- Visualize parameter maps
--- Correlations between algorithms - plot the results and differences
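The simulation-based characterization above amounts to: generate noisy signals with known parameters, fit them repeatedly, and report accuracy (bias) and precision (spread). The sketch below uses a simple monoexponential model and log-linear fit as a stand-in for a wrapped IVIM algorithm; the b-values, noise level, and repetition count are illustrative.

```python
# Characterization sketch: simulate noisy signals with a known D, fit each
# one, and summarize bias (accuracy) and spread (precision) of the fits.
import math
import random
import statistics

def simulate(bvals, D, sigma, rng):
    """Monoexponential signal S(b) = exp(-b*D) plus Gaussian noise."""
    return [math.exp(-b * D) + rng.gauss(0.0, sigma) for b in bvals]

def fit_D(bvals, signal):
    """Estimate D as minus the least-squares slope of log-signal vs b."""
    logs = [math.log(max(s, 1e-6)) for s in signal]  # guard against log(<=0)
    n = len(bvals)
    mb, ml = sum(bvals) / n, sum(logs) / n
    slope = (sum((b - mb) * (l - ml) for b, l in zip(bvals, logs))
             / sum((b - mb) ** 2 for b in bvals))
    return -slope

rng = random.Random(42)           # fixed seed for a reproducible report
bvals = [0, 200, 400, 600, 800]   # illustrative b-values
true_D = 0.0015
fits = [fit_D(bvals, simulate(bvals, true_D, 0.01, rng)) for _ in range(100)]
bias = statistics.mean(fits) - true_D     # accuracy of the estimator
spread = statistics.stdev(fits)           # precision of the estimator
```

The same bias/spread summary, computed per algorithm and per tissue type, is the kind of number a human-readable characterization report would tabulate.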
-- Testing structure --

* The testing is controlled in several places.
* The testing itself is done with pytest, which scans files for "test_" and runs the appropriate tests.
* The pytest testing can be done on your own machine by running "python -m pytest".
** This is configured with pytest.ini and conftest.py
* Testing on GitHub is controlled by the GitHub Actions workflows, which are in the workflows folder.
* Each workflow performs a series of tests and is defined by its yml file.
* Each workflow can run at specified times and with specified outputs.
* Currently the major testing workflows are unit_test.yml and analysis.yml.
* The unit_test workflow runs frequently, is relatively fast, and does some basic algorithm testing.
* The analysis workflow runs less frequently, on merges to the main branch, and does more Expectation testing.
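The "test_" discovery convention works as follows: any file named `test_*.py` whose functions start with `test_` is collected and run by `python -m pytest` automatically. A minimal, self-contained example (the model function here is a hypothetical stand-in, not the repository's API):

```python
# Minimal example of the "test_" convention: pytest collects files named
# test_*.py and runs every function whose name starts with "test_".
import math

def monoexp(b, D):
    """Toy signal model the test below exercises (illustrative only)."""
    return math.exp(-b * D)

def test_signal_decays():
    """pytest discovers this function by its "test_" name prefix."""
    assert monoexp(0, 0.001) == 1.0
    assert monoexp(1000, 0.001) < monoexp(500, 0.001)

# Under pytest this call is unnecessary (the runner invokes it); it is
# included here only so the example runs standalone.
test_signal_decays()
```

Placed in a `test_*.py` file, this runs under "python -m pytest" with the discovery behavior configured by pytest.ini and conftest.py.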
-- Testing results --
The test results are written to several files, most notably an xml file for machine parsing and a web page for code coverage.
The "analysis" tests are written to a csv file as well.
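The xml results file is straightforward to machine-parse with the standard library. The sketch below assumes a JUnit-style layout, which is what pytest emits with `--junitxml`; the embedded snippet and test names are illustrative, not actual repository output.

```python
# Sketch of machine-parsing a JUnit-style results file, as produced by
# `pytest --junitxml=results.xml`. The XML snippet below is illustrative.
import xml.etree.ElementTree as ET

JUNIT_XML = """<testsuite tests="3" failures="1" errors="0">
  <testcase classname="test_ivim" name="test_bounds"/>
  <testcase classname="test_ivim" name="test_output_size"/>
  <testcase classname="test_ivim" name="test_speed">
    <failure message="too slow"/>
  </testcase>
</testsuite>"""

def summarize(xml_text):
    """Return (total test count, names of failed test cases)."""
    suite = ET.fromstring(xml_text)
    failed = [case.get("name") for case in suite.iter("testcase")
              if case.find("failure") is not None]
    return int(suite.get("tests")), failed

total, failed = summarize(JUNIT_XML)  # total == 3, failed == ["test_speed"]
```

A dashboard or CI summary step could use exactly this kind of parse to surface failures without reading raw logs.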