swap actual and desired in test #105

Open · oscarjalnefjord wants to merge 4 commits into main

Conversation

oscarjalnefjord (Collaborator)

Describe the changes you have made in this PR

Addressing some things found during preparation of the paper.

Swapped the arguments "actual" and "desired" to match the argument order expected by assert_allclose. This matters because rtol is applied relative to the desired (reference) values, so swapping them changes the effective tolerance of the test.
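For context (the example values here are mine, not from the PR): numpy.testing.assert_allclose(actual, desired) checks |actual - desired| <= atol + rtol * |desired|, so rtol is scaled by the reference value, not by the estimate. A minimal sketch of how the argument order can change the outcome:

```python
import numpy as np
from numpy.testing import assert_allclose

desired = np.array([0.10])  # ground-truth parameter (illustrative value)
actual = np.array([0.14])   # fitted estimate (illustrative value)

# Swapped order: tolerance is rtol * |actual| = 0.3 * 0.14 = 0.042,
# which exceeds the error of 0.04, so the test silently passes.
assert_allclose(desired, actual, rtol=0.3)

# Correct order: tolerance is rtol * |desired| = 0.3 * 0.10 = 0.03,
# which is below the error of 0.04, so this raises AssertionError.
assert_allclose(actual, desired, rtol=0.3)
```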

Also made some minor changes to bring the RMSE function in OsipiBase up to date and to enable download of the Zenodo data to an arbitrary folder.
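For reference, RMSE here means the standard root-mean-square error; a minimal sketch of that definition (the actual OsipiBase method signature may differ):

```python
import numpy as np

def rmse(actual, desired):
    """Root-mean-square error between an estimated and a reference signal."""
    actual = np.asarray(actual)
    desired = np.asarray(desired)
    return float(np.sqrt(np.mean((actual - desired) ** 2)))
```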

Link this PR to an issue [optional]

None

Checklist

  • Self-review of changed code
  • Added automated tests where applicable
  • Update Docs & Guides

Collaborator

This is fine with me, but all of the osipi code expects the download to be in the defined destination folder. For what purpose do you want to put it elsewhere?

oscarjalnefjord (Collaborator, Author) · Jun 10, 2025

Indeed. I just wanted to add some flexibility in case you want to place the data somewhere else and use it for other purposes.
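A sketch of the kind of flexibility meant here (hypothetical names, not the repository's actual code): the download helper gains an optional destination folder whose default mimics the previous fixed location.

```python
from pathlib import Path
import urllib.request

def download_data(record_url: str, dest: str = "downloads") -> Path:
    """Download a file (e.g. from a Zenodo record URL) into dest.

    dest is created if missing; the default preserves the old behaviour
    of downloading into a fixed folder.
    """
    folder = Path(dest)
    folder.mkdir(parents=True, exist_ok=True)
    target = folder / record_url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(record_url, target)
    return target
```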

oscarjalnefjord (Collaborator, Author)

I added brain data to the unit tests, but I had to make some changes to the tests to make this work. In particular, I had to change the parameterization from being local to the test module to making use of the conftest.py file. It seems to do the job, but let me know if I removed some feature by accident.

etpeterson (Contributor) left a comment

Mostly just the one major comment. Also, this PR and Oliver's will almost certainly have conflicts, so you should figure out a merge order.

@@ -59,15 +59,16 @@ def data_ivim_fit_saved():
first = False
yield name, bvals, data, algorithm, xfail, kwargs, tolerances, skiptime

@pytest.mark.parametrize("name, bvals, data, algorithm, xfail, kwargs, tolerances, skiptime", data_ivim_fit_saved())
Contributor

Why un-parameterize this? It seems to be doing the right thing, but I'm honestly surprised it's working. Perhaps this was somewhat unnecessary, but I'm not even really following how it's getting the data. For example, data_ivim_fit_saved was a nested for loop that ran each algorithm for each dataset. I don't see how this is happening anymore, but I think the number of tests hasn't changed, so it must be happening somewhere?

oscarjalnefjord (Collaborator, Author)

The function previously used for parameterization (data_ivim_fit_saved()) was hardcoded to load data from generic.json. To make use of --dataFile in the call to pytest, I had to switch to parameterization via pytest_generate_tests; an alternative would have been to change data_ivim_fit_saved() to also loop over JSON files. I felt that the former would be less work, but I am open to other solutions.
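Roughly what that approach looks like (an illustrative sketch with simplified fixture names, not the PR's exact code):

```python
# conftest.py
import json
import pathlib

def pytest_addoption(parser):
    # Expose the --dataFile option mentioned above.
    parser.addoption(
        "--dataFile",
        default="tests/IVIMmodels/unit_tests/generic.json",
        help="JSON file containing the test datasets",
    )

def pytest_generate_tests(metafunc):
    # Parameterize any test that requests these fixture names,
    # generating one case per dataset in the selected JSON file.
    if {"name", "data"} <= set(metafunc.fixturenames):
        path = pathlib.Path(metafunc.config.getoption("--dataFile"))
        datasets = json.loads(path.read_text())
        metafunc.parametrize("name, data", list(datasets.items()))
```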

Contributor

Ah ok, took me a while but I see now. I do like this approach more; it seems to be more how pytest should work. My concern is that your addition doesn't cover everything the previous function, data_ivim_fit_saved, was doing. Maybe that's no longer necessary, or those are things we don't want to support anymore, like xfail or fail_first_time? I think data_ivim_fit_saved could be moved entirely over to conftest.py into the algorithm_list function that you modified, but maybe that's not necessary?

@@ -34,3 +34,4 @@ jobs:
run: |
pip install pytest pytest-cov
python -m pytest --doctest-modules --junitxml=junit/test-results.xml --cov=. --cov-report=xml --cov-report=html
python -m pytest --doctest-modules --junitxml=junit/test-results.xml --cov=. --cov-report=xml --cov-report=html --dataFile tests/IVIMmodels/unit_tests/generic_brain.json
Contributor

I think the test-results.xml should be a different file so it doesn't get overwritten. Perhaps also the cov, but maybe that's ok?

oscarjalnefjord (Collaborator, Author)

Indeed. I have now changed the file naming of the test report. Not sure if the cov report is used, but if it is, it should be the same regardless of data, i.e. no problem that it is overwritten?

Contributor

Looks like cov-report can be extended with :DEST, so perhaps a different destination file for that piece in the 2nd test as well?
https://pytest-cov.readthedocs.io/en/latest/config.html
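For example (illustrative file names, not from the PR), the second run could then write all of its reports to separate destinations:

python -m pytest --doctest-modules --junitxml=junit/test-results-brain.xml --cov=. --cov-report=xml:coverage-brain.xml --cov-report=html:htmlcov-brain --dataFile tests/IVIMmodels/unit_tests/generic_brain.json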

etpeterson (Contributor)

I don't have time to look at your changes at the moment, but I forgot that Oliver's PR would conflict with yours, so now you have merge conflicts to fix. Sorry about that.

etpeterson (Contributor) left a comment

Looking pretty good, just the comments and resulting changes if necessary. And the conflicts, sorry again about that.

oliverchampion (Collaborator)

So I think my previous merge moved some stuff to conftest, which is causing conflicts here... I am happy to take a closer look at solving the conflicts if you want me to, as I probably caused them.
