DL wrapper + DL testing #109


Status: Open · wants to merge 13 commits into main
Conversation

oliverchampion
Collaborator


Checklist

  • [x] Self-review of changed code
  • [x] Added automated tests where applicable
  • [ ] Update Docs & Guides

oliverchampion and others added 3 commits June 19, 2025 16:47
I built a wrapper in which DL algorithms train at initialisation:
- If self-supervised, one can give the algorithm data to train from.
- If supervised, or if no data is given, training data is simulated:
       - For IVIM-NET this happens within the wrapper.
       - For Super IVIM DC this happens in the package (as I did not see an option to supply training data).

Then testing occurs. For speed, I pass all testing data in one go. Also, as deep learning is known to work better than LSQ mainly on noisy data, and actually does a poor job on noise-free data, I made a second DL-specific dataset that contains much more noise. I also made DL-specific boundaries for passing the unit tests.
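As a rough illustration of the DL-specific noisy test set described above (all function names and parameter ranges here are illustrative, not the repository's actual values), one could simulate bi-exponential IVIM signals and add Gaussian noise at a low SNR:

```python
import numpy as np

def ivim_signal(b, S0=1.0, f=0.1, Dstar=0.05, D=0.001):
    """Standard bi-exponential IVIM model: S0 * (f*exp(-b*D*) + (1-f)*exp(-b*D))."""
    return S0 * (f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D))

def make_noisy_dataset(bvals, n_voxels=1000, snr=15, seed=None):
    """Simulate a (voxels, b-values) test set with Gaussian noise at a given SNR.

    Parameter ranges are plausible placeholders, not the repo's actual bounds.
    """
    rng = np.random.default_rng(seed)
    f = rng.uniform(0.0, 0.4, n_voxels)
    Dstar = rng.uniform(0.005, 0.1, n_voxels)
    D = rng.uniform(0.0005, 0.003, n_voxels)
    # Broadcast (1, n_b) b-values against (n_voxels, 1) parameters.
    clean = ivim_signal(bvals[None, :], f=f[:, None], Dstar=Dstar[:, None], D=D[:, None])
    return clean + rng.normal(0.0, 1.0 / snr, clean.shape)

bvals = np.array([0, 10, 20, 50, 100, 200, 400, 800])
signals = make_noisy_dataset(bvals, n_voxels=100, snr=10)
print(signals.shape)  # (100, 8)
```

Feeding every simulated voxel as one (voxels, b-values) array is what makes the "all testing data in one go" approach fast for batched networks.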
@oliverchampion
Collaborator Author

Hey reviewers,

Before you review, some points:

Ivan: For DL, it is often easier to pass all the data in one go as a 2D array (voxels, b-values). I think I currently route this through the single-voxel fit behind an if-statement. Maybe we can come up with a way to handle this properly within the wrapper?
Ivan: Also, I am not sure whether I put code in the DL-specific wrappers that could/should go in the main wrapper.

All: I have added an additional dataset for AI algorithms, as these algorithms typically work well on noisy data but usually perform poorly on noise-free data. I also added broader tolerance margins. We can discuss whether this is the best option.
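The (voxels, b-values) reshaping discussed above could look roughly like this; note this is a hypothetical helper for illustration, not the repository's actual osipi_reshape_to_voxelwise implementation:

```python
import numpy as np

def reshape_to_voxelwise(data):
    """Flatten any leading spatial dimensions into a single voxel axis.

    (x, y, z, n_b) -> (x*y*z, n_b); a single 1D signal (n_b,) becomes (1, n_b),
    so downstream code can always assume a 2D (voxels, b-values) array.
    """
    data = np.asarray(data)
    if data.ndim == 1:
        return data[None, :]
    return data.reshape(-1, data.shape[-1])

volume = np.zeros((4, 4, 2, 8))  # x, y, z, b-values
print(reshape_to_voxelwise(volume).shape)  # (32, 8)
```

With a helper like this, the batched DL path and the single-voxel path can share one input convention instead of branching on an if-statement.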



@IvanARashid IvanARashid left a comment


A couple of comments regarding the changes to the wrapper. I also checked the standardized implementations and I don't see anything weird. Although we could think about adding supported_parallel_cpu and supported_gpu attributes, now that we are getting more algorithms that support these features.
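The suggested capability attributes might look like this (a minimal sketch; the attribute names come from the comment above, while the class names and default values are assumptions):

```python
class OsipiBase:
    """Base wrapper with class-level capability flags (defaults assumed here)."""
    supported_parallel_cpu = False
    supported_gpu = False

class IVIMNetWrapper(OsipiBase):
    """Hypothetical DL subclass that advertises GPU support by overriding the flag."""
    supported_gpu = True

# Test harnesses could then skip or enable paths based on the flags.
print(IVIMNetWrapper.supported_gpu)        # True
print(IVIMNetWrapper.supported_parallel_cpu)  # False
```

Class attributes keep the flags introspectable without instantiating (and thereby training) the algorithm.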

@oliverchampion
Collaborator Author

I think this is running now and allows testing of AI algorithms. Happy to merge!
SUPER-IVIM_DC still gives some conflicts with the latest versions of Python, so I've excluded it from testing at the moment. Easy to add later :)


@IvanARashid IvanARashid left a comment


I think just the reshaping function should be moved out of OsipiBase and delegated to the subclasses. Otherwise I'm happy. Perhaps Eric should review the new testing functions, as I've never been familiar with that side of the repo.
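The proposed delegation could be sketched as follows; OsipiBase is the real base class, but every other name here is hypothetical and the fit bodies are placeholders:

```python
import numpy as np

class OsipiBase:
    """Base wrapper: leaves input preparation to subclasses instead of reshaping itself."""

    def prepare_input(self, signals):
        # Default: pass through unchanged; classic voxelwise fits need no reshape.
        return np.asarray(signals)

    def fit(self, signals, bvals):
        return self._fit(self.prepare_input(signals), bvals)

    def _fit(self, signals, bvals):
        raise NotImplementedError

class DeepLearningWrapper(OsipiBase):
    """DL subclass: flattens spatial dims so the network sees (voxels, b-values)."""

    def prepare_input(self, signals):
        signals = np.asarray(signals)
        return signals.reshape(-1, signals.shape[-1])

    def _fit(self, signals, bvals):
        return signals.shape  # placeholder for the batched network call

print(DeepLearningWrapper().fit(np.zeros((3, 3, 8)), np.arange(8)))  # (9, 8)
```

This way the base class stays shape-agnostic, and only the DL subclasses that need batched 2D input pay for the reshape.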

@IvanARashid IvanARashid requested a review from etpeterson June 24, 2025 12:10
Returns:
_type_: _description_
"""
signals = self.osipi_reshape_to_voxelwise(signals)


Beware that there is still a call to osipi_reshape_to_voxelwise here. I don't think this shows up in the unit tests, as we don't test this full_volume function there.
