DL wrapper + DL testing #109
base: main
Conversation
I built a wrapper in which DL algorithms train at initialisation:
- If self-supervised, one can give the algorithm data to train from.
- If supervised, or if no data is given, training data is simulated.
- For IVIM-NET this happens within the wrapper.
- For Super IVIM DC this happens in the package (as I did not see an option to give training data).

Then testing occurs. For speed, I pass all testing data in one go. Also, as deep learning is known to predominantly work better than LSQ on noisy data, and to actually do a poor job on noise-less data, I made a second, DL-specific dataset which contains much more noise. I also made DL-specific boundaries for passing the unit tests.
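A minimal sketch of the train-at-initialisation flow described above. All names here (`DLWrapperSketch`, `_simulate_training_data`, `fit`) are illustrative placeholders, not the code in this PR:

```python
import numpy as np

class DLWrapperSketch:
    """Illustrative only: training happens once, at initialisation."""

    def __init__(self, bvalues, training_data=None, self_supervised=True):
        self.bvalues = np.asarray(bvalues, dtype=float)
        if self_supervised and training_data is not None:
            data = training_data                   # self-supervised: use supplied signals
        else:
            data = self._simulate_training_data()  # supervised or no data: simulate
        self._train(data)

    def _simulate_training_data(self, n_voxels=1000):
        # Forward-model bi-exponential IVIM signals from random parameters.
        rng = np.random.default_rng(0)
        f = rng.uniform(0.0, 0.4, (n_voxels, 1))
        D_star = rng.uniform(0.01, 0.1, (n_voxels, 1))
        D = rng.uniform(0.0005, 0.003, (n_voxels, 1))
        b = self.bvalues[None, :]
        return f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D)

    def _train(self, data):
        self.trained = True  # stand-in for the actual network training

    def fit(self, signals):
        # Testing: all voxels pass through in one go as (voxels, b-values).
        assert self.trained
        return np.zeros((signals.shape[0], 3))  # stand-in for f, D*, D maps
```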
Hey reviewers, before you review, some points:

Ivan: For DL, it is often easier to push all data through in one go as a 2D array (voxels, b-values). I think I currently push this through the single-voxel fit with an if-statement. Maybe we can come up with a way to deal with this properly within the wrapper?

All: I have added an additional dataset for AI algorithms, as these algorithms typically characterise themselves by working well on noisy data but usually perform poorly on noise-less data. I also added broader margins. We can discuss whether this is the best option.
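For the 2D (voxels, b-values) point, the pattern under discussion looks roughly like this; `reshape_to_voxelwise` is a hypothetical stand-in for the repo's `osipi_reshape_to_voxelwise`, whose exact behaviour is not restated here:

```python
import numpy as np

def reshape_to_voxelwise(data):
    # Flatten any (..., n_bvalues) volume to (voxels, n_bvalues) so a
    # network can process every voxel in a single batched call.
    data = np.asarray(data)
    return data.reshape(-1, data.shape[-1])

volume = np.random.rand(32, 32, 10, 8)    # (X, Y, Z, n_bvalues)
signals2d = reshape_to_voxelwise(volume)  # (10240, 8)
# maps = model.fit(signals2d).reshape(32, 32, 10, -1)  # back to image space
```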
… errors after merge :)
A couple of comments regarding the changes to the wrapper. I also checked the standardized implementations and I don't see anything weird. That said, we could think about adding supported_parallel_cpu and supported_gpu attributes now that we are getting more algorithms that support these features.
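The suggested attributes could look like this, assuming they follow the pattern of the existing per-algorithm class attributes; the names and defaults here are only the proposal, not existing code:

```python
class OsipiBase:
    # Proposed capability flags; conservative defaults.
    supported_parallel_cpu = False
    supported_gpu = False

class SomeGPUCapableAlgorithm(OsipiBase):
    supported_parallel_cpu = True
    supported_gpu = True
```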
- Now use ivim_fit_full_volume instead (see the sketch after this list)
- Skipped
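A hypothetical call pattern for the full-volume entry point named above; only `ivim_fit_full_volume` comes from this PR, and its actual signature in the repo may differ:

```python
import numpy as np

bvalues = np.array([0, 50, 100, 200, 400, 800])
volume = np.random.rand(16, 16, 4, len(bvalues))  # (X, Y, Z, n_bvalues)
# algorithm = SomeDLAlgorithm(bvalues=bvalues)             # placeholder class
# maps = algorithm.ivim_fit_full_volume(volume, bvalues)   # one call, all voxels
```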
I think this is running now and allows testing of AI algorithms. Happy to merge!
I think just the reshaping function should be moved out of OsipiBase and delegated to the subclasses. Otherwise I'm happy. Perhaps Eric should review the new testing functions, as I've never been familiar with that side of the repo.
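One way the suggested delegation could look: OsipiBase declares a hook and each subclass decides how (or whether) to reshape. This is a sketch of the idea, with illustrative method and class names, not the repo's code:

```python
import numpy as np

class OsipiBase:
    def reshape_for_fit(self, signals):
        # Default behaviour: per-voxel algorithms see the data unchanged.
        return signals

class SomeDLAlgorithm(OsipiBase):
    def reshape_for_fit(self, signals):
        # DL algorithms flatten to (voxels, b-values) for batched inference.
        signals = np.asarray(signals)
        return signals.reshape(-1, signals.shape[-1])
```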
Returns:
    _type_: _description_
"""
signals = self.osipi_reshape_to_voxelwise(signals)
Beware that there is still a call to osipi_reshape_to_voxelwise here. I don't think this shows up in the unit tests, as we don't test this full_volume function there.
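A hedged sketch of a unit test that would cover this path, so the osipi_reshape_to_voxelwise call noted above gets exercised. The algorithm class, its constructor, and the exact full-volume method signature are assumptions here:

```python
import numpy as np

def test_full_volume_fit_covers_reshape():
    bvalues = np.array([0, 50, 100, 200, 400, 800])
    volume = np.random.rand(4, 4, 2, len(bvalues))  # tiny synthetic 4-D volume
    algorithm = SomeDLAlgorithm(bvalues=bvalues)    # hypothetical class name
    maps = algorithm.ivim_fit_full_volume(volume, bvalues)  # assumed signature
    assert maps.shape[:3] == volume.shape[:3]       # one parameter map per voxel
```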