Implement checks for fulfilment of algorithm requirements in the OsipiBase class #96
Conversation
Thanks for this. It's a good start. I'd like some feedback from @IvanARashid on this too. Basically I have one main suggestion for making it easier to maintain. It would also be great to get a test that goes through the algorithms and somehow verifies the validators are running.
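Something like this could be a starting point (just a sketch; the import path, constructor signature, and algorithm names are assumptions on my part):

```python
import pytest
from unittest.mock import patch

from src.wrappers.OsipiBase import OsipiBase  # assumed import path

# Algorithm identifiers here are only illustrative
@pytest.mark.parametrize("algorithm", ["IAR_LU_biexp", "OGC_AmsterdamUMC_biexp"])
def test_validators_run_for_each_algorithm(algorithm):
    # Spy on one of the new checks and verify it is invoked when the wrapper is used
    with patch.object(OsipiBase, "osipi_check_required_bvalues") as spy:
        OsipiBase(algorithm=algorithm)  # assumed constructor signature
        spy.assert_called()
```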
Also looks like tests are failing because of this. I think it's too strict, or not all the algorithms are acting in quite the standard way.
It would be good if the check error messages also printed both the requirement and the current input. It would make it clearer for us why the tests are currently failing.
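For example, something along these lines (a rough sketch; the attribute names are assumed):

```python
# Sketch: report both the requirement and what was actually passed in
if self.required_thresholds is not None and len(self.thresholds) != self.required_thresholds:
    raise ValueError(
        f"Algorithm requires {self.required_thresholds} threshold(s), "
        f"but received {len(self.thresholds)}: {self.thresholds}"
    )
```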
Sorry for the delay, I've made the changes now. Please let me know if there are any more changes I should make.
Still failing tests.
Looking pretty good but the tests are still failing as well.
The tests are still failing because of the checks you added. For example here: https://github.com/OSIPI/TF2.4_IVIM-MRI_CodeCollection/actions/runs/14249360771/job/39959244676#step:6:79
Either the checks need to be modified, or how the algorithms are called needs to change. Probably the algorithm calls aren't consistent or aren't what they should be, but I haven't looked into it. Let me know if you need some help with this.
Checking in on this because someone might start work on tests related to this.
Hi @harshithasudhakar, thanks for this. I think the major remaining issue is the failing tests. Are you seeing these?
The failing tests need to be resolved before merging.
I've fixed them; the test cases should pass now.
Tests are still failing, sorry.
Just checking in on this: are you planning on finishing this up?
Hey! I was thinking it may make more sense to come up with some "default" values for the initial guess, bounds, and cut-off when they are missing, rather than give an error.
At the moment I handle these missing values in my own wrapper. But it may be better to have OsipiBase come up with default values that are identical for all algorithms.
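For instance, OsipiBase could carry class-level defaults that every wrapper falls back on (just a sketch; the attribute names and values are made up to illustrate the idea):

```python
class OsipiBase:
    # Shared fallbacks, identical for every wrapped algorithm (illustrative values only)
    default_thresholds = 200                                       # cut-off in s/mm²
    default_bounds = ([0, 0, 0.005, 0.7], [0.005, 0.7, 0.2, 1.3])  # lower/upper for D, f, D*, S0
    default_initial_guess = [0.001, 0.1, 0.01, 1.0]                # starting values for D, f, D*, S0
```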
#else:
#return True
return True
if not hasattr(self, "required_thresholds"):
I am wondering whether for some attributes, like "required_thresholds", we should add "default" values when the checks fail. For example, 200 would be an okay value for most body applications. Then we could make do with a warning stating that the default value is being used instead.
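Roughly like this, for example (sketch only; the exact wording and where the fallback lives are open):

```python
import warnings

# Sketch: warn and fall back to a default instead of failing the check
if getattr(self, "thresholds", None) is None:
    warnings.warn(
        "No thresholds supplied; falling back to the default of 200 s/mm², "
        "which should be reasonable for most body applications."
    )
    self.thresholds = 200
```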
#else:
#return True
return True
if self.required_bounds is False and self.bounds is None:
Same for bounds. I think if no bounds are given, but bounds will be used, then we should give some default bounds.
def osipi_check_required_bvalues():
    """Minimum number of b-values required"""
    pass
if self.required_initial_guess is False and self.initial_guess is None:
Same for the initial guess :)
raise ValueError(
    f"Spatial dimensions {spatial_dim}D not supported. "
    f"Requires {min_dim}-{max_dim}D."
)

def osipi_check_required_bvalues(self):
b-values are another story. Here we cannot give a "default".
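So this check probably has to stay an error, something like (a sketch; the attribute names are assumed):

```python
def osipi_check_required_bvalues(self):
    """Check the minimum number of b-values; there is no sensible default to fall back on."""
    if getattr(self, "required_bvalues", None) and len(self.bvalues) < self.required_bvalues:
        raise ValueError(
            f"Algorithm requires at least {self.required_bvalues} b-values, "
            f"but only {len(self.bvalues)} were provided: {self.bvalues}"
        )
```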
Describe the changes you have made in this PR
Implemented centralized validation logic for inputs in the base class.
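Roughly along these lines (a sketch; apart from osipi_check_required_bvalues, which appears in the diff, the method names are illustrative):

```python
def osipi_validate_inputs(self, data):
    """Run all requirement checks in one place before the algorithm is called."""
    self.osipi_check_required_bvalues()          # minimum number of b-values
    self.osipi_check_required_thresholds()       # cut-off between low and high b-values
    self.osipi_check_required_bounds()           # fit bounds, if the algorithm needs them
    self.osipi_check_required_initial_guess()    # starting values for the fit
    self.osipi_check_required_dimensions(data)   # supported spatial dimensionality
```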
Link this PR to an issue [optional]
Fixes #45
Checklist