
Implement checks for fulfilment of algorithm requirements in the OsipiBase class #96


Open · wants to merge 14 commits into base `main`

Conversation


@harshithasudhakar commented Mar 20, 2025

Describe the changes you have made in this PR

Implemented a centralized validation logic for inputs in the base class.

Link this PR to an issue [optional]

Fixes #45

Checklist

  • Self-review of changed code
  • Added automated tests where applicable
  • Updated docs & guides

@harshithasudhakar changed the title from "implemented validation functions" to "Implement checks for fulfilment of algorithm requirements in the OsipiBase class" on Mar 20, 2025
@harshithasudhakar marked this pull request as ready for review on March 20, 2025, 22:31
@etpeterson (Contributor)

Thanks for this. It's a good start. I'd like some feedback from @IvanARashid on this too. Basically I have one main suggestion for making it easier to maintain. It would also be great to get a test that goes through the algorithms and somehow verifies the validators are running.

@etpeterson (Contributor)

Also, it looks like tests are failing because of this. I think the checks are either too strict, or not all the algorithms behave in quite the standard way.

@IvanARashid

It would be good if the check error messages also printed both the requirement and the current input. That would make it clearer why the tests are currently failing.

@harshithasudhakar (Author)

Sorry for the delay, I've made the changes now. Please let me know if there are any more changes I should make.

@etpeterson (Contributor)

Still failing tests.

@etpeterson (Contributor) left a comment:

Looking pretty good but the tests are still failing as well.

@etpeterson (Contributor) left a comment:

The tests are still failing because of the checks you added. For example here: https://github.com/OSIPI/TF2.4_IVIM-MRI_CodeCollection/actions/runs/14249360771/job/39959244676#step:6:79
Either the checks need to be modified, or the way the algorithms are called does. The algorithm calls probably aren't consistent or aren't what they should be, but I haven't looked into it. Let me know if you need some help with this.

@etpeterson (Contributor)

Checking in on this because someone might start work on tests related to this.

@etpeterson (Contributor)

Hi @harshithasudhakar, thanks for this. I think the major remaining issue is the failing tests. Are you seeing these?
[image: failing tests]

@etpeterson (Contributor) left a comment:

The failing tests need to be resolved before merging.

@harshithasudhakar (Author)

> Hi @harshithasudhakar, thanks for this. I think the major remaining issue is the failing tests. Are you seeing these? [image]

I've fixed them, the test cases should pass now.

@etpeterson (Contributor) left a comment:

Tests are still failing, sorry.

@etpeterson (Contributor)

Just checking on this, are you planning on finishing this up?

@oliverchampion (Collaborator) left a comment:

Hey! I was thinking it may make more sense to come up with some "default" values for the initial guess, bounds, and cut-off when they are missing, rather than raise an error.

Currently I handle the absence of these values in my wrapper, but it may be better to have OsipiBase come up with default values that are identical for all algorithms.

```python
#else:
#return True
return True
if not hasattr(self, "required_thresholds"):
```
Collaborator:

I am wondering whether for some attributes, like "required_thresholds", we should fall back to "default" values when the check fails. For example, 200 would be an okay value for most body applications. Then we could issue a warning stating that the default value is being used instead.
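The fallback-with-warning pattern being suggested could be sketched roughly like this. Note this is only an illustration: the class name, attribute names, and the 200 default are taken from this discussion, not from the actual OsipiBase implementation.

```python
import warnings

# Hypothetical default; the thread suggests ~200 is okay for most body applications
DEFAULT_THRESHOLD = 200

class OsipiBaseSketch:
    """Minimal stand-in for OsipiBase to illustrate the fallback pattern."""

    def __init__(self, thresholds=None):
        self.thresholds = thresholds

    def osipi_check_required_thresholds(self):
        # Instead of raising when thresholds are missing, substitute a
        # shared default and warn so the user knows what was used.
        if self.thresholds is None:
            warnings.warn(
                f"No thresholds given; using default {DEFAULT_THRESHOLD}."
            )
            self.thresholds = DEFAULT_THRESHOLD
        return True
```

The same pattern would apply to bounds and the initial guess; b-values, as noted later in the thread, have no sensible default.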

```python
#else:
#return True
return True
if self.required_bounds is False and self.bounds is None:
```
Collaborator:

Same for bounds. I think if no bounds are given but bounds will be used, then we should provide some default bounds.

```python
def osipi_check_required_bvalues():
    """Minimum number of b-values required"""
    pass

if self.required_initial_guess is False and self.initial_guess is None:
```
Collaborator:

Same for the initial guess :)

```python
raise ValueError(
    f"Spatial dimensions {spatial_dim}D not supported. "
    f"Requires {min_dim}-{max_dim}D."
)
```

```python
def osipi_check_required_bvalues(self):
```
Collaborator:

b-values are another story. Here we cannot provide a "default".


Successfully merging this pull request may close these issues.

[Feature] Implement checks for fulfilment of algorithm requirements in the OsipiBase class
4 participants