Alternative scoring metrics #965

Open · wants to merge 14 commits into main

Conversation

carl-offerfit

  • Allow specification of an alternative sklearn score function for double ML models
  • Add score_nuisances function
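For context, the kind of alternative scorer this enables might look like the sketch below. It uses plain sklearn only; the PR's actual `score_nuisances` API for double ML models is not shown on this page, so no EconML calls are assumed here.

```python
# Sketch: building an alternative sklearn score function (negative MAE
# instead of the default R^2) via make_scorer. Illustrative only -- the
# PR's score_nuisances interface may accept such scorers differently.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import make_scorer, mean_absolute_error

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 2.0, 3.0])

model = LinearRegression().fit(X, y)

# greater_is_better=False makes the scorer return the negated metric,
# so the usual "higher is better" convention still holds.
neg_mae = make_scorer(mean_absolute_error, greater_is_better=False)
score = neg_mae(model, X, y)
```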

…commit

Signed-off-by: Carl Gold <carl@offerfit.ai>
Use a function to verify the scoring function, whether sklearn or otherwise

@carl-offerfit (Author)

@kbattocchi Here's an update:

  1. I changed the type hints to use `typing.Union` rather than the Python 3.10 `|` operator. That should fix the Python 3.8 and 3.9 errors.
  2. I added a unit test for the ML score-function signature validation.
  3. I looked into sklearn's `make_scorer` and, as far as I can tell, it does no validation at all: if the function you pass to `make_scorer` is invalid, it simply fails when you first try to use it. That makes me wonder whether my approach is over-engineered and we should just let it fail. At the same time, I'm a bit attached to the approach since I spent time on it. ;-) WDYT?
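The Union fix in point 1 amounts to the following; the function name and parameters here are illustrative, not the PR's actual signatures.

```python
# Union[float, None] works on Python 3.8/3.9, whereas the equivalent
# `float | None` annotation raises TypeError there: PEP 604 union syntax
# only landed in Python 3.10.
from typing import Optional, Union

def set_tolerance(tol: Union[float, None] = None,
                  fallback: Optional[float] = 1e-4) -> float:
    """Return tol if given, else the fallback (illustrative helper)."""
    return tol if tol is not None else fallback
```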
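Point 3 can be illustrated with a minimal up-front validator; this is a hypothetical sketch of the kind of signature check under discussion, not the PR's actual helper.

```python
import inspect
from sklearn.metrics import mean_squared_error

def check_score_function(score_fn):
    # Fail fast if the callable cannot accept (y_true, y_pred) positionally.
    # make_scorer itself performs no such check: an invalid function only
    # fails later, when the resulting scorer is first called.
    try:
        inspect.signature(score_fn).bind([0.0], [0.0])
    except TypeError as exc:
        raise ValueError(f"{score_fn!r} is not a valid score function: {exc}")

check_score_function(mean_squared_error)  # accepts (y_true, y_pred, ...)
```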
