Expanding validation options for meta-learners #649
bsaunders27 started this conversation in Ideas
Still getting up to speed on the literature around CATE, but I noticed that the docs on validation don't cover anything problem-specific, and I wondered if there are case-specific validations we could perform, at least on the fit of the base learners (the outcome models mu and the treatment propensity model). It seems like it'd be fairly straightforward to implement a `.score_baselearners(X, Y, T)` method that returns metrics for detecting whether the base learners are over- or under-fitting. The fitted models are currently accessible via `.models_mu_(t|c)`, but I'm thinking this might have value as a dedicated method for model evaluation; a rough sketch is below. Happy to take a stab at implementing it if folks think this would be useful/worth adding as a native feature.