Interface for MAP/MLE/VI #2509
Comments
Relates to #2506; cc @Red-Portal
I think one downside would be that referring to the documentation becomes a little tricky, since everything requires specialization. For instance, if I only care about …
How would the other arguments to … I should also check what other PPLs do for this.
@mhauru I think that could be accommodated by pushing everything VI- or ML/MAP-specific to the object representing the algorithm, like …
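A rough sketch of what that could look like, with the algorithm-specific options living in the algorithm object so that a single generic entry point stays uniform. All of the names below (`estimate`, the struct fields) are hypothetical, not existing Turing.jl API:

```julia
# Hypothetical sketch: everything VI- or ML/MAP-specific sits in the
# algorithm struct; dispatch on the algorithm type keeps the entry
# point uniform. None of these names are existing Turing.jl API.
struct MAP{O}
    optimizer::O           # e.g. an Optim.jl optimizer such as LBFGS()
end

struct ADVI
    samples_per_step::Int  # Monte Carlo samples for the ELBO gradient
    max_iters::Int
end

# One generic entry point, specialized per algorithm:
estimate(model, alg::MAP; kwargs...)  = error("sketch only")
estimate(model, alg::ADVI; kwargs...) = error("sketch only")
```

The design choice here is the same one `sample` already makes for MCMC: the second positional argument fully describes the algorithm, so keyword arguments to the entry point can stay algorithm-agnostic.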
For what it's worth (i.e., from my basic user perspective), I really hope there is some kind of unified API (and it might emerge through some other packages acting like an interface to Turing, but might as well think of it early) à-la-rstan (brms / rstanarm):
The algorithm can be swapped for anything ("meanfield", "fullrank", and "pathfinder") through global options. This makes it super convenient to prototype a script, test it, and make sure it runs using "fast" algorithms, and then just change the default algorithm to MCMC and run it without any edits needed to the script. In the case described above there are some additional considerations if one wants to incorporate approaches that return point estimates (MLE/MAP optimization), but I don't think it poses that big of a challenge.
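For illustration, the prototype-then-swap workflow described above might look like this in Julia. `ADVI` and `NUTS` are real Turing.jl algorithm types, but the unified `fit` entry point is hypothetical:

```julia
# Prototype with a fast approximate algorithm, then switch one line
# for the final run; the rest of the script is untouched.
alg = ADVI(10, 1_000)     # fast: variational inference while developing
# alg = NUTS()            # final: full MCMC, no other edits needed

result = fit(model, alg)  # `fit` is a hypothetical unified entry point
```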
Hi Dominique. I think the situation here is slightly different, since the unified interface is over ML and MAP, and the potential users of ML and MAP are most likely very different from those of VI. (On the other hand, I think choosing between MCMC, Pathfinder, or VI, for example, is a more likely scenario.) Therefore, I think there is less incentive to unify the interface.
A unified interface for all optimisation-based approximate inference algorithms (MAP/MLE/VI) would be nice. There are some implementation details on the …
A more ambitious goal would be
Nice! Can I also give a shout-out for batch-based (e.g. https://turinglang.org/DynamicPPL.jl/stable/api/#DynamicPPL.MiniBatchContext) and (nearly equivalently) tempered (e.g. #2524) approaches?
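Both ideas amount to rescaling the log-likelihood term of the log joint. A conceptual sketch of the scaling (not runnable code, and not the actual `MiniBatchContext` internals):

```julia
# Minibatching and tempering both rescale the log-likelihood. With a
# minibatch of size b drawn from N observations, the full log-likelihood
# is approximated as (N / b) * (sum over the batch) — the kind of
# scaling MiniBatchContext applies. Tempering at temperature T instead
# uses a factor of 1 / T.
scale = N / b                     # or 1 / T for tempering
logjoint(θ) = logprior(θ) + scale * loglikelihood_batch(θ)
```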
Hi @SamuelBrand1 , yes, minibatching is on our agenda. |
This is how RxInfer does it too.
Imagine a future where models are written in a universal backend-agnostic way (e.g., as a string) and then |
Proposal for a unified interface
This resembles the `sample(model, sampler, ...)` interface for Monte Carlo sampling-based inference algorithms.
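Concretely, such an entry point might mirror `sample` like so. The first line is the existing Turing.jl sampling interface; the `estimate` function and the constructor arguments below it are illustrative, not existing API:

```julia
# Existing Monte Carlo interface in Turing.jl:
chain = sample(model, NUTS(), 1_000)

# Hypothetical analogue for optimisation-based inference, with each
# algorithm object carrying its own settings (names are illustrative):
result = estimate(model, MAP())
result = estimate(model, MLE())
result = estimate(model, ADVI(10, 1_000))
```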