Learnable prior over parameters #1731
-
Hi everyone, I have a kernel with a parameter to which I assigned a prior distribution. I would now like to learn the parameters of this prior distribution. Is that possible in GPyTorch? So far, I have only seen that I can assign a prior distribution with fixed, but not learnable, parameters. Does somebody know a way around this? Thanks a lot!
Replies: 1 comment
-
I mean, purely mechanically, this should be possible by just declaring the hyperparameters of a prior you define (extending one of our existing ones) to be true `torch.nn.Parameter`s, since our priors all extend `Module`. Currently, they are made buffers via `gpytorch/gpytorch/priors/utils.py` (Line 4 in 98dd616); see e.g. `gpytorch/gpytorch/priors/torch_priors.py` (Line 74 in 98dd616). You would just use `register_parameter` instead of `register_buffer`.
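Something like this (a minimal, untested sketch mirroring `NormalPrior` from `torch_priors.py`; the class name `LearnableNormalPrior` is just illustrative, not GPyTorch API):

```python
from torch.distributions import Normal
from torch.nn import Module as TModule, Parameter

from gpytorch.priors import Prior


class LearnableNormalPrior(Prior, Normal):
    def __init__(self, loc, scale, validate_args=False, transform=None):
        TModule.__init__(self)
        Normal.__init__(self, loc=loc, scale=scale, validate_args=validate_args)
        # Stand-in for _bufferize_attributes(self, ("loc", "scale")): the same
        # attribute rewriting, but with register_parameter instead of
        # register_buffer, so an optimizer can update loc and scale.
        for name in ("loc", "scale"):
            value = getattr(self, name).clone()
            delattr(self, name)
            self.register_parameter(name, Parameter(value))
        # NOTE: scale must stay positive; a raw unconstrained Parameter gives
        # no such guarantee, so you may want to optimize it on a log scale.
        self._transform = transform
```

Since `register_prior` adds the prior as a submodule, its `loc` and `scale` should then show up in `model.parameters()` and get updated alongside the model's other hyperparameters.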
I'm not sure that this is a good idea though. When doing MAP, it seems like the optimizer would just set the prior parameters such that whatever parameter values maximize the marginal likelihood would maximize the prior also. It's not obvious to me that running NUTS would work at all.