Learnable prior over parameters #1731

Answered by jacobrgardner
fweberling asked this question in Q&A

I mean, purely mechanically, this should be possible: define a prior extending one of our existing ones and declare its hyperparameters as true torch.nn.Parameters, since all of our priors extend Module. Currently, the hyperparameters are made buffers via:

```python
def _bufferize_attributes(module, attributes):
    ...
```

e.g.:

```python
_bufferize_attributes(self, ("concentration", "rate"))
```

You would just use register_parameter instead of register_buffer.
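Here is a minimal sketch of that idea, assuming the current GammaPrior (whose concentration and rate are registered as buffers); the subclass name LearnableGammaPrior is made up purely for illustration:

```python
import torch
from gpytorch.priors import GammaPrior


class LearnableGammaPrior(GammaPrior):
    """Hypothetical GammaPrior with learnable hyperparameters.

    Re-registers the buffered ``concentration`` and ``rate`` as true
    torch.nn.Parameters, so any optimizer over the module's parameters
    will update them.
    """

    def __init__(self, concentration, rate):
        super().__init__(concentration, rate)
        for name in ("concentration", "rate"):
            value = getattr(self, name).clone()
            delattr(self, name)  # remove the buffer registered by GammaPrior
            self.register_parameter(name, torch.nn.Parameter(value))
```

After this, prior.named_parameters() includes concentration and rate, so once the prior is attached to a model, a MAP-style optimizer will move them along with everything else, which is exactly the concern raised below.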

I'm not sure that this is a good idea, though. When doing MAP, it seems like the optimizer would just set the prior parameters…
