Can taper functions be learnable in gpytorch? #2651
Unanswered · Tomas-Pierce-at-NIA asked this question in Q&A
I'm currently trying to implement an approach described in https://pubmed.ncbi.nlm.nih.gov/34890155/.
Part of the approach is to use a tapering function to promote sparsity.
I have tried to implement this by creating a kernel that applies the tapering behavior and is then multiplied with other kernels via a product kernel. This works to an extent, but runs into a few issues:

1. The kernel tends not to learn a tapering distance, keeping whatever value it is initialized with (see the note after this list).
2. A previous variation allowed its tapering distance to reach zero.
3. Under some conditions, previous revisions failed to maintain the positive semidefinite condition on the covariance matrix.
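One plausible contributor to the first issue (a general observation about compactly supported tapers, using a simple truncated-linear taper for illustration rather than the specific taper from the paper): for $k_\theta(d) = \max(0,\, 1 - d/\theta)$, the gradient with respect to the tapering distance $\theta$ is

$$
\frac{\partial k_\theta}{\partial \theta}(d) =
\begin{cases}
d/\theta^2, & d < \theta, \\
0, & d > \theta,
\end{cases}
$$

so pairs of points outside the current support contribute no gradient at all, and $\theta$ is pulled only by pairs already inside it. If the taper is initialized small relative to the data's length scales, the learning signal can be very weak, which would look like the parameter sticking at its initial value.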
I am also curious whether a better, more correct implementation is possible.
My implementation of a tapering function as a kernel follows.
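Since the snippet itself did not load, here is a minimal sketch of one way such a kernel might be written in gpytorch, not the original code: the class name `WendlandTaperKernel`, the choice of the Wendland-1 taper $(1 - r)_+^4 (4r + 1)$ (positive definite in up to three input dimensions), and the `GreaterThan(1e-4)` lower bound on the learnable tapering distance are all illustrative assumptions.

```python
import torch
from gpytorch.constraints import GreaterThan
from gpytorch.kernels import Kernel


class WendlandTaperKernel(Kernel):
    """Compactly supported Wendland-1 taper with a learnable range.

    k(x1, x2) = (1 - r)_+^4 * (4r + 1), with r = ||x1 - x2|| / taper_distance.

    Wendland-1 is positive definite for inputs of dimension <= 3, so its
    elementwise product with another valid kernel stays PSD (Schur product
    theorem).
    """

    has_lengthscale = False

    def __init__(self, taper_constraint=None, **kwargs):
        super().__init__(**kwargs)
        # Learn an unconstrained "raw" value and map it through a positivity
        # constraint, so the taper range can never collapse to zero.
        self.register_parameter(
            name="raw_taper_distance",
            parameter=torch.nn.Parameter(torch.zeros(*self.batch_shape, 1, 1)),
        )
        if taper_constraint is None:
            taper_constraint = GreaterThan(1e-4)
        self.register_constraint("raw_taper_distance", taper_constraint)

    @property
    def taper_distance(self):
        return self.raw_taper_distance_constraint.transform(self.raw_taper_distance)

    @taper_distance.setter
    def taper_distance(self, value):
        if not torch.is_tensor(value):
            value = torch.as_tensor(value).to(self.raw_taper_distance)
        self.initialize(
            raw_taper_distance=self.raw_taper_distance_constraint.inverse_transform(value)
        )

    def forward(self, x1, x2, diag=False, **params):
        # Scale the inputs so that covar_dist returns r = d / taper_distance.
        x1_ = x1.div(self.taper_distance)
        x2_ = x2.div(self.taper_distance)
        r = self.covar_dist(x1_, x2_, diag=diag, **params)
        # Wendland-1 taper: exactly zero (hence sparse) for r >= 1.
        return torch.clamp(1.0 - r, min=0.0).pow(4) * (4.0 * r + 1.0)
```

Used as, e.g., `gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) * WendlandTaperKernel()`, the `*` builds a `ProductKernel`, and the elementwise product of two PSD matrices is PSD by the Schur product theorem, which bears on the third issue above; the positivity constraint keeps the tapering distance bounded away from zero, which bears on the second.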
I would be very grateful for any help on this.