Custom ScaleKernel with ARD prior (batched GPs) #2537
-
Hi, thank you for making GPyTorch!! I really enjoy working with it! I have a question about custom decorator kernels (kernels that wrap another kernel): I have read this tutorial, https://docs.gpytorch.ai/en/stable/examples/00_Basic_Usage/Implementing_a_custom_Kernel.html, but I am still not sure how to apply it to decorator kernels. What is the best practice for writing a custom decorator Kernel, e.g. a variant of ScaleKernel?
Here is one simple custom kernel I have written by inheriting from ScaleKernel:
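(A minimal sketch of the kind of thing I mean; the class name and the exact behaviour, dividing the base kernel by the outputscale instead of multiplying, are just for illustration:)

```python
import torch
import gpytorch
from gpytorch.kernels import ScaleKernel, RBFKernel


class MyInverseScaleKernel(ScaleKernel):
    """Decorator-style kernel: wraps a base kernel and divides by the
    outputscale instead of multiplying (illustrative behaviour and name)."""

    def forward(self, x1, x2, last_dim_is_batch=False, diag=False, **params):
        orig_output = self.base_kernel.forward(
            x1, x2, diag=diag, last_dim_is_batch=last_dim_is_batch, **params
        )
        # Dividing is the same as multiplying by the reciprocal, which keeps
        # the broadcasting identical to ScaleKernel's own forward().
        inv_scales = self.outputscale.reciprocal()
        if last_dim_is_batch:
            inv_scales = inv_scales.unsqueeze(-1)
        if diag:
            return orig_output * inv_scales.unsqueeze(-1)
        return orig_output * inv_scales.view(*inv_scales.shape, 1, 1)


# Usage with a batched base kernel and an ARD lengthscale prior:
base_kernel = RBFKernel(
    ard_num_dims=3,
    batch_shape=torch.Size([2]),
    lengthscale_prior=gpytorch.priors.GammaPrior(3.0, 6.0),
)
covar_module = MyInverseScaleKernel(base_kernel, batch_shape=torch.Size([2]))
```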
Is that the correct way? Thanks and warm regards!
Replies: 1 comment 1 reply
-
Unless you specifically want to be working with an outputscale in just a different way, I would honestly just copy ScaleKernel and implement what you need it to do rather than extending the original class. The only real reason to extend ScaleKernel would be code reuse -- is there code in ScaleKernel that you'd still be using?
For your very simple example, I'd just extend ScaleKernel and override the outputscale getter and setter to return 1 / the outputscale:
gpytorch/gpytorch/kernels/scale_kernel.py, line 96 (at commit 9551eba)
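A rough sketch of what that could look like (the class name here is made up; everything else reuses ScaleKernel's own machinery, so ScaleKernel.forward ends up dividing the base kernel by the learned parameter):

```python
import torch
from gpytorch.kernels import ScaleKernel


class ReciprocalScaleKernel(ScaleKernel):
    """ScaleKernel whose effective outputscale is the reciprocal of the
    learned positive parameter, so the wrapped kernel is divided by that
    parameter. (Illustrative name, not part of GPyTorch.)"""

    @property
    def outputscale(self):
        # ScaleKernel.forward multiplies by self.outputscale, so returning
        # the reciprocal turns that multiplication into a division.
        return self.raw_outputscale_constraint.transform(self.raw_outputscale).reciprocal()

    @outputscale.setter
    def outputscale(self, value):
        if not torch.is_tensor(value):
            value = torch.as_tensor(value).to(self.raw_outputscale)
        # Invert before applying the constraint's inverse transform, so that
        # reading the property back returns the value that was set.
        self.initialize(
            raw_outputscale=self.raw_outputscale_constraint.inverse_transform(value.reciprocal())
        )
```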