Replies: 4 comments 4 replies
-
Update: I misunderstood the question. New response below.
-
Thank you, Dr. Pleiss, for the information you provided. Since I am using Multitask GP Regression (https://docs.gpytorch.ai/en/stable/examples/03_Multitask_Exact_GPs/Multitask_GP_Regression.html), all tasks share the same training set: I use one training data set for all 24 tasks.
Based on Titsias' paper and some other papers, in the Sparse Gaussian Process Regression (SGPR) example (https://docs.gpytorch.ai/en/stable/examples/02_Scalable_Exact_GPs/SGPR_Regression_CUDA.html), a subset of the training data (i.e., some samples of the training data) is used as inducing points. For example, in the SGPR example, inducing_points=train_x[:500, :] selects 500 of the 13279 samples in train_x. My program works fine with the SGPR model, and it also works fine when I use MTGP.
To extend SGPR to multitask SGPR, as you suggested (if I understood and applied it correctly), I placed an InducingPointKernel inside a MultitaskKernel, defined a linear kernel as the base kernel, and provided indices of the training data via train_x_indices. As you suggested, I changed the values of train_x_indices to values between 0 and 23, but I got the same error message again. I am confused about why I should use values between 0 and 23 instead of 0 to 716 (I have 716 samples in my training data). Please correct me if I am wrong in any of my explanations or understandings. Thank you very much.
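For concreteness, the single-task SGPR model from that example looks roughly like the sketch below (the RBF base kernel and the 500-point slice are the tutorial's choices, not mine):

import gpytorch

class GPRegressionModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.base_covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
        # Inducing points: a 500-sample subset of the training inputs
        self.covar_module = gpytorch.kernels.InducingPointKernel(
            self.base_covar_module, inducing_points=train_x[:500, :], likelihood=likelihood)

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)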
-
Your setup looks correct. Based on the error that you are getting, it seems likely that you are using the incorrect likelihood. Are you using…
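(Presumably the point is the contrast between the single-task and multitask likelihoods; a minimal sketch of the two for a 24-task model, with illustrative variable names:)

import gpytorch

# Single-task likelihood: mismatched with a model that returns a
# MultitaskMultivariateNormal, and will raise shape errors in the MLL
likelihood = gpytorch.likelihoods.GaussianLikelihood()

# Multitask likelihood: matches a 24-task model
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=24)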
-
PR #2121 should address this issue.
-
Hi,
I want to implement multitask sparse Gaussian processes. I defined the model as follows:

class MultitaskSparseGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(MultitaskSparseGPModel, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.MultitaskMean(gpytorch.means.ConstantMean(), num_tasks=24)
        self.base_covar_module = gpytorch.kernels.LinearKernel()
        # Inducing-point (sparse) kernel wrapped in a multitask kernel
        self.covar_module = gpytorch.kernels.MultitaskKernel(
            gpytorch.kernels.InducingPointKernel(self.base_covar_module,
                inducing_points=train_x[train_x_indexes, :], likelihood=likelihood),
            num_tasks=24)

    # Standard multitask forward, as in the multitask GP regression tutorial
    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)
Also, here are some settings from my program:

train_x_indexes = range(0, 716, 7)
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=24)
model = MultitaskSparseGPModel(train_x, train_y, likelihood)

train_x is a tensor of size 716x168, train_y is a tensor of size 716x24, test_x is a tensor of size 358x168, and test_y is a tensor of size 358x24.
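For context, the loss comes from a standard exact-GP training step along these lines (a sketch; the Adam learning rate is just illustrative):

import torch

model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)  # the line that raises the error below
loss.backward()
optimizer.step()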
I got the following error message when the program calculated the loss (i.e., at loss = -mll(output, train_y)):
File "..\multitask_gaussian_likelihood.py", line 117, in _shaped_noise_covar
eye_lt = ConstantDiagLazyTensor(torch.ones(*shape[:-2], 1, dtype=dtype, device=device), diag_shape=shape[-2])
IndexError: tuple index out of range
I appreciate your help.