When I run a toy example on my computer with PyTorch 1.9.0, it produces the right answer but also emits the following warning:
C:\Users\iTom\Desktop\pytorch-lasso\lasso\linear\utils.py:36: UserWarning: torch.cholesky is deprecated in favor of torch.linalg.cholesky and will be removed in a future PyTorch release.
L = torch.cholesky(A)
should be replaced with
L = torch.linalg.cholesky(A)
and
U = torch.cholesky(A, upper=True)
should be replaced with
U = torch.linalg.cholesky(A.transpose(-2, -1).conj()).transpose(-2, -1).conj() (Triggered internally at ..\aten\src\ATen\native\BatchLinearAlgebra.cpp:1284.)
However, when I test the same example with CUDA on a server inside a PyTorch 1.2 Docker container, it raises the following error:
Traceback (most recent call last):
File "test.py", line 2, in <module>
from lasso.linear import dict_learning, sparse_encode
File "/home/tom/codes/tmp.completer/lasso/__init__.py", line 1, in <module>
from . import linear, nonlinear, conv2d
File "/home/tom/codes/tmp.completer/lasso/nonlinear/__init__.py", line 3, in <module>
from .split_bregman import split_bregman_nl
File "/home/tom/codes/tmp.completer/lasso/nonlinear/split_bregman.py", line 5, in <module>
from torch._vmap_internals import _vmap
ModuleNotFoundError: No module named 'torch._vmap_internals'
The same error occurs even after I comment out dict_learning from the import:
root@7509fe2f96ac:/home/tom/codes/tmp.completer# python test.py
Traceback (most recent call last):
File "test.py", line 2, in <module>
from lasso.linear import sparse_encode#, dict_learning
File "/home/tom/codes/tmp.completer/lasso/__init__.py", line 1, in <module>
from . import linear, nonlinear, conv2d
File "/home/tom/codes/tmp.completer/lasso/nonlinear/__init__.py", line 3, in <module>
from .split_bregman import split_bregman_nl
File "/home/tom/codes/tmp.completer/lasso/nonlinear/split_bregman.py", line 5, in <module>
from torch._vmap_internals import _vmap
ModuleNotFoundError: No module named 'torch._vmap_internals'
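The cleaner fix is probably to upgrade the container's PyTorch, since torch._vmap_internals is a private module that does not exist in 1.2. As a stopgap, a compatibility shim along these lines could replace the import in split_bregman.py; the loop-based fallback _vmap below is only my own sketch, not anything shipped with pytorch-lasso or PyTorch:

import torch

try:
    from torch._vmap_internals import _vmap  # available in recent PyTorch only
except ImportError:
    def _vmap(fn):
        # Crude fallback: map fn over dim 0 of every input and stack the
        # results. Far slower than real vmap, but keeps the module importable.
        def batched(*inputs):
            n = inputs[0].shape[0]
            return torch.stack([fn(*(x[i] for x in inputs)) for i in range(n)])
        return batched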
My code (using CUDA):
import torch
from lasso.linear import sparse_encode #, dict_learning
n, d = 3, 4
H = torch.arange(n * d).view(n, d).float().cuda()
print("data:\n", H)
# Dictionary learning (commented out for this test)
# dictionary, losses = dict_learning(H, n_components=50, alpha=0.5, algorithm='ista',
#                                    device='cuda', progbar=False)
# print(dictionary.size())
# Sparse Coding (lasso solve)
Z = sparse_encode(H, H.T, alpha=0.2, algorithm='interior-point')
print("Z:\n", Z)
H_hat = Z.mm(H)
print("recon:\n", H_hat)