Hello,
I was wondering: is there much difference between the contraction order given by numpy.einsum_path
and what is done inside TensorOperations? I've looked at the output of @macroexpand
and it's not entirely clear to me what is happening. The specific contraction I'm looking at is:
@macroexpand @tensor K_true[n,m,l] := psi[i,j,k]*P[i,n]*P[j,m]*P[k,l]
I'm just trying to understand how things are being multiplied under the hood so I can reimplement this on the GPU in a way that takes advantage of sparsity. I've coded up what the numpy command outputs, but TensorOperations is still much faster. I believe the two orderings have the same computational complexity (O(N^4)) and similar memory footprints.
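For reference, here is a sketch of what I mean on the numpy side. The sizes and the manual three-step pairwise order are just illustrative assumptions, not necessarily the path either library picks; each pairwise step contracts one index and can be phrased as a reshape plus a matrix multiply, which is where the O(N^4) count per step comes from:

```python
import numpy as np

# Illustrative sizes (assumption; the real tensors may differ)
N = 8
rng = np.random.default_rng(0)
psi = rng.standard_normal((N, N, N))
P = rng.standard_normal((N, N))

# Ask numpy for its preferred contraction order
path, info = np.einsum_path('ijk,in,jm,kl->nml', psi, P, P, P,
                            optimize='optimal')

# One possible pairwise order: contract one P at a time,
# each step costing ~O(N^4) operations.
T1 = np.einsum('ijk,in->jkn', psi, P)   # contract index i
T2 = np.einsum('jkn,jm->knm', T1, P)    # contract index j
K = np.einsum('knm,kl->nml', T2, P)     # contract index k

# Sanity check against the one-shot contraction
assert np.allclose(K, np.einsum('ijk,in,jm,kl->nml', psi, P, P, P))
```

My understanding (which may be wrong) is that TensorOperations also reduces the expression to pairwise contractions, each implemented as index permutations plus a GEMM call, so the performance gap may come from how the intermediates are permuted and fed to BLAS rather than from a different asymptotic cost.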