Hello, author!
My question concerns the dimension used in the DFT: `torch.fft.rfft(queries, dim=-1)` applies the DFT along the feature dimension. What is the difference between that and applying the DFT along the sequence length, as FEDformer does?
```python
import torch
import torch.nn as nn

class VarCorAttention(nn.Module):
    def __init__(self, args, mask_flag=True, factor=5, scale=None,
                 attention_dropout=0.1, output_attention=False) -> None:
        super(VarCorAttention, self).__init__()
        self.scale = scale
        self.mask_flag = mask_flag
        self.output_attention = output_attention
        self.dropout = nn.Dropout(attention_dropout)

    def origin_compute_cross_cor(self, queries, keys):
        # DFT along the last (feature) dimension
        q_fft = torch.fft.rfft(queries, dim=-1)
        k_fft = torch.fft.rfft(keys, dim=-1)
        res = q_fft * k_fft          # 1 x 10 x 257
        corr = torch.fft.irfft(res, dim=-1)  # 1 x 10 x 512
        corr = corr.mean(dim=-1)     # 1 x 10
        return corr
```
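To make the question concrete, here is a minimal sketch contrasting the two choices of `dim` on a tensor of assumed shape `[batch, seq_len, d_model]`; the shapes `(1, 96, 16)` are hypothetical, chosen only for illustration:

```python
import torch

# Hypothetical input: batch=1, seq_len=96, d_model=16
x = torch.randn(1, 96, 16)

# (a) DFT over the last (feature) dimension, as in the snippet above:
# each time step's 16 channel values are treated as one length-16 signal,
# so the frequencies mix variables, not time steps.
f_feat = torch.fft.rfft(x, dim=-1)
print(f_feat.shape)  # torch.Size([1, 96, 9]); 16 // 2 + 1 = 9 frequency bins

# (b) DFT over the sequence-length dimension, as FEDformer does:
# each channel's 96-step series is transformed, so the frequencies
# correspond to temporal periodicities in the series.
f_time = torch.fft.rfft(x, dim=1)
print(f_time.shape)  # torch.Size([1, 49, 16]); 96 // 2 + 1 = 49 frequency bins
```

In short, the axis passed to `dim` decides what counts as the "signal": variant (a) measures correlation across variables at each time step, while variant (b) measures correlation across time within each variable.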