
Hi, I have a few questions about the progressive pruning step and hope you can answer them #30

@Hao-tianWang

Description


Hello, excellent work!
I have a few questions about the progressive pruning part. In the paper, you propose using the magnitude of the accumulated gradient as the measure of importance for the alpha parameters, but the paper does not seem to state whether the elements of alpha that get changed when alpha is updated are the ones whose grad values are large or the ones whose grad values are small.
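For reference, what I understand by "accumulated gradient" is a score along the lines of the sketch below; this is purely my own paraphrase with a made-up toy loss, not code from the paper or the repository:

    import torch

    # My own paraphrase of "accumulated gradient" as an importance score for alpha.
    # alpha and the loss below are toy stand-ins, not the paper's actual objective.
    alpha = torch.nn.Parameter(torch.ones(4))
    x = torch.randn(16, 4)
    alpha_grad = torch.zeros(4)

    for _ in range(10):
        loss = (x * alpha).pow(2).mean()   # toy loss, only used to produce gradients
        loss.backward()
        alpha_grad += alpha.grad.abs()     # accumulate gradient magnitude per element
        alpha.grad = None

    print(alpha_grad)  # larger entry = larger accumulated gradient for that alpha element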
In the code, I see:

    sorted_alpha_grad, indices = torch.sort(alpha_grad, descending=True)
    compression_weight = torch.ones_like(indices)
    compression_weight[indices < alpha_grad_attn.numel()] = 36  # 36 = 12 (number of heads) * [1 (weights of query) + 1 (weights of key) + 1 (weights of value)]
    threshold = sorted_alpha_grad[torch.argmin(torch.abs(torch.cumsum(compression_weight, 0) - torch.sum(compression_weight)*pi))]

    def update(module, grad):
        mask = ((grad <= threshold) | (grad <= torch.min(grad)))
        module.data.copy_(mask + (~mask)*(1 - pi/p))

This part seems to show that during the update it is the parameters whose grad is larger than the threshold that get updated and driven toward 0. Here I am confused: why are the parameters whose grad is larger than the threshold considered less important, to the point that they can be driven toward 0? I hope you can clarify.
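To make my reading concrete, here is a small runnable sketch of what the quoted update appears to do; the gradient values, threshold, pi, and p below are made up purely for illustration:

    import torch

    # Made-up values, only to illustrate the masking logic quoted above.
    grad = torch.tensor([0.9, 0.2, 0.5, 0.05])  # accumulated alpha gradients (toy)
    threshold = 0.4                             # pretend this came from the cumsum step
    pi, p = 0.1, 0.5                            # hypothetical current / target compression ratios

    mask = ((grad <= threshold) | (grad <= torch.min(grad)))
    # mask -> [False, True, False, True]

    new_alpha = mask + (~mask) * (1 - pi / p)
    print(new_alpha)
    # -> [0.8, 1.0, 0.8, 1.0]
    # Elements whose grad exceeds the threshold (0.9 and 0.5) are set to 1 - pi/p = 0.8,
    # i.e. pushed away from 1, while elements with grad <= threshold are set to 1.
    # The extra "grad <= torch.min(grad)" term keeps the smallest-grad element masked in any case.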
