Code Right? #61

@Mujin-z

Description

In soft_pool2d in adapool/pytorch/SoftPool/idea.py, why is torch.exp(x) summed over the channel dimension? Shouldn't the sum be taken over the pooling window instead (when computing SoftPool in plain Python)?

import torch
import torch.nn.functional as F
from torch.nn.modules.utils import _pair
# CUDA_SOFTPOOL2d is defined elsewhere in the repo

def soft_pool2d(x, kernel_size=2, stride=None, force_inplace=False):
    if x.is_cuda and not force_inplace:
        x = CUDA_SOFTPOOL2d.apply(x, kernel_size, stride)
        # Replace NaNs if found
        if torch.isnan(x).any():
            return torch.nan_to_num(x)
        return x
    kernel_size = _pair(kernel_size)
    if stride is None:
        stride = kernel_size
    else:
        stride = _pair(stride)
    _, c, h, w = x.size()
    # Question 1: why sum over the channel dimension (dim=1) rather than
    # over the pooling window?
    e_x = torch.sum(torch.exp(x), dim=1, keepdim=True)
    e_x = torch.clamp(e_x, float(0), float('inf'))
    # Question 2: why multiply by sum(kernel_size) instead of
    # kernel_size[0] * kernel_size[1]? I assume the intent is to undo the
    # averaging coefficient of avg_pool2d, but that coefficient is the
    # product of the kernel dimensions, not their sum. (Either factor
    # cancels between numerator and denominator anyway.)
    x = F.avg_pool2d(x.mul(e_x), kernel_size, stride=stride).mul_(sum(kernel_size)).div_(
        F.avg_pool2d(e_x, kernel_size, stride=stride).mul_(sum(kernel_size)))
    return torch.clamp(x, float(0), float('inf'))
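For reference, here is a minimal sketch of the per-window formulation I would have expected, where each activation is weighted by its own exponential and the weights are normalized within each pooling window (soft_pool2d_window is my name for this hypothetical variant; the names and structure are illustrative, not the repo's code):

import torch
import torch.nn.functional as F
from torch.nn.modules.utils import _pair

def soft_pool2d_window(x, kernel_size=2, stride=None):
    # Weight each activation by its own exponential, normalized per
    # pooling window and per channel:
    #   out = sum_i exp(x_i) * x_i / sum_i exp(x_i), i over the window
    kernel_size = _pair(kernel_size)
    stride = kernel_size if stride is None else _pair(stride)
    e_x = torch.exp(x)  # elementwise weights; no sum over dim=1
    # avg_pool2d applies the same 1/(kh*kw) factor to numerator and
    # denominator, so it cancels and no extra rescaling is needed.
    return (F.avg_pool2d(x * e_x, kernel_size, stride=stride)
            / F.avg_pool2d(e_x, kernel_size, stride=stride))

With a single input channel the two versions agree (up to the final clamp), since summing exp(x) over dim=1 is then a no-op; for multi-channel inputs the channel-summed weights couple all channels together, which is what prompted this question.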
