
Question about the "zero point" problem with the pretrained global scale weights #5

@oneTaken

Description


Hi,
I used your pretrained model 2D_modulation.pth to test the 2D problem.
After loading the weights, I inspected the global scale conv weights, shown below, which relate to the "zero point" problem:

print(list(model.children())[0].weight)
Parameter containing:
tensor([[ 1.1378e-05, -8.9791e-05],
        [-5.6083e-05,  6.1680e-05],
        [-4.5423e-05,  3.8486e-05]], requires_grad=True)

print(list(model.children())[0].bias)
Parameter containing:
tensor([-0.0140,  0.0141,  0.0138], requires_grad=True)

If the cond vector for the blur & denoise 2D problem is at the "zero point" [0, 0], the weights become useless and the bias dominates the output.
And the bias is not 0.

So, what is going on here?
Why didn't you just set the global scale linear layer's bias=False?
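To make the concern concrete, here is a minimal sketch (using numpy rather than PyTorch, and treating the layer as a plain linear map `W @ cond + b`): with the printed weights and bias above, feeding the zero-point condition [0, 0] makes the weight term vanish, so the output is exactly the bias. The `global_scale` function here is a hypothetical stand-in, not the model's actual code.

```python
import numpy as np

# Weights and bias copied from the printed parameters above (3x2 weight, 3-dim bias).
W = np.array([[ 1.1378e-05, -8.9791e-05],
              [-5.6083e-05,  6.1680e-05],
              [-4.5423e-05,  3.8486e-05]])
b = np.array([-0.0140, 0.0141, 0.0138])

def global_scale(cond):
    """Hypothetical global-scale layer: a plain linear map W @ cond + b."""
    return W @ np.asarray(cond, dtype=float) + b

# At the "zero point" cond = [0, 0], the weight term contributes nothing,
# so the output equals the bias exactly -- the bias dominates.
out = global_scale([0.0, 0.0])
print(out)  # identical to b
```

This is why a nonzero bias looks suspicious at the zero point: whatever the weights learned is ignored there, and only the bias shapes the modulation.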
