I define a class method unfreeze_box_head as below.
```python
@META_ARCH_REGISTRY.register()
class GeneralizedRCNN(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        ...
        ...

    def unfreeze_box_head(self):
        for p in self.roi_heads.box_head.parameters():
            p.requires_grad = True
        logger.info('Unfreeze roi_box_head parameters')
```
And I call it with:

```python
if iteration == 500:
    model.module.unfreeze_box_head()
```
I expect this to unfreeze the weights of the box head module at iteration 500. However, judging by the behavior of the loss, it does not seem to take effect. I wonder whether there is anything to watch out for here, perhaps something specific to the DistributedDataParallel framework. Any advice is appreciated. Thanks.
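For context, my current understanding is that DDP registers its gradient hooks only for parameters that require grad at the time the wrapper is constructed, and that Detectron2's build_optimizer skips frozen parameters, so neither the reducer nor the optimizer knows about the box head after it is unfrozen. The sketch below shows the workaround I am considering; `optimizer`, `iteration`, and the re-wrap arguments are assumptions about my training loop, not Detectron2 API:

```python
import torch
from torch.nn.parallel import DistributedDataParallel

if iteration == 500:
    model.module.unfreeze_box_head()

    # (1) If the optimizer was built only from parameters that required grad
    # at construction time, the newly unfrozen parameters belong to no param
    # group and will never be updated. Add them explicitly.
    optimizer.add_param_group(
        {"params": list(model.module.roi_heads.box_head.parameters())}
    )

    # (2) DDP only reduces gradients for parameters that required grad when
    # the wrapper was built, so gradients of parameters unfrozen afterwards
    # are not synchronized across ranks. Re-wrap the underlying module in a
    # fresh DDP instance on every rank.
    model = DistributedDataParallel(
        model.module, device_ids=[torch.cuda.current_device()]
    )
```

Does this reasoning sound right, or is there a simpler way to do this?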
This discussion was converted from issue #2486 on January 19, 2021 19:51.