Hi,
I'm currently training models using a backbone like ResNet50, which is pretrained on ImageNet with an input shape of 3x224x224. I was wondering if it's possible to use a larger input size, such as 3x320x320 or 3x416x416, and still benefit from the pretrained weights, or does the input shape need to be strictly 224x224?
Are there any constraints or considerations when using larger input sizes with pretrained models?
Best regards,
Ha
Hey @hamac03, yes, you can use any input size with convolutional backbones, e.g. 512x512, and still benefit from the pretrained weights. Usually, the only requirement is that the size is divisible by 32. You can even train on 512x512 inputs and then run inference at 1024x1024 and get similar or even better performance. But it's always better to check the metrics on your particular model and dataset.
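As a minimal sketch of why this works (assuming torchvision here, since the thread doesn't name the framework): the convolutional layers are input-size agnostic, so ImageNet-pretrained ResNet50 weights load unchanged for larger inputs, and the divisible-by-32 rule comes from the backbone's total downsampling stride.

```python
# Sketch only: assumes torchvision; the actual framework in this issue may differ.
import torch
from torchvision.models import resnet50, ResNet50_Weights

# Load ImageNet-pretrained weights; no architecture change is needed for bigger inputs.
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval()

# The full classifier already handles 320x320 thanks to global average pooling.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 320, 320))
print(logits.shape)  # torch.Size([1, 1000])

# As a detection/segmentation backbone, only the convolutional trunk is kept.
# Feature maps are downsampled by a factor of 32, hence the divisible-by-32 rule.
backbone = torch.nn.Sequential(*list(model.children())[:-2])
with torch.no_grad():
    features = backbone(torch.randn(1, 3, 416, 416))
print(features.shape)  # torch.Size([1, 2048, 13, 13]) -- 416 / 32 = 13
```

If the input size were not a multiple of 32 (e.g. 3x300x300), the trunk would still run, but rounding at each stride-2 stage can cause feature-map misalignment in multi-scale heads such as FPNs, which is why frameworks typically enforce the constraint.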