Hi!
Thank you for the amazing work. I appreciate it and hope to contribute in the future.
So far, I have worked with the authors' official implementations, which were mostly written in "pure" PyTorch/TensorFlow code. I never had problems running inference at an arbitrary image size after training (for those networks that were not architecturally restricted).
I always trained on smaller image sizes (cropped images) due to memory constraints and then ran inference at a much higher resolution, one image at a time. That never caused problems (I never rescaled my images); my previous workflow looked roughly like the sketch below.
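Here is a simplified sketch of that "pure" PyTorch workflow (the checkpoint path and image folder are just placeholders, not my actual setup):

```python
import torch
from pathlib import Path
from torchvision.io import read_image

# Placeholder checkpoint: the full model object saved after training on crops.
model = torch.load('checkpoint.pth')
model.eval()

with torch.no_grad():
    # Run inference one image at a time at the original (much larger) resolution,
    # without any resizing or cropping.
    for path in Path('test_images').glob('*.png'):
        img = read_image(str(path)).float() / 255.0   # keep original H x W
        pred = model(img.unsqueeze(0))                 # batch of 1, arbitrary size
```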
I wanted to do the same with my MMDetection-trained network. I tried to find documentation or related issues/discussions, but everyone seems to use the same image size for training and testing, which is not possible in my case.
Am I missing something? How can I run inference at an arbitrary image size after training? Does the MMDetection framework support this, or does the train/test pipeline mechanism restrict it? (I mean the kind of test pipeline sketched below.)
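For reference, this is roughly the test pipeline I am talking about (copied from the standard MMDetection 2.x configs; the concrete scale values are the defaults there, not mine):

```python
# Standard MMDetection-style test pipeline with a fixed img_scale.
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),  # fixed test scale -- this is the part I am unsure about
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
```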
I hope to find some answers here. I would really appreciate it!
Thank you!