Thank you for your excellent project!
I'm training the Keypoint R-CNN network implemented in torchvision, scaling the images to the size given in the object detection result tables.
I know that the input image sizes differ between SimpleBaseline2D and the R-CNN family of networks (Faster, Mask, etc.).
I'd like to know how the input images should be scaled for the minicoco dataset in the keypoint detection experiment with the SimpleBaseline2D network.
Even though the networks are different, is the input size the same for the object detection task and the keypoint detection task?
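For context, here is a minimal sketch of the two resizing conventions I have in mind (my own illustration, not code from this repo; the 800/1333 values follow torchvision's detection defaults, and 256x192 is the crop size commonly used by SimpleBaseline-style top-down pose networks):

```python
def detection_resize(h, w, min_size=800, max_size=1333):
    # torchvision detection-style resize: scale so the shorter side
    # reaches min_size, capped so the longer side stays within max_size.
    scale = min_size / min(h, w)
    if max(h, w) * scale > max_size:
        scale = max_size / max(h, w)
    return round(h * scale), round(w * scale)

def topdown_crop_size():
    # SimpleBaseline-style top-down pose: each detected person box is
    # warped to a fixed crop, commonly 256x192 (H x W).
    return (256, 192)

print(detection_resize(480, 640))  # -> (800, 1067): whole image rescaled
print(topdown_crop_size())         # -> (256, 192): fixed per-person crop
```

So my question is whether the experiments use the whole-image convention or the fixed-crop convention for SimpleBaseline2D.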
Thanks in advance!