Replies: 2 comments
- Resolved
- @MerakiV Please explain how you handle that. Thank you!
-
Hello, I'm trying to train a SlowFast model on a custom dataset.
The dataset's bounding boxes cover only hands, since I want to detect the actions of hands in the videos (the camera focuses on the hands; the person's full body is not visible). I should also mention that I have made a few other small modifications, because my videos are not all the same length.
I formatted my annotations like the AVA dataset and am able to train the model. However, I get mAP = 0.0, and the predicted bounding boxes are always [0, 0, 1, 1]. Looking into it further, I noticed that the unnormalized predicted boxes are always [0, 0, 455, 256], which is the frame size after resizing the original 1920x1080 videos in my dataset.
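For context on the symptom above: AVA-format annotation CSVs store box coordinates normalized to [0, 1] relative to the frame size, so one common cause of predictions collapsing to the full frame is boxes accidentally left in pixel units. A minimal sanity-check sketch (a hypothetical helper, not part of the SlowFast codebase) that converts and validates a pixel-space box for a 1920x1080 frame:

```python
def normalize_box(x1, y1, x2, y2, frame_w, frame_h):
    """Convert a pixel-space box to AVA-style normalized [0, 1] coordinates."""
    box = (x1 / frame_w, y1 / frame_h, x2 / frame_w, y2 / frame_h)
    # Sanity checks: coordinates must be in range and the box non-degenerate.
    # Boxes outside [0, 1] or with zero area can silently break training.
    assert all(0.0 <= v <= 1.0 for v in box), f"out of range: {box}"
    assert box[2] > box[0] and box[3] > box[1], f"degenerate box: {box}"
    return box

# Example: a hand box annotated in pixels on a 1920x1080 frame.
print(normalize_box(480, 270, 960, 810, 1920, 1080))
# -> (0.25, 0.25, 0.5, 0.75)
```

Running a check like this over every row of the annotation CSV before training would confirm whether the ground-truth boxes the loader sees are actually valid.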
So here are my main questions:
Here is my configuration file in case it helps: