Onnx export with batch_size 1 #4516
Unanswered
marisancans asked this question in Q&A
Replies: 1 comment
-
Hi @marisancans, I am also trying to convert faster-rcnn_R_50_fpn_3x.yaml; however, I am facing errors while trying to convert the PyTorch model to ONNX using the export model script provided.
Could you please help me figure out what I am doing wrong? I am getting the following error:
-
Hi, I am able to use your scripts to export the model to ONNX; however, the input shape is (3, 800, 1202) and I would like it to be (1, 3, 800, 1202). I don't need dynamic batch sizes, because my scripts later down the line break (I need to convert to TensorRT and use it in Triton). How can I achieve this? I have spent a couple of days trying to do it with graph surgeon and similar tools without success (the TensorRT conversion breaks in Triton with weird errors). Thanks
I'm using this model: detectron2/configs/COCO-Detection/faster_rcnn_R_101_C4_3x.yaml