Replies: 1 comment
-
Hi @coderAddy, I am facing the same issue. Were you able to solve this?
-
Hi folks,
BLOT: Need help exporting detectron2's Mask R-CNN to ONNX while retaining the frozen batch norm layers.
I'm fairly new to the detectron2 framework and have had some issues exporting detectron2's Mask R-CNN to ONNX while retaining the frozen batch norm layers from the torch model.
I have been able to export the ResNet-50 Mask R-CNN network using the code snippet below. But in this case, the frozen batch norm layers get optimized out / constant-folded in the exported ONNX network.
Based on a conversation in this thread, my understanding is that with the torch.onnx.export() function we can turn off constant folding and optimizations by exporting with the training=TrainingMode.TRAINING option, after which the frozen batch norm layers wouldn't get optimized out. However, while trying this out, I run into an error (pasted below).
Updated export code:
Error message:
I faced a similar issue with _flatten(in), which was resolved by converting all the scalars to tensors. But here, the outputs from detectron2 are objects of type 'Instances'. A few questions here -
Thanks in advance!
Edit: This issue is about exporting the fixed batch norm layers from the model. But currently, I'm unable to successfully export any form of the network using the torch.onnx.export(torch_model, first_batch, "mrcnn.onnx") syntax.