Description
Note: for support questions, please use the discord server. This repository's issues are reserved for feature requests and bug reports.
I'm submitting a ...
- bug report
- feature request
- support request => Please do not submit support request here, see note at the top of this template.
Do you want to request a feature or report a bug? Give a brief on it.
Bug report - occasionally, model.Facenet() raises a ValueError when using the non-default model. The failure appears to come from the final BatchNorm1d torch.nn layer.
What is the current behavior?
The model works fine when a list of more than one image is passed to it. However, when only one image is passed, the final batch normalisation layer raises:

ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 3, img_height, img_width])
If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
Pass a single image to the model to reproduce this error.
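The root cause can be reproduced with plain torch, without the full model: nn.BatchNorm1d cannot compute batch statistics from a single sample while in training mode (which is a module's default state), which is exactly the error quoted above. A minimal sketch, using an assumed embedding size of 512:

```python
import torch

# BatchNorm1d over a 512-dimensional embedding (size is illustrative).
bn = torch.nn.BatchNorm1d(512)

batch = torch.randn(2, 512)   # batch of two: works in training mode
single = torch.randn(1, 512)  # batch of one: does not

bn(batch)  # OK: more than one value per channel

try:
    bn(single)  # training mode needs batch statistics -> ValueError
except ValueError as e:
    print(e)  # "Expected more than 1 value per channel when training, ..."

# In eval mode BatchNorm1d uses its running statistics instead,
# so a batch of one is fine.
bn.eval()
out = bn(single)
print(out.shape)  # torch.Size([1, 512])
```

This suggests the bug only triggers when the model is left in training mode during inference.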
What is the expected behavior?
The model should return embeddings for any number of images passed to it, including a single image.
What is the motivation / use case for changing the behavior?
Please tell us about your environment:
Other information (e.g. detailed explanation, stack traces, related issues, suggestions on how to fix, links for context such as Stack Overflow):
https://stackoverflow.com/questions/65882526/expected-more-than-1-value-per-channel-when-training-got-input-size-torch-size
https://github.com/timesler/facenet-pytorch/blob/fa70227bd5f02209512f60bd10e7e66877fdb4f6/models/inception_resnet_v1.py#L258C81-L258C81
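A possible workaround, sketched with a stand-in module rather than the library's actual class (the linked inception_resnet_v1.py similarly ends in a Linear layer followed by BatchNorm1d; TinyEmbedder and its layer names below are hypothetical): call .eval() on the model before extracting embeddings, so the final BatchNorm1d uses running statistics instead of per-batch statistics.

```python
import torch
import torch.nn as nn

class TinyEmbedder(nn.Module):
    """Stand-in for a facenet-style model: backbone -> final BatchNorm1d."""
    def __init__(self, dim=512):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, dim))
        self.last_bn = nn.BatchNorm1d(dim)

    def forward(self, x):
        return self.last_bn(self.backbone(x))

model = TinyEmbedder()
one_image = torch.randn(1, 3, 8, 8)  # a batch containing a single image

# In training mode (the default) this line would raise the reported
# ValueError:
#   model(one_image)

# Switching to eval mode before inference avoids it:
model.eval()
with torch.no_grad():
    emb = model(one_image)
print(emb.shape)  # torch.Size([1, 512])
```

If single-image inference during training is genuinely needed, padding the batch to two images (or duplicating the image and discarding the second embedding) would also sidestep the error, at the cost of redundant computation.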