Layer_Normalize or Batch_Normalize for Neural_Receiver ? #878
nnk14102002
started this conversation in General
Replies: 1 comment
-
Hi @nnk14102002,
-
I am experimenting with architectural modifications to the Neural Receiver. Each training batch contains samples drawn from a range of SNRs (-10 dB to 5 dB). During this process, I found that the BCE loss converges when I use LayerNorm or no normalization at all, but not when I use BatchNorm. I suspect the problem comes from the SNR distribution within each batch: the normalization statistics are computed over a mix of low-SNR (noisy) and high-SNR (clean) samples. What do you think about this problem? Thanks!
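For context, here is a minimal NumPy sketch of the effect I suspect (the batch size, SNR range, and BPSK modulation below are illustrative assumptions, not my actual training setup): BatchNorm estimates one mean/variance over the whole mixed-SNR batch, whereas LayerNorm normalizes each sample with its own statistics.

```python
import numpy as np

# Toy illustration (hypothetical, not Sionna code): a batch mixing
# low-SNR and high-SNR received samples. BatchNorm normalizes with a
# single set of statistics computed over the whole batch, so the noisy
# samples inflate the variance applied to the clean ones; LayerNorm uses
# per-sample statistics and is insensitive to the SNR mix in the batch.
rng = np.random.default_rng(0)
batch_size, num_symbols = 32, 1024
bits = rng.integers(0, 2, size=(batch_size, num_symbols))
symbols = 2.0 * bits - 1.0                                   # BPSK symbols
snr_db = rng.uniform(-10.0, 5.0, size=(batch_size, 1))       # per-sample SNR
noise_std = 10.0 ** (-snr_db / 20.0)
y = symbols + noise_std * rng.standard_normal(symbols.shape)  # received samples

batch_var = y.var()              # roughly what BatchNorm uses at training time
per_sample_var = y.var(axis=1)   # roughly what LayerNorm uses
print(f"batch variance: {batch_var:.2f}")
print(f"per-sample variance range: {per_sample_var.min():.2f} "
      f"to {per_sample_var.max():.2f}")
```

The per-sample variances span a wide range while BatchNorm applies one shared value to all of them, which is why I suspect the mixed-SNR batches interact badly with BatchNorm's running statistics.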