Hi,

During training, at the onEpoch event, I want to read the gradients of certain model parameters, but by that point they are all zero.

I realized they are zeroed by a call to ParameterStore.updateAllParameters() during the training step (I debugged it down to a call to NDArrayEx.adamUpdate()). It certainly makes sense to zero the gradients again after a training step, but that prevents me from logging them for analysis. Is there a better approach to achieve this?
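For context, this is roughly where the gradients live in a standard DJL training loop (a sketch only; EasyTrain.trainBatch, Trainer.iterateDataset and Trainer.step are DJL's public API, but your loop may differ):

```java
import ai.djl.training.EasyTrain;
import ai.djl.training.Trainer;
import ai.djl.training.dataset.Batch;
import ai.djl.training.dataset.Dataset;
import ai.djl.translate.TranslateException;

import java.io.IOException;

public final class TrainingLoopSketch {

    /** One epoch of the standard DJL loop, annotated with the gradient lifetime. */
    static void trainOneEpoch(Trainer trainer, Dataset dataset)
            throws IOException, TranslateException {
        for (Batch batch : trainer.iterateDataset(dataset)) {
            // Forward and backward pass: gradients are populated here, and the
            // onTrainingBatch listener event fires while they are still intact.
            EasyTrain.trainBatch(trainer, batch);

            // Optimizer update: ParameterStore.updateAllParameters() applies the
            // gradients (for Adam, ultimately NDArrayEx.adamUpdate()) and then
            // resets them to zero.
            trainer.step();

            batch.close();
        }
        // By the time onEpoch fires, the gradients have already been zeroed.
    }
}
```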
Replies: 2 comments

-
I assume you are using …
-
Thanks, I found the event to grab the gradient eventually: onTrainingBatch
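For anyone finding this later, here is a minimal sketch of such a listener, assuming DJL's TrainingListenerAdapter base class and the standard parameter/gradient accessors (the L2-norm logging is just an example summary, not part of DJL):

```java
import ai.djl.ndarray.NDArray;
import ai.djl.nn.Parameter;
import ai.djl.training.Trainer;
import ai.djl.training.listener.TrainingListener.BatchData;
import ai.djl.training.listener.TrainingListenerAdapter;
import ai.djl.util.Pair;

// Logs a per-parameter gradient summary on every training batch, i.e. after
// the backward pass but before the optimizer step zeroes the gradients.
public class GradientLoggingListener extends TrainingListenerAdapter {

    @Override
    public void onTrainingBatch(Trainer trainer, BatchData batchData) {
        for (Pair<String, Parameter> pair
                : trainer.getModel().getBlock().getParameters()) {
            NDArray array = pair.getValue().getArray();
            if (array.hasGradient()) {
                NDArray grad = array.getGradient();
                // L2 norm as a cheap summary; replace with whatever you need.
                float norm = grad.mul(grad).sum().sqrt().getFloat();
                System.out.printf("%s grad L2 norm: %f%n", pair.getKey(), norm);
            }
        }
    }
}
```

It can be registered when building the trainer config, e.g. via new DefaultTrainingConfig(loss).addTrainingListeners(new GradientLoggingListener()).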