Replies: 1 comment
-
Hi there, this is an issue to open on the Hugging Face repository. We don't develop that code here.
-
Thanks to everyone at Hugging Face for their contributions. You make the pre-training process very clear.
According to the documentation, we only need to set the inputs and labels:
'loss = model(input_ids=input_ids, labels=labels).loss'
But when we do large-scale pre-training, batching and padding are unavoidable.
So how should we set the attention mask for both input_ids and labels?
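For context, here is a minimal sketch of one common way to handle this with the Transformers library, assuming a GPT-2-style causal language model (the checkpoint name and texts below are only placeholders): pass an explicit attention_mask for the padded batch, and set the labels at padded positions to -100, the index the cross-entropy loss ignores.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder checkpoint; any causal LM works the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tokenize a batch with padding; the tokenizer also returns the attention_mask.
batch = tokenizer(
    ["a short example", "a much longer example that forces padding"],
    padding=True,
    return_tensors="pt",
)
input_ids = batch["input_ids"]
attention_mask = batch["attention_mask"]

# Labels are a copy of the inputs; padded positions are set to -100 so the
# loss skips them. The attention_mask covers padding on the input side,
# and the -100 labels cover it on the loss side.
labels = input_ids.clone()
labels[attention_mask == 0] = -100

loss = model(
    input_ids=input_ids,
    attention_mask=attention_mask,
    labels=labels,
).loss
print(loss)
```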