Replies: 1 comment 1 reply
I was able to do this manually by using a combination of …
Hi,
Is there a way to get the output token sequence when generating text with llama-cpp-python? I don't need the logprobs, just the tokens.
If I tokenize the resulting string, I don't necessarily get the same tokens that were actually chosen during generation, since there are potentially many token sequences that produce the same output string.
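One possible approach (a minimal sketch, not a confirmed recommendation): the lower-level `Llama.generate()` method yields token ids one at a time, so the exact sampled sequence can be collected as it is produced instead of re-tokenizing the finished text. The model path, prompt, sampling parameters, and length cap below are placeholder assumptions.

```python
from llama_cpp import Llama

# Placeholder model path; adjust to a real GGUF file.
llm = Llama(model_path="./models/model.gguf")

# Tokenize the prompt with the model's own tokenizer.
prompt_tokens = llm.tokenize(b"Q: Name the planets in the solar system. A: ")

output_tokens = []
for token in llm.generate(prompt_tokens, top_k=40, top_p=0.95, temp=0.8):
    # generate() yields token ids as they are sampled, so this records the
    # exact sequence the sampler chose rather than a re-tokenization.
    if token == llm.token_eos():
        break
    output_tokens.append(token)
    if len(output_tokens) >= 128:  # simple length cap for this sketch
        break

# The recorded ids round-trip to the generated text via detokenize().
text = llm.detokenize(output_tokens).decode("utf-8", errors="ignore")
print(output_tokens)
print(text)
```

If the high-level `create_completion()` API is preferred, requesting `logprobs` should also surface the chosen tokens in the response, though that brings along the logprob overhead the question wanted to avoid.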