Replies: 1 comment
Just setting the logits of the bad-word token IDs to negative infinity worked for me:

```python
def bad_word_processor(token_ids, logits):
    logits[121] = float("-inf")
    logits[345] = float("-inf")
    logits[420] = float("-inf")
    return logits

sampling_params = SamplingParams(
    temperature=0.2,
    top_p=0.99,
    max_tokens=512,
    frequency_penalty=1.1,
    logits_processors=[bad_word_processor],
)
outputs = llm.generate(prompts, sampling_params)
```
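If you have a variable list of banned IDs, the same idea can be wrapped in a factory so you don't hard-code each ID. This is a minimal sketch: `make_bad_words_processor` is a hypothetical helper name, but the returned function matches the `(token_ids, logits)` callable shape that vLLM's `logits_processors` parameter expects.

```python
import math

def make_bad_words_processor(bad_token_ids):
    """Build a vLLM-style logits processor that bans the given token IDs.

    vLLM calls each processor once per decoding step with the generated
    token IDs so far and the logits for the next token; setting a logit
    to -inf gives that token zero probability under softmax sampling.
    """
    banned = list(bad_token_ids)

    def processor(token_ids, logits):
        for tid in banned:
            logits[tid] = float("-inf")
        return logits

    return processor
```

You would then pass the result in the same way as above, e.g. `logits_processors=[make_bad_words_processor([121, 345, 420])]`.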
Is there a way to exclude bad tokens during generation? This is different from the stop words option mentioned in the vLLM docs. I'm looking for something similar to Hugging Face's NoBadWordsLogitsProcessor:
https://huggingface.co/docs/transformers/v4.38.2/en/internal/generation_utils#transformers.NoBadWordsLogitsProcessor