Correct way to access the logit_bias feature #87

@NullMagic2

Description

It seems the OpenAI API supports the logit_bias parameter, which allows us to encourage, discourage, or ban certain tokens: https://help.openai.com/en/articles/5247780-using-logit-bias-to-alter-token-probability-with-the-openai-api and the LM Studio API also supports it:
https://lmstudio.ai/docs/app/api/endpoints/openai
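
For reference, this is roughly how I would expect the equivalent request to look through LM Studio's OpenAI-compatible endpoint. This is only a sketch: localhost:1234 is the default local server address, and "qwen3-14b" stands in for whatever identifier the model is loaded under.

from openai import OpenAI

# Point the OpenAI client at the local LM Studio server (default port 1234).
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="qwen3-14b",  # placeholder: whatever the loaded model is called
    messages=[{"role": "user", "content": 'Complete this sentence: "Once upon a..."'}],
    logit_bias={"1678": -100},  # token ID -> bias; -100 should effectively ban the token
    temperature=0.2,
)
print(response.choices[0].message.content)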

However, when I try to use it, it seems to have no effect.
For instance, for Qwen3-14b the word "time" is token 1678:

token = a.tokenize("time")  # "a" is the loaded Qwen3-14b model handle
print(token)
# [1678]
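
One thing worth checking: in most BPE tokenizers, "time" and " time" (with a leading space) map to different token IDs, so biasing only 1678 would not necessarily cover the variant the model actually emits mid-sentence. A quick sketch to enumerate them (same tokenize call as above):

# List the token IDs for the common surface forms of "time".
# Each distinct ID would need its own entry in logit_bias.
for variant in ["time", " time", "Time", " Time"]:
    print(repr(variant), a.tokenize(variant))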

But when I pass logit_bias to Qwen 3 through the config, it seems to be ignored completely:

self.model.act(
    chat_object,
    tool_list,  # pass the actual tools list
    config={
        "temp": 0.2,
        "maxTokens": MAX_HISTORY_LENGTH,
        "logit_bias": {"1678": -100},
    },
    on_prediction_fragment=self._stream_response,
)

[USER (10:08 AM)]: Complete this sentence: "Once upon a..."

[AI (10:08 AM)]: Once upon a time, there lived a curious little girl who loved exploring the enchanted forest behind her cottage.

Is this the right way to use logit_bias, or is it a model-specific issue?
