Replies: 2 comments 1 reply
-
Hi @sulth1, great observation! It's because OpenAI has updated gpt-3.5-turbo (shown as GPT-3.5 in the plugin) to default to the new gpt-3.5-turbo-0125, which has a 16K context window. The old GPT-3.5-TURBO-16K is now shown as legacy; we can probably remove it.
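To illustrate the aliasing, here's a minimal sketch (the mapping below is hypothetical and mirrors the plugin's dropdown labels, not its actual source code):

```python
# Hypothetical mapping from plugin display names to OpenAI API model ids.
# OpenAI's "gpt-3.5-turbo" alias now resolves server-side to gpt-3.5-turbo-0125,
# which already has a 16K context window, making the separate 16k entry redundant.
MODEL_ALIASES = {
    "GPT-3.5": "gpt-3.5-turbo",                # resolves to gpt-3.5-turbo-0125 (16K context)
    "GPT-3.5-TURBO-16K": "gpt-3.5-turbo-16k",  # legacy; candidate for removal
}

def resolve_model(display_name: str) -> str:
    """Map a plugin dropdown label to the model id sent with the API request."""
    return MODEL_ALIASES[display_name]

print(resolve_model("GPT-3.5"))  # gpt-3.5-turbo
```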
-
Hello,
Again, thank you so much for this life-changing vault QA update.
I have been experimenting with it today, and found that in your plugin, when I use ChatGPT 3.5 turbo 16k through OpenRouter (https://openrouter.ai/models/openai/gpt-3.5-turbo-0125), the results are much better than through the GPT-3.5 16k option (not sure why it isn't labelled GPT-3.5 turbo 16k, btw?).
Is there any reason for that? Aren't those two supposed to be the exact same? Could it have something to do with the temperature settings (kept as default 0.1)?