Replies: 2 comments
-
@the-gigi I'm curious to know which library you were trying. All of the LLM providers supported by simple-openai are aligned with the OpenAI API, and they shouldn't have that issue. I want to remind you that this is a library to support the OpenAI API and providers with compatible APIs. That said, purely to make the simple-openai code more robust, one could verify that the response is not null and throw the appropriate exceptions, but nothing more than that.
-
I have used simple-openai with the standard SimpleOpenAI provider. The LLM provider is Sambanova. Some of their models, like Llama 4 Scout, return invalid results. Also, when you get rate-limited they return a JSON payload with an error, which deserializes into an empty response with no choices.
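For illustration, this is the general shape of an OpenAI-style error envelope; the exact fields Sambanova returns may differ. Note that there is no `choices` key at all, so deserializing it into a chat completion object yields null/empty choices rather than a parse failure:

```json
{
  "error": {
    "message": "Rate limit exceeded, please retry later",
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded"
  }
}
```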
-
The OpenAI API is the most popular LLM API, and even providers that have their own API often offer an OpenAI compatibility layer. Unfortunately, these OpenAI-compatible APIs are not always 100% compatible.
The simple-openai library assumes that responses coming from the LLM adhere to the OpenAI API spec. Some providers don't exactly comply with the spec, but are close enough that the JSON still deserializes without a parsing error, even though the response is invalid. For example, the spec expects a chat completion response to contain a choices array with at least one choice, whose message has content and role.
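As a sketch of what validating that expectation could look like (the record types below are minimal stand-ins mirroring the spec's field names, not simple-openai's actual classes):

```java
import java.util.List;

// Minimal stand-ins for the response shape per the OpenAI spec:
// choices[].message.{role,content}. NOT simple-openai's real types.
record Message(String role, String content) {}
record Choice(Message message) {}
record Chat(List<Choice> choices) {}

public class ResponseValidator {
    // Returns the first message content, or throws a descriptive
    // exception instead of letting a null surface later as an NPE.
    static String firstContent(Chat chat) {
        if (chat == null || chat.choices() == null || chat.choices().isEmpty()) {
            throw new IllegalStateException("Provider returned no choices");
        }
        Choice first = chat.choices().get(0);
        if (first == null || first.message() == null || first.message().content() == null) {
            throw new IllegalStateException("First choice has no message content");
        }
        return first.message().content();
    }

    public static void main(String[] args) {
        Chat ok = new Chat(List.of(new Choice(new Message("assistant", "hi"))));
        System.out.println(firstContent(ok)); // prints "hi"

        Chat empty = new Chat(List.of()); // e.g. a rate-limit error payload
        try {
            firstContent(empty);
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

An `IllegalStateException` with a clear message tells the caller which contract the provider broke, instead of an NPE from deep inside the library.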
Yesterday we ran into some providers that returned invalid responses, which resulted in NPEs from simple-openai.
For example, these methods assume a lot and don't check intermediate objects in the chain for null.
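One alternative to scattered null checks is null-safe traversal with `Optional`, where each step short-circuits on null instead of throwing. A sketch, again using hypothetical stand-in types rather than simple-openai's own:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical stand-ins for the response chain, not simple-openai's types.
record Message(String role, String content) {}
record Choice(Message message) {}
record Chat(List<Choice> choices) {}

public class SafeNavigation {
    // Traverses chat.choices[0].message.content; every map()/filter()
    // short-circuits to Optional.empty() on null instead of throwing NPE.
    static Optional<String> content(Chat chat) {
        return Optional.ofNullable(chat)
                .map(Chat::choices)
                .filter(cs -> !cs.isEmpty())
                .map(cs -> cs.get(0))
                .map(Choice::message)
                .map(Message::content);
    }

    public static void main(String[] args) {
        // Null choices (e.g. an error payload) yields an empty Optional.
        System.out.println(content(new Chat(null)).isPresent()); // false
        System.out.println(content(new Chat(List.of(
                new Choice(new Message("assistant", "ok"))))).orElseThrow());
    }
}
```

The trade-off versus throwing: `Optional` pushes the "no content" decision to the caller, which may suit a library better than choosing an exception type for them.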
I'm not sure what the best way to handle it is, but throwing an NPE is not great for library users. Some options: