Structured Output (response_format) - LiteLLM vs. CrewAI #2338
-
Hello everyone, I've encountered an issue where the `response_format` parameter works when I call `litellm.completion` directly, but not when I go through CrewAI's `LLM` class. Here's a minimal reproduction:

```python
from litellm import get_supported_openai_params, completion
from pydantic import BaseModel
from crewai import LLM
import os

PROVIDER = 'openrouter'
MODEL = 'google/gemini-2.0-flash-lite-preview-02-05:free'  # I'm poor, gimme free version

# First, let's confirm that 'response_format' is supported by the chosen provider and model.
supported_params = get_supported_openai_params(
    model=MODEL,
    custom_llm_provider=PROVIDER
)

if 'response_format' in supported_params:
    print('Yeah, we got response_format!')
else:
    print('No response_format, baby!')

# Set up our tests
os.environ['OPENROUTER_API_KEY'] = 'YOUR_KEY_NOT_MINE'

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

# Test if litellm.completion can handle response_format correctly.
messages = [
    {'role': 'system', 'content': 'Extract the event information.'},
    {'role': 'user', 'content': 'Alice and Bob are going to a science fair on Friday.'},
]

litellm_response = completion(
    model=f'{PROVIDER}/{MODEL}',
    messages=messages,
    response_format=CalendarEvent,
)
print(f'\nLiteLLM Response:\n\n{litellm_response}')

# Test if crewai.LLM.call can handle response_format correctly.
gemini_llm = LLM(
    model=f'{PROVIDER}/{MODEL}',
    response_format=CalendarEvent,
)
crewai_response = gemini_llm.call(
    "Extract the event information:\n\n"
    "Alice and Bob are going to a science fair on Friday."
)
print(f'\nCrewAI Response:\n\n{crewai_response}')
```

When running the code above, I get the following error:
-
Update: Digging deeper, I identified that the problem is that the `response_format` validation inside CrewAI's `LLM` class (its `_validate_call_params` check) is what rejects the call, even though the provider supports the parameter. In this case, I propose that the check be changed along these lines:

```python
def _validate_call_params(self) -> None:
    """
    Validate parameters before making a call. Currently this only checks if
    a response_format is provided and whether the model supports it.
    The custom_llm_provider is dynamically determined from the model:
    - E.g., "openrouter/deepseek/deepseek-chat" yields "openrouter"
    - "gemini/gemini-1.5-pro" yields "gemini"
    - If no slash is present, "openai" is assumed.
    """
    provider = self._get_custom_llm_provider()
    if (
        self.response_format is not None
        and not self.supports_response_format()
    ):
        raise ValueError(
            f"The model {self.model} does not support response_format for provider '{provider}'. "
            "Please remove response_format or use a supported model."
        )

def supports_response_format(self) -> bool:
    try:
        supported_params = get_supported_openai_params(model=self.model)
        return (
            supported_params is not None
            and "response_format" in supported_params
        )
    except Exception as e:
        logging.error(f"Failed to get supported params: {str(e)}")
        return False
```
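For context, the proposal above references a `self._get_custom_llm_provider()` helper that isn't shown here. A minimal sketch of what such a helper might look like, assuming it simply follows the rules described in the docstring, would be:

```python
# Hypothetical sketch (not CrewAI's actual implementation): derive the
# provider from the model string's prefix, as described in the docstring.
def _get_custom_llm_provider(self) -> str:
    # "openrouter/deepseek/deepseek-chat" -> "openrouter"
    # "gemini/gemini-1.5-pro"             -> "gemini"
    # No slash in the model name          -> assume "openai"
    if "/" in self.model:
        return self.model.partition("/")[0]
    return "openai"
```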
-
Same problem here.
@mouramax yeah I found that shortly after. Agree that this is still strange behaviour, since the issue is also present with other providers like Gemini, Nvidia, ollama, etc. Anyways, thanks for diving into this!
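For anyone hitting this with other providers, the support check from the first post can be reused to see whether the provider actually lacks `response_format` or whether it is only CrewAI's validation that rejects it. The provider/model strings below are illustrative placeholders, not a definitive list:

```python
from litellm import get_supported_openai_params

# Illustrative provider/model pairs -- substitute the ones you actually use.
checks = {
    'gemini': 'gemini/gemini-1.5-pro',
    'ollama': 'ollama/llama3',
}

for provider, model in checks.items():
    params = get_supported_openai_params(model=model, custom_llm_provider=provider) or []
    print(provider, 'response_format' in params)
```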