
(models): Enable JSON Schema Support for Gemini 1.5 Flash Models #5708


Merged
merged 6 commits into BerriAI:main on Sep 15, 2024

Conversation

@F1bos (Contributor) commented Sep 15, 2024

Gemini 1.5 Flash models now support JSON schema through model configuration.

Changelog - https://ai.google.dev/gemini-api/docs/changelog
From docs - https://ai.google.dev/gemini-api/docs/structured-output?lang=python
Also here - https://ai.google.dev/gemini-api/docs/models/gemini#gemini-1.5-flash
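For reference, here is a rough sketch of how this could be exercised from the LiteLLM side once the model config marks schema support. It assumes GEMINI_API_KEY is set in the environment, and the exact response_format payload shape (passing a JSON schema under "response_schema") is an assumption for illustration, not something taken from this PR:

# Hedged sketch: requesting JSON-schema output for a Gemini 1.5 Flash model
# through LiteLLM. The response_format shape below is assumed for illustration;
# check the LiteLLM docs for the exact parameter structure.
import litellm

recipe_schema = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "recipe_name": {"type": "string"},
            "ingredients": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["recipe_name", "ingredients"],
    },
}

response = litellm.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": "List a few popular cookie recipes."}],
    response_format={"type": "json_object", "response_schema": recipe_schema},
)
print(response.choices[0].message.content)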

Below is the script I made to check which models work with the schema; all models except gemini-1.5-flash-8b-exp-0827 support it.

import typing_extensions as typing
import google.generativeai as genai

genai.configure(api_key="")

class Recipe(typing.TypedDict):
    recipe_name: str
    ingredients: list[str]


# Note: gemini-1.5-flash-8b-exp-0827 does not support the response_schema parameter
models_list = [
    'gemini-1.5-flash-001',
    'gemini-1.5-flash',
    'gemini-1.5-flash-latest',
    'gemini-1.5-flash-exp-0827',
]

results = []

for model_name in models_list:
    model = genai.GenerativeModel(model_name)

    response = model.generate_content(
        "List a few popular cookie recipes.",
        generation_config=genai.GenerationConfig(
            response_mime_type="application/json", response_schema=list[Recipe]
        ),
    )

    print(f'[{model_name}] Response text: ', response.text)
    results.append(response)


print('Final Results: ', results)

Additionally, I observed some unexpected behavior regarding the check for response_schema support. Currently, it seems to rely on the Vertex AI provider configuration. However, certain models like "gemini-1.5-flash-latest" lack specific Vertex configurations, potentially leading to an inaccurate assessment of their schema support, despite Gemini itself supporting it. It might be necessary to modify this hardcoded provider check to utilize "custom_llm_provider" instead for a more reliable evaluation.

# Checks for 'response_schema' support - if passed in
if "response_schema" in optional_params:
    supports_response_schema = litellm.supports_response_schema(
        model=model, custom_llm_provider="vertex_ai"
    )
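
A minimal sketch of the suggested adjustment, assuming the request's resolved custom_llm_provider is already in scope at this point in the code (the variable name is illustrative):

# Sketch only: use the request's resolved provider instead of hardcoding
# "vertex_ai"; `custom_llm_provider` is assumed to be available here.
if "response_schema" in optional_params:
    supports_response_schema = litellm.supports_response_schema(
        model=model, custom_llm_provider=custom_llm_provider
    )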


@krrishdholakia (Contributor) commented:

Additionally, I observed some unexpected behavior regarding the check for response_schema support. Currently, it seems to rely on the Vertex AI provider configuration. However, certain models like "gemini-1.5-flash-latest" lack specific Vertex configurations, potentially leading to an inaccurate assessment of their schema support, despite Gemini itself supporting it. It might be necessary to modify this hardcoded provider check to utilize "custom_llm_provider" instead for a more reliable evaluation.

Hey @F1bos, can you file this as an issue for tracking?

@krrishdholakia merged commit b64b7a9 into BerriAI:main on Sep 15, 2024
2 checks passed
@ishaan-jaff (Contributor) commented:

hi @F1bos - curious, do you use LiteLLM Proxy Server today?

@F1bos (Contributor, Author) commented Sep 17, 2024

hi @F1bos - curious, do you use LiteLLM Proxy Server today?

Hi. I used it before, but right now I use LiteLLM via the SDK.
