[Bug]: gemini/gemma-3-27b-it function calling is not enabled exception #10313

Open

theimpostor opened this issue Apr 25, 2025 · 0 comments

Labels: bug (Something isn't working)

What happened?

The model JSON shows that gemini/gemma-3-27b-it supports function calling; however, running the attached script raises the exception below. I'm using openai-agents v0.0.13, which I believe depends on LiteLLM 1.67.2.
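
For reference, the same capability metadata can be queried from LiteLLM programmatically; a minimal sketch, assuming litellm.supports_function_calling reads the same model JSON entry:

import litellm

# Per LiteLLM's model metadata this returns True, even though the
# Google AI Studio API rejects tool calls for gemma-3-27b-it.
print(litellm.supports_function_calling(model="gemini/gemma-3-27b-it"))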

Repro script:

#!/usr/bin/env -S uv run --script

# /// script
# requires-python = ">=3.13"
# dependencies = [
#     "openai-agents[litellm]",
# ]
# ///

import asyncio

from agents import Agent, Runner, function_tool, set_tracing_disabled
from agents.extensions.models.litellm_model import LitellmModel

set_tracing_disabled(disabled=True)

@function_tool
def get_weather(city: str):
    print(f"[debug] getting weather for {city}")
    return f"The weather in {city} is sunny."


async def main(model: str, api_key: str):
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=LitellmModel(model=model, api_key=api_key),
        tools=[get_weather],
    )

    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)


if __name__ == "__main__":
    # First try to get model/api key from args
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--model", type=str, required=False)
    parser.add_argument("--api-key", type=str, required=False)
    args = parser.parse_args()

    model = args.model
    if not model:
        model = input("Enter a model name for Litellm: ")

    api_key = args.api_key
    if not api_key:
        api_key = input("Enter an API key for Litellm: ")

    asyncio.run(main(model, api_key))
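
The failure can likely be isolated to LiteLLM itself, without openai-agents in the loop. A minimal direct sketch (the get_weather tool schema below is hand-written for illustration, in the OpenAI function-calling format that LiteLLM translates for Gemini):

import litellm

# Sketch: same request via litellm.completion directly; this should
# raise the same BadRequestError shown in the log output below.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = litellm.completion(
    model="gemini/gemma-3-27b-it",
    api_key="...",  # Google AI Studio key
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)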

Relevant log output

❯ ./litellm-tool-sample.py --api-key="..." --model=gemini/gemma-3-27b-it
Installed 56 packages in 375ms

Provider List: https://docs.litellm.ai/docs/providers


Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.

Traceback (most recent call last):
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/litellm/llms/vertex_ai/gemini/vertex_and_google_ai_studio_gemini.py", line 1285, in async_completion
    response = await client.post(
               ^^^^^^^^^^^^^^^^^^
        api_base, headers=headers, json=cast(dict, request_body)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )  # type: ignore
    ^
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/litellm/litellm_core_utils/logging_utils.py", line 135, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/litellm/llms/custom_httpx/http_handler.py", line 256, in post
    raise e
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/litellm/llms/custom_httpx/http_handler.py", line 212, in post
    response.raise_for_status()
    ~~~~~~~~~~~~~~~~~~~~~~~~~^^
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/httpx/_models.py", line 829, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://generativelanguage.googleapis.com/v1beta/models/gemma-3-27b-it:generateContent?key=AIzaSyAIYQqE4rP2Vizu2GDND1qG8a-WaPwNlkw'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/litellm/main.py", line 477, in acompletion
    response = await init_response
               ^^^^^^^^^^^^^^^^^^^
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/litellm/llms/vertex_ai/gemini/vertex_and_google_ai_studio_gemini.py", line 1291, in async_completion
    raise VertexAIError(
    ...<3 lines>...
    )
litellm.llms.vertex_ai.common_utils.VertexAIError: {
  "error": {
    "code": 400,
    "message": "Function calling is not enabled for models/gemma-3-27b-it",
    "status": "INVALID_ARGUMENT"
  }
}


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/private/var/folders/ct/x2gct7yn2bxfqs891n8h1dxr0000gn/T/tmp.i86mChDqDW/./litellm-tool-sample.py", line 52, in <module>
    asyncio.run(main(model, api_key))
    ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/python@3.13/3.13.3/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ~~~~~~~~~~^^^^^^
  File "/usr/local/Cellar/python@3.13/3.13.3/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
  File "/usr/local/Cellar/python@3.13/3.13.3/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py", line 719, in run_until_complete
    return future.result()
           ~~~~~~~~~~~~~^^
  File "/private/var/folders/ct/x2gct7yn2bxfqs891n8h1dxr0000gn/T/tmp.i86mChDqDW/./litellm-tool-sample.py", line 31, in main
    result = await Runner.run(agent, "What's the weather in Tokyo?")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/agents/run.py", line 218, in run
    input_guardrail_results, turn_result = await asyncio.gather(
                                           ^^^^^^^^^^^^^^^^^^^^^
    ...<19 lines>...
    )
    ^
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/agents/run.py", line 757, in _run_single_turn
    new_response = await cls._get_new_response(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<10 lines>...
    )
    ^
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/agents/run.py", line 916, in _get_new_response
    new_response = await model.get_response(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<10 lines>...
    )
    ^
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/agents/extensions/models/litellm_model.py", line 81, in get_response
    response = await self._fetch_response(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<9 lines>...
    )
    ^
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/agents/extensions/models/litellm_model.py", line 273, in _fetch_response
    ret = await litellm.acompletion(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<18 lines>...
    )
    ^
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/litellm/utils.py", line 1460, in wrapper_async
    raise e
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/litellm/utils.py", line 1321, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/litellm/main.py", line 496, in acompletion
    raise exception_type(
          ~~~~~~~~~~~~~~^
        model=model,
        ^^^^^^^^^^^^
    ...<3 lines>...
        extra_kwargs=kwargs,
        ^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2214, in exception_type
    raise e
  File "/Users/shoda/.cache/uv/environments-v2/litellm-tool-sample-09def778d79087ab/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 1217, in exception_type
    raise BadRequestError(
    ...<11 lines>...
    )
litellm.exceptions.BadRequestError: litellm.BadRequestError: VertexAIException BadRequestError - {
  "error": {
    "code": 400,
    "message": "Function calling is not enabled for models/gemma-3-27b-it",
    "status": "INVALID_ARGUMENT"
  }
}
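
As a caller-side workaround until the metadata is corrected, one option is to catch the mapped BadRequestError and retry without tools. A sketch, not a proper fix; the error-message substring match is an assumption about this provider's wording:

import litellm

def completion_with_tool_fallback(model, messages, tools, **kwargs):
    # Workaround sketch: retry without tools when the provider rejects
    # function calling despite LiteLLM's metadata advertising support.
    try:
        return litellm.completion(
            model=model, messages=messages, tools=tools, **kwargs
        )
    except litellm.BadRequestError as e:
        if "Function calling is not enabled" in str(e):
            return litellm.completion(model=model, messages=messages, **kwargs)
        raise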

Are you an ML Ops Team?

No

What LiteLLM version are you on?

v1.67.2

Twitter / LinkedIn details

No response
