
[Bug]: together_ai/meta-llama/Llama-4-Scout-17B-16E-Instruct not correctly parsing tool response #11453

@JarettForzano

Description


What happened?

https://models.litellm.ai/ lists Llama-4-Scout as supporting tool calling, but when tools are passed via Together AI the tool call comes back as raw text in `message.content` (prefixed with `<|python_start|>`) instead of being parsed into `message.tool_calls`, so the response is unusable for tool calling.
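
A minimal reproduction sketch. The `goto` tool schema, its `line_number` parameter, and the user prompt are illustrative guesses reconstructed from the log output below; a `TOGETHERAI_API_KEY` is assumed to be set in the environment:

```python
import litellm

# Hypothetical tool definition mirroring the "goto" call seen in the log output.
tools = [
    {
        "type": "function",
        "function": {
            "name": "goto",
            "description": "Jump to a line number in the open file.",
            "parameters": {
                "type": "object",
                "properties": {"line_number": {"type": "integer"}},
                "required": ["line_number"],
            },
        },
    }
]

response = litellm.completion(
    model="together_ai/meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[{"role": "user", "content": "Go to line 307."}],
    tools=tools,
    tool_choice="auto",
)

message = response.choices[0].message
# Expected: message.tool_calls populated with the goto call.
# Observed: message.tool_calls is None and the call is embedded in
# message.content as '<|python_start|>[{"index":0,"function":{...}}]'.
print(message.tool_calls)
print(message.content)
```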

Relevant log output

Response: ModelResponse(id='nwwck5e-2j9zxn-94b225f2c82c7aec', created=1749152495,
    model='together_ai/meta-llama/Llama-4-Scout-17B-16E-Instruct', object='chat.completion', system_fingerprint=None,
    choices=[Choices(finish_reason='stop', index=0,
        message=Message(content='<|python_start|>[{"index":0,"function":{"arguments":"{\\"line_number\\":307}","name":"goto"},"id":"call_1v0k7q2m0kcw3d1p0b3w0","type":"function"}]', role='assistant',
            tool_calls=None, function_call=None, provider_specific_fields={'refusal': None}))],
    usage=Usage(completion_tokens=52, prompt_tokens=14072, total_tokens=14124, completion_tokens_details=None, prompt_tokens_details=None),
    service_tier=None, prompt=[])
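
Until the provider response is parsed upstream, a client-side workaround sketch like the following can recover the tool calls from `message.content`. It assumes the payload is always a JSON array wrapped in a `<|python_start|>` marker (and possibly a closing `<|python_end|>`); those markers are taken from the log above, not from any documented format:

```python
import json

def extract_tool_calls(content: str):
    """Best-effort parse of tool calls embedded in message.content.

    Assumes the model wraps a JSON array of tool calls in
    <|python_start|> ... <|python_end|> markers, as seen in the
    log output above.
    """
    if content is None:
        return []
    text = content.strip()
    if text.startswith("<|python_start|>"):
        text = text[len("<|python_start|>"):]
    if text.endswith("<|python_end|>"):
        text = text[: -len("<|python_end|>")]
    try:
        calls = json.loads(text)
    except json.JSONDecodeError:
        return []
    return calls if isinstance(calls, list) else []

# Usage with the content string from the log output above:
content = (
    '<|python_start|>[{"index":0,"function":{"arguments":"{\\"line_number\\":307}",'
    '"name":"goto"},"id":"call_1v0k7q2m0kcw3d1p0b3w0","type":"function"}]'
)
for call in extract_tool_calls(content):
    args = json.loads(call["function"]["arguments"])
    print(call["function"]["name"], args)  # -> goto {'line_number': 307}
```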

Are you a ML Ops Team?

No

What LiteLLM version are you on?

1.72.1

Twitter / LinkedIn details

No response
