
Anthropic caching not supported on LiteLLM #1257

@suhjohn

Description

Please read this first

  • Have you read the docs? Agents SDK docs
  • Have you searched for related issues? Others may have had similar requests

Describe the feature

What is the feature you're requesting? How would it work? Please provide examples and details if possible.

The LitellmModel currently drops additional parameters that OpenAI doesn't support but that other model providers need, specifically in extract_all_content / items_to_messages in chatcmpl_converter.py. This is what blocks Anthropic prompt caching: the cache_control markers are stripped during conversion. Alternatively, you could support a post_parse hook applied to the converted messages, so applications can run custom processing logic on them.
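
For reference, here is roughly what a cache-enabled request looks like when sent directly through LiteLLM to Anthropic (a sketch based on LiteLLM's pass-through of Anthropic's prompt-caching blocks; the model name and prompt text are illustrative). The `cache_control` key on a content block is exactly the kind of extra parameter the converter currently drops:

```python
import litellm

# Sketch: Anthropic prompt caching via LiteLLM. The "cache_control" key on a
# content block marks the prefix up to that block as cacheable; the Agents SDK
# converter strips keys like this when building Chat Completions messages.
response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You are a helpful assistant. <large shared context>",
                    "cache_control": {"type": "ephemeral"},
                }
            ],
        },
        {"role": "user", "content": "Summarize the shared context."},
    ],
)
```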

In practice, the application side often needs to be "smart" about where cache control markers are attached anyway, so a hook like this would be helpful.
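
A post_parse hook could look something like the sketch below. The `PostParseHook` type and the `post_parse` parameter on LitellmModel are hypothetical, not part of the SDK today; the point is only to show how an application could re-attach `cache_control` after conversion:

```python
from typing import Any, Callable

# Hypothetical hook type: called with the fully converted Chat Completions
# messages right before the request is sent, returning the messages to use.
PostParseHook = Callable[[list[dict[str, Any]]], list[dict[str, Any]]]

def add_cache_control(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Attach an Anthropic prompt-caching marker to the last system message."""
    for message in reversed(messages):
        if message.get("role") == "system":
            content = message.get("content")
            if isinstance(content, str):
                # Normalize string content to block form so the marker has
                # somewhere to live.
                message["content"] = [
                    {"type": "text", "text": content,
                     "cache_control": {"type": "ephemeral"}}
                ]
            elif isinstance(content, list) and content:
                content[-1]["cache_control"] = {"type": "ephemeral"}
            break
    return messages

# Hypothetical wiring, assuming LitellmModel accepted such a hook:
# model = LitellmModel(model="anthropic/claude-3-5-sonnet-20240620",
#                      post_parse=add_cache_control)
```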
