[FEATURE] Add support for Prompt Caching in AWS Bedrock and Anthropic Models via LiteLLM #3535

@ZeroCool2u

Description

Feature Area

Core functionality

Is your feature request related to an existing bug? Please link it here.

NA

Describe the solution you'd like

Especially for workflows that use kickoff_for_each() and kickoff_async() with repetitive context, this would dramatically cut costs by leveraging LiteLLM's built-in support for prompt caching. See the LiteLLM docs for details: https://docs.litellm.ai/docs/completion/prompt_caching. A rough sketch of what this looks like at the LiteLLM level is shown below.
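
For reference, here is a minimal sketch of how LiteLLM exposes prompt caching for Anthropic models today, based on the linked docs. The model ID, the placeholder context, and the user prompt are illustrative, not a proposed CrewAI API:

```python
from litellm import completion

# Placeholder for the large context repeated across every kickoff_for_each()
# iteration; it must exceed the provider's minimum cacheable size
# (roughly 1024 tokens for Claude 3.5 Sonnet) to actually be cached.
LONG_SHARED_CONTEXT = "Shared task instructions and reference documents. " * 300

# Marking the shared block with cache_control tells the provider to cache it,
# so subsequent calls pay the full price only for the uncached tokens.
response = completion(
    model="anthropic/claude-3-5-sonnet-20240620",  # example model ID
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": LONG_SHARED_CONTEXT,
                    "cache_control": {"type": "ephemeral"},
                }
            ],
        },
        {"role": "user", "content": "Summarize the key points for item 1."},
    ],
)
print(response.usage)  # usage details report cached-token counts when caching applies
```

Per the linked docs, the same cache_control marker works for supported Anthropic models on Bedrock. How CrewAI would expose this (a flag on the LLM config, or automatically marking repeated context) is of course up to the maintainers.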

Describe alternatives you've considered

Spending the extra money on uncached tokens, I guess.

Additional context

No response

Willingness to Contribute

I can test the feature once it's implemented
