Passing Multi-Modal Content to LiteLLM #18100
Replies: 2 comments
-
LlamaIndex does support passing multi-modal content like images to its LiteLLM wrapper. The integration with Reka's multi-modal language models allows you to use image input capabilities in your LlamaIndex applications.
-
@tianqizhao-louis litellm would need to be updated to actually pass the content blocks through, similar to the Anthropic, Ollama, OpenAI, and Google GenAI integrations. We've been slowly rolling out this change and I haven't updated litellm yet; it would be amazing to get a PR for this.
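To make the change concrete, here is a rough sketch of what "actually passing the content blocks" could look like: mapping per-block content to the OpenAI-style multimodal message format that litellm accepts, instead of flattening everything to text. The block classes and the converter below are simplified stand-ins, not the real llama-index internals.

```python
# Hedged sketch: convert LlamaIndex-style content blocks into OpenAI-style
# multimodal "content parts" (the format litellm forwards to providers).
# TextBlock/ImageBlock here are illustrative stand-ins for the real classes.
import base64
from dataclasses import dataclass


@dataclass
class TextBlock:
    text: str


@dataclass
class ImageBlock:
    image: bytes                      # raw image bytes
    image_mimetype: str = "image/png"


def blocks_to_openai_content(blocks):
    """Map each block to a content part instead of dropping non-text blocks."""
    content = []
    for block in blocks:
        if isinstance(block, TextBlock):
            content.append({"type": "text", "text": block.text})
        elif isinstance(block, ImageBlock):
            # Images travel as base64 data URLs in the OpenAI message format.
            b64 = base64.b64encode(block.image).decode("utf-8")
            content.append({
                "type": "image_url",
                "image_url": {"url": f"data:{block.image_mimetype};base64,{b64}"},
            })
        else:
            raise ValueError(f"Unsupported block type: {type(block)}")
    return content


message = {
    "role": "user",
    "content": blocks_to_openai_content(
        [TextBlock("Describe this image:"), ImageBlock(b"\x89PNG...")]
    ),
}
```

A real PR would do this conversion inside the LiteLLM wrapper's message-translation step, mirroring how the Anthropic and OpenAI integrations already handle their block types.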
-
Hello everyone,
I'm wondering if llama-index supports passing multi-modal contents like images to its LiteLLM wrapper.
In the code, it seems it just converts all messages' content into TextBlock. Appreciate any help!
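For illustration, the flattening behavior described above looks roughly like this: text is concatenated into a prompt string and any image blocks are silently dropped. The function name and block shapes are hypothetical, not the actual wrapper code.

```python
# Hedged sketch of the current text-only behavior: only text blocks survive
# the conversion; image blocks are silently discarded. Names are illustrative.
def messages_to_prompt(blocks):
    return " ".join(b["text"] for b in blocks if b.get("type") == "text")


blocks = [
    {"type": "text", "text": "What is in this image?"},
    {"type": "image", "url": "https://example.com/cat.png"},  # dropped
]
prompt = messages_to_prompt(blocks)
```

This is why image inputs have no effect today: by the time the request reaches litellm, only the text remains.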