-
I'm following the ADK Streaming Quickstart guide, specifically Step 5 ("Building a Custom Streaming App"), to create a custom frontend that talks to an ADK agent over FastAPI and WebSockets. The example there demonstrates setting up bidirectional streaming using `LiveRequestQueue`. My goal is a similar streaming setup (ADK backend, custom WebSocket frontend) but with a different Large Language Model (LLM), e.g. a Claude or OpenAI model accessed via API, and with no voice streaming. When I try to configure the agent with such a model, the `run_live` call from the example `main.py` (which seems tied to Live API models) raises:

```
File "/Users/yunikmaharjan/projects/adk_test/.venv/lib/python3.12/site-packages/google/adk/flows/llm_flows/base_llm_flow.py", line 82, in run_live
    async with llm.connect(llm_request) as llm_connection:
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/yunikmaharjan/projects/adk_test/.venv/lib/python3.12/site-packages/google/adk/models/base_llm.py", line 85, in connect
    raise NotImplementedError(
NotImplementedError: Live connection is not supported for openai/gpt-4o.
```

My question is: what is the recommended approach within the ADK framework to achieve real-time, bidirectional streaming between a custom frontend and the ADK backend? Is there an alternative ADK function or pattern to handle streaming requests (`LiveRequestQueue`) and responses with different models?
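For context, the interaction pattern I'm after, stripped of all ADK specifics, looks like the sketch below: a queue of inbound client messages on one side and a stream of partial model chunks on the other. Everything here is illustrative, not ADK API — `fake_llm_stream` is a stand-in for a real streaming model call, and the queue/sentinel conventions are my own, loosely mirroring the `LiveRequestQueue` idea from the quickstart:

```python
import asyncio


async def fake_llm_stream(prompt: str):
    # Stand-in for a streaming LLM call: yields partial text chunks.
    for word in f"echo: {prompt}".split():
        await asyncio.sleep(0)  # simulate network latency between chunks
        yield word


async def agent_loop(requests: asyncio.Queue, responses: asyncio.Queue):
    # Drain client messages from `requests` and stream model output
    # chunks back on `responses` until the client disconnects.
    while True:
        prompt = await requests.get()
        if prompt is None:  # sentinel: client closed the connection
            await responses.put(None)
            return
        async for chunk in fake_llm_stream(prompt):
            await responses.put({"partial": True, "text": chunk})
        await responses.put({"partial": False, "text": ""})  # turn complete


async def main():
    requests: asyncio.Queue = asyncio.Queue()
    responses: asyncio.Queue = asyncio.Queue()
    loop_task = asyncio.create_task(agent_loop(requests, responses))

    # In a real app a WebSocket receive loop would feed `requests`
    # and a send loop would drain `responses`; here we do it inline.
    await requests.put("hello world")
    await requests.put(None)

    chunks = []
    while (msg := await responses.get()) is not None:
        chunks.append(msg)
    await loop_task
    return chunks


if __name__ == "__main__":
    for msg in asyncio.run(main()):
        print(msg)
```

In the quickstart, the two queues correspond to the upstream (client-to-agent) and downstream (agent-to-client) halves of the WebSocket handler; what I'm missing is which ADK call can play the role of `agent_loop` for non-Live models.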
-
Right now, only the selected models are supported. We are exploring how to support more models.