Output response in langraph #28888
Replies: 1 comment
-
To return only the output response as a stream for display in a UI app with LangGraph, call `app.stream()` with `stream_mode="messages"` and filter the emitted chunks down to the model's messages:

```python
from langchain_core.messages import AIMessage, HumanMessage

config = {"configurable": {"thread_id": "abc789"}}
query = "Hi I'm Todd, please tell me a joke."
language = "English"
input_messages = [HumanMessage(query)]

for chunk, metadata in app.stream(
    {"messages": input_messages, "language": language},
    config,
    stream_mode="messages",
):
    if isinstance(chunk, AIMessage):  # Filter to just model responses
        print(chunk.content, end="|")
```

This code streams each token of the model's response as it is generated, which can then be displayed in your UI app. The "messages" stream mode is appropriate for this use case because it emits LLM messages token by token, allowing real-time updates in your UI [1][2][3].
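For context, the snippet above assumes `app` is a compiled LangGraph graph whose node calls a chat model, and that it was compiled with a checkpointer (otherwise the `thread_id` in `config` has no effect). Below is a minimal sketch of such a graph; the model class and name are illustrative assumptions, not part of the original answer:

```python
from typing import Annotated, TypedDict

from langchain_core.messages import BaseMessage
from langchain_openai import ChatOpenAI  # assumption: any streaming-capable chat model works
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages


class State(TypedDict):
    # add_messages appends new messages to the running conversation
    messages: Annotated[list[BaseMessage], add_messages]
    language: str


model = ChatOpenAI(model="gpt-4o-mini")  # hypothetical model choice


def call_model(state: State):
    # The chat model invocation is what produces the AIMessage chunks
    # surfaced by stream_mode="messages".
    response = model.invoke(state["messages"])
    return {"messages": [response]}


workflow = StateGraph(State)
workflow.add_node("model", call_model)
workflow.add_edge(START, "model")

# The checkpointer is what makes the thread_id in `config` meaningful
# (per-thread conversation memory).
app = workflow.compile(checkpointer=MemorySaver())
```

With a graph like this, the streaming loop yields `(chunk, metadata)` pairs where each chunk is an `AIMessageChunk` (a subclass of `AIMessage`, so the `isinstance` filter works) for every token the model emits.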
-
```python
for chunk in graph.stream({"messages": [("user", user_input)]}):
    print(chunk)  # placeholder body; the original snippet omitted the loop body
```

How can I return only the output response as a stream, to show it in a UI app?
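For reference, `graph.stream()` defaults to the "updates" stream mode, where each chunk is a dict keyed by node name containing that node's state update, so the loop above sees whole messages rather than individual tokens. A minimal sketch of pulling just the assistant reply out of those updates (it assumes each node returns a `{"messages": [...]}` update, as in the answer above):

```python
for chunk in graph.stream({"messages": [("user", user_input)]}):
    # In the default "updates" mode, chunk looks like {"<node_name>": {"messages": [...]}}
    for node_name, update in chunk.items():
        if update and "messages" in update:
            print(update["messages"][-1].content)
```

For token-by-token streaming in a UI, `stream_mode="messages"` as shown in the answer above is the better fit.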