Closed
Labels
bug: Something isn't working
Description
With the latest version of PraisonAI, streaming is still not real time. In verbose mode you can see the internal logs of the tool streaming the output, but as a user, the final output we receive is an accumulation of the internal streams.
Environment
- Provider (select one):
- Anthropic
- OpenAI
- Google Vertex AI
- AWS Bedrock
- Other:
- PraisonAI version: latest
- Operating System:
Full Code
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant",
    llm="gemini/gemini-2.0-flash",
    self_reflect=False,
    verbose=False,
    stream=True
)

for chunk in agent.start("Write a report about the history of the world"):
    print(chunk, end="", flush=True)
or
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant",
    llm="gemini/gemini-2.0-flash",
    self_reflect=False,
    verbose=True,
    stream=True
)

result = agent.start("Write a report about the history of the world")
print(result)
Steps to Reproduce
- Install the library
- Copy either of the code snippets above
- Run them and observe the output
Expected Behavior
Instead of streaming only the internal response that the user never sees, PraisonAI should stream chunks to the user in real time, to avoid unnecessary latency and improve the user experience.
Actual Behavior
Streaming happens internally; the chunks are not surfaced to the caller, and the final response arrives as one accumulated block.
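To make the distinction concrete, here is a minimal, self-contained sketch of the two behaviors using a plain generator. All names here (`internal_llm_chunks`, `accumulate_then_return`, `stream_through`) are hypothetical stand-ins, not PraisonAI APIs; the point is only to contrast accumulating chunks before returning versus forwarding them as they arrive.

```python
from typing import Iterator


def internal_llm_chunks() -> Iterator[str]:
    # Hypothetical stand-in for the provider's token stream.
    yield from ["The ", "history ", "of ", "the ", "world..."]


def accumulate_then_return() -> str:
    # Reported (actual) behavior: chunks are joined internally,
    # so the caller only sees one blob after generation finishes.
    return "".join(internal_llm_chunks())


def stream_through() -> Iterator[str]:
    # Requested (expected) behavior: each chunk is forwarded to
    # the caller as soon as it is produced.
    yield from internal_llm_chunks()


if __name__ == "__main__":
    # Blob arrives only after the whole response is generated.
    print(accumulate_then_return())
    # Chunks arrive incrementally, suitable for real-time display.
    for chunk in stream_through():
        print(chunk, end="", flush=True)
```

Both paths produce the same final text; the difference is purely when the caller first sees output, which is what `stream=True` is expected to control.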