Releases · MervinPraison/PraisonAI
v2.2.79
What's Changed
- feat: Add comprehensive token metrics tracking for PraisonAI agents by @github-actions[bot] in #1055
- feat: Simplify token metrics to Agent(metrics=True) by @MervinPraison in #1056 (see the sketch after this list)
- fix: reduce monitoring overhead with performance optimizations by @github-actions[bot] in #1058
- fix: add run() method alias to PraisonAIAgents for consistent API by @MervinPraison in #1063
- feat: make performance monitoring disabled by default by @github-actions[bot] in #1061
- perf: optimize telemetry performance with thread pools and queue-based processing by @github-actions[bot] in #1062
- fix: resolve zero token metrics by fixing agent LLM instance access by @MervinPraison in #1064
- fix: properly close aiohttp sessions in MCP HTTP stream transport by @github-actions[bot] in #1066
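The user-facing changes in this release are the one-flag metrics API from #1056 and the run() alias from #1063. A minimal sketch of both, assuming `pip install praisonaiagents` and an OPENAI_API_KEY in the environment; the Task fields shown are the usual ones, not taken from these notes:

```python
# Minimal sketch of the v2.2.79 API changes; assumes praisonaiagents is
# installed and OPENAI_API_KEY is set in the environment.
from praisonaiagents import Agent, Task, PraisonAIAgents

agent = Agent(
    instructions="You are a helpful assistant.",
    metrics=True,  # #1056: token-usage tracking enabled with a single flag
)

task = Task(
    description="Summarise the benefits of unit testing in two sentences.",
    expected_output="A two-sentence summary.",
    agent=agent,
)

workflow = PraisonAIAgents(agents=[agent], tasks=[task])
workflow.run()  # #1063: run() is an alias of start() for a consistent API
```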
Full Changelog: v2.2.78...v2.2.79
v2.2.78
Full Changelog: v2.2.77...v2.2.78
v2.2.77
Full Changelog: v2.2.76...v2.2.77
v2.2.76
Full Changelog: v2.2.75...v2.2.76
v2.2.75
Full Changelog: v2.2.74...v2.2.75
v2.2.74
Full Changelog: v2.2.73...v2.2.74
v2.2.73
What's Changed
- feat: add context CLI command with --url and --goal parameters by @MervinPraison in #1026
- fix: implement real-time streaming for Agent.start() method by @MervinPraison in #1027 (see the sketch after this list)
- fix: implement real-time streaming for Agent.start() method by @MervinPraison in #1028
- fix: enhance Gemini streaming robustness with graceful JSON parsing error handling by @MervinPraison in #1029
- fix: bypass display_generation for OpenAI streaming to enable raw chunk output by @MervinPraison in #1030
- fix: eliminate streaming pause caused by telemetry tracking by @github-actions[bot] in #1032
- Fix litellm deprecation warnings for issue #1033 by @github-actions[bot] in #1034
- fix: correct tool call argument parsing in streaming mode by @MervinPraison in #1037
- feat: Add comprehensive performance monitoring system by @MervinPraison in #1038
- Fix: Comprehensive LiteLLM deprecation warning suppression by @MervinPraison in #1039
- PR #1038: Monitoring examples by @MervinPraison in #1040
- PR #1039: Logging by @MervinPraison in #1041
- fix: ensure display_generating is called when verbose=True regardless of streaming mode by @MervinPraison in #1042
- fix: enhance LiteLLM streaming error handling for JSON parsing errors (Issue #1043) by @MervinPraison in #1044
- fix: correct display_generating logic to only show when stream=False AND verbose=True by @MervinPraison in #1045
- fix: implement proper streaming fallback logic for JSON parsing errors by @MervinPraison in #1046
- fix: resolve display_generating issue by ensuring stream parameter is correctly passed by @MervinPraison in #1047
- PR #1046: Changes from Claude by @MervinPraison in #1048
- Fix: Add display_generating support for OpenAI non-streaming mode by @MervinPraison in #1049
- fix: Remove display_generating when stream=false to prevent streaming-like behavior by @MervinPraison in #1050
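Most of the entries above revolve around one behaviour: Agent.start() emitting raw chunks in real time instead of routing everything through display_generating. A hedged sketch of the consumer side; whether streaming is enabled per-agent or per-call shifted across these PRs, so the flag below is an assumption:

```python
# Hedged sketch of real-time streaming via Agent.start() (#1027/#1028).
# The stream flag's exact placement and the chunk type are assumptions,
# not confirmed by these notes; adjust to the installed version.
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a concise technical writer.",
    stream=True,  # assumed flag: request raw chunk output
)

for chunk in agent.start("Explain backpressure in one paragraph."):
    print(chunk, end="", flush=True)  # chunks print as they arrive
```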
New Contributors
- @github-actions[bot] made their first contribution in #1032
Full Changelog: v2.2.72...v2.2.73
v2.2.72
What's Changed
- fix: show context only when loglevel=debug by @MervinPraison in #1015 (see the sketch after this list)
- fix(knowledge): resolve 'list' object has no attribute 'get' error by @MervinPraison in #1017
- fix: move verbose context information from live display to debug logging by @MervinPraison in #1020
- fix: move embedding debug logs to trace level by @MervinPraison in #1022
- fix: remove verbose memory context headers from AI agent prompts by @MervinPraison in #1023
- Implement Context Engineering - Add ContextAgent for AI Context Generation by @MervinPraison in #1021
- fix: remove verbose knowledge headers from AI agent prompts by @MervinPraison in #1024
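Several of these fixes move context and embedding chatter out of the live display and into debug/trace logging. A sketch of how to surface it again when debugging, assuming PraisonAI reads the LOGLEVEL environment variable at import time (the variable name is the conventional one, not quoted from these notes):

```python
# Sketch: re-surface the context details that #1015, #1020 and #1022
# demoted to debug/trace level. LOGLEVEL is assumed to be read at import.
import os

os.environ["LOGLEVEL"] = "debug"  # set before importing praisonaiagents

from praisonaiagents import Agent

agent = Agent(instructions="You are a helpful assistant.")
agent.start("Hello")  # context/embedding details now land in the debug log
```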
Full Changelog: v2.2.71...v2.2.72
v2.2.71
What's Changed
- feat: Add comprehensive provider examples for all major LiteLLM providers with clean API structure by @Dhivya-Bharathy in #983
- fix: resolve vertex_ai parameter compatibility issue for image generation by @MervinPraison in #986
- fix: Add tool usage time tracking to MCP execution by @MervinPraison in #989
- fix: implement real-time streaming for Agent.start() method by @MervinPraison in #991
- Fix: Resolve agent termination issue by properly cleaning up telemetry system by @MervinPraison in #990
- Fix: Comprehensive telemetry cleanup to prevent agent termination issues by @MervinPraison in #996
- Revert "Fix: Comprehensive telemetry cleanup to prevent agent termination issues" by @MervinPraison in #997
- feat: integrate MongoDB as memory and knowledge provider by @MervinPraison in #994 (see the sketch after this list)
- Fix: Resolve agent termination issue by adding comprehensive telemetry cleanup by @MervinPraison in #999
- Fix: Comprehensive telemetry cleanup to prevent agent termination hang by @MervinPraison in #1000
- Fix: Add telemetry cleanup to execute() and aexecute() methods to prevent hanging by @MervinPraison in #1001
- Fix: Add comprehensive telemetry cleanup to prevent agent termination hang by @MervinPraison in #1002
- Fix: Agent.start() now auto-consumes generator for better UX by @MervinPraison in #1004
- PR #1002: Changes from Claude by @MervinPraison in #1005
- Revert "PR #1002: Changes from Claude" by @MervinPraison in #1006
- Fix: Correct indentation in chat method try-finally block by @MervinPraison in #998
- fix: prevent telemetry shutdown from hanging indefinitely by @MervinPraison in #1010
- fix: prevent PostHog shutdown errors during interpreter shutdown by @MervinPraison in #1013
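The MongoDB integration in #994 is the headline feature of this release. A hypothetical configuration sketch follows; the provider string and config keys are assumptions, so check the repository's MongoDB examples for the real schema:

```python
# Hypothetical sketch of MongoDB as the memory provider (#994). The key
# names below are assumptions, as is a local MongoDB on the default port.
from praisonaiagents import Agent, Task, PraisonAIAgents

agent = Agent(instructions="You are a research assistant.")
task = Task(
    description="Collect three facts about MongoDB.",
    expected_output="Three bullet points.",
    agent=agent,
)

workflow = PraisonAIAgents(
    agents=[agent],
    tasks=[task],
    memory=True,
    memory_config={
        "provider": "mongodb",  # assumed provider id
        "config": {"connection_string": "mongodb://localhost:27017/"},
    },
)
workflow.start()
```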
Full Changelog: v2.2.70...v2.2.71
v2.2.70
What's Changed
- fix: ensure consistent Task/Response formatting across all LLM providers by @MervinPraison in #961
- fix: properly shutdown telemetry aiohttp sessions to prevent ImportError during Python shutdown by @MervinPraison in #965 (see the sketch after this list)
- fix: improve Ollama sequential tool execution to prevent redundant calls by @MervinPraison in #966
- fix: make TelemetryCollector.stop() consistent with shutdown behavior by @MervinPraison in #974
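All four fixes touch the telemetry shutdown path. If that path still causes trouble on an older version, the usual escape hatch is to opt out of telemetry before import; both variable names below are assumptions rather than something these notes document:

```python
# Sketch: opt out of telemetry entirely, sidestepping the shutdown paths
# patched in #965 and #974. Both variable names are assumptions.
import os

os.environ["PRAISONAI_TELEMETRY_DISABLED"] = "true"  # assumed opt-out flag
os.environ["DO_NOT_TRACK"] = "true"                  # common convention

from praisonaiagents import Agent  # import only after setting the flags
```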
Full Changelog: v2.2.69...v2.2.70