Releases · MervinPraison/PraisonAI
v2.2.75
Full Changelog: v2.2.74...v2.2.75
v2.2.74
Full Changelog: v2.2.73...v2.2.74
v2.2.73
What's Changed
- feat: add context CLI command with --url and --goal parameters by @MervinPraison in #1026 (usage sketch after this list)
- fix: implement real-time streaming for Agent.start() method by @MervinPraison in #1027
- fix: implement real-time streaming for Agent.start() method by @MervinPraison in #1028
- fix: enhance Gemini streaming robustness with graceful JSON parsing error handling by @MervinPraison in #1029
- fix: bypass display_generation for OpenAI streaming to enable raw chunk output by @MervinPraison in #1030
- fix: eliminate streaming pause caused by telemetry tracking by @github-actions[bot] in #1032
- Fix litellm deprecation warnings for issue #1033 by @github-actions[bot] in #1034
- fix: correct tool call argument parsing in streaming mode by @MervinPraison in #1037
- feat: Add comprehensive performance monitoring system by @MervinPraison in #1038
- Fix: Comprehensive LiteLLM deprecation warning suppression by @MervinPraison in #1039
- PR #1038: Monitoring examples by @MervinPraison in #1040
- PR #1039: Logging by @MervinPraison in #1041
- fix: ensure display_generating is called when verbose=True regardless of streaming mode by @MervinPraison in #1042
- fix: enhance LiteLLM streaming error handling for JSON parsing errors (Issue #1043) by @MervinPraison in #1044
- fix: correct display_generating logic to only show when stream=False AND verbose=True by @MervinPraison in #1045
- fix: implement proper streaming fallback logic for JSON parsing errors by @MervinPraison in #1046
- fix: resolve display_generating issue by ensuring stream parameter is correctly passed by @MervinPraison in #1047
- PR #1046: Changes from Claude by @MervinPraison in #1048
- Fix: Add display_generating support for OpenAI non-streaming mode by @MervinPraison in #1049
- fix: Remove display_generating when stream=false to prevent streaming-like behavior by @MervinPraison in #1050
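A minimal sketch of the `context` command added in #1026. The `--url` and `--goal` flags are named in the PR title; the exact subcommand syntax and binary name are assumptions based on that title, not a confirmed interface:

```bash
# Hedged sketch: the --url and --goal flags come from the PR title (#1026);
# the exact invocation may differ in the released CLI.
praisonai context --url "https://docs.example.com" --goal "Build context for a support agent"
```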
New Contributors
- @github-actions[bot] made their first contribution in #1032
Full Changelog: v2.2.72...v2.2.73
v2.2.72
What's Changed
- fix: show context only when loglevel=debug by @MervinPraison in #1015
- fix(knowledge): resolve 'list' object has no attribute 'get' error by @MervinPraison in #1017
- fix: move verbose context information from live display to debug logging by @MervinPraison in #1020
- fix: move embedding debug logs to trace level by @MervinPraison in #1022
- fix: remove verbose memory context headers from AI agent prompts by @MervinPraison in #1023
- Implement Context Engineering - Add ContextAgent for AI Context Generation by @MervinPraison in #1021
- fix: remove verbose knowledge headers from AI agent prompts by @MervinPraison in #1024
Full Changelog: v2.2.71...v2.2.72
v2.2.71
What's Changed
- feat: Add comprehensive provider examples for all major LiteLLM providers with clean API structure by @Dhivya-Bharathy in #983
- fix: resolve vertex_ai parameter compatibility issue for image generation by @MervinPraison in #986
- fix: Add tool usage time tracking to MCP execution by @MervinPraison in #989
- fix: implement real-time streaming for Agent.start() method by @MervinPraison in #991
- Fix: Resolve agent termination issue by properly cleaning up telemetry system by @MervinPraison in #990
- Fix: Comprehensive telemetry cleanup to prevent agent termination issues by @MervinPraison in #996
- Revert "Fix: Comprehensive telemetry cleanup to prevent agent termination issues" by @MervinPraison in #997
- feat: integrate MongoDB as memory and knowledge provider by @MervinPraison in #994
- Fix: Resolve agent termination issue by adding comprehensive telemetry cleanup by @MervinPraison in #999
- Fix: Comprehensive telemetry cleanup to prevent agent termination hang by @MervinPraison in #1000
- Fix: Add telemetry cleanup to execute() and aexecute() methods to prevent hanging by @MervinPraison in #1001
- Fix: Add comprehensive telemetry cleanup to prevent agent termination hang by @MervinPraison in #1002
- Fix: Agent.start() now auto-consumes generator for better UX by @MervinPraison in #1004 (see the streaming sketch after this list)
- PR #1002: Changes from Claude by @MervinPraison in #1005
- Revert "PR #1002: Changes from Claude" by @MervinPraison in #1006
- Fix: Correct indentation in chat method try-finally block by @MervinPraison in #998
- fix: prevent telemetry shutdown from hanging indefinitely by @MervinPraison in #1010
- fix: prevent PostHog shutdown errors during interpreter shutdown by @MervinPraison in #1013
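Several entries above (#991, #1004) change how real-time streaming behaves with `Agent.start()`. A minimal sketch of the intended usage, assuming the `praisonaiagents` package is installed and an LLM key is set in the environment; the `stream` keyword is inferred from the PR titles and is not a confirmed signature:

```python
# Minimal sketch, assuming praisonaiagents is installed and OPENAI_API_KEY is set.
# The stream keyword is inferred from the PR titles above (#991, #1047) and may differ.
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant.",
    stream=True,  # request real-time token streaming (per #991)
)

# Per #1004, start() auto-consumes the underlying generator, so callers no longer
# need to iterate chunks manually to see streamed output.
response = agent.start("Summarize the benefits of streaming responses.")
print(response)
```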
Full Changelog: v2.2.70...v2.2.71
v2.2.70
What's Changed
- fix: ensure consistent Task/Response formatting across all LLM providers by @MervinPraison in #961
- fix: properly shutdown telemetry aiohttp sessions to prevent ImportError during Python shutdown by @MervinPraison in #965
- fix: improve Ollama sequential tool execution to prevent redundant calls by @MervinPraison in #966
- fix: make TelemetryCollector.stop() consistent with shutdown behavior by @MervinPraison in #974
Full Changelog: v2.2.69...v2.2.70
v2.2.69
What's Changed
- fix: resolve task_name undefined error in async execution by @MervinPraison in #959
- fix: accumulate tool results across iterations in Ollama sequential execution by @MervinPraison in #960
- fix: enable real-time streaming regardless of verbose setting by @MervinPraison in #962
Full Changelog: v2.2.68...v2.2.69
v2.2.68
What's Changed
- fix: prevent premature termination in Ollama sequential tool execution by @MervinPraison in #954
- fix: resolve Ollama infinite loop issue with minimal changes by @MervinPraison in #955
- Reorganize models and providers structure with comprehensive examples by @Dhivya-Bharathy in #949
- fix: comprehensive task_name undefined error resolution by @MervinPraison in #957
Full Changelog: v2.2.67...v2.2.68
v2.2.67
What's Changed
- fix: resolve task_name undefined error in LLM callback execution by @MervinPraison in #952
Full Changelog: v2.2.66...v2.2.67
v2.2.66
What's Changed
- fix: Resolve Ollama infinite tool call loops by improving response handling by @MervinPraison in #943
- feat: Add 7 comprehensive examples for advanced PraisonAI agent features by @MervinPraison in #942
- fix: prevent Ollama tool summary from being overwritten by empty response by @MervinPraison in #944
- fix: prevent premature termination in Ollama sequential tool execution by @MervinPraison in #945
- Revert "fix: prevent premature termination in Ollama sequential tool execution" by @MervinPraison in #953
- feat: Enable imports from PraisonAI package for issue #950 by @MervinPraison in #951 (import sketch below)
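#951 enables imports from the top-level PraisonAI package (issue #950). A sketch of what that allows, assuming the `praisonai` package exposes its main class alongside the existing `praisonaiagents` path; the exact re-exported names are an assumption:

```python
# Sketch, assuming #951 makes the top-level praisonai package importable directly.
from praisonai import PraisonAI   # wrapper class exposed by the praisonai package

# The agents library remains importable as before:
from praisonaiagents import Agent
```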
Full Changelog: v2.2.65...v2.2.66