Releases: pydantic/pydantic-ai
v1.0.9 (2025-09-18)
What's Changed
- Stream built-in tool calls from OpenAI, Google, Anthropic and return them on next request (required for OpenAI reasoning) by @DouweM in #2877
- Include built-in tool calls and results in OTel messages by @DouweM in #2954
- Add `RunContext.max_retries` and `.last_attempt` by @DouweM in #2952
- Fix `StreamedResponse.model_name` for Azure OpenAI with content filter by @DouweM in #2951
- Fix TemporalAgent dropping model-specific `ModelSettings` (e.g. `openai_reasoning_effort`) by @DouweM in #2938
- Don't send item IDs to Responses API for non-reasoning models by @DouweM in #2950
- Update DBOS version by @qianl15 in #2939
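The new `RunContext.max_retries` and `.last_attempt` fields let a tool behave differently on its final retry, e.g. degrade gracefully instead of failing the run. A minimal pure-Python sketch of the idea (the class below is an illustrative stand-in, not the library's `RunContext`):

```python
from dataclasses import dataclass


@dataclass
class RetryContext:
    """Illustrative stand-in for the retry bookkeeping on RunContext."""

    retry: int        # zero-based index of the current attempt
    max_retries: int  # total retries allowed for this tool

    @property
    def last_attempt(self) -> bool:
        # True when no further retries remain after this one.
        return self.retry == self.max_retries


def lookup(ctx: RetryContext, query: str) -> str:
    # On the last attempt, return a degraded answer instead of raising
    # and failing the whole run; otherwise re-raise so the agent retries.
    try:
        raise TimeoutError("upstream unavailable")
    except TimeoutError:
        if ctx.last_attempt:
            return f"no result for {query!r} (all retries exhausted)"
        raise
```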
Full Changelog: v1.0.8...v1.0.9
v1.0.8 (2025-09-16)
What's Changed
- Tools can now return AG-UI events separate from result sent to model by @DouweM in #2922
- Fix bug causing doubled reasoning tokens usage by deepcopying by @DouweM in #2920
- Fix auto-detection of HTTP proxy settings by @maxnilz in #2917
- Fix `new_messages()` and `capture_run_messages()` when history processors are used by @DouweM in #2921
- chore: Remove 'text' from RunUsage docstrings by @alexmojaki in #2919
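For context on the `new_messages()` fix: history processors rewrite the message history before each model request, so any message-accounting API has to operate on the processed history rather than the raw one. A rough sketch of such a processor pipeline (names are illustrative, not pydantic-ai internals):

```python
from typing import Callable

Message = dict  # simplified stand-in for a pydantic-ai message
Processor = Callable[[list[Message]], list[Message]]


def apply_history_processors(
    history: list[Message], processors: list[Processor]
) -> list[Message]:
    """Run each processor over the history in order, as the agent does
    before handing messages to the model."""
    for process in processors:
        history = process(history)
    return history


def keep_last(n: int) -> Processor:
    """Example processor: keep only the most recent n messages."""
    return lambda msgs: msgs[-n:]
```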
Full Changelog: v1.0.7...v1.0.8
v1.0.7 (2025-09-15)
What's Changed
- Added MCP metadata and annotations to `ToolDefinition.metadata` for use in filtering by @ChuckJonas in #2880
- When starting a run with message history ending in `ModelRequest`, make its content available in `RunContext.prompt` by @DouweM in #2891
- Let `FunctionToolset` take default values for `strict`, `sequential`, `requires_approval`, and `metadata` by @DouweM in #2909
- Don't require `mcp` or `logfire` to use Temporal or DBOS by @DouweM in #2908
- Combine consecutive AG-UI user and assistant messages into the same model request/response by @DouweM in #2912
- Fix `new_messages()` when `deferred_tool_results` is used with `message_history` ending in `ToolReturnPart`s by @DouweM in #2913
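The AG-UI message-combining change can be pictured as a simple role-based merge over the incoming message stream. An illustrative sketch (not the library's implementation):

```python
def combine_consecutive(messages: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Merge runs of (role, text) messages that share a role, the way
    consecutive AG-UI user/assistant messages are now folded into a
    single model request/response."""
    combined: list[tuple[str, str]] = []
    for role, text in messages:
        if combined and combined[-1][0] == role:
            # Same role as the previous message: append to its text.
            prev_role, prev_text = combined[-1]
            combined[-1] = (prev_role, prev_text + "\n" + text)
        else:
            combined.append((role, text))
    return combined
```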
Full Changelog: v1.0.6...v1.0.7
v1.0.6 (2025-09-12)
What's Changed
- Add support for `previous_response_id` from Responses API by @GDaamn in #2756
- Let MCP servers be loaded from file by @Kludex in #2698
- Fix how thinking summaries are sent back to Responses API by @DouweM in #2883
- Bump Cohere SDK and remove incorrect typing workaround by @DouweM in #2886
- Update MCP tool call customisation docs by @MasterOdin in #2817
New Contributors
- @MasterOdin made their first contribution in #2817
Full Changelog: v1.0.5...v1.0.6
v1.0.5 (2025-09-11)
What's Changed
- Don't lose Azure OpenAI Responses encrypted_content if no summary was included by @DouweM in #2874
- Store OpenAI Responses text part ID to prevent error with reasoning by @DouweM in #2882
- Make OpenAIResponsesModel work with reasoning from other models and modified history by @DouweM in #2881
Full Changelog: v1.0.4...v1.0.5
v1.0.4 (2025-09-11)
What's Changed
- Add Pydantic AI Gateway provider by @Kludex in #2816, #2863
- Fix OpenAI Responses API tool calls with reasoning by @DouweM in #2869
- Support OpenAI Responses API returning encrypted reasoning content without summary by @DouweM in #2866
- Don't ask for OpenAI Responses API to include encrypted reasoning content for models that don't support it by @DouweM in #2867
- docs: update builtin-tools md by @tberends in #2857
Full Changelog: v1.0.3...v1.0.4
v1.0.3 (2025-09-10)
What's Changed
- Include thinking parts in subsequent model requests to improve performance and cache hit rates by @DouweM in #2823
- Add `on_complete` callback to AG-UI functions to get access to `AgentRunResult` by @ChuckJonas in #2429
- Support `ModelSettings.seed` in `GoogleModel` by @DouweM in #2842
- Add `with agent.sequential_tool_calls():` context manager and use it in `DBOSAgent` by @DouweM in #2856
- Ensure `ModelResponse` fields are set from actual model response when streaming by @DouweM in #2848
- Send AG-UI thinking start and end events by @DouweM in #2855
- Support models that return output tool args as `{"response": "<JSON string>"}` by @shaheerzaman in #2836
- Support `NativeOutput` with `FunctionModel` by @DouweM in #2843
- Raise error when `WebSearchTool` is used with `OpenAIChatModel` and unsupported model by @safina57 in #2824
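The `{"response": "<JSON string>"}` workaround amounts to unwrapping a singly-wrapped JSON payload before validating the output tool arguments. A hypothetical sketch of that unwrapping (function name and shape check are illustrative):

```python
import json


def unwrap_output_args(args: dict) -> dict:
    """Some models wrap output-tool arguments as {"response": "<JSON string>"}.
    If the payload has exactly that shape, parse the inner JSON string;
    otherwise return the arguments unchanged."""
    if set(args) == {"response"} and isinstance(args["response"], str):
        try:
            inner = json.loads(args["response"])
        except json.JSONDecodeError:
            return args
        if isinstance(inner, dict):
            return inner
    return args
```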
New Contributors
- @safina57 made their first contribution in #2824
- @shaheerzaman made their first contribution in #2836
Full Changelog: v1.0.2...v1.0.3
v1.0.2 (2025-09-08)
What's Changed
- Add support for durable execution with DBOS by @qianl15 in #2638
- Support sequential tool calling by @strawgate in #2718
- Add `GoogleModelSettings.google_cached_content` to pass `cached_content` by @Hojland in #2832
- Add `ModelResponse.finish_reason` and set `provider_response_id` while streaming by @fatelei in #2590
- Add support for `gen_ai.response.id` by @Kludex in #2831
- Only send tool choice to Bedrock Converse API for Anthropic and Nova models by @DouweM in #2819
- Handle errors in cost calculation in InstrumentedModel by @alexmojaki in #2834
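Sequential tool calling means awaiting each tool call in turn instead of dispatching them concurrently, which matters for tools with ordering-sensitive side effects. An illustrative asyncio sketch (not pydantic-ai's implementation):

```python
import asyncio
from typing import Awaitable, Callable


async def run_tool_calls(
    calls: list[Callable[[], Awaitable[str]]], *, sequential: bool
) -> list[str]:
    """Run tool-call coroutine factories either one at a time (sequential,
    preserving side-effect order) or all concurrently (the default)."""
    if sequential:
        # Each call only starts after the previous one has finished.
        return [await make() for make in calls]
    return list(await asyncio.gather(*(make() for make in calls)))
```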
New Contributors
- @Hojland made their first contribution in #2832
- @qianl15 made their first contribution in #2638
- @fatelei made their first contribution in #2590
Full Changelog: v1.0.1...v1.0.2
v1.0.1 (2025-09-05)
Breaking Changes
The following breaking change was slated for v1.0.0 but accidentally left out. Because it fixes a security issue in an uncommonly used feature, and few people will have upgraded to v1 within 24 hours of its release, we decided an exception to the no-breaking-changes policy was justified to get it out ASAP.
Full Changelog: v1.0.0...v1.0.1
v1.0.0 (2025-09-04)
What's Changed
- Drop support for Python 3.9 by @Kludex in #2725
- Deprecate `OpenAIModelProfile.openai_supports_sampling_settings` by @Kludex in #2730
- Add support for human-in-the-loop tool call approval by @DouweM in #2581
- Add `tool_calls_limit` to `UsageLimits` and `tool_calls` to `RunUsage` by @tradeqvest in #2633
- Add LiteLLM provider for OpenAI API compatible models by @mochow13 in #2606
- Add `identifier` field to `FileUrl` and subclasses by @kyuam32 in #2636
- Support `NativeOutput` with Groq by @DouweM in #2772
- Add `docstring_format`, `require_parameter_descriptions`, and `schema_generator` to `FunctionToolset` by @g-eoj in #2601
- Gracefully handle errors in evals by @dmontagu in #2295
- Include `logfire` with pydantic-ai package by @Kludex in #2683
- Let almost all types used in docs examples be imported directly from `pydantic_ai` by @DouweM in #2736
- Bump `temporalio` to 1.17.0 by @DouweM in #2811
- Default `InstrumentationSettings` `version` to 2 by @alexmojaki in #2726
- Remove cases and averages from eval span by @DouweM in #2715
- Make many more dataclasses kw-only by @dmontagu in #2738
- Don't emit empty AG-UI thinking message events by @DouweM in #2754
- Update `mcp` package version by @BrokenDuck in #2741
- Raise error if MCP server `__aexit__` is called when `_running_count` is already `0` by @federicociner in #2696
- Fix error when streaming from Gemini includes only `executable_code` or `code_execution_result` by @binaryCrossEntropy in #2719
- Close original response when retrying HTTP request by @DouweM in #2753
- `Agent.__aenter__` returns `Self`, use default instrumentation for MCP sampling model by @alexmojaki in #2765
- Fix Anthropic streaming usage counting by @DouweM in #2771
- Create separate `ThinkingPart`s for separate OpenAI Responses reasoning summary parts by @DouweM in #2775
- Handle Groq `tool_use_failed` errors by getting model to retry by @DouweM in #2774
- Raise error when trying to use Google built-in tools with user/output tools by @DouweM in #2777
- Move `mcp-run-python` to its own repo by @samuelcolvin in #2776
- Fix Azure OpenAI streaming when async content filter is enabled by @frednijsvrt in #2763
- Don't emit AG-UI text message content events with empty text part deltas by @celeritatem in #2779
- Handle streaming thinking signature deltas from Bedrock Converse API by @DouweM in #2785
- Don't require `MCPServerStreamableHTTP` and `MCPServerSSE` `url` to be a keyword argument by @franciscovilchezv in #2758
- Add `operation.cost` span attribute to model request spans, rename `ModelResponse.price()` to `.cost()` by @alexmojaki in #2767
- Ensure that old `ModelResponse`s stored in a DB can still be deserialized by @DouweM in #2792
- Type `ModelRequest.parts` and `ModelResponse.parts` as `Sequence` by @moritzwilksch in #2798
- Always run `event_stream_handler` inside Temporal activity by @DouweM in #2806
- Document that various functions need to be async to be used with Temporal by @DouweM in #2809
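`tool_calls_limit` follows the same pattern as the existing request and token limits: increment a counter on the run's usage and raise once it passes the configured bound. A conceptual sketch with stand-in class names (not pydantic-ai's actual `UsageLimits`/`RunUsage` implementation):

```python
from dataclasses import dataclass
from typing import Optional


class UsageLimitExceeded(Exception):
    """Raised when a run exceeds one of its configured limits."""


@dataclass
class RunUsageSketch:
    tool_calls: int = 0


@dataclass
class UsageLimitsSketch:
    tool_calls_limit: Optional[int] = None

    def check_tool_calls(self, usage: RunUsageSketch) -> None:
        # Called after each tool call is recorded; None means unlimited.
        if self.tool_calls_limit is not None and usage.tool_calls > self.tool_calls_limit:
            raise UsageLimitExceeded(
                f"exceeded tool_calls_limit of {self.tool_calls_limit}"
            )


def record_tool_call(usage: RunUsageSketch, limits: UsageLimitsSketch) -> None:
    usage.tool_calls += 1
    limits.check_tool_calls(usage)
```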
New Contributors
- @BrokenDuck made their first contribution in #2741
- @federicociner made their first contribution in #2696
- @g-eoj made their first contribution in #2601
- @binaryCrossEntropy made their first contribution in #2719
- @Trollgeir made their first contribution in #2729
- @kyuam32 made their first contribution in #2636
- @richhuth made their first contribution in #2680
- @frednijsvrt made their first contribution in #2763
- @celeritatem made their first contribution in #2779
- @mochow13 made their first contribution in #2606
- @franciscovilchezv made their first contribution in #2758
- @moritzwilksch made their first contribution in #2798
Full Changelog: v0.8.1...v1.0.0