Releases: evalstate/fast-agent
v0.2.30
What's Changed
HF_TOKEN mode
The HF_TOKEN environment variable is used when accessing Hugging Face hosted MCP Servers, either at hf.co/mcp or .hf.spaces. It can be overridden with Auth headers or in fastagent.config.yaml.
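A hedged sketch of what the per-server override might look like in fastagent.config.yaml (the server name and the headers key are assumptions, not confirmed against the fast-agent schema):

```yaml
# Hypothetical fragment of fastagent.config.yaml -- key names are assumptions
mcp:
  servers:
    hf:
      transport: "http"
      url: "https://hf.co/mcp"
      headers:
        Authorization: "Bearer hf_xxx"   # takes precedence over HF_TOKEN for this server
```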
Use external prompt editor with Ctrl+E
Use Ctrl+E to open an external editor for prompt editing. See #218 for more details.
Aliyun support
Added support for Aliyun Bailian, which provides APIs for the Qwen series of models, widely used across mainland China. @yeahdongcn informs me that "Qwen3 is gaining a lot of popularity, and Aliyun is currently offering free tokens for developers"!
Other fixes
- fix(#208): Fix wrong prompt description displayed in the [Available MCP Prompts] table by @codeboyzhou in #209
- Support Deepseek json_format by @Zorro30 in #182
- fix(#226): Error in tool list changed callback by @codeboyzhou in #227
New Contributors
- @codeboyzhou made their first contribution in #209
- @Zorro30 made their first contribution in #182
- @yeahdongcn made their first contribution in #224
Full Changelog: v0.2.29...v0.2.30
v0.2.29
Changes
- Add HF_TOKEN mode @evalstate (#223)
- Support Deepseek json_format @Zorro30 (#182)
- fix(#208): Fix wrong prompt description displayed in the [Available MCP Prompts] table @codeboyzhou (#209)
- Made sure that empty content.parts will not anymore cause bugs when u… @janspoerer (#220)
- adding deprecated to fix missing dependency error @jjwall (#216)
- added slow llm to test parallel sampling @wreed4 (#197)
- ensure tool names are < 64 characters
- fix opentelemetry (the MCP instrumentation isn't compatible with 1.9)
- Update MCP package to 1.9.3
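One item above caps tool names at under 64 characters. A minimal illustrative sketch of such a guard (not the actual fast-agent implementation; the function name is made up):

```python
MAX_TOOL_NAME_LEN = 64  # limit some model providers place on tool names

def clamp_tool_name(name: str, limit: int = MAX_TOOL_NAME_LEN) -> str:
    """Truncate an overlong tool name so providers that cap name length accept it."""
    return name if len(name) <= limit else name[:limit]

print(clamp_tool_name("server-" + "x" * 100))  # truncated to 64 characters
```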
New Contributors
- @jjwall made their first contribution in #216
- @janspoerer made their first contribution in #220
Full Changelog: v0.2.28...v0.2.29
v0.2.28
Gemini Native Support
This release switches to the native Google API as the default for google models. Thanks to @monotykamary and @janspoerer for this work 🍾. If you run into issues, the old provider is accessible as googleoai, but you will need to update your API key to match.
Autosampling
Servers are now offered the Sampling capability by default, provided by the Agent's model (or the system default if not specified). Set auto_sampling: false in the configuration file to switch off this behaviour.
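A hedged config sketch; the exact placement of this key within the file is an assumption:

```yaml
# Hypothetical fragment -- placement of this key is an assumption
auto_sampling: false   # servers are no longer offered sampling by default
```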
Other Changes
- added slow llm to test parallel sampling @wreed4 (#197)
- fix(177): Fix prompt listing across agents @wreed4 (#178)
New Contributors
- @monotykamary made their first contribution in #134
Full Changelog: v0.2.27...v0.2.28
v0.2.27
v0.2.25
What's Changed
- Fix: Remove parallel_tool_calls from OpenAI model provider for 'o' model compatibility by @kikiya in #164
- feat: Add Azure OpenAI Service Support to FastAgent by @pablotoledo in #160
New Contributors
- @kikiya made their first contribution in #164
- @pablotoledo made their first contribution in #160
Full Changelog: v0.2.24...v0.2.25
v0.2.24
v0.2.23
HTTP Streaming Support
fast-agent now has HTTP Streaming support.
- Configure MCP Servers with the http transport type.
- Run agents as MCP Servers with the --transport=http option - e.g. uv run agent.py --server --transport=http
- prompt-server supports the transport=http option.
- fast-agent go allows specifying URLs and auth tokens from the command line - e.g. fast-agent go url=http://localhost:8080/mcp auth=<token>
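A hedged example of an http server entry in fastagent.config.yaml (the server name and exact key names are assumptions, not confirmed against the fast-agent schema):

```yaml
# Hypothetical fragment of fastagent.config.yaml
mcp:
  servers:
    my_http_server:
      transport: "http"
      url: "http://localhost:8080/mcp"
```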
What's Changed
- Add Streaming HTTP Support for Client, Server and prompt-server. by @evalstate in #158
- Renaming max_steps_reached to max_iterations_reached on PlanResult by @theobjectivedad in #157
- Enable markup based on user config by @alpeshvas in #155
New Contributors
- @alpeshvas made their first contribution in #155
Full Changelog: v0.2.22...v0.2.23
v0.2.22
What's Changed
- Feat/add tensorzero inference by @aaronpolhamus in #125
- Minor improvements to TensorZero provider by @GabrielBianconi in #147
- Improvements in error messages for SSE Server Startup/masking issues
- Bump MCP SDK to 1.8.0
- Mitigation for #143
New Contributors
- @aaronpolhamus made their first contribution in #125
- @GabrielBianconi made their first contribution in #147
Full Changelog: v0.2.21...v0.2.22
v0.2.21
Changes
- allow mcp versions >=1.7.0, <1.8.0 @mpereira (#138)
- prompt-server sse port configuration @evalstate (#141)
New Contributors
Full Changelog: v0.2.20...v0.2.21
v0.2.20
What's Changed
- LiteLLM+LangFuse Support: Enhancing how request_params are overridden by @theobjectivedad in #129
- Add more checks for when args parsing is disabled by @kahkeng in #133
- Added MCP server cwd setting by @jpeirson in #131
- Fix/misc by @evalstate in #137:
  - otel instance id set
  - allow hyphens in tool/server names, namespace refactor (llm) #135
  - windows sse server startup fix #121
  - increase max_iterations for tool call/llm loop
  - add cwd functionality test
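The hyphen/namespace item above concerns combining server and tool names into one identifier. An illustrative sketch of namespacing that tolerates hyphens (the separator choice and validation rule are hypothetical, not fast-agent's actual scheme):

```python
import re

# Hypothetical separator; fast-agent's actual namespacing scheme may differ.
SEP = "-"
NAME_RE = re.compile(r"^[A-Za-z0-9_-]+$")  # hyphens are now permitted

def namespace_tool(server: str, tool: str) -> str:
    """Prefix a tool name with its server name so tools from different servers don't collide."""
    for part in (server, tool):
        if not NAME_RE.match(part):
            raise ValueError(f"invalid name: {part!r}")
    return f"{server}{SEP}{tool}"

print(namespace_tool("my-server", "fetch-page"))  # my-server-fetch-page
```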
New Contributors
- @theobjectivedad made their first contribution in #129
- @kahkeng made their first contribution in #133
- @jpeirson made their first contribution in #131
Thanks to:
- @lizzy-0323 for raising the hyphen-in-server-name defect
- @wowok-ai for raising the Windows SSE server issue
Full Changelog: v0.2.19...v0.2.20