Releases: evalstate/fast-agent
v0.2.45
Changes
- Improve JSON Printing for Assistant Responses
- Use JSON Mode for Anthropic Structured Outputs @evalstate (#302)
- Update models in Evaluator example @evalstate (#301)
- Feat/display update @evalstate (#300)
- Update default model in Router.py
- Improve handling of System Prompt/Default Prompt for Router
- Simplify Eval/Optimizer
Full Changelog: v0.2.44...v0.2.45
v0.2.44
Visual Update
This release refreshes the Console Display with a new style (trialled with the new fast-agent go multi-model feature). If this causes you issues, set use_legacy_display: true in the logger config.
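A minimal sketch of the logger setting, assuming it lives under a `logger` key in `fastagent.config.yaml` (check the docs for the exact location in your config file):

```yaml
# fastagent.config.yaml - hypothetical placement; key name is from the release note
logger:
  use_legacy_display: true  # revert to the pre-0.2.44 console style
```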
What's Changed
- Fix/resource display by @evalstate in #294
- Adjust the lifetime of event queue for listeners in `AsyncEventBus` by @ufownl in #285
- Fix ResourceLinks for OpenAI
- Standardize on ContentBlock
- Bump SDK to MCP 1.12.0
Full Changelog: v0.2.42...v0.2.44
v0.2.42
What's Changed
MCP Tool, Resource, Prompt filtering selection:
Powerful per-agent filtering for MCP primitives, so you can configure the precise tools, prompts and resources you need. See https://fast-agent.ai/mcp/#mcp-filtering for instructions.
- MCP filtering by @jdecker76 in #282
AWS Bedrock Provider Native Support
Another huge contribution! https://fast-agent.ai/models/llm_providers/#aws-bedrock
- AWS Bedrock provider native support by @jdecker76 in #274
Small changes
- fix o4 and adjust hf space auth token by @evalstate in #293 (solves #279 - thanks for raising, @storlien)
New Contributors
- @jdecker76 made their first contribution in #282
Full Changelog: v0.2.41...v0.2.42
v0.2.41
What's Changed (fast-agent users please read!)
Auto-Parallel, NPX and UVX from Command Line
Auto-Parallel FEEDBACK REQUESTED
You can now specify multiple models for fast-agent and get a formatted response from each. This is perfect for comparing model outputs, or MCP Server behaviour, side-by-side. Comma-delimit the models like so:

```shell
fast-agent --models gpt-4.1,grok-3,haiku
```

You can also specify a message to prompt with automatically (`-m "hello, world"`) or a prompt file (`--prompt-file "testing.md"`).
The parallel output in this mode is styled differently from the normal fast-agent output - no side bars (for easy copy/paste) and markdown rendering enabled. There is also a "tree" style view for Router and Parallel agents. I am thinking about deploying this style more widely - please try the multi-model feature and let me know what you think.
You can now run fast-agent with an NPX or UVX MCP Server specified directly:

```shell
fast-agent --npx @llmindset/mcp-webcam
```

To pass arguments, quote the whole command:

```shell
fast-agent --npx "@llmindset/mcp-webcam --arg1 --arg2"
```
- allow npx/uvx from command line by @evalstate in #269
- "tree display" for router and parallel.
- fix azure api key environment variable handling (thanks for raising #270 @storlien)
xAI Grok 3 and Grok 4 support
- Add xAI (Grok 3 and 4) support by @Craftzman7 in #276
Show MCP Prompts / Tools
- Display MCP prompt and tool title in the [Available MCP Prompts/Tools] table by @codeboyzhou in #271
Set API key per-agent
New Contributors
- @Craftzman7 made their first contribution in #276
Full Changelog: v0.2.40...v0.2.41
v0.2.38
fast-agent: MCP Elicitation Support
Tip
Read the Quick Start guide here https://fast-agent.ai/mcp/elicitations/ to get up and running with fully working MCP Client/Server demonstrations.
fast-agent now has comprehensive support for the MCP Elicitations feature, providing compliant, interactive forms that allow full preview and validation before submission. It supports all data types and validations (including email, uri, date and date-time) with fully interactive forms.
There is a cancel-all option (easily navigable with ← + Enter) for safety that automatically cancels further Elicitations from the MCP Server.
For MCP Server developers, the form is fast and easy to navigate, facilitating iterative development. You can also specify your own custom Elicitation handlers.
What's Changed
- Feat/elicitation by @evalstate in #261
- Fix the issue of google native when `use_history=False` by @ufownl in #257
- Fix: Pass include_request parameter to ParallelAgent constructor by @gwburg in #265
- Fix/build res other by @evalstate in #262
Full Changelog: v0.2.36...v0.2.38
Thanks to @gwburg and @ufownl for their contributions in this release.
v0.2.36
What's Changed
- Support for Streaming with OpenAI endpoints, migrate to Async API.
- Added `/tools` command and MCP Server summary on startup/agent switch.
- Migrate to official A2A Types
- SDK bumps (including MCP 1.10.1)
Full Changelog: v0.2.35...v0.2.36
v0.2.35
Streaming Support for Anthropic Endpoint
NB: The default max_output_tokens for Opus 4 and Sonnet 4 is now 32,000 and 64,000 tokens respectively. For OpenAI and Anthropic models in general, unless you specifically want fewer tokens than the maximum, I'd recommend using the fast-agent defaults from the model database.
- Feat/streaming @evalstate (#251)
Full Changelog: v0.2.34...v0.2.35
v0.2.34
Usage and Context Window Support
Last-turn and cumulative usage is now available via the `UsageAccumulator` on the Agent interface (`usage_tracking`). This also contains the provider-API-specific usage information on a per-turn basis.

Added support for Context Window percentage usage for known models, including tokenizable content support in preparation for improved multi-modality support.
A `/usage` command is available in interactive mode to show current agent token usage.
- Feat/context windows @evalstate (#247 #248)
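To illustrate the idea of last-turn vs cumulative usage and context-window percentage, here is a generic sketch (this is not fast-agent's actual `UsageAccumulator` API - class and field names are invented for illustration):

```python
from dataclasses import dataclass


@dataclass
class UsageSketch:
    """Illustrative only: tracks last-turn and cumulative token usage."""
    context_window: int  # the model's context window size, in tokens
    last_input: int = 0
    last_output: int = 0
    total_input: int = 0
    total_output: int = 0

    def add_turn(self, input_tokens: int, output_tokens: int) -> None:
        # record the most recent turn and fold it into the running totals
        self.last_input, self.last_output = input_tokens, output_tokens
        self.total_input += input_tokens
        self.total_output += output_tokens

    @property
    def context_percentage(self) -> float:
        # share of the context window consumed by cumulative tokens
        return 100 * (self.total_input + self.total_output) / self.context_window


usage = UsageSketch(context_window=200_000)
usage.add_turn(1_500, 500)
usage.add_turn(2_000, 1_000)
print(f"{usage.context_percentage:.1f}%")  # 5,000 of 200,000 tokens -> 2.5%
```

The same split - per-turn figures kept alongside running totals - is what lets an interactive command like `/usage` report both views from one accumulator.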
v0.2.33
What's Changed
- fix last message is assistant handling for structured by @evalstate in #238
- add env var support to the config yaml file by @hevangel in #239
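As a hedged sketch of the env-var feature from #239 (the `${VAR}` substitution syntax and key names here are assumptions - check the fast-agent docs for the exact form), a config value could reference an environment variable like:

```yaml
# fastagent.config.yaml - hypothetical example
openai:
  api_key: ${OPENAI_API_KEY}  # resolved from the environment when the config is loaded
```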
Full Changelog: v0.2.31...v0.2.33
v0.2.31
Changes
- bump MCP SDK to 1.9.4
- fix: last message is assistant for gemini structured handling @evalstate (#238)
- Fix orchestrator doc (max_iterations => plan_iterations) @yeahdongcn (#237)
- Add OpenAI o3 @kahkeng (#236)
- Support explicitly marking an agent as default @yeahdongcn (#231)
- custom agent poc @wreed4 (#92)
- Fix deepseek-reasoner request with history @ufownl (#228)
- Feat/hf token mode @evalstate (#230)
