Releases: evalstate/fast-agent

v0.2.45

20 Jul 21:57

Changes

  • Improve JSON Printing for Assistant Responses
  • Use JSON Mode for Anthropic Structured Outputs @evalstate (#302)
  • Update models in Evaluator example @evalstate (#301)
  • Feat/display update @evalstate (#300)
  • Update default model in Router.py
  • Improve handling of System Prompt/Default Prompt for Router; simplify Eval/Optimizer

Full Changelog: v0.2.44...v0.2.45

v0.2.44

19 Jul 22:28

Visual Update

This release refreshes the Console Display with a new style (trialled with the new fast-agent go multi-model feature). If this causes you issues, set use_legacy_display: true in the logger config.
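
A minimal sketch of the corresponding setting, assuming it lives under the logger section of fastagent.config.yaml (the flag name comes from the note above; the file name and nesting are assumptions):

```yaml
# fastagent.config.yaml - revert to the pre-0.2.44 console style
logger:
  use_legacy_display: true
```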

What's Changed

  • Fix/resource display by @evalstate in #294
  • Adjust the lifetime of event queue for listeners in AsyncEventBus by @ufownl in #285
  • Fix ResourceLinks for OpenAI
  • Standardize on ContentBlock
  • Bump SDK to MCP 1.12.0

Full Changelog: v0.2.42...v0.2.44

v0.2.42

16 Jul 20:39

What's Changed

MCP Tool, Resource and Prompt filtering:

Powerful per-agent filtering for MCP primitives, so you can configure exactly the tools, prompts and resources you need. See https://fast-agent.ai/mcp/#mcp-filtering for instructions.
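
A hypothetical sketch of per-agent filtering, assuming the agent decorator accepts tools/resources/prompts mappings keyed by server name (the parameter names and shapes are my assumptions; the linked docs have the exact syntax):

```python
import asyncio

from mcp_agent.core.fastagent import FastAgent

fast = FastAgent("filtering-example")

# Hypothetical: expose only two tools from the "filesystem" server to this agent.
# The tools= mapping (server name -> allowed tool names) is an assumption based on
# the filtering docs linked above, not verified against this release.
@fast.agent(
    name="reader",
    servers=["filesystem"],
    tools={"filesystem": ["read_file", "list_directory"]},
)
async def main():
    async with fast.run() as agent:
        await agent.interactive()

if __name__ == "__main__":
    asyncio.run(main())
```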

AWS Bedrock Provider Native Support

Another huge contribution! https://fast-agent.ai/models/llm_providers/#aws-bedrock

Small changes

New Contributors

Full Changelog: v0.2.41...v0.2.42

v0.2.41

13 Jul 19:00

What's Changed (fast-agent users please read!)

Auto-Parallel, NPX and UVX from Command Line

Auto-Parallel FEEDBACK REQUESTED

You can now specify multiple models for fast-agent and get a formatted response from each model. This is perfect for comparing model outputs or MCP Server behaviour side-by-side. Comma-delimit the models like so:

fast-agent --models gpt-4.1,grok-3,haiku

You can also supply a message to prompt with automatically (-m "hello, world") or a prompt file (--prompt-file "testing.md").

The parallel output in this mode is styled differently from the normal fast-agent output: no side bars (for easy copy/paste) and markdown rendering enabled. There is also a "tree" style view for Router and Parallel agents. I am thinking about rolling this style out more widely - please let me know what you think after trying the multi-model feature.

You can now run fast-agent with an NPX or UVX MCP Server specified directly on the command line:
fast-agent --npx @llmindset/mcp-webcam
To pass arguments, quote the whole package specification:
fast-agent --npx "@llmindset/mcp-webcam --arg1 --arg2"

  • Allow npx/uvx from command line by @evalstate in #269
  • "Tree display" for Router and Parallel agents.
  • Fix Azure API key environment variable handling (thanks for raising #270, @storlien)

xAI Grok 3 and Grok 4 support

Show MCP Prompts / Tools

  • Display MCP prompt and tool title in the [Available MCP Prompts/Tools] table by @codeboyzhou in #271

Set API key per-agent

  • Add api_key argument to specify API KEY when creating agents by @ufownl in #268
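
A minimal sketch of the per-agent key, assuming api_key sits alongside the other agent decorator arguments (the surrounding structure follows the standard fast-agent quickstart; the key value is a placeholder):

```python
import asyncio

from mcp_agent.core.fastagent import FastAgent

fast = FastAgent("per-agent-keys")

# api_key overrides the provider key from config/environment for this agent only.
@fast.agent(name="summariser", model="haiku", api_key="<your-provider-key>")
async def main():
    async with fast.run() as agent:
        await agent.send("Summarise the release notes")

if __name__ == "__main__":
    asyncio.run(main())
```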

New Contributors

Full Changelog: v0.2.40...v0.2.41

v0.2.38

06 Jul 21:47

fast-agent: MCP Elicitation Support

Tip

Read the Quick Start guide here https://fast-agent.ai/mcp/elicitations/ to get up and running with fully working MCP Client/Server demonstrations.

fast-agent now has comprehensive support for the MCP Elicitations feature, providing compliant, interactive forms with full preview and validation before submission. All data types and validations are supported, including email, uri, date and date-time.

There is a cancel-all option (easily navigable with + Enter) for safety that automatically cancels further Elicitations from the MCP Server.

For MCP Server developers, the form is fast and easy to navigate, facilitating iterative development. You can also specify your own custom Elicitation handlers.
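
A hypothetical handler sketch, assuming a custom handler follows the MCP Python SDK's elicitation callback shape (how it is registered with fast-agent is covered in the linked docs; the signature and types here are my assumptions):

```python
from mcp import types
from mcp.shared.context import RequestContext


# Illustrative only: declines every elicitation instead of collecting form data.
async def my_elicitation_handler(
    context: RequestContext,
    params: types.ElicitRequestParams,
) -> types.ElicitResult:
    # params.message is the server's prompt text;
    # params.requestedSchema describes the form fields the server wants filled in.
    print(f"Server asked: {params.message}")
    return types.ElicitResult(action="decline")
```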

What's Changed

New Contributors

Full Changelog: v0.2.36...v0.2.38

Thanks to @gwburg and @ufownl for their contributions in this release.

v0.2.36

29 Jun 08:59

What's Changed

  • Support for Streaming with OpenAI endpoints; migrated to the Async API.
  • Added /tools command and MCP Server summary on startup/agent switch.
  • Migrate to official A2A Types
  • SDK bumps (including MCP 1.10.1)

Full Changelog: v0.2.35...v0.2.36

v0.2.35

26 Jun 18:12

Streaming Support for Anthropic Endpoint

NB: The default max_output_tokens for Opus 4 and Sonnet 4 is now 32,000 and 64,000 tokens respectively. For OpenAI and Anthropic models, unless you specifically want a lower limit than the maximum, I'd recommend using the fast-agent defaults from the model database.
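
If you do want a lower cap, here is a sketch of overriding it per-agent, assuming RequestParams and its import path as in the fast-agent docs (both are assumptions, not taken from this release):

```python
import asyncio

from mcp_agent.core.fastagent import FastAgent
from mcp_agent.core.request_params import RequestParams  # assumed import path

fast = FastAgent("token-limits")

# Cap output tokens below the model-database default for this agent only.
@fast.agent(model="sonnet", request_params=RequestParams(maxTokens=8192))
async def main():
    async with fast.run() as agent:
        await agent.send("Write a short note about streaming responses")

if __name__ == "__main__":
    asyncio.run(main())
```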

Closes #234, #186

Full Changelog: v0.2.34...v0.2.35

v0.2.34

22 Jun 11:39

Usage and Context Window Support

Last Turn and Cumulative usage is now available via the UsageAccumulator exposed on the Agent interface (usage_tracking). This also contains the Provider API-specific Usage information on a per-turn basis.

Added support for Context Window percentage usage for known models, including tokenizable content support in preparation for improved multi-modality support.

A /usage command is available in interactive mode to show the current agent's token usage.

v0.2.33

19 Jun 19:46

What's Changed

  • Fix handling when the last message is from the assistant for structured outputs by @evalstate in #238
  • Add env var support to the config YAML file by @hevangel in #239
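
A hypothetical sketch of the env var support described above, assuming a ${VAR} substitution syntax in fastagent.config.yaml (the syntax and the section shown are assumptions, not taken from the PR):

```yaml
# fastagent.config.yaml - hypothetical example of environment variable substitution
openai:
  api_key: ${OPENAI_API_KEY}  # resolved from the environment when the config is loaded
```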

New Contributors

Full Changelog: v0.2.31...v0.2.33

v0.2.31

12 Jun 09:41

Changes