An autonomous, extensible AI agent built in Rust that can use various tools to accomplish complex goals.
The agent uses a "Reasoning and Acting" (ReAct) pattern, allowing it to think through a problem, decide on an action, execute it using a tool, and observe the result before continuing.
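The think/act/observe loop can be sketched in a few lines. This is an illustrative, synchronous sketch with made-up names (`Step`, `run_react`); the real agent is async and lives in `src/agent/agent.rs`:

```rust
// Minimal ReAct loop sketch. All names here are illustrative,
// not the crate's real API.
enum Step {
    // The model proposes a tool call: tool name plus an argument string.
    Act { tool: String, args: String },
    // The model is confident enough to answer directly.
    Finish { answer: String },
}

// `think` stands in for an LLM call over the transcript so far;
// `tools` executes a named tool and returns its observation.
fn run_react(
    goal: &str,
    max_iterations: usize,
    think: impl Fn(&str) -> Step,
    tools: impl Fn(&str, &str) -> String,
) -> Option<String> {
    let mut transcript = format!("Goal: {goal}");
    for _ in 0..max_iterations {
        match think(&transcript) {
            Step::Finish { answer } => return Some(answer),
            Step::Act { tool, args } => {
                // Observe the result and feed it back into the next thought.
                let observation = tools(&tool, &args);
                transcript.push_str(&format!("\nObservation: {observation}"));
            }
        }
    }
    None // hit the iteration limit without a final answer
}
```

The key property is the feedback edge: each observation is appended to the transcript, so the next "thought" reasons over everything seen so far.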
- Autonomous Goal Pursuit: Set a complex goal, and the agent iterates until it finds a solution or reaches the maximum iteration limit.
- Multi-Provider Support: Works with OpenAI directly, and with many other providers through OpenRouter.
- Smart Memory Management: Implements semantic summarization and priority-based memory management to keep the context window efficient.
- Tool Use Capabilities:
  - Calculator: Evaluate complex mathematical expressions.
  - File Operations: Read, write, list, and manage files in a sandboxed `agent_workspace` directory.
  - DateTime: Get current date and time information.
  - Web Search: Search the web for real-time information using Serper.dev.
- Interactive Chat Mode: A conversational interface for interacting with the agent.
- Streaming Output: Real-time feedback as the agent "thinks" and performs actions.
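The priority-based memory management mentioned above can be pictured roughly like this. This is an illustrative sketch with hypothetical names (`MemoryEntry`, `trim_to_budget`); the crate's actual `memory.rs` logic may differ:

```rust
// Illustrative token-budget trimming: evict the lowest-priority entries
// first until the estimated token count fits the budget.
struct MemoryEntry {
    priority: u8,  // higher = more important to keep
    tokens: usize, // rough token estimate for this entry
    text: String,
}

fn trim_to_budget(mut entries: Vec<MemoryEntry>, budget: usize) -> Vec<MemoryEntry> {
    // Keep high-priority entries first; among equal priorities,
    // prefer shorter entries so more of them fit.
    entries.sort_by(|a, b| b.priority.cmp(&a.priority).then(a.tokens.cmp(&b.tokens)));
    let mut used = 0;
    entries
        .into_iter()
        .filter(|e| {
            if used + e.tokens <= budget {
                used += e.tokens;
                true
            } else {
                false
            }
        })
        .collect()
}
```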
- Rust and Cargo (2024 edition recommended)
- API keys for your preferred AI provider (OpenAI or OpenRouter)
- Serper.dev API key for web search capabilities
- Clone the repository:

  ```sh
  git clone <repository-url>
  cd rust-agent
  ```

- Create a `.env` file in the project root:

  ```env
  # Provider selection (openai or openrouter)
  PROVIDER=openai
  API_KEY=your_api_key

  # Required for web search
  SERPER_API_KEY=your_serper_key
  ```

- Build the project:

  ```sh
  cargo build --release
  ```
Start a conversation with the agent:
```sh
cargo run --release
```

Available commands in chat:

- `/quit` or `/exit`: End the session.
- `/clear`: Clear conversation memory.
- `/memory`: Show the current memory trace.
- `/help`: Show the help message.
Run the agent to accomplish a specific task and exit:
```sh
cargo run --release -- --goal "Write a short poem about Rust and save it to poem.txt"
```

See all options with:

```sh
cargo run --release -- --help
```

| Option | Description | Default |
|---|---|---|
| `-g, --goal <GOAL>` | Run a single goal directly | - |
| `-m, --model <MODEL>` | LLM model to use | `openai/gpt-4o-mini` |
| `--max-iterations <N>` | Maximum reasoning iterations | 5 |
| `-v, --verbose` | Show debug information and memory traces | false |
| `--no-stream` | Disable streaming output | false |
```sh
# Simple calculation
cargo run --release -- --goal "What is 25 * 4 + 100?"

# File operations
cargo run --release -- --goal "Create a file called notes.txt with a shopping list"

# Web search (requires SERPER_API_KEY)
cargo run --release -- --goal "Search for the latest Rust version and save it to a file"
```

```text
src/
├── main.rs               # CLI entry point and orchestration
├── agent/                # Core agent logic
│   ├── agent.rs          # ReAct loop and tool execution
│   ├── memory.rs         # Token-aware memory management
│   ├── prompts.rs        # System prompts for the LLM
│   └── action_parser.rs  # Parse LLM output into actions
├── clients/              # API clients
│   ├── openai.rs         # OpenAI API client
│   ├── openrouter.rs     # OpenRouter API client
│   └── config.rs         # Provider configuration
├── tools/                # Available tools
│   ├── calculator_tool.rs # Mathematical expressions
│   ├── file_ops_tool.rs  # Sandboxed file operations
│   ├── datetime_tool.rs  # Date/time information
│   └── web_search.rs     # Web search via Serper.dev
├── error.rs              # Centralized error types
├── http.rs               # HTTP client configuration
└── retry.rs              # Retry logic with exponential backoff
```
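To give a flavor of what `action_parser.rs` has to do, here is a minimal parser for a single action line such as `Action: calculator(25 * 4 + 100)`. This is an illustrative sketch only; the crate's actual action format may differ:

```rust
// Illustrative parse of one "Action: tool(args)" line emitted by the LLM.
// Returns (tool name, raw argument string), or None if the line is not
// an action (e.g. a "Thought:" line).
fn parse_action(line: &str) -> Option<(String, String)> {
    let rest = line.trim().strip_prefix("Action:")?.trim();
    let open = rest.find('(')?;
    let close = rest.rfind(')')?;
    if close < open {
        return None;
    }
    let tool = rest[..open].trim().to_string();
    let args = rest[open + 1..close].trim().to_string();
    Some((tool, args))
}
```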
Adding new tools is straightforward:
- Create a new file in `src/tools/` implementing the `Tool` trait:

  ```rust
  #[async_trait]
  impl Tool for MyTool {
      fn name(&self) -> &str { "my_tool" }
      fn description(&self) -> &str { "Description for the LLM" }
      async fn execute(&self, args: Value) -> anyhow::Result<ToolOutput> {
          // Tool implementation
      }
  }
  ```

- Register the tool in `src/main.rs` within the `setup_tools` function.
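To make the registration step concrete, here is a toy end-to-end example. The async machinery is omitted for brevity, and `SimpleTool`, `EchoTool`, and this `setup_tools` are hypothetical stand-ins, not the crate's real API:

```rust
// Toy synchronous stand-in for the async Tool trait above, just to show
// the shape of a tool and how a registry might collect them.
trait SimpleTool {
    fn name(&self) -> &str;
    fn description(&self) -> &str;
    fn execute(&self, args: &str) -> Result<String, String>;
}

// A trivial tool that echoes its argument back to the agent.
struct EchoTool;

impl SimpleTool for EchoTool {
    fn name(&self) -> &str { "echo" }
    fn description(&self) -> &str { "Echoes the argument back to the agent." }
    fn execute(&self, args: &str) -> Result<String, String> {
        Ok(format!("echo: {args}"))
    }
}

// setup_tools-style registration: box each tool behind the trait so the
// agent can dispatch on the tool name chosen by the LLM.
fn setup_tools() -> Vec<Box<dyn SimpleTool>> {
    vec![Box::new(EchoTool)]
}
```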