VenusAI is a secure and extensible Agent framework built for modern AI applications. It offers dynamic tool management, powerful decorators, advanced caching, robust error handling, a built-in CLI, and seamless Claude MCP integration.
Install the library via pip or uv.
Note: `venusai` is an alias of `venai`; you can use either, but `venai` is the main package.
```bash
pip install venai
# or
pip install venusai
```

or

```bash
uv add venai
# or
uv add venusai
```

Install the latest Node.js (which provides `npx`) for Claude Desktop HTTP support.
Note: the `mcp-remote` package is used to provide this support.
- Security-first design with permission-based tool filtering & E2B sandbox integration
- Dynamic tool ecosystem with decorators for safety, autofix & error recovery
- High-performance caching with multiple backends (`aiocache`, `lrucache`, `async-lru`, `cachetools`)
- HTTP API generation – automatically expose tools as REST endpoints
- Native MCP protocol support – seamless Claude Desktop integration
- Type-safe dependency injection with advanced context management
- Self-healing tools – automatic error recovery & retry mechanisms
- Comprehensive error tracking with detailed frame info & custom handlers
Whether you're building simple chatbots or complex multi-agent systems, VenusAI provides the foundation for scalable, maintainable, and secure AI applications.
- `Venus`
  - Base class for all Agents
  - No default toolset (bare Agent)
- `VenusCode`
  - Subclass of `Venus` with coding capabilities
  - Built-in filesystem toolset
  - Permission-based tool filtering (supports custom permitters)
  - Code execution disabled by default
  - E2B sandbox integration for safe execution
- Dynamic tool integration from modules
- Dynamic Dependency Injection
- Decorators
  - `@agent.safe` – error-safe wrapper for context tools
  - `@agent.safe_plain` – error-safe wrapper for plain (non-context) tools
  - `@agent.autofix` – self-healing tools (functions can fix themselves)
  - `@agent.on_error` – custom error handler
- Register tools as HTTP endpoints (beta)
  - Convert registered tools to an HTTP API (via FastAPI)
  - Just call `agent.tools_http_api()` and `agent.serve()` (see the sketch after this list)
- Sync/Async caching for tools with `@cached` (see the sketch after this list)
  - Backends: `aiocache`, `lrucache`, `async-lru`, `cachetools`
- Autofix mechanism
  - Implicitly handles errors via `@safe`
  - Customizable fix-prompt & fix-model
  - Falls back to a default model if none provided
- Error Handlers
  - Errors yield an `ErrorDict` with frame & error details
  - Fully customizable responses/actions
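A minimal sketch of the HTTP endpoint item above, using only the calls named in this README (`agent.tools_http_api()` and `agent.serve()`); the model string, the tool, and any host/port defaults are illustrative assumptions:

```python
from venus import Venus

agent = Venus("grok:grok-3")

@agent.safe_plain
def ping() -> str:
    """A trivial tool exposed over HTTP."""
    return "pong"

# Convert the registered tools into a FastAPI-based HTTP API and serve it.
# Only the two call names come from this README; their return values and
# host/port defaults are assumptions.
agent.tools_http_api()
agent.serve()
```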
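A minimal sketch for the caching item above, reusing the `@cached` decorator and `ttl` keyword shown in the tool-module example further down; the backend is left at the library default here:

```python
from venus.caching import cached

@cached(ttl=60)  # cache the result for 60 seconds using the default backend
async def fetch_rates() -> dict:
    # Stand-in for an expensive lookup; repeated calls within the TTL are served from cache.
    return {"usd_try": 41.2}
```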
```python
from venus import Venus
from venus.errors import ErrorDict
from venus.types import CacheDeps, Deps, DepsT, ModelRetry, RunContext

import hashlib
import logfire

logfire.configure(console=logfire.ConsoleOptions(show_project_link=False))
logfire.instrument_pydantic_ai()

agent = Venus("grok:grok-3", deps_type=int)


class Bank(Deps[DepsT]):
    reserve: int
    """Current bank reserves."""


@agent.on_error
async def retry_on_failure(err: ErrorDict):
    print(f"Error occurred: {err.exception} at {err.location}. Retrying...")
    raise ModelRetry(err.exception)


@agent.on_error
async def notify(err: ErrorDict):
    # e.g.: await send_mail(body=err.message)
    pass


def get_reserves():
    return 1_881_938


def get_details():
    return {'code': 'tr', 'swift': 1283, 'id': 1710}


@agent.safe(retries=3, deps=Deps(reserve=get_reserves, details=get_details))
async def add_money(ctx: RunContext[Bank[int]], fund: int):
    if fund <= 5:
        raise ValueError("Enter a number greater than 5.")
    ctx.deps.reserve += fund
    bank_details = ctx.deps.get(dict)
    bank_id = bank_details['id']
    tx_hash = hashlib.md5(str(bank_id + ctx.deps.reserve).encode()).hexdigest()
    print(f"Connected bank with ID {bank_details['code'].upper()}{bank_details['swift']}")
    print(f"Added ${fund} to current (${ctx.deps.reserve - fund}) reserves.")
    print(f"Hash for transaction: {tx_hash}")
    return ctx.deps.reserve


@agent.safe(deps=CacheDeps(id=lambda: 7))
async def test(ctx: RunContext[CacheDeps]):
    return ctx.deps.id
```

Run:

```python
result = agent.run_sync("Add random money to the bank, pick 4 to 6.", output_type=int)
print(result.output)
```

or

```python
a2a = agent.to_a2a()
```

```bash
venus serve agent:agent a2a --env dev
```

This example is complete and runnable as-is.
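Because the stack is ASGI-compatible (see the tech notes at the end), the `a2a` app returned by `to_a2a()` can also be served directly with Uvicorn instead of the CLI; a minimal sketch, assuming the example above lives in `agent.py`:

```python
import uvicorn

from agent import a2a  # the ASGI app created via agent.to_a2a() above (module name assumed)

if __name__ == "__main__":
    uvicorn.run(a2a, host="127.0.0.1", port=8000)
```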
- Tool integration from modules via `@tool` / `@mcp_tool`
- Dynamic Claude configuration with `MCP.configure(configure_claude=True)` (see the sketch below)
- Dependency Injection support for MCP tools
- `mcp-remote` integration with HTTP/SSE for Claude Desktop
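A rough sketch of the programmatic Claude configuration mentioned above. Only the `MCP.configure(configure_claude=True)` call comes from this README; the import path and constructor arguments below mirror the CLI flags and are assumptions, not the documented API:

```python
from venus.mcp import MCP  # import path assumed

# Hypothetical setup mirroring the CLI flags (--path / --name / --transport).
server = MCP(path="my_tools.py", name="Venus MCP", transport="http")

# Documented call: writes/updates the Claude Desktop configuration so the
# server is reachable from Claude Desktop (via mcp-remote for HTTP/SSE).
MCP.configure(configure_claude=True)
```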
Venus provides a command-line interface (CLI) to manage and run agents. You can start chats, serve APIs, or launch MCP servers directly from the terminal.
- Chat with an agent

```bash
venus chat module:app
```

- Run MCP Server

```bash
venus mcp --path my_tools.py --name "Venus MCP" --host 127.0.0.1 --port 8000 --transport <sse|http|stdio> --configure
venus mcp --path my_tools.py --name "Venus MCP" --host 127.0.0.1 --port 8000 --transport <sse|http|stdio> --configure --all
```

- Serve an Agent as API

```bash
venus serve mymodule:agent --auto --env dev
```

Available commands:

- `chat` – Start interactive CLI chat with an agent
- `mcp` – Run an MCP server with tools from modules
- `serve` – Expose your agent via HTTP (FastAPI/Uvicorn)
- Supports plugins such as A2A (`a2a`)
```python
from venus import Venus

agent = Venus(name="venus")

response = agent.run_sync("Hello there!")
print(response.output)
```

```python
from venus import VenusCode
from venus.permissions import Permission
from venus.helpers.io import io_toolset


def my_permitter(permission: int):
    # Without EXECUTE but with READ permission, expose only the read tool.
    if not permission & Permission.EXECUTE and permission & Permission.READ:
        return ["read_file_content"]
    return list(io_toolset.tools.keys())


code_agent = VenusCode(
    name="coder",
    permission=Permission.READ_EXECUTE,
    permitter=my_permitter,  # omit the permitter to use the default one
)
```
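A possible follow-up call for the permission-filtered agent above; the prompt is illustrative, and only `run_sync()`/`.output` from the earlier examples are assumed:

```python
# With Permission.READ_EXECUTE, the permitter above returns the full io_toolset;
# with a read-only mask it would expose only read_file_content.
result = code_agent.run_sync("Read pyproject.toml and summarize the project metadata.")
print(result.output)
```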
```python
from venus import Venus
from venus.types import Deps, DepsT, RunContext

import uuid
import time

agent = Venus(deps_type=int)

uuidgen = lambda: uuid.uuid4().hex
datagen = lambda: {'foo': [Deps(bar='baz')]}


class Auth(Deps[DepsT]):
    id: str


@agent.safe(deps=Deps(id=uuidgen, data=datagen))
def get_tx(ctx: RunContext[Auth[int]]):  # AgentDepsT is int here
    # attribute-style access to deps entity `id`
    txhash = '%d$%s' % (time.time(), ctx.deps.id)
    # type-based access to deps entity `foo`
    data = ctx.deps.get(dict)  # None
    data = ctx.deps.get(list)  # [Deps(bar='baz')]
    # access the main dependency for the agent
    agentdeps = ctx.deps.main  # int
    # type-based access to deps entity `foo`
    # use the exact annotation to access it:
    data = ctx.deps.get(list[Deps])  # [Deps(bar='baz')]
    return txhash + data[0].bar  # data is a list, so take the first Deps entry
```
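The agent above declares `deps_type=int`, so a run can supply that main dependency (exposed to tools as `ctx.deps.main`); a minimal sketch, assuming `run_sync` accepts a `deps=` keyword as in PydanticAI:

```python
result = agent.run_sync("Create a transaction id.", deps=7)  # deps= assumed to pass the main (int) dependency
print(result.output)
```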
```python
# agent.py
from venus import Venus

agent = Venus(tool_modules='agent_tools')
```

```python
# agent_tools.py
from venus.types import Deps
from venus.caching import cached
from venus.decorators import tool, mcp_tool, safe_call, autofix


@tool
@cached(ttl=240)
def get_id():
    return 1


@mcp_tool(deps=Deps(id=get_id))
def get_username(deps: Deps):
    return f'@user{deps.id}'


@safe_call
async def create_user(username: str):
    return True


@autofix
async def risky_function():
    raise Exception('An error occurred')
```
```python
# agent.py
from venus import Venus
from venus.types import Deps, RunContext

agent = Venus()


@agent.safe_plain
def add(x: int, y: int) -> int:
    return x + y


@agent.safe(retries=3)
def sub(ctx: RunContext, x: int, y: int) -> int:
    return x - y


@agent.autofix(retries=2, deps=Deps(result=lambda: 20))
def risky_function(data: str):
    raise Exception('An error occurred')
```

- Python 3.10+ – async-first with modern type hints
- Based on PydanticAI – robust validation & AI agent foundation
- ASGI-compatible – works with FastAPI, Uvicorn, etc.
- MCP Protocol – native Model Context Protocol integration
- Secure execution with E2B Sandbox
- CLI powered by Click – ergonomic, extensible command line
- Advanced Caching – multiple backend support
- Dependency Injection – type-safe, dynamic DI system
- Error Handling – custom recovery & retry strategies
- Decorator System – tool safety, autofix & error control
- HTTP API Generation – auto REST endpoint conversion
Contributions are welcome! Please open an issue before submitting a PR to discuss your idea.
Licensed under the MIT License – see the LICENSE file for details.