Open source AI coding agent. Terminal-first, built for developers who want control.
This is a v0.0.1 personal build — I'm building this for myself and open sourcing it as I go. If you want to try it, contribute ideas, or just follow along — welcome.
```bash
git clone https://github.com/useing123/slarkcli
cd slarkcli
uv venv
uv pip install -r requirements.txt
```
```bash
# First run will prompt you for your API key and model
uv run python main.py --dir /path/to/your/project
```

Config is saved to `~/.slark/config.toml` and never committed to the repo.

With session tracing (saves every LLM request to `~/.slark/traces/`):

```bash
SLARK_TRACE=1 uv run python main.py --dir /path/to/your/project
```

>> /cost — token usage and cost for this session
>> /sessions — list all sessions for this project
>> /switch <n> — switch to session by number or ID prefix
>> /new — start a new session (current is archived)
>> /clear — wipe current session messages from memory and DB
>> /init — re-index the project
>> /exit — quit
>> /settings
File context: type @filename in any message to attach a file:
>> look at @src/components/Button.tsx and refactor it
>> compare @api/routes.py @api/models.py
Tab completion works for both @filename and /commands.
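The `@filename` mechanic boils down to swapping a mention for the file's contents before the message reaches the model. A minimal sketch of that idea — the regex and helper below are illustrative, not slark's actual implementation:

```python
import re
from pathlib import Path

# Illustrative only: slark's real @-mention handling may differ.
MENTION = re.compile(r"@([\w./-]+)")

def expand_mentions(message: str, root: Path) -> str:
    """Replace each @path mention with the file's contents, if the file exists."""
    def attach(match: re.Match) -> str:
        path = root / match.group(1)
        if not path.is_file():
            return match.group(0)  # leave unknown mentions untouched
        return f"\n--- {match.group(1)} ---\n{path.read_text()}\n"
    return MENTION.sub(attach, message)
```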
Interrupt: Ctrl+C during agent run stops it and returns to prompt. Ctrl+C at prompt exits.
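Under the hood, `/cost` is just token counts multiplied by the per-token prices from the `[pricing.openrouter]` section of `config.toml`. A sketch with hypothetical token counts:

```python
# Per-token prices from [pricing.openrouter] in ~/.slark/config.toml.
PRICE_IN = 0.00000027   # USD per input token
PRICE_OUT = 0.00000079  # USD per output token

def session_cost(tokens_in: int, tokens_out: int) -> float:
    """Return the session's USD cost from raw token counts."""
    return tokens_in * PRICE_IN + tokens_out * PRICE_OUT

# Hypothetical session: 120k input tokens, 8k output tokens.
print(f"${session_cost(120_000, 8_000):.4f}")  # → $0.0387
```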
- Never reads `.env`, `.git/`, `~/.ssh/` or any secrets — hardcoded, not prompt-based
- Dangerous commands require explicit user confirmation
- `move_to_garbage` instead of `rm` — deleted files are always recoverable from `~/.slark/garbage/`
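The trash-instead-of-delete pattern fits in a few lines. This is a sketch of the idea, not slark's exact code; the timestamp-prefix naming is an assumption:

```python
import shutil
import time
from pathlib import Path

def move_to_garbage(path: str, garbage: Path = Path.home() / ".slark" / "garbage") -> Path:
    """Move a file into the garbage directory instead of deleting it.

    A timestamp prefix keeps repeated deletions of the same name from colliding.
    """
    src = Path(path)
    garbage.mkdir(parents=True, exist_ok=True)
    dest = garbage / f"{int(time.time())}-{src.name}"
    shutil.move(str(src), dest)
    return dest
```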
- Full session history saved to SQLite at `~/.slark/slark.db`
- Switch between sessions and continue from where you left off
- Sessions are scoped per project directory — different projects don't mix
- Agent tracks its own progress using structured tasks
- Every subtask is created, updated, and verified before moving on
- Stored in SQLite — query your agent's work history anytime
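Since everything lives in SQLite, plain SQL works on your history. The table and column names below are made up for illustration — inspect the real schema with `sqlite3 ~/.slark/slark.db .schema` first:

```python
import sqlite3

# Hypothetical schema; slark's actual tables may be named differently.
conn = sqlite3.connect(":memory:")  # stand-in for ~/.slark/slark.db
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO tasks (title, status) VALUES (?, ?)",
    [("add tests", "done"), ("refactor auth", "in_progress")],
)
for title, status in conn.execute("SELECT title, status FROM tasks ORDER BY id"):
    print(f"{status:>12}  {title}")
```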
- Automatic context pruning when token budget gets large
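Budget-based pruning can be as simple as evicting the oldest messages once an estimated token count exceeds `prune_threshold` from the config. A rough sketch — the 4-chars-per-token estimate and the keep-the-system-prompt rule are assumptions, not slark's actual logic:

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return len(text) // 4

def prune(messages: list[dict], budget: int = 80_000) -> list[dict]:
    """Drop the oldest non-system messages until the history fits the budget."""
    kept = list(messages)
    while sum(estimate_tokens(m["content"]) for m in kept) > budget and len(kept) > 1:
        # Preserve the system prompt at index 0; evict the oldest message after it.
        kept.pop(1)
    return kept
```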
- AST-based project index — `search_symbol`, `get_file_symbols`, `index_summary`
- Session traces saved locally at `~/.slark/traces/`
- `read_file`, `read_lines`, `tree`, `outline` — smart file reading with 150-line truncation
- `write_file`, `str_replace`, `create_dir`, `move_to_garbage` — safe editing
- `grep`, `find_definition` — code search across Python, TS, JS, Go, Rust
- `run_command`, `run_background`, `check_port`, `kill_background` — process control
- `create_task`, `update_task`, `list_tasks` — task management
- `search_symbol`, `get_file_symbols`, `index_summary` — AST index
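A `str_replace`-style editing tool typically requires the target snippet to appear exactly once in the file, refusing ambiguous matches so the agent can't edit the wrong spot. A hedged sketch of that pattern, not slark's actual implementation:

```python
from pathlib import Path

def str_replace(path: str, old: str, new: str) -> None:
    """Replace exactly one occurrence of `old` in the file, or fail loudly.

    Requiring a unique match prevents edits landing in the wrong place.
    """
    p = Path(path)
    text = p.read_text()
    count = text.count(old)
    if count != 1:
        raise ValueError(f"expected exactly one match for snippet, found {count}")
    p.write_text(text.replace(old, new, 1))
```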
- All conversations saved to SQLite for fine-tuning
- Export anytime
- Python + asyncio — core
- SQLite + aiosqlite — storage, sessions, task board, dataset
- OpenAI SDK → OpenRouter — LLM calls
- prompt_toolkit — tab completion for `@files` and `/commands`
- rich — markdown rendering in terminal
- pathspec — `.gitignore`-aware file traversal
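pathspec implements the full gitwildmatch rules, but the core idea — match paths against `.gitignore`-style globs and skip hits during traversal — can be sketched with the stdlib alone. This is a simplification for illustration (no negation, no directory-only patterns):

```python
import fnmatch
import os

def load_ignores(root: str) -> list[str]:
    """Read simple glob patterns from .gitignore (a simplified stand-in for
    the gitwildmatch rules pathspec implements in full)."""
    gi = os.path.join(root, ".gitignore")
    if not os.path.exists(gi):
        return []
    with open(gi) as f:
        return [ln.strip() for ln in f if ln.strip() and not ln.startswith("#")]

def walk(root: str) -> list[str]:
    """Return relative paths under root, skipping ignored files."""
    patterns = load_ignores(root)
    kept = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            ignored = any(
                fnmatch.fnmatch(rel, pat) or fnmatch.fnmatch(name, pat)
                for pat in patterns
            )
            if not ignored:
                kept.append(rel)
    return kept
```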
Auto-generated at ~/.slark/config.toml on first run:
```toml
[agent]
model = "deepseek/deepseek-v3.2"
provider = "openrouter"

[keys]
openrouter = "sk-..."

[context]
prune_threshold = 80000
large_context = 50000

[pricing.openrouter]
price_in = 0.00000027
price_out = 0.00000079
```

Next up:
- Streaming output — see the agent think in real time
- Command allowlist/blocklist via `config.toml`
- Gemini provider
- Web search + fetch URL tools
Orchestration:
- Orchestrator agent — decomposes tasks, never writes code itself
- Architect/Librarian agent — answers questions about the codebase via Project Index
- Coder agent — focused purely on writing and editing code
- Reviewer agent — verifies the coder's output
Swarm mode:
- Parallel workers — multiple agents tackle the same problem with different approaches
- Orchestrator picks the winner
Memory:
- Cross-session knowledge base — agent remembers your project
- Vector memory via sqlite-vec
- Session summarizer
UI:
- TUI via Textual
- Web dashboard — real-time swarm visualization
- Your data belongs to you
- The agent only does what you explicitly allow
- No hidden magic — all code is readable
PRs and ideas welcome. Still early — things will break and change.
MIT