Claude has memory, but it's bounded. Context windows fill. Sessions end. A static memory document can only hold so much before it becomes noise.
Cyberbrain extends Claude's native capabilities with a persistent, searchable knowledge layer built from your actual sessions:
- Automatic capture — hooks into Claude Code's compaction events to extract structured knowledge beats (decisions, insights, problems, references) without any manual effort
- Dynamic context injection — instead of a fixed memory document, you retrieve only what's relevant to the current task and load it on demand, keeping context lean and targeted
- Structured search — vault notes carry typed frontmatter, tags, summaries, and session metadata, enabling search that surfaces the right result rather than just the most recent match
- Cross-session accumulation — import from Claude Desktop and ChatGPT exports so everything you've worked through, across every interface, feeds the same store
The automatic hooks and slash commands are built for Claude Code. The MCP server works with any tool that supports the Model Context Protocol — Cursor, Zed, or anything else in the ecosystem.
The result is an external cognitive extension — a second brain that compounds across every conversation and grows more useful the longer you use it.
→ New here? See QUICKSTART.md.
```
Session in progress
  │
  ▼  (context fills or /compact)
PreCompact hook fires
  │
  ▼
~/.claude/hooks/pre-compact-extract.sh
  │  reads transcript path from hook stdin
  ▼
~/.claude/cyberbrain/extractors/extract_beats.py
  │  parses transcript JSONL
  │  calls Claude (Haiku) to extract "beats"
  │  routes beats by scope (project vs. general)
  ▼
Obsidian vault/
  Projects/<project>/Claude-Notes/   ← project-scoped beats
  AI/Claude-Sessions/                ← general beats (inbox)
  │
  ▼  (next session)
cb_recall(query) via MCP
  │  hybrid FTS5 + semantic search across vault
  ▼
Relevant beats injected into context
```
Extraction happens automatically on every compaction — both auto and manual. You'll see "Extracting knowledge before compaction..." in the Claude Code status bar while it runs.
A SessionEnd hook also fires when a session ends without compacting, so no session is missed.
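For reference, the registered hook entries in `~/.claude/settings.json` look roughly like this — a sketch only; the installer writes the exact entries, and the hook schema may vary across Claude Code versions:

```json
{
  "hooks": {
    "PreCompact": [
      {
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/pre-compact-extract.sh" }
        ]
      }
    ],
    "SessionEnd": [
      {
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/session-end-extract.sh" }
        ]
      }
    ]
  }
}
```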
- Python 3.8+
- Claude Code (the `claude` CLI)
- An Obsidian vault, or any directory for plain-markdown storage
- Optional: AWS credentials for the `bedrock` backend, or Ollama for the `ollama` backend. The default `claude-code` backend uses your active Claude Code session — no separate API key needed.
```bash
bash install.sh
```

The installer:
- Installs hooks, prompts, extractors, and the MCP server into `~/.claude/cyberbrain/`
- Registers the `PreCompact` and `SessionEnd` hooks in `~/.claude/settings.json`
- Creates `~/.claude/cyberbrain/config.json` with a placeholder vault path (if not already present)
- Registers the MCP server in Claude Desktop (macOS)
- Creates a Python venv at `~/.claude/cyberbrain/venv/` and installs MCP dependencies

After installation, set `vault_path` in `~/.claude/cyberbrain/config.json` before the system will run.

```bash
bash uninstall.sh
```

Pass `--yes` to skip the confirmation prompt. The uninstaller removes all installed files and surgically removes the hook entries from `~/.claude/settings.json`.
```json
{
  "vault_path": "/Users/you/Documents/MyVault",
  "inbox": "AI/Claude-Sessions",
  "backend": "claude-code",
  "model": "claude-haiku-4-5",
  "claude_timeout": 120,
  "autofile": false,
  "daily_journal": false,
  "journal_folder": "AI/Journal",
  "journal_name": "%Y-%m-%d",
  "proactive_recall": true,
  "working_memory_folder": "AI/Working Memory",
  "working_memory_review_days": 28,
  "consolidation_log": "AI/Cyberbrain-Log.md",
  "consolidation_log_enabled": true,
  "trash_folder": ".trash"
}
```

| Field | Default | Description |
|---|---|---|
| `vault_path` | (required) | Absolute path to your Obsidian vault root |
| `inbox` | `"AI/Claude-Sessions"` | Where general beats go (not project-specific) |
| `backend` | `"claude-code"` | LLM backend: `"claude-code"`, `"bedrock"`, or `"ollama"` |
| `model` | `"claude-haiku-4-5"` | Model name passed to the backend |
| `claude_timeout` | `120` | Seconds before the LLM call times out |
| `autofile` | `false` | Use LLM to route beats into existing vault folders instead of flat inbox |
| `daily_journal` | `false` | Append a session entry to a daily journal note after each extraction |
| `journal_folder` | `"AI/Journal"` | Vault-relative folder for journal notes |
| `journal_name` | `"%Y-%m-%d"` | Journal filename pattern (strftime format) |
| `proactive_recall` | `true` | Trigger `cb_recall` at session start when working in a known project domain |
| `bedrock_region` | `"us-east-1"` | AWS region — only used when `backend` is `"bedrock"` |
| `ollama_url` | `"http://localhost:11434"` | Ollama endpoint — only used when `backend` is `"ollama"` |
| `claude_path` | `"claude"` | Full path to the `claude` binary — set this when using the MCP server from Claude Desktop, which runs without your shell PATH |
| `working_memory_folder` | `"AI/Working Memory"` | Vault-relative folder for working memory beats (temporally relevant, not durable) |
| `working_memory_review_days` | `28` | Days until a working memory note is flagged for review |
| `consolidation_log` | `"AI/Cyberbrain-Log.md"` | Vault-relative path for the consolidation/review audit log |
| `consolidation_log_enabled` | `true` | Set `false` to disable the audit log |
| `trash_folder` | `".trash"` | Vault-relative folder for soft-deleted notes |
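Since `journal_name` is a strftime pattern, you can preview what filename a pattern produces before enabling the daily journal; for example, the default `%Y-%m-%d`:

```python
from datetime import date

# The default journal_name pattern yields one note per calendar day.
print(date(2026, 2, 25).strftime("%Y-%m-%d"))  # → 2026-02-25
```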
Add this file to a project root to route that project's beats into a dedicated vault folder:
```json
{
  "project_name": "my-app",
  "vault_folder": "Projects/my-app/Claude-Notes"
}
```

Copy `cyberbrain.local.example.json` from this repo as a starting point. Add `cyberbrain.local.json` to your project's `.gitignore` — it contains local paths and is not meant to be committed.
For an existing vault:
- Set `vault_path` in `~/.claude/cyberbrain/config.json`
- Use `cb_setup` (via Claude Desktop or Claude Code MCP) to analyze your vault's existing structure and generate a `CLAUDE.md` at the vault root. This document teaches the system your vault's conventions so that future beats stay consistent with what you already have

For a new vault:
- Create a new vault in Obsidian (an empty vault is fine)
- Set `vault_path` in `~/.claude/cyberbrain/config.json`
- The default folder structure works well to start — beats will appear in `AI/Claude-Sessions/`
- Use `cb_setup` after a few sessions to generate a `CLAUDE.md` once there's enough content to analyze
When the PreCompact hook fires, Claude (Haiku) reads the session transcript and identifies "beats" — moments worth preserving. The default type vocabulary:
| Type | What it captures |
|---|---|
| `decision` | An architectural or design choice made, with rationale |
| `insight` | A non-obvious understanding or pattern that emerged |
| `problem` | A problem encountered — open or resolved |
| `reference` | A useful fact, config value, snippet, or command to remember |
If your vault's `CLAUDE.md` defines a different type vocabulary, the extractor uses that instead. This keeps beat types consistent with how you organize the rest of your vault.
Each beat is also classified by durability:
| Durability | What it means |
|---|---|
| `durable` | Passes the six-month test — useful to someone with no memory of this session, six months from now |
| `working-memory` | Current project state: open bugs, in-flight refactors, temporary workarounds, unvalidated hypotheses. Routed to a separate folder; reviewed periodically by `cb_review` |
Working memory beats are indexed and searchable like durable beats but live in `AI/Working Memory/`. The `cb_review` tool processes them when they're due, deciding whether to promote them to durable notes, extend the review window, or delete them.
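The review window is a simple offset from a note's creation date; an illustrative sketch (the helper name is hypothetical, not the extractor's actual code):

```python
from datetime import datetime, timedelta

def review_after(created: datetime, review_days: int = 28) -> datetime:
    # A working memory note becomes due for cb_review once
    # working_memory_review_days have elapsed since creation.
    return created + timedelta(days=review_days)

print(review_after(datetime(2026, 2, 25)).date())  # → 2026-03-25
```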
Each beat is written as a markdown file with YAML frontmatter:
```markdown
---
id: <uuid>
date: 2026-02-25T13:34:00
session_id: abc123
type: decision
scope: project
title: "Use raw_decode() for LLM JSON response parsing"
project: cyberbrain
cwd: /Users/dan/code/cyberbrain
tags: ["json", "llm", "parsing", "robustness"]
related: []
status: completed
summary: "json.JSONDecoder().raw_decode() tolerates trailing explanation text after the JSON blob."
cb_source: hook-extraction
cb_created: 2026-02-25T13:34:00
cb_session: abc123
---

## Decision

Use `json.JSONDecoder().raw_decode()` to parse LLM responses...
```

Provenance fields (`cb_source`, `cb_created`, `cb_session`) are written automatically. Working memory beats also carry `cb_ephemeral: true` and `cb_review_after: <date>`. Set `cb_lock: true` manually to exclude a note from consolidation and review.
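Notes in this shape are easy to process outside Cyberbrain too. A minimal sketch that separates frontmatter from body — illustrative only; the real extractor and indexer have their own parsers:

```python
def split_frontmatter(text: str) -> tuple[str, str]:
    # Assumes the note starts with a "---" fence, as Cyberbrain's notes do.
    _, frontmatter, body = text.split("---\n", 2)
    return frontmatter.strip(), body.strip()

note = "---\ntype: decision\nproject: cyberbrain\n---\n## Decision\n..."
fm, body = split_frontmatter(note)
```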
Working memory beats are routed to a separate folder (`AI/Working Memory/`) rather than the inbox. They are indexed and searchable like durable beats but carry a review date (`cb_review_after`). Use `cb_review` to process them:

```python
cb_review(dry_run=True)    # see what's due
cb_review(dry_run=False)   # process — LLM proposes promote / extend / delete per note
cb_review(days_ahead=7)    # also include notes due within 7 days
```

Promoted notes become durable vault notes. Deleted notes are logged to `AI/Cyberbrain-Log.md`.
Add a `## Cyberbrain Preferences` section to your vault's `CLAUDE.md` to guide extraction and consolidation behavior in natural language — no prompt editing required.

Manage it through `cb_configure`:

```python
cb_configure(show_prefs=True)              # view current preferences
cb_configure(set_prefs="Only capture...")  # replace the entire section
cb_configure(reset_prefs=True)             # restore defaults
```
`cb_restructure` keeps the vault clean by doing two things in a single pass:
- Merge: clusters of related notes (many small notes on the same topic) are merged into one richer note or organized under a hub page
- Split: large notes covering multiple unrelated topics are broken into focused sub-notes

```python
cb_restructure(dry_run=True)             # preview proposed changes (always start here)
cb_restructure(folder="Projects/myapp")  # target a specific folder
cb_restructure(split_threshold=3000)     # min note size (chars) to be a split candidate
```

The tool uses semantic similarity to find clusters, then asks the LLM to decide how to restructure each cluster and each large note. Set `cb_lock: true` in a note's frontmatter to protect it from restructuring.
When `"autofile": true` is set in `~/.claude/cyberbrain/config.json`, beats are routed into existing vault folders using an LLM filing decision rather than dropped flat into the inbox.

The autofile process:
- Searches the vault for existing notes that thematically match the beat (by keyword)
- Passes the top candidates to the LLM along with the vault's `CLAUDE.md`
- The LLM chooses: create a new note, extend an existing one, or use the inbox
- On collision (two beats targeting the same file), extends if tag overlap ≥ 2; otherwise creates a more specific title

Autofile adds one LLM call per beat. With the `claude-code` backend, this costs nothing extra beyond the time it takes. With API-based backends, expect roughly 2× the token usage of flat extraction.
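The collision rule above can be pictured as follows (a hypothetical helper for illustration — the actual logic lives in `extractors/autofile.py`):

```python
def resolve_collision(existing_tags: list[str], beat_tags: list[str]) -> str:
    # Two beats target the same file: extend the existing note when at
    # least two tags overlap, otherwise file under a more specific title.
    overlap = set(existing_tags) & set(beat_tags)
    return "extend" if len(overlap) >= 2 else "new-title"

resolve_collision(["json", "llm", "parsing"], ["json", "llm"])  # → "extend"
```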
Claude Code sessions are captured automatically. Sessions from Claude.ai (web, iOS, Android) require a periodic export.
Step 1: Request a data export
Go to claude.ai → Settings → Privacy → Export Data. The export ZIP arrives by email, typically within a few hours.
Step 2: Run the import script
Extract the ZIP and run:
```bash
python3 scripts/import.py --export ~/Downloads/claude-export/ --format claude
```

The script tracks which conversations have already been processed. Re-running on a newer export safely skips already-imported conversations.
ChatGPT history:
```bash
python3 scripts/import.py --export ~/Downloads/chatgpt-export/ --format chatgpt
```

Flags:
| Flag | Description |
|---|---|
| `--export PATH` | Path to the export directory or conversations file |
| `--format claude\|chatgpt` | Source format |
| `--dry-run` | Preview extractions without writing |
| `--limit N` | Process at most N conversations |
| `--since YYYY-MM-DD` | Skip conversations older than this date |
| `--cwd PATH` | Working directory context for project routing |
Shells out to `claude -p`, which uses your active Claude Code subscription. No additional API key needed.

```json
{
  "backend": "claude-code",
  "model": "claude-haiku-4-5",
  "claude_timeout": 120
}
```

Uses the Anthropic SDK with AWS Bedrock. Requires AWS credentials (`~/.aws/credentials`, env vars, or IAM role).

```json
{
  "backend": "bedrock",
  "model": "us.anthropic.claude-haiku-4-5-20251001",
  "bedrock_region": "us-east-1"
}
```

Calls a local Ollama instance. No API key or cloud dependency.

```json
{
  "backend": "ollama",
  "model": "llama3.2",
  "ollama_url": "http://localhost:11434"
}
```

Requires Ollama running locally with a model pulled (`ollama pull llama3.2`). Quality of extraction varies by model — models with strong instruction-following and JSON output work best.
The installer registers a FastMCP server in Claude Desktop, exposing eleven tools:
| Tool | Description |
|---|---|
| `cb_extract(transcript_path)` | Extract beats from a transcript file |
| `cb_file(content, instructions?)` | File a piece of text into the vault |
| `cb_recall(query, max_results?)` | Search the vault |
| `cb_read(identifier)` | Read a specific note by path or title |
| `cb_enrich(folder?, since?, dry_run?)` | Backfill missing metadata on existing notes |
| `cb_setup(vault_path?, dry_run?)` | Analyze vault and generate/update its `CLAUDE.md` |
| `cb_configure(...)` | View or change config, vault path, and preferences |
| `cb_status()` | Show vault health, index stats, and recent extraction runs |
| `cb_restructure(folder?, dry_run?, split_threshold?, folder_hub?, grouping?)` | Merge related note clusters and split large notes to keep the vault clean |
| `cb_review(days_ahead?, dry_run?, folder?)` | Review working memory notes that are due — promote, extend, or delete |
| `cb_reindex(rebuild?, prune?)` | Rebuild or prune the search index |
`install.sh` writes the MCP server entry to Claude Desktop's config automatically:

```
~/Library/Application Support/Claude/claude_desktop_config.json
```
Restart Claude Desktop after installation. You'll see a hammer icon (🔨) in the chat input when the MCP server is connected.
If you need to register the server by hand (or if `install.sh` skipped it because Claude Desktop wasn't running), open Claude Desktop's config file:

```bash
open ~/Library/Application\ Support/Claude/claude_desktop_config.json
```

Or navigate there from Claude Desktop: Settings → Developer → Edit Config.
Add or merge the `cyberbrain` entry under `mcpServers`:

```json
{
  "mcpServers": {
    "cyberbrain": {
      "command": "/Users/you/.claude/cyberbrain/venv/bin/python",
      "args": ["/Users/you/.claude/cyberbrain/mcp/server.py"]
    }
  }
}
```

Replace `/Users/you` with your actual home directory path. Restart Claude Desktop to apply.
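If you'd rather script the merge than hand-edit the JSON, a small sketch — it assumes the default install paths shown above, and you should back up the config file first:

```python
import json
from pathlib import Path

config_path = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"

def add_cyberbrain(config: dict, home_dir: str) -> dict:
    # Merge the cyberbrain entry without clobbering other registered servers.
    servers = config.setdefault("mcpServers", {})
    servers["cyberbrain"] = {
        "command": f"{home_dir}/.claude/cyberbrain/venv/bin/python",
        "args": [f"{home_dir}/.claude/cyberbrain/mcp/server.py"],
    }
    return config

# Uncomment to apply:
# config = json.loads(config_path.read_text())
# config_path.write_text(json.dumps(add_cyberbrain(config, str(Path.home())), indent=2))
```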
For proactive vault recall and filing within Claude Desktop, create a Project and paste the system prompt from `prompts/claude-desktop-project.md` into the project's Customize field (Project settings → Customize → Custom instructions).

This instructs Claude to:
- Call `cb_recall` automatically when you mention a topic it may have notes on
- Call `cb_file` when you say "save this" or "capture this"
- Proactively surface past decisions when you return to a topic
The MCP tools are available in any Claude Desktop conversation once connected, but the project system prompt makes the behavior automatic rather than requiring you to ask explicitly.
No beats appear after `/compact`

- Confirm `vault_path` in `~/.claude/cyberbrain/config.json` points to a real directory
- Confirm the hook is registered: `cat ~/.claude/settings.json | python3 -m json.tool | grep -A5 PreCompact`
- For the `claude-code` backend: confirm `claude` is in PATH: `which claude`
- For `bedrock`: confirm AWS credentials work: `aws sts get-caller-identity`
- For `ollama`: confirm Ollama is running: `curl http://localhost:11434/api/tags`
"Reached max turns" or backend error
The session transcript may be very long. Add or increase `claude_timeout` in `~/.claude/cyberbrain/config.json`:

```json
{ "claude_timeout": 180 }
```

Beats land in inbox instead of project folder

Confirm `.claude/cyberbrain.local.json` exists in the project root (or a parent directory up to `~`), and that `project_name` and `vault_folder` are both set.
MCP tools fail in Claude Desktop: "claude CLI not found"
Claude Desktop spawns the MCP server without your shell's PATH, so the `claude` binary isn't found even though Claude Code is installed. The extractor tries common Homebrew locations automatically, but if yours differs, set the full path explicitly:

```json
{ "claude_path": "/opt/homebrew/bin/claude" }
```

Find your path by running `which claude` in a terminal. Apple Silicon Macs typically use `/opt/homebrew/bin/claude`; Intel Macs typically use `/usr/local/bin/claude`.
"Prompt file not found" error
The extractor looks for prompts at `~/.claude/cyberbrain/prompts/`. Reinstall to ensure they were copied:

```bash
bash install.sh
```
| File | Purpose |
|---|---|
| `install.sh` | Installer |
| `uninstall.sh` | Uninstaller |
| `QUICKSTART.md` | Fast-path setup guide |
| `ARCHITECTURE.md` | Detailed architecture documentation |
| `cyberbrain.example.json` | Template for `~/.claude/cyberbrain/config.json` |
| `cyberbrain.local.example.json` | Template for per-project `.claude/cyberbrain.local.json` |
| `hooks/pre-compact-extract.sh` | PreCompact hook entry point |
| `hooks/session-end-extract.sh` | SessionEnd hook entry point |
| `extractors/extract_beats.py` | Core engine entry point (re-exports all modules) |
| `extractors/extractor.py` | LLM-based beat extraction from transcripts |
| `extractors/backends.py` | LLM backend implementations (claude-code, bedrock, ollama) |
| `extractors/config.py` | Configuration loading and prompt file loading |
| `extractors/transcript.py` | JSONL transcript parsing |
| `extractors/vault.py` | Note writing, routing, filename generation, relations |
| `extractors/autofile.py` | LLM-driven filing decisions |
| `extractors/search_backends.py` | Search backends (grep, FTS5, hybrid) |
| `extractors/search_index.py` | Search index coordination and lifecycle |
| `extractors/analyze_vault.py` | Vault structure analyzer for `cb_setup` |
| `prompts/extract-beats-system.md` | System prompt for beat extraction |
| `prompts/autofile-system.md` | System prompt for autofile filing decisions |
| `prompts/enrich-system.md` | System prompt for `cb_enrich` |
| `prompts/restructure-*.md` | Prompts for `cb_restructure` (decide, generate, audit, group) |
| `prompts/review-system.md` | System prompt for `cb_review` |
| `prompts/claude-desktop-project.md` | Recommended Claude Desktop Project system prompt |
| `mcp/server.py` | FastMCP server entry point |
| `mcp/shared.py` | Bridge between MCP tools and extractor layer |
| `mcp/resources.py` | MCP resources and prompts |
| `mcp/tools/*.py` | MCP tool implementations (extract, file, recall, manage, setup, enrich, restructure, review, reindex) |
| `scripts/import.py` | Import Claude or ChatGPT export into the vault |