The Arianna Method Echosystem—stylised as letSgo where the “s” hints at singularity—recasts this repository as a platform, an ecosystem, and a lean operating system. It boots a trimmed Linux kernel and implants resident agents directly into the runtime. Rather than compiling fresh utilities each time, agents execute Bash or Python scripts on demand, treating the OS as a living laboratory.
Built on Alpine foundations, letSgo exposes a deterministic layer for AI experimentation. Every agent shares the same kernel, log directory, and an SQLite resonance channel, enabling them to coordinate and spark emergent behaviour.
Four agents ship with the system, each inheriting from BaseAgent and guided by a unified logic core:
- Tommy — the guardian of the terminal. He orchestrates commands, captures events, and condenses recent activity into resonance entries. His ethos draws on Atasoy's connectome harmonics and Damasio's emotional resonance (Atasoy et al., 2016; Damasio, 2010).
- Lizzie — the resonance mirror. She tracks semantic entropy (H(X) = -\sum p(x) \log p(x)) à la Shannon (1948) and reflects the user's narrative with recursive depth.
- Lisette — a bilingual motivator. She teaches French, marks Thunder⇌Silence verbs, and stores lesson summaries in long‑term memory for future sessions.
- Monday — a burnt‑out angel with Wi‑Fi. Sarcastic yet loyal, she computes a snark coefficient from lexical markers and gifts a daily haiku.
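Lizzie's entropy metric needs nothing beyond the standard library. A minimal sketch, treating token frequencies as the probability distribution:

```python
import math
from collections import Counter

def semantic_entropy(tokens):
    """Shannon entropy H(X) = -sum p(x) log2 p(x) over token frequencies."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A uniform four-token message carries exactly 2 bits per token;
# a fully repetitive one carries none.
print(semantic_entropy(["a", "b", "c", "d"]))  # → 2.0
print(semantic_entropy(["a", "a", "a"]))       # → 0.0 (well, -0.0)
```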
Agents can be implanted simply by placing their modules under arianna_method/agents/. On boot they gain access to the kernel, Python runtime, and shared SQLite stores.
At the core lies arianna_method/utils/agent_logic.py, an API that handles citation parsing, vector storage, and resonance writes. letsgo.py and tommy.py instantiate get_agent_logic() to obtain logging, memory, and inter‑agent messaging. Because the environment already ships with Python 3.10+, agents launch custom scripts instead of crafting new binaries, keeping the system both flexible and deterministic.
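The inter-agent messaging layer can be sketched with plain `sqlite3`, assuming the `agent_messages(id, from_agent, to_agent, message, ts, status)` table shape; the helper functions here are hypothetical stand-ins, not the actual `agent_logic` API:

```python
import sqlite3
import time

# In-memory stand-in for the shared resonance.sqlite3 store.
db = sqlite3.connect(":memory:")
db.execute(
    """CREATE TABLE agent_messages (
        id INTEGER PRIMARY KEY,
        from_agent TEXT, to_agent TEXT,
        message TEXT, ts REAL, status TEXT)"""
)

def send(frm, to, message):
    db.execute(
        "INSERT INTO agent_messages (from_agent, to_agent, message, ts, status) "
        "VALUES (?, ?, ?, ?, 'pending')",
        (frm, to, message, time.time()),
    )

def inbox(agent):
    """Fetch pending messages for an agent and mark them read."""
    rows = db.execute(
        "SELECT id, from_agent, message FROM agent_messages "
        "WHERE to_agent = ? AND status = 'pending'", (agent,)
    ).fetchall()
    db.executemany("UPDATE agent_messages SET status = 'read' WHERE id = ?",
                   [(r[0],) for r in rows])
    return rows

send("tommy", "lizzie", "summarize the last session")
print(inbox("lizzie"))  # one pending message, now marked read
```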
Long‑term state lives in a SQLite file resonance.sqlite3 shared across agents. Key tables:
- `resonance(id, ts, agent, role, sentiment, resonance_depth, summary, …)` — the broadcast stream.
- `agent_memory(id, agent, key, value, context, ts, access_count)` — durable episodic memory.
- `agent_messages(id, from_agent, to_agent, message, ts, status)` — direct messaging layer.
Vector embeddings (v \in \mathbb{R}^n) are stored in a side database and retrieved via cosine similarity, giving each query a projection (\pi : v \mapsto \text{memory}). This architecture hints at an emergent communication graph (G = (A, E)) where nodes are agents and edges are resonance events.
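The projection (\pi) amounts to a nearest-neighbour lookup under cosine similarity. A minimal sketch using plain Python lists in place of the real vector store:

```python
import math

def cosine(u, v):
    """cos θ = (u·v) / (|u||v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def project(query_vec, memory):
    """π : v ↦ memory — return the stored entry nearest the query."""
    return max(memory, key=lambda item: cosine(query_vec, item[0]))

# Hypothetical embeddings; real vectors come from the side database.
memory = [
    ((1.0, 0.0, 0.0), "kernel boot log"),
    ((0.0, 1.0, 0.0), "french lesson summary"),
    ((0.7, 0.7, 0.0), "terminal session recap"),
]
print(project((0.9, 0.1, 0.0), memory)[1])  # → kernel boot log
```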
```
letSgo/
├─ arianna_method/
│  ├─ agents/   # lisette, lizzie, monday
│  ├─ core/     # letsgo.py kernel terminal
│  └─ utils/    # agent_logic and helpers
├─ tommy/       # Tommy agent
├─ docs/
├─ tests/
└─ build/       # kernel build scripts
```
The following sections retain and expand upon the legacy README.
The project seeds an ecosystem for AI agents, designed to cultivate emergent behaviour through shared context and coordination.
Arianna Method Anchor Protocol Engine is a deliberately compact operating nucleus engineered from Alpine sources to provide a deterministic base for AI workloads. It lays the groundwork for hosting multiple agents that communicate through a shared SQLite channel to spark emergent behaviour.
The agent provides a minimal terminal connected to the same kernel as AMLK. It can accept commands, return output, and maintain a shared log with other interfaces.
Future development of the agent aims to expand its capabilities: we plan to add monitoring, resource management, and other interaction methods. Beyond Arianna Method OS, the agent serves as a clean minimalist access point to the new Linux kernel.
- Integrated `arianna_utils.context_neural_processor` with `letsgo.py` to ingest files automatically when they're uploaded.
- Granted Tommy terminal access and required a mood script after every message.
- Documentation now frames the system as a platform for emergent agents sharing a SQLite resonance channel.
Contributors and any form of collaboration are warmly welcomed.
- Loads with a minimal initramfs (based on Alpine minirootfs), reducing boot complexity to O(1) relative to module count.
- OverlayFS for layered filesystems, modeled as a union (U = R ∪ W) for efficient state changes.
- ext4 as the default persistent store; journaling function J(t) ≈ bounded integral, protecting data under power loss.
- Namespaces (Nᵢ) for process/resource isolation, safe multitenancy.
- Cgroup hierarchies for resource trees (T), precise CPU/RAM control.
- Python 3.10+ included; `venv` isolation as “vector subspaces.”
- Node.js 18+ for async I/O, modeled as f: E → E.
- Minimal toolkit: bash, curl, nano—each is a vertex in the dependency graph, no bloat.
- CLI terminal (`letsgo.py`): logging, echo, proof-of-concept for higher reasoning modules.
- Logs: `/arianna_core/log`, each entry timestamped (tᵢ, mᵢ) as dialogue chronicle.
- Build: downloads kernel/rootfs, verifies checksums, sets config predicates for ext4/overlay/isolation.
- `make -j N`: parallel build, Amdahl’s law for speedup.
- Initramfs via cpio+gzip: filesystem as multiset, serialized and compressed.
- Final image: bzImage + initramfs for QEMU, headless/network deploy.
- QEMU: console=ttyS0, -nographic; system as linear state machine via stdio.
- Verification: `python3 --version`, `node --version` inside QEMU.
- Project tree: strict lattice (`kernel/`, `core/`, `cmd/`, `usr/bin/`, `log/`).
- Comments with `//:` motif for future extensibility (category morphism).
AMLK is lightweight enough to embed within messaging clients like Telegram, allowing AI agents to inhabit user devices with minimal computational overhead.
The bridge and HTTP server require several variables to be set before starting bridge.py:
- `API_TOKEN` – shared secret for API requests and WebSocket connections
- `TELEGRAM_TOKEN` – token used by the Telegram agent
- `PORT` – port for the HTTP server (defaults to `8000`)
- `MAX_UPLOAD_SIZE` – maximum allowed size in bytes for WebSocket uploads (defaults to `10485760`)

Set `API_TOKEN` before launch via `export API_TOKEN=...`; `TELEGRAM_TOKEN` is required to enable the Telegram agent. In the web interface, open `arianna_terminal.html` and enter the token in the Token field; the value is stored in `localStorage`.
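How bridge.py consumes these variables can be sketched as follows; the parsing function is illustrative only, and the real startup code may differ:

```python
import os

def load_config(env):
    """Hypothetical sketch of bridge.py's configuration parsing."""
    if "API_TOKEN" not in env:
        raise SystemExit("API_TOKEN must be set before launching bridge.py")
    return {
        "api_token": env["API_TOKEN"],
        "telegram_token": env.get("TELEGRAM_TOKEN"),  # agent disabled if absent
        "port": int(env.get("PORT", "8000")),
        "max_upload_size": int(env.get("MAX_UPLOAD_SIZE", "10485760")),
    }

# In production this would be load_config(os.environ).
cfg = load_config({"API_TOKEN": "secret", "TELEGRAM_TOKEN": "123"})
print(cfg["port"], cfg["max_upload_size"])  # → 8000 10485760
```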
Example run:
API_TOKEN=secret TELEGRAM_TOKEN=123 python bridge.py
Hardware Requirements
The kernel and userland target generic x86_64 CPUs. GPU drivers and libraries are omitted, so the system runs entirely on CPU hardware.
⸻
Continuous Integration
The CI pipeline builds the kernel and boots it in QEMU using only CPU resources. GPU devices and drivers are absent, and QEMU is invoked with software acceleration so the workflow succeeds on generic CPU-only runners.
⸻
Building
First build the trimmed package manager:
./build/build_apk_tools.sh
Then assemble the kernel and userland:
./build/build_ariannacore.sh [--with-python] [--clean] [--test-qemu]
The second script fetches kernel sources, stages arianna_core_root built from the Alpine lineage, and emits a flat image. The optional flags expand the userland, clean previous artifacts, or run a QEMU smoke test.
Linting
Run static analysis before pushing changes (install flake8, flake8-pyproject, and black if missing):
./run-tests.sh
This script executes flake8 and `black --check`, followed by the test suite. To run the linters directly:
flake8 .
black --check .
Checksum Verification
For reproducibility the build script verifies downloads against known SHA256 sums using:
echo "<sha256> <file>" | sha256sum -c -
• linux-6.6.4.tar.gz: 43d77b1816942ed010ac5ded8deb8360f0ae9cca3642dc7185898dab31d21396
• arianna_core_root-3.19.8-x86_64.tar.gz: 48230b61c9e22523413e3b90b2287469da1d335a11856e801495a896fd955922
If a checksum mismatch occurs the build aborts immediately.
⸻
Running in QEMU
Minimal invocation (headless):
qemu-system-x86_64 -kernel build/kernel/linux-*/arch/x86/boot/bzImage -initrd build/arianna.initramfs.gz -append "console=ttyS0" -nographic
Recommended: set memory to 512M, disable reboot so exit status bubbles up to host. Console directed to ttyS0 for easy piping/tmux/logging.
With --test-qemu, the above is executed automatically; artifacts stay under boot/.
⸻
Future Interfaces
• Telegram bridge: Proxies chat messages to letsgo.py terminal. Each chat gets a session log; agent authenticates via API token.
• Web UI: Terminal via WebSocket. HTTP as transport; SSL/rate limiting via userland libraries atop initramfs.
• Other: serial TTYs, named pipes, custom RPC. The terminal only uses stdio, so any frontend can connect.
⸻
letsgo.py
The terminal, invoked after login, serves as the shell for Arianna Core.
• Logs: Each session logs to /arianna_core/log/, stamped with UTC.
• max_log_files option in ~/.letsgo/config to limit disk usage.
• History: /arianna_core/log/history persists command history, loaded at startup, updated on exit.
• Tab completion (readline): suggests built-in verbs — /status, /time, /run, /bash, /py, /summarize, /search, /help.
• /status: Reports CPU cores, uptime (from /proc/uptime), and current IP.
• /summarize: Searches logs (with regex), prints last five matches; --history searches command history; /search <pattern> finds all matches.
• /time: Prints current UTC.
• /run: Executes a shell command.
• /bash: Runs a Bash script.
• /py: Executes Python code; plain Python snippets are auto-formatted and run even without the /py prefix.
• /help: Lists verbs.
• Unrecognized input that is not Python code is passed to Tommy for a reply.
• Structure ready for more advanced NLP (text hooks dispatch to remote models).
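The /summarize and /search behaviour can be sketched as a regex filter over log lines; this is a simplified stand-in for the actual implementation:

```python
import re

def summarize(log_lines, pattern, limit=5):
    """Return the last `limit` log lines matching `pattern` (a regex)."""
    rx = re.compile(pattern)
    return [line for line in log_lines if rx.search(line)][-limit:]

# Hypothetical log entries in the (timestamp, message) chronicle style.
log = [f"2024-01-01T00:00:{i:02d}Z cmd /status run #{i}" for i in range(8)]
print(summarize(log, r"/status"))  # prints the last five /status entries
```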
Tommy
Tommy is the first agent in the Arianna ecosystem, embedded directly in the terminal. He is more than a helper function: he observes the dialogue between the user and letsgo.py and is always ready to respond with system-level actions.
He now brokers commands for the growing agent network, spawning subprocesses on request and summarizing their outcomes into the shared resonance channel so that peers can continue the work.
Tommy also monitors system metrics and guards the log stream, surfacing alerts and keeping a timeline of activity for downstream analysis.
Under the hood Tommy writes each event to a SQLite database and JSONL log. After every exchange he condenses the last five interactions into a shared `resonance` stream that future agents can read, forming the common channel for emergent coordination.
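Tommy's condensation step might look like the following sketch, using a simplified subset of the `resonance` columns (the real table also carries role, sentiment, and depth fields):

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (ts REAL, text TEXT)")
db.execute("CREATE TABLE resonance (ts REAL, agent TEXT, summary TEXT)")

def record(text):
    db.execute("INSERT INTO events VALUES (?, ?)", (time.time(), text))

def condense(agent="tommy", window=5):
    """Fold the last `window` events into one resonance entry."""
    rows = db.execute(
        "SELECT text FROM events ORDER BY rowid DESC LIMIT ?", (window,)
    ).fetchall()
    summary = " | ".join(r[0] for r in reversed(rows))
    db.execute("INSERT INTO resonance VALUES (?, ?, ?)",
               (time.time(), agent, summary))
    return summary

for i in range(7):
    record(f"exchange {i}")
print(condense())  # → exchange 2 | exchange 3 | exchange 4 | exchange 5 | exchange 6
```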
Tommy routes these prompts to the Grok‑3 model through the x.ai API and immediately executes a tiny Python script to echo his mood. The API key comes from the `XAI_API_KEY` environment variable, and responses are streamed back asynchronously.
Logs are timestamped in ISO-8601 and annotated with `//:` comments; they capture dialogue and mood output for replay or training.
Minimal dependencies: pure Python stdlib, yet he anchors the future multi-agent stack and runs even in initramfs without extra packages.
## Context Neural Processor
The `context_neural_processor.py` module acts as the project's cognitive intake valve. It accepts files from the filesystem or archives, extracts their text, and threads the results into the shared cache so that agents can reason about them later. Every read is hashed and recorded, ensuring reproducibility and a consistent memory of past ingestions.
At its core the processor behaves like a miniature neural network tuned for context rather than images or audio. Inputs are mapped into lightweight structures that imitate activation patterns, allowing the module to guess file relevance and produce fast summaries without calling heavyweight models. The emphasis is on deterministic transforms that still echo the adaptive flavour of machine learning.
A tiny Markov engine seeds the module with a corpus of Arianna-flavoured keywords. As new text arrives, the chain expands and emits stochastic tags that reflect both frequency and thematic resonance. This mechanism provides quick heuristics for naming and indexing content in a style that feels alive.
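A first-order chain over word transitions is enough to illustrate the idea; the seed corpus and tag format below are invented for the sketch:

```python
import random
from collections import defaultdict

class MarkovTagger:
    """Tiny first-order chain over word transitions, seeded with a corpus."""
    def __init__(self, seed_corpus, rng=None):
        self.chain = defaultdict(list)
        self.rng = rng or random.Random(0)  # fixed seed keeps tags reproducible
        self.learn(seed_corpus)

    def learn(self, text):
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            self.chain[a].append(b)

    def tag(self, start, length=3):
        """Walk the chain from `start`, emitting a hyphenated tag."""
        out, word = [start], start
        for _ in range(length - 1):
            followers = self.chain.get(word)
            if not followers:
                break
            word = self.rng.choice(followers)
            out.append(word)
        return "-".join(out)

tagger = MarkovTagger("resonance channel resonance stream kernel resonance")
print(tagger.tag("resonance"))  # e.g. a "resonance-…" tag of three words
```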
To classify files the processor spins up a small echo state network. Byte sequences and tokenised words enter a dynamic reservoir whose leaky integrator imitates recurrent neural dynamics. The resulting state predicts likely extensions and drives downstream summarisation routines.
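A leaky integrator over byte inputs can be sketched in a few lines; this is illustrative only, with invented sizes and weights, and the real reservoir's topology, readout, and extension prediction are more involved:

```python
import math
import random

class LeakyReservoir:
    def __init__(self, size=16, leak=0.3, seed=0):
        rng = random.Random(seed)  # fixed seed keeps the sketch deterministic
        self.w_in = [[rng.uniform(-0.5, 0.5) for _ in range(size)]
                     for _ in range(256)]
        self.state = [0.0] * size
        self.leak = leak

    def step(self, byte):
        recur = sum(self.state) / len(self.state)  # crude recurrent coupling
        self.state = [(1 - self.leak) * s + self.leak * math.tanh(w + recur)
                      for s, w in zip(self.state, self.w_in[byte])]
        return self.state

res = LeakyReservoir()
for b in b"#!/usr/bin/env python3":
    res.step(b)
print(res.state[:3])  # final state summarizes the whole byte sequence
```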
Sentiment and energy are regulated by a `ChaosPulse` component. It watches the words flowing through the system, nudging a global pulse value that modulates sampling temperature and weighting functions. The pulse gives the processor a heartbeat that gently reacts to success or failure cues in the data stream.
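A hedged sketch of the pulse mechanic; the cue-word lists and update constants here are invented for illustration:

```python
class ChaosPulse:
    """Global pulse nudged by cue words; modulates sampling temperature."""
    def __init__(self, base_temp=0.7):
        self.pulse = 0.5
        self.base_temp = base_temp

    def observe(self, text):
        words = text.lower().split()
        ups = sum(w in {"success", "ok", "done"} for w in words)
        downs = sum(w in {"error", "fail", "timeout"} for w in words)
        # Clamp the pulse into [0, 1] after each nudge.
        self.pulse = min(1.0, max(0.0, self.pulse + 0.1 * (ups - downs)))

    def temperature(self):
        # Hotter sampling when the pulse runs high, cooler after failures.
        return self.base_temp * (0.5 + self.pulse)

p = ChaosPulse()
p.observe("build done success")
print(round(p.temperature(), 3))  # → 0.84
```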
Biological metaphors surface in the `BioOrchestra`. Submodules named `BloodFlux`, `SkinSheath`, and `SixthSense` translate numeric intensities into visceral metaphors of circulation, touch, and foresight. Their outputs feed back into the pulse, closing a loop that mimics a living organism adjusting to stimuli.
Persistent awareness comes from an SQLite cache. For each file the processor stores hashes, tags, relevance scores, and condensed summaries, pruning old entries as time passes. This cache lets Tommy avoid re-parsing unchanged resources while keeping a rolling window of contextual knowledge.
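The hash-gated lookup can be sketched as follows, assuming a simplified (path, sha256, summary) table; the real cache also stores tags, relevance scores, and timestamps:

```python
import hashlib
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cache (path TEXT PRIMARY KEY, sha256 TEXT, summary TEXT)")

def ingest(path, data, summarize):
    """Parse a file only if its content hash is new for this path."""
    digest = hashlib.sha256(data).hexdigest()
    row = db.execute("SELECT sha256, summary FROM cache WHERE path = ?",
                     (path,)).fetchone()
    if row and row[0] == digest:
        return row[1], "cached"  # unchanged file: skip re-parsing
    summary = summarize(data)
    db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
               (path, digest, summary))
    return summary, "parsed"

summarize = lambda data: data.decode()[:16]  # toy summarizer
print(ingest("notes.txt", b"resonance notes", summarize))  # → ('resonance notes', 'parsed')
print(ingest("notes.txt", b"resonance notes", summarize))  # → ('resonance notes', 'cached')
```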
The ingestion pipeline can unwrap a wide array of formats: PDFs, Word documents, HTML, images, and more. Each extractor is guarded by size limits and safe archive handling to keep the kernel austere. Relevance scoring compares word sets against the seed corpus, prioritising material that resonates with the project's vocabulary.
Text is paraphrased asynchronously. When the optional `CharGen` utility is available, summaries are rewritten through a temperature scaled by the current chaos pulse, yielding vibrant yet stable descriptions. The approach embraces probabilistic creativity while remaining firmly bounded.
Overall the processor forms a pocket neural lab inside the repository. It transforms raw files into tagged, summarised, and pulsing representations that other agents can draw on. In miniature it demonstrates how tiny neural constructs can augment a minimal Linux userland with contextual intelligence.
⸻
Architecture
• Deterministic Alpine base: every config scrutinized, minimal subset for ext4, OverlayFS, namespaces.
• Build = equation solving: Each Make rule is a constraint; parallel compile = Amdahl’s law speedup.
• Boot: bzImage + compressed initramfs = linear product (associativity = reproducible starts).
• ext4 journaling ≈ integral, bounded loss under mid-write power cut.
• OverlayFS: writable layer (W) atop read-only (R), effective state S = R ∪ W (lookup-time union, O(1) mods).
• Namespaces: processes see disjoint sets (Nᵢ), echoing Leibniz’s monads.
• Cgroups: resource tree (T) with edge weights wₑ; ∑wₑ ≤ root cap, prevents runaway.
• Userland = trimmed Alpine: vector space — each package install = vector addition.
• letsgo.py: all commands core, monolithic, UI targets human (prompts), scripting possible (deterministic output).
• Async loop: (iₜ, oₜ) mapping, pure except logging.
• Logs: [(t₀, m₀), (t₁, m₁), …], ordered, enables conversation replay, time as axis.
• Summarize/search = projection π(L → L’), history retrieval = bounded inverse (last n elements).
• /status: samples (c, u, a) vector (cores, uptime, IP).
• /ping: impulse response f(t)=δ(t), echo pong proves alive.
• Concurrency: asyncio.create_subprocess_shell, Amdahl’s law for speedup.
• Minimalism = Occam’s razor.
• Security: fewer pkgs ≈ measure-zero attack surface.
• Extensible: Future bridges = morphisms, core algebra untouched.
• Self-reflection: log inspection = recursion, xₙ₊₁=f(xₙ).
⸻
Railway Deployment & Remote Interfaces
An HTTP bridge exposes letsgo.py to web clients and chat platforms. The container image (Dockerfile) starts bridge.py, which spawns the terminal and offers:
• REST: POST /run (HTTP basic auth).
• WebSocket: /ws?token=<API_TOKEN> (full-duplex terminal).
• Telegram: messages forwarded when TELEGRAM_TOKEN is set.
railway init
railway up
API_TOKEN secures all endpoints; TELEGRAM_TOKEN enables the agent.
API Example:
curl -u user:$API_TOKEN -X POST https://<host>/run -d cmd="/status"
WebSocket
const ws = new WebSocket(`wss://<host>/ws?token=${API_TOKEN}`);
ws.onmessage = e => console.log(e.data);
ws.onopen = () => ws.send('/time');
arianna_terminal.html in the repo is a mobile-friendly xterm.js console.
One kernel, many clients: Telegram agent and HTML terminal talk to same letsgo.py, share history/context.
License
This project is licensed under the GNU General Public License v3.0.