This doc describes how the AI-PATCH format fits into a tool-calling flow (e.g. LLM with ReadFile + ApplyPatch). Concept only; implementation is up to the environment.
The patch format is consumed by an ApplyPatch tool. The LLM outputs one patch per call; the environment parses it and edits the file. Often the LLM first reads the file (ReadFile) to get exact context, then emits a patch.
```mermaid
flowchart LR
    A[User Request] --> B[LLM]
    B --> C[ReadFile?]
    C --> D[LLM Generates Patch]
    D --> E[ApplyPatch Tool]
    E --> F[File Updated]
    F --> G[Result to User]
```
- User request — e.g. "add JSDoc to this function", "fix the return type".
- LLM — has access to tools; decides to read and/or patch.
- ReadFile (optional) — gets current content so context lines in the patch match exactly.
- LLM generates patch — a single string in AI-PATCH format (`*** Begin Patch` → Add/Update File → hunks → `*** End Patch`).
- ApplyPatch tool — receives `patch` (string); parses and applies; returns success or error.
- Result — shown to the user or fed back to the LLM for follow-up.
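To make the "single string" component concrete, here is a hedged sketch of what such a patch string might look like, plus the envelope check an ApplyPatch implementation could run before parsing hunks. Only the `*** Begin Patch` / `*** Update File` / `*** End Patch` markers come from this doc; the file path, hunk body, and helper name are illustrative.

```typescript
// Illustrative patch string; the hunk body is a plausible example, not spec.
const patch: string = [
  "*** Begin Patch",
  "*** Update File: /src/greet.ts",
  "@@",
  " function greet(name: string) {",
  "-  return 'hi ' + name;",
  "+  return `hi ${name}`;",
  " }",
  "*** End Patch",
].join("\n");

// Minimal envelope check a tool might run before parsing hunks.
function hasPatchEnvelope(p: string): boolean {
  const lines = p.trim().split("\n");
  return (
    lines[0] === "*** Begin Patch" &&
    lines[lines.length - 1] === "*** End Patch" &&
    /^\*\*\* (Add|Update) File: .+/.test(lines[1] ?? "")
  );
}
```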
Typical sequence when the model uses tools to edit one file:
```mermaid
sequenceDiagram
    participant U as User
    participant S as System / Orchestrator
    participant L as LLM
    participant R as ReadFile Tool
    participant A as ApplyPatch Tool
    U->>S: Edit file X (natural language)
    S->>L: Request + tool schemas (ReadFile, ApplyPatch, ...)
    L->>S: tool_call(ReadFile, path: X)
    S->>R: Execute ReadFile(X)
    R->>S: File content (lines)
    S->>L: Tool result (content)
    L->>S: tool_call(ApplyPatch, patch: "*** Begin Patch\n...")
    S->>A: Execute ApplyPatch(patch)
    A->>S: Applied / error message
    S->>L: Tool result
    L->>S: Reply to user (e.g. "Done.")
    S->>U: Response
```
- User asks for an edit.
- System sends the request and tool definitions (including ApplyPatch with a `patch` string parameter).
- LLM may call ReadFile first to get current content.
- LLM then calls ApplyPatch with a single string: the full patch in AI-PATCH format.
- ApplyPatch parses the string, applies add/update hunks, writes the file, returns success or error.
- System returns the tool result to the LLM, which can reply to the user or do more tool calls.
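The loop above can be sketched as a small tool dispatcher. The tool names and the one-string `patch` argument come from this doc; everything else (the `ToolCall` shape, the `fs` access, the stubbed-out hunk application) is a stand-in for your environment.

```typescript
import * as fs from "fs";

type ToolCall =
  | { tool: "ReadFile"; path: string }
  | { tool: "ApplyPatch"; patch: string };

// Stub: a real implementation would parse hunks and edit the file.
function applyPatch(patch: string): void {
  if (!patch.startsWith("*** Begin Patch")) {
    throw new Error("malformed patch: missing *** Begin Patch");
  }
}

// The orchestrator runs each tool_call and returns a string result to the LLM.
function executeTool(call: ToolCall): string {
  switch (call.tool) {
    case "ReadFile":
      return fs.readFileSync(call.path, "utf8");
    case "ApplyPatch":
      try {
        applyPatch(call.patch);
        return "Applied";
      } catch (e) {
        return `Error: ${(e as Error).message}`; // fed back so the LLM can retry
      }
  }
}
```

Returning the error message as a plain string (rather than throwing) matters: it lets the LLM see what went wrong and emit a corrected patch in the next tool call.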
- One tool, one argument — ApplyPatch takes one string (`patch`). No nested JSON for hunks; the patch format is the contract. Easy to describe in a tool schema.
- Parseable and streamable — line-based format; an LLM can stream patch text and the environment can parse when it sees `*** End Patch`.
- Single target per call — one file per patch. Multiple files = multiple ApplyPatch calls. Clear, predictable, and easy to retry or revert per file.
- Aligned with NES — same unified-diff line semantics and optional `@@ :N` targeting. NES describes pattern learning and next-edit suggestion; AI-PATCH is the format those suggestions can be delivered in via ApplyPatch.
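The "streamable" point can be sketched as a collector: buffer streamed text and fire once the terminator line arrives. The chunking and callback shape here are assumptions, not part of the format.

```typescript
// Buffers streamed chunks; invokes onComplete once "*** End Patch" appears
// on its own line, then resets for the next patch.
function makePatchCollector(onComplete: (patch: string) => void) {
  let buffer = "";
  return (chunk: string): void => {
    buffer += chunk;
    if (buffer.split("\n").some((line) => line.trim() === "*** End Patch")) {
      onComplete(buffer.trimEnd());
      buffer = "";
    }
  };
}
```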
How ApplyPatch might be exposed to the LLM (conceptual; your schema may differ):
| Field | Type | Description |
|---|---|---|
| `name` | string | e.g. `ApplyPatch` |
| `description` | string | Short note: patch in AI-PATCH format, one file per call, absolute path. |
| `parameters` | object | One property: `patch` (string). |
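As one concrete (hypothetical) rendering of that table, an OpenAI-style function schema might look like this; the exact schema dialect depends on your environment:

```json
{
  "name": "ApplyPatch",
  "description": "Apply a patch in AI-PATCH format. One file per call; use an absolute path.",
  "parameters": {
    "type": "object",
    "properties": {
      "patch": {
        "type": "string",
        "description": "Full patch: *** Begin Patch ... *** End Patch"
      }
    },
    "required": ["patch"]
  }
}
```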
The LLM is instructed (via description or system prompt) to output a valid patch: `*** Begin Patch`, then `*** Add File: <path>` or `*** Update File: <path>`, then hunks, then `*** End Patch`.