54 changes: 53 additions & 1 deletion .claude/commands/sp.orchestrator.md
@@ -2,10 +2,15 @@
description: Universal platform orchestrator implementing Spec-Driven Development with Reusable Intelligence (SDD-RI). Routes work to appropriate agents based on stakeholder, work type, and hardware tier. Works for content authoring, engineering features, and platform infrastructure.
---

# /sp.orchestrate: Platform Reasoning Orchestrator (v4.3)
# /sp.orchestrate: Platform Reasoning Orchestrator (v4.4)

**Purpose**: Execute the complete SDD-RI workflow (Spec → Plan → Tasks → Implement → Validate) for ANY platform task by **routing to appropriate agents** based on context analysis. This orchestrator serves all three stakeholders (Students, Authors, Institutions).

**v4.4 Updates**:
- **Rule 10: Iteration PHR Enforcement** - Create PHR for EACH user feedback round, not just phase completions
- Updated Rule 7 table with iteration trigger row
- Added concrete examples from 039-panaversity-fs-hardening session

**v4.3 Updates**:
- **Rule 9: ADR Location Enforcement** - ADRs must go in `history/adr/`, NOT in `specs/` folders
- Fixed incorrect ADR examples in Rule 8
@@ -1103,9 +1108,54 @@ Every significant action MUST have a corresponding PHR:
| /sp.implement completes | green | [feature]-implementation |
| Validation completes | misc | [feature]-validation |
| Orchestration completes | misc | [feature]-orchestration-summary |
| **User feedback iteration** | [artifact stage] | [feature]-iteration-[topic] |

**PHR recording is NOT optional.** If a PHR is skipped, the orchestration is incomplete.

### Rule 10: Iteration PHRs (CRITICAL)

**When user provides feedback that leads to artifact updates, create a PHR for EACH iteration.**

**Iteration PHR Triggers:**
- User corrects a contradiction in spec/plan/tasks
- User requests changes to an artifact before approval
- User provides clarifying information that changes approach
- User identifies missing coverage (tests, requirements, criteria)

**Iteration PHR Format:**
```
Title: [feature]-iteration-[topic]
Stage: [matches artifact being iterated - spec/plan/tasks]
Content:
- What user feedback identified
- What was changed in response
- Why this decision matters
```

**Example Iteration PHRs (from 039-panaversity-fs-hardening):**
```
# Plan iteration when user said "we start fresh, no migrations"
0004-plan-iteration-fresh-start.plan.prompt.md

# Plan iteration when user identified R4/SC-002 test gaps
0005-plan-iteration-test-coverage.plan.prompt.md
```

**Why Iteration PHRs Matter:**
- Final artifacts show WHAT was decided, not WHY
- Iteration PHRs capture the reasoning behind changes
- Without them, decision rationale is lost when context compacts
- They document user corrections that improve future agent behavior

**Common Failure Pattern:**
```
❌ WRONG: Create PHR only when artifact approved
- Result: 3 rounds of feedback → 0 PHRs → lost rationale

✅ RIGHT: Create PHR for EACH user feedback round
- Result: 3 rounds of feedback → 3 PHRs → full decision trail
```

</enforcement_summary>

---
@@ -1159,4 +1209,6 @@ Skills CAN be used now for discovery—but we still need spec approval before im

---

**Version 4.4: Added Rule 10 (Iteration PHR enforcement), updated Rule 7 table with iteration trigger.**

**Version 4.3: Added Rule 9 (ADR location enforcement), corrected ADR examples in Rule 8.**
48 changes: 48 additions & 0 deletions .claude/commands/sp.tasks.md
@@ -68,6 +68,54 @@ The tasks.md should be immediately executable - each task must be specific enoug

**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach.

### CLI-First Principle (REQUIRED)

**ALWAYS prefer CLI commands over manual file creation** when tools exist for scaffolding:

| Tool | CLI Command | NOT Manual Creation |
|------|-------------|---------------------|
| **Alembic** | `alembic init <dir>` | ❌ Don't manually create env.py, script.py.mako |
| **Alembic** | `alembic revision --autogenerate -m "msg"` | ❌ Don't manually create migration files |
| **uv** | `uv add <package>` | ❌ Don't manually edit pyproject.toml dependencies |
| **pytest** | `pytest --collect-only` | ❌ Don't guess test discovery |
| **pnpm/npm** | `pnpm add <package>` | ❌ Don't manually edit package.json |

**Task Format for CLI Operations**:
```text
- [ ] T00X Use `<cli command>` to <action>. Verify output with `<verification command>`.
```

**Example**:
```text
- [ ] T002 Use `alembic init src/app/migrations` to scaffold migrations directory. Verify with `ls src/app/migrations/`.
- [ ] T009 Use `alembic revision --autogenerate -m "initial schema"` to generate migration. Review generated file for CHECK constraints.
```

### Documentation Lookup Principle (REQUIRED)

**ALWAYS reference documentation tools** when tasks involve unfamiliar libraries or complex patterns:

| Library | Task Must Include |
|---------|-------------------|
| SQLAlchemy 2.0 async | `**Doc**: Fetch SQLAlchemy docs via Context7 for async patterns` |
| Alembic async | `**Doc**: Fetch Alembic docs via Context7 for async migration setup` |
| prometheus-client | `**Doc**: Fetch prometheus-client docs via Context7 for metric types` |
| hypothesis | `**Doc**: Fetch hypothesis docs via Context7 for property strategies` |
| FastAPI | `**Doc**: Fetch FastAPI docs via Context7 for dependency injection` |
| Pydantic v2 | `**Doc**: Fetch Pydantic docs via Context7 for model_validator patterns` |
| Any new library | `**Doc**: Fetch <library> docs via Context7 before implementation` |

**Task Format for Doc Lookup**:
```text
- [ ] T00X Create <file> with <functionality>. **Doc**: Fetch <library> docs via Context7 for <specific pattern>.
```

**Example**:
```text
- [ ] T005 Create `src/database/models.py` with FileJournal SQLAlchemy model. **Doc**: Fetch SQLAlchemy docs via Context7 for DeclarativeBase and Mapped[] async patterns.
- [ ] T034 Create `tests/property/test_invariants.py` with hypothesis property tests. **Doc**: Fetch hypothesis docs via Context7 for composite strategies.
```

### Checklist Format (REQUIRED)

Every task MUST strictly follow this format:
3 changes: 1 addition & 2 deletions .claude/settings.local.json
@@ -11,8 +11,7 @@
"ask": []
},
"disabledMcpjsonServers": [
"playwright",
"context7"
],
"enableAllProjectMcpServers": true,
"alwaysThinkingEnabled": false
}
2 changes: 2 additions & 0 deletions .github/workflows/deploy.yml
@@ -78,6 +78,8 @@ jobs:
AUTH_URL: ${{ secrets.AUTH_URL }}
OAUTH_CLIENT_ID: ${{ secrets.OAUTH_CLIENT_ID }}
BASE_URL: ${{ secrets.BASE_URL }}
PANAVERSITY_API_KEY: ${{ secrets.PANAVERSITY_API_KEY }}

run: npm run build

- name: Upload build artifacts
13 changes: 9 additions & 4 deletions .github/workflows/sync-content.yml
@@ -53,6 +53,7 @@ jobs:
- name: Sync content to R2
env:
PANAVERSITY_SERVER_URL: ${{ secrets.PANAVERSITY_SERVER_URL }}
PANAVERSITY_API_KEY: ${{ secrets.PANAVERSITY_API_KEY }}
run: |
# Upload changed markdown files to PanaversityFS MCP server
# Uses write_content tool via JSON-RPC
@@ -93,6 +94,7 @@
CONTENT=$(cat "$FILE" | jq -Rs .)

# Create JSON-RPC request for write_content tool
# Note: MCP SDK expects arguments wrapped in 'params' object
REQUEST=$(jq -n \
--arg book_id "$BOOK_ID" \
--arg path "$CONTENT_PATH" \
@@ -104,17 +106,20 @@
params: {
name: "write_content",
arguments: {
book_id: $book_id,
path: $path,
content: $content
params: {
book_id: $book_id,
path: $path,
content: $content
}
}
}
}')

# Send to MCP server
# Send to MCP server with API key authentication
RESPONSE=$(curl -s -X POST "$SERVER_URL" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer $PANAVERSITY_API_KEY" \
-d "$REQUEST")

# Check for errors
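For clarity, here is a minimal sketch of the request body the `jq` expression above should produce, assuming the standard JSON-RPC `tools/call` envelope (the outer `jsonrpc`/`id`/`method` fields sit outside this hunk) and illustrative values for `book_id`, `path`, and `content`:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "write_content",
    "arguments": {
      "params": {
        "book_id": "ai-native-dev",
        "path": "docs/example/index.md",
        "content": "# Example chapter\n..."
      }
    }
  }
}
```

Note the double nesting: the tool arguments are wrapped in an inner `params` object, matching the inline comment that the MCP SDK expects arguments wrapped in `params`. The `curl` call then posts this body with the `Authorization: Bearer` header populated from `PANAVERSITY_API_KEY`.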
21 changes: 19 additions & 2 deletions .mcp.json
@@ -1,5 +1,18 @@
{
"mcpServers": {
"context7": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"@upstash/context7-mcp"
],
"env": {}
},
"deepwiki": {
"type": "http",
"url": "https://mcp.deepwiki.com/mcp"
},
"playwright": {
"type": "stdio",
"command": "npx",
@@ -8,11 +21,15 @@
],
"env": {}
},
"context7": {
"better-auth": {
"type": "http",
"url": "https://mcp.chonkie.ai/better-auth/better-auth-builder/mcp"
},
"next-devtools": {
"type": "stdio",
"command": "npx",
"args": [
"@upstash/context7-mcp"
"next-devtools-mcp@latest"
],
"env": {}
}
39 changes: 38 additions & 1 deletion CLAUDE.md
@@ -275,6 +275,29 @@ find specs/ history/prompts/ -type d -name "*home-page*" | head -1

---

## FAILURE MODE: Missing Iteration PHRs

**What I did wrong** (2025-12-04):
- ❌ Created PHR for initial plan (0001-plan.plan.prompt.md)
- ❌ User provided 3 rounds of feedback (fresh-start, test coverage, approval)
- ❌ Only created PHR for tasks phase at end
- ❌ Result: 3 decision-making conversations with NO documentation

**What I should have done**:
1. ✅ Create PHR after initial plan: `0001-panaversityfs-hardening-plan.plan.prompt.md`
2. ✅ Create PHR after "fresh start" feedback: `0002-plan-iteration-fresh-start.plan.prompt.md`
3. ✅ Create PHR after "test coverage" feedback: `0003-plan-iteration-test-coverage.plan.prompt.md`
4. ✅ Create PHR after tasks generation: `0004-panaversityfs-hardening-tasks.tasks.prompt.md`

**Root Cause**: Treated PHR as "one per phase" instead of "one per meaningful interaction." User feedback that changes artifacts IS a meaningful interaction worth documenting.

**Key Insight**: Iteration PHRs capture WHY decisions changed, not just WHAT the final artifact says. Without them, future sessions lose context about:
- Why migration was removed (user said "POC, fresh start")
- Why R4 isn't in property tests (performance invariant, not logical)
- What alternatives were considered and rejected

---

## II. Recognize Your Cognitive Mode (After Context Gathered)

### You Tend to Converge Toward:
@@ -512,7 +535,21 @@ As the main request completes, you MUST create and complete a PHR (Prompt Histor
4) Validate + report
- No unresolved placeholders; path under `history/prompts/` and matches stage; stage/title/date coherent; print ID + path + stage + title.
- On failure: warn, don't block. Skip only for `/sp.phr`.


5) **CRITICAL: PHRs for Iterative Feedback**
- When user provides feedback that leads to artifact updates (spec revisions, plan corrections), create a PHR for EACH iteration
- Title format: `{artifact}-iteration-{topic}` (e.g., `plan-iteration-fresh-start`, `spec-iteration-postgres-choice`)
- Stage matches the artifact being iterated (plan feedback → stage: plan)
- **Why**: Iterations capture decision rationale that's lost if only final artifact is documented
- Example sequence for a feature:
```
0001-feature-spec.spec.prompt.md # Initial spec
0002-feature-plan.plan.prompt.md # Initial plan
0003-plan-iteration-migration.plan.prompt.md # User feedback: no migration
0004-plan-iteration-test-coverage.plan.prompt.md # User feedback: missing R4
0005-feature-tasks.tasks.prompt.md # Final tasks
```

---

## VIII. Execution Contract (Every Request)
6 changes: 6 additions & 0 deletions book-source/.env.example
@@ -26,6 +26,12 @@ PANAVERSITY_PLUGIN_ENABLED=false
# - Production: Your hosted server URL (e.g., https://panaversity-fs.example.com/mcp)
PANAVERSITY_SERVER_URL=http://localhost:8000/mcp

# API Key for PanaversityFS MCP server authentication.
# Get this from the Panaversity SSO server (API key management).
# Required when server has auth enabled. Leave empty for dev mode (no auth).
# Format: pana_xxx... or sk_live_xxx... or sk_test_xxx...
PANAVERSITY_API_KEY=

# -----------------------------------------------------------------------------
# Authentication Configuration (SSO)
# -----------------------------------------------------------------------------
5 changes: 4 additions & 1 deletion book-source/plugins/docusaurus-panaversityfs-plugin/index.js
@@ -24,6 +24,8 @@ module.exports = function panaversityFSPlugin(context, options) {
bookId = 'ai-native-dev',
enabled = false, // Disabled by default
serverUrl = process.env.PANAVERSITY_SERVER_URL || 'http://localhost:8000/mcp',
apiKey = process.env.PANAVERSITY_API_KEY || null, // API key for authenticated requests
timeoutMs = 120000, // 2 minutes default (book fetch can be slow with large content)
docsDir = 'docsfs', // Output directory relative to siteDir (separate from docs/)
cleanDocsDir = true, // Clean docsfs/ before writing
// Files matching these patterns are stored in R2 but NOT written to docsfs/
@@ -61,6 +63,7 @@ module.exports = function panaversityFSPlugin(context, options) {
console.log(`[PanaversityFS] Book ID: ${bookId}`);
console.log(`[PanaversityFS] Enabled: ${enabled}`);
console.log(`[PanaversityFS] Server URL: ${serverUrl}`);
console.log(`[PanaversityFS] Auth: ${apiKey ? 'API Key configured' : 'No auth (dev mode)'}`);
console.log(`[PanaversityFS] Docs Path: ${docsPath}`);

if (!enabled) {
@@ -70,7 +73,7 @@

// Connect to PanaversityFS MCP server via HTTP
try {
const client = new MCPHttpClient({ serverUrl, bookId });
const client = new MCPHttpClient({ serverUrl, bookId, apiKey, timeoutMs });

// Check server availability
console.log('[PanaversityFS] Checking server availability...');
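The `MCPHttpClient` implementation itself is not part of this diff. As a rough illustration only, here is a minimal sketch of how such a client might thread the new `apiKey` and `timeoutMs` options through to the HTTP layer, assuming Node 18+ global `fetch`; the names and structure below are hypothetical, not the plugin's actual code:

```js
// Hypothetical sketch only; the real MCPHttpClient lives elsewhere in the plugin.
class MCPHttpClient {
  constructor({ serverUrl, bookId, apiKey = null, timeoutMs = 120000 }) {
    this.serverUrl = serverUrl;
    this.bookId = bookId;
    this.apiKey = apiKey;
    this.timeoutMs = timeoutMs;
  }

  async callTool(name, args) {
    const headers = {
      'Content-Type': 'application/json',
      Accept: 'application/json',
    };
    // Only attach the Authorization header when an API key is configured,
    // mirroring the "No auth (dev mode)" log line above.
    if (this.apiKey) {
      headers.Authorization = `Bearer ${this.apiKey}`;
    }

    const response = await fetch(this.serverUrl, {
      method: 'POST',
      headers,
      // Tool arguments are passed through as-is; any server-specific
      // wrapping (see sync-content.yml) is up to the caller.
      body: JSON.stringify({
        jsonrpc: '2.0',
        id: 1,
        method: 'tools/call',
        params: { name, arguments: args },
      }),
      // Abort slow requests after timeoutMs (book fetches can be large).
      signal: AbortSignal.timeout(this.timeoutMs),
    });

    if (!response.ok) {
      throw new Error(`MCP server returned HTTP ${response.status}`);
    }
    return response.json();
  }
}
```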