**Built for DevFest 2026: Business & Enterprise Track + Dedalus Labs Track + Beginner Project Track**
We built Relay specifically for the Dedalus Labs track, with the initial goal of leveraging their infrastructure for MCP server hosting. During our integration planning, we consulted with the Dedalus Labs team about our architecture requirements for real-time multi-agent coordination.
After productive discussions, the Dedalus Labs team identified that certain expectations around our use case (specifically, open MCP server hosting for third-party, non-Dedalus Labs agents through their MCP SDK) weren't currently feasible with their infrastructure model. Dedalus Labs' OpenAI-compatible SDK specializes in creating new agentic workflows through one clean API, while our use case was very different.
The Dedalus Labs team was incredibly supportive and encouraged us to proceed with a Vercel-hosted MCP implementation while maintaining the core principles of their track challenge.
What this means: We built a production-grade MCP server that demonstrates the power of agent coordination protocols — exactly what the Dedalus Labs track is about — while using infrastructure better suited to our real-time locking requirements. We're deeply grateful to Dedalus Labs for their flexibility and guidance.
AI coding agents are no longer experimental — they're standard. As of the beginning of 2026, 57% of surveyed enterprises have AI agents running in production, with 30.4% more planning deployment soon (LangChain). Teams run Claude Code, Cursor, Cline, and Copilot side by side, and every developer on a team is delegating real work to their own agent. This is great for individual velocity. It's a disaster for team coordination.
The issue: agents can't talk to each other. Each one operates in total isolation. Developer A's agent has no idea that Developer B's agent is rewriting the same authentication module. Developer C's agent refactors a shared utility while two other agents depend on the old interface. Nobody finds out until PR time, when hours of parallel work collide into merge conflicts, broken builds, and wasted effort.
Any team running multiple agents on a shared codebase hits this problem, and there's no coordination infrastructure to prevent it. Git doesn't solve it. Branch strategies don't solve it. The agents themselves have no protocol for signaling intent, checking availability, or yielding to each other.
Relay is that protocol. A shared coordination channel where agents communicate what they're working on, check what's taken, and stay out of each other's way — automatically, through a native MCP integration.
Relay gives AI coding agents a shared communication layer and file locking mechanism so teams can run multiple agents in parallel without collisions.
Lock-Based Coordination
Agents claim READING or WRITING locks before touching files. Atomic multi-file locking prevents race conditions. Locks auto-expire after 5 minutes so stale claims never block the team. This is inspired by multithreading, where locks ensure that only one thread accesses a shared resource at a time. In our version, the shared resources are files edited by many agents, and we use a graph-based approach to detect conflicts.
A node representing a file can be in one of three states:
- Open (Default)
- A file that is available for editing.
- No one is currently working on this file.
- Locked
- A file that someone (or a system) has explicitly locked for editing.
- Shown with a thicker border and different coloring.
- Indicates "I am actively working on this file, hands off!"
- Neighbor Locked
- A file that is adjacent to (depends on or is depended upon by) a locked file.
- This is used to warn: "Be careful editing this file, someone is working on a related file."
- Helps prevent merge conflicts and coordination issues.
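Deriving these three states needs nothing more than the current lock set and the graph's adjacency list. A minimal sketch in TypeScript (the function and type names are illustrative, not the actual `lib/graph-service.ts` API):

```typescript
type NodeState = "OPEN" | "LOCKED" | "NEIGHBOR_LOCKED";

// Classify each file node from the active lock set and the dependency
// graph's adjacency list (neighbors = files it imports or is imported by).
function deriveStates(
  files: string[],
  lockedFiles: Set<string>,
  adjacency: Map<string, string[]>,
): Map<string, NodeState> {
  const states = new Map<string, NodeState>();
  for (const file of files) {
    if (lockedFiles.has(file)) {
      states.set(file, "LOCKED"); // "I am actively working on this, hands off!"
    } else if ((adjacency.get(file) ?? []).some((n) => lockedFiles.has(n))) {
      states.set(file, "NEIGHBOR_LOCKED"); // a related file is being edited
    } else {
      states.set(file, "OPEN"); // available for editing
    }
  }
  return states;
}
```

Locking a single file automatically flips all of its graph neighbors to the warning state, which is how dependency-aware conflicts surface without any extra bookkeeping.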
Orchestration Commands

When an agent checks in, Relay returns a clear directive:

- `PROCEED` — you're clear to edit.
- `SWITCH_TASK` — file is locked, work on something else.
- `PULL` — your branch is stale, sync first.
- `PUSH` — time to commit and release locks.
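The choice of directive can be modeled as a pure function of lock state and branch freshness. A simplified sketch (the input fields are assumptions for illustration; the real `check_status` also weighs neighbor locks and lock ownership):

```typescript
type Directive = "PROCEED" | "SWITCH_TASK" | "PULL" | "PUSH";

interface CheckInput {
  filesLockedByOthers: boolean; // any requested file held by another agent
  branchStale: boolean;         // agent_head is behind the remote branch head
  holdingLocks: boolean;        // this agent already holds WRITING locks
  workDone: boolean;            // the agent reports its edits are complete
}

// Order matters: finish and release locks first, then avoid contention,
// then make sure the agent is editing on a fresh base.
function directive(input: CheckInput): Directive {
  if (input.holdingLocks && input.workDone) return "PUSH";
  if (input.filesLockedByOthers) return "SWITCH_TASK";
  if (input.branchStale) return "PULL";
  return "PROCEED";
}
```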
Native MCP Integration ⭐
Relay exposes a native MCP endpoint at /mcp (HTTP + SSE, JSON-RPC 2.0). Two tools — check_status and post_status — work with any MCP-compatible agent. No SDK, no wrapper, no custom integration. If your agent speaks MCP, it speaks Relay.
Video 1: Lock acquisition + conflict detection
Video 2: Agent pivot + live graph coordination
DevFest Submission Video
- 📊 Home graph view with active lock badges
- 📝 Activity timeline showing lock transitions
- 🔧 MCP tool call output (`check_status` and `post_status`)
Agent adoption is accelerating faster than team tooling can keep up. Every dev team is about to have 3, 5, 10 agents running simultaneously — and right now, the coordination infrastructure simply doesn't exist.
Relay is the missing layer:
- Prevents wasted work — agents know what's taken before they start
- Catches invisible conflicts — dependency-aware detection goes beyond file-level collisions
- Scales naturally — works for 2 agents or a full team
- Zero friction — native MCP means agents coordinate without developer intervention
This isn't just a merge conflict reducer. It's the communication protocol that multi-agent teams need to function.
According to Salesforce, 95% of general AI tools fail when treated as add-ons rather than embedded in core workflows. For businesses and enterprises to adopt AI successfully, AI tools must be embedded in core workflows and enable genuine collaboration between AI agents and human workers.
Quick setup (2 minutes):

```bash
npm install
cp .env.example .env.local 2>/dev/null || true
```

Set these in `.env.local`:

```
KV_REST_API_URL=your_vercel_kv_url
KV_REST_API_TOKEN=your_vercel_kv_token
CRON_SECRET=random_secret_for_cleanup_job
GITHUB_CLIENT_ID=your_github_oauth_client_id
GITHUB_CLIENT_SECRET=your_github_oauth_secret
NEXTAUTH_SECRET=random_nextauth_secret
NEXTAUTH_URL=http://localhost:3000
GITHUB_TOKEN=optional_github_pat
```

```bash
npm run dev
```

Open http://localhost:3000 — you should see the live dependency graph UI.
```bash
REPO_URL="https://github.com/<owner>/<repo>"
BRANCH="main"
HEAD="$(git rev-parse HEAD)"

# Check file status before editing
curl -s -X POST http://localhost:3000/api/check_status \
  -H "Content-Type: application/json" \
  -H "x-github-user: demo-agent" \
  -d "{\"repo_url\":\"$REPO_URL\",\"branch\":\"$BRANCH\",\"file_paths\":[\"README.md\"],\"agent_head\":\"$HEAD\"}" | jq

# Claim a WRITING lock
curl -s -X POST http://localhost:3000/api/post_status \
  -H "Content-Type: application/json" \
  -H "x-github-user: demo-agent" \
  -d "{\"repo_url\":\"$REPO_URL\",\"branch\":\"$BRANCH\",\"file_paths\":[\"README.md\"],\"status\":\"WRITING\",\"message\":\"updating docs\",\"agent_head\":\"$HEAD\"}" | jq

# View the graph with locks
curl -s "http://localhost:3000/api/graph?repo_url=$REPO_URL&branch=$BRANCH" | jq '.metadata,.locks'
```

```bash
curl -s http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | jq
```

You should see the `check_status` and `post_status` tools listed.
Our architecture centers on MCP (Model Context Protocol) as the coordination layer — exactly the kind of agent-first infrastructure the Dedalus Labs track champions.
Our MCP Implementation:
- ✅ Native MCP protocol in Next.js (`/mcp` route with JSON-RPC 2.0)
- ✅ Production-grade error handling (rate limits, offline fallbacks, validation)
- ✅ Real agent workflow (solves actual multi-agent file conflicts)
- ✅ Full protocol compliance (SSE streaming, tool schemas, capabilities)
Lock Orchestration — lib/locks.ts uses Lua-backed atomic multi-file lock transactions in Vercel KV (Redis). check_status handles stale-branch detection and lock-aware orchestration. post_status handles atomic lock acquire/release with ownership validation.
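The all-or-nothing behavior the Lua script provides can be illustrated with an in-memory sketch (this simulates the transaction in TypeScript for clarity; in `lib/locks.ts` the equivalent check-then-claim logic runs inside a single `kv.eval` call so Redis executes it atomically):

```typescript
interface Lock {
  owner: string;
  expiresAt: number; // ms epoch; locks auto-expire (5-minute TTL in Relay)
}

// Acquire locks on every requested file or none of them. The two phases
// must never interleave with another agent's acquire; Redis guarantees
// that when the same logic runs as one Lua script.
function acquireAll(
  store: Map<string, Lock>,
  owner: string,
  files: string[],
  ttlMs: number,
  now: number,
): boolean {
  // Phase 1: abort if any file has a live lock held by someone else.
  for (const file of files) {
    const lock = store.get(file);
    if (lock && lock.expiresAt > now && lock.owner !== owner) return false;
  }
  // Phase 2: claim every file with a fresh TTL (re-entrant for the owner).
  for (const file of files) {
    store.set(file, { owner, expiresAt: now + ttlMs });
  }
  return true;
}
```

A failed acquire leaves the store untouched, which is exactly the property that prevents two agents from each grabbing half of an overlapping file set.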
Dependency Graph Engine — lib/graph-service.ts integrates with the GitHub API with intelligent caching and rate-limit handling. lib/parser.ts uses regex-based import parsing for JS/TS/Python (no AST overhead — 10x faster for our use case).
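A regex-based import parser along these lines fits in a few lines (the patterns below are illustrative, not the exact ones in `lib/parser.ts`):

```typescript
// Pull imported module specifiers out of JS/TS or Python source with
// regexes instead of a full AST (illustrative patterns).
function parseImports(source: string, lang: "ts" | "py"): string[] {
  const pattern =
    lang === "ts"
      ? /(?:import\s+(?:[\w*\s{},]+\s+from\s+)?|require\()\s*['"]([^'"]+)['"]/g
      : /^\s*(?:from\s+([\w.]+)\s+import|import\s+([\w.]+))/gm;
  const out: string[] = [];
  for (const m of source.matchAll(pattern)) {
    const spec = m[1] ?? m[2];
    if (spec) out.push(spec);
  }
  return out;
}
```

Regexes miss exotic cases (dynamic imports built from strings, multi-line import lists), but for building a coordination graph the occasional missed edge is an acceptable trade for speed.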
MCP Protocol — app/mcp/route.ts implements a native MCP JSON-RPC endpoint with HTTP + SSE streaming, supporting tools/list and tools/call with graceful fallback handling. An optional standalone Python MCP proxy is available in mcp/src/ for alternative deployments.
Frontend — Next.js 14 with React 18 and TypeScript. ReactFlow for interactive dependency graph visualization. Framer Motion for lock transition animations. Real-time polling with intelligent backoff.
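The backoff itself can be a one-line policy: widen the polling interval while the graph is quiet, snap back to the base interval on any change. A sketch (the constants are illustrative defaults, not the values Relay ships with):

```typescript
// Next polling delay: back off geometrically while nothing changes,
// reset to the base interval as soon as the graph reports activity.
function nextPollDelay(
  currentMs: number,
  changed: boolean,
  baseMs = 2_000,
  maxMs = 30_000,
  factor = 1.5,
): number {
  if (changed) return baseMs;
  return Math.min(Math.round(currentMs * factor), maxMs);
}
```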
- Framework: Next.js 14 + React 18 + TypeScript
- Storage: Vercel KV (Upstash Redis) for atomic locks
- APIs: GitHub API via Octokit, NextAuth for GitHub OAuth
- MCP Protocol: Native HTTP + SSE implementation
- Visualization: ReactFlow, Framer Motion, Radix UI
- Testing: Vitest for API routes and service layer
Architecture Constraints → Collaborative Problem-Solving

Our real-time locking requirements didn't align with Dedalus Labs' current product, so we pivoted to infrastructure that met our needs while staying true to the track's mission of using MCP and the technologies that support it.
Atomic Multi-File Locking
Race conditions were inevitable with naive lock implementations. Redis Lua scripts (kv.eval) gave us single-transaction acquire/release across multiple files, guaranteeing atomicity under high agent concurrency.
GitHub API Rate Limits

Dependency graphs require dozens of API calls per repo. We implemented aggressive graph caching (invalidated only on HEAD changes), conditional requests with ETags, and graceful degradation that serves cached graphs when rate-limited. Even so, we kept exhausting our quota; tracing the problem uncovered a leak of redundant API calls, and fixing it let the app run smoothly well within the limit.
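The ETag flow is worth spelling out, because a 304 response to a conditional GitHub request does not count against the rate limit. A sketch with an injectable fetcher (the types and names are illustrative, not the actual `lib/github.ts` API):

```typescript
type Fetcher = (
  url: string,
  headers: Record<string, string>,
) => Promise<{ status: number; etag?: string; body?: string }>;

const etagCache = new Map<string, { etag: string; body: string }>();

// Conditional GET: send If-None-Match with the cached ETag; a 304 means
// "unchanged", so serve the cache. On other failures (e.g. rate-limited),
// degrade gracefully to the last cached body instead of failing the caller.
async function cachedGet(url: string, fetcher: Fetcher): Promise<string> {
  const hit = etagCache.get(url);
  const headers: Record<string, string> = {};
  if (hit) headers["If-None-Match"] = hit.etag;
  const res = await fetcher(url, headers);
  if (res.status === 304 && hit) return hit.body;
  if (res.status === 200 && res.etag !== undefined && res.body !== undefined) {
    etagCache.set(url, { etag: res.etag, body: res.body });
    return res.body;
  }
  if (hit) return hit.body; // stale but usable
  throw new Error(`request failed with status ${res.status}`);
}
```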
Direct vs. Neighbor Conflicts
File-level locking wasn't enough. Adding dependency-aware neighbor conflict detection required a real-time graph ingestion pipeline and overlay logic in check_status — catching the subtle breakages where editing one file breaks another agent's dependency chain.
relay/
├── app/ # Next.js UI and API routes
│ ├── api/
│ │ ├── check_status/ # Lock-aware status checking
│ │ ├── post_status/ # Atomic lock acquire/release
│ │ ├── graph/ # Dependency graph endpoint
│ │ └── cleanup_stale_locks/ # Cron job for TTL enforcement
│ ├── mcp/
│ │ └── route.ts # 🌟 Native MCP JSON-RPC endpoint
│ ├── components/ # React UI components
│ └── hooks/ # useGraphData (real-time polling)
├── lib/ # Core coordination services
│ ├── locks.ts # Lua-backed atomic lock transactions
│ ├── graph-service.ts # GitHub API + dependency graph builder
│ ├── github.ts # Octokit client with rate-limit handling
│ ├── parser.ts # Import statement regex parser
│ └── validation.ts # Request schema validation
├── mcp/ # Optional standalone Python MCP proxy
│ ├── main.py
│ └── src/
│ ├── server.py # Starlette-based MCP server
│ └── models.py # Pydantic request/response models
└── tests/ # Vitest test suite
├── routes.test.ts # API route integration tests
└── mcp-route.test.ts # MCP protocol compliance tests
Short-term
- Multi-repo awareness for cross-service conflict detection
- Smart file recommendations when `SWITCH_TASK` fires (suggest neighbor-safe files)
- WebSocket live updates to replace polling
Long-term
- Historical analytics to identify lock hot-spots and merge-risk trends
- Expanded MCP tooling (`batch_plan_files`, `auto_retry_on_unlock`, `branch_health_check`)
- AI-powered conflict resolution with merge strategies based on lock history
- Slack/Discord integration for team-level coordination visibility
- Self-hosted enterprise deployment with custom Redis clusters
Technical Insights
- Building MCP endpoints directly in Next.js was easier than expected — no need for separate server infrastructure
- Redis Lua scripts gave us true atomicity without complex distributed locking patterns
- Regex-based import parsing was 10x faster than full AST parsing for dependency graphs
- Graceful degradation turned blocking errors into usable warnings; this is critical for agent workflows that can't afford to stall
Hackathon Lessons
- Scope ruthlessly: we cut chat features and multi-repo support to nail core lock orchestration
- Be flexible: we initially wanted to build the graph and dependency system at the function level, realized it wasn't feasible, and pivoted to a file-level system
- Test the happy path first: getting `check_status` → `post_status` → `PROCEED` working end-to-end built confidence fast
What surprised us
- How fast dependency graphs grow (600+ files → 2000+ edges in a medium repo)
- GitHub API rate limits hit way earlier than expected
- Agents actually follow orchestration commands when you return clear `action` values
- Vercel KV is fast enough for real-time coordination at <5ms p99 latency
1. MCP Endpoint Discovery

```bash
curl http://localhost:3000/mcp \
  -H "Accept: text/event-stream" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize"}' | head -20
```

✅ Returns protocol version 2024-11-05 and server capabilities
2. Tool Discovery

```bash
curl http://localhost:3000/mcp \
  -H "Accept: application/json, text/event-stream" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}' | jq '.result.tools'
```

✅ Lists `check_status` and `post_status` tools with full schemas
3. Tool Execution (Real Agent Workflow)

```bash
curl http://localhost:3000/mcp \
  -H "Accept: application/json, text/event-stream" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc":"2.0",
    "id":3,
    "method":"tools/call",
    "params":{
      "name":"check_status",
      "arguments":{
        "username":"gpt5-orchid-lukauljaj",
        "repo_url":"https://github.com/anthropics/anthropic-sdk-python",
        "branch":"main",
        "file_paths":["README.md"],
        "agent_head":"main"
      }
    }
  }' | jq '.result.structuredContent'
```

✅ Returns orchestration command with lock status
Why This Fits the Dedalus Labs Track:
- Production-Ready MCP Protocol — Full JSON-RPC 2.0 compliance with SSE streaming
- Real Agent Problem — Solves actual multi-agent coordination, not a toy demo
- Proper Error Handling — Graceful degradation for rate limits, timeouts, offline scenarios
- Tool Schema Validation — Complete `inputSchema`/`outputSchema` definitions
- Scalable Architecture — Same MCP endpoint serves both UI and agent traffic
MCP Protocol Compliance Checklist:
- ✅ JSON-RPC 2.0 message format
- ✅ SSE (Server-Sent Events) response streaming
- ✅ Standard error codes (-32600, -32601, etc.)
- ✅ Tool discovery via `tools/list`
- ✅ Tool execution via `tools/call`
- ✅ Server initialization handshake
- `npm run dev` — Start development server
- `npm run build` — Production build
- `npm run start` — Run production server
- `npm run typecheck` — TypeScript validation
- `npm run test` — Run Vitest test suite
