An AI agent powered by Temporal workflows and AI SDK 5.0.
```bash
# Install dependencies
npm install

# Configure environment variables
cp .env.example .env
# Edit .env and add your OPENAI_API_KEY
```

Environment variables:

```bash
TEMPORAL_ADDRESS=localhost:7233 # Temporal server address
TEMPORAL_NAMESPACE=default      # Temporal namespace
TEMPORAL_API_KEY=               # Optional: for Temporal Cloud
OPENAI_API_KEY=                 # Required: OpenAI API key
PORT=3000                       # Server port
```

Terminal 1 - Start Temporal (via Docker):

```bash
docker run -p 7233:7233 temporalio/auto-setup:latest
```

Terminal 2 - Start Worker:

```bash
npm run worker
# or with auto-reload:
npm run worker.watch
```

Terminal 3 - Start API Server:

```bash
npm run dev
```

Visit: http://localhost:3000
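The server reads the environment variables listed above at startup. A minimal sketch of a typed config loader with the same defaults — the interface and function names here are illustrative, not the repo's actual module:

```typescript
// Hypothetical config helper mirroring the .env variables above.
export interface AgentConfig {
  temporalAddress: string;
  temporalNamespace: string;
  temporalApiKey?: string; // Optional: only for Temporal Cloud
  openaiApiKey: string;    // Required
  port: number;
}

export function loadConfig(env: Record<string, string | undefined>): AgentConfig {
  const openaiApiKey = env.OPENAI_API_KEY;
  if (!openaiApiKey) {
    // Fail fast: the agent cannot call the LLM without a key.
    throw new Error("OPENAI_API_KEY is required");
  }
  return {
    temporalAddress: env.TEMPORAL_ADDRESS ?? "localhost:7233",
    temporalNamespace: env.TEMPORAL_NAMESPACE ?? "default",
    temporalApiKey: env.TEMPORAL_API_KEY || undefined,
    openaiApiKey,
    port: Number(env.PORT ?? 3000),
  };
}
```

Failing fast on the missing API key keeps misconfiguration errors at process start rather than mid-workflow.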
Start workflow (non-blocking):

```bash
curl -X POST http://localhost:3000/agent \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the weather in San Francisco in Celsius?"}'
```

Get workflow status:

```bash
curl http://localhost:3000/agent/{workflowId}
```

Execute workflow (blocking):

```bash
curl -X POST http://localhost:3000/agent/execute \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the weather in San Francisco in Celsius?"}'
```

Stream workflow progress (SSE):

```bash
curl -X POST http://localhost:3000/agent/stream \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the weather in San Francisco in Celsius?"}'
```

Stream LLM tokens in real-time (SSE + Signal/Query):

```bash
curl -N -X POST http://localhost:3000/agent/stream-tokens \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the weather in Tokyo in Celsius?"}'
```

Stream existing workflow:

```bash
curl http://localhost:3000/agent/{workflowId}/stream
```

Health check:
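The SSE endpoints emit newline-delimited events whose payloads arrive on `data:` lines. A small parser for extracting those payloads from a raw stream chunk — a sketch that makes no assumption about the server's event schema beyond standard SSE framing:

```typescript
// Extract the payloads of "data:" lines from a raw SSE chunk.
// Standard SSE framing: fields are newline-delimited, events are
// separated by blank lines; non-data fields (event:, id:) are skipped.
export function parseSSELines(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trim())
    .filter((data) => data.length > 0);
}
```

When consuming `/agent/stream-tokens` with `fetch`, feed each decoded chunk through this parser and concatenate the results to rebuild the token stream.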
```bash
curl http://localhost:3000/health
```

Available scripts:

```bash
npm run dev          # Start API server with auto-reload
npm run worker       # Start Temporal worker
npm run worker.watch # Start worker with auto-reload
npm run build        # Build TypeScript to JavaScript
npm start            # Run production build
```
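For programmatic use, the curl examples above translate to a thin fetch wrapper. This is a sketch: `buildAgentRequest` and `startAgent` are hypothetical helper names, and the `workflowId` field in the response is an assumption inferred from the `/agent/{workflowId}` status URL:

```typescript
// Shape of the request sent to POST /agent (mirrors the curl examples).
export interface AgentRequestInit {
  method: string;
  headers: Record<string, string>;
  body: string;
}

// Build the request options separately so they are easy to inspect and test.
export function buildAgentRequest(prompt: string): AgentRequestInit {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  };
}

// Start a workflow (non-blocking) and return its id.
// Assumes the response body includes a workflowId field.
export async function startAgent(
  baseUrl: string,
  prompt: string,
): Promise<{ workflowId: string }> {
  const res = await fetch(`${baseUrl}/agent`, buildAgentRequest(prompt));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return (await res.json()) as { workflowId: string };
}
```

The returned `workflowId` can then be polled via `GET /agent/{workflowId}` or attached to via `GET /agent/{workflowId}/stream`.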