This repository contains the INSAlgo team project for entry 12 ("Building a viral video-making app") of the Global AI Hackathon 2025 by Hack-Nation, held August 9-10, 2025. The team was composed of Onyr (Florian Rascoussier), WiredMind2 (William Michaud), and Pichu (Ngoc Ha Dao).
- Onyr's postmortem post on LinkedIn
- Hackathon organizers' LinkedIn page
Unified conversational + video generation assistant (CLI & Web) built during the Global AI Hackathon (deadline Aug 10, 09:00 ET). Team: INSAlgo.
BuzzBot unifies ideation, iteration, and early media generation for short-form content creators who currently juggle separate LLM chat tabs, video tools, and manual file handling. Within the hackathon window we built a minimal, extensible assistant that: (1) provides a fast terminal and web chat interface to OpenAI-compatible models; (2) supports structured tool/function calling (extending the model with deterministic Python functions); (3) integrates the Google Veo 3 preview to generate short videos directly from within a conversation; (4) persists sessions in both SQLite and portable JSONL, auto-generating semantic titles after sufficient context; and (5) exposes a thin API layer that can later orchestrate social distribution. The core ChatSession loop implements a robust tool-calling phase (non-streaming until tools are resolved, then graceful fallback) while keeping the code surface small. Extensibility requires only adding a JSON schema entry + dispatcher mapping. A React/Vite/Tailwind frontend consumes the same REST endpoints the CLI uses, demonstrating interface parity. Although we intentionally deferred advanced auth, async background workers, and full publishing, the delivered MVP proves the architecture: a single conversational nucleus augmented by composable tools bridging creative intent to media artifacts.
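The tool-calling phase of that loop can be sketched roughly like this. This is a minimal illustration with simplified message shapes; `client`, `dispatch`, and the method names are assumptions for the sketch, not the actual `ChatSession` code:

```python
import json

def run_tool_loop(client, messages, tools, dispatch, model="gpt-4o-mini"):
    """Minimal sketch of a tool-calling loop: keep completing
    (non-streaming) until the model stops requesting tools, then
    return the final assistant text."""
    while True:
        reply = client.complete(model=model, messages=messages, tools=tools)
        tool_calls = reply.get("tool_calls")
        if not tool_calls:
            return reply["content"]  # no tools requested: final answer
        messages.append(reply)
        for call in tool_calls:
            fn = dispatch[call["name"]]            # name -> Python callable
            result = fn(**json.loads(call["arguments"]))
            messages.append({"role": "tool",
                             "tool_call_id": call["id"],
                             "content": str(result)})
```

Keeping this phase non-streaming makes the loop simple: the full reply (including any tool calls) is available before deciding whether to dispatch or return.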
- Team/challenge lock-in: Aug 9 (13:15 ET)
- Final submission deadline: Aug 10 (09:00 ET)
- Credits request window (Lovable / ElevenLabs): until Aug 10 (13:15 ET)
Fragmented workflow & context loss between brainstorming (LLM), media generation, and draft social distribution.
| Layer | Technologies |
|---|---|
| Core Chat / Tools | Python, OpenAI SDK (chat.completions function calling), custom tool dispatcher |
| Video Generation | Google Veo 3 preview (google-genai) |
| Backend API | Flask, SQLAlchemy (SQLite), threading locks |
| Persistence | SQLite (buzzbot.db), JSONL session export, filesystem video artifacts |
| Frontend | React 18, Vite, TypeScript, Tailwind CSS, Radix UI components |
| Dev / Ops | uv (venv + deps), requirements freeze, smoke tests |
```
                 ┌────────────────┐
CLI (buzzcli) ──▶│  ChatSession   │◀── Web UI (React)
                 │  (history,     │
                 │   tool loop)   │
                 └───────┬────────┘
                         │
              Function Call Dispatcher
           (dice, Veo3 video, future tools)
                         │
            ┌────────────┴────────────┐
            │                         │
  OpenAI-compatible API         Google Veo 3
            │                         │
      LLM Responses            Long Op Polling
                         │
       Persistence Layer (SQLite + JSONL)
```
- `src/buzzbot/chat.py` – Core chat + tool invocation loop.
- `src/buzzbot/veo3.py` – Veo 3 video generation (polling + artifact save).
- `src/buzzbot/webserver.py` – Flask REST API & session endpoints.
- `src/buzzbot/buzzcli.py` – Interactive terminal client & commands.
- `src/buzzbot/models.py` – SQLAlchemy models (User, ChatSessionDB, MessageDB).
- `src/buzzbot/webui/` – React/Vite frontend.
- `data/sessions/` – JSONL archived sessions.
- `instance/buzzbot.db` – SQLite database (created at runtime).
Create env/.env or export before running:
```
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://api.openai.com/v1           # optional override
OPENAI_MODEL=gpt-4o-mini                            # optional
OPENAI_SYSTEM_PROMPT=You are a helpful assistant.   # optional
GOOGLE_API_KEY=...                                  # required for the Veo3 tool
NO_COLOR=0                                          # set to 1 to disable ANSI
DEBUG=0                                             # set to 1 for verbose tool debug
```
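A minimal sketch of how these variables might be consumed at startup (illustrative only; the real code reads them where needed rather than through a single helper like this):

```python
import os

def load_config():
    """Read BuzzBot settings from the environment, applying the defaults
    documented above. OPENAI_API_KEY (and GOOGLE_API_KEY for the Veo3
    tool) have no usable default and must be set."""
    return {
        "api_key": os.environ["OPENAI_API_KEY"],  # required; KeyError if missing
        "base_url": os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1"),
        "model": os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
        "system_prompt": os.getenv("OPENAI_SYSTEM_PROMPT",
                                   "You are a helpful assistant."),
        "no_color": os.getenv("NO_COLOR", "0") == "1",
        "debug": os.getenv("DEBUG", "0") == "1",
    }
```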
macOS / Linux:

```
uv venv .venv
source .venv/bin/activate
uv pip install -r requirements.txt
cp env/.env.example env/.env      # if the example exists, then edit values
python -m src.buzzbot.webserver   # starts the Flask API (default port 8000)
```

Windows (PowerShell):

```
uv venv .venv
.\.venv\Scripts\Activate.ps1
uv pip install -r requirements.txt
python -m src.buzzbot.webserver
```

CLI client:

```
python src/main.py --help
python src/main.py                                   # single prompt loop
python src/main.py --multiline                       # multiline entry
python src/main.py --session sessions/<file>.jsonl   # resume a saved session
```

In-chat commands: `/exit`, `/save`, `/new`, `/model <name>`.
Install Node deps inside src/buzzbot/webui/:
```
cd src/buzzbot/webui
npm install
npm run dev        # or: npm run build && npm run preview
```

Ensure the backend (Flask) is running; the frontend will call /chat, /sessions, etc.
tests_smoke.py validates basic chat loop and save/load roundtrip:
```
python tests_smoke.py
```

To add a new tool:

- Add a JSON schema entry in `ChatSession._tool_specs()`.
- Map the name → Python callable in `_tool_dispatch`.
- Implement the function (pure, deterministic preferred) returning a string.
- (Optional) Add persistence or artifact storage.
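Following these steps for a hypothetical `word_count` tool might look like the sketch below. The names are illustrative; the real entries live in `ChatSession._tool_specs()` and `_tool_dispatch`:

```python
# 1. JSON schema entry (one element of the list _tool_specs() returns):
WORD_COUNT_SPEC = {
    "type": "function",
    "function": {
        "name": "word_count",
        "description": "Count the words in a piece of text.",
        "parameters": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    },
}

# 2. Pure, deterministic implementation returning a string:
def word_count(text: str) -> str:
    return str(len(text.split()))

# 3. Name -> callable mapping merged into _tool_dispatch:
TOOL_DISPATCH = {"word_count": word_count}
```

Returning a plain string keeps the dispatcher uniform: every tool result can be appended to the conversation as a `tool` message without per-tool serialization logic.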
| Requirement | Location / Plan |
|---|---|
| Short Description | README (this section) |
| Demo Video | (To add link) |
| Tech Video | (To add link) |
| 1-Page Report PDF | Report.md (export to PDF) |
| GitHub Repo | This repository |
| Zipped Code | Generate via git archive or zip root dir |
| Dataset | N/A (no external dataset) |
- No production auth / rate limiting (User model placeholder).
- Veo3 long operations are synchronous polling (future: async task queue + WebSocket/SSE progress).
- Social posting endpoint is a stub; integrate platform APIs + scheduling.
- Add richer evaluation tests and parameter controls (temperature, top-p).
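The synchronous polling noted in the limitations follows a standard long-running-operation pattern. A generic, library-agnostic sketch (the actual code drives google-genai operation objects, whose API is not reproduced here):

```python
import time

def poll_until_done(check, interval=5.0, timeout=600.0, sleep=time.sleep):
    """Call check() until it returns a non-None result (the finished
    operation's artifact), sleeping `interval` seconds between attempts.
    Raises TimeoutError after `timeout` seconds of cumulative waiting."""
    waited = 0.0
    while waited < timeout:
        result = check()
        if result is not None:
            return result
        sleep(interval)
        waited += interval
    raise TimeoutError("operation did not finish in time")
```

Moving this loop into a background task queue with WebSocket/SSE progress events, as planned above, would free the request thread while the video renders.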
- Backend & Tooling: Name
- Frontend & UX: Name
- Infra / DevOps: Name
- Integration & QA: Name
External contributions outside hackathon scope: fork + PR (lightweight code review encouraged).
Hackathon prototype β license to be finalized (assume internal evaluation use only until updated).
Minimal terminal chat client for OpenAI-compatible models using openai-agents (retained for convenience).
- Streamed token output (falls back gracefully if streaming unsupported)
- Commands: `/exit`, `/save`, `/new`, `/model <name>`
- Multiline input mode (`--multiline`)
- Session persistence to JSONL (one message per line)
- Reload a prior session with `--session path/to/file.jsonl`
- System prompt injection with `--system "You are..."`
- ANSI color (disable with `--no-color` or env `NO_COLOR=1`)
```
uv venv .venv
source .venv/bin/activate
uv pip install --upgrade pip
uv pip install -r requirements.txt
uv pip freeze > requirements.txt
cp .env.example .env               # then edit with your keys (SECRET_KEY, API keys, etc.)
python src/main.py --help
```

Run examples:

```
OPENAI_API_KEY=sk-... python src/main.py
OPENAI_BASE_URL=http://localhost:8080/v1 OPENAI_MODEL=gpt-4o-mini python src/main.py --multiline
python src/main.py --session sessions/2025-08-09T12-00-00.jsonl
```

In-chat commands:

- `/exit` or `quit` – quit the client
- `/save` – write `sessions/<timestamp>.jsonl`
- `/new` – clear in-memory history (retains current model & system prompt)
- `/model <name>` – switch model mid-session
Stored under sessions/ as UTC timestamped JSONL. Each line is a message object:
```
{"role": "user", "content": "Hello"}
{"role": "assistant", "content": "Hi there!"}
```

Load an existing session:

```
python src/main.py --session sessions/2025-08-09T12-00-00.jsonl
```

Tokens are printed as they arrive. If streaming errors occur, a warning prints and the client falls back to a single non-streaming response.
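The JSONL persistence described above amounts to a simple roundtrip; a minimal sketch (illustrative, not the actual buzzcli implementation):

```python
import json

def save_session(path, messages):
    """Write one JSON message object per line (the JSONL format above)."""
    with open(path, "w", encoding="utf-8") as f:
        for msg in messages:
            f.write(json.dumps(msg, ensure_ascii=False) + "\n")

def load_session(path):
    """Read a JSONL session back into a list of message dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

One message per line means a crashed session still leaves every completed message readable, and files can be inspected or grepped without a JSON parser.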
A lightweight tests_smoke.py is included to validate:
- Chat loop can append a mock assistant reply
- JSONL save/load roundtrip
Run:

```
python tests_smoke.py
```

Extension points:

- Add new provider support in `chat.py` (abstract client creation / response loop).
- Insert tool/function calling logic inside `ChatSession.complete` after accumulating the full assistant message.
- Add richer CLI flags in `buzzcli.py` (e.g., `--temperature`, `--top-p`).
- Implement a `/veo` command as a placeholder for future tool integration.
- For different model gateways, just set `OPENAI_BASE_URL` (must be OpenAI-compatible).
Dependencies are pinned in `requirements.txt`. Adjust the `openai-agents` version as needed if the streaming API surface changes.
- Missing key: ensure `OPENAI_API_KEY` is set (env or `.env`).
- Connection errors: verify `OPENAI_BASE_URL` and network reachability.
- Model errors: try switching with `/model gpt-4o-mini` or another available model.