PeerPrep is a collaborative coding platform that matches users to solve coding questions together in real-time.
**User Authentication**
- Users register and log in via Firebase Authentication
- Session management through backend user service
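As a rough sketch of the session-management side, the backend user service could map opaque session IDs to Firebase user IDs after token verification. The store below is in-memory and the names (`createSession`, `getUser`, the TTL) are illustrative assumptions, not the actual service API; in the real service the Firebase ID token would first be verified (e.g. with firebase-admin's `verifyIdToken`), which a stub stands in for here.

```typescript
import { randomUUID } from "node:crypto";

interface Session {
  uid: string;       // Firebase user ID (assumed verified upstream)
  expiresAt: number; // epoch ms after which the session is invalid
}

const SESSION_TTL_MS = 60 * 60 * 1000; // 1 hour, illustrative
const sessions = new Map<string, Session>();

// Create a session for a user whose Firebase token was already verified.
function createSession(uid: string): string {
  const sessionId = randomUUID();
  sessions.set(sessionId, { uid, expiresAt: Date.now() + SESSION_TTL_MS });
  return sessionId;
}

// Resolve a session ID back to a user ID, rejecting expired sessions.
function getUser(sessionId: string): string | null {
  const session = sessions.get(sessionId);
  if (!session) return null;
  if (Date.now() > session.expiresAt) {
    sessions.delete(sessionId);
    return null;
  }
  return session.uid;
}
```

A production service would back this with a shared store (e.g. Redis or MongoDB) rather than process memory, so sessions survive restarts and scale across instances.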
**Question Filtering**
- Filter coding questions by difficulty level
- Filter by topics of interest
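The two filters above compose naturally into one predicate. This sketch assumes a simple question shape (`difficulty`, `topics`); the actual schema in the question service may differ.

```typescript
type Difficulty = "Easy" | "Medium" | "Hard";

// Assumed question shape; the real schema may carry more fields.
interface Question {
  title: string;
  difficulty: Difficulty;
  topics: string[];
}

// Keep questions matching the chosen difficulty (if given) and sharing
// at least one of the chosen topics (if given).
function filterQuestions(
  questions: Question[],
  difficulty?: Difficulty,
  topics?: string[],
): Question[] {
  return questions.filter((q) => {
    if (difficulty && q.difficulty !== difficulty) return false;
    if (topics && topics.length > 0 && !q.topics.some((t) => topics.includes(t))) {
      return false;
    }
    return true;
  });
}
```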
**Matchmaking System**
- Users click "Find Match" to enter matchmaking queue
- The server pairs users who are currently searching and share similar preferences (difficulty, topics)
- Matching happens in real time
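One minimal way to sketch the queue: when a user clicks "Find Match", look for a compatible waiting user; pair them if found, otherwise enqueue. The compatibility rule here (same difficulty, at least one shared topic) and all names are assumptions about the algorithm described above, not its actual implementation.

```typescript
interface WaitingUser {
  userId: string;
  difficulty: string;
  topics: string[];
}

const queue: WaitingUser[] = [];

// Try to pair the incoming user with someone already waiting.
// Returns the matched pair, or null if the user was enqueued instead.
function findMatch(user: WaitingUser): [string, string] | null {
  const idx = queue.findIndex(
    (w) =>
      w.difficulty === user.difficulty &&
      w.topics.some((t) => user.topics.includes(t)),
  );
  if (idx >= 0) {
    const [partner] = queue.splice(idx, 1); // remove partner from the queue
    return [partner.userId, user.userId];
  }
  queue.push(user); // no compatible partner yet; wait
  return null;
}
```

A real matching service would also need timeouts for users who wait too long and protection against the same user queuing twice.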
**Collaborative Coding Room**
- Once matched, users enter a shared coding environment
- Similar to LeetCode/online coding interview platforms
- Real-time collaborative code editing
- Shared workspace for solving problems together
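To illustrate the shape of real-time shared editing, here is a deliberately naive edit-operation model. Production collaborative editors typically use operational transformation or CRDTs to handle concurrent edits; this sketch only shows that replaying the same ordered stream of insert/delete operations keeps two clients' documents identical. All types and names are hypothetical.

```typescript
type EditOp =
  | { kind: "insert"; pos: number; text: string }
  | { kind: "delete"; pos: number; length: number };

// Apply a single edit operation to a document string.
function applyOp(doc: string, op: EditOp): string {
  if (op.kind === "insert") {
    return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
  }
  return doc.slice(0, op.pos) + doc.slice(op.pos + op.length);
}

// Replay an ordered op stream; clients that apply the same stream in
// the same order converge to the same document.
function applyAll(doc: string, ops: EditOp[]): string {
  return ops.reduce(applyOp, doc);
}
```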
**Tech Stack**
- Frontend: React + TypeScript + Vite
- Backend Services: Microservices architecture
- User Service: Authentication and user management
- (More services to be added: Matching, Question, Collaboration)
- Database: MongoDB Atlas
- Authentication: Firebase
- You are required to develop individual microservices in separate folders within this repository.
- The teaching team should be given access to the repository, as we may need to review its history in case of disputes or disagreements.
- The question service uses data from TACO; the license details and links are shown below:
- License: apache-2.0
- Dataset: https://huggingface.co/datasets/BAAI/TACO
- Copyright: Beijing Academy of Artificial Intelligence (BAAI)
Tools Used: Claude Code (Claude Sonnet 4.5)
Team Member Using AI: All
Prohibited Phases Avoided:
- Requirements elicitation and prioritization
- Architecture and design decisions
- Sprint planning and backlog consolidation
Allowed Uses:
- Debugging assistance: Identified and fixed deployment script issues (Firebase JSON handling, YAML syntax, SSH variable expansion, cookie settings)
- Implementation code: Generated migration logic for user profile completion feature
- Refactoring: Improved deployment script structure and secret handling
Verification: All AI-generated outputs were reviewed, understood, edited, and tested by the author (LuBolin).
For detailed prompts, responses, and usage context, see /ai/usage-log.md.