This project is a Knowledge Management Platform built using the MERN stack (MongoDB, Express.js, React, Node.js) with RAG (Retrieval Augmented Generation) capabilities for conversational AI.
Before you begin, ensure you have the following installed:
- Node.js
- npm
- MongoDB
- Git
- Navigate to the `backend` directory:
  ```bash
  cd backend
  ```
- Install dependencies:
  ```bash
  npm install
  ```
- Create a `.env` file in the `backend` directory and add your environment variables. A `.env.example` file might be provided for reference.
  ```env
  MONGO_URI=your_mongodb_connection_string
  JWT_SECRET=your_jwt_secret
  GOOGLE_API_KEY=your_google_gemini_api_key
  ```
- Start the backend server:
  ```bash
  npm run dev
  ```

The backend server will typically run on `http://localhost:3000`.
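The backend presumably loads these values with a library such as `dotenv`. A minimal sketch of that pattern is shown below; the `config/env.js` path and the exported names are illustrative, not the project's actual code:

```js
// backend/config/env.js (hypothetical helper) -- fail fast if a required variable is missing
require('dotenv').config();

const required = ['MONGO_URI', 'JWT_SECRET', 'GOOGLE_API_KEY'];
for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}

module.exports = {
  mongoUri: process.env.MONGO_URI,
  jwtSecret: process.env.JWT_SECRET,
  googleApiKey: process.env.GOOGLE_API_KEY,
  port: process.env.PORT || 3000,
};
```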
- Navigate to the `frontend` directory:
  ```bash
  cd frontend
  ```
- Install dependencies:
  ```bash
  npm install
  ```
- Start the frontend development server:
  ```bash
  npm run dev
  ```

The frontend application will typically run on `http://localhost:5173`.
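Once running, the React app talks to the backend over HTTP. The following is a rough sketch of such a call from the frontend, assuming the endpoints listed in the API section below, a backend on `http://localhost:3000`, and a JWT saved after login; the hook name and storage choice are illustrative, not the project's actual code:

```js
import { useEffect, useState } from 'react';

// Hypothetical hook that lists documents via the backend API.
export function useDocuments() {
  const [documents, setDocuments] = useState([]);

  useEffect(() => {
    const token = localStorage.getItem('token'); // assumed to be set after /api/v1/auth/login
    fetch('http://localhost:3000/api/v1/documents', {
      headers: { Authorization: `Bearer ${token}` },
    })
      .then((res) => res.json())
      .then(setDocuments)
      .catch((err) => console.error('Failed to load documents:', err));
  }, []);

  return documents;
}
```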
Once both the backend and frontend servers are running:
- Open your web browser and navigate to the frontend URL (e.g., `http://localhost:5173`).
- Register a new user or log in with existing credentials.
- Upload documents through the Document Management System.
- Interact with the conversational AI by asking questions related to the uploaded documents.
- Explore the analytics dashboard for insights into document usage and user activity.
The platform follows a MERN stack architecture with a clear separation between the frontend and backend. The backend handles API requests, database interactions, and integrates with LLMs for RAG capabilities. The frontend provides a responsive user interface for document management, conversational AI, and analytics.
- Frontend: Built with React, responsible for user interaction and displaying data.
- Backend: Built with Node.js (Express.js), handles API routing, business logic, authentication, and integration with MongoDB and LLMs.
- Database: MongoDB is used for storing user data, document metadata, chat history, and other application-specific information.
- Vector Store: FAISS is used as the vector database, storing document embeddings for efficient retrieval during RAG.
- LLM Integration: Utilizes Google Gemini for conversational AI and document understanding.
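As a sketch of how these pieces might fit together on the backend (route paths follow the API section below; file names such as `routes/auth.js` are assumptions about the layout, not verified paths):

```js
// backend/app.js (illustrative) -- wires Express, MongoDB and the API routes together
const express = require('express');
const mongoose = require('mongoose');
require('dotenv').config();

const app = express();
app.use(express.json());

// Routers assumed to live in backend/routes (check backend/routes for the real file names)
app.use('/api/v1/auth', require('./routes/auth'));
app.use('/api/v1/documents', require('./routes/documents'));
app.use('/api/v1/chat', require('./routes/chat'));

async function start() {
  await mongoose.connect(process.env.MONGO_URI); // user data, document metadata, chat history
  app.listen(3000, () => console.log('Backend listening on http://localhost:3000'));
}

start().catch((err) => {
  console.error('Failed to start backend:', err);
  process.exit(1);
});
```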
Large Language Models (LLMs) are primarily used in the Conversational AI Interface for Retrieval Augmented Generation (RAG). When a user asks a question:
- The query is embedded and used to retrieve relevant document chunks from the vector store.
- These retrieved chunks, along with the user's query, are sent to the LLM (Google Gemini).
- The LLM generates a coherent and contextually relevant response based on the provided documents.
This approach ensures that the AI responses are grounded in the organization's knowledge base, reducing hallucinations and providing accurate information.
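A condensed sketch of that flow is shown below, assuming the `@google/generative-ai` SDK; the vector-store helper `searchFaissIndex` and the model names are illustrative and should be checked against the actual code:

```js
const { GoogleGenerativeAI } = require('@google/generative-ai');

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY);

// Hypothetical helper that returns the top-k most similar chunks from the FAISS index.
const { searchFaissIndex } = require('./vectorStore');

async function answerQuestion(question) {
  // 1. Embed the user's query.
  const embedder = genAI.getGenerativeModel({ model: 'text-embedding-004' });
  const { embedding } = await embedder.embedContent(question);

  // 2. Retrieve the most relevant document chunks from the vector store.
  const chunks = await searchFaissIndex(embedding.values, 5);

  // 3. Send the retrieved chunks plus the query to Gemini for a grounded answer.
  const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });
  const prompt =
    `Answer using only the context below.\n\nContext:\n${chunks.join('\n---\n')}` +
    `\n\nQuestion: ${question}`;
  const result = await model.generateContent(prompt);
  return result.response.text();
}
```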
Detailed API documentation can be found by exploring the `backend/routes` and `backend/controllers` directories. Key endpoints include:
- `/api/v1/auth/register`: User registration.
- `/api/v1/auth/login`: User login.
- `/api/v1/auth/profile`: Get user profile.
- `/api/v1/documents/upload`: Upload documents.
- `/api/v1/documents`: Get list of documents.
- `/api/v1/chat/message`: Send a message to the conversational AI.
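As an illustration of how a client might call the chat endpoint (the request body shape and any response fields are assumptions; check the corresponding controller for the real contract):

```js
// Send a question to the conversational AI. Assumes a JWT obtained from /api/v1/auth/login.
async function sendChatMessage(token, message) {
  const res = await fetch('http://localhost:3000/api/v1/chat/message', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) throw new Error(`Chat request failed: ${res.status}`);
  return res.json();
}
```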
For more details, refer to the source code in the backend directory.