A simple chatbot API for your website using RAG and LangChain.
- LLM-Powered Conversations: uses advanced models such as Llama-3.2 for generation and BAAI's embedding model for intelligent, contextual responses.
- Vector-Based Contextual Search: enhances relevance by using FAISS to search for similar contexts in preprocessed documents.
- Customizable Personality: embodies the personality of Gargi, a wise and knowledgeable assistant with a touch of humor.
- Modular Design: easy-to-understand codebase with a clear separation of concerns.
- Node.js (v16 or later)
- TypeScript
- Hugging Face API key (for accessing models)
- FAISS for vector-based similarity search
- Clone the repository.
- Add your additional information in `data/doc.txt`, arranged in distinct sections so that chunking does not affect RAG performance much.
- Create a `.env` file with `HF_API_TOKEN` inside.
- Run `npm install`.
- Run `npm run build`.
- Run `npm start` to start the Node.js server.
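Once the server is running, you can query it from client code. A minimal sketch in TypeScript, assuming the server listens on port 3000 and that the request and response bodies use `question`/`answer` fields (the `/assistant/ask` route is described below; the port and field names are assumptions, so check the route handler for the actual shapes):

```typescript
// Build the request separately from sending it, so the shape is easy to inspect.
// Port 3000 and the "question"/"answer" field names are assumptions.
function buildAskRequest(question: string): {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
} {
  return {
    url: "http://localhost:3000/assistant/ask",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ question }),
    },
  };
}

// Send the question using Node 18+'s built-in fetch.
async function ask(question: string): Promise<string> {
  const { url, init } = buildAskRequest(question);
  const res = await fetch(url, init);
  const data = (await res.json()) as { answer: string };
  return data.answer; // assumed response field
}
```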
- Change the Assistant's Personality: modify the `systemPrompt` in `assistant/config.ts` to customize Gargi's character.
- Switch Models: update the `model` field in `embeddingConfig` and `generationConfig` in `assistant/config.ts` to use different Hugging Face models.
- Add More Data: add additional documents to the `data/` folder and update `doc.txt` as needed.
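For orientation, a sketch of what `assistant/config.ts` might look like. The exported names (`systemPrompt`, `embeddingConfig`, `generationConfig`) come from the steps above; the field layout and the exact Hugging Face model ids are assumptions, so adapt them to the actual file:

```typescript
// assistant/config.ts (sketch -- field layout and model ids are assumptions)

// Gargi's character, described in this README; edit freely.
export const systemPrompt = `You are Gargi, a wise and knowledgeable assistant
with a touch of humor. Answer using only the provided context.`;

export const embeddingConfig = {
  // A BAAI embedding model (assumed id) -- swap for any HF embedding model.
  model: "BAAI/bge-small-en-v1.5",
};

export const generationConfig = {
  // A Llama-3.2 text-generation model (assumed id).
  model: "meta-llama/Llama-3.2-3B-Instruct",
  maxNewTokens: 512,
  temperature: 0.7,
};
```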
- Document Processing: the application reads `doc.txt` and preprocesses it into smaller chunks for efficient similarity search using FAISS.
- User Query Handling: when a query is sent to `/assistant/ask`, the server:
  - searches the vector store for relevant context,
  - combines the query and context into a prompt,
  - sends the prompt to Hugging Face's language model for response generation.
- Error Handling: if any issue occurs, the server returns a witty and friendly error message.
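The pipeline above can be sketched in plain TypeScript. Here, explicit cosine similarity stands in for FAISS, embeddings are taken as precomputed vectors, and all function names are illustrative rather than the repo's actual API:

```typescript
// 1. Document processing: split doc.txt into section-sized chunks.
//    Sections are assumed to be separated by blank lines, per the setup notes.
function chunkDocument(text: string): string[] {
  return text
    .split(/\n\s*\n/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

// 2. Retrieval: rank chunks by cosine similarity to the query embedding.
//    (FAISS does this at scale; this is the same idea done naively.)
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topContext(
  queryVec: number[],
  chunks: { text: string; vec: number[] }[],
  k = 2
): string[] {
  return [...chunks]
    .sort((x, y) => cosine(queryVec, y.vec) - cosine(queryVec, x.vec))
    .slice(0, k)
    .map((c) => c.text);
}

// 3. Prompt assembly: combine the retrieved context and the user's query
//    before sending the result to the language model.
function buildPrompt(query: string, context: string[]): string {
  return `Context:\n${context.join("\n---\n")}\n\nQuestion: ${query}\nAnswer:`;
}
```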