RAG-LLM Question Answering System
A Retrieval Augmented Generation (RAG) system for question answering over PDF documents. Users upload a PDF, which is split into semantic chunks and embedded using sentence-transformers. A FAISS vector store enables fast similarity search, and the most relevant chunks are retrieved for each query. The DeepSeek LLM then generates an answer from the user's question and the retrieved context, producing accurate, context-aware responses. Built with LangChain, Hugging Face Transformers, ChromaDB, and FAISS.
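
Below is a minimal sketch of how such a pipeline can be wired together with LangChain. The checkpoints (`sentence-transformers/all-MiniLM-L6-v2`, `deepseek-ai/deepseek-llm-7b-chat`), the chunk sizes, and the `uploaded.pdf` path are illustrative assumptions, not the project's exact configuration.

```python
# Minimal RAG pipeline sketch (assumed model names, chunk sizes, and file path).
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.llms import HuggingFacePipeline
from langchain.chains import RetrievalQA
from transformers import pipeline

# 1. Load the uploaded PDF and split it into overlapping semantic chunks.
docs = PyPDFLoader("uploaded.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=150
).split_documents(docs)

# 2. Embed each chunk with a sentence-transformers model and index it in FAISS.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"  # assumed embedding model
)
vector_store = FAISS.from_documents(chunks, embeddings)

# 3. Wrap a DeepSeek text-generation model as the answer generator.
generator = pipeline(
    "text-generation",
    model="deepseek-ai/deepseek-llm-7b-chat",  # assumed checkpoint
    max_new_tokens=512,
)
llm = HuggingFacePipeline(pipeline=generator)

# 4. Retrieve the top-k relevant chunks and let the LLM answer from them.
qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vector_store.as_retriever(search_kwargs={"k": 4}),
)
print(qa.invoke({"query": "What is the main conclusion of the document?"})["result"])
```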
