A Retrieval-Augmented Generation (RAG) agent built with Flask, LangChain, Hugging Face, and Redis.
It retrieves relevant context from your dataset and generates answers using Hugging Face models.
## Features

- Flask backend with a `/chat` endpoint (see the sketch below)
- RAG pipeline (vector search + LLM)
- Redis as the vector database
- Configurable data source (`data/product.txt`)
- Session-based chat
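To make the moving parts concrete, here is a minimal sketch of what the `/chat` flow could look like, assuming LangChain's community Redis vector store, a local sentence-transformers embedding model, the Hugging Face Inference API via `HuggingFaceEndpoint`, and a naive in-memory history. All names here (index name, model, env vars, history store) are illustrative assumptions, not the repository's actual code:

```python
# Illustrative /chat flow: retrieve context from Redis, ask the LLM, track history.
import os

from flask import Flask, jsonify, request
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Redis
from langchain_huggingface import HuggingFaceEndpoint

app = Flask(__name__)
histories: dict[str, list[tuple[str, str]]] = {}  # session_id -> (question, answer) pairs

# Local sentence-transformers model for query embeddings (an assumption;
# the repo may embed differently).
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = Redis(
    redis_url=os.environ["REDIS_URL"],
    index_name="products",  # must match the index built by vector_builder.py
    embedding=embeddings,
)
llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",  # hypothetical model choice
    huggingfacehub_api_token=os.environ["HUGGINGFACEHUB_API_TOKEN"],
)

@app.post("/chat")
def chat():
    body = request.get_json()
    session_id, question = body["session_id"], body["question"]
    # Vector search: fetch the chunks most similar to the question.
    docs = store.similarity_search(question, k=3)
    context = "\n\n".join(d.page_content for d in docs)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    answer = llm.invoke(prompt)
    histories.setdefault(session_id, []).append((question, answer))
    return jsonify({"answer": answer})

if __name__ == "__main__":
    app.run(port=5000)
```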
## Installation

Clone the repository:

```bash
git clone https://github.com/Unfathomable-08/Rag-Agent.git
cd Rag-Agent
```

Create a virtual environment:

```bash
python -m venv venv
```

Activate it on Windows:

```bash
venv\Scripts\activate
```

or on Linux/Mac:

```bash
source venv/bin/activate
```

Install dependencies:

```bash
pip install -r requirements.txt
```

## Configuration

Copy the example file:

```bash
cp .env.example .env
```

Open `.env` and fill in the values listed below; an example follows.
- Hugging Face API Token → Get it here
- Redis Cloud URL → Get it here
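A filled-in `.env` could look like the following; the actual variable names come from `.env.example`, so treat these as placeholders:

```text
HUGGINGFACEHUB_API_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
REDIS_URL=redis://default:<your-password>@<your-host>:<port>
```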
Example Redis URL format:

```text
redis://default:<your-password>@<your-host>:<port>
```

## Data Source

You can keep or modify the data file, `data/product.txt`.
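The README does not mandate a format for this file; as an assumption, plain prose that can be chunked and embedded works well, for example:

```text
Aurora Lamp: a smart bedside lamp with adjustable color temperature and app control.
Terra Bottle: an insulated steel bottle that keeps drinks cold for 24 hours.
```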
## Run

Build the vector index, then start the app:

```bash
python vector_builder.py
python main.py
```

The Flask server will start locally.
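For a sense of what the index-building step involves, here is a minimal sketch assuming LangChain's community loaders, a local sentence-transformers embedding model, and the Redis vector store; the repo's `vector_builder.py` may differ in details such as chunk size, index name, and embedding model:

```python
# Illustrative index builder: load, chunk, embed, store. Names are assumptions.
import os

from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Redis
from langchain_text_splitters import CharacterTextSplitter

# Load the raw product data and split it into overlapping chunks.
docs = TextLoader("data/product.txt").load()
chunks = CharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Embed each chunk and write the vectors to Redis under one index name.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
Redis.from_documents(
    chunks,
    embeddings,
    redis_url=os.environ["REDIS_URL"],
    index_name="products",  # the chat app must query this same index
)
print("Vector index built.")
```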
## API Usage

**Endpoint**

```text
POST /chat
```

**Request body**

```json
{
  "session_id": "user123",
  "question": "What products do you have?"
}
```

**Example using curl**
```bash
curl -X POST http://127.0.0.1:5000/chat \
  -H "Content-Type: application/json" \
  -d '{"session_id": "user123", "question": "What products do you have?"}'
```
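The same request from Python, using the `requests` library:

```python
import requests

# Same request as the curl example; session_id keeps per-user chat history.
resp = requests.post(
    "http://127.0.0.1:5000/chat",
    json={"session_id": "user123", "question": "What products do you have?"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["answer"])
```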
**Response**

```json
{
  "answer": "We have a range of products including..."
}
```

## Tech Stack

- Python 3
- Flask
- LangChain
- Hugging Face Inference API
- Redis (as vector DB)
- FAISS
""" Replace data/product.txt with your own dataset for custom answers. Ensure Redis Cloud is running and accessible. Hugging Face token must have inference API permissions. """