An intelligent customer support system leveraging LangGraph and LangChain for Retrieval-Augmented Generation (RAG) with agent-like behavior to deliver accurate, context-aware responses.
This project implements an intelligent RAG-based customer support system that combines LangGraph for workflow orchestration with LangChain for LLM interactions. It delivers context-aware responses to customer queries through a multi-stage validation and retrieval pipeline.
Built with FastAPI, FAISS, LangGraph, and Ollama, this system efficiently processes customer support queries while maintaining high accuracy and safety standards through comprehensive validation checks.
✅ Intelligent Workflow Orchestration – LangGraph-powered pipeline for sophisticated query processing
✅ Advanced Document Retrieval – FAISS vector store for efficient semantic search
✅ Multi-Stage Validation – Comprehensive quality checks at each step
✅ Local LLM Support – Integration with Ollama for on-premise deployment
✅ Content Safety – LLM Guard implementation for safe responses
✅ Efficient Data Processing – Polars-based data preprocessing
✅ API-First Design – FastAPI backend for scalable deployment
| Category | Tools Used |
|---|---|
| Programming | Python 3.9+ |
| LLM Integration | LangChain, Ollama, OpenAI API (optional) |
| Vector Search | FAISS |
| Workflow Orchestration | LangGraph |
| Backend Framework | FastAPI |
| Data Processing | Polars |
| Safety & Validation | LLM Guard |
| Deployment | Docker, Docker Compose |
The system follows an agentic workflow with six main components:
1. **Question Validation** (see the validation sketch after this list)
   - Input safety checks
   - Token limit verification
   - Toxicity detection
2. **Topic Classification**
   - Customer support relevance verification
   - Query categorization
3. **Document Retrieval**
   - FAISS-powered semantic search
   - Context gathering
4. **Document Grading**
   - Relevance scoring
   - Context validation
5. **Answer Generation**
   - Context-aware response generation
   - Local or cloud LLM integration
6. **Answer Validation**
   - Output quality assessment
   - Safety verification
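The question and answer validation stages map naturally onto LLM Guard's scanners. Below is a minimal sketch of how the input checks could look, assuming llm-guard's `TokenLimit` and `Toxicity` input scanners; the node name and state keys are illustrative rather than the project's actual ones (see `question_check_node.py` for the real logic):

```python
from llm_guard import scan_prompt
from llm_guard.input_scanners import TokenLimit, Toxicity

# Scanners covering the checks above: token budget and toxicity.
scanners = [TokenLimit(limit=4096), Toxicity(threshold=0.5)]

def question_check_node(state: dict) -> dict:
    """Validate the incoming question before it enters the pipeline."""
    sanitized, results_valid, _scores = scan_prompt(scanners, state["question"])
    # The question proceeds only if every scanner reports it as valid.
    state["question_valid"] = all(results_valid.values())
    state["question"] = sanitized
    return state
```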
```
├── data/
│   ├── indexes/                  # FAISS index storage
│   └── customer_care_emails.csv
├── src/
│   ├── api/                      # FastAPI application
│   ├── graph/                    # LangGraph workflow components
│   │   ├── answer_check_node.py
│   │   ├── answer_node.py
│   │   ├── docs_grader_node.py
│   │   ├── graph.py
│   │   ├── question_check_node.py
│   │   ├── retriever_node.py
│   │   ├── state.py
│   │   ├── topic_check_node.py
│   │   └── utils.py
│   ├── static/                   # Frontend assets
│   └── indexing/                 # Data preprocessing and indexing
├── tests/                        # Test cases
├── Dockerfile                    # Docker image definition
└── docker-compose.yml            # Docker Compose configuration
```
Before you begin, ensure you have:
- Python 3.9 or higher
- Docker (optional)
- Ollama installed (for local LLM support)
- OpenAI API key (optional, for cloud LLM)
```bash
git clone https://github.com/amine-akrout/customer-support-agentic-rag.git
cd customer-support-agentic-rag
```
Create and activate a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: .\venv\Scripts\activate
```

Install dependencies:

```bash
pip install -r requirements.txt
```
Create a `.env` file with your settings:

```env
OPENAI_API_KEY=your_api_key_here        # Optional
LANGCHAIN_API_KEY=your_api_key_here     # Optional
LANGCHAIN_TRACING_V2=true               # Optional
LANGCHAIN_PROJECT=your_project_id_here  # Optional
```
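At startup the application can pick these up from the environment. A minimal sketch using python-dotenv, assuming settings are read via `os.environ` (the project's actual config handling may differ):

```python
import os
from dotenv import load_dotenv

# Load variables from .env into the process environment at startup.
load_dotenv()

# OPENAI_API_KEY is optional: fall back to the local Ollama model when absent.
use_openai = bool(os.getenv("OPENAI_API_KEY"))
```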
- Preprocess and index the data (a sketch of this step follows these instructions):

  ```bash
  python -m src.indexing.preprocess
  ```

- Start the API server:

  ```bash
  uvicorn src.main:app --reload
  ```

- Access the API at http://localhost:8000
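Under the hood, the preprocessing step reads the CSV with Polars, embeds the text, and persists a FAISS index under `data/indexes/`. A rough sketch of that flow, assuming an Ollama embedding model and a hypothetical `body` column (see `src/indexing/` for the actual implementation):

```python
import polars as pl
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS

# Load the raw support emails with Polars.
df = pl.read_csv("data/customer_care_emails.csv")

# "body" is a hypothetical column name; substitute the actual text column.
texts = df["body"].to_list()

# Embed locally via Ollama and persist the index for the retriever node.
embeddings = OllamaEmbeddings(model="nomic-embed-text")
index = FAISS.from_texts(texts, embeddings)
index.save_local("data/indexes")
```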
- Build and start the containers:

  ```bash
  docker-compose up --build
  ```

- Access the API at http://localhost:8000
| Method | Endpoint | Description |
|---|---|---|
| POST | `/answer` | Submit a question and get a response |
| GET | `/health` | Check API health status |
Example request:

```json
{
  "question": "How do I return a damaged product?"
}
```
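With the server running, you can exercise the endpoint from Python, for example (the response is printed verbatim; its schema is defined by the API):

```python
import requests

# POST a question to the locally running API (adjust host/port as needed).
response = requests.post(
    "http://localhost:8000/answer",
    json={"question": "How do I return a damaged product?"},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```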
The system follows this process for each query (a LangGraph wiring sketch follows the list):
1. **Input Processing**
   - Question validation
   - Safety checks
   - Topic classification
2. **Context Retrieval**
   - Document search
   - Relevance scoring
   - Context selection
3. **Response Generation**
   - Answer formulation
   - Quality validation
   - Safety verification
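In LangGraph terms, this pipeline is a `StateGraph` whose nodes mirror the files under `src/graph/`. A simplified wiring sketch with placeholder node bodies (state fields, node names, and routing are illustrative; see `graph.py` and `state.py` for the real definitions):

```python
from typing import TypedDict
from langgraph.graph import END, StateGraph

class GraphState(TypedDict, total=False):
    question: str
    documents: list[str]
    answer: str
    question_valid: bool

# Placeholder node bodies; the real logic lives in src/graph/*.py.
def check_question(state: GraphState) -> dict:
    return {"question_valid": True}      # safety, token-limit, toxicity checks

def retrieve(state: GraphState) -> dict:
    return {"documents": []}             # FAISS semantic search

def grade_docs(state: GraphState) -> dict:
    return {}                            # relevance scoring

def generate(state: GraphState) -> dict:
    return {"answer": "..."}             # context-aware LLM answer

def check_answer(state: GraphState) -> dict:
    return {}                            # output quality and safety checks

def route(state: GraphState) -> str:
    # Invalid questions skip retrieval and end the run immediately.
    return "retrieve" if state.get("question_valid") else "reject"

workflow = StateGraph(GraphState)
workflow.add_node("check_question", check_question)
workflow.add_node("retrieve", retrieve)
workflow.add_node("grade_docs", grade_docs)
workflow.add_node("generate", generate)
workflow.add_node("check_answer", check_answer)

workflow.set_entry_point("check_question")
workflow.add_conditional_edges(
    "check_question", route, {"retrieve": "retrieve", "reject": END}
)
workflow.add_edge("retrieve", "grade_docs")
workflow.add_edge("grade_docs", "generate")
workflow.add_edge("generate", "check_answer")
workflow.add_edge("check_answer", END)

graph = workflow.compile()
print(graph.invoke({"question": "How do I return a damaged product?"}))
```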
We welcome contributions! Here's how you can help:
- Fork the repository
- Create a feature branch (`git checkout -b feature/improvement`)
- Make your changes
- Commit your changes (`git commit -am 'Add new feature'`)
- Push to the branch (`git push origin feature/improvement`)
- Create a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- LangChain - LLM framework
- LangGraph - Workflow orchestration
- FastAPI - API framework
- FAISS - Vector similarity search
- Ollama - Local LLM support
If you find this project useful, please consider giving it a star! 🌟
For questions or feedback, please open an issue in the repository.