This project showcases an LLM-based application built with LangChain that integrates multiple language model and embedding providers, including OpenAI, Anthropic Claude, Google Gemini, and Hugging Face. It supports dynamic prompt handling, multi-model querying, and flexible output generation driven by user input or business logic. Embedding models power semantic search and retrieval-based tasks.
- 🔗 Multi-LLM integration: OpenAI (GPT-4), Claude, Google Gemini, and Hugging Face (local and API-based)
- 🧠 Chat-based workflows using `ChatOpenAI`, `ChatGoogleGenerativeAI`, `ChatAnthropic`, and `ChatHuggingFace` (see the multi-model sketch after this list)
- 🧩 Modular architecture using LangChain's prompt templates, chains, and tools
- 📡 Real-time data fetching, user input processing, and SQL query generation via LLMs
- 💬 Natural language to SQL conversion using integrated language models (sketched below)
- 🧠 Embeddings integration (sketched below):
  - OpenAI embeddings (`OpenAIEmbeddings`)
  - Hugging Face embeddings (`HuggingFaceEmbeddings`) used for semantic document similarity
  - Supports both local and hosted models
- 🔐 Secure API key handling via `.env` for all providers (see the dotenv sketch below)
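
As a rough illustration of the multi-model chat setup, the sketch below assumes the `langchain-openai`, `langchain-anthropic`, and `langchain-google-genai` provider packages are installed; the model names are placeholders, not the project's exact configuration.

```python
# Minimal multi-LLM sketch: send one prompt to several chat models.
# Assumes provider packages are installed and the matching API keys are
# already available in the environment.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI

models = {
    "openai": ChatOpenAI(model="gpt-4"),                          # model names are illustrative
    "claude": ChatAnthropic(model="claude-3-5-sonnet-20240620"),
    "gemini": ChatGoogleGenerativeAI(model="gemini-1.5-pro"),
}

question = "Summarize the benefits of retrieval-augmented generation in two sentences."
for name, llm in models.items():
    reply = llm.invoke(question)   # each provider returns an AIMessage
    print(f"[{name}] {reply.content}")
```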
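The natural-language-to-SQL flow can be approximated with LangChain's prompt templates and chain composition; the schema and prompt wording below are hypothetical examples, not the project's actual prompts.

```python
# Sketch of natural language -> SQL using a prompt template and a chained model.
# Any of the chat models above could be swapped in for ChatOpenAI.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You translate user questions into SQL for this schema:\n{schema}\n"
               "Return only the SQL statement."),
    ("human", "{question}"),
])

chain = prompt | ChatOpenAI(model="gpt-4", temperature=0) | StrOutputParser()

sql = chain.invoke({
    "schema": "orders(id, customer_id, total, created_at)",   # example schema
    "question": "Total revenue per customer in 2024?",
})
print(sql)
```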
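For the embeddings feature, a minimal document-similarity check might look like the following, assuming `langchain-huggingface` for the local model and `langchain-openai` for the hosted alternative; the model names are common defaults, not project requirements.

```python
# Sketch of embedding-based semantic similarity with either provider.
import numpy as np
from langchain_openai import OpenAIEmbeddings
from langchain_huggingface import HuggingFaceEmbeddings

embedder = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
# embedder = OpenAIEmbeddings(model="text-embedding-3-small")  # hosted alternative

docs = ["LangChain chains prompts and models.", "Paris is the capital of France."]
query = "How do I compose prompts with LLMs?"

doc_vecs = np.array(embedder.embed_documents(docs))
q_vec = np.array(embedder.embed_query(query))

# Cosine similarity: higher means semantically closer to the query.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
for doc, score in zip(docs, scores):
    print(f"{score:.3f}  {doc}")
```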
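Key loading can follow the usual `python-dotenv` pattern; the variable names below are each provider's conventional ones and are assumptions about this project's `.env` layout.

```python
# Sketch of .env-based key handling with python-dotenv.
# Adjust variable names to match the actual .env file.
import os
from dotenv import load_dotenv

load_dotenv()  # reads OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY,
               # HUGGINGFACEHUB_API_TOKEN, etc. from a local .env file

assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY missing from .env"
```

The `.env` file itself stays out of version control so keys are never committed.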
Tech stack:

- Python
- LangChain
- OpenAI API (GPT-4)
- Anthropic Claude API
- Google Gemini API
- Hugging Face (Inference API & local models)
- OpenAI & Hugging Face embeddings