This project implements a RAG (Retrieval-Augmented Generation) pipeline using n8n, integrated with OpenAI, Supabase, and PostgreSQL. It enables contextual AI chat with persistent memory and document search capabilities.
- 🧠 RAG Agent powered by OpenAI's GPT model
- 💾 Postgres-backed chat memory for context retention
- 🔍 Supabase vector store for embedding search
- 📂 Google Drive integration to load documents
- 🧩 LangChain nodes for document parsing and splitting
- 🚀 Automated embedding + indexing pipeline
- OpenAI Chat Model: Handles user interaction via GPT-4o-mini
- Postgres Memory: Stores past messages for contextual understanding
- Supabase Vector Store: Enables semantic search with embedded documents
- Document Loader & Splitter: Parses binary data, splits text for embedding
- Google Drive Node: Automatically pulls documents from Drive
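Conceptually, the Supabase vector store ranks stored document chunks by embedding similarity to the query. A minimal in-memory sketch of that retrieval step (pure Python, with toy 3-dimensional vectors standing in for real OpenAI embeddings):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def semantic_search(query_vec, store, top_k=2):
    # `store` maps chunk text -> embedding; return the top_k most similar chunks.
    ranked = sorted(store.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

# Toy "embeddings"; in the real workflow these come from OpenAI's embedding model.
store = {
    "Invoices are due in 30 days.": [0.9, 0.1, 0.0],
    "The office closes at 6 pm.":   [0.0, 0.8, 0.2],
    "Payment terms are net 30.":    [0.8, 0.2, 0.1],
}
print(semantic_search([1.0, 0.0, 0.0], store))
# → ['Invoices are due in 30 days.', 'Payment terms are net 30.']
```

In production, Supabase's pgvector extension performs this ranking inside Postgres rather than in application code.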
- Trigger: Chat message or manual trigger
- RAG Agent uses OpenAI + Postgres Memory + Supabase Vector Search
- Binary files from Drive are loaded, split, and embedded
- Embeddings are stored in Supabase for retrieval
- Answers are grounded in the retrieved context for factual relevance
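Outside n8n, the load → split → embed → store flow above might be sketched as follows. The chunk size, overlap, and `fake_embed` placeholder are illustrative, not the LangChain splitter node's actual parameters:

```python
def split_text(text, chunk_size=200, overlap=40):
    # Split text into overlapping chunks, mimicking a character text splitter.
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def fake_embed(chunk):
    # Placeholder embedding; the real workflow calls OpenAI's embedding endpoint.
    return [len(chunk), chunk.count(" ")]

def index_document(text, store):
    # Embed each chunk and append it to the (in-memory) vector store.
    for chunk in split_text(text):
        store.append({"content": chunk, "embedding": fake_embed(chunk)})
    return store

store = index_document("Some document text pulled from Google Drive..." * 20, [])
print(len(store), "chunks indexed")
```

The overlap between consecutive chunks keeps sentences that straddle a boundary retrievable from either side.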
- Download the workflow and import it into n8n.
- Connect credentials for:
- Google Drive
- OpenAI
- Supabase
- Postgres
- Upload documents to the specified Drive folder.
- Trigger the workflow using chat or manual test.
- Enjoy your intelligent assistant with memory and vector search!
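The Postgres chat memory works by persisting each turn and replaying the most recent messages into the next prompt. A minimal sketch of that pattern, using Python's built-in sqlite3 as a stand-in for Postgres (the table and column names are illustrative, not the n8n node's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the Postgres connection

conn.execute("""
    CREATE TABLE chat_memory (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        session_id TEXT,
        role TEXT,
        content TEXT
    )
""")

def save_message(session_id, role, content):
    # Persist one chat turn.
    conn.execute(
        "INSERT INTO chat_memory (session_id, role, content) VALUES (?, ?, ?)",
        (session_id, role, content))

def load_history(session_id, limit=10):
    # Fetch the most recent turns, oldest first, to prepend to the next prompt.
    rows = conn.execute(
        "SELECT role, content FROM chat_memory"
        " WHERE session_id = ? ORDER BY id DESC LIMIT ?",
        (session_id, limit)).fetchall()
    return list(reversed(rows))

save_message("s1", "user", "What are the payment terms?")
save_message("s1", "assistant", "Net 30, per the uploaded contract.")
print(load_history("s1"))
```

Scoping history by `session_id` is what lets one Postgres table serve many concurrent chats without mixing their contexts.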