A simple RAG pipeline that answers JavaScript questions over your own .txt
documents, using LangChain’s in-memory vector store and a locally hosted Ollama LLM.
- Loads your `docs/javascript.txt` file
- Splits it into “documents” by sentence (splitting on `.`)
- Embeds each chunk with `nomic-embed-text`
- Indexes embeddings in memory (no external vector DB)
- Retrieves the top 2 most-similar chunks for any question
- Prompts your Ollama model with those chunks + strict instructions
- Returns a concise, accurate answer (max 3 sentences) or “Insufficient information…”
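
The steps above map onto only a handful of LangChain calls. Below is a minimal sketch of the load / split / embed / index half, assuming the `langchain` and `@langchain/ollama` packages; the `retriever.js` file name and `buildStore` helper are illustrative rather than names from this repo, and import paths may differ between LangChain releases.

```js
// retriever.js (illustrative name): load the file, split it into sentence-sized
// documents, embed each one with nomic-embed-text, and index it in memory.
import { readFile } from "node:fs/promises";
import { Document } from "@langchain/core/documents";
import { OllamaEmbeddings } from "@langchain/ollama";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

export async function buildStore(path = "docs/javascript.txt") {
  // Naive sentence split on "." as described above.
  const text = await readFile(path, "utf8");
  const docs = text
    .split(".")
    .map((s) => s.trim())
    .filter(Boolean)
    .map((sentence) => new Document({ pageContent: sentence }));

  // Embed every chunk via the local Ollama server and keep the vectors
  // in an in-memory store; no external vector DB is involved.
  const embeddings = new OllamaEmbeddings({ model: "nomic-embed-text" });
  return MemoryVectorStore.fromDocuments(docs, embeddings);
}
```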
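
The retrieval and answering half could then look roughly like the following. The `llama3` model name, the `answer.js` file name, and the prompt wording (including the abbreviated fallback message) are placeholders for whatever the actual script uses.

```js
// answer.js (illustrative name): retrieve the top 2 chunks and ask Ollama.
import { ChatOllama } from "@langchain/ollama";
import { buildStore } from "./retriever.js";

const question = process.argv[2] ?? "What does Array.prototype.map return?";

// Retrieve the 2 chunks most similar to the question.
const store = await buildStore();
const context = (await store.similaritySearch(question, 2))
  .map((doc) => doc.pageContent)
  .join("\n");

// Prompt a locally pulled Ollama chat model with the chunks plus strict
// grounding instructions (placeholder wording).
const llm = new ChatOllama({ model: "llama3" });
const prompt = [
  "Answer the question using ONLY the context below, in at most 3 sentences.",
  'If the context does not contain the answer, reply "Insufficient information…".',
  "",
  `Context:\n${context}`,
  `Question: ${question}`,
].join("\n");

const response = await llm.invoke(prompt);
console.log(response.content);
```

Both snippets use top-level await, so they assume ESM (for example, `"type": "module"` in package.json); with that in place you could run `node answer.js "What is a closure?"`.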
- Node.js ≥ 24
- Ollama installed & running locally