Local AI Agent with RAG

Overview

This project implements a local AI agent capable of answering questions about pizza restaurant reviews. It utilizes the Retrieval-Augmented Generation (RAG) pattern, leveraging a local Large Language Model (LLM) via Ollama and a vector database (Chroma) to provide contextually relevant answers based on a provided dataset of reviews.

Purpose and Problem Solved

The primary purpose of this project is to demonstrate how to build a question-answering system using local AI models and RAG. It addresses the problem of efficiently querying and summarizing information from a collection of text documents (restaurant reviews in this case) without relying on external cloud services.

Key Features

  • Local LLM Integration: Uses Ollama to run LLMs (like Llama 3.2) locally.
  • Retrieval-Augmented Generation (RAG): Employs LangChain and Chroma to retrieve relevant review snippets before generating an answer (see the sketch after this list).
  • Vector Embeddings: Uses mxbai-embed-large via langchain-ollama for creating text embeddings.
  • Persistent Vector Store: Creates and utilizes a persistent Chroma vector database (chrome_langchain_db/) to store review embeddings.
  • CSV Data Source: Reads restaurant reviews from a realistic_restaurant_reviews.csv file.
  • Interactive Q&A: Provides a command-line interface to ask questions about the reviews.
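
To make the retrieval flow concrete, here is a minimal sketch of how the pieces named above fit together, in the style of vector.py. The collection name, CSV column headers (Title, Review, Rating, Date), and variable names are illustrative assumptions; only the embedding model, the CSV filename, and the persist directory are taken from this README.

    # Hedged sketch of the embedding/retrieval setup; the column names and
    # collection name are assumptions, not guaranteed to match vector.py.
    import os
    import pandas as pd
    from langchain_ollama import OllamaEmbeddings
    from langchain_chroma import Chroma
    from langchain_core.documents import Document

    df = pd.read_csv("realistic_restaurant_reviews.csv")
    embeddings = OllamaEmbeddings(model="mxbai-embed-large")

    # Only embed the reviews on the first run; afterwards the persisted
    # chrome_langchain_db/ directory is reused as-is.
    add_documents = not os.path.exists("chrome_langchain_db")

    vector_store = Chroma(
        collection_name="restaurant_reviews",
        persist_directory="chrome_langchain_db",
        embedding_function=embeddings,
    )

    if add_documents:
        documents = [
            Document(
                page_content=f"{row['Title']} {row['Review']}",  # assumed columns
                metadata={"rating": row["Rating"], "date": row["Date"]},
                id=str(i),
            )
            for i, row in df.iterrows()
        ]
        vector_store.add_documents(documents=documents, ids=[d.id for d in documents])

    # Retriever used by main.py: the top 5 most similar reviews per question.
    retriever = vector_store.as_retriever(search_kwargs={"k": 5})

Because the store is persisted to chrome_langchain_db/, the reviews are embedded only once; later runs skip straight to retrieval.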

Requirements

  • Python 3.x
  • Ollama installed and running, with the llama3.2 and mxbai-embed-large models pulled.
  • The following Python packages (install via pip install -r requirements.txt):
    • langchain
    • langchain-ollama
    • langchain-chroma
    • pandas
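
For reference, a requirements.txt matching the list above looks like this (no version pins are assumed):

    langchain
    langchain-ollama
    langchain-chroma
    pandas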

File Structure

.
├── chrome_langchain_db/              # Chroma vector store directory (created on first run)
├── main.py                           # Main application script for interactive Q&A
├── realistic_restaurant_reviews.csv  # Dataset containing restaurant reviews
├── requirements.txt                  # Project dependencies
├── vector.py                         # Script for embedding data and setting up the retriever
└── README.md                         # This file

Getting Started

  1. Clone the repository:
    git clone <repository-url>
    cd LocalAIAgentWithRAG-main
  2. Ensure Ollama is running: Make sure the Ollama service is active and you have pulled the necessary models:
    ollama pull llama3.2
    ollama pull mxbai-embed-large
  3. Install dependencies:
    pip install -r requirements.txt
  4. Run the application: The first time you run main.py, it will execute vector.py to create the vector store; subsequent runs reuse the existing store (a sketch of the loop appears after these steps).
    python main.py
  5. Ask questions: Follow the prompts in the terminal to ask questions about the pizza reviews. Type q to quit.
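
For orientation, the interactive loop in main.py plausibly looks like the sketch below. The prompt wording and variable names are assumptions; the model name, the retriever import, and the q-to-quit behavior follow from this README.

    # Hedged sketch of the Q&A loop; prompt text and names are assumed.
    from langchain_ollama import OllamaLLM
    from langchain_core.prompts import ChatPromptTemplate
    from vector import retriever  # importing vector.py builds/loads the store

    model = OllamaLLM(model="llama3.2")
    prompt = ChatPromptTemplate.from_template(
        "You are an expert in answering questions about a pizza restaurant.\n"
        "Here are some relevant reviews: {reviews}\n"
        "Here is the question to answer: {question}"
    )
    chain = prompt | model  # LangChain pipe: fill the prompt, then call the model

    while True:
        question = input("Ask your question (q to quit): ")
        if question == "q":
            break
        reviews = retriever.invoke(question)  # top-k similar review snippets
        print(chain.invoke({"reviews": reviews, "question": question}))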

Contributing

Contributions are welcome! Please feel free to submit pull requests or open issues to improve the project.

License

No license has been specified for this project yet.
