
RAG-based Local Language Model (LLM) Project

Overview

This project aims to implement a RAG-based Local Language Model (LLM) using a locally available dataset. The RAG (Retrieval-Augmented Generation) model combines the strengths of retriever and generator models, enabling more effective and contextually relevant language generation.

Introduction

The Local Language Model (LLM) implemented in this project is designed to operate locally, ensuring that no sensitive data is leaked to the internet. The model utilizes the RAG architecture, which involves a retriever component for efficient information retrieval and a generator component for language generation.

Features

  1. RAG Architecture: Integration of the RAG architecture for improved language generation based on local data.
  2. Ingestor Component: The ingestor parses local documents and stores their embeddings in a ChromaDB vector database.
  3. Data Security: No data is sent or leaked to the internet, ensuring the privacy and security of locally available datasets.
  4. Retriever Component: The retriever component efficiently retrieves relevant information from the local dataset.
  5. Generator Component: The generator component utilizes the retrieved information to generate contextually relevant language.
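The retrieve-then-generate flow described above can be sketched with a toy in-memory retriever. The bag-of-words cosine similarity below is only a stand-in for the embedding model and ChromaDB store the project actually uses, and all function and document names are illustrative:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; the real project uses a neural embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Retriever: rank local documents by similarity to the query, keep the top k."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(query, context):
    """Generator stand-in: the real project feeds the retrieved context to a local LLM."""
    return f"Answer to {query!r} based on: {' | '.join(context)}"

docs = [
    "ChromaDB stores document embeddings locally.",
    "Streamlit provides the web user interface.",
    "RAG combines retrieval with generation.",
]
context = retrieve("retrieval augmented generation", docs)
print(generate("how does RAG work", context))
```

Because everything runs in-process on local data, nothing leaves the machine, which is the same design property the project relies on.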

Getting Started

Follow these steps to set up the RAG-based Local Language Model:

  1. Clone the repository:

     git clone https://github.com/Sankethhhh/RAG-LLM.git

  2. Install the dependencies:

     cd RAG-LLM
     pip install -r requirements.txt

  3. Copy the documents you want to query into the SOURCE_DOCUMENTS folder.
  4. Run the ingestor:

     python ingest.py

  5. Run the model:

     streamlit run app.py

  6. Open the application in your web browser at http://localhost:8501.
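The ingestion step above typically splits each source document into overlapping chunks before embedding them; a minimal illustrative sketch of that idea (the chunk size and overlap parameters here are assumptions, not the values ingest.py actually uses):

```python
def chunk_text(text, size=200, overlap=50):
    """Split text into overlapping character chunks suitable for embedding.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from at least one chunk.
    """
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

Each resulting chunk would then be embedded and written to the ChromaDB collection, keyed so the retriever can map a hit back to its source document.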

Usage

The RAG-based Local Language Model can be used for various natural language processing tasks, such as text completion, question answering, and content generation. Customize the app.py script according to your specific use case.
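One common customization is the prompt template that wraps the retrieved context for each task. The sketch below is hypothetical; the template names and the `build_prompt` helper are illustrative and not the repo's actual API in app.py:

```python
# Hypothetical per-task prompt templates; app.py may structure prompts differently.
TEMPLATES = {
    "question_answering": (
        "Use only the context below to answer the question.\n\n"
        "Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    ),
    "summarization": (
        "Summarize the context below in a few sentences.\n\n"
        "Context:\n{context}\n\nSummary:"
    ),
}

def build_prompt(task, query, context_chunks):
    """Assemble the prompt handed to the local LLM for a given task."""
    context = "\n".join(context_chunks)
    return TEMPLATES[task].format(context=context, query=query)

prompt = build_prompt(
    "question_answering",
    "What does the ingestor do?",
    ["The ingestor stores document embeddings in ChromaDB."],
)
print(prompt)
```

Swapping or adding a template is then enough to point the same retrieval pipeline at a new task, without touching the retriever or the vector store.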

User Interface

The user interface (UI) is built with Streamlit, providing a simple, interactive way to query the RAG-based Local Language Model from the browser.

License

This project is licensed under the MIT License.

Acknowledgments

  • The RAG model concept is based on the work by Facebook AI Research.
  • Special thanks to the localGPT open-source community for their valuable contributions.

Feel free to explore, experiment, and enhance the RAG-based Local Language Model for your specific use cases!
