
Logos: AI Service Center Berlin-Brandenburg; German Federal Ministry of Education and Research ("Funded by the Federal Ministry of Education and Research").

Ask your documents!

Retrieval Augmented Generation and Semantic-Search Tool (RAGSST)

A quick-start, locally run tool to test and to use as a basis for various document-related use cases:

  • RAG Query: Prompt an LLM that uses relevant context to answer your queries.
  • Semantic Retrieval: Retrieve relevant passages from documents, showing sources and relevance.
  • RAG Chat: Interact with an LLM that utilizes document retrieval and chat history.
  • LLM Chat: Chat and test a local LLM, without document context.

RAGSST

The interface is divided into tabs so users can select and try the feature for the desired use case. The implementation focuses on simplicity, low-level components, and modularity in order to show the working principles and core elements, allowing developers and Python enthusiasts to modify it and build upon it.

RAG systems rely on sentence embeddings and vector databases. More information on embeddings can be found in our MOOC Understanding Embeddings for Natural Language Processing.
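As a rough sketch of that retrieval step (illustrative only, not RAGSST's actual code; it assumes the sentence-transformers and chromadb packages and reuses the multi-qa-mpnet-base-cos-v1 embedding model mentioned in the Docker section below):

# Minimal sketch: semantic retrieval with sentence embeddings and a vector DB.
# Assumes the sentence-transformers and chromadb packages; RAGSST's own
# implementation may differ in its details.
import chromadb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("multi-qa-mpnet-base-cos-v1")
client = chromadb.Client()
collection = client.create_collection("my_docs")

passages = [
    "RAGSST retrieves relevant passages from your documents.",
    "Ollama runs large language models locally.",
]
collection.add(
    ids=[str(i) for i in range(len(passages))],
    documents=passages,
    embeddings=model.encode(passages).tolist(),
)

# Embed the query and fetch the closest passages.
results = collection.query(
    query_embeddings=model.encode(["How are documents searched?"]).tolist(),
    n_results=2,
)
print(results["documents"][0])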

Installation

  1. Download or clone the repository.

  2. Create and activate a virtual environment (optional).

$ python3 -m venv .myvenv
$ source .myvenv/bin/activate

Option 1: Automatic Installation

  1. In bash, run the following installation script:
(.myvenv)$ bin/install.sh

The script might not work on macOS; in that case, please follow the manual installation instructions.

Option 2: Manual Installation

  1. Install dependencies.
(.myvenv)$ pip3 install -r requirements.txt
  2. Install Ollama to run Large Language Models (LLMs) locally. (Or follow the installation instructions for your operating system: Install Ollama).
(.myvenv)$ curl -fsSL https://ollama.ai/install.sh | sh
  3. Choose and download an LLM model [*]. For example:
(.myvenv)$ ollama pull llama3.2
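To confirm that Ollama is reachable and that the model was pulled, you can list the locally available models via Ollama's API (a quick check in Python; it assumes Ollama's default address http://localhost:11434 and the requests package):

# Quick check: list the models your local Ollama server has available.
# Assumes Ollama's default port 11434 and the requests package.
import requests

resp = requests.get("http://localhost:11434/api/tags")
resp.raise_for_status()
print([m["name"] for m in resp.json().get("models", [])])  # e.g. ['llama3.2:latest']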

Usage

  1. Place your documents in the intended data folder (default: data/).

  2. Activate your virtual environment.

$ source .myvenv/bin/activate
  3. Start the tool. [†]
(.myvenv)$ python3 app.py
  4. Open http://localhost:7860 in your web browser.

Alternative usage option: Docker Compose

  1. Ensure you have Docker and Docker Compose installed.

  2. Install the latest NVIDIA drivers for your GPU on your host system. Install the NVIDIA Container Toolkit. (Only necessary for utilizing a GPU.)

  3. Build and start the app and Ollama, either in default CPU mode...

docker compose up --build

... or with a dedicated GPU.

docker compose -f docker-compose-gpu.yml up
  4. Wait for both services to start; this can take several minutes on the first run. ragsst-app has loaded successfully when it prints to the console: Set Collection: my_docs. Embedding Model: multi-qa-mpnet-base-cos-v1. A small readiness check is sketched after this list.

  5. Open http://localhost:7860 or http://127.0.0.1:7860 in your browser.
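If you prefer not to watch the console, the following small sketch polls the web UI until it answers (it assumes the requests package and the default port 7860):

# Poll the RAGSST web UI until it responds; the first start can take several minutes.
import time
import requests

URL = "http://localhost:7860"
while True:
    try:
        requests.get(URL, timeout=2)
        print(f"{URL} is up")
        break
    except requests.exceptions.RequestException:
        time.sleep(5)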

Key Settings

Retrieval Parameters

  • Relevance threshold: Sets the relevance cutoff for retrieved passages; lower values result in more selective retrieval (see the sketch after this list).

  • Top n results: Specifies the maximum number of relevant passages to retrieve.
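In vector-database terms, these two parameters typically translate into the query size and a post-filter on the returned scores. A hypothetical sketch (chromadb-style results; the exact score convention used by RAGSST is an assumption here):

# Hypothetical retrieval filter: 'top_n' caps the number of query results and
# 'relevance_threshold' discards passages whose distance is too large
# (smaller distance = more relevant, so a lower threshold is more selective).
def retrieve(collection, query_embedding, top_n=3, relevance_threshold=0.5):
    results = collection.query(query_embeddings=[query_embedding], n_results=top_n)
    docs = results["documents"][0]
    distances = results["distances"][0]
    return [(doc, dist) for doc, dist in zip(docs, distances) if dist <= relevance_threshold]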

Additional Input parameters for the LLMs

  • Top k: Ranks the output tokens in descending order of probability, keeps the first k tokens to form a new distribution, and samples the output from it. Higher values result in more diverse answers; lower values produce more conservative answers (see the sketch after this list).

  • Temperature (Temp): This affects the 'randomness' of the answers by scaling the probability distribution of the output elements. Increasing the temperature will make the model answer more creatively.

  • Top p: Works together with Top k, but instead of selecting a fixed number of tokens, it selects enough tokens to cover the given cumulative probability. A higher value will produce more varied text, and a lower value will lead to more focused and conservative answers.
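These are standard sampling options that Ollama accepts per request. As a sketch of how they might be passed to a locally served model (it assumes Ollama's REST API on its default port 11434 and the requests package; this is not RAGSST's internal call):

# Sketch: pass Top k, Top p and Temperature as Ollama request options.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "Summarize retrieval augmented generation in one sentence.",
        "stream": False,
        "options": {"top_k": 40, "top_p": 0.9, "temperature": 0.7},
    },
)
print(response.json()["response"])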

FAQ

Check out the Frequently Asked Questions (FAQ) and please let us know if you encounter any problems.


[*] Performance consideration: On notebooks/PCs with a dedicated GPU, models such as llama3.1, mistral or gemma2 should run smoothly and quickly. On a standard notebook, or if you encounter any memory or performance issues, prefer smaller models such as llama3.2 or qwen2.5:3b.

Development

Before committing, format the code using Black:

$ black -t py311 -S -l 99 .

Linters:

  • Pylance
  • flake8 (args: --max-line-length=100 --extend-ignore=E401,E501,E741)

For more detailed logging, set the LOG_LEVEL environment variable:

$ export LOG_LEVEL='DEBUG'
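Inside an application, such a variable is typically read with os.getenv and handed to the logging module; an illustrative sketch (not necessarily how RAGSST wires it up):

# Illustrative sketch: configure logging from the LOG_LEVEL environment variable.
import logging
import os

level = os.getenv("LOG_LEVEL", "INFO").upper()
logging.basicConfig(level=getattr(logging, level, logging.INFO))
logging.getLogger(__name__).debug("Debug logging enabled")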

Author

License

GPLv3
