A quick-start, locally run tool to test and use as a basis for various document-related use cases:
- RAG Query: Prompt an LLM that uses relevant context to answer your queries.
- Semantic Retrieval: Retrieve relevant passages from documents, showing sources and relevance.
- RAG Chat: Interact with an LLM that utilizes document retrieval and chat history.
- LLM Chat: Chat and test a local LLM, without document context.
The interface is divided into tabs so users can select and try the feature for their desired use case. The implementation focuses on simplicity, low-level components, and modularity in order to show the working principles and core elements, allowing developers and Python enthusiasts to modify and build upon it.
RAG systems rely on sentence embeddings and vector databases. More information on embeddings can be found in our MOOC Understanding Embeddings for Natural Language Processing.
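To make the idea concrete, here is a minimal, self-contained sketch of embedding-based retrieval with the sentence-transformers library, using the embedding model the tool reports (multi-qa-mpnet-base-cos-v1). The passages and query are made up for illustration; the tool itself additionally stores embeddings in a vector database rather than comparing them in memory.

```python
# Minimal sketch of semantic retrieval: embed passages once, embed the query,
# and rank passages by cosine similarity. Illustrative only, not the app's code.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("multi-qa-mpnet-base-cos-v1")

passages = [
    "The warranty covers manufacturing defects for two years.",
    "Returns are accepted within 30 days of purchase.",
    "Our office is open Monday to Friday, 9 am to 5 pm.",
]
passage_embeddings = model.encode(passages, convert_to_tensor=True)

query = "How long is the warranty?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every passage
scores = util.cos_sim(query_embedding, passage_embeddings)[0]

# Print passages from most to least relevant, with their similarity scores
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.2f}  {passage}")
```

In a RAG query, the top-ranked passages are then placed into the LLM prompt as context.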
- Download or clone the repository.
- Create and activate a virtual environment (optional).
$ python3 -m venv .myvenv
$ source .myvenv/bin/activate
- In bash, run the following installation script:
(.myvenv)$ bin/install.sh
The script might not work on macOS; in that case, please follow the manual installation instructions.
- Install dependencies.
(.myvenv)$ pip3 install -r requirements.txt
- Install Ollama to run Large Language Models (LLMs) locally. (Or follow the installation instructions for your operating system: Install Ollama).
(.myvenv)$ curl -fsSL https://ollama.ai/install.sh | sh
- Choose and download an LLM model [*]. For example:
(.myvenv)$ ollama pull llama3.2
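Once a model is pulled, you can optionally sanity-check that the local Ollama server answers. A minimal sketch using Ollama's documented REST API (the prompt is arbitrary; adjust the model name to the one you pulled):

```python
# Quick check that the local Ollama server responds with the pulled model.
# Uses Ollama's REST API, which listens on http://localhost:11434 by default.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",                        # the model pulled above
        "prompt": "Say hello in one short sentence.",
        "stream": False,                            # return the full answer at once
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```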
- Place your documents in the intended data folder (default: data/).
- Activate your virtual environment.
$ source .myvenv/bin/activate
- Start the tool. [†]
(.myvenv)$ python3 app.py
- Open http://localhost:7860 in your web browser.
- Ensure you have Docker and Docker Compose installed.
- Install the latest NVIDIA drivers for your GPU on your host system, along with the NVIDIA Container Toolkit (only necessary if you want to use a GPU).
- Build and start the app and Ollama, either in default CPU mode...
docker compose up --build
... or with a dedicated GPU.
docker compose -f docker-compose-gpu.yml up
- Wait for both services to start; this will take several minutes on the first run. ragsst-app has loaded successfully when it prints to the console:
Set Collection: my_docs. Embedding Model: multi-qa-mpnet-base-cos-v1
- Open http://localhost:7860 or http://127.0.0.1:7860 in your browser.
- Relevance threshold: Sets the minimum similarity threshold for retrieved passages. Higher values make retrieval more selective; lower values allow less closely matching passages.
- Top n results: Specifies the maximum number of relevant passages to retrieve.
- Top k: Ranks the output tokens in descending order of probability, keeps the k most probable tokens to form a new distribution, and samples the output from it. Higher values result in more diverse answers; lower values produce more conservative answers.
- Temperature (Temp): This affects the 'randomness' of the answers by scaling the probability distribution of the output elements. Increasing the temperature will make the model answer more creatively.
- Top p: Works together with Top k, but instead of selecting a fixed number of tokens, it selects enough tokens to cover the given cumulative probability. A higher value will produce more varied text, and a lower value will lead to more focused and conservative answers.
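The sampling parameters above can be pictured with a short, self-contained sketch (toy numbers, illustrative only, not the tool's code): temperature rescales the distribution, Top k keeps only the k most probable tokens, and Top p keeps the smallest set of tokens whose cumulative probability reaches p.

```python
# Toy illustration of temperature, top-k and top-p (nucleus) filtering
# applied to a made-up next-token distribution.
import numpy as np

logits = np.array([2.0, 1.0, 0.5, 0.1, -1.0])  # scores for 5 candidate tokens

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def apply_temperature(logits, temp):
    # Higher temp flattens the distribution (more random), lower temp sharpens it.
    return softmax(logits / temp)

def top_k_filter(probs, k):
    # Keep only the k most probable tokens and renormalize.
    keep = np.argsort(probs)[::-1][:k]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

def top_p_filter(probs, p):
    # Keep the smallest set of tokens whose cumulative probability reaches p.
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

probs = apply_temperature(logits, temp=0.8)
probs = top_p_filter(top_k_filter(probs, k=3), p=0.9)
next_token = int(np.random.choice(len(probs), p=probs))  # sample the next token
print(np.round(probs, 3), next_token)
```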
Check out the Frequently Asked Questions (FAQ) and please let us know if you encounter any problems.
[*] Performance consideration: On notebooks/PCs with dedicated GPUs, models such as llama3.1, mistral or gemma2 should run smoothly and rapidly. On a standard notebook, or if you encounter any memory or performance issues, prioritize smaller models such as llama3.2 or qwen2.5:3b.
Before committing, format the code using Black:
$ black -t py311 -S -l 99 .
Linters:
- Pylance
- flake8 (args: --max-line-length=100 --extend-ignore=E401,E501,E741)
For more detailed logging, set the LOG_LEVEL environment variable:
$ export LOG_LEVEL='DEBUG'
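For reference, reading such a variable and mapping it to Python's logging module typically looks like the following sketch (illustrative only; the app's actual logging setup may differ):

```python
# Minimal sketch: wire a LOG_LEVEL environment variable to Python's logging module.
import logging
import os

log_level = os.environ.get("LOG_LEVEL", "INFO").upper()
logging.basicConfig(
    level=getattr(logging, log_level, logging.INFO),
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger("ragsst")
logger.debug("Debug logging enabled")  # printed only when LOG_LEVEL='DEBUG'
```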