A simple tool designed to make efficient use of LLMs by leveraging Retrieval-Augmented Generation (RAG): relevant passages are retrieved from a provided corpus of text and supplied to the model, which keeps lookups fast and reduces the chance of hallucination.
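As a rough illustration of the retrieval step, the sketch below ranks corpus chunks by word overlap with the question and prepends the best matches to the prompt. This is only a minimal sketch of the general RAG idea, not the code in `main.py`; the character chunking and overlap scoring are simplified stand-ins for whatever retrieval the tool actually uses.

```python
# Minimal, illustrative sketch of retrieval-augmented prompting.

def chunk(text: str, size: int = 400) -> list[str]:
    """Split the corpus into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(question: str, passage: str) -> int:
    """Toy relevance score: number of words shared with the question."""
    return len(set(question.lower().split()) & set(passage.lower().split()))

def build_prompt(question: str, corpus: str, k: int = 3) -> str:
    """Retrieve the k most relevant chunks and prepend them to the question."""
    top = sorted(chunk(corpus), key=lambda c: score(question, c), reverse=True)[:k]
    context = "\n---\n".join(top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    sample_corpus = (
        "The config file controls the prompt, the corpus path, and the model. "
        "Responses are written to a text file after each run."
    )
    print(build_prompt("What does the config file control?", sample_corpus))
```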
- Create a virtual environment in a directory:

  ```sh
  mkdir <name> && cd <name>
  python3 -m venv .venv
  ```

- Activate it with `source .venv/bin/activate`
- Dependencies are listed in `requirements.txt`; install them with `pip install -r requirements.txt`
- Configure the prompt, corpus, and model to be used in `config.toml` (an illustrative example follows this list)
- Generate an API key for either OpenAI or Gemini
- Populate `.env` with `OPENAI_API_KEY` or `GEMINI_API_KEY` (example below)
- Run `source .env` before running the program
- Run `python3 main.py`
- Output is written to `response.txt`
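For reference, `config.toml` could look something like the following. The key names here are only an illustrative sketch; check the `config.toml` shipped with the repository for the actual fields:

```toml
# Illustrative layout only - the real key names may differ.
prompt = "Answer my question using only the supplied corpus."
corpus = "corpus.txt"    # path to the text corpus to retrieve from
model  = "<model-name>"  # an OpenAI or Gemini model identifier
```

Likewise, `.env` can simply export whichever key you generated, so that `source .env` puts it in the environment before `python3 main.py` runs:

```sh
# Use one of the two, depending on the provider configured in config.toml.
export OPENAI_API_KEY="your-openai-key-here"
export GEMINI_API_KEY="your-gemini-key-here"
```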