RAG leverages Meta Llama 3 (8B parameters) on GPU and Hugging Face API models on CPU. It supports two primary functionalities:
- Chat with LLM: Engage in conversations with the large language model on GPU.
- RAG Chat with PDFs: Perform Retrieval-Augmented Generation with up to 4 PDF documents.
- Chat with LLM: Utilize Meta Llama 3 for chat interactions, supporting system and user messages.
- RAG Chat with PDFs: Interact with the content of your PDFs using any of several prompt styles (a sketch of how such templates might look follows this list):
- Detailed Prompt
- Short Prompt
- Summary Prompt
- Explanation Prompt
- Opinion Prompt
- Instruction Prompt
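These templates live under src/utils/prompts. As a minimal sketch of how such a set of templates might be organized (the names and wording below are illustrative assumptions, not the repository's actual definitions):

```python
# Hypothetical sketch of prompt templates like those in src/utils/prompts.
# The keys and wording are illustrative assumptions, not the repo's code.
PROMPT_TEMPLATES = {
    "detailed": "Answer thoroughly using the context below.\n\nContext:\n{context}\n\nQuestion: {question}",
    "short": "Answer in one or two sentences using the context below.\n\nContext:\n{context}\n\nQuestion: {question}",
    "summary": "Summarize the key points of the context below.\n\nContext:\n{context}",
    "explanation": "Explain the context below as if to a newcomer.\n\nContext:\n{context}\n\nQuestion: {question}",
    "opinion": "Weigh the arguments in the context below and give a reasoned view.\n\nContext:\n{context}\n\nQuestion: {question}",
    "instruction": "Turn the context below into step-by-step instructions.\n\nContext:\n{context}\n\nQuestion: {question}",
}

def build_prompt(style: str, context: str, question: str = "") -> str:
    """Fill the chosen template with retrieved context and the user's question."""
    return PROMPT_TEMPLATES[style].format(context=context, question=question)
```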
- Clone the Repository:
  git clone https://github.com/SirajuddinShaik/RAG.git
  cd RAG
- Install Dependencies:
  pip install -r requirements.txt
- Run the Application:
  chainlit run app.py
  To run the application with GPU support, you can use the following Colab link: Run on Colab
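If you want to sanity-check the Chainlit setup before wiring in the model, a bare-bones echo app is enough. This is only an illustrative sketch, not the repository's app.py, which connects to Meta Llama 3 and the RAG pipeline:

```python
# minimal_app.py - bare-bones Chainlit app to verify the installation.
# Illustrative sketch only; the repo's app.py wires in the LLM and RAG logic.
import chainlit as cl

@cl.on_chat_start
async def start():
    # Greet the user when a new chat session opens.
    await cl.Message(content="Session started. Send a message!").send()

@cl.on_message
async def main(message: cl.Message):
    # Echo the input back; app.py would instead query the model here.
    await cl.Message(content=f"You said: {message.content}").send()
```

Run it with chainlit run minimal_app.py and open the local URL Chainlit prints.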
- Start a chat session:
  chainlit run app.py
- Load and interact with PDFs: Ensure your PDFs are in the appropriate directory and use the UI to upload and query them.
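Under the hood, ingestion turns each uploaded PDF into text chunks that can be embedded and retrieved. Here is a rough sketch of that idea using pypdf; the repository's actual data_ingestion/ code may differ:

```python
# Illustrative PDF-to-chunks sketch; the repo's data_ingestion/ code may differ.
from pypdf import PdfReader  # pip install pypdf

def pdf_to_chunks(path: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Extract all text from a PDF and split it into overlapping chunks."""
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    chunks = []
    step = chunk_size - overlap
    for start in range(0, max(len(text), 1), step):
        chunk = text[start:start + chunk_size]
        if chunk.strip():
            chunks.append(chunk)
    return chunks

# Each chunk is then embedded and stored so the RAG chat can retrieve
# the most relevant passages for a question.
```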
- app.py: Main application script.
- requirements.txt: Dependencies required for the project.
- src/: Source code directory.
- config/: Configuration files.
- src/utils/prompts: Prompt templates (detailed, short, summary, etc.) used to interact with the model.
- data_ingestion/: Scripts and tools for data ingestion.
- logs/: Log files.
- Dockerfile: Docker setup for containerized deployment.
We welcome contributions! Please fork the repository, create a new branch, and submit a pull request.
This project is licensed under the MIT License.
If you have any questions or need further assistance, please open an issue on the GitHub repository.
Happy Coding! 🚀