An AI-powered Retrieval-Augmented Generation (RAG) application that provides personalized wine recommendations based on user prompts and lets users compare responses from multiple Large Language Models (LLMs).
Wine-LLM is a web-based application designed to assist users in selecting the perfect wine for various occasions. By leveraging a Retrieval-Augmented Generation (RAG) approach, the system processes user inputs to provide tailored wine suggestions. Users can compare responses from different LLMs, including OpenAI, Llama, and a custom RAG model, to choose the recommendation that best suits their preferences.
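At a high level, the RAG path retrieves the wine descriptions most relevant to a prompt and passes them to the model as extra context. The sketch below illustrates that retrieve-and-augment step under some assumptions; the function names, data layout, and prompt wording are illustrative, not the project's actual code.

```python
# Minimal sketch of the RAG step, assuming wine descriptions and their
# embeddings are already stored (e.g. by Wine_RAG.ipynb). Names such as
# retrieve_context and build_augmented_prompt are illustrative only.
import numpy as np

def retrieve_context(query_embedding: np.ndarray,
                     wine_embeddings: np.ndarray,
                     wine_texts: list[str],
                     top_k: int = 3) -> list[str]:
    """Return the top_k wine descriptions most similar to the query."""
    # Cosine similarity between the query and every stored embedding.
    norms = np.linalg.norm(wine_embeddings, axis=1) * np.linalg.norm(query_embedding)
    scores = wine_embeddings @ query_embedding / norms
    best = np.argsort(scores)[::-1][:top_k]
    return [wine_texts[i] for i in best]

def build_augmented_prompt(user_prompt: str, context: list[str]) -> str:
    """Prepend the retrieved wine descriptions to the user's question."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "You are a sommelier. Use the wine data below to answer.\n"
        f"Wine data:\n{context_block}\n\n"
        f"Question: {user_prompt}"
    )
```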
- Personalized Wine Recommendations: Input scenarios like "I'm having seafood tonight; what French wine pairs well?" to receive tailored suggestions.
- LLM Comparison: Evaluate and compare responses from OpenAI, Llama, and a custom RAG model.
- Interactive Web Interface: User-friendly frontend built with Next.js for seamless interactions.
- Backend Processing: Flask-based backend handles API requests, model interactions, and data retrieval (a minimal endpoint sketch appears after this feature list).
- Embedding Generation: Utilize the `Wine_RAG.ipynb` notebook to process and store embeddings for efficient data retrieval.
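For orientation, here is a minimal sketch of what such a backend recommendation endpoint could look like. The `/recommend` route, its request fields, and the placeholder model wrappers are assumptions for illustration, not the project's actual API.

```python
# Minimal sketch of a recommendation endpoint, assuming a /recommend route
# that takes a scenario and a model name. The wrapper functions below are
# placeholders; the real backend would call OpenAI, Llama (via Groq), and
# the custom RAG pipeline here.
from flask import Flask, jsonify, request

app = Flask(__name__)

def ask_openai(prompt: str) -> str:
    return "OpenAI recommendation for: " + prompt

def ask_llama(prompt: str) -> str:
    return "Llama recommendation for: " + prompt

def ask_custom_rag(prompt: str) -> str:
    return "RAG recommendation for: " + prompt

@app.route("/recommend", methods=["POST"])
def recommend():
    data = request.get_json(force=True)
    prompt = data.get("prompt", "")
    model = data.get("model", "rag")  # "openai", "llama", or "rag"
    handlers = {"openai": ask_openai, "llama": ask_llama, "rag": ask_custom_rag}
    answer = handlers.get(model, ask_custom_rag)(prompt)
    return jsonify({"model": model, "recommendation": answer})

if __name__ == "__main__":
    app.run(port=5000)
```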
- Frontend: Next.js (React)
- Backend: Flask (Python)
- LLMs: OpenAI API, Llama, Custom RAG Model
- Environment Management: Python's `venv` and `requirements.txt`
- Clone the Repository:
  `git clone https://github.com/MozartofCode/Wine-LLM.git`
  `cd Wine-LLM`
- Create a Virtual Environment:
  `python -m venv venv`
  `source venv/bin/activate` (on Windows: `venv\Scripts\activate`)
- Install Dependencies:
  `pip install -r requirements.txt`
- Set Environment Variables: Create a `.env` file in the root directory and add:
  `OPENAI_API_KEY=your_openai_api_key`
  `GROQ_API_KEY=your_groq_api_key`
- Generate Embeddings: Run the `Wine_RAG.ipynb` notebook to process the wine data and generate embeddings. Ensure the resulting embeddings are saved where the backend can access them (a rough sketch of this step appears after these setup steps).
- Start the Flask Server:
  `python app.py`
  The backend will run on `http://localhost:5000`.
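For reference, the notebook's embedding step might look roughly like the sketch below, which assumes OpenAI embeddings and `python-dotenv` for loading the API keys; the embedding model, file names, and output format are assumptions rather than what `Wine_RAG.ipynb` necessarily does.

```python
# Rough sketch of the embedding step, assuming OpenAI's embeddings API and a
# simple .npy/.json output; the actual notebook may use a different model or
# storage format.
import json
import os

import numpy as np
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads OPENAI_API_KEY (and GROQ_API_KEY) from .env
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

wine_texts = [
    "Chablis: crisp, mineral-driven Chardonnay from Burgundy, pairs with oysters.",
    "Sancerre: zesty Loire Sauvignon Blanc, a classic match for goat cheese.",
]  # in practice, loaded from the project's wine dataset

response = client.embeddings.create(
    model="text-embedding-3-small",  # assumed embedding model
    input=wine_texts,
)
embeddings = np.array([item.embedding for item in response.data])

np.save("wine_embeddings.npy", embeddings)   # vectors for retrieval
with open("wine_texts.json", "w") as f:      # matching descriptions
    json.dump(wine_texts, f)
```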
- Navigate to the Frontend Directory:
  `cd frontend`
- Install Dependencies:
  `npm install`
- Start the Frontend Server:
  `npm run dev`
  The application will be accessible at `http://localhost:3000`.
- Access the Application: Open your browser and navigate to `http://localhost:3000`.
- Input Your Scenario: Enter a description of your meal or occasion, such as:
  - "I'm having grilled salmon tonight; what wine would pair well?"
  - "Looking for a wine to accompany a dark chocolate dessert."
- Select an LLM for the Recommendation: Choose OpenAI, Llama, or the custom RAG model to generate a wine recommendation.
- View and Compare Results: Analyze the suggestions from each model to select the most suitable wine (see the example request after this list).
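If you prefer to script the comparison instead of using the web interface, a request like the following would work against the hypothetical `/recommend` endpoint sketched above (the endpoint name and request fields are assumptions):

```python
# Hypothetical direct call to the backend, assuming the /recommend endpoint
# and request fields sketched earlier in this README.
import requests

scenario = "I'm having grilled salmon tonight; what wine would pair well?"
for model in ("openai", "llama", "rag"):
    reply = requests.post(
        "http://localhost:5000/recommend",
        json={"prompt": scenario, "model": model},
        timeout=60,
    ).json()
    print(f"{model}: {reply['recommendation']}")
```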
- Enhanced Database Integration: Incorporate a comprehensive wine database for more accurate recommendations.
- Feedback Mechanism: Enable users to provide feedback on recommendations to improve model accuracy.
Author: Bertan Berker
📧 Email: bb6363@rit.edu
💻 GitHub: MozartofCode
Author: Jacob Sakelarios
📧 Email:
💻 GitHub: