Quizzy is an AI-powered virtual interviewer built to simulate personalized interviews through a robust integration of Retrieval-Augmented Generation (RAG), multi-modal analysis (vision, audio, text), and intelligent evaluation systems. It processes resumes and job descriptions, evaluates candidates in real time, and generates detailed feedback, scoring, and job suggestions. Quizzy delivers an end-to-end mock interview experience that is hyper-personalized and interactive.
Built on Django, Quizzy orchestrates a modular architecture where core machine learning components are managed in a dedicated ML repository. These components are dynamically pulled and tracked using MLflow, DagsHub, and MLOps pipelines, ensuring reproducibility, scalability, and version control for both data and models.
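As a rough illustration of that model-management flow, the snippet below shows how a registered model could be pulled from a DagsHub-hosted MLflow tracking server. The model name and stage are placeholders, not the exact values Quizzy uses, and credentials for the tracking server are assumed to be configured separately.

```python
import mlflow

# Assumed setup: DagsHub exposes an MLflow tracking server for the ML repo;
# the registered model name and stage below are illustrative placeholders.
mlflow.set_tracking_uri("https://dagshub.com/slalrijo2005/Quizzy_MLOPS.mlflow")

# Pull a registered model by name and stage so the web app always serves
# a tracked, versioned artifact rather than a local file.
model = mlflow.pyfunc.load_model("models:/posture_classifier/Production")
print(model.metadata)  # confirms which run/version was actually loaded
```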
Video Overview
Watch the full walkthrough of how Quizzy functions and performs an AI-driven interview:
YouTube Demo Link
- Input Collection: Users provide their resume, job description, and preferred interview time.
- Resume Analysis:
  - Name and domain extraction
  - Resume-job description similarity using MXBAI embeddings (cosine similarity); see the similarity sketch after this list
- Interview Phase:
  - Dynamic question generation via Groq's LLaMA 70B model
  - Document retrieval from ChromaDB with Gemini embeddings
  - Real-time TTS (Edge TTS) and STT (Whisper); see the speech sketch after this list
  - Vision-based posture and emotion detection using MediaPipe and MobileNet; see the posture sketch after this list
  - Profile summarization using a HuggingFace model
- Post Interview:
  - Score computation and evaluation report
  - Suggestions for improvement
  - Curated job recommendations from LinkedIn scraping, based on the candidate's profile
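A minimal sketch of the resume-JD similarity step, assuming the MXBAI embeddings are served through sentence-transformers with the public `mixedbread-ai/mxbai-embed-large-v1` checkpoint; the model ID and sample texts are assumptions, not confirmed details of Quizzy's implementation.

```python
from sentence_transformers import SentenceTransformer, util

# Assumption: MXBAI embeddings via the public mixedbread checkpoint.
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")

resume_text = "Experienced Django developer with MLOps and NLP exposure."
jd_text = "Hiring a backend engineer with Python, Django, and ML skills."

# Embed both documents and compare them with cosine similarity.
resume_emb, jd_emb = model.encode([resume_text, jd_text])
score = util.cos_sim(resume_emb, jd_emb).item()
print(f"Resume-JD similarity: {score:.2f}")
```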
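The speech loop pairs Edge TTS for voicing questions with Whisper for transcribing answers. The sketch below shows one plausible pairing; the voice name, Whisper model size, and file paths are assumptions for illustration.

```python
import asyncio

import edge_tts
import whisper

async def speak(text: str, path: str = "question.mp3") -> None:
    # Edge TTS synthesizes the interviewer's question to an audio file;
    # the voice is an assumed default, not necessarily the one Quizzy uses.
    await edge_tts.Communicate(text, voice="en-US-AriaNeural").save(path)

def transcribe(path: str) -> str:
    # Whisper converts the candidate's recorded answer back into text.
    model = whisper.load_model("base")  # model size is an assumption
    return model.transcribe(path)["text"]

asyncio.run(speak("Tell me about a project you are proud of."))
```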
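For the vision step, MediaPipe Pose can provide the landmarks behind a simple posture check. This is an illustrative heuristic (shoulder-height difference), not Quizzy's exact scoring logic, and the input frame path is a placeholder.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def shoulder_tilt(frame_bgr):
    """Rough posture signal: difference in shoulder heights from MediaPipe Pose."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks is None:
            return None  # no person detected in this frame
        left = results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_SHOULDER]
        right = results.pose_landmarks.landmark[mp_pose.PoseLandmark.RIGHT_SHOULDER]
        return abs(left.y - right.y)

frame = cv2.imread("frame.jpg")  # placeholder frame; Quizzy captures from the camera
print(shoulder_tilt(frame))
```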
- Model Management: MLflow
- Experiment Tracking & Versioning: DagsHub
- Vision Models: MobileNet (transfer learned), MediaPipe
- Text Models: Groq (LLaMA 70B), Gemini Embeddings
- TTS/STT: Edge TTS and Whisper
- Resume Matching: MXBAI embeddings
- Document Store: ChromaDB
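Putting the retrieval pieces together, the sketch below shows one plausible wiring of ChromaDB, Gemini embeddings, and Groq's LLaMA 70B through LangChain. The integration classes are real LangChain packages, but the collection name, model IDs, and prompt are assumptions rather than Quizzy's exact code, and the relevant API keys are expected in the environment.

```python
from langchain_chroma import Chroma
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain_groq import ChatGroq

# Assumed components: a persisted Chroma collection of resume/JD chunks
# embedded with Gemini embeddings, and a Groq-hosted LLaMA 70B chat model.
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
store = Chroma(
    collection_name="candidate_docs",          # assumed collection name
    embedding_function=embeddings,
    persist_directory="chroma_langchain_db",   # matches the repo's folder name
)

llm = ChatGroq(model="llama-3.3-70b-versatile")  # assumed Groq model ID

def next_question(topic: str) -> str:
    # Retrieve the most relevant candidate chunks, then ask the LLM to
    # ground the next interview question in that context.
    docs = store.similarity_search(topic, k=3)
    context = "\n".join(d.page_content for d in docs)
    prompt = (
        f"Using this candidate context:\n{context}\n\n"
        f"Ask one interview question about {topic}."
    )
    return llm.invoke(prompt).content

print(next_question("Django and MLOps experience"))
```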
ML code and training pipelines are managed in a separate repository:
https://github.com/RijoSLal/Quizzy_MLOPS
Models and data are versioned and stored at:
https://dagshub.com/slalrijo2005/Quizzy_MLOPS
- CPU: 12 threads (e.g., 4P + 4E cores, 2.0 GHz+)
- RAM: 6 GB
- GPU: Integrated, 48 EUs+, OpenGL 4.5+
⚠️ Systems below these specs may face performance issues or instability.
To configure a production-ready self-hosted environment using NGINX and Cloudflare Tunnel, refer to my detailed blog post:
Self-hosting Django with NGINX & Cloudflare Tunnel
- Python 3.8+
- pip
- virtualenv
git clone https://github.com/RijoSLal/quizzy.git
cd quizzy
python -m venv venv
source venv/bin/activate # or venv\Scripts\activate on Windows
pip install -r requirements.txt
Create a `.env` file:
APIKEY=gemini_api_key
GROQ=groq_api_key
GROQQ=groq_api_key
GROQ_API_KEY=groq_api_key
DJANGO=django_secret_key
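These values are read from the environment at runtime. A minimal sketch of how the settings might consume them, assuming python-dotenv (a common choice, not a confirmed dependency of this project):

```python
# settings.py (illustrative excerpt, not the project's actual file)
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # loads APIKEY, GROQ*, DJANGO, etc. from the .env file

SECRET_KEY = os.environ["DJANGO"]          # Django secret key
GEMINI_API_KEY = os.environ["APIKEY"]      # Gemini API key
GROQ_API_KEY = os.environ["GROQ_API_KEY"]  # consumed by the Groq client
```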
python manage.py test
python manage.py runserver
docker build -t quizzy .
docker run --env-file .env quizzy
docker run --env-file .env quizzy python manage.py test
quizzy/
├── chroma_langchain_db/
├── interview/
│ ├── urls.py
│ ├── no_stream_camera_capture.py
│ ├── camera_capture.py
│ ├── resume_management.py
│ ├── retriever.py
│ ├── scrape.py
│ ├── speech.py
│ ├── vectordb.py
│ └── views.py
├── manage.py
├── quizzy/
│ ├── settings.py
│ └── urls.py
├── static/
│ ├── images/
│ └── styling/
├── templates/
│ ├── home.html
│ ├── interview.html
│ ├── resume.html
│ └── score.html
├── Dockerfile
├── .dockerignore
├── .env
└── requirements.txt
Contributions are welcome. Please open an issue or submit a pull request if you would like to suggest improvements or contribute features. Ensure your contributions follow the project’s code standards and include relevant documentation.
This project is licensed under the MIT License.
See the LICENSE file for details.