An AI-powered research assistant that takes user queries, fetches real-time web data, and generates concise, source-cited answers using a language model.
Built with a modern frontend (Vite + React.js) and a modular backend (FastAPI, OpenAI, SerpAPI), this assistant helps users get accurate, referenced answers quickly.
The backend is a modular, high-performance service built with FastAPI. It handles:
- 📝 Query Intake: Accepts natural language input from the frontend.
- 🌐 Web Search Integration: Uses SerpAPI to gather relevant real-time web data.
- 📑 Information Extraction: Filters and selects the most relevant snippets.
- 🧠 LLM Summarization: Utilizes OpenAI's GPT models to produce a well-structured answer.
- 🔗 Citation Handling: Formats output with inline numbered citations, mapped to actual URLs.
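The extraction and citation steps above can be sketched in a few lines of Python. This is an illustrative sketch, not the project's actual code: the function names, the SerpAPI result keys (`snippet`, `link`), and the "first N non-empty snippets" heuristic are assumptions.

```python
# Sketch of snippet selection and citation formatting (assumed result shape:
# each search result is a dict with "snippet" and "link" keys, as SerpAPI
# organic results commonly provide).

def select_snippets(results, max_snippets=3):
    """Keep the first results that actually contain a non-empty text snippet."""
    picked = []
    for r in results:
        snippet = r.get("snippet", "").strip()
        if snippet:
            picked.append({"snippet": snippet, "url": r["link"]})
        if len(picked) == max_snippets:
            break
    return picked

def format_citations(snippets):
    """Number each snippet for the LLM prompt and collect the matching URLs."""
    sources = [s["url"] for s in snippets]
    context = "\n".join(
        f"[{i}] {s['snippet']}" for i, s in enumerate(snippets, start=1)
    )
    return context, sources
```

The numbered `context` is what the LLM sees, so citations like `[1]` in its answer map directly back to `sources[0]` when the frontend renders clickable links.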
Technologies:
- FastAPI
- OpenAI GPT
- SerpAPI
- Python, Pydantic
- Uvicorn (async server)
A clean and responsive single-page application built for speed and simplicity.
- 🔎 Intuitive Input Field: Users can enter research questions naturally.
- ⚡ Asynchronous Communication: Smooth interaction with the backend API.
- 📘 Answer Display with Citations: Clearly formatted answers with numbered inline citations and clickable links.
- 🧼 Minimalist UI: Optimized for clarity, accessibility, and ease of use.
Technologies:
- Vite
- React.js
- HTML5 & CSS3
- Axios / Fetch API
Watch the demo here:
1. User submits a research query via the frontend.
2. The backend sends the query to SerpAPI for web results.
3. Relevant snippets are extracted and cleaned.
4. OpenAI GPT generates a summary with numbered citations.
5. The response with citations is sent back and rendered in the UI.
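The end-to-end flow above can be sketched as a single orchestrator. To keep it runnable without network access, the SerpAPI and OpenAI calls are injected as callables; the function name and result shape are assumptions for illustration, not the project's actual interfaces.

```python
# Sketch of the request flow: search -> extract -> summarize -> respond.
# `search(query)` is assumed to return SerpAPI-style dicts with "snippet"
# and "link"; `summarize(query, context)` stands in for the OpenAI call.

def answer_query(query, search, summarize, max_snippets=3):
    results = search(query)
    # Step 3: keep only results with usable snippet text.
    snippets = [r for r in results if r.get("snippet")][:max_snippets]
    # Number the snippets so the model can cite them as [1], [2], ...
    context = "\n".join(
        f"[{i}] {r['snippet']}" for i, r in enumerate(snippets, start=1)
    )
    # Step 4: the LLM writes the summary against the numbered context.
    answer = summarize(query, context)
    # Step 5: citations map back to the source URLs for the UI.
    return {"answer": answer, "sources": [r["link"] for r in snippets]}
```

Injecting the two external calls keeps the control flow unit-testable and makes it easy to swap search or model providers later.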