This repository provides a simple yet powerful example of building a conversational agent with real-time web access, leveraging Tavily's search, extract, and crawl capabilities.
Designed for easy customization, this core implementation can be extended to:
- Integrate proprietary data
- Modify the chatbot architecture
- Swap in different LLMs
Out of the box, the chatbot includes:
- 🔍 Intelligent question routing between base knowledge and Tavily search, extract, and crawl
- 🧠 Conversational memory with LangGraph
- 🚀 FastAPI backend with async support
- 🔄 Streaming of Agentic Substeps
- 💬 Markdown support in chat responses
- 🔗 Citations for web results
- 👁️ Observability with Weave (sketched below)
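As a taste of the observability feature above, Weave tracing typically needs only an init call plus an op decorator. A minimal sketch, with a hypothetical project name and function (not this repository's actual instrumentation):

```python
import weave  # assumes the W&B `weave` package is installed

weave.init("tavily-chatbot")  # hypothetical project name

@weave.op()  # records inputs, outputs, and latency for each call
def answer(question: str) -> str:
    return f"echo: {question}"  # placeholder; the real app invokes the agent

answer("What does Tavily do?")
```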
This repository includes everything required to create a functional chatbot with web access:
📡 Backend (`backend/`)

The core backend logic, powered by Tavily and LangGraph:

- `agent.py` – Defines the ReAct agent architecture, state management, and processing nodes.
- `prompts.py` – Contains customizable prompt templates.
- `app.py` – FastAPI server that handles API endpoints and streaming responses.
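For orientation, here is a minimal sketch of how a ReAct agent with Tavily tools and conversational memory can be wired together in LangGraph. It is illustrative only, not the repository's actual `agent.py`; the model choice, tool configuration, and `langchain-tavily` tool classes are assumptions.

```python
from langchain_openai import ChatOpenAI
from langchain_tavily import TavilySearch, TavilyExtract, TavilyCrawl
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4o")  # assumes OPENAI_API_KEY is set

# The agent routes between its base knowledge and these web tools.
tools = [
    TavilySearch(max_results=5),  # real-time web search
    TavilyExtract(),              # pull content from specific URLs
    TavilyCrawl(),                # crawl a site for pages
]

# MemorySaver gives the agent conversational memory across turns.
agent = create_react_agent(llm, tools, checkpointer=MemorySaver())

# thread_id scopes the memory to a single conversation.
config = {"configurable": {"thread_id": "demo"}}
result = agent.invoke(
    {"messages": [("user", "What changed in the latest LangGraph release?")]},
    config,
)
print(result["messages"][-1].content)
```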
🌐 Frontend (`ui/`)

An interactive React frontend for dynamic user interactions and streamed chatbot responses.
a. Create a `.env` file in the root directory with:

```
TAVILY_API_KEY="your-tavily-api-key"
OPENAI_API_KEY="your-openai-api-key"
VITE_APP_URL=http://localhost:5173
```
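If you need to load these variables in your own scripts, a common pattern is `python-dotenv` (a sketch; it assumes the package is installed and that `.env` sits in the working directory):

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment
tavily_key = os.environ["TAVILY_API_KEY"]  # raises KeyError if the key is missing
```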
b. Create a `.env` file in the `ui` directory with:

```
VITE_BACKEND_URL=http://localhost:8080
```
- Create a virtual environment and activate it:

  ```bash
  python3 -m venv venv
  source venv/bin/activate  # On Windows: .\venv\Scripts\activate
  ```
- Install dependencies:

  ```bash
  python3 -m pip install -r requirements.txt
  ```
- From the root of the project, run the backend server:

  ```bash
  python app.py
  ```
- In a new terminal, navigate to the frontend directory:

  ```bash
  cd ui
  ```
- Install dependencies:

  ```bash
  npm install
  ```
- Start the development server:

  ```bash
  npm run dev
  ```
Open the app in your browser at the locally hosted URL (e.g., http://localhost:5173/).
`POST /stream_agent` – Chat endpoint that handles streamed LangGraph execution.
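For reference, a streaming endpoint of this shape can be sketched in FastAPI as follows. This is not the repository's `app.py`; the request field name and the emitted chunks are placeholders.

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    input: str  # hypothetical field name; check the actual request schema

@app.post("/stream_agent")
async def stream_agent(req: ChatRequest):
    async def event_stream():
        # The real endpoint would iterate over the LangGraph agent's
        # astream() events; placeholder chunks stand in for them here.
        for chunk in ("routing question...\n", "searching the web...\n", "final answer\n"):
            yield chunk

    return StreamingResponse(event_stream(), media_type="text/plain")
```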

Powered by Tavily - the web API built for AI agents