AI-powered support system for EV charging stations with LangGraph orchestration, multi-provider LLM support, and a VAPI-based voice interface with phone calling. The system lets users check station status, reboot charging stations, and get assistance through text or voice interactions.
- 🤖 Intelligent Chatbot: LangGraph-based agent with tool calling capabilities and state management
- ⚡ Station Management: Check status and reboot EV charging stations with safety limits
- 🌐 Multi-LLM Support: Dynamic selection between OpenAI, Ollama, Together AI, Groq, and Gemini
- 💬 Interactive UI: Modern Chainlit interface with problem selection buttons and provider switching
- 🎤 Voice Interface: VAPI integration for voice-based interactions and phone calls
- 🚨 Safety Controls: Limits station reboots to three attempts per 5 minutes
- 🔧 FastAPI Backend: REST API with streaming support and session management (OpenAI-compatible)
- 💾 Singleton Services: Persistent state management across requests
Demos:
- `text_demo.mp4` (text chat)
- `voice_demo.mp4` (voice interface)
- `call_demo.mp4` (phone calling)
```bash
# Clone the repository
git clone https://github.com/extrawest/vapi_ai_chatbot_for_ev_charging.git
cd vapi_ai_chatbot_for_ev_charging

# Install dependencies
pip install -r requirements.txt
```
Copy the example environment file and configure your settings:
```bash
cp .env.example .env
```
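For example, a minimal `.env` could look like the following. The variable names come from the provider table below; the values and the exact provider identifiers are placeholders, and only the variables for the providers you actually use need to be set:

```env
# Default LLM provider (e.g., openai, ollama, together, groq, gemini)
LLM_PROVIDER=openai

OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o-mini

OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3
```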
```bash
# Run the FastAPI server without UI
python -m src.main
```

API documentation: http://localhost:8000/docs

```bash
# Run the Chainlit interface
python run_chainlit_app.py
```

Open http://localhost:8001 in your browser.
- LLM Provider Selection: Choose your preferred AI model from the available providers:
  - 🧠 OpenAI (GPT models)
  - 🦙 Ollama (local open-source models)
  - 🤝 Together AI (various open models)
  - ⚡ Groq (optimized for speed)
  - 👨‍🚀 Gemini (Google's models)
- Common Issues: Quick access buttons for frequent problems:
  - 🔄 Reboot Station
  - 🔌 Connector Stuck
  - 🔴 Station Offline
- Voice Interface: Start a voice call for hands-free assistance:
  - Click the "🎤 Start Voice Call" button
  - Speak naturally to the assistant
  - The system will process your voice commands and respond verbally
- Station Operations Flow:
  - Provide your station ID (e.g., "ST001")
  - The system checks station status automatically
  - If issues are detected, the system guides you through troubleshooting
  - Reboot option with safety limits (max 3 reboots per 5 minutes; see the sketch below)
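The reboot safety limit could be enforced with a simple sliding-window counter. The following is a minimal illustrative sketch, not the repository's actual implementation; the class and method names are hypothetical:

```python
import time
from collections import defaultdict, deque

MAX_REBOOTS = 3
WINDOW_SECONDS = 300  # 5 minutes

class RebootLimiter:
    """Hypothetical sliding-window limiter: at most 3 reboots per station per 5 minutes."""

    def __init__(self):
        # Per-station timestamps of recent reboot attempts
        self._attempts: dict[str, deque[float]] = defaultdict(deque)

    def try_reboot(self, station_id: str) -> bool:
        """Record a reboot attempt; return False once the limit is reached."""
        now = time.monotonic()
        attempts = self._attempts[station_id]
        # Drop timestamps that have fallen out of the 5-minute window
        while attempts and now - attempts[0] > WINDOW_SECONDS:
            attempts.popleft()
        if len(attempts) >= MAX_REBOOTS:
            return False  # limit reached; ask the user to wait
        attempts.append(now)
        return True

limiter = RebootLimiter()
assert limiter.try_reboot("ST001")  # first attempt succeeds
```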
```bash
curl -X POST "http://localhost:8000/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello"}],
    "provider": "openai",
    "stream": true,
    "session_id": "user-session-123",
    "user_id": "user-123"
  }'
```

The `provider` field is optional; omit it to use the default provider configured in `.env`.
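Because the backend is OpenAI-compatible, the official `openai` Python client should also work against it. A minimal sketch, assuming the server runs on port 8000 and accepts the extra fields shown above; the `model` value and the dummy API key are placeholders that the backend may ignore:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local backend (assumed URL)
client = OpenAI(base_url="http://localhost:8000", api_key="not-needed")

stream = client.chat.completions.create(
    model="default",  # placeholder; check what the backend expects here
    messages=[{"role": "user", "content": "Check station ST001"}],
    stream=True,
    # Backend-specific fields are passed through extra_body
    extra_body={
        "provider": "openai",
        "session_id": "user-session-123",
        "user_id": "user-123",
    },
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```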
The application supports multiple LLM providers through a unified interface. Users can dynamically switch between providers during a chat session via the UI buttons or API parameters.
| Provider | Icon | Configuration |
|----------|------|---------------|
| OpenAI | 🧠 | `OPENAI_API_KEY` and `OPENAI_MODEL` |
| Ollama | 🦙 | `OLLAMA_BASE_URL` and `OLLAMA_MODEL` |
| Together AI | 🤝 | `TOGETHER_API_KEY` and `TOGETHER_MODEL` |
| Groq | ⚡ | `GROQ_API_KEY` and `GROQ_MODEL` |
| Gemini | 👨‍🚀 | `GEMINI_API_KEY` and `GEMINI_MODEL` |
- Default Provider: Set the `LLM_PROVIDER` variable in `.env`
- UI Selection: Click provider buttons at chat start
- API Override: Specify the `provider` parameter in API requests
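One common way to build such a unified interface is a small factory that maps provider names to LangChain chat models. The sketch below is an assumption about the approach, not the repository's actual code; it requires the corresponding `langchain-*` packages, and the default model names are placeholders:

```python
import os

from langchain_openai import ChatOpenAI
from langchain_ollama import ChatOllama
from langchain_together import ChatTogether
from langchain_groq import ChatGroq
from langchain_google_genai import ChatGoogleGenerativeAI

def create_llm(provider: str | None = None):
    """Return a chat model for the requested provider (default from .env)."""
    provider = provider or os.getenv("LLM_PROVIDER", "openai")
    if provider == "openai":
        return ChatOpenAI(model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"))
    if provider == "ollama":
        return ChatOllama(
            model=os.getenv("OLLAMA_MODEL", "llama3"),
            base_url=os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
        )
    if provider == "together":
        return ChatTogether(model=os.getenv("TOGETHER_MODEL", "meta-llama/Llama-3-8b-chat-hf"))
    if provider == "groq":
        return ChatGroq(model=os.getenv("GROQ_MODEL", "llama-3.1-8b-instant"))
    if provider == "gemini":
        return ChatGoogleGenerativeAI(
            model=os.getenv("GEMINI_MODEL", "gemini-1.5-flash"),
            google_api_key=os.getenv("GEMINI_API_KEY"),
        )
    raise ValueError(f"Unknown LLM provider: {provider}")
```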
The application integrates with VAPI (Voice API) to provide voice-based interactions with the chatbot.
- Natural Voice Conversations: Speak directly to the assistant, including over phone calls
- Custom Voice Configuration: Configurable voice model and characteristics
- Direct LLM Integration: Uses the same LLM backend as the chat interface
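Since the backend exposes an OpenAI-compatible `/chat/completions` endpoint, it can plausibly be wired to VAPI as a custom LLM. The JSON below sketches the shape such an assistant configuration might take based on VAPI's custom-LLM option; the URL, model name, voice settings, and greeting are placeholders, not the project's actual configuration, and the exact URL format should be checked against VAPI's documentation:

```json
{
  "name": "EV Charging Support",
  "model": {
    "provider": "custom-llm",
    "url": "https://your-public-host/chat/completions",
    "model": "default"
  },
  "voice": {
    "provider": "11labs",
    "voiceId": "your-voice-id"
  },
  "firstMessage": "Hi, this is EV charging support. How can I help?"
}
```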
The chatbot uses LangGraph to orchestrate conversation flow with a structured state graph:
1. Message Processing:
   - Receives user input and session context
   - Loads persistent state from `ChatService`
   - Formats system instructions for the LLM
2. Tool Node:
   - Analyzes user intent with the selected LLM provider
   - Decides whether to use station tools
   - Handles tool execution and result processing
3. Reboot Management:
   - Tracks reboot attempts with safety limits (3 per 5 minutes)
   - Stores timestamps for rate limiting
   - Provides appropriate feedback when limits are reached
4. Response Generation:
   - Formats responses based on tool execution results
   - Saves conversation history to persistent storage
   - Returns structured responses to the UI
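A minimal LangGraph wiring consistent with this flow might look like the sketch below, using LangGraph's prebuilt `ToolNode` and `tools_condition`. The tool bodies, tool names, and model name are hypothetical stand-ins, not the repository's actual code:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

@tool
def check_station_status(station_id: str) -> str:
    """Check the current status of a charging station."""
    return f"Station {station_id} is online."  # placeholder

@tool
def reboot_station(station_id: str) -> str:
    """Reboot a charging station (subject to rate limits)."""
    return f"Reboot of {station_id} initiated."  # placeholder

tools = [check_station_status, reboot_station]
llm_with_tools = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)  # placeholder model

def agent(state: MessagesState) -> dict:
    # Let the LLM decide whether to answer directly or call a station tool
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("agent", agent)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", tools_condition)  # routes to "tools" or END
builder.add_edge("tools", "agent")  # feed tool results back to the agent
graph = builder.compile()
```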
The application implements real-time message streaming using LangGraph's built-in capabilities. This enables:
- Progressive updates as the LLM generates responses
- Real-time feedback during tool execution (e.g., "Checking station status...")
- Improved user experience with immediate feedback
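For example, a compiled graph can be streamed with `stream_mode="messages"`, which yields LLM tokens as they are produced. A minimal sketch, assuming the `graph` compiled in the orchestration sketch above:

```python
from langchain_core.messages import HumanMessage

# Stream LLM tokens as the graph runs
inputs = {"messages": [HumanMessage(content="Please reboot station ST001")]}
for token, metadata in graph.stream(inputs, stream_mode="messages"):
    # Each item is a message chunk plus metadata about the node that produced it
    if token.content:
        print(token.content, end="", flush=True)
```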