A powerful Next.js application for chatting with multiple AI models simultaneously and comparing their responses in real time. Built with modern web technologies and featuring a beautiful, responsive UI.
Created by Kushagra
- OpenAI GPT-5 - OpenAI's most advanced reasoning model
- Claude 4 Sonnet - Anthropic's most capable model with enhanced reasoning
- DeepSeek V3 - DeepSeek's latest flagship model
- Gemini 2.0 Flash - Google's next-generation multimodal AI
- Perplexity Sonar - Online search-powered AI model
- Multi-Model Chat: Compare responses from multiple AI models simultaneously
- Single Model Mode: Focus on conversation with one specific model
- Chat Sessions: Save and manage conversation history
- Fork Conversations: Branch off specific model responses into single-model chats
- Real-time Streaming: See responses as they're generated
- Markdown Support: Proper formatting for code, lists, and rich text
- Dark/Light Mode: Toggle between themes
- Responsive Design: Works seamlessly on desktop, tablet, and mobile
- Horizontal Scrolling: Navigate between model responses easily
- Analytics Dashboard: Track API usage and model performance
- Model Logos: Visual identification for each AI provider
- SQLite Database: Local storage for chat sessions and messages
- Upstash Redis: Analytics and caching layer
- Session Management: Organize and search through chat history
- Export Options: Save conversations for future reference
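The real-time streaming feature above is typically consumed with a reader loop over the response body. A minimal sketch, assuming the single-model endpoint streams plain-text chunks (the actual wire format may differ):

```typescript
// Read a streamed response body chunk by chunk, invoking a callback
// per chunk so the UI can render tokens as they arrive.
async function readStream(
  stream: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void
): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text); // update the UI incrementally
  }
  return full;
}
```

In the browser this would be called with `response.body` from a `fetch` against the streaming chat endpoint.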
- Node.js 18+ and npm
- API Keys from AI providers (see Environment Variables section)
- Upstash Redis account (optional, for analytics)
1. Clone the repository

   ```bash
   git clone https://github.com/kushagra-18/multi-model-chat.git
   cd multi-model-chat
   ```

2. Install dependencies

   ```bash
   npm install
   ```

3. Set up environment variables

   ```bash
   cp .env.example .env
   ```

   Edit `.env` and add your API keys (see Environment Variables section below).

4. Initialize the database

   The SQLite database will be created automatically when you start the application.

5. Start the development server

   ```bash
   npm run dev
   ```

6. Open your browser

   Navigate to http://localhost:3000
Create a `.env` file in the root directory with the following variables:
| Variable | Provider | Get From | Required |
|---|---|---|---|
| `OPENAI_API_KEY` | OpenAI | Platform Dashboard | Optional |
| `ANTHROPIC_API_KEY` | Anthropic | Console | Optional |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Google | AI Studio | Yes |
| `DEEPSEEK_API_KEY` | DeepSeek | Platform | Optional |
| `PERPLEXITY_API_KEY` | Perplexity | Settings | Optional |
| Variable | Service | Get From | Purpose |
|---|---|---|---|
| `KV_REST_API_URL` | Upstash Redis | Console | Analytics |
| `KV_REST_API_TOKEN` | Upstash Redis | Console | Analytics |
Note: At minimum, you need `GOOGLE_GENERATIVE_AI_API_KEY`, as all requests are currently shadowed to Gemini 2.0 Flash for demo purposes. The other API keys enable the full multi-model experience.
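A minimal `.env` might look like this (all values are placeholders; key formats vary by provider):

```env
# Minimum for the demo (all requests currently go to Gemini 2.0 Flash)
GOOGLE_GENERATIVE_AI_API_KEY=your-google-ai-studio-key

# Optional: enable the full multi-model experience
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
DEEPSEEK_API_KEY=your-deepseek-key
PERPLEXITY_API_KEY=your-perplexity-key

# Optional: Upstash Redis for analytics
KV_REST_API_URL=https://your-instance.upstash.io
KV_REST_API_TOKEN=your-upstash-token
```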
```
multi-model-chat/
├── app/                  # Next.js app router
│   ├── api/              # API routes
│   │   ├── chat/         # Chat endpoints
│   │   ├── chats/        # Session management
│   │   └── analytics/    # Usage tracking
│   ├── globals.css       # Global styles
│   └── page.tsx          # Main application
├── components/           # React components
│   ├── ui/               # Reusable UI components
│   └── chat/             # Chat-specific components
├── lib/                  # Utilities and configurations
│   ├── models.ts         # AI model definitions
│   ├── ai-providers.ts   # Provider clients
│   └── database.ts       # SQLite operations
├── types/                # TypeScript type definitions
└── public/               # Static assets
```
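The model registry in `lib/models.ts` presumably maps model ids to providers and the environment keys that enable them. A hypothetical sketch (the names and fields below are illustrative, not the actual definitions):

```typescript
// Hypothetical shape for entries in lib/models.ts.
interface ModelDefinition {
  id: string;
  name: string;
  provider: "openai" | "anthropic" | "google" | "deepseek" | "perplexity";
  envKey: string; // API key variable that enables this model
}

const MODELS: ModelDefinition[] = [
  { id: "gpt-5", name: "OpenAI GPT-5", provider: "openai", envKey: "OPENAI_API_KEY" },
  { id: "claude-4-sonnet", name: "Claude 4 Sonnet", provider: "anthropic", envKey: "ANTHROPIC_API_KEY" },
  { id: "gemini-2.0-flash", name: "Gemini 2.0 Flash", provider: "google", envKey: "GOOGLE_GENERATIVE_AI_API_KEY" },
];

// A model is usable only when its API key is present in the environment.
function enabledModels(env: Record<string, string | undefined>): ModelDefinition[] {
  return MODELS.filter((m) => Boolean(env[m.envKey]));
}
```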
- Enter your message in the input field at the bottom
- Press Enter or click Send to query all enabled models
- Compare responses side by side in the grid layout
- Scroll horizontally to view all model responses
- Use action buttons to copy, like, or fork responses
- Click the "Single Model" button in the header
- Select your preferred model from the dropdown
- Have a focused conversation with one AI model
- Switch back to multi-model anytime
- Save Sessions: Conversations are automatically saved
- Access History: Click the history icon to view past chats
- Fork Conversations: Click the fork button on any model response to create a new single-model session
- Search Chats: Use the sidebar to find specific conversations
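The fork flow above can be sketched as a pure function over session data: copy the shared history plus the chosen model's reply into a new single-model session. The `ChatMessage`/`ChatSession` shapes below are hypothetical; the real types live in `types/` and may differ:

```typescript
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
  modelId?: string; // set on assistant messages
}

interface ChatSession {
  id: string;
  modelId: string | null; // null = multi-model session
  messages: ChatMessage[];
}

// Fork a multi-model session at a given assistant response.
function forkSession(source: ChatSession, forkIndex: number, newId: string): ChatSession {
  const pivot = source.messages[forkIndex];
  if (!pivot || pivot.role !== "assistant" || !pivot.modelId) {
    throw new Error("Can only fork from a model response");
  }
  const targetModel = pivot.modelId;
  // Keep user turns and only the chosen model's earlier replies.
  const history = source.messages
    .slice(0, forkIndex)
    .filter((m) => m.role === "user" || m.modelId === targetModel);
  return { id: newId, modelId: targetModel, messages: [...history, pivot] };
}
```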
- `POST /api/chat/multi` - Multi-model chat
- `POST /api/chat` - Single-model chat (streaming)

- `GET /api/chats` - List all chat sessions
- `POST /api/chats` - Create a new session
- `GET /api/chats/[id]` - Get a specific session
- `POST /api/chats/[id]/messages` - Add a message to a session

- `POST /api/analytics/track` - Track usage events
- `GET /api/analytics/model-stats` - Get model statistics
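As a sketch of calling the multi-model endpoint from a client, here is a hypothetical request shape and payload builder; the actual contract is whatever the route handlers in `app/api/` expect:

```typescript
// Hypothetical request body for POST /api/chat/multi.
interface MultiChatRequest {
  sessionId: string;
  message: string;
  models: string[];
}

function buildMultiChatRequest(sessionId: string, message: string, models: string[]): string {
  if (models.length === 0) {
    throw new Error("Enable at least one model");
  }
  const payload: MultiChatRequest = { sessionId, message, models };
  return JSON.stringify(payload);
}

// Usage (sketch):
// await fetch("/api/chat/multi", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: buildMultiChatRequest("s1", "Hi", ["gpt-5", "gemini-2.0-flash"]),
// });
```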
```bash
# Run tests
npm test

# Build for production
npm run build

# Start the production server
npm start

# Lint the code
npm run lint
```
- Fork Button Limitation: The fork button currently only shows a toast notification; it doesn't switch to single-model mode automatically
- Model Selection: All API calls are shadowed to Gemini 2.0 Flash for demo purposes
- Timestamp Type Mismatch: Some TypeScript warnings related to Date vs number types
- Session Loading: Forked sessions may not load conversation history in single-model mode
- Horizontal Scroll: Grid layout may not work perfectly on all screen sizes
- Message Overflow: Long messages might overflow card boundaries in some cases
- Real-time Updates: Analytics refresh every 30 seconds, not real-time
- Mobile Experience: Some UI elements may be cramped on very small screens
- SQLite: May not scale for high-traffic deployments (consider PostgreSQL)
- API Rate Limits: No built-in rate limiting for AI provider APIs
- Memory Usage: Long conversations may impact browser performance
- Concurrent Requests: Multiple model requests happen in parallel (may hit rate limits)
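The parallel fan-out behind that last point can be sketched with `Promise.allSettled`, so one provider's failure or rate limit doesn't sink the others. `fanOut` and `AskModel` are hypothetical names, not the app's actual code:

```typescript
// Hypothetical per-model call; in the app this would hit a provider client.
type AskModel = (modelId: string, prompt: string) => Promise<string>;

// Send a prompt to every enabled model in parallel and keep
// whatever succeeds; failed providers are simply dropped.
async function fanOut(
  models: string[],
  prompt: string,
  ask: AskModel
): Promise<Record<string, string>> {
  const settled = await Promise.allSettled(models.map((m) => ask(m, prompt)));
  const out: Record<string, string> = {};
  settled.forEach((result, i) => {
    if (result.status === "fulfilled") out[models[i]] = result.value;
  });
  return out;
}
```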
- User Authentication: No login system implemented
- Conversation Export: No built-in export to PDF/Word functionality
- Custom Models: Can't add custom AI models without code changes
- File Uploads: No support for document or image uploads
- Voice Input: No speech-to-text functionality
We welcome contributions! Here's how you can help:
- Use the GitHub Issues page
- Include steps to reproduce the bug
- Add screenshots or screen recordings if helpful
- Specify your browser and operating system
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes with proper commit messages
- Add tests if applicable
- Submit a pull request with a clear description
```bash
# Clone your fork
git clone https://github.com/your-username/multi-model-chat.git
cd multi-model-chat

# Install dependencies
npm install

# Create environment file
cp .env.example .env
# Add your API keys

# Start development server
npm run dev
```
This project is open source and available under the MIT License.
- Vercel AI SDK - Powerful AI integration toolkit
- Next.js - React framework for production
- Tailwind CSS - Utility-first CSS framework
- Radix UI - Accessible component primitives
- Lucide Icons - Beautiful icon library
- Upstash - Redis-compatible database
- Documentation: Check this README and inline code comments
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Creator: @i_kushagra on X/Twitter
- Fix fork-to-single-model functionality
- Improve mobile responsiveness
- Add conversation export features
- Implement proper error boundaries
- Add user authentication system
- Support for file uploads (images, documents)
- Custom model integration
- Advanced analytics dashboard
- Voice input/output support
- Collaborative chat sessions
- Plugin system for extensions
- Self-hosted deployment guides
Kushagra is a passionate developer creating innovative AI applications and tools.
- Twitter/X: @i_kushagra
- GitHub: kushagra
Made with ❤️ for the AI community

Star ⭐ this repo if you found it helpful!