A modern web interface for chatting with local LLMs using OpenAI-compatible APIs. Built with Vue 3, it includes real-time streaming, rich markdown support, and a clean design that works on all devices.
- OpenAI-Compatible API - Works with Ollama, LM Studio, and other local LLM servers
- Model Selection - Choose from available models with automatic loading
- Real-time Streaming - See responses as they are generated
- Stop Button - Cancel generation at any time
- Temperature Control - Adjust creativity with manual or automatic temperature settings
- Markdown Support - Full GitHub Flavored Markdown rendering
- Code Highlighting - Syntax highlighting for 180+ programming languages
- Copy Code Blocks - Easy copy-to-clipboard functionality
- Responsive Tables - Clean table display with hover effects
- Message History - Navigate through previous messages with arrow keys
- Clean Design - Modern, easy-to-use interface
- Works on All Devices - Responsive design for desktop, tablet, and mobile
- Connection Status - Visual indicator showing API connection status
- Conversation Starters - Pre-made prompts to get started quickly
- Personal Prompts - Add your own prompts via GitHub Gist
- Vue 3 - Latest Vue.js framework with Composition API
- Vite - Fast build tool and development server
- Tailwind CSS - Utility-first CSS framework
- Auto-scroll - Messages automatically scroll to show latest content
- Visit the Releases page
- Download the latest chat-ui-vX.X.X.zip file
- Extract the files to a folder
- Serve the files using any web server:

# Using Python
python -m http.server 3000

# Using Node.js
npx serve -s . -p 3000

# Using Bun
bun --bun serve -p 3000 .

- Open http://localhost:3000 in your browser
# Clone the repository
git clone https://github.com/devkabir/chat-ui.git
cd chat-ui
# Install dependencies (Bun is recommended)
bun install # or npm install
# Start development server
bun run dev # or npm run dev
# Build for production
bun run build # or npm run build
bun run preview # or npm run preview
You need an OpenAI-compatible API server running locally. Here are popular options:
Ollama (Recommended):
# Install and run Ollama
ollama serve
ollama run llama2 # or any model you want
LM Studio:
- Download from LM Studio
- Load a model and start the local server
- Default endpoint: http://localhost:1234
The app uses environment variables for API endpoints. You can set them in a .env file:
# .env file (optional)
VITE_API_BASE_URL=http://localhost:1234
If not set, it defaults to http://localhost:1234.
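As an illustrative sketch (the helper name is ours, not necessarily what the repo uses), the Vite convention for reading this variable looks like:

```javascript
// Hypothetical helper: resolve the API base URL from Vite's env object.
// Vite only exposes variables prefixed with VITE_ via import.meta.env.
function resolveApiBaseUrl(env) {
  return (env && env.VITE_API_BASE_URL) || 'http://localhost:1234';
}

// In app code this would be called as: resolveApiBaseUrl(import.meta.env)
```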
Your LLM server must support these endpoints:
- GET /v1/models - Returns a list of available models
- POST /v1/chat/completions - Chat completions (with streaming support)
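To illustrate what "with streaming support" means for a client, here is a minimal sketch (function names are ours, not the repo's actual src/services/llm.js). With stream: true the server sends server-sent-event lines of the form `data: {json}`, terminated by `data: [DONE]`:

```javascript
// Parse one SSE line from a streaming chat completion.
// Returns the delta text, or null for non-data lines and the [DONE] marker.
function parseStreamLine(line) {
  if (!line.startsWith('data: ')) return null;
  const payload = line.slice(6).trim();
  if (payload === '[DONE]') return null;
  const json = JSON.parse(payload);
  return json.choices?.[0]?.delta?.content ?? null;
}

// Usage sketch against a running server (simplified: assumes each read
// contains whole lines, which a real client should not rely on).
async function streamChat(baseUrl, model, messages, onToken) {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, messages, stream: true }),
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const line of decoder.decode(value, { stream: true }).split('\n')) {
      const token = parseStreamLine(line);
      if (token) onToken(token);
    }
  }
}
```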
chat-ui/
├── public/                   # Static files
├── src/
│   ├── components/           # Vue components
│   │   ├── ChatHeader.vue    # Top bar with model info and status
│   │   ├── ChatMessages.vue  # Message list with conversation starters
│   │   ├── ChatInput.vue     # Input area with model selection and controls
│   │   └── MessageBubble.vue # Individual message display
│   ├── services/             # API services
│   │   ├── llm.js            # LLM API calls (streaming and non-streaming)
│   │   └── models.js         # Model list API calls
│   ├── utils/                # Helper functions
│   │   └── markdown.js       # Markdown processing with syntax highlighting
│   ├── views/                # Page components
│   │   └── ChatView.vue      # Main chat page
│   ├── data/                 # Static data
│   │   └── conversationStarters.json # Pre-made conversation prompts
│   ├── App.vue               # Main app component
│   ├── main.js               # App entry point
│   └── style.css             # Global styles
├── .github/workflows/        # GitHub Actions for deployment
├── package.json              # Dependencies and scripts
├── vite.config.js            # Vite build configuration
└── tailwind.config.js        # Tailwind CSS configuration
- Bun (recommended) or Node.js 18+
- Git
- Local LLM Server (Ollama, LM Studio, etc.)
# Install dependencies
bun install
# Start development server (with hot reload)
bun run dev
# Build for production
bun run build
# Preview production build
bun run preview
- Use Vue 3 Composition API for components
- Follow single responsibility principle
- Use proper prop definitions
- Add comments for complex logic
- Tailwind CSS for all styling
- Responsive design using Tailwind breakpoints
- Custom CSS only when absolutely necessary
- Connect to local LLM server
- Send messages and receive responses
- Test streaming and non-streaming modes
- Check markdown rendering (headers, lists, links)
- Test code block syntax highlighting and copy function
- Test table rendering
- Try stop button during message generation
- Test model selection dropdown
- Test temperature controls (manual and auto)
- Test conversation starters
- Test on different screen sizes
- Test message history navigation with arrow keys
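One checklist item above, arrow-key history navigation, can be pictured with this illustrative sketch (names are ours, not the app's actual code): keep an index into past user messages and move it on ArrowUp/ArrowDown.

```javascript
// Hypothetical history navigator: index starts one past the end,
// representing the current (empty) draft.
function createHistoryNavigator(history) {
  let index = history.length;
  return {
    up() { // ArrowUp: step back toward older messages
      if (index > 0) index -= 1;
      return history[index] ?? '';
    },
    down() { // ArrowDown: step forward, ending at the empty draft
      if (index < history.length) index += 1;
      return history[index] ?? '';
    },
  };
}
```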
# Test your LLM server endpoints
curl http://localhost:1234/v1/models
curl -X POST http://localhost:1234/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "your-model-name",
"messages": [{"role": "user", "content": "Hello!"}],
"stream": false
}'
Deploy the dist/ folder to any static hosting service:
- Netlify: Drag and drop or connect to Git
- Vercel: Import GitHub repository
- GitHub Pages: Use included GitHub Actions workflow
- AWS S3 + CloudFront: Static website hosting
FROM nginx:alpine
COPY dist/ /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
# Build and run
docker build -t chat-ui .
docker run -p 8080:80 chat-ui
# Using any static file server
npm install -g serve
serve -s dist -p 3000
Releases are created automatically using GitHub Actions:
# Create and push a new tag
git tag v1.0.0
git push origin v1.0.0
The workflow automatically:
- ✅ Builds the application with Bun
- ✅ Creates production bundle
- ✅ Creates ZIP and tar.gz archives
- ✅ Creates GitHub release with notes
- ✅ Uploads downloadable files
Each release includes:
- chat-ui-vX.X.X.zip - Ready-to-use web application
- chat-ui-vX.X.X.tar.gz - Compressed archive
- Source code - GitHub-generated source files
- Major (v1.0.0): Breaking changes or major new features
- Minor (v1.1.0): New features, backwards compatible
- Patch (v1.1.1): Bug fixes and small improvements
We welcome contributions! Here's how to get started:
- Fork the repository
- Create a feature branch:
git checkout -b feature/new-feature
- Make your changes
- Test your changes
- Commit your changes:
git commit -m 'Add new feature'
- Push to the branch:
git push origin feature/new-feature
- Open a Pull Request
# Fork and clone your fork
git clone https://github.com/YOUR_USERNAME/chat-ui.git
cd chat-ui
# Install dependencies
bun install
# Create feature branch
git checkout -b feature/my-new-feature
# Make changes and test
bun run dev
# Commit and push
git commit -m "Add my new feature"
git push origin feature/my-new-feature
- Vue 3 - JavaScript framework
- Vite - Build tool and development server
- Tailwind CSS - CSS framework
- Marked - Markdown parser
- Highlight.js - Code syntax highlighting
- DOMPurify - HTML sanitizer for security
- Bun - JavaScript runtime (recommended)
- PostCSS - CSS processing
- Autoprefixer - CSS vendor prefixes
- Fetch API - HTTP requests
- AbortController - Request cancellation
- Streaming - Real-time response streaming
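The Stop button combines the last two items. A minimal sketch (names are illustrative, not the app's actual code) of tying a fetch to an AbortController so the request can be cancelled mid-stream:

```javascript
// Create a request whose in-flight fetch can be cancelled from a Stop button.
function createCancellableRequest() {
  const controller = new AbortController();
  const send = (url, body) =>
    fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
      signal: controller.signal, // fetch rejects with an AbortError once aborted
    });
  return { send, cancel: () => controller.abort(), signal: controller.signal };
}
```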
CORS Errors:
# Don't open index.html directly in browser
# Use a development server instead:
bun run dev
API Connection Failed:
- Make sure your LLM server is running
- Check the API endpoint in your environment variables
- Ensure CORS is enabled on your LLM server
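For Ollama specifically, CORS is controlled by the OLLAMA_ORIGINS environment variable; a typical fix (adjust the origin to match your dev server's address) looks like:

```shell
# Allow the Chat UI dev server origin in Ollama's CORS allow-list,
# then start (or restart) the server.
export OLLAMA_ORIGINS="http://localhost:3000"
# then: ollama serve
```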
Build Issues:
# Clear cache and reinstall
rm -rf node_modules dist
bun install
bun run build
Styling Issues:
- Make sure Tailwind CSS is properly imported
- Check for conflicting CSS rules
- Verify build process includes CSS processing
- Use streaming mode for better user experience
- Enable gzip compression on your server
- Monitor bundle size
- Optimize images and assets
This project is licensed under the MIT License - see the LICENSE file for details.
The MIT License allows:
- ✅ Commercial use
- ✅ Modification
- ✅ Distribution
- ✅ Private use

It provides:
- ❌ No warranty
- ❌ No liability
- Vue.js team for the excellent framework
- Vite team for the fast build tool
- Tailwind CSS for the utility-first CSS framework
- Open source community for the amazing libraries and tools
- GitHub Issues: Report bugs and request features
- GitHub Discussions: Community discussions and Q&A
Star ⭐ this repository if you find it useful!