A modern, full-stack SaaS AI chatbot platform built on a Feature-Driven Development architecture. It supports local LLMs (Ollama, LM Studio) and cloud providers (Google AI, OpenAI, and others) with real-time chat, chat memory, an advanced UI, and production-ready authentication.
Crafted with ❤️ in Paris by kunalsuri, blending Human Intellect (mine) with state-of-the-art Agentic AI Systems (Human-in-the-Loop).
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
This project has been developed using a combination of AI-assisted software development and whiteboarding tools, including (but not limited to) Visual Studio Code, GitHub Copilot Pro, Windsurf, Cursor, and Krio, with Human-in-the-Loop supervision and review.
While every reasonable precaution has been taken, including AI-generated code validation, malware scanning, and static analysis using tools such as CodeQL, the authors and contributors do not accept any responsibility for potential errors, security vulnerabilities, or unintended behavior within the generated code.
This software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and noninfringement.
Use this project at your own discretion and risk.
Please review and validate any AI-generated code before committing or merging changes.
- 🏗️ Feature-Driven Development: Modular architecture with self-contained features
- 🤖 Multi-LLM Support: Local (Ollama, LM Studio) + Cloud (Google AI, OpenAI, and others)
- 💬 Real-time Chat: Streaming responses with conversation history
- 🌐 Translation & Summarization: Built-in AI-powered tools
- 🔒 Enterprise Security: RBAC, CSRF protection, session management
- 📱 Modern UI/UX: React 18, TypeScript, Tailwind CSS, Framer Motion
- 🚀 Production Ready: PWA support, performance monitoring, error boundaries
```bash
# 1. Clone and install
git clone https://github.com/kunalsuri/ai-chatbot-saas-template.git
cd ai-chatbot-saas-template
npm install

# 2. Start development server
npm run dev

# 3. Open http://localhost:5000
```
That's it! The app works immediately with no configuration required.
```bash
# Run all tests, quality checks, and build verification
npm run test:all
```
This single command runs:
- ✅ All unit tests (9 passing tests)
- ✅ TypeScript type checking
- ✅ Code quality analysis
- ✅ Security audit
- ✅ Production build test
- ✅ Coverage report generation
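One plausible wiring for such a composite script in `package.json`; the individual script names below are illustrative guesses, not taken from the repository:

```json
{
  "scripts": {
    "test:all": "npm run test && npm run check && npm run lint && npm audit && npm run build && npm run coverage"
  }
}
```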
For Local LLMs (No API keys needed):
- Install LM Studio or Ollama
- Download a model (e.g., Llama 3.1)
- Start the local server
- Select the provider in the app
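Both LM Studio and Ollama expose an OpenAI-compatible HTTP API, so one client can target either. A minimal sketch, assuming the tools' default base URLs (`http://localhost:1234/v1` for LM Studio, `http://localhost:11434/v1` for Ollama); the model name and function names are illustrative, not identifiers from this template:

```typescript
// Minimal chat-completion call against an OpenAI-compatible local server.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Build the JSON payload for POST {baseUrl}/chat/completions.
function buildChatRequest(model: string, messages: ChatMessage[], stream = true) {
  return { model, messages, stream };
}

async function chat(baseUrl: string, model: string, prompt: string): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(model, [{ role: "user", content: prompt }], false)),
  });
  const data = await res.json();
  return data.choices[0].message.content; // OpenAI-compatible response shape
}

// Example (requires a running local server):
// chat("http://localhost:11434/v1", "llama3.1", "Hello!").then(console.log);
```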
For Cloud Providers:
- Copy `.env.example` to `.env`
- Add your API keys (Google AI, OpenAI, etc.)
- Restart the server
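The variable names below come from the README's `.env` example; the selection logic itself is an illustrative sketch, not the template's actual provider-resolution code:

```typescript
// Illustrative: choose a provider based on which API keys are configured.
// Falling back to a local provider (Ollama / LM Studio) when no keys are
// present is an assumption, not necessarily what this template does.
type Provider = "google" | "openai" | "anthropic" | "local";

function pickProvider(env: Record<string, string | undefined>): Provider {
  if (env.GOOGLE_AI_API_KEY) return "google";
  if (env.OPENAI_API_KEY) return "openai";
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  return "local"; // no keys set: use a local LLM server
}

// pickProvider(process.env) resolves once at startup, after .env is loaded
```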
Each feature is self-contained with its own components, services, types, and routes following modern React and TypeScript best practices:
```
📁 Features (Client & Server)
├── 🔐 auth/              # Authentication & RBAC with session management
├── 💬 chatbot/           # AI Chat with streaming responses
├── 📊 dashboard/         # Analytics with real-time updates
├── 🤖 model-management/  # AI provider management & health monitoring
├── 🌐 translation/       # Translation services with history
├── ✨ prompt-improver/   # AI-powered prompt enhancement
├── 📝 editor/            # Content editing with auto-save
└── ⚙️ settings/          # User preferences & configuration

📦 Modern Infrastructure
├── 🎨 components/ui/     # Accessible Radix UI components
├── 🔧 lib/               # Type-safe utilities & API client
├── 📋 types/             # Strict TypeScript definitions
├── 🔒 security/          # CSRF, rate limiting, validation
└── 📚 docs/              # Comprehensive documentation
```
Modern Development Benefits:
- ✅ React 18+: Concurrent features, Suspense, automatic batching
- ✅ TypeScript Strict: Full type safety with branded types
- ✅ Performance: Code splitting, memoization, optimized queries
- ✅ Accessibility: WCAG 2.1 AA compliance with Radix UI
- ✅ Security: CSRF protection, input validation, secure headers
- ✅ Developer Experience: Hot reload, type-safe APIs, structured logging
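The branded types mentioned above can be sketched as follows; `UserId`, `toUserId`, and `greet` are illustrative names, not identifiers from this codebase:

```typescript
// Branded (nominal) types: a compile-time tag that stops arbitrary strings
// from being passed where a UserId is expected, with zero runtime cost.
type UserId = string & { readonly __brand: "UserId" };

// The only way to obtain a UserId is through this validating constructor.
function toUserId(raw: string): UserId {
  if (!/^[a-z0-9-]+$/i.test(raw)) throw new Error(`invalid user id: ${raw}`);
  return raw as UserId;
}

function greet(id: UserId): string {
  return `hello, ${id}`;
}

const id = toUserId("user-42");
greet(id);           // ok
// greet("user-42"); // compile-time error: string is not a UserId
```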
- 📖 Setup Guide - Quick start guide and setup instructions
- 🏗️ Architecture - Modern system architecture and design patterns
- ⚡ Best Practices - React 18+, TypeScript, and AI development best practices
- 🧪 Testing Guide - Comprehensive testing setup and best practices
- 📘 TypeScript Management - Managing TypeScript errors in large codebases
- 🔧 Contributing - Contribution guidelines and development workflow
- 📚 API Reference - Complete API documentation
- 🚀 Deployment Guide - Production deployment instructions
- 🔒 Security Guide - Security guidelines and best practices
No database setup required! Everything is stored in local JSON files:
```
data/
├── chat_history.json         # Your conversations
├── users.json                # User accounts
├── templates.json            # Saved templates
└── translation_history.json  # Translation history
```
Benefits:
- ✅ Zero configuration
- ✅ Easy to backup/restore
- ✅ Perfect for development and small deployments
- ✅ Optional PostgreSQL support for scaling
LM Studio (Recommended for beginners)
```bash
# 1. Download from https://lmstudio.ai
# 2. Download a model (e.g., Llama 3.1 8B)
# 3. Start local server
# 4. Select "LM Studio" in the app
```
Ollama (Command-line friendly)
```bash
# 1. Install from https://ollama.com
ollama serve
ollama pull llama3.1
# 2. Select "Ollama" in the app
```
Add to `.env` file:
```env
GOOGLE_AI_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
```
```bash
# Build and start
npm run build
npm run dev

# Or run with Docker
docker-compose up -d
```
- Vercel: Click the deploy button above
- Railway: Connect your GitHub repo
- DigitalOcean: Use App Platform
See Deployment Guide for detailed instructions.
Adding a new feature is easy with FDD:
```bash
# 1. Create feature directories
mkdir -p server/features/my-feature/{routes,services,types}
mkdir -p client/src/features/my-feature/{components,hooks,api}

# 2. Implement your feature
# 3. Register routes in server/routes.ts
# 4. Add navigation in client sidebar
# 5. Submit PR
```
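Step 3 above can be sketched without committing to a particular HTTP framework; `registerFeatureRoutes`, the route table, and the handler names are illustrative only, not code from `server/routes.ts`:

```typescript
// Framework-agnostic sketch of route registration: each feature exports its
// routes, and the server merges them under a feature prefix. Names are
// illustrative, not taken from this repository.
type Handler = (body: unknown) => unknown;
type RouteTable = Map<string, Handler>;

function registerFeatureRoutes(
  table: RouteTable,
  prefix: string,
  routes: Record<string, Handler>,
): void {
  for (const [path, handler] of Object.entries(routes)) {
    table.set(`${prefix}${path}`, handler);
  }
}

// server/features/my-feature/routes (sketch)
const myFeatureRoutes: Record<string, Handler> = {
  "/hello": () => ({ message: "hello from my-feature" }),
};

// server/routes.ts (sketch)
const routes: RouteTable = new Map();
registerFeatureRoutes(routes, "/api/my-feature", myFeatureRoutes);
```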
See Development Guide for detailed contribution guidelines.