This project provides a solid foundation for building modern web applications using a curated stack of technologies focused on developer experience, performance, and type safety.
- Framework: TanStack Start on Vite + Vinxi (Modern React foundation with SSR)
- Routing: TanStack Router (Type-safe client and server routing)
- API: tRPC (End-to-end typesafe APIs)
- Database: Drizzle ORM with Neon + Vite Plugin (Auto-provisioned serverless PostgreSQL)
- UI: React 19, Tailwind CSS, shadcn/ui, Lucide Icons
- State Management: TanStack Query (Server State), TanStack Store (Client State), TanStack DB (Reactive Collections)
- Forms: React Hook Form, TanStack Form, Zod (Validation)
- Authentication: Better Auth (Details below)
- Email: Resend, React Email (Transactional emails)
- Monitoring: Sentry (Error tracking and performance monitoring)
- Testing: Vitest (Unit/Integration testing)
- Tooling & DX: Biome (Linting/Formatting), T3 Env, TypeScript
- AI: @ai-sdk/react, ai (Ready for AI features)
- i18n: i18next (Internationalization)
- Route: /dashboard/tanstack-db-example
  - What it demonstrates: Reactive collections with live queries, multiple filtered views (All, Pending, Completed), and seamless integration with TanStack Query/tRPC.
  - Docs: TanStack DB Overview
This boilerplate uses Neon as the primary database solution, providing a modern serverless PostgreSQL experience that's perfect for full-stack applications.
Neon is a serverless PostgreSQL platform that separates storage and compute, offering several key advantages:
- Zero-Friction Setup: Create databases instantly without signup via neon.new - perfect for prototyping
- Serverless Architecture: Automatically scales to zero when not in use, reducing costs
- Instant Provisioning: Create database branches in seconds, not minutes
- Developer-Friendly: Built-in connection pooling, automatic backups, and point-in-time recovery
- Database Branching: Create database branches for each feature, just like Git branches
- Modern Tooling: Native integration with popular ORMs like Drizzle, Prisma, and more
- Global Edge: Low-latency read replicas across multiple regions
Get started with Neon in minutes by choosing any of these three methods for instant database setup:
Method A: Browser Setup

- Visit neon.new to instantly create a new PostgreSQL database

Method B: CLI Script

- Use our built-in script for instant setup:

  ```bash
  bun run db:neon-setup
  # This runs: npx neondb --yes
  ```

Method C: Automatic Vite Plugin (Recommended)

- Already configured! The `@neondatabase/vite-plugin-postgres` plugin automatically:
  - Checks for `DATABASE_URL` in your `.env` file on first `bun run dev`
  - Creates a claimable Neon database if not found
  - Writes the connection string directly to your `.env` file
  - Provides both direct and pooled connection strings
  - Gives you a 7-day claim URL to take ownership

Just run `bun run dev` and you're ready to go! ✨
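For reference, the plugin is registered in the Vite config roughly as sketched below. This is a minimal sketch that assumes the package exposes a default plugin factory with sensible defaults; check the project's actual `vite.config.ts` for the exact import and options.

```ts
// vite.config.ts (sketch) - assumes a default export from @neondatabase/vite-plugin-postgres
import { defineConfig } from "vite";
import postgresPlugin from "@neondatabase/vite-plugin-postgres";

export default defineConfig({
  plugins: [
    // On first `bun run dev`, provisions a claimable Neon database when
    // DATABASE_URL is missing and writes the connection string to .env
    postgresPlugin(),
    // ...the rest of the project's plugins (TanStack Start, etc.)
  ],
});
```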
All methods create a temporary Neon database instantly without requiring any login or signup. Perfect for:
- Quick prototyping and testing
- Open-source templates and demos
- Getting started immediately without account setup
- Temporary development environments
Note: The database expires after 72 hours unless you claim it with a free Neon account.
For a permanent database, claim your temporary one or create a new project with a free Neon account:

- Sign up: Create a free account at neon.com
- Create project: Set up a permanent database in your dashboard
- Get connection string: Copy from your Neon project dashboard
Copy your Neon connection string to your .env file:
```env
DATABASE_URL=postgresql://[user]:[password]@[endpoint]/[database]?sslmode=require
```

The boilerplate's Neon integration includes:

- Automatic Database Provisioning: Pre-configured with `@neondatabase/vite-plugin-postgres` for zero-config database setup
- Drizzle Integration: Pre-configured with `@neondatabase/serverless` for optimal performance
- Connection Pooling: Built-in pooling for better connection management
- Type Safety: Full TypeScript support with Drizzle ORM
- Migrations: Seamless database schema management with Drizzle Kit
- Development Workflow: Database branching for feature development
- Auto Environment Setup: Plugin automatically writes connection strings to your `.env` file
One of Neon's most powerful features is database branching, allowing you to:
- Create feature branches: Each feature gets its own database copy
- Test safely: Make schema changes without affecting production
- Collaborate effectively: Team members can work with isolated data
- Deploy confidently: Merge database changes alongside code changes
For more information about Neon's features and capabilities, visit neon.com.
The boilerplate includes several AI-powered chat features and file handling capabilities:
- Basic Chat: Simple streaming chat interface powered by OpenAI's GPT-4o.
- Vercel v0 Chat: Advanced chat interface using Vercel's v0-1.0-md model for web development assistance.
- Image Generation: AI-based image generation within chat using the AI SDK.
- RAG (Retrieval Augmented Generation): Chat with context from your knowledge base:
- Upload documents to be processed into embeddings
- AI responses enhanced with information retrieved from your documents
- Knowledge base searching before answering questions
- File Upload: PDF document processing for knowledge base:
- Drag-and-drop interface with progress indicators
- PDF text extraction and embedding generation
- Uses tRPC v11's FormData and non-JSON content type support
The implementation leverages tRPC v11's support for FormData and various content types, making it easy to handle file uploads directly through your type-safe API without additional libraries.
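As an illustration of that capability, a file-upload procedure can accept `FormData` directly as its input. The snippet below is a simplified sketch: the router and procedure names are hypothetical, not the project's actual implementation.

```ts
import { initTRPC } from "@trpc/server";
import { z } from "zod";

const t = initTRPC.create();

export const fileRouter = t.router({
  // tRPC v11 accepts non-JSON content types, so FormData can be used as the input
  uploadPdf: t.procedure
    .input(z.instanceof(FormData))
    .mutation(async ({ input }) => {
      const file = input.get("file");
      if (!(file instanceof File)) {
        throw new Error("Expected a 'file' field containing a PDF");
      }
      // Hypothetical next steps: extract text, chunk it, generate embeddings,
      // and store them in the knowledge base (see src/features/ai-embedding.ts)
      const bytes = await file.arrayBuffer();
      return { name: file.name, size: bytes.byteLength };
    }),
});
```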
The boilerplate includes integration with Vercel's v0 API, which provides an AI model specifically designed for building modern web applications. The v0-1.0-md model is framework-aware, evaluated on modern stacks like Next.js and Vercel, and includes features like auto-fix and quick edit capabilities.
Features:
- Framework-aware completions optimized for Next.js and modern web stacks
- Streaming responses with low latency
- OpenAI-compatible API format
- Multimodal support (text and image inputs)
- Auto-fix for common coding issues
- Optimized for frontend and full-stack web development
Implementation:
- API Route: `src/routes/api/ai/vercel/chat.ts` - Handles streaming chat with the v0 model (sketched below)
- Chat Interface: `src/routes/dashboard/chat/vercel.tsx` - Frontend chat component for v0 interactions
- Model: Uses `vercel("v0-1.0-md")` via the `@ai-sdk/vercel` package
Setup:
- Requirements: Vercel Premium or Team plan with usage-based billing enabled
- API Key: Create an API key at v0.dev
- Environment: Add your v0 API key to your environment variables:

  ```env
  V0_API_KEY=your_v0_api_key_here
  ```
Usage:
Navigate to /dashboard/chat/vercel to access the v0-powered chat interface. This chat is specifically optimized for web development questions and can help with:
- Next.js application development
- React component creation
- TailwindCSS styling
- TypeScript implementation
- Modern web development patterns
API Limits:
- Max messages per day: 200
- Max context window: 128,000 tokens
- Max output context: 32,000 tokens
For higher limits, contact Vercel support at support@v0.dev.
This boilerplate includes TanStack DB, a reactive client store for building super fast apps with live queries and optimistic mutations. TanStack DB provides a powerful abstraction for managing complex client-side data with automatic reactivity.
TanStack DB is a reactive data management library that extends TanStack Query with:
- Collections: Typed sets of objects that can be populated with data
- Live Queries: Reactive queries that automatically update when underlying data changes
- Optimistic Mutations: Instant UI updates with automatic rollback on errors
- Differential Updates: Incremental query result updates for blazing fast performance
Visit /dashboard/tanstack-db-example to see TanStack DB in action with a reactive todo application that demonstrates:
```tsx
// One collection, three reactive views
const { data: allTodos } = useLiveQuery((q) =>
  q.from({ todo: todoCollection }),
);

const { data: completed } = useLiveQuery((q) =>
  q
    .from({ todo: todoCollection })
    .where(({ todo }) => eq(todo.completed, true)),
);

const { data: pending } = useLiveQuery((q) =>
  q
    .from({ todo: todoCollection })
    .where(({ todo }) => eq(todo.completed, false)),
);
```

- All Todos: Shows complete list with real-time updates
- Pending Todos: Live-filtered view of incomplete tasks
- Completed Todos: Live-filtered view of finished tasks
- When data changes → all views update automatically without manual refetch
```tsx
// ❌ Without TanStack DB - Multiple API calls
const allTodos = useQuery(['todos']);
const completedTodos = useQuery(['todos', 'completed']);
const pendingTodos = useQuery(['todos', 'pending']);

// ✅ With TanStack DB - Single data source, multiple reactive views
const todoCollection = createCollection(queryCollectionOptions({...}));
// All live queries automatically sync from the same collection
```

| Feature | Benefit | Implementation |
|---|---|---|
| Live Queries | Automatic reactive updates when data changes | Multiple filtered views update simultaneously |
| Reactive Updates | No manual refetch needed | Changes propagate through all views instantly |
| Filtered Views | Multiple live-filtered views from same data source | Pending/Completed sections filter automatically |
| Type Safety | Full TypeScript support | End-to-end type safety with schema validation |
Perfect for applications with:
- Multiple views of the same data (dashboards, admin panels)
- Complex client-side filtering and aggregations
- Real-time data requirements (when paired with WebSockets/SSE)
- Large datasets requiring efficient updates
- Complex data relationships and joins
Consider regular TanStack Query for:
- Simple CRUD operations
- Basic server state management
- Applications with minimal data transformation needs
The example shows how TanStack DB integrates seamlessly with your existing tRPC endpoints:
```tsx
const todoCollection = createCollection(
  queryCollectionOptions<Todo>({
    queryKey: ['todos'],
    queryFn: async () => {
      const data = await todos.refetch();
      return data.data ?? [];
    },
    queryClient,
    getKey: (item) => item.id,
  }),
);
```

- Differential Updates: Only affected UI components re-render
- Single Data Source: Eliminates duplicate API calls for related views
- Optimistic Updates: Instant feedback with automatic error handling (see the sketch below)
- Memory Efficient: Shared data across multiple query consumers
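Optimistic mutations are configured on the collection itself. The following is a hedged sketch using the query-collection `onInsert` handler; the tRPC endpoints, import paths, and `Todo`/`queryClient` references are illustrative and come from the app's own setup, not this exact code.

```tsx
import { createCollection } from '@tanstack/react-db';            // import paths may vary by version
import { queryCollectionOptions } from '@tanstack/query-db-collection';

// Assumes `Todo`, `trpc`, and `queryClient` are provided by the app's existing setup
const todoCollection = createCollection(
  queryCollectionOptions<Todo>({
    queryKey: ['todos'],
    queryFn: async () => trpc.todos.list.query(),                 // illustrative tRPC endpoint
    queryClient,
    getKey: (item) => item.id,
    // Persist optimistic inserts; TanStack DB rolls the UI back automatically on error
    onInsert: async ({ transaction }) => {
      const { modified } = transaction.mutations[0];
      await trpc.todos.create.mutate(modified);                   // illustrative tRPC endpoint
    },
  }),
);

// In a component: the UI updates instantly, then syncs with the server
todoCollection.insert({ id: crypto.randomUUID(), text: 'Buy milk', completed: false });
```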
- TanStack DB Documentation: https://tanstack.com/db/latest
- Live Queries Guide: https://tanstack.com/db/latest/docs/live-queries
- Collection Options: https://tanstack.com/db/latest/docs/overview
- Example Code: See `src/routes/dashboard/tanstack-db-example.tsx`
This implementation showcases TanStack DB's core strength: efficient client-side data management with automatic reactivity. For real-time server updates, pair with WebSocket backends or sync engines like ElectricSQL.
This boilerplate includes a fully functional Model Context Protocol (MCP) server that allows AI assistants like Claude Desktop and Cursor to interact with your application's tools and data in real-time.
The Model Context Protocol is a standard for connecting AI assistants to external tools and data sources. It enables your AI assistant to execute functions, access APIs, and interact with your application directly from the chat interface.
The MCP server is implemented using the @vercel/mcp-adapter package, adapted for TanStack Start (instead of Next.js). The implementation consists of:
- MCP Handler: Located at `src/routes/api/ai/mcp/$transport.ts`
- Tools Definition: Located at `src/lib/ai/mcp-tools.ts`
The boilerplate comes with several example tools that demonstrate different capabilities:
- `getCatFact`: Fetches random cat facts from an external API
- `getQuote`: Retrieves inspirational quotes from an external API
- `getJoke`: Gets random programming jokes from an external API
- `getWelcomeMessage`: Simple greeting with parameter input
- `calculateBMI`: BMI calculator with weight and height parameters
- `getTodos`: Retrieves todos from the application's database (demonstrates database integration)
Add the following configuration to your Claude Desktop config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
```json
{
  "mcpServers": {
    "your-app": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:3000/api/ai/mcp/mcp"]
    }
  }
}
```

Add the following configuration to your Cursor MCP config file at `~/.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "your-app": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:3000/api/ai/mcp/mcp"]
    }
  }
}
```

To add new tools to your MCP server:
1. Define the tool function in `src/lib/ai/mcp-tools.ts`:

```ts
const yourNewTool = async ({ param }: { param: string }) => {
  // Your tool logic here
  return {
    content: [{ type: 'text', text: `Result: ${param}` }],
  };
};
```

2. Add the tool to the tools array:
```ts
export const tools = [
  // ... existing tools
  {
    name: 'yourNewTool',
    description: 'Description of what your tool does',
    callback: yourNewTool,
    inputSchema: z.object({
      param: z.string(),
    }),
  },
];
```

3. Restart your development server and the AI assistant to pick up the new tool.
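For a database-backed tool like `getTodos`, the callback can query Drizzle directly. The sketch below is illustrative: the import paths and table name are assumptions, not necessarily the project's exact schema.

```ts
import { db } from '~/lib/db';           // illustrative import path
import { todo } from '~/lib/db/schema';  // illustrative table name

// Register this callback in the tools array like the example above,
// with inputSchema: z.object({}) since it takes no parameters.
const getOpenTodos = async () => {
  const rows = await db.select().from(todo).limit(20);
  return {
    content: [{ type: 'text', text: JSON.stringify(rows, null, 2) }],
  };
};
```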
Once configured, you can interact with your tools directly from your AI assistant:
- Ask Claude or Cursor to "get a cat fact" → triggers `getCatFact`
- Say "calculate my BMI for 70kg and 1.75m" → triggers `calculateBMI`
- Request "tell me a joke" → triggers `getJoke`
- Ask "show me my todos" or "what needs to be done" → triggers `getTodos`
- Say "welcome me as John" → triggers `getWelcomeMessage`
The AI assistant will automatically determine which tools to use based on your requests and execute them in real-time, including accessing your application's database for dynamic data.
Powered by Better Auth, providing secure user management features out-of-the-box:
- Core: Sign Up, Sign In, Password Reset Flow (Forgot/Reset).
- Security: Two-Factor Authentication (OTP).
- User Management: Invitation Acceptance Flow.
- Documentation: API reference available at `http://localhost:3000/api/auth/reference` when running the application.
- (See TODO list for planned additions like Passkey, Admin Dashboard, Org Support)
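On the client, these flows are typically driven through Better Auth's React client. The following is a minimal sketch; the boilerplate's own client lives under `src/lib/auth/`, and exact file and helper names may differ.

```ts
import { createAuthClient } from "better-auth/react";

// Illustrative client instance; see src/lib/auth/ for the project's actual setup
export const authClient = createAuthClient({
  baseURL: "http://localhost:3000", // usually inferred from the deployment URL
});

export async function signInWithEmail(email: string, password: string) {
  // Triggers the email/password sign-in flow handled by the /api/auth routes
  return authClient.signIn.email({ email, password });
}

// Inside a React component:
// const { data: session, isPending } = authClient.useSession();
```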
- Hot Module Replacement (HMR): Fast development cycles with Vite.
- Type Safety: End-to-end type safety from database to frontend.
- Code Quality: Integrated linting and formatting with Biome.
- Environment Variables: Type-safe env management with T3 Env.
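As an illustration of the T3 Env pattern used in `src/lib/env.server.ts`, here is a simplified sketch assuming the framework-agnostic `@t3-oss/env-core` package; the actual file defines more variables.

```ts
import { createEnv } from "@t3-oss/env-core";
import { z } from "zod";

// Simplified sketch of src/lib/env.server.ts; the real file declares more variables
export const env = createEnv({
  server: {
    DATABASE_URL: z.string().url(),
    BETTER_AUTH_SECRET: z.string().min(32),
    RESEND_API_KEY: z.string().min(1),
  },
  runtimeEnv: process.env,
  emptyStringAsUndefined: true,
});
```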
- Install Bun: If you don't have Bun installed, you can install it using:

  ```bash
  # For macOS, Linux, and WSL
  curl -fsSL https://bun.sh/install | bash

  # For Windows (via PowerShell)
  powershell -c "irm bun.sh/install.ps1 | iex"

  # Verify installation
  bun --version
  ```

- Clone the repository:

  ```bash
  git clone <repository-url>
  cd <repository-name>
  ```

- Install dependencies:

  ```bash
  bun install
  ```

- Set up environment variables: Copy the `.env.example` file to `.env` and configure the required values:

  ```bash
  cp .env.example .env
  ```
Key environment variables to configure:
- Database: Neon Postgres database (automatically configured!)
  - Zero-config setup: Just run `bun run dev` - the Vite plugin will automatically create a database and configure your `.env` file
  - Alternative methods: Visit neon.new or run `bun run db:neon-setup` for manual setup
  - No signup needed: All methods create temporary databases without any registration
  - 72-hour trial: Database expires after 72 hours unless claimed with a free Neon account
  - Auto-configured: The plugin writes the connection string to your `.env` file automatically
- Auth: Generate a secure secret for Better Auth

  ```bash
  # Generate a secure random string
  openssl rand -base64 32
  ```

  Add it to your `.env` file as `BETTER_AUTH_SECRET`
- Email: Set up a Resend account for email sending
  - Get your API key and add it as `RESEND_API_KEY`
- Monitoring (optional): Configure Sentry for error tracking
  - Get your DSN, organization, and project values from your Sentry dashboard
  - Set the corresponding environment variables
- Database Setup: Ensure your PostgreSQL database is running and accessible.

  Vector Extension Setup (Required for AI Features): This project includes vector embeddings for AI features. The `pg_vector` extension needs to be enabled:

  ```bash
  # Automated setup (recommended)
  bun run db:setup-vector
  ```

  If the automated script fails, enable it manually:

  - Open your Neon dashboard
  - Navigate to SQL Editor
  - Run the following query:

    ```sql
    CREATE EXTENSION vector;
    ```

  📖 Reference: Neon pg_vector documentation
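  The boilerplate's schema already defines the embeddings table; purely for illustration, a `pg_vector` column in a Drizzle schema looks roughly like this (table, column names, and dimensions are illustrative, not the project's exact schema):

  ```ts
  import { pgTable, serial, text, vector } from "drizzle-orm/pg-core";

  // Illustrative embeddings table - see src/lib/db for the project's actual schema
  export const embeddings = pgTable("embeddings", {
    id: serial("id").primaryKey(),
    content: text("content").notNull(),
    // Requires the pg_vector extension enabled above; 1536 is an illustrative
    // dimension matching common OpenAI embedding models
    embedding: vector("embedding", { dimensions: 1536 }),
  });
  ```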
  Schema Setup: Push the schema (for development/initial setup):

  ```bash
  bun run db:push
  ```

  For production or more controlled migrations, generate migration files:

  ```bash
  # bun run db:generate
  # Apply migrations (tool/command depends on setup)
  ```

  Optional: Use `bun run db:studio` to explore the schema via Drizzle Studio.

- Run the development server:

  ```bash
  bun run dev
  ```

  The application should now be running on `http://localhost:3000`.
This project follows a structured organization pattern for better maintainability:
```text
src/
├─ app/                     # App specific files
├─ components/              # Reusable UI components (including shadcn/ui)
├─ features/                # Feature-specific components and logic
│  ├─ ai-embedding.ts       # Vector embedding generation for RAG functionality
│  ├─ resource-create.ts    # Knowledge base resource creation
│  ├─ file-upload.schema.ts # File upload validation schemas
│  ├─ auth/                 # Authentication related features
│  └─ organization/         # Organization management features
├─ hooks/                   # Custom React hooks
├─ lib/                     # Core libraries and utilities
│  ├─ auth/                 # Better Auth implementation
│  ├─ db/                   # Drizzle ORM setup and schema
│  ├─ intl/                 # i18next internationalization setup
│  ├─ trpc/                 # tRPC client and server setup
│  ├─ env.client.ts         # Type-safe client environment variables (T3 Env)
│  ├─ env.server.ts         # Type-safe server environment variables
│  └─ resend.ts             # Email sending with Resend and React Email
├─ routes/                  # TanStack Router routes with file-based routing
│  ├─ (auth)/               # Authentication related routes (protected)
│  ├─ (public)/             # Public facing routes
│  ├─ api/                  # API routes
│  │  ├─ ai/                # AI-related API endpoints
│  │  │  ├─ chat.ts         # Basic chat API
│  │  │  ├─ chat.rag.ts     # RAG-enhanced chat API
│  │  │  └─ chat.image.generation.ts # Image generation chat API
│  ├─ dashboard/            # Dashboard related routes
│  │  ├─ chat/              # Chat interface routes
│  └─ _root.tsx             # Root layout component
├─ server/                  # Server-side code
│  ├─ router.ts             # Main API router setup
│  └─ routes/               # Server-side route handlers
├─ api.ts                   # API client export
├─ client.tsx               # Client entry point
├─ router.tsx               # Router configuration
└─ ssr.tsx                  # Server-side rendering setup

public/                     # Static assets
```
The structure organizes code by feature and responsibility, keeping related code together for better maintainability.
- `bun run dev`: Starts the development server.
- `bun run build`: Builds the application for production.
- `bun run start`: Starts the production server (requires build first).
- `bun run serve`: Serves the built production app locally (via Vite preview).
- `bun run test`: Runs tests using Vitest.
- `bun run db:generate`: Generates Drizzle ORM migration files.
- `bun run db:push`: Pushes the current Drizzle schema to the database.
- `bun run db:studio`: Opens Drizzle Kit Studio.
- `bun run db:neon-setup`: Sets up Neon database integration locally.
- `bun run db:setup-vector`: Enables the pg_vector extension for AI embedding features.
- `bun run add-ui-components <component-name>`: Adds shadcn/ui components.
- `bun run format`: Formats code using Biome.
- `bun run lint`: Lints code using Biome.
- `bun run check`: Runs Biome check (format, lint, safety).
This project includes a complete Docker setup with automated CI/CD pipeline for containerized deployment.
The project uses GitHub Actions for automated Docker image building and deployment:
Workflow: .github/workflows/build-docker.yml
Features:
- Multi-stage build: Builds application with Bun, then creates optimized Docker image
- Container Registry: Pushes images to GitHub Container Registry (ghcr.io)
- Caching: Uses GitHub Actions cache for faster builds
- Automated tagging: Creates tags based on branch names and commit SHAs
- Environment handling: Injects build-time and runtime environment variables
- Deployment stage: Includes deployment job for production environments
Triggers:
- Push to `dev-test` and `main-test` branches (configure for your preferred branches)
- Manual workflow dispatch
Build Process:
- Setup: Installs Bun and dependencies
- Build: Compiles application with environment variables
- Docker: Creates multi-stage Docker image with optimized production build
- Push: Uploads image to GitHub Container Registry
- Deploy: Deploys to production (when pushed to main branch)
Test your Docker setup locally using the included Docker Compose configuration:
File: compose.yaml
```bash
# Build and run the application in Docker
docker-compose up --build

# Run in detached mode
docker-compose up -d --build

# View logs
docker-compose logs -f

# Stop services
docker-compose down
```

Configuration:

- Build context: Uses `Dockerfile.dev` for development-oriented builds
- Port mapping: Exposes application on `localhost:3000`
- Environment variables: Automatically loads from your `.env` file
- Health checks: Monitors application health with built-in checks
- Volume mounting: Mounts `.env` file for configuration
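For orientation, a compose file matching that configuration looks roughly like the sketch below. This is illustrative only: refer to the repository's actual `compose.yaml`; the health-check command and volume path are assumptions.

```yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    env_file:
      - .env
    volumes:
      - ./.env:/app/.env:ro
    healthcheck:
      # Assumed check; the real compose.yaml may probe a different endpoint
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:3000"]
      interval: 30s
      timeout: 5s
      retries: 3
```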
Environment Setup:
The Docker Compose setup automatically uses your local .env file, so ensure you have:
- `DATABASE_URL`: Your Neon database connection string
- `ANTHROPIC_API_KEY`: For AI features
- `OPENAI_API_KEY`: For AI features
- Other required environment variables
This allows you to test the exact same containerized environment that will be used in production, ensuring consistency across development and deployment environments.
- Implement Planned Auth Features:
- Passkey Support
- Admin Dashboard (User Management UI)
- Organization Support (Multi-tenancy/Teams)
- Refactor Auth Hooks: Ensure auth logic (e.g., `useSession`) is cleanly extracted into custom hooks.
- MCP Integration: Model Context Protocol server implementation with example tools.
- i18n Management: Add Internationalization (translation platform integration).
- AI SDK Examples: Add examples using `@ai-sdk/react`.
- Email Templates: Add more examples/implementations using `react-email`.
- Sentry Configuration: Add details on advanced Sentry setup (sourcemaps, user identification).
- Theme Toggle: Implement UI for switching between light/dark themes (uses `next-themes`).
- CI/CD: Set up a basic CI/CD pipeline (e.g., GitHub Actions for linting, testing, building).
- TanStack DB Example: Add comprehensive example showcasing reactive collections and live queries.
- Deployment Guides: Add specific guides (Vercel, Docker, etc.).