This is a modern chat UI built with Next.js App Router and the AI SDK. It streams model responses with reasoning and sources, supports web search, and lets users switch models.
- Streaming responses with the AI SDK's `streamText`
- Reasoning and sources panes (toggleable in the UI)
- Model picker (e.g. `openai/gpt-4o`, `deepseek/deepseek-r1`)
- Web search toggle that routes to `perplexity/sonar`
- Dark/light theme with system preference
- Next.js 15 (App Router)
- React 19
- AI SDK 5 (`ai`, `@ai-sdk/react`)
- Radix UI + small UI primitives in `components/ui`
- Node.js 18.17+ (Node 20+ recommended)
- npm, yarn, pnpm, or bun
```bash
pnpm install
# or: npm install / yarn / bun install
```
Create a local env file and set the keys you plan to use.
```bash
cp .env.example .env.local
```
You can run via an AI Gateway (recommended) or connect directly to providers.
Set these variables:
```bash
AI_GATEWAY_URL=      # e.g. your AI Gateway base URL
AI_GATEWAY_API_KEY=
```
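For instance, a filled-in `.env.local` might look like the following (the URL and key are placeholders, not real values; use your gateway's actual base URL and credentials):

```shell
# .env.local (illustrative values only)
AI_GATEWAY_URL=https://gateway.example.com/v1
AI_GATEWAY_API_KEY=your-gateway-api-key
```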
Notes:
- The app uses model IDs like `openai/gpt-4o`, `deepseek/deepseek-r1`, and `perplexity/sonar`.
- Ensure your gateway is configured to route these model IDs to the corresponding providers.
Set the keys for the models you intend to use:
```bash
OPENAI_API_KEY=
DEEPSEEK_API_KEY=
PERPLEXITY_API_KEY=

# Optional advanced overrides if needed:
# OPENAI_BASE_URL=
# DEEPSEEK_API_BASE=
# PERPLEXITY_BASE_URL=
```
Security tips:
- Do not prefix server-only secrets with `NEXT_PUBLIC_`.
- Keep real secrets out of git. Use `.env.local` for local dev; it's gitignored.
- If any secret was committed previously (e.g. in `.env`), rotate it.
```bash
pnpm dev
# or: npm run dev / yarn dev / bun dev
```

Open http://localhost:3000.
```bash
pnpm dev     # start dev server (Turbopack)
pnpm build   # production build
pnpm start   # start production server
pnpm lint    # run ESLint
```
- Client: `components/ai-chat.tsx` uses `useChat` from `@ai-sdk/react`. It lets users select a model and toggle web search. Submissions are sent to the API with `{ model, webSearch }`.
- Server: `app/api/chat/route.ts` calls `streamText` and streams tokens back. When web search is on, it uses the `perplexity/sonar` model; otherwise it uses the selected model. The response includes reasoning and sources, which the UI renders.
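The server-side routing rule can be sketched as a small pure helper (a sketch only; `resolveModel` is a hypothetical name for illustration, not a function in the repo):

```typescript
// Model-routing rule used by the chat route: when web search is
// enabled, always route to perplexity/sonar; otherwise keep the
// model the client selected. `resolveModel` is a hypothetical
// helper name, shown here to make the rule explicit.
function resolveModel(selected: string, webSearch: boolean): string {
  return webSearch ? "perplexity/sonar" : selected;
}

// Inside the route handler, the resolved ID would be handed to
// streamText, roughly (see app/api/chat/route.ts for the real code):
//   const result = streamText({ model: resolveModel(model, webSearch), messages });
//   return result.toUIMessageStreamResponse();
```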
You can add more models by editing the `models` array in `components/ai-chat.tsx`.
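A new entry might look like this (the `{ name, value }` shape and the added model ID are assumptions for illustration; check the actual array in `components/ai-chat.tsx` and adjust the field names to match):

```typescript
// Hypothetical shape of the model-picker entries; the real array in
// components/ai-chat.tsx may use different field names.
type ModelOption = { name: string; value: string };

const models: ModelOption[] = [
  { name: "GPT-4o", value: "openai/gpt-4o" },
  { name: "DeepSeek R1", value: "deepseek/deepseek-r1" },
  // Add new models here; the value must be a model ID your gateway
  // or provider can actually route.
  { name: "Claude Sonnet", value: "anthropic/claude-sonnet-4" },
];
```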
- Deploy on Vercel or any Node host. Ensure the same env vars are set in the hosting environment.
- 401 or 403 errors: check that your `AI_GATEWAY_API_KEY` or provider API keys are set and valid.
- Model not found: ensure the gateway or provider supports the model ID you selected.
- No streaming: verify your host supports streaming responses and that the route isn’t buffered by a proxy.
MIT (or your preferred license)