Docs/add vercel deployment guide zh #637
base: main
Conversation
- Add URL import dialog for VRM and Live2D models
- Support VPM JSON format parsing
- Implement model caching with IndexedDB
- Auto-detect model format from URL extension
- Support direct .vrm, .zip, and .json URLs

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
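The extension-based format auto-detection mentioned above could be sketched as follows (a hypothetical illustration — the function name and format labels are assumptions, not the PR's actual code):

```typescript
// Sketch: guess the model format from the URL's path extension.
// Query strings are ignored by inspecting only the pathname.
function detectModelFormat(url: string): 'vrm' | 'zip' | 'json' | 'unknown' {
  const path = new URL(url).pathname.toLowerCase()
  if (path.endsWith('.vrm'))
    return 'vrm'
  if (path.endsWith('.zip'))
    return 'zip'
  if (path.endsWith('.json'))
    return 'json'
  return 'unknown'
}
```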
- Fix CORS font loading from jsdelivr.net by using local Kiwi Maru
- Add smart WebSocket URL detection (disabled in production)
- Add CORS headers to Vercel configuration
- Support auto-switching between ws:// and wss:// protocols

Fixes font loading errors and WebSocket connection failures on deployed instances.
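The ws://wss:// auto-switching presumably works along these lines (a sketch; the helper name and `/ws` path are assumptions for illustration):

```typescript
// Sketch: pick ws:// or wss:// from the page protocol so HTTPS
// deployments don't hit mixed-content errors.
function resolveWebSocketUrl(pageProtocol: string, host: string, path = '/ws'): string {
  const wsProtocol = pageProtocol === 'https:' ? 'wss:' : 'ws:'
  return `${wsProtocol}//${host}${path}`
}
```

In a browser, `pageProtocol` and `host` would come from `window.location`.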
- Auto-select default chat provider when configured
- Auto-select default speech provider when configured
- Auto-select default transcription provider when configured
- Add watcher to set active provider from env variables

Improves onboarding UX by pre-selecting providers based on deployment configuration.
- Add provider API keys to Vite define config
- Add default provider selection environment variables
- Update .gitignore for build artifacts
- Update stage-web README and type definitions

Enables deploying with pre-configured provider credentials via environment variables.
- Add comprehensive Vercel deployment section to README
- Document all LLM provider environment variables
- Add default provider selection variables
- Include configuration examples
- Support multiple languages (EN, ZH-CN, JA-JP, FR)

This helps users deploy AIRI to Vercel with pre-configured providers, improving the deployment experience.
- Add globalEnv variables to turbo.json to pass environment to build
- Add dependsOn to build task to ensure proper dependency order
- Fix vercel.json buildCommand to use 'pnpm run build:web'

Fixes:
- Turborepo warning about missing environment variables
- Build command execution issues on Vercel
- Change buildCommand to use 'pnpm -w run build:web'
- Ensures build script runs at workspace root level
- Fixes 'ERR_PNPM_NO_SCRIPT Missing script: build:web' error
- Add all provider env vars to turbo.json globalEnv
- Add build task dependencies to turbo.json
…nings

- Change globalEnv to globalPassThroughEnv in turbo.json
- Prevents warnings for optional provider environment variables
- These variables are not required and warnings are unnecessary
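Taken together, the turbo.json adjustments described across these commits would look roughly like the following fragment (a sketch, not the repository's actual file — key names follow Turborepo 2.x conventions, and the variable list is abbreviated):

```json
{
  "globalPassThroughEnv": ["OPENAI_API_KEY", "OPENAI_BASE_URL", "ANTHROPIC_API_KEY"],
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "passThroughEnv": []
    }
  }
}
```

`globalPassThroughEnv` makes the optional provider variables available without triggering missing-variable warnings, while the empty per-task `passThroughEnv` overrides it for packages that don't need them.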
- Disable Turbo remote cache to fix 400 error with Vercel artifacts API
- Update build command to use turbo filter syntax for better integration
- Set vite output directory to project root dist folder for Vercel
- Add empty env array to turbo build task to suppress unnecessary environment variable warnings
- Update vercel.json outputDirectory to match new build output location
Add empty passThroughEnv array to build task to override globalPassThroughEnv, preventing unnecessary environment variable warnings for packages that don't need them.
Revert to original output directory setup where Vite outputs to apps/stage-web/dist to match Turbo's outputs configuration and ensure proper build artifact tracking.
helper loads
Fix build error where @proj-airi/memory-system was imported but not declared as a dependency in server-runtime/package.json. This caused unresolved import warnings during build:

[UNRESOLVED_IMPORT] Could not resolve '@proj-airi/memory-system' in src/services/memory.ts
Merged latest changes from moeru-ai/airi upstream including:
- feat(injecta): new dependency injection package
- refactor(stage-tamagotchi): improved structure
- Various fixes and improvements

Preserved local changes:
- Memory system (@proj-airi/memory-system)
- Memory UI components and configuration
- All memory-related functionality

Resolved conflicts by regenerating pnpm-lock.yaml
Add comprehensive Chinese (Simplified) translations for memory-related settings pages, including:
- Short-term memory configuration (providers, TTL, namespace, etc.)
- Long-term memory configuration (database, embeddings, search, etc.)
- All labels, descriptions, hints, and error messages

Translations follow the existing i18n patterns and terminology used throughout the project.
… fetching

Add comprehensive support for loading Live2D models directly from model3.json URLs (including GitHub blob URLs):

Features:
- Auto-convert GitHub blob URLs to raw URLs
- Parse model3.json and collect all referenced resources (textures, motions, physics, expressions, sounds, etc.)
- Batch download all resources from the same base directory
- Package everything into a zip file using JSZip
- Cache the packaged model for offline use
- Generate preview images for imported models

This enables users to load Live2D models directly from GitHub or other hosting services by simply pasting the model3.json URL, without needing to manually download and package all the files.

Example URL support: https://github.com/user/repo/blob/main/model/model.model3.json
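The blob-to-raw conversion in the first bullet amounts to a URL rewrite like this (a sketch, not necessarily the PR's exact implementation):

```typescript
// Sketch: rewrite https://github.com/<owner>/<repo>/blob/<ref>/<path>
// to its raw.githubusercontent.com equivalent; other URLs pass through.
function toRawGitHubUrl(url: string): string {
  const match = url.match(/^https:\/\/github\.com\/([^/]+)\/([^/]+)\/blob\/(.+)$/)
  if (!match)
    return url
  const [, owner, repo, refAndPath] = match
  return `https://raw.githubusercontent.com/${owner}/${repo}/${refAndPath}`
}
```

The raw host serves the file contents directly, which is what the resource collector needs when it fetches the textures and motions referenced by model3.json.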
Add comprehensive Chinese (Simplified) documentation for deploying AIRI Stage Web on Vercel, including:
- Step-by-step deployment instructions
- Environment variable configuration guide
- Memory system configuration (Redis/Postgres/Qdrant)
- Embedding provider setup
- Local verification steps
- Common troubleshooting tips

This mirrors the existing English guide and provides Chinese users with complete deployment documentation.
✅ Deploy Preview for airi-docs ready!
✅ Deploy Preview for airi-vtuber ready!
Summary of Changes

Hello @inoribea, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the AIRI Stage Web application by integrating a comprehensive memory system, offering both short-term conversation context persistence and long-term semantic memory storage using various backend technologies. It also introduces a detailed deployment guide for Vercel in Simplified Chinese, broadening the application's reach. Further improvements include a new UI for memory configuration, expanded environment variable support for flexible deployments, and advanced functionality for importing display models from URLs.
Code Review
This pull request is quite large and introduces several major features beyond the Chinese Vercel deployment guide mentioned in the title. These include a comprehensive memory system, extensive environment variable handling, local font bundling, and URL-based model imports. In the future, please consider splitting such significant changes into smaller, more focused pull requests to facilitate easier and more thorough reviews. My feedback focuses on improving maintainability, robustness, and code clarity in the new additions.
```ts
import * as zodCore from 'zod/v4/core'

void zodCore // ensure bundlers keep the zod v4 helper chunk for xsschema imports
```
Using void zodCore to prevent tree-shaking is a known workaround, but it can be a bit of a code smell. It indicates that the bundler isn't correctly detecting the side effects or dependencies required by xsschema.
Have you considered a more explicit configuration-based approach? For example, in Vite, you could use optimizeDeps.include in vite.config.ts to ensure the necessary zod chunks are always included in the bundle.
```ts
// in vite.config.ts
export default defineConfig({
  optimizeDeps: {
    include: ['zod/v4/core'], // Or the specific chunk needed by xsschema
  },
  // ...
})
```

This would move the bundling hint from the application code into the build configuration, which is generally a cleaner separation of concerns.
## Déploiement sur Vercel

Vous pouvez déployer AIRI sur Vercel avec des fournisseurs LLM pré-configurés en définissant des variables d'environnement. Cela permet aux utilisateurs d'utiliser votre instance déployée sans configurer leurs propres clés API.

### Variables d'environnement

Ajoutez ces variables d'environnement dans les paramètres de votre projet Vercel :

#### Identifiants des fournisseurs LLM

| Variable | Description | Exemple |
|----------|-------------|---------|
| `OPENAI_API_KEY` | Clé API OpenAI | `sk-...` |
| `OPENAI_BASE_URL` | URL de base OpenAI | `https://api.openai.com/v1/` |
| `ANTHROPIC_API_KEY` | Clé API Anthropic Claude | `sk-ant-...` |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Clé API Google Gemini | `AI...` |
| `DEEPSEEK_API_KEY` | Clé API DeepSeek | `sk-...` |
| `AI302_API_KEY` | Clé API 302.AI | `sk-...` |

#### Sélection du fournisseur par défaut

| Variable | Description | Exemple |
|----------|-------------|---------|
| `DEFAULT_CHAT_PROVIDER` | ID du fournisseur de chat par défaut | `openai` |
| `DEFAULT_SPEECH_PROVIDER` | ID du fournisseur TTS par défaut | `openai-audio-speech` |
| `DEFAULT_TRANSCRIPTION_PROVIDER` | ID du fournisseur STT par défaut | `openai-audio-transcription` |

#### Sélection du modèle par défaut (chat)

| Variable | Description | Exemple |
|----------|-------------|---------|
| `OPENAI_MODEL` | Modèle de chat OpenAI utilisé par défaut | `gpt-4o-mini` |
| `OPENAI_COMPATIBLE_MODEL` | Modèle de chat compatible OpenAI par défaut | `gpt-4o-mini` |
| `OPENROUTER_MODEL` | Modèle de chat OpenRouter par défaut | `meta-llama/llama-3.1-405b-instruct` |
| `ANTHROPIC_MODEL` | Modèle de chat Anthropic par défaut | `claude-3-opus-20240229` |
| `GOOGLE_GENERATIVE_AI_MODEL` | Modèle de chat Google Gemini par défaut | `gemini-1.5-pro-latest` |
| `DEEPSEEK_MODEL` | Modèle de chat DeepSeek par défaut | `deepseek-chat` |
| `AI302_MODEL` | Modèle de chat 302.AI par défaut | `gpt-4o-mini` |
| `TOGETHER_MODEL` | Modèle de chat Together.ai par défaut | `meta-llama/Llama-3-70b-chat-hf` |
| `XAI_MODEL` | Modèle de chat xAI par défaut | `grok-beta` |
| `NOVITA_MODEL` | Modèle de chat Novita par défaut | `gpt-4o-mini` |
| `FIREWORKS_MODEL` | Modèle de chat Fireworks.ai par défaut | `accounts/fireworks/models/llama-v3p1-405b-instruct` |
| `FEATHERLESS_MODEL` | Modèle de chat Featherless par défaut | `mistral-small-latest` |
| `PERPLEXITY_MODEL` | Modèle de chat Perplexity par défaut | `llama-3.1-sonar-small-128k-online` |
| `MISTRAL_MODEL` | Modèle de chat Mistral par défaut | `mistral-large-latest` |
| `MOONSHOT_MODEL` | Modèle de chat Moonshot par défaut | `moonshot-v1-32k` |
| `MODELSCOPE_MODEL` | Modèle de chat ModelScope par défaut | `qwen2-72b-instruct` |
| `CLOUDFLARE_WORKERS_AI_MODEL` | Modèle de chat Cloudflare Workers AI par défaut | `@cf/meta/llama-3-8b-instruct` |
| `OLLAMA_MODEL` | Modèle de chat Ollama par défaut | `llama3.1` |
| `LM_STUDIO_MODEL` | Modèle de chat LM Studio par défaut | `llama3.1-8b` |
| `PLAYER2_MODEL` | Modèle de chat Player2 par défaut | `player2-model` |
| `VLLM_MODEL` | Modèle de chat proxifié par vLLM par défaut | `llama-2-13b` |

#### Sélection du modèle par défaut (voix / embeddings)

| Variable | Description | Exemple |
|----------|-------------|---------|
| `OPENAI_SPEECH_MODEL` | Modèle TTS OpenAI par défaut | `tts-1` |
| `OPENAI_COMPATIBLE_SPEECH_MODEL` | Modèle TTS compatible OpenAI par défaut | `tts-1-hd` |
| `PLAYER2_SPEECH_MODEL` | Modèle TTS Player2 par défaut | `player2-voice` |
| `OLLAMA_EMBEDDING_MODEL` | Modèle d'embedding Ollama par défaut | `nomic-embed-text` |

#### Sélection du modèle par défaut (transcription)

| Variable | Description | Exemple |
|----------|-------------|---------|
| `OPENAI_TRANSCRIPTION_MODEL` | Modèle STT OpenAI par défaut | `gpt-4o-mini-transcribe` |
| `OPENAI_COMPATIBLE_TRANSCRIPTION_MODEL` | Modèle STT compatible OpenAI par défaut | `whisper-1` |

### Exemple de configuration

```env
OPENAI_API_KEY=sk-proj-xxxxx
OPENAI_BASE_URL=https://api.openai.com/v1/
OPENAI_MODEL=gpt-4o-mini
DEFAULT_CHAT_PROVIDER=openai
DEFAULT_SPEECH_PROVIDER=openai-audio-speech
DEFAULT_TRANSCRIPTION_PROVIDER=openai-audio-transcription
```

Après avoir défini ces variables, les utilisateurs auront les fournisseurs pré-configurés et automatiquement sélectionnés lorsqu'ils visiteront votre déploiement.
This section adds a lot of detailed environment variable documentation, which is also present in the new docs/content/en/docs/guides/deploy/vercel.md file. Duplicating this information across multiple README files can lead to maintenance issues, as any changes would need to be updated in several places. To improve maintainability, consider removing these detailed tables from the README files and instead providing a link to the main Vercel deployment guide. Since you've also added a Chinese version of the guide, you can link to the appropriate language version from each README.
## Vercel へのデプロイ

環境変数を設定することで、LLM プロバイダーを事前設定した AIRI を Vercel にデプロイできます。これにより、ユーザーは自分の API キーを設定しなくてもデプロイされたインスタンスを使用できます。

### 環境変数

Vercel プロジェクト設定でこれらの環境変数を追加してください:

#### LLM プロバイダー認証情報

| 変数 | 説明 | 例 |
|------|------|-----|
| `OPENAI_API_KEY` | OpenAI API キー | `sk-...` |
| `OPENAI_BASE_URL` | OpenAI ベース URL | `https://api.openai.com/v1/` |
| `OPENAI_COMPATIBLE_API_KEY` | OpenAI 互換 API キー | `sk-...` |
| `OPENAI_COMPATIBLE_BASE_URL` | OpenAI 互換ベース URL | `https://your-api.com/v1/` |
| `OPENROUTER_API_KEY` | OpenRouter API キー | `sk-...` |
| `ANTHROPIC_API_KEY` | Anthropic Claude API キー | `sk-ant-...` |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Google Gemini API キー | `AI...` |
| `DEEPSEEK_API_KEY` | DeepSeek API キー | `sk-...` |
| `AI302_API_KEY` | 302.AI API キー | `sk-...` |
| `TOGETHER_API_KEY` | Together.ai API キー | `...` |
| `XAI_API_KEY` | xAI API キー | `...` |
| `NOVITA_API_KEY` | Novita API キー | `...` |
| `FIREWORKS_API_KEY` | Fireworks.ai API キー | `...` |
| `PERPLEXITY_API_KEY` | Perplexity API キー | `...` |
| `MISTRAL_API_KEY` | Mistral AI API キー | `...` |
| `MOONSHOT_API_KEY` | Moonshot AI API キー | `...` |
| `MODELSCOPE_API_KEY` | ModelScope API キー | `...` |

#### デフォルトプロバイダー選択

| 変数 | 説明 | 例 |
|------|------|-----|
| `DEFAULT_CHAT_PROVIDER` | デフォルトチャットプロバイダー ID | `openai` |
| `DEFAULT_SPEECH_PROVIDER` | デフォルト TTS プロバイダー ID | `openai-audio-speech` |
| `DEFAULT_TRANSCRIPTION_PROVIDER` | デフォルト STT プロバイダー ID | `openai-audio-transcription` |

#### デフォルトモデル設定(チャット)

| 変数 | 説明 | 例 |
|------|------|-----|
| `OPENAI_MODEL` | OpenAI のデフォルトチャットモデル | `gpt-4o-mini` |
| `OPENAI_COMPATIBLE_MODEL` | OpenAI 互換のデフォルトチャットモデル | `gpt-4o-mini` |
| `OPENROUTER_MODEL` | OpenRouter のデフォルトチャットモデル | `meta-llama/llama-3.1-405b-instruct` |
| `ANTHROPIC_MODEL` | Anthropic のデフォルトチャットモデル | `claude-3-opus-20240229` |
| `GOOGLE_GENERATIVE_AI_MODEL` | Google Gemini のデフォルトチャットモデル | `gemini-1.5-pro-latest` |
| `DEEPSEEK_MODEL` | DeepSeek のデフォルトチャットモデル | `deepseek-chat` |
| `AI302_MODEL` | 302.AI のデフォルトチャットモデル | `gpt-4o-mini` |
| `TOGETHER_MODEL` | Together.ai のデフォルトチャットモデル | `meta-llama/Llama-3-70b-chat-hf` |
| `XAI_MODEL` | xAI のデフォルトチャットモデル | `grok-beta` |
| `NOVITA_MODEL` | Novita のデフォルトチャットモデル | `gpt-4o-mini` |
| `FIREWORKS_MODEL` | Fireworks.ai のデフォルトチャットモデル | `accounts/fireworks/models/llama-v3p1-405b-instruct` |
| `FEATHERLESS_MODEL` | Featherless のデフォルトチャットモデル | `mistral-small-latest` |
| `PERPLEXITY_MODEL` | Perplexity のデフォルトチャットモデル | `llama-3.1-sonar-small-128k-online` |
| `MISTRAL_MODEL` | Mistral のデフォルトチャットモデル | `mistral-large-latest` |
| `MOONSHOT_MODEL` | Moonshot のデフォルトチャットモデル | `moonshot-v1-32k` |
| `MODELSCOPE_MODEL` | ModelScope のデフォルトチャットモデル | `qwen2-72b-instruct` |
| `CLOUDFLARE_WORKERS_AI_MODEL` | Cloudflare Workers AI のデフォルトチャットモデル | `@cf/meta/llama-3-8b-instruct` |
| `OLLAMA_MODEL` | Ollama のデフォルトチャットモデル | `llama3.1` |
| `LM_STUDIO_MODEL` | LM Studio のデフォルトチャットモデル | `llama3.1-8b` |
| `PLAYER2_MODEL` | Player2 のデフォルトチャットモデル | `player2-model` |
| `VLLM_MODEL` | vLLM プロキシのデフォルトチャットモデル | `llama-2-13b` |

#### デフォルトモデル設定(音声 / 埋め込み)

| 変数 | 説明 | 例 |
|------|------|-----|
| `OPENAI_SPEECH_MODEL` | OpenAI のデフォルト音声モデル | `tts-1` |
| `OPENAI_COMPATIBLE_SPEECH_MODEL` | OpenAI 互換のデフォルト音声モデル | `tts-1-hd` |
| `PLAYER2_SPEECH_MODEL` | Player2 のデフォルト音声モデル | `player2-voice` |
| `OLLAMA_EMBEDDING_MODEL` | Ollama のデフォルト埋め込みモデル | `nomic-embed-text` |

#### デフォルトモデル設定(音声認識)

| 変数 | 説明 | 例 |
|------|------|-----|
| `OPENAI_TRANSCRIPTION_MODEL` | OpenAI のデフォルト音声認識モデル | `gpt-4o-mini-transcribe` |
| `OPENAI_COMPATIBLE_TRANSCRIPTION_MODEL` | OpenAI 互換のデフォルト音声認識モデル | `whisper-1` |

### 利用可能なプロバイダー ID

- **チャット**: `openai`, `openai-compatible`, `anthropic`, `google-generative-ai`, `deepseek`, `302-ai`, `together-ai`, `xai`, `novita-ai`, `fireworks-ai`, `perplexity-ai`, `mistral-ai`, `moonshot-ai`, `modelscope`, `openrouter-ai`
- **音声合成 (TTS)**: `openai-audio-speech`, `openai-compatible-audio-speech`, `elevenlabs`, `microsoft-speech`
- **音声認識 (STT)**: `openai-audio-transcription`, `openai-compatible-audio-transcription`

### 設定例

```env
OPENAI_API_KEY=sk-proj-xxxxx
OPENAI_BASE_URL=https://api.openai.com/v1/
OPENAI_MODEL=gpt-4o-mini
DEFAULT_CHAT_PROVIDER=openai
DEFAULT_SPEECH_PROVIDER=openai-audio-speech
DEFAULT_TRANSCRIPTION_PROVIDER=openai-audio-transcription
```

これらの変数を設定すると、ユーザーがデプロイにアクセスした際にプロバイダーが自動的に設定され選択されます。
This section adds a lot of detailed environment variable documentation, which is also present in the new docs/content/en/docs/guides/deploy/vercel.md file. Duplicating this information across multiple README files can lead to maintenance issues, as any changes would need to be updated in several places. To improve maintainability, consider removing these detailed tables from the README files and instead providing a link to the main Vercel deployment guide. Since you've also added a Chinese version of the guide, you can link to the appropriate language version from each README.
## 部署到 Vercel

你可以通过设置环境变量,将 AIRI 部署到 Vercel 并预配置 LLM 提供商。这样用户无需配置自己的 API 密钥即可使用你的部署实例。

### 环境变量

在你的 Vercel 项目设置中添加这些环境变量:

#### LLM 提供商凭证

| 变量 | 描述 | 示例 |
|------|------|------|
| `OPENAI_API_KEY` | OpenAI API 密钥 | `sk-...` |
| `OPENAI_BASE_URL` | OpenAI 基础 URL | `https://api.openai.com/v1/` |
| `OPENAI_COMPATIBLE_API_KEY` | OpenAI 兼容 API 密钥 | `sk-...` |
| `OPENAI_COMPATIBLE_BASE_URL` | OpenAI 兼容基础 URL | `https://your-api.com/v1/` |
| `OPENROUTER_API_KEY` | OpenRouter API 密钥 | `sk-...` |
| `ANTHROPIC_API_KEY` | Anthropic Claude API 密钥 | `sk-ant-...` |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Google Gemini API 密钥 | `AI...` |
| `DEEPSEEK_API_KEY` | DeepSeek API 密钥 | `sk-...` |
| `AI302_API_KEY` | 302.AI API 密钥 | `sk-...` |
| `TOGETHER_API_KEY` | Together.ai API 密钥 | `...` |
| `XAI_API_KEY` | xAI API 密钥 | `...` |
| `NOVITA_API_KEY` | Novita API 密钥 | `...` |
| `FIREWORKS_API_KEY` | Fireworks.ai API 密钥 | `...` |
| `PERPLEXITY_API_KEY` | Perplexity API 密钥 | `...` |
| `MISTRAL_API_KEY` | Mistral AI API 密钥 | `...` |
| `MOONSHOT_API_KEY` | Moonshot AI API 密钥 | `...` |
| `MODELSCOPE_API_KEY` | ModelScope API 密钥 | `...` |

#### 默认提供商选择

| 变量 | 描述 | 示例 |
|------|------|------|
| `DEFAULT_CHAT_PROVIDER` | 默认聊天提供商 ID | `openai` |
| `DEFAULT_SPEECH_PROVIDER` | 默认 TTS 提供商 ID | `openai-audio-speech` |
| `DEFAULT_TRANSCRIPTION_PROVIDER` | 默认 STT 提供商 ID | `openai-audio-transcription` |

#### 默认模型选择(聊天)

| 变量 | 描述 | 示例 |
|------|------|------|
| `OPENAI_MODEL` | OpenAI 默认聊天模型 | `gpt-4o-mini` |
| `OPENAI_COMPATIBLE_MODEL` | OpenAI 兼容默认聊天模型 | `gpt-4o-mini` |
| `OPENROUTER_MODEL` | OpenRouter 默认聊天模型 | `meta-llama/llama-3.1-405b-instruct` |
| `ANTHROPIC_MODEL` | Anthropic 默认聊天模型 | `claude-3-opus-20240229` |
| `GOOGLE_GENERATIVE_AI_MODEL` | Google Gemini 默认聊天模型 | `gemini-1.5-pro-latest` |
| `DEEPSEEK_MODEL` | DeepSeek 默认聊天模型 | `deepseek-chat` |
| `AI302_MODEL` | 302.AI 默认聊天模型 | `gpt-4o-mini` |
| `TOGETHER_MODEL` | Together.ai 默认聊天模型 | `meta-llama/Llama-3-70b-chat-hf` |
| `XAI_MODEL` | xAI 默认聊天模型 | `grok-beta` |
| `NOVITA_MODEL` | Novita 默认聊天模型 | `gpt-4o-mini` |
| `FIREWORKS_MODEL` | Fireworks.ai 默认聊天模型 | `accounts/fireworks/models/llama-v3p1-405b-instruct` |
| `FEATHERLESS_MODEL` | Featherless 默认聊天模型 | `mistral-small-latest` |
| `PERPLEXITY_MODEL` | Perplexity 默认聊天模型 | `llama-3.1-sonar-small-128k-online` |
| `MISTRAL_MODEL` | Mistral 默认聊天模型 | `mistral-large-latest` |
| `MOONSHOT_MODEL` | Moonshot 默认聊天模型 | `moonshot-v1-32k` |
| `MODELSCOPE_MODEL` | ModelScope 默认聊天模型 | `qwen2-72b-instruct` |
| `CLOUDFLARE_WORKERS_AI_MODEL` | Cloudflare Workers AI 默认聊天模型 | `@cf/meta/llama-3-8b-instruct` |
| `OLLAMA_MODEL` | Ollama 默认聊天模型 | `llama3.1` |
| `LM_STUDIO_MODEL` | LM Studio 默认聊天模型 | `llama3.1-8b` |
| `PLAYER2_MODEL` | Player2 默认聊天模型 | `player2-model` |
| `VLLM_MODEL` | vLLM 代理默认聊天模型 | `llama-2-13b` |

#### 默认模型选择(语音 & 向量)

| 变量 | 描述 | 示例 |
|------|------|------|
| `OPENAI_SPEECH_MODEL` | OpenAI 默认语音模型 | `tts-1` |
| `OPENAI_COMPATIBLE_SPEECH_MODEL` | OpenAI 兼容默认语音模型 | `tts-1-hd` |
| `PLAYER2_SPEECH_MODEL` | Player2 默认语音模型 | `player2-voice` |
| `OLLAMA_EMBEDDING_MODEL` | Ollama 默认向量模型 | `nomic-embed-text` |

#### 默认模型选择(语音识别)

| 变量 | 描述 | 示例 |
|------|------|------|
| `OPENAI_TRANSCRIPTION_MODEL` | OpenAI 默认语音识别模型 | `gpt-4o-mini-transcribe` |
| `OPENAI_COMPATIBLE_TRANSCRIPTION_MODEL` | OpenAI 兼容默认语音识别模型 | `whisper-1` |

### 可用的提供商 ID

- **聊天**: `openai`, `openai-compatible`, `anthropic`, `google-generative-ai`, `deepseek`, `302-ai`, `together-ai`, `xai`, `novita-ai`, `fireworks-ai`, `perplexity-ai`, `mistral-ai`, `moonshot-ai`, `modelscope`, `openrouter-ai`
- **语音合成 (TTS)**: `openai-audio-speech`, `openai-compatible-audio-speech`, `elevenlabs`, `microsoft-speech`
- **语音识别 (STT)**: `openai-audio-transcription`, `openai-compatible-audio-transcription`

### 配置示例

```env
OPENAI_API_KEY=sk-proj-xxxxx
OPENAI_BASE_URL=https://api.openai.com/v1/
OPENAI_MODEL=gpt-4o-mini
DEFAULT_CHAT_PROVIDER=openai
DEFAULT_SPEECH_PROVIDER=openai-audio-speech
DEFAULT_TRANSCRIPTION_PROVIDER=openai-audio-transcription
```

设置这些变量后,用户访问你的部署时将自动预配置并选择提供商。
This section adds a lot of detailed environment variable documentation, which is also present in the new docs/content/en/docs/guides/deploy/vercel.md file. Duplicating this information across multiple README files can lead to maintenance issues, as any changes would need to be updated in several places. To improve maintainability, consider removing these detailed tables from the README files and instead providing a link to the main Vercel deployment guide. Since you've also added a Chinese version of the guide, you can link to the appropriate language version from each README.
```ts
router.get('/api/memory/config', eventHandler(() => {
  return { success: true, data: getMemoryConfiguration() }
}))

router.post('/api/memory/config', eventHandler(async (event) => {
  const body = await readBody(event) as MemoryConfiguration | undefined

  if (!body) {
    return { success: false, error: 'Configuration payload is required' }
  }

  try {
    await configureMemorySystem(body)
    return { success: true }
  }
  catch (error) {
    return { success: false, error: error instanceof Error ? error.message : String(error) }
  }
}))

router.post('/api/memory/save', eventHandler(async (event) => {
  const body = await readBody(event) as { sessionId?: string, message?: unknown, userId?: string }

  if (!body?.sessionId || typeof body.sessionId !== 'string') {
    return { success: false, error: 'sessionId is required' }
  }

  if (!body?.message || typeof body.message !== 'object') {
    return { success: false, error: 'message payload is required' }
  }

  await saveShortTermMemory({
    sessionId: body.sessionId,
    message: body.message as any,
    userId: typeof body.userId === 'string' ? body.userId : undefined,
  })

  return { success: true }
}))

router.get('/api/memory/session/:sessionId', eventHandler(async (event) => {
  const sessionId = getRouterParam(event, 'sessionId')

  if (!sessionId) {
    return { success: false, error: 'sessionId is required' }
  }

  const requestUrl = new URL(event.node.req.url ?? '', 'http://localhost')
  const limitParam = requestUrl.searchParams.get('limit')
  const limit = limitParam ? Number.parseInt(limitParam, 10) : undefined

  const messages = await getRecentMessages(sessionId, Number.isNaN(limit) ? undefined : limit)

  return { success: true, data: messages }
}))

router.post('/api/memory/search', eventHandler(async (event) => {
  const body = await readBody(event) as { query?: string, userId?: string, limit?: number }

  if (!body?.query || typeof body.query !== 'string') {
    return { success: false, error: 'query is required' }
  }

  if (!body?.userId || typeof body.userId !== 'string') {
    return { success: false, error: 'userId is required' }
  }

  const results = await searchUserMemory(body.query, body.userId, typeof body.limit === 'number' ? body.limit : undefined)

  return { success: true, data: results }
}))

router.post('/api/memory/clear', eventHandler(async (event) => {
  const body = await readBody(event) as { sessionId?: string }

  if (!body?.sessionId || typeof body.sessionId !== 'string') {
    return { success: false, error: 'sessionId is required' }
  }

  await clearSessionMemory(body.sessionId)

  return { success: true }
}))

router.get('/api/memory/export', eventHandler(async (event) => {
  const url = new URL(event.node.req.url ?? '', 'http://localhost')
  const userId = url.searchParams.get('userId')
  const limitParam = url.searchParams.get('limit')

  if (!userId) {
    return { success: false, error: 'userId is required' }
  }

  const limit = limitParam ? Number.parseInt(limitParam, 10) : undefined
  const data = await exportUserMemory(userId, Number.isNaN(limit) ? undefined : limit)

  return { success: true, data }
}))
```
This block adds several new API endpoints for the memory system. There is some repeated logic for request validation (e.g., checking for `sessionId` or `userId`) and error handling across these handlers. To improve code reuse and maintainability, consider the following:
- Centralized validation: extract the validation logic into a shared utility function or H3 middleware. This would reduce code duplication.
- Consistent error responses: when validation fails, the handlers return a JSON error object but with a `200 OK` status. It would be more idiomatic to use appropriate HTTP status codes, like `400 Bad Request`. You can do this in H3 by using `setResponseStatus` or by throwing an error with `createError`.
For example:
```ts
import { createError, eventHandler, readBody, setResponseStatus } from 'h3'

// Example of a more robust handler
router.post('/api/memory/clear', eventHandler(async (event) => {
  const body = await readBody(event) as { sessionId?: string }
  if (!body?.sessionId || typeof body.sessionId !== 'string') {
    throw createError({
      statusCode: 400,
      statusMessage: 'Bad Request',
      data: { success: false, error: 'sessionId is required' }
    })
  }
  await clearSessionMemory(body.sessionId)
  setResponseStatus(event, 204) // No Content for successful deletion
}))
```
```ts
function normalizeShortTermProvider(provider: ShortTermProviderType): ShortTermProviderType {
  if (provider === 'upstash-redis') {
    if (!env.UPSTASH_REDIS_REST_URL || !env.UPSTASH_REDIS_REST_TOKEN) {
      return 'local-redis'
    }
  }

  return provider
}
```
The `normalizeShortTermProvider` function silently falls back to `local-redis` if `upstash-redis` is selected but the required environment variables are missing. This implicit fallback could be confusing for users who expect their configuration to be respected, or to receive an error if it is incomplete. It would be more transparent to either:
- log a warning message indicating that the configuration is incomplete and that the system is falling back to `local-redis`, or
- throw an error if the configured provider is missing its required environment variables, forcing the user to provide a complete configuration.

This would make debugging configuration issues much easier.
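As a sketch of the first option (a minimal standalone version; the actual module's `env` import and the members of `ShortTermProviderType` may differ, so the `env` parameter here is a hypothetical addition for testability):

```typescript
// Hypothetical sketch: keep the fallback, but make it loud.
type ShortTermProviderType = 'upstash-redis' | 'local-redis'

interface MemoryEnv {
  UPSTASH_REDIS_REST_URL?: string
  UPSTASH_REDIS_REST_TOKEN?: string
}

function normalizeShortTermProvider(provider: ShortTermProviderType, env: MemoryEnv): ShortTermProviderType {
  if (provider === 'upstash-redis' && (!env.UPSTASH_REDIS_REST_URL || !env.UPSTASH_REDIS_REST_TOKEN)) {
    // Surface the incomplete configuration instead of failing over silently.
    console.warn('[memory] upstash-redis selected but UPSTASH_REDIS_REST_URL/UPSTASH_REDIS_REST_TOKEN are missing; falling back to local-redis')
    return 'local-redis'
  }
  return provider
}
```

Throwing via `createError` instead of warning would be the stricter variant of the same change.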
```ts
async function loadModel3JsonAndResources(model3JsonUrl: string): Promise<File> {
  // Convert GitHub URL to raw if needed
  const rawUrl = convertGitHubBlobUrlToRaw(model3JsonUrl)
  const baseUrl = rawUrl.substring(0, rawUrl.lastIndexOf('/'))

  // Fetch model3.json
  const response = await fetch(rawUrl)
  if (!response.ok) {
    throw new Error(`Failed to fetch model3.json: ${response.statusText}`)
  }

  const model3JsonText = await response.text()
  const model3Json: Model3Json = JSON.parse(model3JsonText)

  // Collect all file paths from model3.json
  const filesToFetch: Set<string> = new Set()
  filesToFetch.add(model3JsonUrl.split('/').pop() || 'model.model3.json')

  const fileRefs = model3Json.FileReferences
  if (fileRefs) {
    // Add moc file
    if (fileRefs.Moc) filesToFetch.add(fileRefs.Moc)

    // Add textures
    if (fileRefs.Textures) {
      fileRefs.Textures.forEach(texture => filesToFetch.add(texture))
    }

    // Add physics
    if (fileRefs.Physics) filesToFetch.add(fileRefs.Physics)

    // Add pose
    if (fileRefs.Pose) filesToFetch.add(fileRefs.Pose)

    // Add display info
    if (fileRefs.DisplayInfo) filesToFetch.add(fileRefs.DisplayInfo)

    // Add user data
    if (fileRefs.UserData) filesToFetch.add(fileRefs.UserData)

    // Add expressions
    if (fileRefs.Expressions) {
      fileRefs.Expressions.forEach(exp => {
        if (exp.File) filesToFetch.add(exp.File)
      })
    }

    // Add motions
    if (fileRefs.Motions) {
      Object.values(fileRefs.Motions).forEach(motionGroup => {
        motionGroup.forEach(motion => {
          if (motion.File) filesToFetch.add(motion.File)
          if (motion.Sound) filesToFetch.add(motion.Sound)
        })
      })
    }
  }

  // Create a zip file
  const zip = new JSZip()

  // Add model3.json to zip
  zip.file(model3JsonUrl.split('/').pop() || 'model.model3.json', model3JsonText)

  // Fetch and add all referenced files
  const fetchPromises = Array.from(filesToFetch)
    .filter(file => file !== (model3JsonUrl.split('/').pop() || 'model.model3.json'))
    .map(async (relativePath) => {
      try {
        const fileUrl = `${baseUrl}/${relativePath}`
        const fileResponse = await fetch(convertGitHubBlobUrlToRaw(fileUrl))

        if (!fileResponse.ok) {
          console.warn(`Failed to fetch ${relativePath}: ${fileResponse.statusText}`)
          return
        }

        const fileBlob = await fileResponse.blob()
        zip.file(relativePath, fileBlob)
      }
      catch (error) {
        console.warn(`Error fetching ${relativePath}:`, error)
      }
    })

  await Promise.all(fetchPromises)

  // Generate zip blob
  const zipBlob = await zip.generateAsync({ type: 'blob' })
  const modelName = model3JsonUrl.split('/').pop()?.replace('.model3.json', '') || 'model'

  return new File([zipBlob], `${modelName}.zip`, { type: 'application/zip' })
}
```
The loadModel3JsonAndResources function is a great feature for importing Live2D models from a model3.json URL. However, it relies on fetching numerous individual files from a base URL, which can be fragile. If any of the resource files (textures, motions, etc.) fail to download, the catch block inside the map only logs a warning, and the process continues. This could result in a partially incomplete or corrupted model ZIP file being created and cached, which might lead to runtime errors or visual glitches later.
To make this more robust, consider either:
- failing the entire import if a critical resource (like the `.moc3` file or textures) cannot be fetched,
- implementing a retry mechanism for failed network requests, or
- at the very least, providing more prominent feedback to the user that some model resources failed to load.
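A retry helper along these lines could back the second option. This is a sketch, not the PR's implementation: the function name and backoff parameters are made up, and a real version would likely distinguish retryable errors (network, 5xx) from permanent ones (404).

```typescript
// Hypothetical helper: retry a fetch with linear backoff, then fail loudly.
// Callers fetching critical resources (.moc3, textures) let the error propagate;
// callers fetching optional resources can still catch and warn.
async function fetchWithRetry(url: string, retries = 3, delayMs = 500): Promise<Response> {
  let lastError: unknown
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const response = await fetch(url)
      if (response.ok)
        return response
      lastError = new Error(`HTTP ${response.status} ${response.statusText}`)
    }
    catch (error) {
      lastError = error
    }
    if (attempt < retries)
      await new Promise(resolve => setTimeout(resolve, delayMs * attempt))
  }
  throw new Error(`Failed to fetch ${url} after ${retries} attempts: ${String(lastError)}`)
}
```

Swapping this in for the bare `fetch` calls inside the `map` would let transient failures heal while still surfacing persistent ones.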
| "env": [ | ||
| "OPENAI_API_KEY", | ||
| "OPENAI_BASE_URL", | ||
| "OPENAI_MODEL", | ||
| "OPENAI_COMPATIBLE_API_KEY", | ||
| "OPENAI_COMPATIBLE_BASE_URL", | ||
| "OPENAI_COMPATIBLE_MODEL", | ||
| "OPENAI_SPEECH_MODEL", | ||
| "OPENAI_COMPATIBLE_SPEECH_MODEL", | ||
| "OPENAI_TRANSCRIPTION_MODEL", | ||
| "OPENAI_COMPATIBLE_TRANSCRIPTION_MODEL", | ||
| "OPENROUTER_API_KEY", | ||
| "OPENROUTER_BASE_URL", | ||
| "OPENROUTER_MODEL", | ||
| "ANTHROPIC_API_KEY", | ||
| "ANTHROPIC_BASE_URL", | ||
| "ANTHROPIC_MODEL", | ||
| "GOOGLE_GENERATIVE_AI_API_KEY", | ||
| "GOOGLE_GENERATIVE_AI_BASE_URL", | ||
| "GOOGLE_GENERATIVE_AI_MODEL", | ||
| "DEEPSEEK_API_KEY", | ||
| "DEEPSEEK_BASE_URL", | ||
| "DEEPSEEK_MODEL", | ||
| "AI302_API_KEY", | ||
| "AI302_BASE_URL", | ||
| "AI302_MODEL", | ||
| "TOGETHER_API_KEY", | ||
| "TOGETHER_BASE_URL", | ||
| "TOGETHER_MODEL", | ||
| "XAI_API_KEY", | ||
| "XAI_BASE_URL", | ||
| "XAI_MODEL", | ||
| "NOVITA_API_KEY", | ||
| "NOVITA_BASE_URL", | ||
| "NOVITA_MODEL", | ||
| "FIREWORKS_API_KEY", | ||
| "FIREWORKS_BASE_URL", | ||
| "FIREWORKS_MODEL", | ||
| "FEATHERLESS_API_KEY", | ||
| "FEATHERLESS_BASE_URL", | ||
| "FEATHERLESS_MODEL", | ||
| "PERPLEXITY_API_KEY", | ||
| "PERPLEXITY_BASE_URL", | ||
| "PERPLEXITY_MODEL", | ||
| "MISTRAL_API_KEY", | ||
| "MISTRAL_BASE_URL", | ||
| "MISTRAL_MODEL", | ||
| "MOONSHOT_API_KEY", | ||
| "MOONSHOT_BASE_URL", | ||
| "MOONSHOT_MODEL", | ||
| "MODELSCOPE_API_KEY", | ||
| "MODELSCOPE_BASE_URL", | ||
| "MODELSCOPE_MODEL", | ||
| "CLOUDFLARE_WORKERS_AI_MODEL", | ||
| "OLLAMA_MODEL", | ||
| "OLLAMA_EMBEDDING_MODEL", | ||
| "LM_STUDIO_MODEL", | ||
| "PLAYER2_MODEL", | ||
| "PLAYER2_SPEECH_MODEL", | ||
| "VLLM_MODEL", | ||
| "VITE_AIRI_WS_URL", | ||
| "MEMORY_PROVIDER", | ||
| "SHORT_TERM_MEMORY_PROVIDER", | ||
| "MEMORY_NAMESPACE", | ||
| "SHORT_TERM_MEMORY_MAX_MESSAGES", | ||
| "SHORT_TERM_MEMORY_TTL_SECONDS", | ||
| "UPSTASH_REDIS_REST_URL", | ||
| "UPSTASH_REDIS_REST_TOKEN", | ||
| "REDIS_HOST", | ||
| "REDIS_PORT", | ||
| "REDIS_PASSWORD", | ||
| "LONG_TERM_MEMORY_PROVIDER", | ||
| "MEMORY_LONG_TERM_PROVIDER", | ||
| "QDRANT_URL", | ||
| "QDRANT_API_KEY", | ||
| "QDRANT_COLLECTION", | ||
| "QDRANT_VECTOR_SIZE", | ||
| "POSTGRES_URL", | ||
| "POSTGRES_PRISMA_URL", | ||
| "DATABASE_URL", | ||
| "POSTGRES_HOST", | ||
| "POSTGRES_PORT", | ||
| "POSTGRES_DATABASE", | ||
| "POSTGRES_USER", | ||
| "POSTGRES_PASSWORD", | ||
| "POSTGRES_SSL", | ||
| "MEMORY_EMBEDDING_PROVIDER", | ||
| "MEMORY_EMBEDDING_API_KEY", | ||
| "MEMORY_EMBEDDING_BASE_URL", | ||
| "MEMORY_EMBEDDING_MODEL", | ||
| "CLOUDFLARE_ACCOUNT_ID", | ||
| "DEFAULT_CHAT_PROVIDER", | ||
| "DEFAULT_SPEECH_PROVIDER", | ||
| "DEFAULT_TRANSCRIPTION_PROVIDER" | ||
| ], | ||
| "passThroughEnv": [ | ||
| "OPENAI_API_KEY", | ||
| "OPENAI_BASE_URL", | ||
| "OPENAI_MODEL", | ||
| "OPENAI_COMPATIBLE_API_KEY", | ||
| "OPENAI_COMPATIBLE_BASE_URL", | ||
| "OPENAI_COMPATIBLE_MODEL", | ||
| "OPENAI_SPEECH_MODEL", | ||
| "OPENAI_COMPATIBLE_SPEECH_MODEL", | ||
| "OPENAI_TRANSCRIPTION_MODEL", | ||
| "OPENAI_COMPATIBLE_TRANSCRIPTION_MODEL", | ||
| "OPENROUTER_API_KEY", | ||
| "OPENROUTER_BASE_URL", | ||
| "OPENROUTER_MODEL", | ||
| "ANTHROPIC_API_KEY", | ||
| "ANTHROPIC_BASE_URL", | ||
| "ANTHROPIC_MODEL", | ||
| "GOOGLE_GENERATIVE_AI_API_KEY", | ||
| "GOOGLE_GENERATIVE_AI_BASE_URL", | ||
| "GOOGLE_GENERATIVE_AI_MODEL", | ||
| "DEEPSEEK_API_KEY", | ||
| "DEEPSEEK_BASE_URL", | ||
| "DEEPSEEK_MODEL", | ||
| "AI302_API_KEY", | ||
| "AI302_BASE_URL", | ||
| "AI302_MODEL", | ||
| "TOGETHER_API_KEY", | ||
| "TOGETHER_BASE_URL", | ||
| "TOGETHER_MODEL", | ||
| "XAI_API_KEY", | ||
| "XAI_BASE_URL", | ||
| "XAI_MODEL", | ||
| "NOVITA_API_KEY", | ||
| "NOVITA_BASE_URL", | ||
| "NOVITA_MODEL", | ||
| "FIREWORKS_API_KEY", | ||
| "FIREWORKS_BASE_URL", | ||
| "FIREWORKS_MODEL", | ||
| "FEATHERLESS_API_KEY", | ||
| "FEATHERLESS_BASE_URL", | ||
| "FEATHERLESS_MODEL", | ||
| "PERPLEXITY_API_KEY", | ||
| "PERPLEXITY_BASE_URL", | ||
| "PERPLEXITY_MODEL", | ||
| "MISTRAL_API_KEY", | ||
| "MISTRAL_BASE_URL", | ||
| "MISTRAL_MODEL", | ||
| "MOONSHOT_API_KEY", | ||
| "MOONSHOT_BASE_URL", | ||
| "MOONSHOT_MODEL", | ||
| "MODELSCOPE_API_KEY", | ||
| "MODELSCOPE_BASE_URL", | ||
| "MODELSCOPE_MODEL", | ||
| "CLOUDFLARE_WORKERS_AI_MODEL", | ||
| "OLLAMA_MODEL", | ||
| "OLLAMA_EMBEDDING_MODEL", | ||
| "LM_STUDIO_MODEL", | ||
| "PLAYER2_MODEL", | ||
| "PLAYER2_SPEECH_MODEL", | ||
| "VLLM_MODEL", | ||
| "VITE_AIRI_WS_URL", | ||
| "MEMORY_PROVIDER", | ||
| "SHORT_TERM_MEMORY_PROVIDER", | ||
| "MEMORY_NAMESPACE", | ||
| "SHORT_TERM_MEMORY_MAX_MESSAGES", | ||
| "SHORT_TERM_MEMORY_TTL_SECONDS", | ||
| "UPSTASH_REDIS_REST_URL", | ||
| "UPSTASH_REDIS_REST_TOKEN", | ||
| "REDIS_HOST", | ||
| "REDIS_PORT", | ||
| "REDIS_PASSWORD", | ||
| "LONG_TERM_MEMORY_PROVIDER", | ||
| "MEMORY_LONG_TERM_PROVIDER", | ||
| "QDRANT_URL", | ||
| "QDRANT_API_KEY", | ||
| "QDRANT_COLLECTION", | ||
| "QDRANT_VECTOR_SIZE", | ||
| "POSTGRES_URL", | ||
| "POSTGRES_PRISMA_URL", | ||
| "DATABASE_URL", | ||
| "POSTGRES_HOST", | ||
| "POSTGRES_PORT", | ||
| "POSTGRES_DATABASE", | ||
| "POSTGRES_USER", | ||
| "POSTGRES_PASSWORD", | ||
| "POSTGRES_SSL", | ||
| "MEMORY_EMBEDDING_PROVIDER", | ||
| "MEMORY_EMBEDDING_API_KEY", | ||
| "MEMORY_EMBEDDING_BASE_URL", | ||
| "MEMORY_EMBEDDING_MODEL", | ||
| "CLOUDFLARE_ACCOUNT_ID", | ||
| "DEFAULT_CHAT_PROVIDER", | ||
| "DEFAULT_SPEECH_PROVIDER", | ||
| "DEFAULT_TRANSCRIPTION_PROVIDER" | ||
| ] |
The env and passThroughEnv arrays are defined here for the build task, but they seem to be identical to the globalEnv and globalPassThroughEnv arrays defined at the root of the configuration. Since these are global, they should apply to all tasks, making the task-specific definitions redundant. To simplify the configuration, you can remove the env and passThroughEnv from the build task and rely on the global definitions.
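For illustration, the deduplicated shape would hoist the lists to the root (the lists are abbreviated to two variables here, and the `build` task's `outputs` value is a placeholder, not taken from this PR):

```json
{
  "globalEnv": ["OPENAI_API_KEY", "DEFAULT_CHAT_PROVIDER"],
  "globalPassThroughEnv": ["OPENAI_API_KEY", "DEFAULT_CHAT_PROVIDER"],
  "tasks": {
    "build": {
      "outputs": ["dist/**"]
    }
  }
}
```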
Are you sure? This will bundle all the API secrets into visible text as part of the bundle, for everyone to see.
```ts
readonly OPENAI_API_KEY?: string
readonly OPENAI_BASE_URL?: string
readonly OPENAI_MODEL?: string
readonly OPENAI_COMPATIBLE_API_KEY?: string
readonly OPENAI_COMPATIBLE_BASE_URL?: string
readonly OPENAI_COMPATIBLE_MODEL?: string
readonly OPENAI_SPEECH_MODEL?: string
readonly OPENAI_COMPATIBLE_SPEECH_MODEL?: string
readonly OPENAI_TRANSCRIPTION_MODEL?: string
readonly OPENAI_COMPATIBLE_TRANSCRIPTION_MODEL?: string
readonly OPENROUTER_API_KEY?: string
readonly OPENROUTER_BASE_URL?: string
readonly OPENROUTER_MODEL?: string
readonly ANTHROPIC_API_KEY?: string
readonly ANTHROPIC_BASE_URL?: string
readonly ANTHROPIC_MODEL?: string
readonly GOOGLE_GENERATIVE_AI_API_KEY?: string
readonly GOOGLE_GENERATIVE_AI_BASE_URL?: string
readonly GOOGLE_GENERATIVE_AI_MODEL?: string
readonly DEEPSEEK_API_KEY?: string
readonly DEEPSEEK_BASE_URL?: string
readonly DEEPSEEK_MODEL?: string
readonly AI302_API_KEY?: string
readonly AI302_BASE_URL?: string
readonly AI302_MODEL?: string
readonly TOGETHER_API_KEY?: string
readonly TOGETHER_BASE_URL?: string
readonly TOGETHER_MODEL?: string
readonly XAI_API_KEY?: string
readonly XAI_BASE_URL?: string
readonly XAI_MODEL?: string
readonly NOVITA_API_KEY?: string
readonly NOVITA_BASE_URL?: string
readonly NOVITA_MODEL?: string
readonly FIREWORKS_API_KEY?: string
readonly FIREWORKS_BASE_URL?: string
readonly FIREWORKS_MODEL?: string
readonly FEATHERLESS_API_KEY?: string
readonly FEATHERLESS_BASE_URL?: string
readonly FEATHERLESS_MODEL?: string
readonly PERPLEXITY_API_KEY?: string
readonly PERPLEXITY_BASE_URL?: string
readonly PERPLEXITY_MODEL?: string
readonly MISTRAL_API_KEY?: string
readonly MISTRAL_BASE_URL?: string
readonly MISTRAL_MODEL?: string
readonly MOONSHOT_API_KEY?: string
readonly MOONSHOT_BASE_URL?: string
readonly MOONSHOT_MODEL?: string
readonly MODELSCOPE_API_KEY?: string
readonly MODELSCOPE_BASE_URL?: string
readonly MODELSCOPE_MODEL?: string
readonly CLOUDFLARE_WORKERS_AI_MODEL?: string
readonly OLLAMA_MODEL?: string
readonly OLLAMA_EMBEDDING_MODEL?: string
readonly LM_STUDIO_MODEL?: string
readonly PLAYER2_MODEL?: string
readonly PLAYER2_SPEECH_MODEL?: string
readonly VLLM_MODEL?: string
readonly VITE_AIRI_WS_URL?: string
readonly DEFAULT_CHAT_PROVIDER?: string
readonly DEFAULT_SPEECH_PROVIDER?: string
readonly DEFAULT_TRANSCRIPTION_PROVIDER?: string
```
```
AGENTS.md
CLAUDE.md
```
You can include them.
Summary
Add comprehensive Chinese (Simplified) documentation for deploying AIRI Stage Web on Vercel.
Contents
File Structure
docs/content/zh-Hans/docs/guides/deploy/vercel.md
Related
docs/content/en/docs/guides/deploy/vercel.md