Conversation


@inoribea inoribea commented Oct 7, 2025

Summary

Add comprehensive support for loading Live2D models directly from model3.json URLs, including automatic resource fetching and packaging.

Features

  • 🔗 GitHub URL Support: Auto-convert GitHub blob URLs to raw URLs
    • Example: github.com/user/repo/blob/main/model.model3.json → raw.githubusercontent.com/user/repo/main/model.model3.json
  • 📦 Smart Resource Collection: Parse model3.json and automatically fetch all referenced files:
    • Textures (.png)
    • Motions (.motion3.json)
    • Physics files (.physics3.json)
    • Pose data (.pose3.json)
    • Expressions (.exp3.json)
    • Audio files (.mp3, .wav)
  • 🗜️ Auto-packaging: Use JSZip to package all resources into a single zip file
  • 💾 Offline Caching: Cache the packaged model for offline use
  • 🖼️ Preview Generation: Automatically generate preview images for imported models

Usage

Users can now simply paste a model3.json URL from GitHub or other hosting services:
https://github.com/Eikanya/Live2d-model/blob/master/xxxxxx/xxx/xxxx.model3.json

The system will:

  1. Convert the URL to raw format
  2. Download the model3.json file
  3. Parse and download all referenced resources
  4. Package everything into a zip file
  5. Load the Live2D model
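
Step 3 above (parsing the model3.json and collecting every referenced resource) can be sketched as a pure helper that walks the Cubism 3 FileReferences section. This is an illustrative sketch, not the PR's actual code; the interface and function names here are assumptions.

```typescript
// Sketch: collect the relative file paths referenced by a parsed
// model3.json. The shape follows the Cubism 3 FileReferences layout;
// `collectReferencedFiles` is a hypothetical name, not the PR's API.
interface Model3Json {
  FileReferences?: {
    Moc?: string
    Textures?: string[]
    Physics?: string
    Pose?: string
    Motions?: Record<string, { File?: string, Sound?: string }[]>
    Expressions?: { Name?: string, File?: string }[]
  }
}

function collectReferencedFiles(model: Model3Json): string[] {
  const refs = model.FileReferences ?? {}
  const files: string[] = []
  if (refs.Moc)
    files.push(refs.Moc)
  files.push(...(refs.Textures ?? []))
  if (refs.Physics)
    files.push(refs.Physics)
  if (refs.Pose)
    files.push(refs.Pose)
  // Motions are grouped by name; each entry may carry a motion file and a sound.
  for (const group of Object.values(refs.Motions ?? {})) {
    for (const motion of group) {
      if (motion.File)
        files.push(motion.File)
      if (motion.Sound)
        files.push(motion.Sound)
    }
  }
  for (const exp of refs.Expressions ?? []) {
    if (exp.File)
      files.push(exp.File)
  }
  return files
}
```

Each collected path would then be resolved against the model3.json's base URL, fetched, and added to the JSZip archive (steps 3–4).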

Technical Details

  • Added convertGitHubBlobUrlToRaw() function for URL transformation
  • Added loadModel3JsonAndResources() function for resource collection
  • Enhanced addDisplayModelFromURL() to detect and handle .model3.json URLs
  • Updated UI placeholder text to indicate .model3.json support
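
The GitHub blob-to-raw transformation could look roughly like the following. This is a minimal sketch assuming URLs of the form github.com/&lt;owner&gt;/&lt;repo&gt;/blob/&lt;ref&gt;/&lt;path&gt;; the actual convertGitHubBlobUrlToRaw() in this PR may differ.

```typescript
// Sketch of the blob → raw URL conversion described above.
// Non-GitHub URLs pass through unchanged.
function convertGitHubBlobUrlToRaw(url: string): string {
  const match = url.match(/^https:\/\/github\.com\/([^/]+)\/([^/]+)\/blob\/(.+)$/)
  if (!match)
    return url // not a GitHub blob URL; leave as-is
  const [, owner, repo, refAndPath] = match
  return `https://raw.githubusercontent.com/${owner}/${repo}/${refAndPath}`
}
```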

Testing

  • Tested with various GitHub-hosted model3.json files
  • Verified resource fetching and packaging
  • Confirmed model loads correctly after import

inoribea and others added 24 commits October 6, 2025 08:31
- Add URL import dialog for VRM and Live2D models
- Support VPM JSON format parsing
- Implement model caching with IndexedDB
- Auto-detect model format from URL extension
- Support direct .vrm, .zip, and .json URLs

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Fix CORS font loading from jsdelivr.net by using local Kiwi Maru
- Add smart WebSocket URL detection (disabled in production)
- Add CORS headers to Vercel configuration
- Support auto-switching between ws:// and wss:// protocols

Fixes font loading errors and WebSocket connection failures on deployed instances.
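
The protocol auto-switching mentioned in this commit can be sketched as picking ws:// or wss:// from the page's own protocol. The function name and signature here are illustrative, not the commit's actual code.

```typescript
// Sketch: derive the WebSocket URL from the page protocol so that
// HTTPS deployments automatically use wss:// and plain HTTP uses ws://.
function resolveWebSocketUrl(pageProtocol: string, host: string, path = '/ws'): string {
  const scheme = pageProtocol === 'https:' ? 'wss' : 'ws'
  return `${scheme}://${host}${path}`
}
```

In a browser this would typically be called as resolveWebSocketUrl(location.protocol, location.host).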

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Auto-select default chat provider when configured
- Auto-select default speech provider when configured
- Auto-select default transcription provider when configured
- Add watcher to set active provider from env variables

Improves onboarding UX by pre-selecting providers based on deployment configuration.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Add provider API keys to Vite define config
- Add default provider selection environment variables
- Update .gitignore for build artifacts
- Update stage-web README and type definitions

Enables deploying with pre-configured provider credentials via environment variables.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Add comprehensive Vercel deployment section to README
- Document all LLM provider environment variables
- Add default provider selection variables
- Include configuration examples
- Support multiple languages (EN, ZH-CN, JA-JP, FR)

This helps users deploy AIRI to Vercel with pre-configured providers, improving the deployment experience.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Add globalEnv variables to turbo.json to pass environment to build
- Add dependsOn to build task to ensure proper dependency order
- Fix vercel.json buildCommand to use 'pnpm run build:web'

Fixes:
- Turborepo warning about missing environment variables
- Build command execution issues on Vercel

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Change buildCommand to use 'pnpm -w run build:web'
- Ensures build script runs at workspace root level
- Fixes 'ERR_PNPM_NO_SCRIPT Missing script: build:web' error
- Add all provider env vars to turbo.json globalEnv
- Add build task dependencies to turbo.json

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
…nings

- Change globalEnv to globalPassThroughEnv in turbo.json
- Prevents warnings for optional provider environment variables
- These variables are not required and warnings are unnecessary

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Disable Turbo remote cache to fix 400 error with Vercel artifacts API
- Update build command to use turbo filter syntax for better integration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Set vite output directory to project root dist folder for Vercel
- Add empty env array to turbo build task to suppress unnecessary environment variable warnings
- Update vercel.json outputDirectory to match new build output location

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Add empty passThroughEnv array to build task to override globalPassThroughEnv,
preventing unnecessary environment variable warnings for packages that don't need them.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Revert to original output directory setup where Vite outputs to apps/stage-web/dist
to match Turbo's outputs configuration and ensure proper build artifact tracking.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Fix build error where @proj-airi/memory-system was imported but not
declared as a dependency in server-runtime/package.json. This caused
unresolved import warnings during build:

[UNRESOLVED_IMPORT] Could not resolve '@proj-airi/memory-system' in src/services/memory.ts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Merged latest changes from moeru-ai/airi upstream including:
- feat(injecta): new dependency injection package
- refactor(stage-tamagotchi): improved structure
- Various fixes and improvements

Preserved local changes:
- Memory system (@proj-airi/memory-system)
- Memory UI components and configuration
- All memory-related functionality

Resolved conflicts by regenerating pnpm-lock.yaml

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Add comprehensive Chinese (Simplified) translations for memory-related
settings pages, including:
- Short-term memory configuration (providers, TTL, namespace, etc.)
- Long-term memory configuration (database, embeddings, search, etc.)
- All labels, descriptions, hints, and error messages

Translations follow the existing i18n patterns and terminology used
throughout the project.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
… fetching

Add comprehensive support for loading Live2D models directly from
model3.json URLs (including GitHub blob URLs):

Features:
- Auto-convert GitHub blob URLs to raw URLs
- Parse model3.json and collect all referenced resources
  (textures, motions, physics, expressions, sounds, etc.)
- Batch download all resources from the same base directory
- Package everything into a zip file using JSZip
- Cache the packaged model for offline use
- Generate preview images for imported models

This enables users to load Live2D models directly from GitHub or other
hosting services by simply pasting the model3.json URL, without needing
to manually download and package all the files.

Example URL support:
https://github.com/user/repo/blob/main/model/model.model3.json

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

netlify bot commented Oct 7, 2025

Deploy Preview for airi-vtuber ready!

Name Link
🔨 Latest commit 0c447cc
🔍 Latest deploy log https://app.netlify.com/projects/airi-vtuber/deploys/68e50f0569a7f50008b2f37b
😎 Deploy Preview https://deploy-preview-638--airi-vtuber.netlify.app


netlify bot commented Oct 7, 2025

Deploy Preview for airi-docs ready!

Name Link
🔨 Latest commit 0c447cc
🔍 Latest deploy log https://app.netlify.com/projects/airi-docs/deploys/68e50f052db49300080f5fb2
😎 Deploy Preview https://deploy-preview-638--airi-docs.netlify.app


Summary of Changes

Hello @inoribea, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the application's capabilities by introducing a robust memory system for managing conversation context and long-term knowledge, alongside comprehensive support for importing Live2D models from URLs. It also refines the deployment process by enabling extensive environment variable configuration for LLM providers and memory settings, improving overall flexibility and ease of use.

Highlights

  • Live2D Model Loading from URLs: Introduced comprehensive support for loading Live2D models directly from model3.json URLs. This includes automatic conversion of GitHub blob URLs to raw URLs, smart collection of all referenced resources (textures, motions, physics, etc.), and packaging them into a single ZIP file using JSZip for offline caching and preview generation.
  • New Memory System Implementation: A new memory system has been added, featuring both short-term and long-term memory providers. Short-term memory can use Local Redis, Upstash Redis, or Vercel KV, while long-term memory supports PostgreSQL with pgvector or Qdrant for vector embeddings. This system allows for persistent storage and semantic search of conversation history.
  • Memory System UI and API: Dedicated UI sections have been created for configuring both short-term and long-term memory providers, including connection details, retention policies, and embedding settings. New API endpoints in the server runtime allow for dynamic configuration, saving, fetching, searching, and clearing of memory data.
  • Enhanced Environment Variable Support: The application now supports a wide array of environment variables for pre-configuring LLM providers (chat, speech, transcription) and memory system settings. This significantly streamlines deployment, especially on platforms like Vercel, by allowing default provider and model selections without manual UI configuration.
  • Updated Documentation and Deployment Guides: The README files (English, French, Japanese, Chinese) and a new Vercel deployment guide (docs/content/en/docs/guides/deploy/vercel.md) have been updated to reflect the new environment variable configurations and deployment steps for the memory system and LLM providers.
  • Improved Font Handling: Multiple @fontsource packages have been added to preload locally bundled fonts, avoiding runtime CDN requests and improving performance. The uno.config.ts was updated to reflect this change.

Ignored Files

  • Ignored by pattern: packages/i18n/src/** (1)
    • packages/i18n/src/locales/zh-Hans/settings.yaml
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces two major features: a comprehensive conversation memory system and a Live2D model loader that can fetch models and their resources from a model3.json URL. The memory system is a significant addition, providing both short-term and long-term storage options with various backends like Redis, Postgres, and Qdrant, complete with a new settings UI. The model loader intelligently handles GitHub URLs and packages all necessary Live2D resources into a zip for caching. My review focuses on the correctness and robustness of these new, complex features. I've identified a critical bug in the memory system's configuration factory, a type error in the UI store, and a suggestion to improve the robustness of the model resource fetching logic. Overall, these are powerful additions to the project.

Comment on lines +292 to +294
if (config.provider !== 'postgres-pgvector') {
  throw new Error(`Unsupported long-term memory provider: ${config.provider}`)
}

critical

This conditional check incorrectly prevents the use of the qdrant provider when creating a long-term memory provider from a configuration object. The condition config.provider !== 'postgres-pgvector' will be true for qdrant, causing an error to be thrown before the qdrant implementation is reached. This check appears to be a logic error and should be removed to allow all supported providers to be configured. The throw at the end of the function already handles unsupported providers.
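
Removing the early guard leaves a factory shaped roughly like the following. This is a sketch with stand-in return values (the real code constructs provider objects); the function and type names are assumptions, not the PR's actual declarations.

```typescript
// Sketch: dispatch on every supported provider and reserve the throw
// for genuinely unknown values, as the review suggests.
type LongTermProvider = 'postgres-pgvector' | 'qdrant'

function createLongTermProvider(config: { provider: LongTermProvider }): string {
  switch (config.provider) {
    case 'postgres-pgvector':
      return 'postgres-pgvector backend' // stand-in for the real constructor
    case 'qdrant':
      return 'qdrant backend' // previously unreachable due to the early throw
    default:
      throw new Error(`Unsupported long-term memory provider: ${config.provider}`)
  }
}
```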

}
})

await Promise.all(fetchPromises)

medium

When fetching resources for a model3.json, a failure to fetch an individual resource is only logged as a warning. While this is acceptable for non-essential files, if a critical file like the .moc3 file fails to download, the process will still succeed, resulting in a broken model entry. It would be more robust to identify critical files and throw an error if they fail to download, preventing the creation of a non-functional model.

Suggested change:

await Promise.all(fetchPromises)
// Verify that critical files have been added to the zip
if (fileRefs?.Moc && !zip.file(fileRefs.Moc)) {
  throw new Error(`Failed to fetch critical model file: ${fileRefs.Moc}`)
}

Comment on lines +364 to +419
longTerm: enabledLongTerm.value
  ? longTermProvider.value === 'qdrant'
    ? {
        enabled: true,
        provider: 'qdrant',
        qdrant: {
          url: longTermQdrantUrl.value,
          apiKey: longTermQdrantApiKey.value || undefined,
          collectionName: longTermQdrantCollection.value || 'memory_entries',
          vectorSize: Number(longTermQdrantVectorSize.value) || undefined,
        },
        embedding: {
          provider: embeddingProvider.value,
          apiKey: embeddingApiKey.value,
          baseUrl: embeddingBaseUrl.value || undefined,
          accountId: embeddingAccountId.value || undefined,
          model: embeddingModel.value,
        },
      } satisfies QdrantLongTermPayload
    : {
        enabled: true,
        provider: 'postgres-pgvector',
        connection: {
          connectionString: longTermConnectionString.value || undefined,
          host: longTermHost.value || undefined,
          port: Number(longTermPort.value) || undefined,
          database: longTermDatabase.value || undefined,
          user: longTermUser.value || undefined,
          password: longTermPassword.value || undefined,
          ssl: Boolean(longTermSsl.value),
        },
        embedding: {
          provider: embeddingProvider.value,
          apiKey: embeddingApiKey.value,
          baseUrl: embeddingBaseUrl.value || undefined,
          accountId: embeddingAccountId.value || undefined,
          model: embeddingModel.value,
        },
      } satisfies PostgresLongTermPayload
  : longTermProvider.value === 'qdrant'
    ? {
        enabled: false,
        provider: 'qdrant',
        qdrant: {
          url: longTermQdrantUrl.value,
          apiKey: longTermQdrantApiKey.value || undefined,
          collectionName: longTermQdrantCollection.value || 'memory_entries',
          vectorSize: Number(longTermQdrantVectorSize.value) || undefined,
        },
      } satisfies QdrantLongTermPayload
    : {
        enabled: false,
        provider: 'postgres-pgvector',
        connection: {},
      } satisfies PostgresLongTermPayload,
}

medium

There are a couple of issues in this block:

  1. The types QdrantLongTermPayload and PostgresLongTermPayload used with the satisfies keyword are not defined. This will cause a TypeScript error. The correct types seem to be QdrantLongTermConfiguration and PostgresLongTermConfiguration from @proj-airi/memory-system.
  2. The logic for constructing the longTerm payload is duplicated for the enabled: true and enabled: false cases. This could be refactored to be more concise and maintainable by building the main object first and then setting the enabled property.
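
The deduplication suggested in point 2 can be sketched as building the provider-specific payload once and attaching enabled afterwards. The interfaces below are simplified stand-ins for the QdrantLongTermConfiguration and PostgresLongTermConfiguration types mentioned in point 1, not the real declarations from @proj-airi/memory-system.

```typescript
// Simplified stand-in types for illustration only.
interface QdrantLongTermConfiguration {
  enabled: boolean
  provider: 'qdrant'
  qdrant: { url: string }
}
interface PostgresLongTermConfiguration {
  enabled: boolean
  provider: 'postgres-pgvector'
  connection: Record<string, unknown>
}

// Build the provider-specific part once, then set `enabled`, instead of
// duplicating the whole object for the true and false cases.
function buildLongTermPayload(
  provider: 'qdrant' | 'postgres-pgvector',
  enabled: boolean,
  qdrantUrl: string,
): QdrantLongTermConfiguration | PostgresLongTermConfiguration {
  const base = provider === 'qdrant'
    ? { provider: 'qdrant' as const, qdrant: { url: qdrantUrl } }
    : { provider: 'postgres-pgvector' as const, connection: {} }
  return { ...base, enabled }
}
```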
