Implement Functionality for LLMs Loaded in the Browser #3

Open · jmemcc opened this issue Jan 25, 2025 · 0 comments
Assignees: jmemcc
Labels: enhancement (New feature or request)
jmemcc commented Jan 25, 2025

Issue

The setup currently supports inference only through local inference tools and internet-hosted OpenAI models, but in-browser models are also a good fit where the user wants a WebGPU approach.

Solution

Add support for running models in the browser in a serverless way, potentially with a tool like BrowserAI or WebLLM.
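
As a rough sketch of what the WebLLM route could look like (assuming the `@mlc-ai/web-llm` npm package and its OpenAI-compatible chat API; the model ID below is just one example from WebLLM's prebuilt model list):

```typescript
import { CreateMLCEngine, InitProgressReport } from "@mlc-ai/web-llm";

// Download and compile the model in the browser (weights are cached by the
// browser after the first load), reporting progress so the UI can show a
// loading state instead of appearing frozen.
async function loadBrowserModel() {
  const engine = await CreateMLCEngine(
    // Example model ID from the WebLLM prebuilt model list
    "Llama-3.1-8B-Instruct-q4f32_1-MLC",
    {
      initProgressCallback: (report: InitProgressReport) =>
        console.log(report.text),
    }
  );
  return engine;
}

// Inference uses an OpenAI-compatible chat completions API, so it could
// plausibly slot in behind the same interface as the existing OpenAI path.
async function chat() {
  const engine = await loadBrowserModel();
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello from the browser!" }],
  });
  console.log(reply.choices[0]?.message.content);
}

chat();
```

Since the chat completions call mirrors the OpenAI API shape, the in-browser path could likely reuse the existing OpenAI request/response handling. WebGPU availability can be feature-detected via `navigator.gpu` before offering this option to the user.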

jmemcc added the enhancement label Jan 25, 2025
jmemcc self-assigned this Jan 25, 2025