Use a model remotely on a local network - use the client via the API or even spin up a webpage? #2294
accessiblepixel started this conversation in Ideas
Replies: 2 comments
-
There are some ways, but only indirectly.
-
Hi, I am working on a fork that uses the network for processing; please see meshgrid_gpt at www.mc3d.cl
-
Hi,
First off, thank you for the work on the project so far. I like having the ability to play with some of the LLMs offline and locally in a very accessible manner. However, only a few of my systems are well suited to the task of running gpt4all. I've seen that there's an HTTP API available; is it possible to have a centralised system that holds the models and does the processing, then returns the results to a remote client or web page?
Then I could host the 'processor' on a central machine and access it from more underpowered hardware.
Thanks!
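For reference, a minimal sketch of what such a thin client might look like, assuming the GPT4All desktop app's local API server is enabled and reachable from the LAN (by default it listens on localhost port 4891 with an OpenAI-style interface, so reaching it from another machine may require a reverse proxy or tunnel on the host). The server address and model name below are placeholders, not values from this thread:

```python
# Hypothetical thin client for a GPT4All API server running on another
# machine on the local network. Assumes the host has the local API server
# enabled and exposed on the LAN; SERVER and the model name are placeholders.
import json
import urllib.request

SERVER = "http://192.168.1.50:4891"  # hypothetical LAN address of the host


def build_chat_request(prompt, model="Llama 3 8B Instruct"):
    """Build an OpenAI-style chat-completion payload for the server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 200,
    }


def ask(prompt):
    """POST the prompt to the remote server and return the reply text."""
    payload = build_chat_request(prompt)
    req = urllib.request.Request(
        SERVER + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Hello from a thin client!"))
```

With this shape, any underpowered machine on the network only needs to send JSON over HTTP, while the central host does all the inference; a small web page could issue the same request from JavaScript.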