Hi,

I can run an LLM model (like ggml-model-f16.bin) locally from the CLI via llama.cpp:

```
./main -m ./models/7B/ggml-model-q4_0.bin -n 128...
```

How can I embed this within my Java app? For example, to send multiple texts and ask questions about each one?

Replies: 3 comments, 1 reply

-
You should use JNI to create a wrapper over the library. Then re-implement the main example in Java to get an idea of how the API works.
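For illustration, here is a minimal sketch of what the Java side of such a binding could look like. The native method names (loadModel, generate, freeModel) and the library name llamajni are hypothetical placeholders, not part of llama.cpp; you would implement them yourself in a small C wrapper compiled against llama.cpp's headers:

```java
// Hypothetical JNI binding sketch. The native methods below do NOT exist in
// llama.cpp; they stand in for a C wrapper you would write yourself and
// compile into libllamajni.so (Linux) / llamajni.dll (Windows).
public class LlamaBinding {
    static {
        // Expects the wrapper library to be on java.library.path.
        System.loadLibrary("llamajni");
    }

    // Entry points you would implement in C against llama.cpp.
    public static native long loadModel(String path);
    public static native String generate(long model, String prompt, int nTokens);
    public static native void freeModel(long model);

    public static void main(String[] args) {
        long model = loadModel("./models/7B/ggml-model-q4_0.bin");
        System.out.println(generate(model, "Hello, world:", 128));
        freeModel(model);
    }
}
```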
-
Which library? I have llama.cpp locally, but there is no ".so" file that I can write a wrapper over.
-
The easiest approach is to run it as a child process (using whatever mechanism Java provides for executing external programs). See the example here; you need the server.js file: https://github.com/deonis1/llcui
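As a rough sketch of that approach, this uses java.lang.ProcessBuilder to spawn the llama.cpp main binary and read its output; the model path and prompt are placeholders:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class LlamaCli {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Spawn the llama.cpp CLI exactly as you would from the shell.
        ProcessBuilder pb = new ProcessBuilder(
                "./main",
                "-m", "./models/7B/ggml-model-q4_0.bin",
                "-p", "What is the capital of France?",
                "-n", "128");
        pb.redirectErrorStream(true); // merge stderr into stdout

        Process process = pb.start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // generated tokens stream in here
            }
        }
        int exitCode = process.waitFor();
        System.out.println("llama.cpp exited with code " + exitCode);
    }
}
```

For the multiple-texts use case in the question, you could spawn one process per query, or keep a single long-running process (or a local server) and send requests to it, which appears to be what the linked llcui project does with its server.js.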