Use llama.dll for my game's NPCs #9728
Unanswered · ninjadynamics asked this question in Q&A
Replies: 1 comment 2 replies
-
In case it helps: I made my NPC in UE4 with llama.cpp (in-game). I used the `main` example and put it in a UE4 thread, structured so that the thread is sent to sleep and woken up when the NPC has to talk.
-
Hello, AI noob here!
I'm writing a small game in C (with raylib) where I'd like my NPCs to have "smart" conversations with the player, using llama.cpp as a backend.
So far, I've compiled llama.cpp into a shared library using `make llama.dll` and asked ChatGPT (o1-preview) to generate a C program that emulates `llama-cli.exe`, with limited success. The code works fine, but after some back-and-forth interactions the model starts to speak nonsense/partial tokens and doesn't remember what we're talking about.
I'm using `w64devkit` on Windows 10, and my compile command is:
`~/llama.cpp $ gcc chat.c -o chat.exe -I./include -I./ggml/include -L. -lllama -lstdc++ -pthread -lm`
Note: chat.c is in the same directory as `llama-cli.exe`.
The model I'm using is `llama-3.2-1b-instruct-q8_0.gguf`, which works pretty well with `llama-cli.exe`. Here's my chat.c code:
https://gist.github.com/ninjadynamics/6d1eb01401dd6822293940676e84decd
Can anyone help?
Many thanks in advance!