-
I apologize for starting this question thread in the issues; now that I am in the correct place... I am attempting to connect https://github.com/zooteld/llama-cpp-python-streamlit to my llama.cpp server running locally. I can see that something is actually hitting the server via the console, but I am not receiving anything back from the server to the app. The author of the app informed me it's an endpoint issue (it uses a different JSON structure); it is written to use the llama-cpp-python bindings. Any pointers on how to tackle this?
-
Whatever sends requests to the … I've never used it personally, so I probably won't be able to help with follow-up questions on its use or anything like that.
-
Yeah, that is not working either... I noticed that to get it to connect I am having to change the endpoint from /v1/chat/completions to just /completion, to match what is in my llama.cpp/examples/server/public folder.
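The mismatch above comes down to the two endpoints expecting different JSON shapes. A minimal sketch of the difference, assuming llama.cpp's native /completion endpoint (which takes a `prompt` and `n_predict` and returns a `content` field) versus the OpenAI-style shape (which wraps the text in a `choices` array) — field names here are illustrative of the two styles, not a full spec:

```python
# llama.cpp's built-in server exposes a native /completion endpoint
# that takes a request roughly like this:
native_request = {
    "prompt": "Hello, my name is",
    "n_predict": 64,  # llama.cpp's token-limit field
}
# ...and replies with a flat object: {"content": " generated text..."}

# An OpenAI-style /v1/completions endpoint instead expects:
openai_request = {
    "model": "local-model",  # typically ignored by local proxies
    "prompt": "Hello, my name is",
    "max_tokens": 64,        # OpenAI's token-limit field
}
# ...and replies with {"choices": [{"text": " generated text..."}]}

def extract_text(response: dict) -> str:
    """Pull the generated text out of either response shape."""
    if "content" in response:                  # native llama.cpp shape
        return response["content"]
    return response["choices"][0]["text"]      # OpenAI-style shape
```

So a client written against one shape silently gets nothing useful back from the other, which matches the "something hits the server but nothing comes back" symptom.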
-
By "it" I was referring to api_like_OAI.py, lol.
-
For my use in the app I was beating my head against the desk trying to get working, the author of the original version pushed an update enabling endpoint swapping. So now, after launching the server and api_like_OAI.py, the app connects to port 8081 and uses /v1/completions as the endpoint. Thank you all, and especially KerfuffleV2, for putting up with me.
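For anyone landing here later, a quick way to sanity-check that setup is to send an OpenAI-style request at the proxy yourself. This is a minimal sketch assuming api_like_OAI.py is listening on localhost:8081 and the `model` field is ignored (llama.cpp serves a single model); the helper only builds the request, so the actual network call is kept behind the main guard:

```python
import json
import urllib.request

# Assumed address: api_like_OAI.py proxying the llama.cpp server on port 8081.
PROXY_URL = "http://localhost:8081/v1/completions"

def build_request(prompt: str, max_tokens: int = 64) -> urllib.request.Request:
    """Build an OpenAI-style completion request aimed at the local proxy."""
    body = json.dumps({
        "model": "local",       # placeholder; a local proxy typically ignores it
        "prompt": prompt,
        "max_tokens": max_tokens,
    }).encode("utf-8")
    return urllib.request.Request(
        PROXY_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Only works while the llama.cpp server and api_like_OAI.py are running.
    with urllib.request.urlopen(build_request("Hello, my name is")) as resp:
        print(json.load(resp)["choices"][0]["text"])
```

If this prints generated text, the app's endpoint swap to /v1/completions should work against the same proxy.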