Replies: 1 comment
-
Hi, I am also looking into local data retrieval. However, I cannot get stable and correct answers for most of my questions. Does your system reliably return correct, stable answers from your local data (assuming the data set is large enough)? For example, my local data is a text file of around 150k lines in Chinese (about 15 MB), but the answers I get are neither stable nor correct.
-
I am new to AI and have just run llama.cpp and privateGPT myself.
My use case: my company has many documents, and I hope to use AI to read them and build a question-answering chatbot based on their content.
llama.cpp responds very quickly, but I am not sure how it can learn from my documents.
privateGPT supports reading documents from local folders, but it responds much more slowly, taking almost a minute per answer.
My question is: if I need fast responses, is privateGPT simply not feasible? From what I have found, it generates a vector database from the ingested documents, and I am not sure whether every question requires querying that database before inference. If I want answers within 2 seconds, is it best to fine-tune an existing model on my data instead?
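To make the vector-database question concrete, here is a minimal sketch of the retrieve-then-answer flow that tools like privateGPT follow. This is not privateGPT's actual code: the documents, the bag-of-words "embedding", and the helper names (`embed`, `retrieve`) are all illustrative stand-ins for a real neural embedding model and vector store. The point it shows: indexing happens once and is slow, but the per-question database lookup is cheap; in practice most of the per-question latency comes from the local LLM generating the answer, not from the retrieval query.

```python
import math
import re
from collections import Counter

# Toy "embedding": bag-of-words term counts. A real system uses a
# neural sentence-embedding model, but the overall flow is the same.
def embed(text):
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Step 1 (one-time, offline): index the documents into a "vector store".
docs = [
    "Employees may work remotely two days per week.",
    "Expense reports must be filed within 30 days.",
    "The office is closed on public holidays.",
]
index = [(d, embed(d)) for d in docs]

# Step 2 (per question, fast): retrieve the closest chunks. Only these
# chunks are then passed to the LLM as context; the LLM generation step
# is where most of the response time goes.
def retrieve(question, k=1):
    q = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(retrieve("How many days can I work remotely?"))
```

So the database lookup itself is not the bottleneck; if answers take a minute, the time is dominated by running the LLM on CPU, and fine-tuning would not remove that generation cost.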