Issue generating response from the knowledge base: "I'm sorry, but I need more information to be able to..." #1908
Hi, I am using the Docker version of LightRAG ("LightRAG Server v1.4.6/0196") with Ollama for both the LLM and the embeddings. I ingested three documents, each of which was split into 2 chunks, and retrieval worked fine. But after I ingested another document that produced 38 chunks, every retrieval attempt now returns an answer like: "I'm sorry, but I need more information to proceed. Could you please clarify what knowledge graph (KG) you are referring to? Is it a specific application or system that you have access to, such as a company's internal knowledge base, a research project, or an online service like ConceptNet or YAGO?"
Yet the LightRAG logs show that entities and relationships are in fact being retrieved:
INFO: Process 1 building query context...
INFO: Query nodes: KG documents, List all documents, top_k: 200, cosine: 0.2
INFO: limit_async: 2 new workers initialized
INFO: Local query: 200 entites, 165 relations
INFO: Query edges: Document retrieval, Knowledge Graph, top_k: 200, cosine: 0.2
INFO: Global query: 192 entites, 200 relations
INFO: Naive query: 44 chunks (chunk_top_k: 200)
INFO: Truncated KG query results: 89 entities, 90 relations
INFO: KG related chunks: 39 from entitys, 31 from relations
INFO: Final context: 89 entities, 90 relations, 7 chunks
INFO: limit_async: 1 new workers initialized
INFO: 172.21.0.1:59860 - "POST /query/stream HTTP/1.1" 200
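For context, this is roughly how I am sending the query (a minimal sketch against the server's /query endpoint; the base URL, and the exact payload fields such as mode and top_k, are my assumptions from the API docs; the /query/stream endpoint in the log takes the same payload as far as I can tell):

```python
import requests

# Assumed base URL for my local Docker deployment; 9621 is the default
# LightRAG server port as far as I know. Adjust host/port as needed.
LIGHTRAG_URL = "http://localhost:9621"

payload = {
    "query": "List all documents in the knowledge graph",
    # "mode" and "top_k" are my reading of the request schema;
    # the log above suggests top_k: 200 is being applied.
    "mode": "hybrid",
    "top_k": 200,
}

resp = requests.post(f"{LIGHTRAG_URL}/query", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json())
```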
I checked llm.py in order to update the rag_response prompt, but it made no difference...
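For reference, this is roughly what I tried (a sketch assuming the template lives in the PROMPTS dict in lightrag/prompt.py, which is where I found rag_response in my version; the added wording is just my experiment, not an official fix):

```python
from lightrag.prompt import PROMPTS

# Prepend an instruction so the model answers from the supplied context
# instead of asking which knowledge graph is meant. Assumes the
# "rag_response" key exists in PROMPTS in this version of LightRAG.
PROMPTS["rag_response"] = (
    "Answer strictly from the Knowledge Base context below; "
    "do not ask the user to clarify which knowledge graph is meant.\n\n"
    + PROMPTS["rag_response"]
)
```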
What should I check, and how can I fix this?