Outputs for LLM-RAG #3
sarthh7777 started this conversation in General · Replies: 1 comment
This is a textbook ghost eval case: everything looks fine (tests pass, the tested modules show 100% coverage), but the retrieval results are fixed patterns, not semantically triggered.
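One way to catch a ghost eval like this is a test that compares retrieval for an on-topic query against an off-topic one, instead of only checking that something comes back. A minimal sketch of the test shape; `ToyStore`, `hash_embed`, and `cosine` are hypothetical stand-ins, not the project's real `ChromaStore` or all-MiniLM-L6-v2 embedder:

```python
# Sketch of a retrieval sanity check that a "ghost eval" would fail to provide.
# All names here are illustrative stand-ins, not the project's actual API.
import hashlib
import unittest


def hash_embed(text: str) -> list[float]:
    # Deterministic toy embedding: first bytes of a hash, scaled to [0, 1].
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:8]]


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)


class ToyStore:
    """Minimal in-memory vector store standing in for the real retriever."""

    def __init__(self) -> None:
        self.docs: list[tuple[str, list[float]]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, hash_embed(text)))

    def retrieve(self, query: str, k: int = 1) -> list[tuple[str, float]]:
        q = hash_embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [(text, cosine(q, emb)) for text, emb in ranked[:k]]


class TestRetrievalIsSemantic(unittest.TestCase):
    def test_query_actually_influences_score(self):
        store = ToyStore()
        store.add("Paris is the capital of France.")
        on_topic = store.retrieve("What is the capital of France?")[0][1]
        off_topic = store.retrieve("How do I bake sourdough bread?")[0][1]
        # A fixed-pattern retriever returns identical scores for any query.
        # With a real embedder you would assert on_topic > off_topic; the toy
        # hash embedder can only show the query changes the score at all.
        self.assertNotEqual(on_topic, off_topic)
```

The two `Retrieved:` lines in the output below already hint at this: the same document comes back with different scores (0.288 vs 0.370), so the scores move, but nothing in the test suite apparently asserts that an off-topic query scores lower.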
INPUT COMMANDS:
pip install -r requirements.txt
coverage run --source=llmrag -m unittest discover -s tests
coverage report -m
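These commands report coverage but never fail on it; coverage.py's `fail_under` setting would make the `coverage report` step exit non-zero when the total drops. A sketch of a `.coveragerc` at the repo root (the 80% threshold is an arbitrary assumption, not from this run):

```ini
[report]
# Exit non-zero if total coverage falls below this percentage (assumed threshold).
fail_under = 80
# Always show the "Missing" column, equivalent to the -m flag above.
show_missing = True
```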
Result:
Device set to use cpu
C:\Users\dell\.cache\chroma\onnx_models\all-MiniLM-L6-v2\onnx.tar.gz: 100%|███████████████████████████████████████████████| 79.3M/79.3M [00:17<00:00, 4.69MiB/s]
..Retrieved: [('Paris is the capital of France.', 0.28786033391952515)]
.Retrieved: [('Paris is the capital of France.', 0.3702659010887146)]
.
Ran 6 tests in 114.034s
OK
PS C:\Users\dell\Downloads\llmrag> coverage report -m
Name                                                  Stmts   Miss  Cover   Missing
llmrag\chunking\__init__.py                               1      0   100%
llmrag\chunking\text_splitter.py                          8      0   100%
llmrag\cli.py                                            52     52     0%   1-85
llmrag\embeddings\__init__.py                             3      0   100%
llmrag\embeddings\base_embedder.py                        5      1    80%   6
llmrag\embeddings\sentence_transformers_embedder.py      12      0   100%
llmrag\generators\__init__.py                             0      0   100%
llmrag\generators\local_generator.py                      7      3    57%   16, 29-35
llmrag\main.py                                           56     56     0%   1-72
llmrag\models\__init__.py                                 3      0   100%
llmrag\models\base_model.py                               5      1    80%   6
llmrag\models\transformers_model.py                       8      0   100%
llmrag\pipelines\__init__.py                              1      0   100%
llmrag\pipelines\rag_pipeline.py                         21      6    71%   19, 21, 61-64
llmrag\retrievers\__init__.py                            12      7    42%   7-14
llmrag\retrievers\base_vector_store.py                    8      2    75%   6, 10
llmrag\retrievers\chroma_store.py                        42      3    93%   48, 67-68
llmrag\retrievers\faiss_store.py                         27     27     0%   1-31
llmrag\streamlit_app.py                                  29     29     0%   1-53
TOTAL
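The TOTAL row was cut off in the paste, but it can be recomputed from the per-file numbers above. A quick sketch (the `:.0f` rounding mirrors coverage.py's percent display):

```python
# Recompute the truncated TOTAL row from the per-file (Stmts, Miss) pairs above.
rows = [
    (1, 0), (8, 0), (52, 52), (3, 0), (5, 1), (12, 0), (0, 0), (7, 3),
    (56, 56), (3, 0), (5, 1), (8, 0), (1, 0), (21, 6), (12, 7), (8, 2),
    (42, 3), (27, 27), (29, 29),
]
stmts = sum(s for s, _ in rows)
miss = sum(m for _, m in rows)
print(f"TOTAL {stmts} {miss} {100 * (stmts - miss) / stmts:.0f}%")  # → TOTAL 300 187 38%
```

So while every tested module that is exercised sits at or near 100%, the project as a whole is around 38% covered, with `cli.py`, `main.py`, `faiss_store.py`, and `streamlit_app.py` entirely untested.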
CONCLUSION:
ALL 6 TESTS PASSED (OK)