LLM-RAG-PMR BRANCH #9
sarthh7777 started this conversation in General
Replies: 1 comment
-
Noticed you've got chunking and ingestion tests wired in. Depending on how deep you go (especially mixed-format or cross-PDF ingestion), this layer tends to hit early semantic collapse.
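For what it's worth, here is a minimal sketch of the kind of regression test that can surface that failure mode early, assuming "semantic collapse" here means chunks bleeding across document boundaries or dropping their overlap context. The chunk_text helper is a hypothetical stand-in, not the repo's actual chunker API:

# Sketch of cross-document chunking regression tests. chunk_text() is a
# naive sliding-window stand-in for the project's real chunker, used only
# so the tests are self-contained.

def chunk_text(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def test_chunks_do_not_cross_document_boundaries():
    # Two "documents" that would blur together if ingestion concatenated
    # them before chunking (the cross-PDF case).
    doc_a = "invoice terms " * 50
    doc_b = "clinical notes " * 50

    chunks_a = chunk_text(doc_a)
    chunks_b = chunk_text(doc_b)

    # No chunk from one document should contain text from the other.
    assert all("clinical" not in c for c in chunks_a)
    assert all("invoice" not in c for c in chunks_b)


def test_chunk_overlap_preserves_boundary_context():
    text = " ".join(f"sentence{i}" for i in range(200))
    chunks = chunk_text(text, size=200, overlap=40)
    assert len(chunks) > 1

    # Consecutive chunks share their overlap region, so content that lands
    # on a chunk boundary is still retrievable from at least one chunk.
    for prev, nxt in zip(chunks, chunks[1:]):
        assert prev[-40:] == nxt[:40]

The same shape of test works against the real chunker once it is imported in place of the stand-in.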
-
PS C:\Users\dell\Downloads\llmrag3> pytest -v
===================================================================== test session starts ======================================================================
platform win32 -- Python 3.13.2, pytest-8.3.5, pluggy-1.6.0
rootdir: C:\Users\dell\Downloads\llmrag3
configfile: pytest.ini
testpaths: tests
plugins: anyio-4.9.0, langsmith-0.4.1
collected 31 items
tests\test_basic.py .... [ 12%]
tests\test_chunking.py ... [ 22%]
tests\test_components.py ....... [ 45%]
tests\test_embedder.py . [ 48%]
tests\test_ingestion.py . [ 51%]
tests\test_pipeline.py ... [ 61%]
tests\test_simple.py ..... [ 77%]
tests\test_smoke.py ..... [ 93%]
tests\test_vector_store.py .. [100%]
================ 31 passed in 42.54s ================
TESTS SUCCESSFUL
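If you want to iterate on just the layers flagged above without the full 42 s run, pytest can be pointed at the individual files from the collection output, e.g.:

pytest tests/test_chunking.py tests/test_ingestion.py -v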