Indexing API not reducing ingestion time with OpenSearch for exactly the same input file #27955
arjunsohanlal asked this question in Q&A (Unanswered)
Checked other resources
Commit to Help
Example Code
Description
I've used the same code for LangChain's Indexing API with Weaviate and PGVector, where the feature works as expected. For example, ingesting 2185 documents for the first time with cleanup='full' takes about 10 minutes; after modifying a single character, re-ingestion with cleanup='incremental' takes only a few seconds.

I tried the same with OpenSearch, where the first ingestion took around 700 seconds and re-ingesting exactly the same data took about 590 seconds. Is there any reason why this might be happening?
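For reference, a minimal sketch of the kind of setup in question, assuming the langchain_community OpenSearchVectorSearch integration and a SQLRecordManager; the URL, index name, embedding model, and docs variable below are placeholders rather than my exact configuration:

```python
# Minimal sketch (assumed setup, not the exact code from this report):
# wiring LangChain's indexing API to OpenSearch with a SQLRecordManager.
from langchain.indexes import SQLRecordManager, index
from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()  # placeholder embedding model

vector_store = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",  # placeholder
    index_name="docs-index",                 # placeholder
    embedding_function=embeddings,
)

# The record manager tracks document hashes so unchanged docs can be skipped
# on later runs.
record_manager = SQLRecordManager(
    namespace="opensearch/docs-index",
    db_url="sqlite:///record_manager_cache.sql",
)
record_manager.create_schema()

# First run uses cleanup="full"; subsequent runs with cleanup="incremental"
# should skip unchanged documents, but with OpenSearch they still take ~590 s.
result = index(
    docs,  # the same 2185 Document objects on every run (placeholder variable)
    record_manager,
    vector_store,
    cleanup="incremental",
    source_id_key="source",
)
print(result)  # reports num_added / num_updated / num_skipped / num_deleted
```

With Weaviate and PGVector, the second run reports almost everything as num_skipped, which is what makes it fast; the question is why the same pattern does not reduce the runtime against OpenSearch.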
System Info
LangChain package versions:
Platform: Windows
Python Version: 3.12.4