Ingestion of a very large number of files #1880
Unanswered
terrornoize
asked this question in
Q&A
Replies: 2 comments
-
I loaded 15,000 small files with metadata in them. There is a script that can be run to watch a folder for files. It took about 1 s per file, or roughly 4 hours to ingest everything.
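A folder watcher of the sort described above can be sketched with simple polling. This is a minimal illustration, not the project's actual script; `ingest` is a hypothetical placeholder for whatever ingestion call the tool exposes:

```python
import time
from pathlib import Path


def ingest(path: Path) -> None:
    # Hypothetical placeholder -- swap in the tool's real ingestion call.
    print(f"ingested {path.name}")


def watch_folder(folder: str, seen: set, passes: int = 1,
                 interval: float = 1.0) -> int:
    """Poll `folder`, ingesting files not seen before.

    Returns the number of files ingested across all passes.
    """
    root = Path(folder)
    ingested = 0
    for i in range(passes):
        for path in sorted(root.glob("*")):
            if path.is_file() and path not in seen:
                ingest(path)
                seen.add(path)
                ingested += 1
        if i < passes - 1:
            time.sleep(interval)  # wait before re-scanning the folder
    return ingested
```

At ~1 s per file, the per-file ingestion call dominates the runtime, so batching files per pass (rather than the polling itself) is where the 4-hour figure comes from.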
-
I tried asking targeted questions, and with 5k documents it didn't return the correct contents; even the links to the references weren't the right ones. In some cases it gets the answer right, in others it doesn't.
-
I need to ingest a very large number of small files (they are all financial press releases, around 70-80k elements).
I would like to understand whether this tool can help me find information in these documents, whether mistral7b is the right model for this type of work, or whether another tool/model would be better.
Do you have any advice?