Multilingual support #1699
Comments
Hi @decadance-dance 👋, have you already tried: It depends a bit on whether there is any data from mindee we could use.
Hi, @felixdittrich92
Ah, let's keep this issue open, there is more to do I think :)
Happy about any feedback on how it works for you :)
Unfortunately, we don't have such data
@decadance-dance In general we would need the help of the community to collect documents (newspapers, receipt photos, etc.) in diverse languages (can be unlabeled). This would need a license to sign so that we can freely use this data. But not sure how to trigger such an "event" 😅 @odulcy-mindee
Hello =)
Moreover, it could be interesting for Chinese detection models to combine multiple recognition samples in the same image without overlap (as sketched below). This should help a Chinese detection model perform better without real detection data.
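A minimal sketch of that composition idea (not an official docTR utility; PIL and pre-cropped word images are assumed, and the naive row-packing is just for illustration):

```python
# Hypothetical sketch: paste recognition crops onto a blank page without
# overlap and keep their boxes as synthetic detection labels.
from PIL import Image


def compose_page(crops, page_size=(1024, 1448), margin=16):
    """crops: list of PIL.Image word/line crops. Returns (page, boxes)."""
    page = Image.new("RGB", page_size, "white")
    boxes, x, y, row_h = [], margin, margin, 0
    for crop in crops:
        w, h = crop.size
        if x + w + margin > page_size[0]:   # row is full -> start a new one
            x, y, row_h = margin, y + row_h + margin, 0
        if y + h + margin > page_size[1]:   # page is full -> stop
            break
        page.paste(crop, (x, y))
        boxes.append((x, y, x + w, y + h))  # absolute pixel coordinates
        x, row_h = x + w + margin, max(row_h, h)
    return page, boxes
```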
Hi @nikokks 😃 Collecting multilingual data for detection is troublesome because it should be real data (or, if possible, really good generated data / for example with a fine-tuned FLUX model maybe!?)
Can you estimate how much data we need to provide multilingual capabilities on the same level as English-only OCR?
Hi @decadance-dance 👋, I think if we could collect ~100-150 different types of documents for each language we would have a good starting point (in the end the language doesn't matter, it's more about the different char sets / fonts / text sizes) - for example:
In the end it's more critical to take care that we really can use such images legally. The tricky part is the detection because we need completely real data .. if we have this it should be much easier; for the recognition part we could create some synth data and eval on the already collected real data.
I think if we are able to collect the data up to the end of January I could provide pre-labeling via Azure's Document AI. Currently missing parts are:
Lang list: https://github.com/eymenefealtun/all-words-in-all-languages
@felixdittrich92, thank you for the detailed answer.
@decadance-dance Not yet .. maybe the easiest would be to create a Hugging Face Space for this, because from there you could also easily take pictures with your smartphone, and under the hood we push the taken or uploaded images into an HF dataset. In this case we could also add an agreement before any data can be uploaded, so that the person who uploads agrees that they have all rights on the image and uploads it knowing that the images are provided openly to everyone who downloads the dataset. Wdyt? Again CC @odulcy-mindee :D
I found one possible dataset for printed documents in multiple languages: Wikisource. They have text and images at the page level, originally created using some existing OCR (Google Vision/Tesseract), and the data has then been corrected/proofread by people. They have annotations to differentiate what has been proofread and what has not. An example - https://te.wikisource.org/wiki/పుట%3AAandhrakavula-charitramu.pdf/439. The license would be CC-BY-SA and I am expecting them to only have pulled books for which copyright has expired. Collecting fonts for various languages is a bigger problem though (because of licenses).
Thanks @ramSeraph for sharing, I will have a look 👍 @decadance-dance @nikokks I created a Space which can be used to collect some data (only raw data for starting), wdyt? https://huggingface.co/spaces/Felix92/docTR-multilingual-Datacollector Later on, if we say we have collected enough raw data, we can filter the data and pre-label with Azure Document AI.
Sounds good to me. Thanks
@decadance-dance @nikokks @ramSeraph and all others: I created a request to the mindee team to provide support on this task. Would be nice if you could write a comment in the thread about your needs to support this 🙏
The first stage would be to improve the detection models; for the second stage, the recognition part, we could generate additional synthetic data.
Short update here: I collected ~30k samples containing:
Now I need to find a way to annotate all this data - AWS Textract & Azure Document AI failed as possible useful pre-labeling solutions. The best results were reached with docTR/OnnxTR (detection only) - but there are still too many issues to include it directly into our dataset for pretraining.
Why did they fail?
Detection results were really poor for many samples
What do you think: which way of generating synthetic word text is more beneficial?
How did you evaluate them? As I understand it, your data is not annotated yet.
Maybe EasyOCR will work for you?
I would go with option b and augment a fixed part of this data (words) with low-frequency characters (like the % symbol). I did the same to train the multilingual parseq model :)
I think the only option is to label a part of the data manually -> fine-tune -> pre-label -> correct, and repeat in an iterative process 🙈😅 (really time consuming)
I had an idea that could help speed things up when dealing with documents. What if there were a database of text-selectable PDFs or other documents (DOCX, PPTX) in the desired languages? Then you could extract the text with certainty, convert the PDF into the desired image format with the required resolution/DPI, adjust the bounding boxes according to the resolution and text, and voilà (see the sketch below). I have around 80k selectable documents in Brazilian Portuguese (Latin script) and can start testing to see if this works.
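A minimal sketch of that extraction idea, assuming PyMuPDF (fitz) and a text-selectable PDF; the file name and DPI are placeholders, and this is not an official docTR pipeline:

```python
# Render each page to an image and scale the embedded word boxes from PDF
# points (72 per inch) to the rendered resolution.
import fitz  # PyMuPDF

DPI = 300
SCALE = DPI / 72  # PDF coordinates are expressed in points

doc = fitz.open("document.pdf")  # placeholder path
for page_idx, page in enumerate(doc):
    pix = page.get_pixmap(matrix=fitz.Matrix(SCALE, SCALE))
    pix.save(f"page_{page_idx}.png")
    # get_text("words") yields (x0, y0, x1, y1, word, block_no, line_no, word_no)
    labels = [
        (w[4], int(w[0] * SCALE), int(w[1] * SCALE), int(w[2] * SCALE), int(w[3] * SCALE))
        for w in page.get_text("words")
    ]
    # `labels` now holds (text, x0, y0, x1, y1) aligned with the rendered image
```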
Hey @murilosimao 👋, yep, sounds great, feel free to update here if you have some results 👍 I will (hopefully soon) also discuss a strategy with @sebastianMindee
Hi, I have a question regarding how exactly the multilingual support will be implemented. Other solutions currently have a different model for each script and no way to detect the script beforehand, so you need to know which script you're OCR'ing. Will the multilingual models simply support all supported scripts/languages, or will they also be split? If so, would you consider also training a script detector?
Hi @cyanic-selkie 👋, We have planned to train unique multilingual models; currently we decided to go with 3 stages for this:
Currently we are working on getting everything together - fonts, complete vocabs, wordlists - to generate enough synth data with a mindee-internal tool. This will slightly slow down the inference latency - but I think it's much more user friendly and a benefit, especially for multilingual documents. Later on, to control this, the vision is to implement a kind of blacklisting/whitelisting under the hood (one possible approach is sketched after this comment):

```python
from doctr.models import ocr_predictor
from doctr.datasets import VOCABS

model = ocr_predictor(pretrained=True, whitelist=VOCABS["russian"] + VOCABS["german"])
```

I already tried some possible ways (#1876 (comment)) but will have a look with @SiddhantBahuguna again
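A minimal sketch of one way such whitelisting could work under the hood (an assumption for illustration, not the docTR implementation): mask the recognition head's logits for characters outside the allowed set before decoding.

```python
import torch


def mask_to_whitelist(logits: torch.Tensor, vocab: str, whitelist: str) -> torch.Tensor:
    """logits: (batch, seq_len, len(vocab) + 1); the extra slot (EOS/blank) is always kept."""
    allowed = torch.tensor([c in whitelist for c in vocab] + [True], dtype=torch.bool)
    # Characters outside the whitelist can never win the argmax after this mask.
    return logits.masked_fill(~allowed, float("-inf"))
```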
Everyone is btw invited to help us with the vocab completion in #1883 😄
A latin extended model can already be found here: That was started as an experiment on my own but people seem to like it 😅
Are you saying that the script detection (i.e., the model selection between the 3) will be done on the fly, and hence the inference will be slower, or?
Regarding the dictionaries, I've noticed that the linked repository (https://github.com/eymenefealtun/all-words-in-all-languages) is really not good. I can't speak for many other languages, but the Croatian one is extremely poor. Since many other languages have about the same number of words, I'm guessing it's the same situation. May I suggest using Hunspell dictionaries? They're used everywhere for autocorrect (they can easily be extracted from LibreOffice, for example, if one is missing or outdated in the available GitHub repos). A simple script could be written that generates all possible words given the dictionary and the form rules (there is the
For example, given the Croatian
Gives me all of these words:
One issue with this is that it's all lowercased; is this a problem for the way you do OCR? I could submit a PR with the script and then run it for all supported languages. I don't know what the source for the existing languages is, but I'd wager this would be higher quality. Also, for languages that don't have an available Hunspell dictionary, fineweb-2 was recently released. Much care was given to the quality of the data, particularly deduplication, which is very nice for frequency-thresholding words. Also, it's available in 1000+ languages (although many of them have very little data), so theoretically you could support quite a lot of languages, albeit with a somewhat dirtier dataset.
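A minimal sketch of the dictionary expansion suggested above, assuming the `unmunch` tool shipped with hunspell is installed; the hr_HR file names are placeholders:

```python
# Expand a Hunspell .dic/.aff pair into a flat wordlist via `unmunch`.
import subprocess


def expand_hunspell(dic_path: str, aff_path: str, out_path: str) -> None:
    result = subprocess.run(["unmunch", dic_path, aff_path], capture_output=True, check=True)
    with open(out_path, "wb") as f:  # keep the dictionary's own encoding
        f.write(result.stdout)


expand_hunspell("hr_HR.dic", "hr_HR.aff", "hr_words.txt")
```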
@cyanic-selkie About the inference latency: it depends on the solution we have to find for the black-/whitelisting - but I don't expect a high latency increase. Mh no, this should be fine, thanks for sharing; we could randomly uppercase the first letter and augment with under-represented chars 👍 (a small sketch follows below). Would you like to add the Croatian vocab to our predefined ones? :) See #1883
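A small sketch of that augmentation idea (the probabilities and the rare-character set are assumptions for illustration, not values used for the released models):

```python
import random

RARE_CHARS = "%&§€#"  # assumed set of under-represented characters


def augment(word: str, upper_p: float = 0.3, rare_p: float = 0.1) -> str:
    # Randomly uppercase the first letter and/or append a rare character.
    if random.random() < upper_p:
        word = word[:1].upper() + word[1:]
    if random.random() < rare_p:
        word = word + random.choice(RARE_CHARS)
    return word


print([augment(w) for w in ["kuća", "riječ", "grad"]])
```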
CC @sebastianMindee That's a really good point; if required we could extract word lists / vocabs from the fineweb dataset
Yeah, but I'm saying that if I want to seamlessly support all three (CJK + Arabic/Hindi + others), I would have to detect the script beforehand.
Why? 😅
I'm so confused right now, sorry 😢 After rereading your original post I noticed you said "3 stages", not "3 models", but you also said "unique multilingual models". So, the clarification I need here is: will a single recognition model be able to handle all scripts/languages (i.e., be trained on all scripts/languages), and are the "stages" you mention referring to how the support for different scripts will grow over time for that single multilingual model?
Yes, I'll make a PR for Croatian and the other BCMS languages at the very least.
Correct. With "models" I meant that we have different architectures, for example: parseq, vitstr, crnn, etc. ^^
@cyanic-selkie I created a HF Space which can be used to upload wordlists & fonts. We will filter out the data we need later on.
@felixdittrich92 I had some issues with rate limits for whatever reason, but I managed to upload the Croatian wordlist. I didn't realize the naming convention was
Yeah 😅 I am uploading lots of fonts atm, so the rate limit will be at its limit over the next days - yeah, that's fine, thanks a lot 👍
I also updated the list in #1883, so if anyone can help here this would be awesome 🙏
Short update here: I cracked it, a first multilingual experimental model was trained for the following languages:
Mostly no confusion. Next experiments are starting soon, including:
@felixdittrich92 Awesome! What are some ways I can further contribute to the multilingual effort? What do you expect the timeline for a production-ready model to be? On an unrelated note, is there any support, or do you plan to add support, for quantization-aware training followed by ONNX export? I've had great success using this workflow with some CNNs before, running in int8 on edge devices with the XNNPACK execution provider. I could also contribute in those areas if you're interested.
Hey @cyanic-selkie, To further improve robustness, we need to explore a way to implement "whitelisting" for character constraints. It would be best to discuss the details on Slack or LinkedIn:
Future Plan
We're considering adding a whitelist parameter to the predictor:

```python
ocr_predictor(pretrained=True, whitelist=VOCABS["german"] + VOCABS["hebrew"] + "ABc")
```

This would require introducing a
Ideally, we aim to finalize this feature - including pretrained recognition models for all architectures - by fall/winter. That timeline will depend a bit on the support from @SiddhantBahuguna and @sebastianMindee.
On Quantization-Aware Training
It would also be great to integrate QAT into our training scripts! And I still need fonts for the
🚀 The feature
Support of multiple languages (i.e., a corresponding VOCABS["multilingual"]) by pretrained models.
Motivation, pitch
It would be great to use models which support multiple languages because it would significantly improve the user experience in various cases.
Alternatives
No response
Additional context
No response