I really like the idea, but unfortunately it couldn't cope with my use case.
I have some lecture slides as image-only PDFs (Hungarian, with a sprinkling of English and Latin biology terms). I tried the tool on them and had the following experience:
- Proofreading with the overlay seems like a good idea, but in practice it is unusable when the original text is colored and you need to recognize diacritics. Being able to show the original in grayscale or black & white would help. (B&W worked, but grayscale left everything colored.)
- For proofreading, the ebook mode was the most useful: I immediately spotted lots of errors that I could not see with the overlay. A quick way to switch between the two modes would be useful.
- Editing text is not efficient when the error rate is high (Hungarian is not supported, which I suspect is the main cause); the interface has too much overhead for mass corrections.
Very good idea; I think with a little polish it would fit even my use case. For more traditional OCR use cases than mine, it is probably already great.
According to the documentation, it uses Tesseract underneath. I used Tesseract v3 in the past and it was painful. Tesseract 4 uses an LSTM neural net. How good are the recognition quality and performance in v4 nowadays? Could anyone share their experience?
I use paperless-ngx for digitizing all my documents; it also uses Tesseract. The results are not perfect, but more than acceptable if I scan at 600 dpi.
This is my first encounter with Scribe.js; since I have many book scans, I always try OCRing them when something like this comes up. Compared to Tesseract (which is the best I have so far), it gets slightly more words right, but the paragraph segmentation is many times worse. On a book where every paragraph is indented, it reliably decides that two consecutive one-line paragraphs are the same paragraph, which is understandable, but a downgrade from Tesseract, which gets the paragraph segmentation as correct as possible. (It doesn't handle paragraphs that span page breaks, since I'm feeding it one page at a time.)
Scribe is Tesseract: it uses tesseract.js, a WebAssembly port of Tesseract, so in theory they should be equal. In practice, custom settings or older versions could make a difference.
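For reference, here is a minimal sketch of calling tesseract.js directly (the v5-style API; 'hun' and page.png are placeholders for your own language and scan):

    import { createWorker } from 'tesseract.js';

    (async () => {
      // Fetches the Hungarian traineddata ('hun') on demand.
      const worker = await createWorker('hun');
      // page.png is a placeholder for one scanned page image.
      const { data } = await worker.recognize('page.png');
      console.log(data.text);
      await worker.terminate();
    })();

Running the same image through both Scribe and this should show whether any difference comes from settings rather than the engine.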
This is only true in the "speed" mode; in the "quality" mode it claims better word recognition than Tesseract on clean scans (which matches my tests): https://github.com/scribeocr/scribe.js/blob/master/docs/scri...
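For comparison, the basic Scribe entry point looks roughly like this; this is a sketch from memory of the project README (the 'scribe.js-ocr' package name and the extractText call are assumptions, and I haven't verified which option selects the "quality" engine):

    import scribe from 'scribe.js-ocr';

    // Sketch, assuming the extractText entry point from the README;
    // it takes an array of image/PDF sources and resolves to the text.
    scribe.extractText(['page.png'])
      .then((text) => console.log(text));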
What's the motivation for doing this in the browser? It seems like intentionally choosing a more difficult path to create an inferior result.
A native macOS or Windows application could use the operating system's OCR facilities, which in my experience both produce results far better than Tesseract's.
Generating the OCR on the fly, in the browser, when you don't have proper OCR data available. As someone who works on public web libraries, I see it as useful (but wasteful).
> Tesseract (which is the best I have so far)
Have you looked at EasyOCR?
EasyOCR is significantly worse than Tesseract for clean printed text, while being orders of magnitude slower; it is far better than Tesseract for low-quality scans and for extracting text from pictures (e.g. comics), which Tesseract does not handle well.
Have you tried Abbyy FineReader? It's the best OCR package I've seen.
If only it would generate ALTO XML files... IF!
This is awesome. The only issue was that I had to disable my JShelter extension, because it would freeze the page at 100% CPU forever.
Anyone looking for an OCR or text pre-processor that maintains layout (tables, forms): try LLMWhisperer > https://pg.llmwhisperer.unstract.com/