PyTesseract not recognizing decimals - python

This is not truly a duplicate of How to extract decimal in image with Pytesseract, as those answers did not solve my problem and my use case is different.
I'm using PyTesseract to recognise text in table cells. When it comes to recognising drug doses with decimal points, the OCR fails to recognise the ".", though it is accurate for everything else. I'm using tesseract v5.0.0-alpha.20200328 on Windows 10.
My pre-processing consists of upscaling by 400% using cubic interpolation, conversion to black and white, dilation and erosion, morphology, and blurring. I've tried a decent number of combinations of these (as well as each on its own), and nothing has recognised the ".".
I've tried various --psm values as well as a character whitelist. I believe the font is Segoe UI.
Before processing:
After processing:
PyTesseract output: 25mg »p
Processing code:
import cv2
import pytesseract
import numpy as np

image = cv2.imread('01.png')
# Upscale by 400% using cubic interpolation
upscaled_image = cv2.resize(image, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)
# Convert to grayscale
bw_image = cv2.cvtColor(upscaled_image, cv2.COLOR_BGR2GRAY)
# Dilate, then erode, to close small gaps in the glyphs
kernel = np.ones((2, 2), np.uint8)
dilated_image = cv2.dilate(bw_image, kernel, iterations=1)
eroded_image = cv2.erode(dilated_image, kernel, iterations=1)
# Binarize, then close small holes
thresh = cv2.threshold(eroded_image, 205, 255, cv2.THRESH_BINARY)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
morph_image = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# Smooth edges with a bilateral filter, then re-binarize with Otsu
blur_image = cv2.threshold(cv2.bilateralFilter(morph_image, 5, 75, 75), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
final_image = blur_image
# --psm 10: treat the image as a single character
text = pytesseract.image_to_string(final_image, lang='eng', config='--psm 10')
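One thing worth trying for a glyph that sits close to the crop edge (a trailing "." in a tight table cell, for instance) is padding the binarized cell with a background-coloured border before OCR, and using a line-level segmentation mode. A minimal sketch, assuming the final_image from above; the padding size, --psm choice, and whitelist are guesses to experiment with, not a verified fix:
import cv2
import pytesseract

# Pad the cell so no glyph touches the crop edge; tesseract tends
# to drop marks that touch the image border.
padded = cv2.copyMakeBorder(final_image, 10, 10, 10, 10,
                            cv2.BORDER_CONSTANT, value=255)  # white background

# --psm 7: treat the image as a single text line; the whitelist keeps the "."
config = '--psm 7 -c tessedit_char_whitelist=0123456789.mg'
text = pytesseract.image_to_string(padded, lang='eng', config=config)
print(text)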

If you haven't already ruled this out, check out this thread:
https://groups.google.com/g/tesseract-ocr/c/Wdh_JJwnw94/m/xk2ErJnFBQAJ
One major fix for many problems is text height. I was facing many issues and couldn't figure out why, but it seems that sending tesseract an image with correctly sized letters solves a lot of them.
Instead of upscaling by an arbitrary percentage, try the factor that brings your letters to roughly 30-40 px in height (see the sketch below).
Also, if your preprocessing turns the "." into something that looks like noise, it will get ignored.
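A minimal sketch of that idea, assuming a binarized image with dark text on a light background; the contour-based height estimate is a rough heuristic, not part of any tesseract API:
import cv2
import numpy as np

img = cv2.imread('01.png', cv2.IMREAD_GRAYSCALE)

# Estimate the current letter height from bounding boxes of dark blobs
binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
heights = [cv2.boundingRect(c)[3] for c in contours if cv2.boundingRect(c)[3] > 3]
letter_height = np.median(heights)  # assumes at least one text blob was found

# Scale so letters land at ~35 px instead of an arbitrary 400%
target = 35.0
scale = target / letter_height
resized = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)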

I had a similar case and was able to increase the number of correctly read decimals through image processing and upscaling. Still, a small share of the decimals were not recognized correctly.
The solution I found was to change the language setting for pytesseract: I was using a non-English setting, but switching to lang='eng' fixed all remaining issues.
That might not help with the original question, though, as the setting there is already eng.

Related

Pytesseract fails to extract digits from a fairly clear image

I've been trying to extract digits from images in order to automate a process, using Pytesseract in Google Colab. I've run the image through a series of preprocessing steps to make it clearer, including gray-scaling, Gaussian blur, cropping, thresholding and a perspective transform, but none of these seem to improve the result and I still get an empty string from the image.
I've also tried dilation and erosion, but I figured too much preprocessing might actually downgrade the quality so I dropped it. I've specified in the config of pytesseract what it should be looking for using r'--oem 3 --psm 6 outputbase digits'
Here is the preprocessed image,
and the code I used to process it and extract the text:
import cv2
import numpy as np
import pytesseract

img = cv2.imread('image.png')  # illustrative path; the question's image

# Grayscale, Otsu threshold, then a light Gaussian blur
img_prepared = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(img_prepared, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
img_prepared = cv2.GaussianBlur(thresh, (3, 3), 5)

# Perspective transform to straighten the digits
srcpoints = np.float32([[100, 0], [375, 190], [50, 180], [400, 0]])
destpoints = np.float32([[75, 0], [375, 190], [50, 180], [375, 0]])
matrix = cv2.getPerspectiveTransform(srcpoints, destpoints)
imgprep = cv2.warpPerspective(img_prepared, matrix, (400, 200))

# ...and the extraction
options = r'--oem 3 --psm 6 outputbase digits'
s = pytesseract.image_to_string(imgprep, config=options).strip()
Is it a problem with the image (can Tesseract not recognize this type of digit?), or is it in the code? Thank you in advance!
Edit: image_resized
I've also tried this resized image, and it correctly identifies 9205 but fails with 8.00.
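One variant worth trying (a sketch, not a verified fix for this exact image): replace the outputbase digits config with an explicit whitelist that includes the decimal point, plus a line-oriented segmentation mode. imgprep is the preprocessed image from above:
import pytesseract

# Explicitly whitelist digits plus the decimal point;
# --psm 7 assumes a single line of text.
options = r'--oem 3 --psm 7 -c tessedit_char_whitelist=0123456789.'
s = pytesseract.image_to_string(imgprep, config=options).strip()
print(s)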

How to OCR a text with white colour characters on a blue background from a cropped image?

First, I want to crop an image using a mouse event, and then print the text inside the cropped image. I tried several OCR scripts, but none of them work for the image attached below. I think the reason is that the text is white characters on a blue background.
Can you help me with this?
Full image:
Cropped image:
An example of what I tried:
import pytesseract
import cv2
import numpy as np

pytesseract.pytesseract.tesseract_cmd = 'C:\\Program Files\\Tesseract-OCR\\tesseract.exe'

img = cv2.imread('D:/frame/time 0_03_.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
adaptiveThresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 35, 30)
inverted_bin = cv2.bitwise_not(adaptiveThresh)

# Some noise reduction
kernel = np.ones((2, 2), np.uint8)
processed_img = cv2.erode(inverted_bin, kernel, iterations=1)
processed_img = cv2.dilate(processed_img, kernel, iterations=1)

# Applying the image_to_string method
text = pytesseract.image_to_string(processed_img)
print(text)
[EDIT]
For anyone wondering: the image in the question was updated after I posted my answer. This was the original image:
Hence the output below in my original answer.
This is the newly posted image:
The specific Turkish characters, especially in the last word, are still not properly detected (since I still can't use lang='tur' right now), but at least the Ö and Ü can be detected using lang='deu', which I have installed:
text = pytesseract.image_to_string(mask, lang='deu').strip().replace('\n', '').replace('\f', '')
print(text)
# GÖKYÜZÜ AVCILARI ILE TEKE TEK KLASIGI
[/EDIT]
I wouldn't use cv2.adaptiveThreshold here, but a simple cv2.threshold using cv2.THRESH_OTSU + cv2.THRESH_BINARY_INV. Since the comma touches the image border, I'd add another, one pixel wide border via cv2.copyMakeBorder to capture the comma properly. So, this would be the full code (replacing \f is due to my pytesseract version only):
import cv2
import pytesseract
img = cv2.imread('n7nET.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
mask = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY_INV)[1]
mask = cv2.copyMakeBorder(mask, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)
text = pytesseract.image_to_string(mask).strip().replace('\n', '').replace('\f', '')
print(text)
# 2020'DE SALGINI BILDILER, YA 2021'DE?
The output seems correct to me, except of course for the special (I assume Turkish) capital I with a dot above (İ). Unfortunately, I can't run pytesseract.image_to_string(..., lang='tur'), since that language pack simply isn't installed here. Maybe have a look at installing it to get the proper characters here as well.
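A quick way to check which language packs your tesseract install can actually see (get_languages is available in recent pytesseract versions; this is just a convenience check, not part of the fix above):
import pytesseract

# List the traineddata languages tesseract can see;
# 'tur' must appear here before lang='tur' will work.
print(pytesseract.get_languages(config=''))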
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
PyCharm: 2021.1.1
OpenCV: 4.5.1
pytesseract: 5.0.0-alpha.20201127
----------------------------------------

Pytesseract doesn't recognize decimal points

I'm trying to read the text in this image, which also contains decimal points and decimal numbers, in this way:
import cv2
import pytesseract

img = cv2.imread(path_to_image)
print(pytesseract.image_to_string(img))
and what I get is:
73-82
Primo: 50 —
I've also tried specifying the Italian language, but the result is quite similar:
73-82 _
Primo: 50
Searching through other questions on Stack Overflow, I found that the reading of decimal numbers can be improved with a whitelist, in this case tessedit_char_whitelist='0123456789.', but I want to read the words in the image as well. Any idea how to improve the reading of decimal numbers?
I would suggest passing tesseract every row of text as a separate image.
For some reason it seems to solve the decimal point issue...
Convert the image from grayscale to black and white using cv2.threshold.
Use the cv2.dilate morphological operation with a very long horizontal kernel (merging text blocks across the horizontal direction).
Use find contours - each merged row is going to be in a separate contour.
Find the bounding boxes of the contours.
Sort the bounding boxes according to the y coordinate.
Iterate the bounding boxes, and pass the slices to pytesseract.
Here is the code:
import numpy as np
import cv2
import pytesseract

pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'  # I am using Windows

path_to_image = 'image.png'
img = cv2.imread(path_to_image, cv2.IMREAD_GRAYSCALE)  # Read input image as grayscale

# Convert to binary using automatic threshold (use cv2.THRESH_OTSU)
ret, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Dilate thresh for uniting text areas into blocks of rows
dilated_thresh = cv2.dilate(thresh, np.ones((3, 100)))

# Find contours on dilated_thresh
cnts = cv2.findContours(dilated_thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]  # Use index [-2] to be compatible with OpenCV 3 and 4

# Build a list of bounding boxes
bounding_boxes = [cv2.boundingRect(c) for c in cnts]

# Sort bounding boxes from "top to bottom"
bounding_boxes = sorted(bounding_boxes, key=lambda b: b[1])

# Iterate bounding boxes
for b in bounding_boxes:
    x, y, w, h = b

    if (h > 10) and (w > 10):
        # Crop a slice, and invert black and white (tesseract prefers black text)
        slice = 255 - thresh[max(y-10, 0):min(y+h+10, thresh.shape[0]), max(x-10, 0):min(x+w+10, thresh.shape[1])]

        text = pytesseract.image_to_string(
            slice,
            config='-c tessedit_char_whitelist=abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-:. --psm 3')

        print(text)
I know it's not the most general solution, but it manages to solve the sample you have posted.
Please treat the answer as a conceptual solution - finding a robust solution might be very challenging.
Results:
Thresholded image after dilate:
First slice:
Second slice:
Third slice:
Output text:
7.3-8.2
Primo:50
You can recognize the text easily by down-sampling the image.
If you down-sample by a factor of 0.5, the result will be:
Now if you read it:
7.3 - 8.2
Primo: 50
I got this result using pytesseract version 0.3.7 (the current version at the time of writing).
Code:
# Load the libraries
import cv2
import pytesseract
# Load the image
img = cv2.imread("s9edQ.png")
# Convert to the gray-scale
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Down-sample
gry = cv2.resize(gry, (0, 0), fx=0.5, fy=0.5)
# OCR
txt = pytesseract.image_to_string(gry)
print(txt)
Explanation:
The input image contains a little bit of an artifact; you can see it on the right part of the image. On the other hand, the down-sampled image is perfect for OCR recognition. You only need heavier pre-processing when the data in the image is not visible or is corrupted. Please read the following:
Image processing
Page-segmentation-mode

How to extract dotted text from image?

I'm working on my bachelor's degree final project and I want to create an OCR for bottle inspection with python. I need some help with text recognition from the image. Do I need to apply the cv2 operations in a better way, train tesseract or should I try another method?
I tried image processing operations on the image and I used pytesseract to recognize the characters.
Using the code below, I got from this photo:
to this one:
and then to this one:
Sharpen function:
import imgaug.augmenters as iaa

def sharpen(img):
    sharpen = iaa.Sharpen(alpha=1.0, lightness=1.0)
    sharpen_img = sharpen.augment_image(img)
    return sharpen_img
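If you'd rather not depend on imgaug, a comparable sharpen can be done in plain OpenCV with a classic sharpening kernel (a sketch, not an exact equivalent of iaa.Sharpen):
import cv2
import numpy as np

def sharpen_cv(img):
    # 3x3 sharpening kernel: boosts the centre pixel,
    # subtracts the four neighbours.
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=np.float32)
    return cv2.filter2D(img, -1, kernel)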
Image processing code:
textZone = cv2.pyrUp(sharpen(originalImage[y:y + h - 1, x:x + w - 1])) #text zone cropped from the original image
sharp = cv2.cvtColor(textZone, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(sharp, 127, 255, cv2.THRESH_BINARY)
# The morphological operations appear inverted because OpenCV treats white pixels as foreground, and here the text is black on white; that's why opening is done with the MORPH_CLOSE parameter, dilation with erode, and so on
kernel_open = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
open = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel_open)
kernel_dilate = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,7))
dilate = cv2.erode(open,kernel_dilate)
kernel_close = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 5))
close = cv2.morphologyEx(dilate, cv2.MORPH_OPEN, kernel_close)
print(pytesseract.image_to_string(close))
This is the result of pytesseract.image_to_string:
22203;?!)
92:53 a
The expected result is :
22/03/20
02:53 A
"Do I need to apply the cv2 operations in a better way, train tesseract or should I try another method?"
First, kudos for taking this project on and getting this far with it. What you have from the OpenCV/cv2 standpoint looks pretty good.
Now, if you're thinking of Tesseract to carry you the rest of the way, at the very least you'll have to train it. Here you have a tough choice: Invest in training Tesseract, or work up a CNN to recognize a limited alphabet. If you have a way to segment the image, I'd be tempted to go with the latter.
From the result you got and the expected result, you can see that some of the characters are recognized correctly. Assuming you are using a different image from the one shown in the tutorial, I recommend changing the values passed to threshold and getStructuringElement.
These values work better or worse depending on the image's colours; the tutorial author must have optimized them for their own use (by trial and error or some other way).
Here is a video if you want to play around with those values using sliders in OpenCV (see the sketch below for the idea). You can also print your result in the same loop to see whether you are getting the desired output.
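A minimal sketch of that slider idea (the window name, trackbar ranges, and the file path are all made up for illustration):
import cv2

gray = cv2.imread('1.PNG', cv2.IMREAD_GRAYSCALE)
cv2.namedWindow('tune')
cv2.createTrackbar('thresh', 'tune', 115, 255, lambda v: None)
cv2.createTrackbar('ksize', 'tune', 3, 15, lambda v: None)

while True:
    # Re-threshold and re-close with whatever the sliders currently say
    t = cv2.getTrackbarPos('thresh', 'tune')
    k = max(1, cv2.getTrackbarPos('ksize', 'tune'))
    binary = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)[1]
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (k, k))
    cv2.imshow('tune', cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel))
    if cv2.waitKey(50) == 27:  # Esc to quit
        break
cv2.destroyAllWindows()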
One potential improvement is to dilate the characters so pytesseract gives a better result. Dilating the characters will connect the individual blobs together, which can fix the / or the A characters. So, starting with your latest binary image:
Original
Dilate with a 3x3 kernel with iterations=1 (left) or iterations=2 (right). You can experiment with other values, but don't overdo it or the characters will all connect. Maybe this will give a better result with your OCR.
import cv2
image = cv2.imread("1.PNG")
thresh = cv2.threshold(image, 115, 255, cv2.THRESH_BINARY_INV)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
dilate = cv2.dilate(thresh, kernel, iterations=1)
final = cv2.threshold(dilate, 115, 255, cv2.THRESH_BINARY_INV)[1]
cv2.imshow('image', image)
cv2.imshow('dilate', dilate)
cv2.imshow('final', final)
cv2.waitKey(0)
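To actually feed the dilated-and-reinverted result to tesseract, something along these lines should work (a sketch; final comes from the snippet above, and --psm 6 is an assumption to tune):
import pytesseract

# final is black text on white again, which tesseract prefers;
# --psm 6 assumes a uniform block of text.
print(pytesseract.image_to_string(final, config='--psm 6'))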

Preprocessing poorly scanned handwritten digits

I have a few thousand PDF files containing B&W images (1 bit) from digitized paper forms. I'm trying to OCR some fields, but sometimes the writing is too faint:
I've just learned about morphological transforms. They are really cool!!! I feel like I'm abusing them (like I did with regular expressions when I learned Perl).
I'm only interested in the date, 07-06-2017:
import cv2
from matplotlib import pyplot as plt

im = cv2.imread('date_field.png', cv2.IMREAD_GRAYSCALE)  # illustrative path
im = cv2.blur(im, (5, 5))
plt.imshow(im, 'gray')
ret, thresh = cv2.threshold(im, 250, 255, 0)
plt.imshow(~thresh, 'gray')
People filling this form seem to have some disregard for the grid, so I tried to get rid of it. I'm able to isolate the horizontal lines with this transform:
horizontal = cv2.morphologyEx(
    ~thresh,
    cv2.MORPH_OPEN,
    cv2.getStructuringElement(cv2.MORPH_RECT, (100, 1)),
)
plt.imshow(horizontal, 'gray')
I can get the vertical lines as well:
plt.imshow(horizontal ^ ~thresh, 'gray')
ret, thresh2 = cv2.threshold(roi, 127, 255, 0)  # roi: a crop of the original image (defined earlier)
vertical = cv2.morphologyEx(
    ~thresh2,
    cv2.MORPH_OPEN,
    cv2.getStructuringElement(cv2.MORPH_RECT, (2, 15)),
    iterations=2
)
vertical = cv2.morphologyEx(
    ~vertical,
    cv2.MORPH_ERODE,
    cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
)
horizontal = cv2.morphologyEx(
    ~horizontal,
    cv2.MORPH_ERODE,
    cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
)
plt.imshow(vertical & horizontal, 'gray')
Now I can get rid of the grid:
plt.imshow(horizontal & vertical & ~thresh, 'gray')
The best I got was this, but the 4 is still split into 2 pieces:
# im2: the de-gridded image from the step above
plt.imshow(cv2.morphologyEx(im2, cv2.MORPH_CLOSE,
           cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))), 'gray')
Probably at this point it is better to use cv2.findContours and some heuristic in order to locate each digit, but I was wondering:
should I give up and demand that all documents be rescanned in grayscale?
are there better methods for isolating and locating the faint digits?
do you know of any morphological transform that would join cases like the "4"?
[update]
"Is rescanning the documents too demanding? If it is no great trouble, I believe it is better to get higher-quality inputs than to train and refine your model to withstand noisy and atypical data."
A bit of context: I'm a nobody working at a public agency in Brazil. Prices for ICR solutions start in the six digits, so nobody believes a single guy can write an ICR solution in-house. I'm naive enough to believe I can prove them wrong. These PDF documents were sitting on an FTP server (about 100K files) and were scanned just to get rid of the dead-tree version. I could probably get the original forms and scan them again myself, but I would have to ask for some official support; since this is the public sector, I'd like to keep this project underground as much as I can. What I have now is an error rate of 50%, but if this approach is a dead end, there is no point trying to improve it.
Maybe with an Active Contour Model of some sort?
For example, I found this library: https://github.com/pmneila/morphsnakes
I took your final "4" image:
After some quick tweaking (without actually understanding the parameters, so it may be possible to get a better result), I got this:
with the following code (I also hacked morphsnakes.py a little to save the images):
import morphsnakes
import numpy as np
from scipy.misc import imread  # removed in SciPy >= 1.2; imageio.imread is a drop-in replacement
from matplotlib import pyplot as ppl


def circle_levelset(shape, center, sqradius, scalerow=1.0):
    """Build a binary function with a circle as the 0.5-levelset."""
    grid = np.mgrid[list(map(slice, shape))].T - center
    phi = sqradius - np.sqrt(np.sum((grid.T)**2, 0))
    u = np.float_(phi > 0)
    return u


#img = imread("testimages/mama07ORI.bmp")[..., 0] / 255.0
img = imread("four.png")[..., 0] / 255.0

# g(I): edge-stopping function
gI = morphsnakes.gborders(img, alpha=900, sigma=3.5)

# Morphological GAC. Initialization of the level-set.
mgac = morphsnakes.MorphGAC(gI, smoothing=1, threshold=0.29, balloon=-1)
mgac.levelset = circle_levelset(img.shape, (39, 39), 39)

# Visual evolution.
ppl.figure()
morphsnakes.evolve_visual(mgac, num_iters=50, background=img)
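The snippet above targets the older morphsnakes.py from that repo. For reference, scikit-image now ships the same family of algorithms; a rough equivalent might look like the sketch below (parameter values carried over from above, not re-tuned, and the file path is illustrative):
import imageio
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

img = imageio.imread('four.png')[..., 0] / 255.0

# Edge-stopping function, analogous to morphsnakes.gborders
gimg = inverse_gaussian_gradient(img, alpha=900, sigma=3.5)

# Circular initial level set centred at (39, 39) with radius 39
rows, cols = np.mgrid[:img.shape[0], :img.shape[1]]
init = ((rows - 39)**2 + (cols - 39)**2 < 39**2).astype(np.int8)

snake = morphological_geodesic_active_contour(
    gimg, 50, init_level_set=init, smoothing=1, threshold=0.29, balloon=-1)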
