I am using PyTesseract to extract information from multiple images which contain vertically separated prices (one price per line), horizontally aligned to the right like the following image:
Tesseract is not able to extract reliable text from such an image, so some image processing has to occur first:
Image scaling to 4x;
Binarization;
"Bolding";
Gaussian blur.
Which results in the following image:
Pytesseract is then able to extract the information successfully (using --psm 6), resulting in a string containing:
96,000,000
94,009,999
90,000,000
85,000,000
78,000,000
70,000,000
66,000,000
However, when Pytesseract is presented with some edge cases like an image with a single digit, recognition fails. Example:
Pre-processed:
post-processed:
Which results in an empty string being extracted. This is strange, as the number 8 was previously read successfully. What other suggestions should I follow? I've spent endless hours trying to optimize the algorithm without success for these edge cases.
I tried the exact same scenario with EasyOCR. (Note that EasyOCR uses its own deep-learning models for optical character recognition rather than the Tesseract engine.) I resized the image to a custom size of (600, 600), fed it to EasyOCR, and it worked.
import easyocr
import cv2

# Upscale the image before OCR
image = cv2.imread('7.png')
image = cv2.resize(image, (600, 600))
cv2.imwrite('image.png', image)

# Run EasyOCR and keep only detections above 50% confidence
reader = easyocr.Reader(['en'])
result = reader.readtext('image.png')
texts = [detection[1] for detection in result if detection[2] > 0.5]
print(texts)
The output for the first image is:
['96,000,000', '94,009,999', '90,000,000', '85,000,000', '78,000,000', '70,000,000', '66,000,000']
The output for the second image is:
['8']
Maybe this alternative solution works for your case. You can install EasyOCR with pip install easyocr. Happy coding :)
Related
I've got this picture (preprocessed image) from which I want to extract the numeric values of each line. I'm using pytesseract, but it doesn't show any results for this image.
I've tried several config options from other questions like "--psm 13 --oem 3" or whitelisting numbers but nothing yields results.
As a result I usually get just one or two characters, or ~5 dots/dashes, but nothing even remotely resembling the size of my input.
I hope someone can help me cheers in advance for your time.
pytesseract version: 0.3.8
tesseract version: 5.0.0-alpha.20210506
You should try --psm 4; it's more appropriate for your image. I also recommend rethinking the image pre-processing. Tesseract is not perfect, and it requires a good image as input to work well.
import cv2 as cv
import pytesseract as tsr
img = cv.imread('41DAx.jpg')
img = cv.cvtColor(img, cv.COLOR_BGR2RGB)
config = '--psm 4 -c tessedit_char_whitelist=0123456789,'
text = tsr.image_to_string(img, config=config)
print(text)
The above code was not able to detect all the digits in the image, but it caught most of them. Maybe with a bit of image pre-processing you can reach your objective.
I'm guessing this is because the images I have contain text on top of a picture. pytesseract.image_to_string() can usually scan the text properly, but it also returns a crap ton of gibberish characters. I'm guessing the pictures underneath the text make Pytesseract think they are text too, or something.
When Pytesseract returns a string, how can I make it so that it doesn't include any text unless it's certain the text is right?
Is there a way for Pytesseract to also return some sort of number telling me how certain it is that the text was scanned accurately?
I know I kinda sound dumb, but somebody please help.
You can set a character whitelist with the config argument to get rid of gibberish characters, and you can also try different psm options to get a better result.
Unfortunately, it is not that easy; I think the only way is to apply some image preprocessing, and this is my best attempt:
First I applied some blurring to smooth the image:
import cv2

img = cv2.imread("input.png")  # path to your image
blurred = cv2.blur(img, (5, 5))
Then, to remove everything except the text, I converted the image to grayscale and applied thresholding to keep only the white color, which is the text color (I used inverse thresholding to make the text black, which is the optimum condition for Tesseract OCR):
gray_blurred = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)
ret, th1 = cv2.threshold(gray_blurred, 239, 255, cv2.THRESH_BINARY_INV)
and then applied OCR and removed the whitespace characters:
import pytesseract

txt = pytesseract.image_to_string(th1, lang='eng', config='--psm 12')
txt = txt.replace("\n", " ").replace("\x0c", "")
print(txt)
>>>"WINNING'OLYMPIC GOLD MEDAL IT'S MADE OUT OF RECYCLED ELECTRONICS "
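As for the original question about a confidence number: pytesseract's image_to_data can report a per-word confidence that you can filter on. A minimal sketch of the filtering step (the 60% cut-off is an arbitrary choice):

```python
def filter_confident(data, min_conf=60):
    """Keep (word, confidence) pairs above a threshold.

    `data` is the dict returned by
    pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT);
    non-word rows carry a confidence of -1 and are dropped here.
    """
    return [(w, float(c)) for w, c in zip(data["text"], data["conf"])
            if str(w).strip() and float(c) >= min_conf]

# usage (requires the tesseract binary installed):
# import pytesseract
# data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
# print(filter_confident(data))
```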
Related topics:
Pytesser set character whitelist
Pytesseract OCR multiple config options
You can also try preprocessing your image to let pytesseract work more accurately, and if you want to recognize meaningful words you can apply a spell check after OCR:
https://pypi.org/project/pyspellchecker/
I have attached a very simple text image that I want the text from. It is white text on a black background. To the naked eye it seems absolutely legible, but apparently to Tesseract it is rubbish. I have tried changing the oem and psm parameters, but nothing seems to work. Please note that this works for other images, just not for this one.
Please try running it on your machine and see if it works. Or else I might have to change my ocr engine altogether.
Note: It was working earlier, until I tried to add black pixels around the image to help the extraction process. Also, I don't think Tesseract was trained only on black text on a white background; it should be able to handle this too. And if that were true, why does it work for other text images that have the same format as this one?
Edit: Miraculously, I tried running the script again and this time it was able to extract Chand properly, but it failed in the case mentioned below. Also, please look at the parameters I have used; I have read the documentation and I feel this should be the right choice. It is not about just this image: why is Tesseract failing for such simple use cases? I have added the image for your reference.
To find the desired result, you need to know the following:
Page-segmentation-modes
Suggested Image processing methods
The input images are boldly written; we need to shrink the bold font and then treat the output as a single uniform block of text.
To shrink the strokes we can use erosion.
The result of eroding each image, followed by the recognized text:

CHAND
BAKLIWAL
Code:
# Load the libraries
import cv2
import pytesseract

# Initialize the list of image names
img_lst = ["lKpdZ.png", "ZbDao.png"]

# For each image name in the list
for name in img_lst:
    # Load the image
    img = cv2.imread(name)

    # Convert to grayscale
    gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Erode the image
    erd = cv2.erode(gry, None, iterations=2)

    # OCR, assuming the image is a single uniform block of text
    txt = pytesseract.image_to_string(erd, config="--psm 6")
    print(txt)
I want to be able to recognize digits from images, so I have been playing around with Tesseract and Python. I looked into how to prepare the image, tried running Tesseract on it, and I must say I am pretty disappointed by how badly my digits are recognized. I have tried to prepare my images with OpenCV and thought I did a pretty good job (see examples below), but Tesseract makes a lot of errors when trying to identify them.

Am I expecting too much here? When I look at these example images, I think Tesseract should easily be able to identify the digits without any problems. I am wondering if the accuracy is not there yet or if my configuration is somehow not optimal. Any help or direction would be gladly appreciated.
Things I tried to improve the digit recognition (nothing seemed to improve the results significantly):
limit characters: config = "--psm 13 --oem 3 -c tessedit_char_whitelist=0123456789"
Upscale images
add a white border around the image to give the letters more space, as I have read that this improves the recognition process
Threshold image to only have black and white pixels
Examples:
Image 1:
Tesseract recognized: 72
Image 2:
Tesseract recognized: 0
EDIT:
Image 3:
https://ibb.co/1qVtRYL
Tesseract recognized: 1723
I'm not sure what's going wrong for you. I downloaded those images and tesseract interprets them just fine for me. What version of tesseract are you using (I'm using 5.0)?
781429
209441
import pytesseract
import cv2
from PIL import Image

# set the tesseract executable path (Windows)
pytesseract.pytesseract.tesseract_cmd = r'C:\Users\ichu\AppData\Local\Programs\Tesseract-OCR\tesseract.exe'

# load images
first = cv2.imread("first_text.png")
second = cv2.imread("second_text.png")
images = [first, second]

# convert to Pillow images
pimgs = []
for img in images:
    rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    pimgs.append(Image.fromarray(rgb))

# run OCR
for img in pimgs:
    text = pytesseract.image_to_string(img, config='--psm 10 --oem 3 -c tessedit_char_whitelist=0123456789')
    print(text[:-2])  # drop newline + end char
I installed pytesseract via pip and its results are terrible.
As I searched for a fix, I came to think I need to give it more data,
but I can't find where to put the tessdata (traineddata),
since there is no directory like ProgramFiles\Tesseract-OCR on Mac.
There is no problem with images' resolution, font or size.
Image whose result is 'ecient Sh Abu'
Because large and clear test images work fine, I think it is a problem about lack of data.
But any other possible solution is welcomed as long as it can read text with Python.
Please help me..
I installed pytesseract via pip and its result is terrible.
Sometimes you need to apply preprocessing to the input image to get accurate results.
Because large and clear test images work fine, I think it is a problem about lack of data. But any other possible solution is welcomed as long as it can read text with Python.
You could say lack of data is a problem. I think you'll find morphological transformations useful.
For instance, if we apply an opening operation, the result will be:
The image looks similar to the original posted image, though there are slight changes in the output (e.g. the word Grammar is slightly different from the original image).
Now if we read the output image:
English
Grammar Practice
ter K-SAT (1-10)
Code:
import cv2
from pytesseract import image_to_string
img = cv2.imread("6Celp.jpg")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
opn = cv2.morphologyEx(gry, cv2.MORPH_OPEN, None)
txt = image_to_string(opn)
txt = txt.split("\n")
for i in txt:
i = i.strip()
if i != '' and len(i) > 3:
print(i)
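Regarding where the traineddata lives on Mac: pip installs only the pytesseract wrapper, while the .traineddata language files ship with the tesseract binary itself (typically installed via Homebrew). The paths below are the usual Homebrew locations and may differ on your machine, so treat them as a guess to verify:

```shell
# Typical Homebrew tessdata locations (check yours with `brew --prefix`):
#   Intel Macs:         /usr/local/share/tessdata
#   Apple Silicon Macs: /opt/homebrew/share/tessdata
export TESSDATA_PREFIX="/opt/homebrew/share/tessdata"
echo "tesseract will look for traineddata in: $TESSDATA_PREFIX"
# then verify with: tesseract --list-langs
```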