I have a dataset of images and I want to filter out all images that contain text (ASCII chars). For example, I have the following cute image of a dog:
As you can see, in the bottom-right corner there is the text "MAY 18 2003", so the image should be filtered out.
After some research, I came across Tesseract OCR. In Python I have the following code:
# Attempt 1
import pytesseract
from PIL import Image

img = Image.open('n02086240_1681.jpg')
text = pytesseract.image_to_string(img)
print(text)
# Attempt 2
import unidecode
img = Image.open('n02086240_1681.jpg')
text = pytesseract.image_to_string(img)
text = unidecode.unidecode(text)
print(text)
# Attempt 3
import string

char_whitelist = string.digits
char_whitelist += string.ascii_lowercase
char_whitelist += string.ascii_uppercase
text = pytesseract.image_to_string(img, lang='eng',
    config='--psm 10 --oem 3 -c tessedit_char_whitelist=' + char_whitelist)
print(text)
None of them detected the string (each prints only whitespace). How can I detect it?
You should prepare the image for OCR. For example, for this image I would do the following (a code sketch follows the list):
convert it to a black & white image with a threshold that makes the text visible (for this image it is 130)
then invert the image (so the text is black)
now try Tesseract OCR
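A minimal sketch of those three steps with OpenCV and pytesseract (the filename is a placeholder; the 130 threshold is the value suggested above):
import cv2
import pytesseract

img = cv2.imread("input.jpg")  # placeholder filename
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# 1. binarize with the threshold that makes the text visible (130 here)
bw = cv2.threshold(gry, 130, 255, cv2.THRESH_BINARY)[1]
# 2. invert so the text becomes black
inv = cv2.bitwise_not(bw)
# 3. run Tesseract on the preprocessed image
print(pytesseract.image_to_string(inv))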
You can use EasyOCR instead of pytesseract to directly get this output:
Kay
10 2003
As your goal is just to detect ASCII text, you don't care about the exact characters; you only want to filter out the images that contain them.
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import cv2
import easyocr
path = ""
img = cv2.imread(path+"input.jpg")
# Now apply the Easy-OCR
reader = easyocr.Reader(['en'])
output = reader.readtext(img)
for bbox, detected_text, confidence in output:
    print(detected_text)
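Since the goal is only to flag images that contain any readable text, you can reduce this to a boolean. A small sketch (the 0.3 confidence cutoff is an assumption to tune):
# readtext returns (bbox, text, confidence) tuples; treat the image as
# "contains text" if any detection clears a minimal confidence.
has_text = any(confidence > 0.3 for _, _, confidence in output)
print("filter out" if has_text else "keep")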
You can use inRange thresholding
The result will be:
If you set the psm mode to 6, the output will be:
<<
‘\
' MAY 18 2003
All the digits are captured correctly, but we have some unwanted characters.
If we add an 'alphanumeric only' condition, the result will be:
['M', 'A', 'Y', '1', '8', '2', '0', '0', '3']
First, I upsampled the image and then applied Tesseract OCR, because the date is too small to read at the original size.
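The upsampling step itself is not shown in the answer; one way to produce such an image is cv2.resize with cubic interpolation (the 4x factor is an assumption):
import cv2

src = cv2.imread("n02086240_1681.jpg")  # the original dog image
big = cv2.resize(src, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)  # assumed 4x factor
cv2.imwrite("result.png", big)  # the upsampled image loaded below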
Code:
import cv2
import pytesseract
from numpy import array
img = cv2.imread("result.png") # Load the upsampled image
img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
msk = cv2.inRange(img, array([0, 103, 171]), array([179, 255, 255]))
krn = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 3))
dlt = cv2.dilate(msk, krn, iterations=1)
thr = 255 - cv2.bitwise_and(dlt, msk)
txt = pytesseract.image_to_string(thr, config='--psm 6')
print([t for t in txt if t.isalnum()])
cv2.imshow("", thr)
cv2.waitKey(0)
You can set the new values for the minimum and maximum ranges:
import numpy as np
min_range = np.array([0, 103, 171])
max_range = np.array([179, 255, 255])
msk = cv2.inRange(img, min_range, max_range)
You can also test with different psm parameters:
txt = pytesseract.image_to_string(thr, config='--psm 6')
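To compare several modes quickly, a small sketch like this (assuming thr is the thresholded image from above) prints what each psm returns:
for psm in (4, 6, 7, 11):
    out = pytesseract.image_to_string(thr, config='--psm {}'.format(psm))
    print(psm, repr(out))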
For more, read: Improving the quality of the output
Related
I am trying to get all the numerical data from the Mean column with multiple identical pictures, as attached (nonsip2_write_8000M).
The way I do it is by using a cursor-position script to grab the column of information that I want. However, the first value I get is always a bunch of garbage characters while the rest are processed correctly. Even though I rearrange the order of the images, the results are the same and I still get garbage for the first value. Is there a better way to do this? I think I might not have preprocessed the images properly.
import numpy as np
import time
import datetime
import cv2
import pytesseract
from PIL import ImageGrab
import sys
import subprocess
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
tstamp = datetime.datetime.now().strftime('%Y_%m_%d_%H_%M_%S')
report_fname = r'C:\Test_Automation\excel_file\ocr_' + tstamp + '.csv'
fid_1 = open(report_fname, "a")
filename_set = ['nonsip_read_10M.jpg', 'nonsip_read_200M.jpg', 'nonsip_read_8000M.jpg', 'nonsip_read_8000M_long.jpg', 'nonsip_write_10M.jpg', 'nonsip_write_200M.jpg', 'nonsip_write_8000M.jpg', 'nonsip_write_8000M_long.jpg','nonsip2_read_8000M.jpg','nonsip2_write_8000M.jpg']
while filename_set:
    filename = filename_set.pop(-1)
    print(filename)
    img = cv2.imread(filename, 0)
    cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
    cv2.setWindowProperty("window", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
    cv2.imshow("window", img)
    cv2.waitKey(1)
    x_start = 839
    x_end = 927
    y_start = 844
    y_end = 1057
    x_interval = (x_end - x_start) / 8
    y_interval = (y_end - y_start) / 8
    x1 = x_start
    y1 = y_start
    x2 = x_end
    y2 = y_end
    for i in range(1, 9, 1):
        y2 = int(y_start + i * y_interval)
        print(i, x1, y1, x1, y2)
        img1 = ImageGrab.grab(bbox=(x1, y1, x1, y2))
        print("debug1")
        img1.save('sc.png')
        img1 = cv2.imread('sc.png', 0)
        img1 = np.invert(img1)
        data = pytesseract.image_to_string(img1, lang='eng', config='--psm 6')
        print(data)
        fid_1.write('%s.%s\n' % (filename, data))
        y1 = y2
My solution is:
Crop the image
Apply thresholding to the image
Set the page-segmentation-mode to column read (4)
First of all, you want the bottom part of the image, so we can crop it proportionally:
a. Get image height (h) and width (w) values
h, w = img.shape[:2] # get height and width
b. Set starting x, y coordinates.
x = int(w/3)
y = int((3*h)/4)
c. Crop the image
img = img[y:int(y+h/4), x:x+int(w/5)]
Result:
Apply thresholding to the cropped area:
thr = cv2.threshold(gry, 127, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
Result:
Set the page-segmentation-mode to column read, since you want the Mean column values
txt = pytesseract.image_to_string(thr, config="--psm 4")
Clean up the txt variable (we want only the Mean values)
txt = txt.strip().split("\n")
for t in txt:
    t = t.split(" ")
    is_cnt_dgt = [i for i in t if i.replace(".", "").isdigit()]
    if len(is_cnt_dgt) != 0:
        print(t[len(t)-2])
Result:
887.958
919.142
846.984
72.1587
897.016
934.200
857.695
76.5089
If you skip the cleaning step, the raw result will be:
‘Current Mean
§88529 mV 887.958 mV
Q2L7B5 mV 919.142 mV
846308 mV 846.984 mV
TSATI mV 72.1587 mV
897.397 mV 897.016 mV
934378mV 934.200 mV
856477 mV 857.695 mV
TI901 mY 76.5089 mV
es
Code:
import cv2
import pytesseract
img = cv2.imread("imYLS.jpg")
h, w = img.shape[:2] # get height and width
x = int(w/3)
y = int((3*h)/4)
img = img[y:int(y+h/4), x:x+int(w/5)]
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thr = cv2.threshold(gry, 127, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
txt = pytesseract.image_to_string(thr, config="--psm 4")
txt = txt.strip().split("\n")
for t in txt:
    t = t.split(" ")
    is_cnt_dgt = [i for i in t if i.replace(".", "").isdigit()]
    if len(is_cnt_dgt) != 0:
        print(t[len(t)-2])
cv2.imshow("thr", thr)
cv2.waitKey(0)
I am trying to detect this letter, but Tesseract doesn't seem to recognize it.
import cv2
import pytesseract as tess
img = cv2.imread("letter.jpg")
imggray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print(tess.image_to_string(imggray))
This is the image in question:
Preprocessing the image (e.g. inverting it) should help, and you can also take advantage of pytesseract's image_to_string config options.
For instance, something along these lines:
import pytesseract
import cv2 as cv
import requests
import numpy as np
import io
# I read this directly from imgur
response = requests.get('https://i.stack.imgur.com/LGFAu.jpg')
nparr = np.frombuffer(response.content, np.uint8)
img = cv.imdecode(nparr, cv.IMREAD_GRAYSCALE)
# simple inversion as preprocessing
neg_img = cv.bitwise_not(img)
# invoke tesseract with options
text = pytesseract.image_to_string(neg_img, config='--psm 7')
print(text)
should parse the letter correctly.
Have a look at related questions for some additional info about preprocessing and tesseract options:
Why does pytesseract fail to recognise digits from image with darker background?
Why does pytesseract fail to recognize digits in this simple image?
Why does tesseract fail to read text off this simple image?
@Davide Fiocco's answer is definitely correct.
I just want to show another way of doing it, with adaptive thresholding.
When you apply adaptive thresholding, the result will be:
Now when you read it:
txt = pytesseract.image_to_string(thr, config="--psm 7")
print(txt)
Result:
B
Code:
import cv2
import pytesseract
img = cv2.imread("LGFAu.jpg")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thr = cv2.adaptiveThreshold(gry, 252, cv2.ADAPTIVE_THRESH_MEAN_C,
cv2.THRESH_BINARY_INV, 11, 2)
txt = pytesseract.image_to_string(thr, config="--psm 7")
print(txt)
I have an issue with reading exactly two lines of numbers (each line contains at most 3 digits) from an image.
My Python code has a big problem reading the data from images like the ones below:
Most of the time it just prints random numbers.
What should I do to make this work?
This is my Python code:
from PIL import ImageGrab, Image
from datetime import datetime
from pytesseract import pytesseract
import numpy as nm
pytesseract.tesseract_cmd = 'F:\\Tesseract\\tesseract'
while True:
    screenshot = ImageGrab.grab(bbox=(515, 940, 560, 990))
    now = datetime.now()  # avoid shadowing the imported datetime class
    filename = 'pic_{}.{}.png'.format(now.strftime('%H%M_%S'), now.microsecond / 500000)
    gray = screenshot.convert('L')
    bw = nm.asarray(gray).copy()
    bw[bw < 160] = 0
    bw[bw >= 160] = 255
    convertedScreenshot = Image.fromarray(bw)
    tesseract = pytesseract.image_to_string(convertedScreenshot, config='digits --psm 6')
    convertedScreenshot.save(filename)
    print(tesseract)
The image has to have white text on a black background or black text on a white background.
It is also important that the image is saved afterwards.
Tesseract works best on images with black text on a white background. Invert the image before using Tesseract by adding the line below:
convertedScreenshot = 255 - convertedScreenshot
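Note that convertedScreenshot is a PIL Image at that point; the subtraction works on NumPy arrays, so one way is to invert the bw array from the question's code before converting back. A sketch of the same idea:
bw = 255 - bw  # invert: white text on black becomes black text on white
convertedScreenshot = Image.fromarray(bw)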
Hey, I was facing a similar problem (I still am), but using a few arguments in the image_to_string function helped.
I was using it for single-digit detection:
d = pytesseract.image_to_string(thr, lang='eng',
        config='--psm 10 --oem 3 -c tessedit_char_whitelist=0123456789')
This helped me detect single digits.
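For context, a self-contained sketch of that call (the filename is a placeholder and the Otsu binarization is an assumption; thr just needs to be a clean binary image):
import cv2
import pytesseract

img = cv2.imread("digit.png")  # placeholder filename
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thr = cv2.threshold(gry, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# --psm 10 treats the image as a single character
d = pytesseract.image_to_string(thr, lang='eng',
        config='--psm 10 --oem 3 -c tessedit_char_whitelist=0123456789')
print(d)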
I have a binary image like this,
I want to extract the numbers in the image using Tesseract OCR in Python. I used pytesseract on the image like this:
txt = pytesseract.image_to_string(img)
But I am not getting any good results.
What can I do in pre-processing or augmentation to help Tesseract do better?
I tried to localize the text in the image using the EAST text detector, but it was not able to recognize the text.
How should I proceed with this in Python?
I think the page-segmentation-mode is an important factor here.
Since we are trying to read column values, we could use --psm 4 (source)
import cv2
import pytesseract
img = cv2.imread("k7bqx.jpg")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
txt = pytesseract.image_to_string(gry, config="--psm 4")
We want to get the lines of text that start with #:
txt = txt.strip().split("\n")
txt = sorted([t[:2] for t in txt if "#" in t])
Result:
['#3', '#7', '#9', '#€']
But we miss #4 and #5, so we could apply adaptive thresholding:
Result:
['#3', '#4', '#5', '#7', '#9', '#€']
Unfortunately, #2 and #6 are not recognized.
Code:
import cv2
import pytesseract
img = cv2.imread("k7bqx.jpg")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thr = cv2.adaptiveThreshold(gry, 252, cv2.ADAPTIVE_THRESH_MEAN_C,
cv2.THRESH_BINARY_INV, blockSize=131, C=100)
bnt = cv2.bitwise_not(thr)
txt = pytesseract.image_to_string(bnt, config="--psm 4")
txt = txt.strip().split("\n")
txt = sorted([t[:2] for t in txt if "#" in t])
print(txt)
I have tried to read text from an image of a receipt using pytesseract, but the resulting text contains a lot of weird characters and really looks awful.
Here is the code I used to manipulate the image:
import sys
from PIL import Image
import cv2 as cv
import numpy as np
import pytesseract
def manipulate_image(img):
    img = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    kernel = np.ones((1, 1), dtype="uint8")
    img = cv.erode(img, kernel, iterations=1)
    img = cv.threshold(img, 0, 255,
                       cv.THRESH_BINARY | cv.THRESH_OTSU)[1]
    img = cv.medianBlur(img, 3)
    return img

if len(sys.argv) > 2:
    print("Please provide only name of image.")
elif len(sys.argv) == 2:
    img = cv.imread(sys.argv[1])
    img = manipulate_image(img)
    cv.imwrite("test.png", img)
    text = pytesseract.image_to_string(img)
    print(text.encode('utf8'))
else:
    print("Please provide name of image.")
Here is my test receipt image:
https://imgur.com/a/RjeQ9dL
and here is the output image after manipulation:
https://imgur.com/a/1tFZRdq
and here is the text result:
""'9vco4v‘l7
0 .Vt3t00N 00t300N BUNUUS
SKLEP PUU POPUGOH|
UL. JHGIELLUNSKA 25, 70-364 SZCZ[C|N
TEL. 91 4841-20-58
N|P: 955—150-21-B2
dn.19r03.05 Uydr.8534
PARAGON FISKALNY
CIHSTKH 17 0,3 ¥ 16,30 = 4.89 B
Sp.0p.B 4,89 PTU B= 8,00% 0,35
Razem PTU 0,35
ZOP{HCUNU GUTUNKQ PLN
RESZTA PLN
0025/1373 H0103 0N|0 H.
15F H9HF[B9416} 13fl02D6k0[20D4334C
7?? BW 140
Any idea how to do this in a better way to get nicer results?
Applying simple thresholding will not be enough for pyTesseract to properly detect the characters. There is much more preprocessing that can be done to drastically improve your results, such as:
using Tesseract V4, where deep learning is implemented
segmenting characters
using only the part of the receipt where the text is through edge detection
perspective transform to straighten out the text
These are somewhat lengthy topics to write all in one answer, but you can check out some articles on pyImageSearch, where this is talked about in much more depth:
https://www.pyimagesearch.com/2014/09/01/build-kick-ass-mobile-document-scanner-just-5-minutes/
https://www.pyimagesearch.com/2018/09/17/opencv-ocr-and-text-recognition-with-tesseract/
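As a minimal illustration of the perspective-transform step, here is a sketch; the corner coordinates and output size are placeholders that would normally come from contour detection on an edge map, as in the document-scanner article above:
import cv2
import numpy as np

img = cv2.imread("receipt.jpg")  # placeholder filename

# Corners of the receipt in the source image, ordered
# top-left, top-right, bottom-right, bottom-left (placeholders).
src = np.float32([[60, 40], [520, 55], [540, 900], [30, 880]])
w, h = 500, 850  # assumed output size
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Warp so the receipt fills the frame and the text is straightened.
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (w, h))
cv2.imwrite("receipt_straight.png", warped)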