Please check the following image:
Image
I am using the following code to extract text from the image.
import cv2
import pytesseract
img = cv2.imread("img.png")
txt = pytesseract.image_to_string(img)
But the result is different from the text in the image.
It shows the following result:
+BuFl
But it should be:
+Bu#L
I don't know what the problem is; I am pretty new to Pytesseract.
Can anyone help me sort out the problem?
Thank you very much.
One way of solving this is to apply Otsu thresholding.
Unlike global thresholding, Otsu's method finds the threshold value automatically.
The result of applying Otsu's threshold will be:
import cv2
import pytesseract
img = cv2.imread("Tqom8.png") # Load the image
img = cv2.resize(img, (0, 0), fx=0.5, fy=0.5)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Convert to gray
thr = cv2.threshold(gray, 0, 128, cv2.THRESH_OTSU)[1] # Otsu's threshold
txt = pytesseract.image_to_string(thr, config='--psm 6') # OCR on the thresholded image
print(pytesseract.__version__)
print(txt)
Result:
0.3.8
+Bu#L
Also make sure to read the Improving the quality of the output page in the Tesseract documentation.
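Note that 0.3.8 above is the version of the pytesseract wrapper, not of the Tesseract engine itself, and the engine version also affects output quality. If it helps, the engine version can be checked from Python as well; a quick sketch, assuming Tesseract is installed and on the PATH:
import pytesseract
# Version of the underlying Tesseract binary (not the pytesseract wrapper)
print(pytesseract.get_tesseract_version())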
GOAL: I need a cleaned image to use pytesseract and get the text from it.
I convert this image to grayscale.
I use cv2.adaptiveThreshold to deal with the reflection problem, but it doesn't work well.
The text becomes less readable and pytesseract can't read it. I don't know how to improve my image.
import cv2
path = "path/to/image.jpg"
rgb_img = cv2.imread(path)
gray_img = cv2.imread(path, 0)  # Read directly as grayscale
thresholded_img = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 19, 1)
cv2.imshow('rgb', rgb_img)
cv2.imshow('gray', gray_img)
cv2.imshow('thresholded', thresholded_img)
cv2.waitKey(0)
EDIT:
cv2.ADAPTIVE_THRESH_GAUSSIAN_C didn't give me a better result.
Here is a sample.
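One thing that may be worth trying for the reflection problem, in line with the Otsu-based preprocessing used in the other answers here, is a global Otsu threshold after a light Gaussian blur instead of adaptive thresholding. A minimal sketch, assuming the same path/to/image.jpg placeholder from the question; the (3, 3) blur kernel and --psm 6 setting are illustrative choices, not taken from the question:
import cv2
import pytesseract
path = "path/to/image.jpg"  # placeholder path from the question
gray = cv2.imread(path, 0)  # Read directly as grayscale
# Light blur to soften the reflection, then let Otsu pick a global threshold
blur = cv2.GaussianBlur(gray, (3, 3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
print(pytesseract.image_to_string(thresh, config='--psm 6'))
cv2.imshow('thresh', thresh)
cv2.waitKey(0)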
I want to recognize an image like this:
I am using the following config:
config="--psm 6 --oem 3 -c tessedit_char_whitelist=0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ,."
But when I try to convert it, I get the following:
1581
1
W
I think the image shows quite clearly what is written, so I suspect there is a problem with pytesseract. Can you help?
Preprocessing the image to obtain a binary image before performing OCR seems to work. You could also try resizing the image so that more detail is visible (see the sketch after the code below).
Results
158.1
1
IT
import cv2
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Grayscale and Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Perform text extraction
data = pytesseract.image_to_string(thresh, lang='eng', config='--psm 6')
print(data)
cv2.imshow('thresh', thresh)
cv2.waitKey()
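Regarding the resize suggestion above, upscaling before thresholding sometimes helps Tesseract pick up small glyphs. A minimal sketch, assuming the same 1.png input; the 2x factor and cubic interpolation are illustrative assumptions, not part of the original answer:
import cv2
import pytesseract
image = cv2.imread('1.png')
# Upscale 2x before thresholding so small characters have more pixels
image = cv2.resize(image, (0, 0), fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
print(pytesseract.image_to_string(thresh, lang='eng', config='--psm 6'))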
Hi, I am looking to improve my pytesseract performance at digit recognition.
I take my raw image and split it into parts that look like this:
The size can vary.
To this I apply some pre-processing methods like so
import cv2
import numpy as np
image = cv2.imread(im, cv2.IMREAD_GRAYSCALE)  # 'im' is the path to the cropped part
image = cv2.GaussianBlur(image, (1, 1), 0)
kernel = np.ones((5, 5), np.uint8)
result_img = cv2.blur(image, (2, 2))
result_img = cv2.dilate(result_img, kernel, iterations=1)
result_img = cv2.erode(result_img, kernel, iterations=1)
and I get this
I then pass this to pytesseract:
num = pytesseract.image_to_string(result_img, lang='eng',
                                  config='--psm 10 --oem 3 -c tessedit_char_whitelist=0123456789')
However, this is not good enough for me and it often gets numbers wrong.
I am looking for ways to improve. I have tried to keep this minimal and self-contained, but let me know if I've not been clear and I will elaborate.
Thank you.
You're on the right track by trying to preprocess the image before performing OCR, but you're using an incorrect approach. There is no reason to dilate or erode the image, since these operations are mainly used for removing small noise particles. In addition, your current output is not a binary image. It may look like it only contains black and white pixels, but it is actually a 3-channel BGR image, which is probably why you're getting incorrect OCR results.
If you look at Tesseract improve quality, you will notice that for Pytesseract to perform optimal OCR, the image needs to be preprocessed so that the desired text to detect is in black with the background in white. To do this, we can apply Otsu's threshold to obtain a binary image, then invert it so the text is in the foreground. This gives us a preprocessed image that we can throw into image_to_string. We use the --psm 6 configuration option to assume a single uniform block of text. Take a look at configuration options for more settings. Here are the results:
Input image -> Binary -> Invert
Result from Pytesseract OCR
8
Code
import cv2
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Load image, grayscale, Otsu's threshold, invert
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
invert = 255 - thresh
# OCR
data = pytesseract.image_to_string(invert, lang='eng', config='--psm 6')
print(data)
cv2.imshow('thresh', thresh)
cv2.imshow('invert', invert)
cv2.waitKey()
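Since the target here is digits only, the whitelist idea from the question's original config can also be combined with this preprocessing. A short sketch that continues from the invert image produced above; the exact combination is an assumption, not part of the original answer:
# Continue from 'invert' above; restrict recognition to digits only
digit_config = '--psm 6 --oem 3 -c tessedit_char_whitelist=0123456789'
print(pytesseract.image_to_string(invert, lang='eng', config=digit_config))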
I want to read a column of numbers from the attached image (PNG file).
My code is
import cv2
import pytesseract
import os
img = cv2.imread(os.path.join(image_path, image_name), 0)
config= "-c
tessedit_char_whitelist=01234567890.:ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
pytesseract.image_to_string(img, config=config)
This code gives me the output string: 'n113\nun\n1.08'. As we can see, there are two problems:
It fails to recognize a decimal point in 1.13 (see attached picture).
It totally cannot read 1.11 (see attached picture). It just returns 'nun'.
What is a solution to these problems?
Best
You need to preprocess the image. A simple approach is to resize the image, convert to grayscale, and obtain a binary image using Otsu's threshold. From here we can apply a slight Gaussian blur, then invert the image so the desired text to extract ends up black on a white background. Here's the processed image, ready for OCR:
Result from OCR
1.13
1.11
1.08
Code
import cv2
import pytesseract
import imutils
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Resize, grayscale, Otsu's threshold
image = cv2.imread('1.png')
image = imutils.resize(image, width=400)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Blur and perform text extraction
thresh = 255 - cv2.GaussianBlur(thresh, (5,5), 0)
data = pytesseract.image_to_string(thresh, lang='eng',config='--psm 6')
print(data)
cv2.imshow('thresh', thresh)
cv2.waitKey()
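If some numbers still come out wrong, it can help to look at Tesseract's per-word confidences to see which lines it is unsure about. A small sketch that continues from the thresh image in the code above, using pytesseract's image_to_data output:
from pytesseract import Output
# Per-word text and confidence, continuing from 'thresh' above
d = pytesseract.image_to_data(thresh, lang='eng', config='--psm 6', output_type=Output.DICT)
for word, conf in zip(d['text'], d['conf']):
    if word.strip():
        print(word, conf)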
I am trying to read some characters that appear on the screen, but none of my attempts has been successful. Example image here:
And here is my code:
import pytesseract as tess
tess.pytesseract.tesseract_cmd = r'C:\Users\myuser\AppData\Local\Tesseract-OCR\tesseract.exe'
from PIL import Image
img = Image.open(r'E:\images\numbers.PNG')
text = tess.image_to_string(img)
print(text)
The "garbage" output that displays is:
C NCES IC DICIIED)
CK STOO TEED
#©O®D#O#O#O#O®
I suppose this is happening because of the color of the numbers and the different background images they can appear on.
Unfortunately I do not know how to proceed further and how to get it working.
Can you please help? Your assistance is much appreciated!
Thanks!
I don't have Tesseract installed right now, but try with the result of this code:
import cv2
img = cv2.imread('img.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 3, 6)
cv2.imshow('threshold', thresh)
cv2.waitKey(0)
You can fine-tune the parameters to achieve your result.
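Once the thresholded image looks clean, it can be passed to pytesseract in the usual way. A minimal sketch that continues from the thresh image above, assuming Tesseract is installed; the --psm 6 setting is carried over from the other answers here, not from this one:
import pytesseract
# OCR on the adaptive-threshold result from the code above
print(pytesseract.image_to_string(thresh, config='--psm 6'))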