Difficulty using cv2.adaptiveThreshold on an image with reflections - python

GOAL: I need a cleaned image to use pytesseract and get the text from it.
I convert the image to grayscale.
I use cv2.adaptiveThreshold to deal with the reflections, but it doesn't work well.
The text becomes less readable and pytesseract can't read it. I don't know how to improve my image.
import cv2

path = "path/to/image.jpg"
rgb_img = cv2.imread(path)        # original colour image
gray_img = cv2.imread(path, 0)    # same image read as grayscale
# mean adaptive threshold, inverted, blockSize=19, C=1
thresholded_img = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                        cv2.THRESH_BINARY_INV, 19, 1)
cv2.imshow('rgb', rgb_img)
cv2.imshow('gray', gray_img)
cv2.imshow('thresholded', thresholded_img)
cv2.waitKey(0)
EDIT:
cv2.ADAPTIVE_THRESH_GAUSSIAN_C didn't give me a better result.
Here is a sample.
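For reference, here is a minimal sketch of the kind of parameter sweep one could run over adaptiveThreshold, since blockSize and C change the result a lot on unevenly lit images; the block sizes and C values below are illustrative guesses, not settings known to fix this particular image:
import cv2

gray_img = cv2.imread("path/to/image.jpg", 0)
for block_size in (11, 19, 31, 51):   # neighbourhood size, must be odd
    for c in (2, 5, 10):              # constant subtracted from the local mean
        candidate = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                          cv2.THRESH_BINARY_INV, block_size, c)
        cv2.imshow("blockSize=%d C=%d" % (block_size, c), candidate)
        cv2.waitKey(0)
        cv2.destroyAllWindows()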

Related

Pytesseract image to text problem in Python

Please check the following image:
Image
I am using the following code to extract text from the image.
import cv2
import pytesseract
img = cv2.imread("img.png")
txt = pytesseract.image_to_string(img)
But the result is different from the text in the image.
It is showing the following result:
+BuFl
But it should be:
+Bu#L
I don't know what the problem is. I am pretty new to Pytesseract.
Is there anyone who can help me to sort out the problem?
Thank you very much.
One way of solving this is to apply Otsu thresholding.
Unlike global thresholding, where you have to pick the threshold yourself, Otsu's method finds the threshold value automatically.
The result of applying Otsu's threshold will be:
import cv2
import pytesseract

img = cv2.imread("Tqom8.png")                    # load the image
img = cv2.resize(img, (0, 0), fx=0.5, fy=0.5)    # downscale by half
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # convert to gray
thr = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]   # Otsu's threshold
txt = pytesseract.image_to_string(thr, config='--psm 6')   # OCR the thresholded image
print(pytesseract.__version__)
print(txt)
Result:
0.3.8
+Bu#L
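Since Otsu's method computes the threshold itself, you can also inspect the value it picked: the first element returned by cv2.threshold is the computed threshold. A minimal sketch, reusing the same sample image name:
import cv2

gray = cv2.imread("Tqom8.png", 0)     # read directly as grayscale
ret, thr = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(ret)                            # the threshold value Otsu selected for this image
cv2.imshow("otsu", thr)
cv2.waitKey(0)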
Also make sure to read the Improving the quality of the output page in the Tesseract documentation.

Filtering the image and converting to text gives wrong output in python

I want to extract a particular piece of text from an image, and I already did some filtering on the image, but I'm still not getting the exact text. Also, is there a way to get only a specific piece of text from the image?
Code for filtering the image and converting to text
import cv2
import pytesseract

image = cv2.imread('original.png', 0)    # read as grayscale
# global Otsu threshold first
thresh = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# then an adaptive threshold applied to the already-binarized image
img = cv2.adaptiveThreshold(thresh, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
cv2.imwrite('filtered.png', img)
data = pytesseract.image_to_data(img)
print(data)
cv2.imshow('thresh', img)
cv2.waitKey()
You can try easyOCR instead of pytesseract.
First install it with pip install easyocr.
import cv2
import easyocr

image = cv2.imread('original.jpg', 0)    # read as grayscale
reader = easyocr.Reader(['en'])          # English model
result = reader.readtext(image)          # list of (bbox, text, confidence) tuples
# a regular expression would also work; a simple substring check is enough here
interested_string = 'Patrol Rewards'
line = [l[1] for l in result if interested_string in l[1]]
print(line)
You will get a list containing the string of interest, like
['Patrol Rewards: Courage Horn X 1']
This will give the correct output, but it is a bit slow compared to pytesseract on a CPU; if you have a GPU configured it is faster. It gives quite good OCR results.
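If you do want the regular-expression route mentioned in the comment, a minimal sketch could look like this; the pattern is only an illustrative guess based on the example output above:
import re
import easyocr

reader = easyocr.Reader(['en'])
result = reader.readtext('original.jpg')           # list of (bbox, text, confidence)
full_text = ' '.join(item[1] for item in result)   # join all recognised fragments
match = re.search(r'Patrol Rewards:\s*(.+?\bX\s*\d+)', full_text)
if match:
    print(match.group(1))                          # e.g. 'Courage Horn X 1'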

Processing image for reducing noise with OpenCV in Python

I want to apply some kind of preprocessing to this image so that the text becomes more readable and I can later read the text from the image. I'm new to this, so I do not know what I should do: increase the contrast, reduce the noise, or something else. Basically, I want to remove the gray areas in the image and keep only the black letters (as clear as they can be) and the white background.
import cv2
img = cv2.imread('slika1.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow('gray', img)
cv2.waitKey(0)
thresh = 200    # global threshold value found by trial and error
img = cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY)[1]
cv2.imshow('filter',img)
cv2.waitKey(0)
I read the image and applied a threshold, but I needed to try 20 different threshold values until I found one that gives results.
Is there any better way to solve problems like this?
The problem is that I can get different pictures with different sizes of gray areas, so sometimes I do not need to apply any threshold at all and sometimes I do. Because of that, I think my threshold-based solution is not that good.
For this image, my code works well:
But for this one it gives terrible results:
Try division normalization in Python/OpenCV. Divide the input by its blurred copy. Then sharpen. You may want to crop the receipt better or mask out the background first.
Input:
import cv2
import numpy as np
import skimage.filters as filters
# read the image
img = cv2.imread('receipt2.jpg')
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# blur heavily to estimate the background illumination
smooth = cv2.GaussianBlur(gray, (95,95), 0)
# divide gray by the blurred copy to even out the illumination
division = cv2.divide(gray, smooth, scale=255)
# sharpen using unsharp masking (the input is 2-D grayscale)
sharp = filters.unsharp_mask(division, radius=1.5, amount=1.5, preserve_range=False)
sharp = (255*sharp).clip(0,255).astype(np.uint8)
# save results
cv2.imwrite('receipt2_division.png',division)
cv2.imwrite('receipt2_division_sharp.png',sharp)
# show results
cv2.imshow('smooth', smooth)
cv2.imshow('division', division)
cv2.imshow('sharp', sharp)
cv2.waitKey(0)
cv2.destroyAllWindows()
Division result:
Sharpened result:
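If the end goal is OCR, the sharpened image can then be handed to something like pytesseract. A minimal sketch that reads back the saved result; the --psm 6 setting is just an assumption to start from:
import cv2
import pytesseract

# read back the sharpened result written above and pass it to Tesseract
sharp = cv2.imread('receipt2_division_sharp.png', 0)
# '--psm 6' assumes a single uniform block of text; adjust as needed
print(pytesseract.image_to_string(sharp, config='--psm 6'))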

How to auto adjust contrast and brightness of a scanned Image with opencv python

I want to auto-adjust the brightness and contrast of a color image taken from a phone under different lighting conditions. Please help me, I am new to OpenCV.
Source:
Input Image
Result:
result
What I am looking for is more of a localized transformation. In essence, I want the shadow to become as light as possible (completely gone if possible), the darker pixels of the image to get darker and more contrasted, and the light pixels to get whiter, but not to the point where anything gets overexposed.
I have tried CLAHE, histogram equalization, binary thresholding, adaptive thresholding, etc., but nothing has worked.
My initial thought is that I need to neutralize the highlights and bring the darker pixels closer to the average value while keeping the text and lines as dark as possible, and then maybe apply a contrast filter. But I am unable to get the result, please help me.
Here is one way to do that in Python/OpenCV.
Read the input
Convert to grayscale
Adaptive threshold
Use the thresholded image to make the background of the input white
Save and display the results
Input:
import cv2
import numpy as np
# read image
img = cv2.imread("math_diagram.jpg")
# convert img to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# do adaptive threshold on gray image
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 21, 15)
# make background of input white where thresh is white
result = img.copy()
result[thresh==255] = (255,255,255)
# write results to disk
cv2.imwrite("math_diagram_threshold.jpg", thresh)
cv2.imwrite("math_diagram_processed.jpg", result)
# display it
cv2.imshow("THRESHOLD", thresh)
cv2.imshow("RESULT", result)
cv2.waitKey(0)
Threshold image:
Result:
You can use any local binarization method. In OpenCV there is one such method, Wolf-Jolion local binarization, which can be applied to the input image. Below is a code snippet as an example:
import cv2

image = cv2.imread('input.jpg')
# use the V (brightness) channel of HSV as the grayscale input
gray = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)[:,:,2]
# Wolf-Jolion local binarization from the ximgproc contrib module
T = cv2.ximgproc.niBlackThreshold(gray, maxValue=255, type=cv2.THRESH_BINARY_INV,
                                  blockSize=81, k=0.1,
                                  binarizationMethod=cv2.ximgproc.BINARIZATION_WOLF)
grayb = (gray > T).astype("uint8") * 255
cv2.imshow("Binary", grayb)
cv2.waitKey(0)
The output from the above code is shown below. Please note that to use the ximgproc module you need to install the OpenCV contrib package (opencv-contrib-python).
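Since any local binarization method can work here, the binarizationMethod flag can also be swapped for one of the other variants in ximgproc (Niblack, Sauvola, NICK). Here is a sketch of a Sauvola variant of the same snippet; k=0.2 is only a starting guess and will likely need tuning:
import cv2

image = cv2.imread('input.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)[:,:,2]   # V channel, as above
T = cv2.ximgproc.niBlackThreshold(gray, maxValue=255, type=cv2.THRESH_BINARY_INV,
                                  blockSize=81, k=0.2,
                                  binarizationMethod=cv2.ximgproc.BINARIZATION_SAUVOLA)
grayb = (gray > T).astype("uint8") * 255
cv2.imshow("Binary (Sauvola)", grayb)
cv2.waitKey(0)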

Python OCR issues with Pytesseract

I am trying to read some characters that appear on the screen, but none of my attempts has been successful. Example image here
And here is my code:
import pytesseract as tess
tess.pytesseract.tesseract_cmd = r'C:\Users\myuser\AppData\Local\Tesseract-OCR\tesseract.exe'
from PIL import Image
img = Image.open(r'E:\images\numbers.PNG')
text = tess.image_to_string(img)
print(text)
The "garbage" output that displays is:
C NCES IC DICIIED)
CK STOO TEED
#©O®D#O#O#O#O®
I suppose this is happening because of the color of the numbers and the different background images they can appear on.
Unfortunately I do not know how to proceed further or how to get it working.
Can you please help? Your assistance is much appreciated!
Thanks!
I don't have Tesseract installed right now, but try with the result of this code:
import cv2

img = cv2.imread('img.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# mean adaptive threshold with a small 3x3 neighbourhood, inverted
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 3, 6)
cv2.imshow('threshold', thresh)
cv2.waitKey(0)
You can fine-tune the parameters to achieve your result.
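As one example of that fine-tuning, the thresholded image can be passed straight to pytesseract, optionally restricting the output to digits; the --psm value and the character whitelist below are assumptions to experiment with, not a guaranteed fix for this screenshot:
import cv2
import pytesseract

img = cv2.imread('img.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 3, 6)
# '--psm 7' treats the image as a single text line; the whitelist keeps digits only
config = '--psm 7 -c tessedit_char_whitelist=0123456789'
print(pytesseract.image_to_string(thresh, config=config))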
