How to extract text from an image using pytesseract? - python

I'm using pytesseract to try to extract numbers from an image.
I'm trying to extract the three numbers from this picture.
A straightforward method using pytesseract is:
from PIL import Image
from pytesseract import pytesseract
text = pytesseract.image_to_string(Image.open("uploaded_image.png"))
print(text)
But this prints blank.
Why can't it extract the numbers the way it can for normal text?

Your images need some preprocessing in order to be processed effectively by pytesseract.
The following shows this process using cv2.adaptiveThreshold(), cv2.findContours(), and cv2.drawContours() operations before converting the image to black and white and inverting it:
import numpy as np
import cv2
from PIL import Image
import pytesseract
img = cv2.imread('uploaded_image.png', cv2.IMREAD_COLOR)
img = cv2.blur(img, (5, 5))
#HSV (hue, saturation, value)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
#Applying threshold on pixels' Value (or Brightness)
thresh = cv2.adaptiveThreshold(v, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 2)
#Finding contours
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
#Filling contours with white
img = cv2.drawContours(img, contours, -1, (255, 255, 255), -1)
#To black and white
grayImage = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#And inverting it:
#pixels brighter than 200 are zeroed first, then everything darker
#than 100 (including those just zeroed) becomes white, leaving only
#the mid-grey digits dark
grayImage[grayImage > 200] = 0
grayImage[grayImage < 100] = 255
#Write the temp file
cv2.imwrite('temp.png', grayImage)
#Read it with tesseract (digits only, assume a single uniform block of text)
text = pytesseract.image_to_string(Image.open('temp.png'), config='--psm 6 -c tessedit_char_whitelist=0123456789')
#Output
print("#### Raw text ####")
print(text)
print()
print("#### Extracted digits ####")
print([''.join([y for y in x if y.isdigit()]) for x in text.split('\n')])
Output
#### Raw text ####
93
31
92
#### Extracted digits ####
['93', '31', '92']
Processed image:
EDIT
Updated the answer using the cv2 library and getting all the digits from the image.

Related

How to find only the bold text lines from a number of images

from PIL import Image
import pytesseract
from pdf2image import convert_from_path
import os
import pandas as pd
import cv2
import numpy as np
os.chdir("C:/Users/abhishek_kumar1/Desktop/New folder")
#print(os.getcwd())
pages = convert_from_path("d.pdf", 190, single_file=True,
                          poppler_path='C:/Users/abhishek_kumar1/Downloads/poppler-0.68.0_x86/poppler-0.68.0/bin')
image_counter = 1
for page in pages:
    filename = "page_" + str(image_counter) + ".jpg"
    page.save(filename, 'JPEG')
    img = cv2.imread(filename)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cv2.imwrite('grey.png', gray)
    # cv2.threshold returns (threshold value, image)
    ret, thresh1 = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)
    cv2.imwrite('Thresh1.png', thresh1)
    rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    dilation = cv2.dilate(thresh1, rect_kernel, iterations=6)
    contours, hierarchy = cv2.findContours(dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    im2 = img.copy()
    ROI_number = 0
    for cnt in contours[::-1]:
        [x, y, w, h] = cv2.boundingRect(cnt)
        ROI = im2[y:y+h, x:x+w]
        #print(str(w),str(h))
        #cv2.putText(im2, str(h), (x,y - 10 ), cv2.FONT_HERSHEY_SIMPLEX, 0.1, (255, 0, 0), 1)
        #cv2.putText(im2, str(w), (x,y + 10 ), cv2.FONT_HERSHEY_SIMPLEX, 0.1, (0, 0, 255), 1)
        cv2.imwrite('ROI_{}.jpg'.format(ROI_number), ROI)
        cv2.rectangle(im2, (x, y), (x+w, y+h), (36, 255, 12), 1)
        ROI_number += 1
    cv2.imwrite('contours1.png', im2)
    image_counter += 1
From the above code, how can I find only the bold lines? Are there any options to detect font attributes from an image, like bold, italic, or something else? I'm having trouble finding only the bold line parts from all of the images.
If anybody has a suggestion regarding this, please help me out.
Alex Alex's answer did not work for me. Here is my alternative described in words.
The general idea is that we compare how many black pixels there are with the minimum number of pixels that could still form the characters. This gives a different ratio from skeleton to normal text than from skeleton to bold text, so we can quite clearly separate normal text from bold text.
1. Use OCR software to extract bounding boxes of individual words. Optional: combine individual words into lines of words, for example by word_num in Pytesseract.
2. Convert the image to grayscale and invert the image colors.
3. Perform Zhang-Suen thinning on the selected area of text on the image (OpenCV contribution: cv2.ximgproc.thinning).
4. Sum where there are white pixels in the thinned image, i.e. where values are equal to 255 (white pixels are letters).
5. Sum where there are white pixels in the inverted image.
6. Finally compute the thickness as (sum_inverted_pixels - sum_skeleton_pixels) / sum_skeleton_pixels (sometimes there will be a zero division error; check when the sum of the skeleton is 0 and return 0 instead).
7. Normalize the thickness by its minimum and maximum values.
8. Apply a threshold for deciding when a word/line of text is bold, e.g. 0.6 or 0.7.
See python code and result:
import cv2
import numpy as np
img = cv2.imread('C.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 160, 255, cv2.THRESH_BINARY)[1]
kernel = np.ones((5,5),np.uint8)
kernel2 = np.ones((3,3),np.uint8)
marker = cv2.dilate(thresh,kernel,iterations = 1)
mask=cv2.erode(thresh,kernel,iterations = 1)
while True:
    tmp = marker.copy()
    marker = cv2.erode(marker, kernel2)
    marker = cv2.max(mask, marker)
    difference = cv2.subtract(tmp, marker)
    if cv2.countNonZero(difference) == 0:  # reconstruction has converged
        break
marker_color = cv2.cvtColor(marker, cv2.COLOR_GRAY2BGR)
out = cv2.bitwise_or(img, marker_color)
cv2.imwrite('out.png', out)
cv2.imshow('result', out)
cv2.waitKey(0)
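For reference, here is a minimal sketch of the thinning-based thickness measure described in the numbered steps above. It assumes opencv-contrib-python is installed (for cv2.ximgproc.thinning); the word-image filename and the 0.6 threshold are illustrative, not from the original post:
import cv2
import numpy as np

def stroke_thickness(word_gray):
    # White text on black background: inverse Otsu threshold
    inv = cv2.threshold(word_gray, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
    # Zhang-Suen thinning (requires opencv-contrib-python)
    skeleton = cv2.ximgproc.thinning(inv)
    sum_skeleton = np.count_nonzero(skeleton == 255)
    if sum_skeleton == 0:  # guard against division by zero
        return 0.0
    sum_inverted = np.count_nonzero(inv == 255)
    return (sum_inverted - sum_skeleton) / sum_skeleton

# Example: crop word boxes (e.g. from pytesseract.image_to_data), compute
# the thickness per word, normalize, and flag values above e.g. 0.6 as bold.
word = cv2.imread('word.png', cv2.IMREAD_GRAYSCALE)  # illustrative filename
print(stroke_thickness(word))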

Find contours based on edges and output equipment label

I am trying to code a tool that automatically identifies and alphabetically sorts the images based on equipment number (19-V1083AI). I used the pytesseract library to convert the image to a string after the contours of the equipment label were identified. Although the code runs correctly, it never outputs the equipment number. It's my first time using the pytesseract library and the goodFeaturesToTrack function. Any help would be greatly appreciated!
Original Image:
import numpy as np
import cv2
import imutils #resizeimage
import pytesseract # convert img to string
from matplotlib import pyplot as plt
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Read the image file
image = cv2.imread('Car Images/s3.JPG')
# Resize the image - change width to 500
image = imutils.resize(image, width=500)
# Display the original image
cv2.imshow("Original Image", image)
cv2.waitKey(0)
# RGB to Gray scale conversion
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow("1 - Grayscale Conversion", gray)
cv2.waitKey(0)
# Noise removal with iterative bilateral filter (removes noise while preserving edges)
gray = cv2.bilateralFilter(gray, 11, 17, 17)
cv2.imshow("2 - Bilateral Filter", gray)
cv2.waitKey(0)
corners = cv2.goodFeaturesToTrack(gray, 60, 0.001, 10)
corners = corners.astype(int)  # np.int0 is deprecated in recent NumPy
for i in corners:
    x, y = i.ravel()
    cv2.circle(image, (x, y), 0, 255, -1)
coord = np.where(np.all(image == (255, 0, 0), axis=-1))
plt.imshow(image)
# Use tesseract to convert the image into a string
text = pytesseract.image_to_string(image, lang='eng')
print("Equipment Number is:", text)
plt.show()
Output:
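One possible approach, sketched only under assumptions and not the asker's method: if the label is the brightest large region in the photo, isolate it with a threshold and contours, crop it, and run Tesseract on the crop alone (the path and all parameter values are illustrative):
import cv2
import pytesseract

image = cv2.imread('Car Images/s3.JPG')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Assume the label is a bright region: keep only near-white pixels
thresh = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)[1]
# OpenCV 4.x: findContours returns (contours, hierarchy)
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
# Take the largest bright region as the label and crop it
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
roi = gray[y:y+h, x:x+w]
# Treat the crop as a single text line
text = pytesseract.image_to_string(roi, config='--psm 7')
print("Equipment Number is:", text.strip())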

Why does this automatic image cropping not work on handwritten numbers?

I am trying to crop this image:
The following code crops all sides, but I want to remove even the unwanted handwritten number, which this code does not do. Any noise removal code that I would need to include here would be of great help.
import numpy as np
import cv2
img = cv2.imread('test.JPG')
img = img[:-20,:-20]
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = 255*(gray < 128).astype(np.uint8)
gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, np.ones((2, 2), dtype=np.uint8))
coords = cv2.findNonZero(gray)
x, y, w, h = cv2.boundingRect(coords)
rect = img[y:y+h, x:x+w]
cv2.imwrite("test_crop", rect)
Inverse binary threshold your image with a low threshold, e.g. 24, so you'll get rid of some of the lighter gray pixels.
In the thresholded image, count white pixels per column and row. The table cell borders will have peaks:
Find the first and last peak for both to get the first and last horizontal and vertical table border; crop that part.
Here's the full code:
import cv2
import numpy as np
from skimage import io # Only needed for web reading images
# Read image from web
image = cv2.cvtColor(io.imread('https://i.stack.imgur.com/kA16I.jpg'), cv2.COLOR_RGB2BGR)
# Convert to grayscale; inverse binary threshold; map to [0, 1]
image_thr = cv2.threshold(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), 24, 255, cv2.THRESH_BINARY_INV)[1] / 255
# Count white pixels per column and row
col_sum = np.sum(image_thr, axis=0)
row_sum = np.sum(image_thr, axis=1)
# Get first and last vertical border of table
col_thr = 50
cols = np.where(col_sum > col_thr)[0][[0, -1]]
# Get first and last horizontal border of table
row_thr = 200
rows = np.where(row_sum > row_thr)[0][[0, -1]]
# Crop table
output = image[rows[0]:rows[1]+1, cols[0]:cols[1]+1]
# Outputs
cv2.imshow('image', image)
cv2.imshow('image_thr', image_thr)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
Hope that helps!
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.1
NumPy: 1.18.1
OpenCV: 4.2.0
----------------------------------------

Image processing and for loops [duplicate]

How would I take an RGB image in Python and convert it to black and white? Not grayscale, I want each pixel to be either fully black (0, 0, 0) or fully white (255, 255, 255).
Is there any built-in functionality for getting it done in the popular Python image processing libraries? If not, would the best way be just to loop through each pixel, if it's closer to white set it to white, if it's closer to black set it to black?
Scaling to Black and White
Convert to grayscale and then scale to white or black (whichever is closest).
Original:
Result:
Pure Pillow implementation
Install pillow if you haven't already:
$ pip install pillow
Pillow (or PIL) can help you work with images effectively.
from PIL import Image
col = Image.open("cat-tied-icon.png")
gray = col.convert('L')
bw = gray.point(lambda x: 0 if x<128 else 255, '1')
bw.save("result_bw.png")
Alternatively, you can use Pillow with numpy.
Pillow + Numpy Bitmasks Approach
You'll need to install numpy:
$ pip install numpy
Numpy needs a copy of the array to operate on, but the result is the same.
from PIL import Image
import numpy as np
col = Image.open("cat-tied-icon.png")
gray = col.convert('L')
# Let numpy do the heavy lifting for converting pixels to pure black or white
bw = np.asarray(gray).copy()
# Pixel range is 0...255, 256/2 = 128
bw[bw < 128] = 0 # Black
bw[bw >= 128] = 255 # White
# Now we put it back in Pillow/PIL land
imfile = Image.fromarray(bw)
imfile.save("result_bw.png")
Black and White using Pillow, with dithering
Using pillow you can convert it directly to black and white. It will look like it has shades of grey but your brain is tricking you! (Black and white near each other look like grey)
from PIL import Image
image_file = Image.open("cat-tied-icon.png") # open colour image
image_file = image_file.convert('1') # convert image to black and white
image_file.save('/tmp/result.png')
Original:
Converted:
Black and White using Pillow, without dithering
from PIL import Image
image_file = Image.open("cat-tied-icon.png") # open color image
image_file = image_file.convert('1', dither=Image.NONE) # convert image to black and white
image_file.save('/tmp/result.png')
I would suggest converting to grayscale, then simply applying a threshold (halfway, or the mean or median, if you so choose) to it.
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
col = Image.open('myimage.jpg')
gry = col.convert('L')
grarray = np.asarray(gry)
bw = (grarray > grarray.mean()) * 255
plt.imshow(bw, cmap='gray')
plt.show()
With OpenCV you can threshold a grayscale image directly; Otsu's method picks the threshold value automatically:
import cv2
img_rgb = cv2.imread('image.jpg')
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
(threshi, img_bw) = cv2.threshold(img_gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
You can also use colorsys (in the standard library) to convert RGB to HLS and use the lightness value to determine black/white:
import colorsys
# convert rgb values from 0-255 to %
r = 120 / 255.0
g = 29 / 255.0
b = 200 / 255.0
h, l, s = colorsys.rgb_to_hls(r, g, b)
if l >= .5:
    # color is lighter
    result_rgb = (255, 255, 255)
else:
    # color is darker
    result_rgb = (0, 0, 0)
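The same lightness rule can be applied to a whole image with numpy instead of looping per pixel; here is a sketch (filenames illustrative), using the HLS definition lightness = (max(R,G,B) + min(R,G,B)) / 2:
from PIL import Image
import numpy as np

img = np.asarray(Image.open('myimage.jpg').convert('RGB')) / 255.0
# HLS lightness is (max + min) / 2, computed here per pixel
lightness = (img.max(axis=2) + img.min(axis=2)) / 2
bw = np.where(lightness >= 0.5, 255, 0).astype(np.uint8)
Image.fromarray(bw).save('result_bw.png')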
Using OpenCV you can easily convert an RGB image to a binary image:
import cv2
%matplotlib inline
import matplotlib.pyplot as plt
from skimage import io
from PIL import Image
import numpy as np
img = io.imread('http://www.bogotobogo.com/Matlab/images/MATLAB_DEMO_IMAGES/football.jpg')
# io.imread returns an RGB image; take one channel as a gray approximation
imR = img[:, :, 0]  # the red channel
print(img.shape)
#Gray image
plt.imshow(imR, cmap=plt.get_cmap('gray'))
plt.title('my picture')
plt.show()
#Histogram analysis
imgg = imR.copy()  # copy so we can modify the pixels in place
hist = cv2.calcHist([imgg], [0], None, [256], [0, 256])
plt.hist(imgg.ravel(), 256, [0, 256])
# show the histogram of the image
plt.show()
#Black and white
height, width = imgg.shape
for i in range(0, height):
    for j in range(0, width):
        if imgg[i][j] > 60:
            imgg[i][j] = 255
        else:
            imgg[i][j] = 0
plt.imshow(imgg, cmap=plt.get_cmap('gray'))
plt.show()
Here is the code for creating a binary image using opencv-python:
import cv2
img = cv2.imread('in.jpg', cv2.IMREAD_GRAYSCALE)
ret, bw_img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
cv2.imshow("Output - Binary Image", bw_img)
cv2.waitKey(0)
If you don't want to use OpenCV methods for the segmentation, and you want to understand what you are doing, treat the RGB image as a matrix.
import numpy as np
import matplotlib.image as mpimg
image = mpimg.imread('image_example.png')  # your image
# Note: for PNGs, mpimg.imread returns floats in [0, 1]; scale or use
# fractional thresholds accordingly (the values below assume 0-255)
R, G, B = image[:,:,0], image[:,:,1], image[:,:,2]  # the 3 RGB channels
thresh = [100, 200, 50]  # example of triple threshold
# First, create an array of 0's as default value
binary_output = np.zeros_like(R)
# Then screen all pixels and change the array based on the RGB thresholds
binary_output[(R < thresh[0]) & (G > thresh[1]) & (B < thresh[2])] = 255
The result is an array of 0's and 255's based on a triple condition.
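To inspect or save the resulting mask, one option (a sketch, reusing binary_output from above; the output filename is illustrative):
from PIL import Image
import numpy as np
# binary_output holds 0s and 255s; cast to uint8 before saving
Image.fromarray(binary_output.astype(np.uint8)).save('mask.png')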

Use pytesseract OCR to recognize text from an image

I need to use Pytesseract to extract text from this picture:
and the code:
from PIL import Image, ImageEnhance, ImageFilter
import pytesseract
path = 'pic.gif'
img = Image.open(path)
img = img.convert('RGBA')
pix = img.load()
for y in range(img.size[1]):
    for x in range(img.size[0]):
        if pix[x, y][0] < 102 or pix[x, y][1] < 102 or pix[x, y][2] < 102:
            pix[x, y] = (0, 0, 0, 255)
        else:
            pix[x, y] = (255, 255, 255, 255)
img.convert('RGB').save('temp.jpg')  # JPEG has no alpha channel, so convert from RGBA first
text = pytesseract.image_to_string(Image.open('temp.jpg'))
# os.remove('temp.jpg')
print(text)
and the "temp.jpg" is
Not bad, but the result of print is ,2 WW
instead of the right text 2HHH. So how can I remove those black dots?
Here's a simple approach using OpenCV and Pytesseract OCR. To perform OCR on an image, it's important to preprocess the image. The idea is to obtain a processed image where the text to extract is in black with the background in white. To do this, we can convert to grayscale, apply a slight Gaussian blur, then Otsu's threshold to obtain a binary image. From here, we can apply morphological operations to remove noise. Finally, we invert the image. We perform text extraction using the --psm 6 configuration option to assume a single uniform block of text. Take a look here for more options.
Here's a visualization of the image processing pipeline:
Input image
Convert to grayscale -> Gaussian blur -> Otsu's threshold
Notice how there are tiny specks of noise; to remove them we can perform morphological operations
Finally we invert the image
Result from Pytesseract OCR
2HHH
Code
import cv2
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Grayscale, Gaussian blur, Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Morph open to remove noise and invert image
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
invert = 255 - opening
# Perform text extraction
data = pytesseract.image_to_string(invert, lang='eng', config='--psm 6')
print(data)
cv2.imshow('thresh', thresh)
cv2.imshow('opening', opening)
cv2.imshow('invert', invert)
cv2.waitKey()
Here is my solution:
import pytesseract
from PIL import Image, ImageEnhance, ImageFilter
im = Image.open("temp.jpg") # the second one
im = im.filter(ImageFilter.MedianFilter())
enhancer = ImageEnhance.Contrast(im)
im = enhancer.enhance(2)
im = im.convert('1')
im.save('temp2.jpg')
text = pytesseract.image_to_string(Image.open('temp2.jpg'))
print(text)
I have a different pytesseract approach for our community.
Here is my approach:
import pytesseract
from PIL import Image
text = pytesseract.image_to_string(Image.open("temp.jpg"), lang='eng',
                                   config='--psm 10 --oem 3 -c tessedit_char_whitelist=0123456789')
print(text)
To extract the text directly from the web, you can try the following implementation (making use of the first image):
import io
import requests
import pytesseract
from PIL import Image, ImageFilter, ImageEnhance
response = requests.get('https://i.stack.imgur.com/HWLay.gif')
img = Image.open(io.BytesIO(response.content))
img = img.convert('L')
img = img.filter(ImageFilter.MedianFilter())
enhancer = ImageEnhance.Contrast(img)
img = enhancer.enhance(2)
img = img.convert('1')
img.save('image.jpg')
imagetext = pytesseract.image_to_string(img)
print(imagetext)
Here is my small improvement: removing noise and arbitrary lines within a certain colour frequency range.
import pytesseract
from PIL import Image, ImageEnhance, ImageFilter
im = Image.open(img)  # img is the path of the image
im = im.convert("RGBA")
newimdata = []
datas = im.getdata()
for item in datas:
    if item[0] < 112 or item[1] < 112 or item[2] < 112:
        newimdata.append(item)
    else:
        newimdata.append((255, 255, 255, 255))  # RGBA pixels need the alpha value too
im.putdata(newimdata)
im = im.filter(ImageFilter.MedianFilter())
enhancer = ImageEnhance.Contrast(im)
im = enhancer.enhance(2)
im = im.convert('1')
im.save('temp2.jpg')
text = pytesseract.image_to_string(Image.open('temp2.jpg'), config='--psm 6 -c tessedit_char_whitelist=0123456789abcdefghijklmnopqrstuvwxyz', lang='eng')
print(text)
You only need to enlarge the picture using cv2.resize:
image = cv2.resize(image, (0, 0), fx=7, fy=7)
My picture was 200x40 -> HZUBS.
The same picture resized to 1400x300 -> A 1234 (so, this is right).
And then,
retval, image = cv2.threshold(image, 200, 255, cv2.THRESH_BINARY)
image = cv2.GaussianBlur(image, (11, 11), 0)
image = cv2.medianBlur(image, 9)
and tune the parameters to improve the results.
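Putting these steps together, a sketch (the filename and all parameter values are illustrative and should be tuned per image):
import cv2
import pytesseract

image = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # illustrative filename
image = cv2.resize(image, (0, 0), fx=7, fy=7)  # enlarge 7x
retval, image = cv2.threshold(image, 200, 255, cv2.THRESH_BINARY)
image = cv2.GaussianBlur(image, (11, 11), 0)
image = cv2.medianBlur(image, 9)
print(pytesseract.image_to_string(image))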
Page segmentation modes:
0 Orientation and script detection (OSD) only.
1 Automatic page segmentation with OSD.
2 Automatic page segmentation, but no OSD, or OCR.
3 Fully automatic page segmentation, but no OSD. (Default)
4 Assume a single column of text of variable sizes.
5 Assume a single uniform block of vertically aligned text.
6 Assume a single uniform block of text.
7 Treat the image as a single text line.
8 Treat the image as a single word.
9 Treat the image as a single word in a circle.
10 Treat the image as a single character.
11 Sparse text. Find as much text as possible in no particular order.
12 Sparse text with OSD.
13 Raw line. Treat the image as a single text line, bypassing hacks that are Tesseract-specific.
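For example, to pass one of these modes (here mode 7, a single text line; the filename is illustrative):
import pytesseract
from PIL import Image

# --psm 7 treats the image as a single text line
text = pytesseract.image_to_string(Image.open('line.png'), config='--psm 7')
print(text)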
from PIL import Image, ImageEnhance, ImageFilter
import pytesseract
path = 'hhh.gif'
img = Image.open(path)
img = img.convert('RGBA')
pix = img.load()
for y in range(img.size[1]):
    for x in range(img.size[0]):
        if pix[x, y][0] < 102 or pix[x, y][1] < 102 or pix[x, y][2] < 102:
            pix[x, y] = (0, 0, 0, 255)
        else:
            pix[x, y] = (255, 255, 255, 255)
# OCR the preprocessed image rather than re-opening the original file
text = pytesseract.image_to_string(img)
print(text)
