I am trying to implement one of the stages of an OCR system: the character segmentation stage. The code is shown below and is quite simple:
the image is read
the image is converted to grayscale
the image is binarized
a dilation operation is applied
contours are extracted
Each extracted contour is assumed to be a single character.
The results of the algorithm are not satisfactory. Sometimes the characters are segmented well; sometimes only parts of characters are extracted, and sometimes several characters are merged into one. Please help with the code, I really want it to segment the characters correctly.
UPDATE 1. I am trying to implement character segmentation for different fonts. It turns out that there are no universal erosion and dilation parameters that work across fonts.
Test image:
Result of character selection 1 (Small parts of characters):
Result of character selection 2 (Big parts of characters):
Full result (All parts of characters):
import cv2
import numpy as np

def letters_extract(image_file):
    img = cv2.imread(image_file)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Adaptive threshold over a local 11x11 neighborhood
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
    # Note: a 1x1 kernel makes this dilation a no-op
    img_dilate = cv2.dilate(thresh, np.ones((1, 1), np.uint8), iterations=1)
    # img_erode = cv2.erode(img_dilate, np.ones((3, 3), np.uint8), iterations=1)
    # Get contours
    contours, hierarchy = cv2.findContours(img_dilate, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    letters = []
    for idx, contour in enumerate(contours):
        (x, y, w, h) = cv2.boundingRect(contour)
        # Keep only contours whose direct parent is contour 0 (the outermost region)
        if hierarchy[0][idx][3] == 0:
            letter_crop = gray[y:y + h, x:x + w]
            letters.append(letter_crop)
            cv2.imwrite(r'D:\projects\proj\test\tnr\{}.png'.format(idx), letter_crop)
    return letters

letters_extract(r'D:\projects\proj\test\test_tnr.png')
letters_extract(r'D:\projects\proj\test\test_tnr.png')
I ran your code (a bit modified for debugging) and it looks pretty good (I've only changed the dilation mask):
import cv2
import numpy as np
import matplotlib.pyplot as plt

def letters_extract(image_file):
    img = cv2.imread(image_file)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
    # Debug view of the thresholded image
    plt.figure(figsize=(20, 20))
    plt.imshow(thresh)
    plt.show()
    # Erosion with a 2x1 kernel instead of the original 1x1 dilation
    img_dilate = cv2.erode(thresh, np.ones((2, 1), np.uint8))
    plt.figure(figsize=(20, 20))
    plt.imshow(img_dilate)
    plt.show()
    contours, hierarchy = cv2.findContours(img_dilate, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    im_with_aabb = img.copy()
    for idx, contour in enumerate(contours):
        (x, y, w, h) = cv2.boundingRect(contour)
        # Keep only contours whose direct parent is contour 0
        if hierarchy[0][idx][3] == 0:
            color = (255, 0, 0)
            thickness = 1
            im_with_aabb = cv2.rectangle(im_with_aabb, (x, y), (x + w, y + h), color, thickness)
    return im_with_aabb

im_with_aabb = letters_extract('test.png')
plt.figure(figsize=(20, 20))
plt.imshow(im_with_aabb)
plt.show()
But there are still problems with several characters. If your input images look this good (no high variability between the same character in different places), I can suggest template matching with each character as a template.
If the data has high variability, you should probably use a pretrained engine like Tesseract.
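For reference, a minimal template-matching sketch (the file names are placeholders; you would need one cropped template image per character, and the 0.8 match threshold is a guess that depends on the font):

import cv2
import numpy as np

# Placeholder file names: the page image and one cropped character template
page = cv2.imread('test.png', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('char_A.png', cv2.IMREAD_GRAYSCALE)
th, tw = template.shape

# Normalized cross-correlation; values close to 1.0 mean a strong match
res = cv2.matchTemplate(page, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.8)  # match threshold is font/image dependent

vis = cv2.cvtColor(page, cv2.COLOR_GRAY2BGR)
for x, y in zip(xs, ys):
    cv2.rectangle(vis, (x, y), (x + tw, y + th), (0, 0, 255), 1)
cv2.imwrite('matches.png', vis)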
If your data is always as clean as the image you have shared, you do not need dilation or erosion. I set the threshold to 190 and inverted the gray image with the cv2.THRESH_BINARY_INV flag so that contours are found around the letters. Finally, I changed the contour search to find only external contours with the cv2.RETR_EXTERNAL flag.
import cv2
import numpy as np

def letters_extract(image_file):
    img = cv2.imread(image_file)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Fixed threshold; INV makes the letters white on a black background
    _, thresh = cv2.threshold(gray, 190, 255, cv2.THRESH_BINARY_INV)
    # External contours only, so each letter yields a single contour
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    letters = []
    for idx, contour in enumerate(contours):
        (x, y, w, h) = cv2.boundingRect(contour)
        letter_crop = gray[y:y + h, x:x + w]
        letters.append(letter_crop)
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255))
    cv2.namedWindow("win", cv2.WINDOW_FREERATIO)
    cv2.imshow("win", img)
    cv2.waitKey()
    return letters

letters_extract('text.png')
The final image is as follows:
The task I want to do looks pretty simple: I take as input several images, each with an object centered in the photo and a small color chart needed for other purposes. My code works for the majority of cases, but sometimes fails miserably and I just can't understand why.
For example (these are the source images), it works correctly on this https://imgur.com/PHfIqcb but not on this https://imgur.com/qghzO3V
Here's the code of the interested part:
import cv2  # imports added for completeness
import numpy as np

img = cv2.imread(path)
height, width, channel = img.shape
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Large dilation to flatten out texture before thresholding
kernel = np.ones((31, 31), np.uint8)
dil = cv2.dilate(gray, kernel, iterations=1)
# Otsu picks the threshold automatically
_, th = cv2.threshold(dil, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
th_er1 = cv2.bitwise_not(th)
# Note: the 3-value return is the OpenCV 3.x signature
_, contours, _ = cv2.findContours(th_er1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Keep the largest contour by area
areas = [cv2.contourArea(c) for c in contours]
max_index = np.argmax(areas)
cnt = contours[max_index]
x, y, w, h = cv2.boundingRect(cnt)
After that I crop the image according to the result (the biggest rectangular contour), basically cutting the photo down to just the main object.
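For reference, the crop itself is just NumPy slicing on that bounding box:

# Crop the detected object out of the original image
crop = img[y:y+h, x:x+w]
cv2.imwrite('cropped.jpg', crop)  # 'cropped.jpg' is a placeholder output path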
But as I said, with very similar images it sometimes works and sometimes doesn't.
Thank you in advance.
Maybe you could try not using Otsu's method and just set the threshold manually, if that's possible... ;)
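For instance, the Otsu call could become a fixed threshold (127 is only a placeholder value to tune per image set):

# Fixed threshold instead of Otsu; 127 is a placeholder to tune
_, th = cv2.threshold(dil, 127, 255, cv2.THRESH_BINARY)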
You can use the Canny edge detector. In both images there is a good threshold value to isolate the object in the center of the image. After applying the threshold, we blur the result and apply the Canny edge detector before finding the contours:
import cv2
import numpy as np

def process(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(img_gray, 190, 255, cv2.THRESH_BINARY_INV)
    img_blur = cv2.GaussianBlur(thresh, (3, 3), 1)
    img_canny = cv2.Canny(img_blur, 0, 0)
    # Close small gaps in the edges: dilate, then erode with the same kernel
    kernel = np.ones((5, 5))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=1)
    return cv2.erode(img_dilate, kernel, iterations=1)

def get_contours(img):
    contours, hierarchies = cv2.findContours(process(img), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    # The largest contour is assumed to be the main object
    cnt = max(contours, key=cv2.contourArea)
    cv2.drawContours(img, [cnt], -1, (0, 255, 0), 30)
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 30)

img = cv2.imread("image.jpeg")
get_contours(img)
cv2.imshow("Result", img)
cv2.waitKey(0)
Input images:
Output images:
The green outlines are the contours of the objects, and the red outlines are the bounding boxes of the objects.
I am trying to extract handwritten numbers and letters from an image; for that I followed this Stack Overflow link. It works fine for most images where the characters are written with a marker, but when I use an image where the text is written with a pen it fails miserably. I need some help to fix this.
Below is my code:
import cv2
import imutils
from imutils import contours

# Load image, grayscale, Otsu's threshold
image = cv2.imread('xxx/pic_crop_7.png')
image = imutils.resize(image, width=350)
img = image.copy()

# Remove border
kernel_vertical = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 50))
temp1 = 255 - cv2.morphologyEx(image, cv2.MORPH_CLOSE, kernel_vertical)
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (50, 1))
temp2 = 255 - cv2.morphologyEx(image, cv2.MORPH_CLOSE, horizontal_kernel)
temp3 = cv2.add(temp1, temp2)
result = cv2.add(temp3, image)

# Convert to grayscale and Otsu's threshold
gray = cv2.cvtColor(result, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)
# thresh = cv2.dilate(thresh, None, iterations=1)

# Find contours and filter using contour area
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]  # handle OpenCV 3.x/4.x return values
MIN_AREA = 45
digit_contours = []
for c in cnts:
    if cv2.contourArea(c) > MIN_AREA:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(img, (x, y), (x + w, y + h), (36, 255, 12), 2)
        digit_contours.append(c)
        # cv2.imwrite("C:/Samples/Dataset/ocr/segmented" + str(i) + ".png", image[y:y+h,x:x+w])

# Sort contours left-to-right and save each ROI
sorted_digit_contours = contours.sort_contours(digit_contours, method='left-to-right')[0]
contour_number = 0
for c in sorted_digit_contours:
    x, y, w, h = cv2.boundingRect(c)
    ROI = image[y:y+h, x:x+w]
    cv2.imwrite('xxx/segment_{}.png'.format(contour_number), ROI)
    contour_number += 1

cv2.imshow('thresh', thresh)
cv2.imshow('img', img)
cv2.waitKey()
It correctly extracts the numbers when they are written with a marker.
Below is an example:
Original Image
Correctly extracted characters
Image where it fails to read:
Original Image
Incorrectly extracted characters
In this case, you only need to adjust your parameters.
Since there are no vertical lines in the background of your handwritten characters, I decided to drop the vertical kernel:
# Remove border
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (50,1))
temp2 = 255 - cv2.morphologyEx(image, cv2.MORPH_CLOSE, horizontal_kernel)
result = cv2.add(temp2, image)
And it works.
The solution that CodingPeter has given is perfectly fine, except that it may not generalize to both of the test images you have posted. So here's my take on it, which might work on both of your test images, albeit with slightly less accuracy.
import numpy as np
import cv2
import matplotlib.pyplot as plt

plt.rcParams['figure.figsize'] = (20, 20)
plt.rcParams["image.cmap"] = 'gray'

img_rgb = cv2.imread('path/to/your/image.jpg')
img = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 2)

# Extract long horizontal strokes (ruled lines)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 1))
horiz = cv2.morphologyEx(th, cv2.MORPH_OPEN, kernel, iterations=3)
ctrs, _ = cv2.findContours(horiz, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for ctr in ctrs:
    x, y, w, h = cv2.boundingRect(ctr)
    if w < 20:  # too short to be a line; erase it from the line mask
        cv2.drawContours(horiz, [ctr], 0, 0, cv2.FILLED)

# Extract long vertical strokes
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 10))
vert = cv2.morphologyEx(th, cv2.MORPH_OPEN, kernel, iterations=3)
ctrs, _ = cv2.findContours(vert, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for ctr in ctrs:
    x, y, w, h = cv2.boundingRect(ctr)
    if h < 25:
        cv2.drawContours(vert, [ctr], 0, 0, cv2.FILLED)

# Subtract the detected lines, leaving only the characters
th = th - (horiz | vert)

ctrs, _ = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
min_ctr_area = 400  # Min character bounding box area
for ctr in ctrs:
    x, y, w, h = cv2.boundingRect(ctr)
    # Filter contours based on size
    if w * h > min_ctr_area and \
       w < 100 and h < 100:
        cv2.rectangle(img_rgb, (x, y), (x+w, y+h), (0, 255, 0), 1)
plt.imshow(img_rgb)
Of course, some of the parameters here are hard-coded filters that compare contour height and width to decide whether a contour is part of a line or part of a character. With different images you may have to tune these values.
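If the hard-coding becomes a problem, one option is to derive the filters from the image size instead; a rough sketch, assuming characters span roughly 1-10% of the image dimensions (those percentages are guesses to tune):

# Derive the size filters from the image dimensions instead of hard-coding them
img_h, img_w = img.shape[:2]
min_ctr_area = (0.01 * img_w) * (0.01 * img_h)  # smallest plausible character box
max_side = 0.10 * max(img_w, img_h)             # largest plausible character side
for ctr in ctrs:
    x, y, w, h = cv2.boundingRect(ctr)
    if w * h > min_ctr_area and w < max_side and h < max_side:
        cv2.rectangle(img_rgb, (x, y), (x + w, y + h), (0, 255, 0), 1)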
I'm trying to create bounding boxes for the text in an image I have; an example is the one below.
I would like to add a bounding box around each "This is a test" line. Unfortunately, I'm not sure why this method is not automatically identifying the bounding boxes.
import cv2
import pytesseract
from pytesseract import Output
from matplotlib import pyplot as plt

# Plot word boxes on image using the pytesseract.image_to_data() function
image = cv2.imread('Image.jpg')
b, g, r = cv2.split(image)
image = cv2.merge([r, g, b])  # BGR -> RGB for plotting
d = pytesseract.image_to_data(image, output_type=Output.DICT)
print('DATA KEYS: \n', d.keys())

n_boxes = len(d['text'])
for i in range(n_boxes):
    # condition to only pick boxes with a confidence > 60%
    if int(d['conf'][i]) > 60:
        (x, y, w, h) = (d['left'][i], d['top'][i], d['width'][i], d['height'][i])
        image = cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

b, g, r = cv2.split(image)
rgb_img = cv2.merge([r, g, b])
plt.figure(figsize=(16, 12))
plt.imshow(rgb_img)
plt.title('SAMPLE IMAGE WITH WORD LEVEL BOXES')
plt.show()
Here is a different way to do that with Python/OpenCV.
Read the input
Convert to gray
Threshold with Otsu (white text on black background)
Apply morphological dilation with a horizontal kernel longer than the letter spacing, then a morphological open with a smaller vertical kernel to remove the thin horizontal lines remaining from the ruled line on the page
Find contours
Draw bounding boxes of contours on input
Save result
Input:
import cv2
import numpy as np

# load image
img = cv2.imread("test_text.jpg")

# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# threshold the grayscale image
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# use morphology dilate to blur horizontally, merging each line of text into one blob
#kernel = np.ones((500,3), np.uint8)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (250, 3))
morph = cv2.morphologyEx(thresh, cv2.MORPH_DILATE, kernel)
# open with a small vertical kernel to remove thin horizontal line residue
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 17))
morph = cv2.morphologyEx(morph, cv2.MORPH_OPEN, kernel)

# find contours
cntrs = cv2.findContours(morph, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cntrs = cntrs[0] if len(cntrs) == 2 else cntrs[1]

# Draw bounding boxes of contours
result = img.copy()
for c in cntrs:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(result, (x, y), (x+w, y+h), (0, 0, 255), 2)

# write results to disk
cv2.imwrite("test_text_threshold.png", thresh)
cv2.imwrite("test_text_morph.png", morph)
cv2.imwrite("test_text_lines.jpg", result)

cv2.imshow("GRAY", gray)
cv2.imshow("THRESH", thresh)
cv2.imshow("MORPH", morph)
cv2.imshow("RESULT", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Thresholded image:
Dilated image:
Result:
I would like to extract the groups of text that are in the image.
I have managed to remove the first contour (as described below), but the issue is that when I try to read the text, some of it is missing. I suspect this is because of other contours that remain on the image, but when I try to remove them, I lose the grouping or part of the text...
for i in range(len(contours)):
    if 800 < cv2.contourArea(contours[i]) < 2000:
        x, y, width, height = cv2.boundingRect(contours[i])
        roi = img[y:y + height, x:x + width]
        roi_h = roi.shape[0]
        roi_w = roi.shape[1]
        # Enlarge the ROI so Tesseract has more pixels to work with
        resize_roi = cv2.resize(roi, (int(roi_w*6), int(roi_h*6)), interpolation=cv2.INTER_LINEAR)
        afterd = cv2.cvtColor(resize_roi, cv2.COLOR_BGR2GRAY)
        retim, threshm = cv2.threshold(afterd, 210, 225, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contoursm, hierarchym = cv2.findContours(threshm, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        # Mask out the box outlines so they do not confuse the OCR
        mask = np.ones(resize_roi.shape[:2], dtype="uint8") * 255
        for m in range(len(contoursm)):
            if 10000 < cv2.contourArea(contoursm[m]) < 33000:
                cv2.drawContours(mask, contoursm, m, 0, 7)
        afterd = cv2.bitwise_not(afterd)
        afterd = cv2.bitwise_and(afterd, afterd, mask=mask)
        afterd = cv2.bitwise_not(afterd)
        print(pytesseract.image_to_string(afterd, lang='eng', config='--psm 3'))
Instead of dealing with all the boxes, I suggest deleting them by finding connected components and filling the large clusters with the background color.
You may use the following stages:
Convert image to Grayscale, apply threshold, and invert polarity.
Delete all clusters having more than 100 pixels (assume letters are smaller).
Dilate thresh for uniting text areas to single "blocks".
Find contours on the dilated thresh image.
Find bounding rectangles, and apply OCR to the rectangle.
Here is the complete code sample:
import numpy as np
import cv2
import pytesseract

pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'  # I am using Windows

img = cv2.imread('img.png')  # Read input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Convert to grayscale
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # Convert to binary and invert polarity

nlabel, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh, connectivity=8)

thresh_size = 100

# Delete all lines by filling large clusters with zeros.
for i in range(1, nlabel):
    if stats[i, cv2.CC_STAT_AREA] > thresh_size:
        thresh[labels == i] = 0

# Dilate thresh for uniting text areas into single blocks.
dilated_thresh = cv2.dilate(thresh, np.ones((5, 5)))

# Find contours on dilated thresh
contours, hierarchy = cv2.findContours(dilated_thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Iterate contours, find bounding rectangles
for c in contours:
    # Get bounding rectangle
    x, y, w, h = cv2.boundingRect(c)
    # Draw green rectangle for testing
    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), thickness=1)
    # Get the slice with the text (slice with margins).
    afterd = thresh[y-3:y+h+3, x-3:x+w+3]
    # Show afterd as image for testing
    # cv2.imshow('afterd', afterd)
    # cv2.waitKey(100)
    # The OCR works only when image is enlarged and black text?
    resized_afterd = cv2.resize(afterd, (afterd.shape[1]*5, afterd.shape[0]*5), interpolation=cv2.INTER_LANCZOS4)
    print(pytesseract.image_to_string(255 - resized_afterd, lang='eng', config='--psm 3'))

cv2.imshow('thresh', thresh)
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result strings after OCR:
DF6DF645
RFFTW
2345
2277
AABBA
DF1267
ABCET5456
Input image with green boxes around the text:
Update:
Grouping contours:
For grouping contours you may use the hierarchy result of cv2.findContours with cv2.RETR_TREE.
See Contours Hierarchy documentation.
You may use the parent-child relationship for grouping contours.
Here is an incomplete code sample for using the hierarchy:
img = cv2.imread('img.png')  # Read input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Convert to grayscale
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # Convert to binary and invert polarity

nlabel, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh, connectivity=8)

thresh_boxes = np.zeros_like(thresh)
thresh_size = 100

# Delete all lines by filling large clusters with zeros.
# Make a new image that contains only the boxes - without text
for i in range(1, nlabel):
    if stats[i, cv2.CC_STAT_AREA] > thresh_size:
        thresh[labels == i] = 0
        thresh_boxes[labels == i] = 255

# Find contours on thresh_boxes, use cv2.RETR_TREE to build a tree with hierarchy
contours, hierarchy = cv2.findContours(thresh_boxes, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

# Iterate contours and hierarchy
for i, c in enumerate(contours):
    h = hierarchy[0, i, :]
    h_child = h[2]
    # If the contour has no child, it is at the last level (innermost box)
    if h_child == -1:
        h_parent = h[3]
        x, y, w, h = cv2.boundingRect(c)
        # Draw the parent index at the center of each leaf box
        cv2.putText(img, str(h_parent), (x+w//2-4, y+h//2+8), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                    fontScale=1, color=(0, 0, 255), thickness=2)

cv2.imshow('thresh', thresh)
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
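To turn the printed parent indices into actual groups, one could collect the leaf bounding boxes keyed by their parent; a minimal, untested sketch building on the loop above:

from collections import defaultdict

# Group the innermost (leaf) boxes by their parent contour index
groups = defaultdict(list)
for i, c in enumerate(contours):
    if hierarchy[0, i, 2] == -1:      # no child: a leaf contour
        parent = hierarchy[0, i, 3]   # index of the enclosing contour
        groups[parent].append(cv2.boundingRect(c))

for parent, boxes in groups.items():
    print('group {}: {} boxes'.format(parent, len(boxes)))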
I'm working on parsing coupon codes from receipts, and unfortunately the letters are not solid lines; they are composed of small individual dots. I managed to do some image manipulation and find the dots, but this is where I'm stuck. Is there a way to connect or merge the dots that are close to each other? Is there a simple solution to this?
Here are the original image and the images after finding the dots.
Here is the code I came up with.
import cv2
import numpy as np

def load_local_image(image):
    c_img = cv2.imread(image, cv2.IMREAD_COLOR)
    g_img = cv2.imread(image, cv2.IMREAD_GRAYSCALE)
    return (cv2.resize(c_img, (800, 800)), cv2.resize(g_img, (800, 800)))

def find_letters(binary_image, rgb_image, settings):
    contours, hierarchy = cv2.findContours(binary_image.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    letters = []
    for contour in contours:
        if cv2.contourArea(contour) > settings['contour_area_threshold']:
            # four points of bounding box for each character
            x, y, w, h = cv2.boundingRect(contour)
            # draw the bounding rectangle from points above
            cv2.rectangle(rgb_image, (x, y), (x + w, y + h), settings['outline_color'], settings['outline_thickness'])
            # print('x:{}, y:{}, width:{}, height:{}'.format(x, y, w, h))
            letters.append((x, y, w, h))
    return sorted(letters, key=lambda x: x[0])

def alter_image(img):
    blur = cv2.GaussianBlur(img, (3, 3), 0)  # was blurring the global `g`; use the parameter instead
    ret, thresh1 = cv2.threshold(blur, 50, 255, cv2.THRESH_BINARY)
    bitwise = cv2.bitwise_not(thresh1)
    # Note: a 1x1 kernel makes this erosion a no-op
    erosion = cv2.erode(bitwise, np.ones((1, 1), np.uint8), iterations=1)
    dilation = cv2.dilate(erosion, np.ones((3, 3), np.uint8), iterations=1)
    return dilation

c, g = load_local_image('img.jpg')
altered_img = alter_image(g)

contour_settings = {
    'contour_area_threshold': 1,
    'outline_thickness': 1,
    'outline_color': (66, 116, 244)
}
letters_crop = find_letters(altered_img, c, contour_settings)

cv2.imshow('color', c)
cv2.imshow('gray', altered_img)
cv2.waitKey()
cv2.destroyAllWindows()
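For merging the dots themselves, the usual tool is a morphological close (dilation followed by erosion); a minimal sketch on top of the pipeline above, with the kernel size and iteration count as guesses to tune against the dot spacing:

# Merge nearby dots into solid strokes; the kernel must span the gap between dots
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(altered_img, cv2.MORPH_CLOSE, kernel, iterations=2)
cv2.imshow('closed', closed)
cv2.waitKey()
cv2.destroyAllWindows()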