Segmenting characters from a license plate with OpenCV - python

I cleaned up a license plate image so I can read the characters. Now I am stuck at the part where I must segment the characters.
For the plate-cleaning phase I go from this:
to this:
Now the idea is to segment the characters so that they can be read by a neural network I developed. For the segmentation I use the code below, but I still don't understand why it doesn't work:
import cv2
import matplotlib.pyplot as plt

# Create a sort_contours() function to grab the contour of each character from left to right
def sort_contours(cnts, reverse=False):
    i = 0
    boundingBoxes = [cv2.boundingRect(c) for c in cnts]
    (cnts, boundingBoxes) = zip(*sorted(zip(cnts, boundingBoxes),
                                        key=lambda b: b[1][i], reverse=reverse))
    return cnts

# "binary", "plate_image" and "thre_mor" come from the plate-cleaning step above
cont, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# create a copy "test_roi" of plate_image to draw bounding boxes on
test_roi = plate_image.copy()

# initialize a list which will be used to append character images
crop_characters = []

# define standard width and height of a character
digit_w, digit_h = 40, 80

for c in sort_contours(cont):
    (x, y, w, h) = cv2.boundingRect(c)
    ratio = h / w
    if 1 <= ratio <= 3.5:  # only select contours with the defined height/width ratio
        if h / plate_image.shape[0] >= 0.1:  # select contours at least 10% as tall as the plate
            # draw a bounding box around the character
            cv2.rectangle(test_roi, (x, y), (x + w, y + h), (0, 255, 0), 2)
            # separate the character and binarize it for prediction
            curr_num = thre_mor[y:y + h, x:x + w]
            curr_num = cv2.resize(curr_num, dsize=(digit_w, digit_h))
            _, curr_num = cv2.threshold(curr_num, 220, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            crop_characters.append(curr_num)

print("Detected {} characters...".format(len(crop_characters)))

fig = plt.figure(figsize=(10, 6))
plt.axis(False)
plt.imshow(test_roi)
How could I make the implementation correct? Any help is welcome!

If you want to label the characters with a neural network, the first clean-up step is not needed. If you look into region-proposal networks (R-CNN, Fast R-CNN, Faster R-CNN, YOLO, Mask R-CNN, etc.), they will do the segmentation and classification in one go.
If you do want to go for segmentation first, look into connected components. It will select all the positive pixels that are connected together.
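For illustration, a minimal sketch of that idea, assuming binary is the cleaned, binarized plate from the question (white characters on black) and reusing the same size/ratio filters; the exact thresholds are up to you:
import cv2

num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)

characters = []
for label in range(1, num_labels):                       # label 0 is the background
    x, y, w, h, area = stats[label]
    if 1 <= h / w <= 3.5 and h / binary.shape[0] >= 0.1:  # same kind of filters as the question's loop
        characters.append((x, binary[y:y + h, x:x + w]))

# sort the character crops left to right by their x coordinate
crop_characters = [roi for x, roi in sorted(characters, key=lambda t: t[0])]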


Set an ROI in OpenCV

I am very new to OpenCV and Python. I have followed a tutorial to use YOLO with yolov3-tiny. It can detect vehicles fine. But what I need to complete my project is to count the number of vehicles that pass a particular lane. If I count every time a vehicle is detected (the bounding box appears), the count becomes very inaccurate, since the bounding box keeps blinking (it keeps locating the same vehicle again, sometimes up to 5 times), so this is not a good way to count.
So I figured: how about counting a vehicle only when it reaches a certain point? I have seen a lot of code that seems to do this, but since I am a beginner, it is hard for me to understand, let alone run on my system. Their samples require installing so many things that I can't get through because of errors.
See my sample code below:
import cv2
import numpy as np

cap = cv2.VideoCapture('rtsp://username:password#xxx.xxx.xxx.xxx:xxx/cam/realmonitor?channel=1')
whT = 320
confThreshold = 0.75
nmsThreshold = 0.3
list_of_vehicles = ["bicycle", "car", "motorbike", "bus", "truck"]

classesFile = 'coco.names'
classNames = []
with open(classesFile, 'r') as f:
    classNames = f.read().rstrip('\n').split('\n')

modelConfiguration = 'yolov3-tiny.cfg'
modelWeights = 'yolov3-tiny.weights'
net = cv2.dnn.readNetFromDarknet(modelConfiguration, modelWeights)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

total_vehicle_count = 0

def getVehicleCount(boxes, class_name):
    global total_vehicle_count
    dict_vehicle_count = {}
    if class_name in list_of_vehicles:
        total_vehicle_count += 1
        # print(total_vehicle_count)
    return total_vehicle_count, dict_vehicle_count

def findObjects(outputs, img):
    hT, wT, cT = img.shape
    bbox = []
    classIds = []
    confs = []
    for output in outputs:
        for det in output:
            scores = det[5:]
            classId = np.argmax(scores)
            confidence = scores[classId]
            if confidence > confThreshold:
                w, h = int(det[2] * wT), int(det[3] * hT)
                x, y = int((det[0] * wT) - w / 2), int((det[1] * hT) - h / 2)
                bbox.append([x, y, w, h])
                classIds.append(classId)
                confs.append(float(confidence))
    indices = cv2.dnn.NMSBoxes(bbox, confs, confThreshold, nmsThreshold)
    for i in indices:
        i = i[0]
        box = bbox[i]
        getVehicleCount(bbox, classNames[classIds[i]])
        x, y, w, h = box[0], box[1], box[2], box[3]
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 255), 1)
        cv2.putText(img, f'{classNames[classIds[i]].upper()} {int(confs[i]*100)}%', (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 255), 2)

while True:
    success, img = cap.read()
    blob = cv2.dnn.blobFromImage(img, 1 / 255, (whT, whT), [0, 0, 0], 1, crop=False)
    net.setInput(blob)
    layerNames = net.getLayerNames()
    outputnames = [layerNames[i[0] - 1] for i in net.getUnconnectedOutLayers()]
    # print(outputnames)
    outputs = net.forward(outputnames)
    findObjects(outputs, img)
    cv2.imshow('Image', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
With this code, one vehicle is sometimes counted up to 50 times, depending on its size, which is highly inaccurate. How can I create an ROI so that a detected vehicle is counted only once, when it passes that point?
First, I would recommend that you consider using a visual tracker to track each detected rectangle. This is important even if you have an ROI to crop the image close to your counting zone/line. That is because even if the ROI is localized, the detection might still blink a couple of times causing a miscount. That is especially valid if another vehicle can enter the ROI while the first one is still passing it.
I recommend using the easy-to-use tracker provided by the widely used dlib library. Please refer to this example on how to use it.
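For reference, a minimal sketch (not taken from the linked example) of tracking one detected box with dlib's correlation tracker; img is a frame from the question's loop and (x, y, w, h) is assumed to be one YOLO detection:
import cv2
import dlib

rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # dlib trackers are usually fed RGB frames
tracker = dlib.correlation_tracker()
tracker.start_track(rgb, dlib.rectangle(x, y, x + w, y + h))

# on each following frame, update the tracker and read back the box center
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
tracker.update(rgb)
pos = tracker.get_position()
center = ((pos.left() + pos.right()) / 2, (pos.top() + pos.bottom()) / 2)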
Instead of counting detections within an ROI, you need to define an ROI line (within your ROI). Then, track the centers of the detected rectangles in each frame. Finally, increase your counter once a rectangle center passes the ROI line.
Regarding how to count a rectangle passing the ROI line (a short sketch follows the steps below):
Select two points to define your ROI line.
Use your points to find the parameters for the general line formula ax + by + c = 0
For each frame, plug the rectangle center coordinates in the formula and keep track of the sign of the result.
If the sign of the result changes that means that the rectangle center has passed the line.
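A minimal sketch of that sign-change test; the line endpoints and the per-vehicle id are placeholders, not values from the question:
# hypothetical ROI line endpoints -- pick them for your own camera view
x1, y1, x2, y2 = 100, 400, 1200, 400

# general line form ax + by + c = 0 derived from the two points
a = y2 - y1
b = x1 - x2
c = x2 * y1 - x1 * y2

previous_sign = {}     # last known side of the line, keyed by tracked-vehicle id
crossed_count = 0

def update_count(vehicle_id, cx, cy):
    """cx, cy is the tracked rectangle center in the current frame."""
    global crossed_count
    sign = a * cx + b * cy + c
    prev = previous_sign.get(vehicle_id)
    if prev is not None and prev * sign < 0:   # sign flipped: the center crossed the ROI line
        crossed_count += 1
    previous_sign[vehicle_id] = sign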

How to sort cropped table cells by rows and columns with OpenCV?

While building an OCR pipeline for automatic data extraction from a fairly messy scanned document, I got stuck at the image preprocessing stage where I need to crop all cells from a table before running Tesseract on them. One major complication is the necessity to somehow label rows and columns, since extracted data needs structure to hold any meaning at all. Additionally, ability to extract only specific rows and/or columns would make my pipeline much faster when working with several documents of similar composition, which is its expected use.
Here is the code I currently use, taken almost directly from an open source box detection algorithm by Kanan Vyas found earlier on GitHub:
import cv2
import numpy as np
import os
import glob

def sort_contours(cnts, method="left-to-right"):
    reverse = False
    i = 0
    if method == "right-to-left" or method == "bottom-to-top":
        reverse = True
    if method == "top-to-bottom" or method == "bottom-to-top":
        i = 1
    boundingBoxes = [cv2.boundingRect(c) for c in cnts]
    (cnts, boundingBoxes) = zip(*sorted(zip(cnts, boundingBoxes),
                                        key=lambda b: b[1][i], reverse=reverse))
    return (cnts, boundingBoxes)

def box_extraction(img_for_box_extraction_path, cropped_dir_path):
    img = cv2.imread(img_for_box_extraction_path, 0)
    img[0:img.shape[0], 0:5] = 255  # blank out the leftmost columns (grayscale image, so a single value)
    (thresh, img_bin) = cv2.threshold(img, 128, 255,
                                      cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    img_bin = 255 - img_bin
    cv2.imwrite("Image_bin.jpg", img_bin)

    # kernels for extracting vertical and horizontal lines
    kernel_length = np.array(img).shape[1] // 40
    vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, kernel_length))
    hori_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_length, 1))
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

    img_temp1 = cv2.erode(img_bin, vertical_kernel, iterations=3)
    vertical_lines_img = cv2.dilate(img_temp1, vertical_kernel, iterations=3)
    cv2.imwrite("vertical_lines.jpg", vertical_lines_img)

    img_temp2 = cv2.erode(img_bin, hori_kernel, iterations=3)
    horizontal_lines_img = cv2.dilate(img_temp2, hori_kernel, iterations=3)
    cv2.imwrite("horizontal_lines.jpg", horizontal_lines_img)

    # combine the line images and binarize again to get the cell grid
    alpha = 0.5
    beta = 1.0 - alpha
    img_final_bin = cv2.addWeighted(vertical_lines_img, alpha, horizontal_lines_img, beta, 0.0)
    img_final_bin = cv2.erode(~img_final_bin, kernel, iterations=2)
    (thresh, img_final_bin) = cv2.threshold(img_final_bin, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    cv2.imwrite("img_final_bin.png", img_final_bin)

    # OpenCV 3 returns (image, contours, hierarchy); in OpenCV 4 drop "im2"
    im2, contours, hierarchy = cv2.findContours(
        img_final_bin, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    (contours, boundingBoxes) = sort_contours(contours, method="top-to-bottom")

    idx = 0
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if (w > 30 and h > 20) and w > h and h < 150:
            idx += 1
            new_img = img[y:y + h, x:x + w]
            cv2.imwrite(cropped_dir_path + str(idx) + '.png', new_img)

    cv2.drawContours(img, contours, -1, (0, 0, 255), 3)
    cv2.imwrite("./Temp/img_contour.jpg", img)

output_dir = "./Cropped/"
files = glob.glob(output_dir + "*")
for f in files:
    os.remove(f)

box_extraction("./prototype/page_cropped.png", output_dir)
The problem with this approach is in sorting - simple "directional" directives used here fail to preserve table structure, especially if the table is somewhat skewed in the source image because of the trade-off between dewarping table borders and preserving table contents for OCR. If it is important, I do not currently expect to deal with table cells that occupy more than one row or column.
My actual image contains sensitive business data, so I've made this dummy picture with GIMP. It should be enough for demonstration purposes, you only need to point box_extraction() function at it.
Given that incomplete cells to the right should be ignored by the box extraction algorithm, I expected to get 9 x 4 = 36 images named "1.png" (with cell 0,0), "2.png" (0,1) etc, each set of 4 in correspondence to cells of a single row (it should be possible to get all cells of a single column by selecting its header cell from the first row and every fifth image thereafter).
However, the output images currently end up arranged in a very strange order: "1.png" to "4.png" hold the cells of the first row in reverse order, "5.png" holds the first cell of the second row, "6.png" the last cell of the second row, and the pattern collapses completely after that.
contours = [[1727.0220947265625, 151.22364807128906, 761.0980224609375, 91.216552734375, 0.9920176267623901], [1340.4588623046875, 1518.04248046875, 270.90380859375, 71.301025390625, 0.9890508651733398], [2110.046630859375, 1843.77099609375, 278.02294921875, 74.4261474609375, 0.9886908531188965], [2055.577392578125, 1226.511474609375, 311.0478515625, 82.458740234375, 0.984758734703064], [186.29859924316406, 1517.8702392578125, 569.6539764404297, 76.9754638671875, 0.9835792183876038], [1354.387451171875, 1321.7530517578125, 240.595947265625, 72.9495849609375, 0.9832095503807068], [1325.5714111328125, 1420.0745849609375, 285.83837890625, 73.9566650390625, 0.9797852635383606], [2059.18212890625, 1319.1517333984375, 308.333984375, 82.51513671875, 0.9795728325843811], [1232.687744140625, 631.7418823242188, 312.6070556640625, 77.5128173828125, 0.9789395928382874], [2079.22216796875, 1416.3896484375, 283.681640625, 81.77978515625, 0.9754523038864136], [321.4118347167969, 636.4778442382812, 329.8473205566406, 81.48944091796875, 0.9666035771369934], [1353.6107177734375, 1227.5003662109375, 238.04345703125, 75.3751220703125, 0.9659966826438904], [1228.7125244140625, 722.1092529296875, 316.5693359375, 69.8770751953125, 0.960330069065094], [2066.78076171875, 1511.6800537109375, 296.3779296875, 80.0389404296875, 0.9578678607940674], [163.09405517578125, 1417.7535400390625, 610.5952758789062, 83.1680908203125, 0.930923342704773], [1923.8992919921875, 640.7562255859375, 259.6851806640625, 74.0850830078125, 0.9276629090309143], [156.9827880859375, 1224.2708740234375, 676.9793701171875, 87.8233642578125, 0.9211469292640686], [168.77809143066406, 1322.410888671875, 620.0236663818359, 80.5582275390625, 0.8804788589477539], [1716.60595703125, 1414.467041015625, 283.1617431640625, 81.5452880859375, 0.8343544602394104], [1718.3673095703125, 1510.7855224609375, 304.5853271484375, 81.029541015625, 0.7616285681724548], [1718.100830078125, 1226.4912109375, 287.2305908203125, 78.2430419921875, 0.745871365070343], [1722.0452880859375, 1319.096923828125, 277.3297119140625, 81.481201171875, 0.7207437753677368]]
sorted_contours = []
# sort the boxes top-to-bottom, then group boxes whose y coordinates lie within 6 px
# of each other into one row and sort each row left-to-right
sorted_by_height = sorted(contours, key=lambda x: x[1])
for index, i in enumerate(sorted_by_height):
    result = list(filter(lambda x: x[1] > i[1] - 6 and x[1] < i[1] + 6 and i not in sorted_contours,
                         sorted_by_height[index:]))
    sorted_contours = sorted_contours + sorted(result, key=lambda x: x[0])
print(sorted_contours)

How to find table like structure in image

I have different types of invoice files, and I want to find the table in each one. The table position is not constant, so I resorted to image processing: first I convert the invoice into an image, then I find contours based on the table borders, and finally I can determine the table position.
For this task I used the code below:
# "Image" here is from the Wand (ImageMagick) library: from wand.image import Image
with Image(page) as page_image:
    page_image.alpha_channel = False  # eliminates transparency
    img_buffer = np.asarray(bytearray(page_image.make_blob()), dtype=np.uint8)
    img = cv2.imdecode(img_buffer, cv2.IMREAD_UNCHANGED)
    ret, thresh = cv2.threshold(img, 127, 255, 0)
    im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    margin = []
    for contour in contours:
        # get the rectangle bounding the contour
        [x, y, w, h] = cv2.boundingRect(contour)
        # don't keep small false positives that aren't part of the table
        if w > thresh1 and h > thresh2:
            margin.append([x, y, x + w, y + h])
    # data cleanup on margin to extract the required position values
In this code I update thresh1 and thresh2 based on the file.
Using this code I can successfully read the positions of tables in images, and using those positions I then work on my invoice PDF file. For example:
Sample 1:
Sample 2:
Sample 3:
Output:
Sample 1:
Sample 2:
Sample 3:
But now I have a new format which doesn't have any borders, yet it is still a table. How do I solve this? My entire operation depends on the borders of the tables, and now I don't have them. My question is: is there any way to find the position based on the table structure itself?
For example My problem input looks like below:
I would like to find its position like below:
How can I solve this?
Any idea for solving this problem would be really appreciated.
Thanks in advance.
Vaibhav is right. You can experiment with the different morphological transforms to extract or group pixels into different shapes, lines, etc. For example, the approach can be the following:
Start with dilation to convert the text into solid spots.
Then apply the findContours function as a next step to find the text bounding boxes.
After having the text bounding boxes, it is possible to apply some heuristic algorithm to cluster the text boxes into groups by their coordinates. This way you can find groups of text areas aligned into rows and columns.
Then you can apply sorting by x and y coordinates and/or some analysis of the groups to try to find whether the grouped text boxes can form a table.
I wrote a small sample illustrating the idea. I hope the code is self explanatory. I've put some comments there too.
import os
import cv2
import imutils

# This only works if there's only one table on a page
# Important parameters:
#  - morph_size
#  - min_text_height_limit
#  - max_text_height_limit
#  - cell_threshold
#  - min_columns

def pre_process_image(img, save_in_file, morph_size=(8, 8)):
    # get rid of the color
    pre = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu threshold
    pre = cv2.threshold(pre, 250, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
    # dilate the text to make it solid spots
    cpy = pre.copy()
    struct = cv2.getStructuringElement(cv2.MORPH_RECT, morph_size)
    cpy = cv2.dilate(~cpy, struct, anchor=(-1, -1), iterations=1)
    pre = ~cpy
    if save_in_file is not None:
        cv2.imwrite(save_in_file, pre)
    return pre

def find_text_boxes(pre, min_text_height_limit=6, max_text_height_limit=40):
    # Looking for the text spot contours
    # OpenCV 3
    # img, contours, hierarchy = cv2.findContours(pre, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    # OpenCV 4
    contours, hierarchy = cv2.findContours(pre, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    # Getting the text bounding boxes based on the text size assumptions
    boxes = []
    for contour in contours:
        box = cv2.boundingRect(contour)
        h = box[3]
        if min_text_height_limit < h < max_text_height_limit:
            boxes.append(box)
    return boxes

def find_table_in_boxes(boxes, cell_threshold=10, min_columns=2):
    rows = {}
    cols = {}
    # Clustering the bounding boxes by their positions
    for box in boxes:
        (x, y, w, h) = box
        col_key = x // cell_threshold
        row_key = y // cell_threshold
        cols[row_key] = [box] if col_key not in cols else cols[col_key] + [box]
        rows[row_key] = [box] if row_key not in rows else rows[row_key] + [box]
    # Filtering out the clusters having less than 2 cols
    table_cells = list(filter(lambda r: len(r) >= min_columns, rows.values()))
    # Sorting the row cells by x coord
    table_cells = [list(sorted(tb)) for tb in table_cells]
    # Sorting rows by the y coord
    table_cells = list(sorted(table_cells, key=lambda r: r[0][1]))
    return table_cells

def build_lines(table_cells):
    if table_cells is None or len(table_cells) <= 0:
        return [], []
    max_last_col_width_row = max(table_cells, key=lambda b: b[-1][2])
    max_x = max_last_col_width_row[-1][0] + max_last_col_width_row[-1][2]
    max_last_row_height_box = max(table_cells[-1], key=lambda b: b[3])
    max_y = max_last_row_height_box[1] + max_last_row_height_box[3]
    hor_lines = []
    ver_lines = []
    for box in table_cells:
        x = box[0][0]
        y = box[0][1]
        hor_lines.append((x, y, max_x, y))
    for box in table_cells[0]:
        x = box[0]
        y = box[1]
        ver_lines.append((x, y, x, max_y))
    (x, y, w, h) = table_cells[0][-1]
    ver_lines.append((max_x, y, max_x, max_y))
    (x, y, w, h) = table_cells[0][0]
    hor_lines.append((x, max_y, max_x, max_y))
    return hor_lines, ver_lines

if __name__ == "__main__":
    in_file = os.path.join("data", "page.jpg")
    pre_file = os.path.join("data", "pre.png")
    out_file = os.path.join("data", "out.png")
    img = cv2.imread(os.path.join(in_file))
    pre_processed = pre_process_image(img, pre_file)
    text_boxes = find_text_boxes(pre_processed)
    cells = find_table_in_boxes(text_boxes)
    hor_lines, ver_lines = build_lines(cells)
    # Visualize the result
    vis = img.copy()
    # for box in text_boxes:
    #     (x, y, w, h) = box
    #     cv2.rectangle(vis, (x, y), (x + w - 2, y + h - 2), (0, 255, 0), 1)
    for line in hor_lines:
        [x1, y1, x2, y2] = line
        cv2.line(vis, (x1, y1), (x2, y2), (0, 0, 255), 1)
    for line in ver_lines:
        [x1, y1, x2, y2] = line
        cv2.line(vis, (x1, y1), (x2, y2), (0, 0, 255), 1)
    cv2.imwrite(out_file, vis)
I've got the following output:
Of course to make the algorithm more robust and applicable to a variety of different input images it has to be adjusted correspondingly.
Update: Updated the code with respect to the OpenCV API changes for findContours. If you have an older version of OpenCV installed, use the corresponding call. Related post.
You can try applying some morphological transforms (such as dilation or erosion) or a Gaussian blur as a pre-processing step before your findContours call.
For example
import cv2
import numpy as np

# g is the grayscale input image
blur = cv2.GaussianBlur(g, (3, 3), 0)
ret, thresh1 = cv2.threshold(blur, 150, 255, cv2.THRESH_BINARY)
bitwise = cv2.bitwise_not(thresh1)
erosion = cv2.erode(bitwise, np.ones((1, 1), np.uint8), iterations=5)
dilation = cv2.dilate(erosion, np.ones((3, 3), np.uint8), iterations=5)
The last argument, iterations, controls the degree of dilation/erosion that will take place (in your case, on the text). A small value will result in small independent contours, even within a single letter, while large values will merge many nearby elements. You need to find the ideal value so that only the block of your image you want gets grouped together.
Please note that I've taken 150 as the threshold parameter because I've been working on extracting text from images with varying backgrounds and this worked out better. You can choose to continue with the value you've taken since it's a black & white image.
There are many types of tables in document images, with a lot of variation in layout. No matter how many rules you write, there will always appear a table for which your rules fail. These types of problems are generally solved using ML (Machine Learning) based solutions. You can find many pre-implemented codes on GitHub for detecting tables in images using ML or DL (Deep Learning).
Here is my code along with the deep learning models, the model can detect various types of tables as well as the structure cells from the tables: https://github.com/DevashishPrasad/CascadeTabNet
The approach achieves state of the art on various public datasets right now (10th May 2020) as far as the accuracy is concerned
More details : https://arxiv.org/abs/2004.12629
This would be helpful for you.
I've drawn a bounding box for each word in my invoice, then chose only the fields that I want. You can use an ROI (Region Of Interest) for that.
import pytesseract
import cv2

img = cv2.imread(r'path\Invoice2.png')
d = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
n_boxes = len(d['level'])
for i in range(n_boxes):
    (x, y, w, h) = (d['left'][i], d['top'][i], d['width'][i], d['height'][i])
    img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imshow('img', img)
cv2.waitKey(0)
You will get this output:

How to close contour over outline rather than edge - OpenCV

Tl;DR: How to measure area enclosed by contour rather than just the contour line itself
I want to find the outline of the object in the below image and have a code that works for most cases.
Thresholding and adaptive thresholding do not work reliably as the lighting changes. I use Canny edge detection and check the area to ensure I found the proper contour. However, once in a while, when there is a gap that cannot be closed by morphological closing, the shape is correct but the reported area is that of the contour line instead of the whole object.
What I usually do is use convexHull, as it returns a contour around the object. However, in this case the object curves inwards along the top and convexHull isn't a good approximation to the area anymore.
I tried using approxPolyDP but the area that gets returned is of the contour line rather than the object.
How can I get the approxPolyDP to return a similar closed contour around the object, just like the convexHull function does?
Code illustrating this using the above picture:
import cv2
import numpy as np

img = cv2.imread('Img_0.jpg', 0)
cv2.imshow('Original', img)

edges = cv2.Canny(img, 50, 150)
cv2.imshow('Canny', edges)

contours, hierarchy = cv2.findContours(edges, cv2.cv.CV_RETR_EXTERNAL, cv2.cv.CV_CHAIN_APPROX_NONE)
cnt = contours[1]  # I have a function to do this, but for simplicity here by hand

M = cv2.moments(cnt)
print('Area = %f \t' % M['m00'], end="")

cntHull = cv2.convexHull(cnt, returnPoints=True)
cntPoly = cv2.approxPolyDP(cnt, epsilon=1, closed=True)
MHull = cv2.moments(cntHull)
MPoly = cv2.moments(cntPoly)
print('Area after Convex Hull = %f \t Area after approxPoly = %f \n' % (MHull['m00'], MPoly['m00']), end="")

x, y = img.shape
size = (w, h, channels) = (x, y, 1)
canvas = np.zeros(size, np.uint8)
cv2.drawContours(canvas, cnt, -1, 255)
cv2.imshow('Contour', canvas)

canvas = np.zeros(size, np.uint8)
cv2.drawContours(canvas, cntHull, -1, 255)
cv2.imshow('Hull', canvas)

canvas = np.zeros(size, np.uint8)
cv2.drawContours(canvas, cntPoly, -1, 255)
cv2.imshow('Poly', canvas)
The output from the code is
Area = 24.500000   Area after Convex Hull = 3960.500000   Area after approxPoly = 29.500000
Here's a very promising ppt from geosensor.net that discusses several algorithms. My recommendation would be to use the swing arm method with a limited radius.
Another completely untested, off-the-wall idea I have is to scan across the image by row and column (more directions increase accuracy) and color in the regions between line intersections:
          _______
         /-------\
        /---------\
--------+---------+------   (fill between 2 intersections)
        |         |
        |
--------+----------------   (no fill between single intersection)
         \
          -------
The maximum error would then decrease as the number of scan directions increases (beyond just the 90- and 45-degree directions). Getting a final area would then be as simple as a pixel count.
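A rough sketch of that idea, simplified to just the row and column directions and filling from the first to the last edge pixel on each scan line; edges is assumed to be the Canny output from the question's code:
import cv2
import numpy as np

def scan_fill(edges):
    h, w = edges.shape
    rows = np.zeros_like(edges)
    cols = np.zeros_like(edges)
    for y in range(h):
        xs = np.flatnonzero(edges[y])
        if len(xs) >= 2:                       # two or more intersections on this row -> fill between them
            rows[y, xs[0]:xs[-1] + 1] = 255
    for x in range(w):
        ys = np.flatnonzero(edges[:, x])
        if len(ys) >= 2:
            cols[ys[0]:ys[-1] + 1, x] = 255
    return cv2.bitwise_and(rows, cols)          # keep pixels supported by both scan directions

filled = scan_fill(edges)
print('Filled area = %d pixels' % cv2.countNonZero(filled))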

How to remove convexity defects in a Sudoku square?

I was doing a fun project: Solving a Sudoku from an input image using OpenCV (as in Google goggles etc). And I have completed the task, but at the end I found a little problem for which I came here.
I did the programming using Python API of OpenCV 2.3.1.
Below is what I did :
Read the image
Find the contours
Select the one with the maximum area (and also somewhat equivalent to a square).
Find the corner points.
e.g. given below:
(Notice here that the green line correctly coincides with the true boundary of the Sudoku, so the Sudoku can be correctly warped. Check next image)
warp the image to a perfect square
eg image:
Perform OCR ( for which I used the method I have given in Simple Digit Recognition OCR in OpenCV-Python )
And the method worked well.
Problem:
Check out this image.
Performing the step 4 on this image gives the result below:
The red line drawn is the original contour which is the true outline of sudoku boundary.
The green line drawn is approximated contour which will be the outline of warped image.
Of course, there is a difference between the green and red lines at the top edge of the Sudoku. So while warping, I am not getting the original boundary of the Sudoku.
My Question :
How can I warp the image on the correct boundary of the Sudoku, i.e. the red line OR how can I remove the difference between red line and green line? Is there any method for this in OpenCV?
I have a solution that works, but you'll have to translate it to OpenCV yourself. It's written in Mathematica.
The first step is to adjust the brightness in the image, by dividing each pixel with the result of a closing operation:
src = ColorConvert[Import["http://davemark.com/images/sudoku.jpg"], "Grayscale"];
white = Closing[src, DiskMatrix[5]];
srcAdjusted = Image[ImageData[src]/ImageData[white]]
The next step is to find the sudoku area, so I can ignore (mask out) the background. For that, I use connected component analysis, and select the component that's got the largest convex area:
components =
  ComponentMeasurements[
    ColorNegate@Binarize[srcAdjusted], {"ConvexArea", "Mask"}][[All, 2]];
largestComponent = Image[SortBy[components, First][[-1, 2]]]
By filling this image, I get a mask for the sudoku grid:
mask = FillingTransform[largestComponent]
Now, I can use a 2nd order derivative filter to find the vertical and horizontal lines in two separate images:
lY = ImageMultiply[MorphologicalBinarize[GaussianFilter[srcAdjusted, 3, {2, 0}], {0.02, 0.05}], mask];
lX = ImageMultiply[MorphologicalBinarize[GaussianFilter[srcAdjusted, 3, {0, 2}], {0.02, 0.05}], mask];
I use connected component analysis again to extract the grid lines from these images. The grid lines are much longer than the digits, so I can use caliper length to select only the grid lines-connected components. Sorting them by position, I get 2x10 mask images for each of the vertical/horizontal grid lines in the image:
verticalGridLineMasks =
  SortBy[ComponentMeasurements[
      lX, {"CaliperLength", "Centroid", "Mask"}, # > 100 &][[All, 2]],
    #[[2, 1]] &][[All, 3]];
horizontalGridLineMasks =
  SortBy[ComponentMeasurements[
      lY, {"CaliperLength", "Centroid", "Mask"}, # > 100 &][[All, 2]],
    #[[2, 2]] &][[All, 3]];
Next I take each pair of vertical/horizontal grid lines, dilate them, calculate the pixel-by-pixel intersection, and calculate the center of the result. These points are the grid line intersections:
centerOfGravity[l_] :=
  ComponentMeasurements[Image[l], "Centroid"][[1, 2]]
gridCenters =
  Table[centerOfGravity[
    ImageData[Dilation[Image[h], DiskMatrix[2]]]*
      ImageData[Dilation[Image[v], DiskMatrix[2]]]],
   {h, horizontalGridLineMasks}, {v, verticalGridLineMasks}];
The last step is to define two interpolation functions for X/Y mapping through these points, and transform the image using these functions:
fnX = ListInterpolation[gridCenters[[All, All, 1]]];
fnY = ListInterpolation[gridCenters[[All, All, 2]]];
transformed =
  ImageTransformation[
   srcAdjusted, {fnX @@ Reverse[#], fnY @@ Reverse[#]} &, {9*50, 9*50},
   PlotRange -> {{1, 10}, {1, 10}}, DataRange -> Full]
All of the operations are basic image processing function, so this should be possible in OpenCV, too. The spline-based image transformation might be harder, but I don't think you really need it. Probably using the perspective transformation you use now on each individual cell will give good enough results.
Nikie's answer solved my problem, but his answer was in Mathematica, so I thought I should give its OpenCV adaptation here. After implementing it I could see that the OpenCV code is much bigger than nikie's Mathematica code. Also, I couldn't find the interpolation method nikie used in OpenCV (although it can be done using SciPy; I will mention it when the time comes).
1. Image PreProcessing ( closing operation )
import cv2
import numpy as np
img = cv2.imread('dave.jpg')
img = cv2.GaussianBlur(img,(5,5),0)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
mask = np.zeros((gray.shape),np.uint8)
kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(11,11))
close = cv2.morphologyEx(gray,cv2.MORPH_CLOSE,kernel1)
div = np.float32(gray)/(close)
res = np.uint8(cv2.normalize(div,div,0,255,cv2.NORM_MINMAX))
res2 = cv2.cvtColor(res,cv2.COLOR_GRAY2BGR)
Result :
2. Finding Sudoku Square and Creating Mask Image
thresh = cv2.adaptiveThreshold(res, 255, 0, 1, 19, 2)
contour, hier = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

max_area = 0
best_cnt = None
for cnt in contour:
    area = cv2.contourArea(cnt)
    if area > 1000:
        if area > max_area:
            max_area = area
            best_cnt = cnt

cv2.drawContours(mask, [best_cnt], 0, 255, -1)
cv2.drawContours(mask, [best_cnt], 0, 0, 2)
res = cv2.bitwise_and(res, mask)
Result :
3. Finding Vertical lines
kernelx = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 10))
dx = cv2.Sobel(res, cv2.CV_16S, 1, 0)
dx = cv2.convertScaleAbs(dx)
cv2.normalize(dx, dx, 0, 255, cv2.NORM_MINMAX)
ret, close = cv2.threshold(dx, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
close = cv2.morphologyEx(close, cv2.MORPH_DILATE, kernelx, iterations=1)

contour, hier = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contour:
    x, y, w, h = cv2.boundingRect(cnt)
    if h / w > 5:
        cv2.drawContours(close, [cnt], 0, 255, -1)
    else:
        cv2.drawContours(close, [cnt], 0, 0, -1)
close = cv2.morphologyEx(close, cv2.MORPH_CLOSE, None, iterations=2)
closex = close.copy()
Result :
4. Finding Horizontal Lines
kernely = cv2.getStructuringElement(cv2.MORPH_RECT, (10, 2))
dy = cv2.Sobel(res, cv2.CV_16S, 0, 2)
dy = cv2.convertScaleAbs(dy)
cv2.normalize(dy, dy, 0, 255, cv2.NORM_MINMAX)
ret, close = cv2.threshold(dy, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
close = cv2.morphologyEx(close, cv2.MORPH_DILATE, kernely)

contour, hier = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contour:
    x, y, w, h = cv2.boundingRect(cnt)
    if w / h > 5:
        cv2.drawContours(close, [cnt], 0, 255, -1)
    else:
        cv2.drawContours(close, [cnt], 0, 0, -1)
close = cv2.morphologyEx(close, cv2.MORPH_DILATE, None, iterations=2)
closey = close.copy()
Result :
Of course, this one is not so good.
5. Finding Grid Points
res = cv2.bitwise_and(closex,closey)
Result :
6. Correcting the defects
Here, nikie does some kind of interpolation, about which I don't have much knowledge, and I couldn't find any corresponding function for it in OpenCV (maybe it is there, I don't know).
Check out this SO answer, which explains how to do it using SciPy, which I don't want to use: Image transformation in OpenCV
So, here I took the 4 corners of each sub-square and applied a perspective warp to each.
For that, first we find the centroids.
contour, hier = cv2.findContours(res, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
centroids = []
for cnt in contour:
    mom = cv2.moments(cnt)
    (x, y) = int(mom['m10'] / mom['m00']), int(mom['m01'] / mom['m00'])
    cv2.circle(img, (x, y), 4, (0, 255, 0), -1)
    centroids.append((x, y))
But the resulting centroids won't be sorted. Check out the image below to see their order:
So we sort them from left to right, top to bottom.
centroids = np.array(centroids,dtype = np.float32)
c = centroids.reshape((100,2))
c2 = c[np.argsort(c[:,1])]
b = np.vstack([c2[i*10:(i+1)*10][np.argsort(c2[i*10:(i+1)*10,0])] for i in xrange(10)])
bm = b.reshape((10,10,2))
Now see below their order :
Finally we apply the transformation and create a new image of size 450x450.
output = np.zeros((450, 450, 3), np.uint8)
for i, j in enumerate(b):
    ri = i / 10  # row index
    ci = i % 10  # column index
    if ci != 9 and ri != 9:
        src = bm[ri:ri + 2, ci:ci + 2, :].reshape((4, 2))
        dst = np.array([[ci*50, ri*50], [(ci+1)*50-1, ri*50], [ci*50, (ri+1)*50-1], [(ci+1)*50-1, (ri+1)*50-1]], np.float32)
        retval = cv2.getPerspectiveTransform(src, dst)
        warp = cv2.warpPerspective(res2, retval, (450, 450))
        output[ri*50:(ri+1)*50-1, ci*50:(ci+1)*50-1] = warp[ri*50:(ri+1)*50-1, ci*50:(ci+1)*50-1].copy()
Result :
The result is almost the same as nikie's, but the code is long. Maybe better methods are available out there, but until then, this works OK.
Regards
ARK.
You could try some kind of grid-based modeling of your arbitrary warping. And since the Sudoku already is a grid, that shouldn't be too hard.
So you could try to detect the boundaries of each 3x3 subregion and then warp each region individually. If the detection succeeds, it would give you a better approximation.
I thought this was a great post, and a great solution by ARK; very well laid out and explained.
I was working on a similar problem, and built the entire thing. There were some changes (i.e. xrange to range, arguments in cv2.findContours), but this should work out of the box (Python 3.5, Anaconda).
This is a compilation of the elements above, with some of the missing code added (i.e., labeling of points).
'''
https://stackoverflow.com/questions/10196198/how-to-remove-convexity-defects-in-a-sudoku-square
'''
import cv2
import numpy as np

img = cv2.imread('test.png')

winname = "raw image"
cv2.namedWindow(winname)
cv2.imshow(winname, img)
cv2.moveWindow(winname, 100, 100)

img = cv2.GaussianBlur(img, (5, 5), 0)

winname = "blurred"
cv2.namedWindow(winname)
cv2.imshow(winname, img)
cv2.moveWindow(winname, 100, 150)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
mask = np.zeros((gray.shape), np.uint8)
kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))

winname = "gray"
cv2.namedWindow(winname)
cv2.imshow(winname, gray)
cv2.moveWindow(winname, 100, 200)

close = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel1)
div = np.float32(gray) / (close)
res = np.uint8(cv2.normalize(div, div, 0, 255, cv2.NORM_MINMAX))
res2 = cv2.cvtColor(res, cv2.COLOR_GRAY2BGR)

winname = "res2"
cv2.namedWindow(winname)
cv2.imshow(winname, res2)
cv2.moveWindow(winname, 100, 250)

# find elements
thresh = cv2.adaptiveThreshold(res, 255, 0, 1, 19, 2)
img_c, contour, hier = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

max_area = 0
best_cnt = None
for cnt in contour:
    area = cv2.contourArea(cnt)
    if area > 1000:
        if area > max_area:
            max_area = area
            best_cnt = cnt

cv2.drawContours(mask, [best_cnt], 0, 255, -1)
cv2.drawContours(mask, [best_cnt], 0, 0, 2)
res = cv2.bitwise_and(res, mask)

winname = "puzzle only"
cv2.namedWindow(winname)
cv2.imshow(winname, res)
cv2.moveWindow(winname, 100, 300)

# vertical lines
kernelx = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 10))
dx = cv2.Sobel(res, cv2.CV_16S, 1, 0)
dx = cv2.convertScaleAbs(dx)
cv2.normalize(dx, dx, 0, 255, cv2.NORM_MINMAX)
ret, close = cv2.threshold(dx, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
close = cv2.morphologyEx(close, cv2.MORPH_DILATE, kernelx, iterations=1)

img_d, contour, hier = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contour:
    x, y, w, h = cv2.boundingRect(cnt)
    if h / w > 5:
        cv2.drawContours(close, [cnt], 0, 255, -1)
    else:
        cv2.drawContours(close, [cnt], 0, 0, -1)
close = cv2.morphologyEx(close, cv2.MORPH_CLOSE, None, iterations=2)
closex = close.copy()

winname = "vertical lines"
cv2.namedWindow(winname)
cv2.imshow(winname, img_d)
cv2.moveWindow(winname, 100, 350)

# find horizontal lines
kernely = cv2.getStructuringElement(cv2.MORPH_RECT, (10, 2))
dy = cv2.Sobel(res, cv2.CV_16S, 0, 2)
dy = cv2.convertScaleAbs(dy)
cv2.normalize(dy, dy, 0, 255, cv2.NORM_MINMAX)
ret, close = cv2.threshold(dy, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
close = cv2.morphologyEx(close, cv2.MORPH_DILATE, kernely)

img_e, contour, hier = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contour:
    x, y, w, h = cv2.boundingRect(cnt)
    if w / h > 5:
        cv2.drawContours(close, [cnt], 0, 255, -1)
    else:
        cv2.drawContours(close, [cnt], 0, 0, -1)
close = cv2.morphologyEx(close, cv2.MORPH_DILATE, None, iterations=2)
closey = close.copy()

winname = "horizontal lines"
cv2.namedWindow(winname)
cv2.imshow(winname, img_e)
cv2.moveWindow(winname, 100, 400)

# intersection of these two gives dots
res = cv2.bitwise_and(closex, closey)

winname = "intersections"
cv2.namedWindow(winname)
cv2.imshow(winname, res)
cv2.moveWindow(winname, 100, 450)

# text green
textcolor = (0, 255, 0)
# points blue
pointcolor = (255, 0, 0)

# find centroids and sort
img_f, contour, hier = cv2.findContours(res, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
centroids = []
for cnt in contour:
    mom = cv2.moments(cnt)
    (x, y) = int(mom['m10'] / mom['m00']), int(mom['m01'] / mom['m00'])
    cv2.circle(img, (x, y), 4, (0, 255, 0), -1)
    centroids.append((x, y))

# sorting
centroids = np.array(centroids, dtype=np.float32)
c = centroids.reshape((100, 2))
c2 = c[np.argsort(c[:, 1])]
b = np.vstack([c2[i*10:(i+1)*10][np.argsort(c2[i*10:(i+1)*10, 0])] for i in range(10)])
bm = b.reshape((10, 10, 2))

# make copy
labeled_in_order = res2.copy()
for index, pt in enumerate(b):
    cv2.putText(labeled_in_order, str(index), tuple(pt), cv2.FONT_HERSHEY_DUPLEX, 0.75, textcolor)
    cv2.circle(labeled_in_order, tuple(pt), 5, pointcolor)

winname = "labeled in order"
cv2.namedWindow(winname)
cv2.imshow(winname, labeled_in_order)
cv2.moveWindow(winname, 100, 500)

# create final
output = np.zeros((450, 450, 3), np.uint8)
for i, j in enumerate(b):
    ri = int(i / 10)  # row index
    ci = i % 10       # column index
    if ci != 9 and ri != 9:
        src = bm[ri:ri + 2, ci:ci + 2, :].reshape((4, 2))
        dst = np.array([[ci*50, ri*50], [(ci+1)*50-1, ri*50], [ci*50, (ri+1)*50-1], [(ci+1)*50-1, (ri+1)*50-1]], np.float32)
        retval = cv2.getPerspectiveTransform(src, dst)
        warp = cv2.warpPerspective(res2, retval, (450, 450))
        output[ri*50:(ri+1)*50-1, ci*50:(ci+1)*50-1] = warp[ri*50:(ri+1)*50-1, ci*50:(ci+1)*50-1].copy()

winname = "final"
cv2.namedWindow(winname)
cv2.imshow(winname, output)
cv2.moveWindow(winname, 600, 100)

cv2.waitKey(0)
cv2.destroyAllWindows()
I want to add that the above method works only when the Sudoku board stands straight; otherwise the height/width (or vice versa) ratio test will most probably fail and you will not be able to detect the edges of the Sudoku. (I also want to add that if the lines are not perpendicular to the image borders, the Sobel operations (dx and dy) will still work, as the lines will still have edges with respect to both axes.)
To detect lines that are not straight, you should work with contour- or pixel-wise analysis such as contourArea/boundingRectArea, top-left and bottom-right points...
Edit: I managed to check whether a set of contours forms a line or not by applying linear regression and checking the error. However, linear regression performs poorly when the slope of the line is too big (i.e. > 1000) or very close to 0. Therefore applying the ratio test above (in the most upvoted answer) before the linear regression is logical, and it worked for me.
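For illustration, a small sketch of such a check, assuming cnt is a single contour as returned by cv2.findContours; the error threshold is arbitrary and would need tuning:
import numpy as np

pts = cnt.reshape(-1, 2).astype(np.float64)
x, y = pts[:, 0], pts[:, 1]

# fit y = m*x + b against the wider axis, so near-vertical lines don't blow up the slope
if np.ptp(x) < np.ptp(y):
    x, y = y, x

coeffs, residuals, rank, sv, rcond = np.polyfit(x, y, 1, full=True)
error = residuals[0] / len(pts) if len(residuals) else 0.0   # mean squared residual
is_line = error < 2.0                                        # arbitrary threshold, tune per image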
To recover undetected corners I applied gamma correction with a gamma value of 0.8.
The red circle is drawn to show the missing corner.
The code is:
gamma = 0.8
invGamma = 1 / gamma
table = np.array([((i / 255.0) ** invGamma) * 255
                  for i in np.arange(0, 256)]).astype("uint8")
cv2.LUT(img, table, img)
This is in addition to Abid Rahman's answer if some corner points are missing.
