The objective is to create bounding boxes using text detection methods (e.g. OpenCV) for US floor plan images, which can then be fed into a text reader (e.g. an LSTM or Tesseract).
Several methods have been attempted, including cv2.findContours and cv2.boundingRect, but they have largely failed to generalise to different types of floor plans (there is wide variation in how the floor plans look).
For example, converting to grayscale, applying adaptive thresholds, erosion and dilation (with various iterations) before running cv2.findContours produces the result below. Note that "Bedroom 2" and "Kitchen" are not being picked up correctly.
Additional example which fails to find any regions:
Any thoughts on text recognition models or cleaning procedures that will improve the accuracy of the text recognition model, preferably with code examples?
This answer assumes the images are similar to one another (in size, wall thickness, lettering, and so on). If they are not, this would not be a good approach, because you would have to change the thresholds for every image. That said, I would convert the image to binary and search for contours, then filter out the walls using criteria such as height and width. After that you can draw the remaining contours on a mask and dilate it, which combines letters that are close to each other into one contour. Finally, create a bounding box for each of those contours; that is your ROI, and you can run any OCR on that region. Hope it helps a bit. Cheers!
Example:
import cv2
import numpy as np

img = cv2.imread('floor.png')
mask = np.zeros(img.shape, dtype=np.uint8)

# Binarise the image (inverted, so dark text and walls become white)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, threshold = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY_INV)

# Find contours (OpenCV 3.x returns three values here)
_, contours, hierarchy = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

ROI = []

# Keep only small contours (letters), filtering out the walls by height,
# and draw them on the mask
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    if h < 20:
        cv2.drawContours(mask, [cnt], 0, (255, 255, 255), 1)

# Dilate the mask so letters that are close together merge into one blob
kernel = np.ones((7, 7), np.uint8)
dilation = cv2.dilate(mask, kernel, iterations=1)

# Find contours of the merged blobs and take their bounding boxes as ROIs
gray_d = cv2.cvtColor(dilation, cv2.COLOR_BGR2GRAY)
_, threshold_d = cv2.threshold(gray_d, 150, 255, cv2.THRESH_BINARY)
_, contours_d, hierarchy = cv2.findContours(threshold_d, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

for cnt in contours_d:
    x, y, w, h = cv2.boundingRect(cnt)
    if w > 35:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        roi_c = img[y:y+h, x:x+w]
        ROI.append(roi_c)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
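If you want to take it one step further, here is a minimal sketch of the OCR step on those ROIs (this assumes pytesseract and the Tesseract binary are installed; the '--psm 7' setting is an assumption that treats each crop as a single line of text):

import pytesseract
from PIL import Image

# Sketch: run OCR on each cropped region collected in ROI.
for i, roi_c in enumerate(ROI):
    rgb = cv2.cvtColor(roi_c, cv2.COLOR_BGR2RGB)
    text = pytesseract.image_to_string(Image.fromarray(rgb), config='--psm 7')
    print(i, text.strip())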
Result:
I want to count cardboard boxes and read a specific label, which will only contain three words on a white background, on a conveyor belt using OpenCV and Python. Attached is the image I am using for experiments. The problem so far is that I am unable to detect the complete box due to noise, and if I try to filter on w and h in x, y, w, h = cv2.boundingRect(cnt), it simply filters out the text (in this case "ABC" is written on the box). Also, the detected box has spikes on both the top and bottom, which I am not sure how to filter.
Below is the code I am using:
import cv2

# read the image
image = cv2.imread('img002.jpg')

# convert the image to grayscale
img_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# apply binary thresholding
ret, thresh = cv2.threshold(img_gray, 150, 255, cv2.THRESH_BINARY)

# visualize the binary image
cv2.imshow('Binary image', thresh)

# collect contours
contours, h = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# loop through the contours and draw a bounding box around each one
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 215, 255), 2)

cv2.imshow('img', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Also please suggest how to crop the text ABC and then apply an OCR on that to read the text.
Many Thanks.
EDIT 2: Many thanks for your answer. Based on your suggestion I changed the code so that it can check for boxes in a video. It worked like a charm, except that it failed to identify one box for a long time. Below are my code and a link to the video I have used. I have a couple of questions around this, as I am new to OpenCV, if you can find some time to answer them.
import cv2
import numpy as np
from time import time as timer

def get_region(image):
    # keep only the largest contour (assumed to be the box) as a filled mask
    contours, hierarchy = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    c = max(contours, key=cv2.contourArea)
    black = np.zeros((image.shape[0], image.shape[1]), np.uint8)
    mask = cv2.drawContours(black, [c], 0, 255, -1)
    return mask

cap = cv2.VideoCapture("Resources/box.mp4")
ret, frame = cap.read()

fps = 60
fps /= 1000
framerate = timer()
elapsed = int()

while(1):
    start = timer()
    ret, frame = cap.read()

    # convert the frame to HSV and threshold the hue channel `hsv[:,:,0]`
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    thresh = cv2.threshold(hsv[:,:,0], 127, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

    # mask out everything except the largest contour (the box)
    mask = get_region(thresh)
    masked_img = cv2.bitwise_and(frame, frame, mask=mask)
    newImg = cv2.cvtColor(masked_img, cv2.COLOR_BGR2GRAY)

    # collect contours and draw a rectangle around the largest one
    c, h = cv2.findContours(newImg, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cont_sorted = sorted(c, key=cv2.contourArea, reverse=True)[:5]
    x, y, w, h = cv2.boundingRect(cont_sorted[0])
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 5)

    #cv2.imshow('frame', masked_img)
    cv2.imshow('Out', frame)

    if cv2.waitKey(1) & 0xFF == ord('q') or ret == False:
        break

    # crude frame-rate limiter
    diff = timer() - start
    while diff < fps:
        diff = timer() - start

cap.release()
cv2.destroyAllWindows()
Link to video: https://www.storyblocks.com/video/stock/boxes-and-packages-move-along-a-conveyor-belt-in-a-shipment-factory-a-few-blank-boxes-for-your-custom-graphics-lmgxtwq
Questions:
How can we be 100% sure that the rectangle drawn is actually on top of a box and not on the belt or somewhere else?
Can you please tell me how I can use the function you provided in the original answer for the other boxes in this new video code?
Is it the correct way to convert the masked frame to grey again and find contours again to draw a rectangle, or is there a more efficient way to do it?
The final version of this code is intended to run on a Raspberry Pi, so what can we do to optimize the code's performance?
Many thanks again for your time.
There are 2 steps to be followed:
1. Box segmentation
We can assume there will be no background change since the conveyor belt is present. We can segment the box using a different color space. In the following I have used HSV color space:
import cv2
import numpy as np

img = cv2.imread('box.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Performing threshold on the hue channel `hsv[:,:,0]`
th = cv2.threshold(hsv[:,:,0], 127, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
Masking the largest contour in the binary image:
def get_region(image):
    contours, hierarchy = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    c = max(contours, key=cv2.contourArea)
    black = np.zeros((image.shape[0], image.shape[1]), np.uint8)
    mask = cv2.drawContours(black, [c], 0, 255, -1)
    return mask

mask = get_region(th)
Applying the mask on the original image:
masked_img = cv2.bitwise_and(img, img, mask = mask)
2. Text Detection:
The text region is enclosed in white, which can be isolated again by applying a suitable threshold. (You might want to apply some statistical measure to calculate the threshold)
# Applying threshold at 220 on green channel of 'masked_img'
result = cv2.threshold(masked_img[:,:,1],220,255,cv2.THRESH_BINARY)[1]
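To actually read the label, one option (a sketch, not part of the original answer; it assumes pytesseract is installed) is to bound the white pixels in result and pass that crop to Tesseract:

import pytesseract
from PIL import Image

# Sketch: bound the white text pixels in `result` and OCR that crop.
ys, xs = np.where(result == 255)
if len(xs) > 0:
    x1, x2, y1, y2 = xs.min(), xs.max(), ys.min(), ys.max()
    crop = cv2.cvtColor(masked_img[y1:y2+1, x1:x2+1], cv2.COLOR_BGR2RGB)
    print(pytesseract.image_to_string(Image.fromarray(crop)))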
Note:
The code is written for the shared image. For boxes of different sizes you can filter contours with approximately 4 vertices/sides.
# Function to extract rectangular contours above a certain area
def extract_rect(contours, area_threshold):
    rect_contours = []
    for c in contours:
        if cv2.contourArea(c) > area_threshold:
            perimeter = cv2.arcLength(c, True)
            approx = cv2.approxPolyDP(c, 0.02 * perimeter, True)
            if len(approx) == 4:
                cv2.drawContours(image, [approx], 0, (0, 255, 0), 2)
                rect_contours.append(c)
    return rect_contours
Experiment with a statistical value (mean, median, etc.) to find the optimal threshold for detecting the text region.
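For instance, here is a rough sketch of deriving that threshold from the masked image itself instead of hard-coding 220 (the percentile and the offset are assumptions to tune):

# Sketch: pick the threshold statistically from the green channel inside the mask.
green = masked_img[:, :, 1]
vals = green[mask == 255]                     # intensities inside the box only
auto_th = int(np.percentile(vals, 99)) - 10   # sit just below the brightest pixels
result = cv2.threshold(green, auto_th, 255, cv2.THRESH_BINARY)[1]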
Your additional questions warranted a separate answer:
1. How can we be 100% sure that the rectangle drawn is actually on top of a box and not on the belt or somewhere else?
PRO: For this very purpose I chose the hue channel of the HSV color space. Shades of grey, white and black (on the conveyor belt) are neutral in this channel. The brown color of the box contrasts with these and can easily be segmented using Otsu's threshold. Otsu's algorithm finds the optimal threshold value without user input.
CON: You might face problems when the boxes are the same color as the conveyor belt.
2. Can you please tell me how I can use the function you provided in the original answer for the other boxes in this new video code?
PRO: If you want to find boxes using edge detection and without using color information, there is a high chance of getting many unwanted edges. By using the extract_rect() function, you can filter contours that:
have approximately 4 sides (quadrilateral)
are above certain area
CON: If you have parcels/packages/bags that have more than 4 sides, you might need to change this.
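Here is a minimal sketch of dropping extract_rect() into the video loop (the area threshold is an assumed value to tune, and it assumes the cv2.drawContours call inside extract_rect() is removed or pointed at frame, since there is no image variable in the video script):

# Inside the while(1) loop, after `newImg` has been computed:
c, h = cv2.findContours(newImg, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Keep every large, roughly four-sided contour instead of only the biggest one.
boxes = extract_rect(c, area_threshold=5000)   # 5000 is an assumed value to tune
for b in boxes:
    x, y, w, h = cv2.boundingRect(b)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 5)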
3. Is it the correct way to convert the masked frame to grey again and find contours again to draw a rectangle, or is there a more efficient way to do it?
I felt this is the best way, because all that remains is the text region enclosed in white. Applying a high threshold value was the simplest idea that came to mind. There might be a better way :)
(I am not in the position to answer the 4th question :) )
I'm attempting to split the handwritten text from a dataset of NIST forms into separate lines. Here is a link to the dataset:
https://www.nist.gov/srd/nist-special-database-19
Example Image
The code I'm using is based on a similar question on Stack Overflow, but it doesn't quite work because some of the characters are touching. Here is the code:
import cv2
import numpy as np

# import image
image = cv2.imread('form1.jpg')
#cv2.imshow('orig', image)
#cv2.waitKey(0)

# grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow('gray', gray)
cv2.waitKey(0)

# binary
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
cv2.imshow('second', thresh)
cv2.waitKey(0)

# dilation with a wide kernel so characters on the same line merge together
kernel = np.ones((5, 100), np.uint8)
img_dilation = cv2.dilate(thresh, kernel, iterations=1)
cv2.imshow('dilated', img_dilation)
cv2.waitKey(0)

# find contours (OpenCV 3.x returns three values here)
im2, ctrs, hier = cv2.findContours(img_dilation.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# sort contours
sorted_ctrs = sorted(ctrs, key=lambda ctr: cv2.boundingRect(ctr)[0])

for i, ctr in enumerate(sorted_ctrs):
    # Get bounding box
    x, y, w, h = cv2.boundingRect(ctr)

    # Getting ROI
    roi = image[y:y+h, x:x+w]

    # show ROI
    cv2.imshow('segment no:' + str(i), roi)
    cv2.rectangle(image, (x, y), (x + w, y + h), (90, 0, 255), 2)
    cv2.waitKey(0)

cv2.imshow('marked areas', image)
cv2.waitKey(0)
How can I get it to split the lines properly even when some of the characters are overlapping?
I am working on a similar problem, and your sample is of quite good quality.
From the code given I can see that you use contour detection. You may want to play with an aspect-ratio restriction on the detected contours in order to omit connected components, as sketched below. Here you can find insights for your project as well as a description of how to apply an aspect-ratio restriction.
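A minimal sketch of such a restriction on the contours from your code (the cut-off values are assumptions to tune for these forms):

# Sketch: keep only contours whose bounding box looks like a single text line,
# dropping blobs that are too tall (lines merged by touching characters).
line_ctrs = []
for ctr in sorted_ctrs:
    x, y, w, h = cv2.boundingRect(ctr)
    aspect = w / float(h)
    if aspect > 3 and h < 150:   # both values are guesses to tune
        line_ctrs.append(ctr)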
If you still want to keep them, then you will need to perform post-processing. Either it will involve morphological alteration of those elements, or the application of some sort of machine/deep learning.
That is the most complicated part, and it might have many different solutions. For my project I used a Convolutional Neural Network with Keras to train a model that classifies letters and puts outliers in a separate class. As you might have guessed, I added a couple more classes, with training data (thousands of training examples will be needed), to the existing dataset of MNIST-like letters.
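For reference, here is a minimal sketch of that kind of classifier in Keras (the layer sizes, the 28x28 input and the single extra "outlier" class are assumptions in the spirit of MNIST-like data, not the actual model I trained):

from tensorflow.keras import layers, models

# Sketch: small CNN that classifies 28x28 letter crops, with one extra
# class reserved for outliers (touching/merged characters, noise, etc.).
num_letter_classes = 26
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_letter_classes + 1, activation='softmax'),  # +1 outlier class
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])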
Good luck!
I'm trying to read all of the handwritten digits in the scanned image here
I tried looking through pixel-by-pixel using PIL, cropping the sub-images, then feeding them through a neural network, but the regions that were being cropped never quite lined up and led to a lot of inaccuracy.
I've also tried using OpenCV to find all the grey squares, then cropping the images and feeding them through a neural network, but I couldn't get it to find all of them or even miss only a few; it would miss ~30% of the squares. (I'm not very experienced with OpenCV, so I could be messing something up.)
So I'm just looking for a potential idea/solution for this problem, so any suggestions would be appreciated, thanks in advance!
I assume the input image name is "squares.jpg".
First of all, import required libraries and load image in both RGB and Gray format:
import cv2
import numpy as np
image = cv2.imread("squares.jpg", 1)
image_gray = cv2.imread("squares.jpg", 0)
Then, we perform a simple operation to clean some noise from the input image using np.where() function:
image_gray = np.where(image_gray > 240, 255, image_gray)
image_gray = np.where(image_gray <= 240, 0, image_gray)
Because we want to grab the whole square regions from the image, we need to blur it a little before applying the adaptive thresholding method:
image_gray = cv2.blur(image_gray, (5, 5))
im_th = cv2.adaptiveThreshold(image_gray, 255,
                              cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY, 115, 1)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
im_th = cv2.morphologyEx(im_th, cv2.MORPH_OPEN, kernel, iterations=3)
Use contour detection in OpenCV to find all possible regions:
_, contours, _ = cv2.findContours(im_th.copy(), cv2.RETR_LIST,
                                  cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
del contours[0]  # remove the biggest contour
Finally, try to find the potential square regions based on the ratio of height and width:
square_rects = []
square_areas = []
for i, cnt in enumerate(contours):
    (x, y, w, h) = cv2.boundingRect(cnt)
    ar = w / float(h)
    if 0.9 < ar < 1.1:
        square_rects.append(((x, y), (x + w, y + h)))
        square_areas.append(w * h)  # store area information
We need to remove anything that is too small from the list, as follows:
import statistics

median_size_limit = statistics.median(square_areas) * 0.8
square_rects = [rect for i, rect in enumerate(square_rects)
                if square_areas[i] > median_size_limit]
You can visually check the output by drawing all the rectangles on the original image:
for rect in square_rects:
    cv2.rectangle(image, rect[0], rect[1], (0, 255, 0), 2)

cv2.imwrite("_output_image.png", image)
cv2.imshow("image", image)
cv2.waitKey()
You can use "square_rects" to locate all the squares and crop them from the original image.
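For completeness, here is a small sketch of that cropping step (it re-reads the image so the green rectangles drawn above are not included in the crops):

# Sketch: crop each detected square from a clean copy of the original image.
clean = cv2.imread("squares.jpg", 1)
square_crops = [clean[y1:y2, x1:x2] for (x1, y1), (x2, y2) in square_rects]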
The following is the preview of the final result.
Cheers.
I am trying to extract the account number from an image of a cheque. My logic is to find the rectangle that contains the account number, slice out the bounding rectangle, and then feed the slice into an OCR engine to get the text out of it.
The problem I am facing is that when the rectangle is not very prominent and is lightly coloured, I am not able to get the rectangle contour, since the edges are not totally connected.
How to overcome this?
Things I tried, but which did not work:
I cannot increase the erosion iterations to erode it more, because then the edges connect with the surrounding black pixels and form a different shape.
Reducing the threshold offset might help, but it seems inefficient, since the code has to work with several types of images. I could start with an offset of 10 and keep incrementing it, checking whether I have found the rectangle. This would add a lot of time for cheques with prominent rectangles that work well at an offset of 20 or more, and since I don't have a condition to check whether the edges of the rectangle are prominent, the loop would have to be applied to all the cheques.
Keeping the above points in mind. Can someone help me out with a solution to this problem?
Libraries used and versions
scikit-image==0.13.1
opencv-python==3.3.0.10
Code
from skimage.filters import threshold_adaptive, threshold_local
import cv2
import numpy as np
Step 1:
image = cv2.imread('cropped.png')
Step 2:
Using adaptive thresholding from skimage to remove the background, so that I can get the account number rectangle box. This works fine for the cheques where the rectangle is more pronounced, but when the rectangle edges are thin or lighter in colour, the threshold results in unconnected edges, because of which I am not able to find the contours. I have attached examples of this further down in the question.
account_number_block = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
account_number_block = threshold_adaptive(account_number_block, 251, offset=20)
account_number_block = account_number_block.astype("uint8") * 255
Step 3:
Erode the image a bit to try to connect small disconnections in the edges
kernel = np.ones((3,3), np.uint8)
account_number_block = cv2.erode(account_number_block, kernel, iterations=5)
Find the contours
(_, cnts, _) = cv2.findContours(account_number_block.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# cnts = sorted(cnts, key=cv2.contourArea)[:3]

rect_cnts = []  # Rectangular contours
for cnt in cnts:
    approx = cv2.approxPolyDP(cnt, 0.01 * cv2.arcLength(cnt, True), True)
    if len(approx) == 4:
        rect_cnts.append(cnt)

rect_cnts = sorted(rect_cnts, key=cv2.contourArea, reverse=True)[:1]
Working Example
Step 1: Original Image
Step 2: After thresholding to remove the background.
Step 3: Finding contours to find rectangle box of the account number.
Failing example - light rectangular boundary.
Step 1: Read original image
Step 2: After thresholding to remove the background. Notice that the edges of the rectangle are not connected, because of which I am not able to get the contour out of it.
Step 3: Finding contours to find rectangle box of the account number.
import numpy as np
import cv2
import pytesseract as pt
from PIL import Image

# Run Main
if __name__ == "__main__":

    image = cv2.imread("image.jpg", -1)

    # resize image to speed up computation
    rows, cols, _ = image.shape
    image = cv2.resize(image, (np.int32(cols/2), np.int32(rows/2)))

    # convert to gray and binarize
    gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    binary_img = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 9)

    # note: erosion and dilation work on a white foreground
    binary_img = cv2.bitwise_not(binary_img)

    # dilate the image to fill the gaps
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    dilated_img = cv2.morphologyEx(binary_img, cv2.MORPH_DILATE, kernel, iterations=2)

    # find contours, discard contours which do not belong to a rectangle
    (_, cnts, _) = cv2.findContours(dilated_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    rect_cnts = []  # Rectangular contours
    for cnt in cnts:
        approx = cv2.approxPolyDP(cnt, 0.01 * cv2.arcLength(cnt, True), True)
        if len(approx) == 4:
            rect_cnts.append(cnt)

    # sort contours based on area
    rect_cnts = sorted(rect_cnts, key=cv2.contourArea, reverse=True)[:1]

    # find bounding rectangle of biggest contour
    box = cv2.boundingRect(rect_cnts[0])
    x, y, w, h = box[:]

    # extract rectangle from the original image
    newimg = image[y:y+h, x:x+w]

    # use 'pytesseract' to get the text in the new image
    text = pt.image_to_string(Image.fromarray(newimg))
    print(text)

    cv2.namedWindow('Image', cv2.WINDOW_NORMAL)
    cv2.imshow('Image', newimg)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
result: 03541140011724
result: 34785736216
I want to detect the text areas of images using Python 2.7 and OpenCV 2.4.9, and draw a rectangle around each of them, as shown in the example image below.
I am new to image processing so any idea how to do this will be appreciated.
There are multiple ways to go about detecting text in an image.
I recommend looking at this question here, for it may answer your case as well. Although it is not in Python, the code can easily be translated from C++ to Python (just look at the API and convert the methods; it's not hard, I did it myself when I tried their code for my own separate problem). The solutions there may not work for your case, but I recommend trying them out.
If I were to go about this I would do the following process:
Prep your image:
If all of the images you want to edit are roughly like the one you provided, where the actual design consists of a range of gray colors and the text is always black, I would first white out all content that is not black (or already white). Doing so will leave only the black text.
# must import if working with opencv in python
import numpy as np
import cv2

# keeps only the pixels whose brightness (HSV value channel) lies in the
# range [lower_val, upper_val]; everything else is masked out
def remove_gray(img, lower_val, upper_val):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower_bound = np.array([0, 0, lower_val])
    upper_bound = np.array([255, 255, upper_val])
    mask = cv2.inRange(hsv, lower_bound, upper_bound)
    return cv2.bitwise_and(img, img, mask=mask)
Now that all you have left is the black text, the goal is to get those boxes. As stated before, there are different ways of going about this.
Stroke Width Transform (SWT)
The typical way to find text areas: you can find text regions by using the stroke width transform, as described in "Detecting Text in Natural Scenes with Stroke Width Transform" by Boris Epshtein, Eyal Ofek, and Yonatan Wexler. To be honest, if this is as fast and reliable as I believe it is, then it is a more efficient method than my code below. You can still use the code above to remove the blueprint design though, and that may help the overall performance of the SWT algorithm.
Here is a C library that implements their algorithm, but it is stated to be very raw and the documentation to be incomplete. Obviously, a wrapper will be needed in order to use this library with Python, and at the moment I do not see an official one offered.
The library I linked is CCV. It is a library that is meant to be used in your applications, not to recreate algorithms. So this is a tool to be used, which goes against the OP's wish to build it from "first principles", as stated in the comments. Still, it is useful to know it exists if you don't want to code the algorithm yourself.
Home Brewed Non-SWT Method
If you have metadata for each image, say in an XML file, that states how many rooms are labeled in each image, then you can access that XML file, get the number of labels in the image, and store that number in some variable, say num_of_labels. Now take your image and put it through a while loop that erodes at a set rate you specify, finding the external contours in the image on each iteration and stopping once you have the same number of external contours as num_of_labels. Then simply find each contour's bounding box and you are done.
# erodes image based on given kernel size (erosion = expands black areas)
def erode(img, kern_size=3):
    retval, img = cv2.threshold(img, 254.0, 255.0, cv2.THRESH_BINARY)  # threshold to deal with only black and white.
    kern = np.ones((kern_size, kern_size), np.uint8)  # make a kernel for erosion based on given kernel size.
    eroded = cv2.erode(img, kern, iterations=1)  # erode your image to blobbify black areas
    y, x = eroded.shape  # get shape of image to make a white border around image of 1px, to avoid problems with find contours.
    return cv2.rectangle(eroded, (0, 0), (x, y), (255, 255, 255), 1)

# finds contours of eroded image
def prep(img, kern_size=3):
    img = erode(img, kern_size)
    retval, img = cv2.threshold(img, 200.0, 255.0, cv2.THRESH_BINARY_INV)  # invert colors for findContours
    return cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # find contours of image (OpenCV 3.x returns three values)

# given img & number of desired blobs, returns contours of blobs.
def blobbify(img, num_of_labels, kern_size=3, dilation_rate=10):
    prep_img, contours, hierarchy = prep(img.copy(), kern_size)  # erode img and check current contour count.
    previous = (prep_img, contours, hierarchy)
    while len(contours) > num_of_labels:
        kern_size += dilation_rate  # add dilation_rate to kern_size to grow the blobs. Remember kern_size must always be odd.
        previous = (prep_img, contours, hierarchy)
        prep_img, contours, hierarchy = prep(img.copy(), kern_size)  # erode img and check current contour count, again.
    if len(contours) < num_of_labels:
        return previous  # overshot the target count, fall back to the previous iteration
    return (prep_img, contours, hierarchy)

# finds bounding boxes of all contours
def bounding_box(contours):
    bBox = []
    for curve in contours:
        box = cv2.boundingRect(curve)
        bBox.append(box)
    return bBox
The resulting boxes from the above method will have space around the labels, and this may include part of the original design if the boxes are applied to the original image. To avoid this, make regions of interest from your newfound boxes and trim the white space, then save each ROI's shape as your new box.
Perhaps you have no way of knowing how many labels will be in the image. If this is the case, then I recommend playing around with erosion values until you find the best one to suit your case and get the desired blobs.
Or you could try finding contours on the remaining content after removing the design, and combining bounding boxes into one rectangle based on their distance from each other, as sketched below.
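A rough sketch of that merging idea, feeding in the (x, y, w, h) boxes returned by bounding_box() above (the 20-pixel gap is an assumed value to tune):

# Sketch: fuse boxes that sit within `gap` pixels of each other into one rectangle.
def merge_close_boxes(boxes, gap=20):
    boxes = [list(b) for b in boxes]
    changed = True
    while changed:
        changed = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                x1, y1, w1, h1 = boxes[i]
                x2, y2, w2, h2 = boxes[j]
                # merge the pair if their gap-expanded rectangles overlap
                if (x1 - gap < x2 + w2 and x2 - gap < x1 + w1 and
                        y1 - gap < y2 + h2 and y2 - gap < y1 + h1):
                    nx, ny = min(x1, x2), min(y1, y2)
                    nw = max(x1 + w1, x2 + w2) - nx
                    nh = max(y1 + h1, y2 + h2) - ny
                    boxes[i] = [nx, ny, nw, nh]
                    del boxes[j]
                    changed = True
                    break
            if changed:
                break
    return boxes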
After you found your boxes, simply use those boxes with respect to the original image and you will be done.
Scene Text Detection Module in OpenCV 3
As mentioned in the comments to your question, there already exists a means of scene text detection (not document text detection) in OpenCV 3. I understand you do not have the ability to switch versions, but for those with the same question who are not limited to an older OpenCV version, I decided to include this at the end. Documentation for the scene text detection can be found with a simple Google search.
The OpenCV module for text detection also comes with text recognition that implements Tesseract, which is a free, open-source text recognition engine. The downside of Tesseract, and therefore of OpenCV's scene text recognition module, is that it is not as refined as commercial applications and is time consuming to use, which decreases its performance; but it's free, so it's the best we've got without paying money, if you want text recognition as well.
Links:
Documentation OpenCv
Older Documentation
The source code is located here, for analysis and understanding
Honestly, I lack the experience and expertise in both OpenCV and image processing to provide a detailed way of implementing their text detection module; the same goes for the SWT algorithm. I just got into this stuff these past few months, but as I learn more I will edit this answer.
Here's a simple image processing approach using only thresholding and contour filtering:
Obtain binary image. Load image, convert to grayscale, Gaussian blur, and adaptive threshold
Combine adjacent text. We create a rectangular structuring kernel then dilate to form a single contour
Filter for text contours. We find contours and filter using contour area. From here we can draw the bounding box with cv2.rectangle()
Using this original input image (removed red lines)
After converting the image to grayscale and Gaussian blurring, we adaptive threshold to obtain a binary image
Next we dilate to combine the text into a single contour
From here we find contours and filter using a minimum threshold area (in case there was small noise). Here's the result
If we wanted to, we could also extract and save each ROI using Numpy slicing
Code
import cv2
# Load image, grayscale, Gaussian blur, adaptive threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (9,9), 0)
thresh = cv2.adaptiveThreshold(blur,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,11,30)
# Dilate to combine adjacent text contours
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9,9))
dilate = cv2.dilate(thresh, kernel, iterations=4)
# Find contours, highlight text areas, and extract ROIs
cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
ROI_number = 0
for c in cnts:
    area = cv2.contourArea(c)
    if area > 10000:
        x,y,w,h = cv2.boundingRect(c)
        cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 3)
        # ROI = image[y:y+h, x:x+w]
        # cv2.imwrite('ROI_{}.png'.format(ROI_number), ROI)
        # ROI_number += 1
cv2.imshow('thresh', thresh)
cv2.imshow('dilate', dilate)
cv2.imshow('image', image)
cv2.waitKey()