I want to detect the text area of images using python 2.7 and opencv 2.4.9
and draw a rectangle around it, like in the example image below.
I am new to image processing, so any idea how to do this will be appreciated.
There are multiple ways to go about detecting text in an image.
I recommend looking at this question, as it may answer your case as well. Although it is not in Python, the code can easily be translated from C++ to Python (just look at the API and convert the methods from C++ to Python; it's not hard. I did it myself when I tried their code for a separate problem of my own). The solutions there may not work for your case, but I recommend trying them out.
If I were to go about this I would do the following process:
Prep your image:
If all of the images you want to edit are roughly like the one you provided, where the actual design consists of a range of gray colors and the text is always black, I would first white out all content that is not black (or already white). Doing so will leave only the black text.
# must import if working with opencv in python
import numpy as np
import cv2
# keeps only the pixels whose HSV value (brightness) falls within
# [lower_val, upper_val]; everything else is masked out
def remove_gray(img, lower_val, upper_val):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower_bound = np.array([0, 0, lower_val])
    upper_bound = np.array([255, 255, upper_val])
    mask = cv2.inRange(hsv, lower_bound, upper_bound)
    return cv2.bitwise_and(img, img, mask=mask)
Now that all you have left is the black text, the goal is to get boxes around that text. As stated before, there are different ways of going about this.
Stroke Width Transform (SWT)
The typical way to find text areas: you can find text regions using the stroke width transform, as depicted in "Detecting Text in Natural Scenes with Stroke Width Transform" by Boris Epshtein, Eyal Ofek, and Yonatan Wexler. To be honest, if this is as fast and reliable as I believe it is, then it is more efficient than my code below. You can still use the code above to remove the blueprint design, though, and that may help the overall performance of the SWT algorithm.
Here is a C library that implements their algorithm, but it is stated to be very raw and the documentation incomplete. Obviously, a wrapper will be needed to use the library with Python, and at the moment I do not see an official one offered.
The library I linked is CCV. It is a library meant to be used in your applications, not to recreate the algorithm, so it is a tool to be used, which goes against OP's wish to build it from "first principles", as stated in the comments. Still, it is useful to know it exists if you don't want to code the algorithm yourself.
Home Brewed Non-SWT Method
If you have metadata for each image, say in an xml file, that states how many rooms are labeled in each image, then you can access that xml file, read how many labels are in the image, and store that number in a variable, say num_of_labels. Now put your image through a while loop that erodes at a set rate you specify, finding external contours in the image on each pass and stopping once you have the same number of external contours as num_of_labels. Then simply find each contour's bounding box and you are done.
# erodes image based on given kernel size (erosion = expands black areas)
def erode(img, kern_size=3):
    retval, img = cv2.threshold(img, 254.0, 255.0, cv2.THRESH_BINARY)  # threshold so we deal with only black and white
    kern = np.ones((kern_size, kern_size), np.uint8)  # make a kernel for erosion based on given kernel size
    eroded = cv2.erode(img, kern, iterations=1)  # erode your image to blobbify black areas
    y, x = eroded.shape  # get shape of image to make a 1px white border, to avoid problems with findContours
    cv2.rectangle(eroded, (0, 0), (x - 1, y - 1), 255, 1)
    return eroded
# erodes image and finds contours of the result
def prep(img, kern_size=3):
    img = erode(img, kern_size)
    retval, img = cv2.threshold(img, 200.0, 255.0, cv2.THRESH_BINARY_INV)  # invert colors for findContours
    contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]  # [-2:] works across opencv versions
    return img, contours, hierarchy
# given img & number of desired blobs, returns contours of blobs.
def blobbify(img, num_of_labels, kern_size=3, dilation_rate=10):
    prep_img, contours, hierarchy = prep(img.copy(), kern_size)  # erode img and check current contour count
    while len(contours) > num_of_labels:
        kern_size += dilation_rate  # grow the kernel to enlarge the blobs (an even rate keeps kern_size odd)
        previous = (prep_img, contours, hierarchy)
        prep_img, contours, hierarchy = prep(img.copy(), kern_size)  # erode img and check contour count again
        if len(contours) < num_of_labels:
            return previous  # overshot the target count, so fall back to the previous pass
    return (prep_img, contours, hierarchy)
# finds bounding boxes of all contours
def bounding_box(contours):
    bBox = []
    for curve in contours:
        box = cv2.boundingRect(curve)
        bBox.append(box)
    return bBox
The resulting boxes from the above method will have space around the labels, and this may include part of the original design if the boxes are applied to the original image. To avoid this, make regions of interest via your newfound boxes and trim the white space. Then save that ROI's shape as your new box.
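Here is a rough sketch of that trimming, building on the imports and bounding_box() above; the threshold of 128 for "dark" pixels is an assumption you may need to tune:
def trim_boxes(img, boxes):
    trimmed = []
    for x, y, w, h in boxes:
        roi = img[y:y + h, x:x + w]
        ys, xs = np.where(roi < 128)  # coordinates of dark (text) pixels in the box
        if len(xs) == 0:
            continue  # the box held nothing but whitespace
        trimmed.append((x + xs.min(), y + ys.min(),
                        xs.max() - xs.min() + 1, ys.max() - ys.min() + 1))
    return trimmed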
Perhaps you have no way of knowing how many labels will be in the image. If this is the case, then I recommend playing around with erosion values until you find the best one to suit your case and get the desired blobs.
Or you could run find contours on the remaining content after removing the design, and combine bounding boxes into one rectangle based on their distance from each other.
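A rough sketch of that merging, assuming boxes is a list of (x, y, w, h) tuples from bounding_box() and max_gap is the pixel distance below which two boxes are considered part of the same label:
def merge_boxes(boxes, max_gap=20):
    boxes = list(boxes)
    merged = True
    while merged:  # repeat until a full pass makes no merges
        merged = False
        out = []
        for x, y, w, h in boxes:
            for i, (x2, y2, w2, h2) in enumerate(out):
                # grow the boxes by max_gap and test for overlap
                if (x < x2 + w2 + max_gap and x2 < x + w + max_gap and
                        y < y2 + h2 + max_gap and y2 < y + h + max_gap):
                    nx, ny = min(x, x2), min(y, y2)
                    out[i] = (nx, ny,
                              max(x + w, x2 + w2) - nx,
                              max(y + h, y2 + h2) - ny)
                    merged = True
                    break
            else:
                out.append((x, y, w, h))
        boxes = out
    return boxes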
Once you have your boxes, simply use them with respect to the original image and you will be done.
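For instance, a minimal sketch tying the helpers together; orig (the original color image) and num_of_labels (read from your xml metadata) are assumed here:
prep_img, contours, hierarchy = blobbify(img, num_of_labels)
for x, y, w, h in bounding_box(contours):
    cv2.rectangle(orig, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draw each label's box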
Scene Text Detection Module in OpenCV 3
As mentioned in the comments to your question, a means of scene text detection (not document text detection) already exists in opencv 3. I understand you do not have the ability to switch versions, but for those with the same question who are not limited to an older opencv version, I decided to include this at the end. Documentation for the scene text detection can be found with a simple google search.
The opencv module for text detection also comes with text recognition that implements Tesseract, a free, open-source text recognition engine. The downfall of Tesseract, and therefore of opencv's scene text recognition module, is that it is not as refined as commercial applications and is time-consuming to use, which hurts its performance. But it's free, so it's the best we've got without paying money, if you want text recognition as well.
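For those not stuck on opencv 2.4, here is a rough sketch of the contrib text module's detection API. This is an assumption-heavy sketch: it requires an opencv build with the contrib text module, plus the two trained classifier xml files that ship with its sample data, and the exact function set varies between releases.
import cv2
img = cv2.imread('scene.jpg')  # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# load the two-stage classifiers shipped with the opencv_contrib text samples
erc1 = cv2.text.loadClassifierNM1('trained_classifierNM1.xml')
er1 = cv2.text.createERFilterNM1(erc1, 16, 0.00015, 0.13, 0.2, True, 0.1)
erc2 = cv2.text.loadClassifierNM2('trained_classifierNM2.xml')
er2 = cv2.text.createERFilterNM2(erc2, 0.5)
regions = cv2.text.detectRegions(gray, er1, er2)
for region in regions:
    x, y, w, h = cv2.boundingRect(region.reshape(-1, 1, 2))
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)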
Links:
Documentation OpenCv
Older Documentation
The source code is located here, for analysis and understanding
Honestly, I lack the experience and expertise in both opencv and image processing to provide a detailed way of implementing their text detection module, and the same goes for the SWT algorithm. I just got into this stuff these past few months, but as I learn more I will edit this answer.
Here's a simple image processing approach using only thresholding and contour filtering:
Obtain binary image. Load image, convert to grayscale, Gaussian blur, and adaptive threshold
Combine adjacent text. We create a rectangular structuring kernel then dilate to form a single contour
Filter for text contours. We find contours and filter using contour area. From here we can draw the bounding box with cv2.rectangle()
Using this original input image (removed red lines)
After converting the image to grayscale and Gaussian blurring, we adaptively threshold to obtain a binary image
Next we dilate to combine the text into a single contour
From here we find contours and filter using a minimum threshold area (in case there was small noise). Here's the result
If we wanted to, we could also extract and save each ROI using Numpy slicing
Code
import cv2
# Load image, grayscale, Gaussian blur, adaptive threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (9,9), 0)
thresh = cv2.adaptiveThreshold(blur,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,11,30)
# Dilate to combine adjacent text contours
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9,9))
dilate = cv2.dilate(thresh, kernel, iterations=4)
# Find contours, highlight text areas, and extract ROIs
cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
ROI_number = 0
for c in cnts:
    area = cv2.contourArea(c)
    if area > 10000:
        x,y,w,h = cv2.boundingRect(c)
        cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 3)
        # ROI = image[y:y+h, x:x+w]
        # cv2.imwrite('ROI_{}.png'.format(ROI_number), ROI)
        # ROI_number += 1
cv2.imshow('thresh', thresh)
cv2.imshow('dilate', dilate)
cv2.imshow('image', image)
cv2.waitKey()
Related
I have a microscopy image and need to calculate the area shown in red. The idea is to build a function that returns the area inside the red line in the right-hand photo (a float value, X mm²).
Since I have almost no experience in image processing, I don't know how to approach this (maybe silly) problem, so I'm looking for help. Other image examples are pretty similar, with just one agglomerated "interest area" close to the center.
I'm comfortable coding in python and have used the software ImageJ for some time.
Any python package, software, bibliography, etc. should help.
Thanks!
EDIT:
I made the example in red manually, just so people understand what I want. Detecting the "interest area" must be done inside the code.
Canny, morphological transformations and contours can provide a decent result, although it might need some fine-tuning depending on the input images.
import numpy as np
import cv2
# Change this with your filename
image = cv2.imread('test.png', cv2.IMREAD_GRAYSCALE)
# You can fine-tune this, or try with simple threshold
canny = cv2.Canny(image, 50, 580)
# Morphological Transformations
se = np.ones((7,7), dtype='uint8')
image_close = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, se)
contours, _ = cv2.findContours(image_close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Create black canvas to draw area on
mask = np.zeros(image.shape[:2], np.uint8)
biggest_contour = max(contours, key = cv2.contourArea)
cv2.drawContours(mask, [biggest_contour], -1, 255, -1)  # wrap the contour in a list so it is drawn filled, not as single points
area = cv2.contourArea(biggest_contour)
print(f"Total area image: {image.shape[0] * image.shape[1]} pixels")
print(f"Total area contour: {area} pixels")
cv2.imwrite('mask.png', mask)
cv2.imshow('img', mask)
cv2.waitKey(0)
# Draw the contour on the original image for demonstration purposes
original = cv2.imread('test.png')
cv2.drawContours(original, [biggest_contour], -1, (0, 0, 255), -1)  # wrapped in a list, as above
cv2.imwrite('result.png', original)
cv2.imshow('result', original)
cv2.waitKey(0)
The code produces the following output:
Total area image: 332628 pixels
Total area contour: 85894.5 pixels
The only thing left to do is convert the pixels to your preferred measurement.
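If you know the image scale, the conversion is one line; the 0.8 µm per pixel here is a hypothetical calibration value, not one taken from your setup:
um_per_px = 0.8  # hypothetical microscope calibration
area_mm2 = area * (um_per_px / 1000.0) ** 2  # px² -> mm²
print(f"Total area contour: {area_mm2:.4f} mm²")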
Two images for demonstration below.
Result
Mask
I don't know much about this topic, but it seems like a very similar question to this one, which has what look like two great answers:
Python: calculate an area within an irregular contour line
Similar questions have been asked, but none of them seem to help in my case (nevertheless, I learned a few things from those threads).
I am using Tesseract for OCR, but the results are far from satisfactory when the text is slightly skewed (see the image above).
Inspired by similar cases, I tried to use OpenCV to detect and fix the skew, but unfortunately it just doesn't seem to work. Below you can see my current attempt, which doesn't yield the necessary result. All I get is yet another bounding box around the image (which has already been cropped).
import cv2
from matplotlib import pyplot as plt
import numpy as np
img = cv2.imread("skew.JPG")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#gray = cv2.bitwise_not(gray)
ret,thresh1 = cv2.threshold(gray, 0, 255 ,cv2.THRESH_OTSU)
rect_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 2))
dilation = cv2.dilate(thresh1, rect_kernel, iterations = 1)
cv2.imshow('dilation', dilation)
cv2.waitKey(0)
cv2.destroyAllWindows()
contours, hierarchy = cv2.findContours(dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    rect = cv2.minAreaRect(cnt)
    box = cv2.boxPoints(rect)
    box = np.int0(box)
    cv2.drawContours(img,[box],0,(0,0,255),3)
cv2.imshow('final', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
I would appreciate any advice.
Tesseract seems to have a lot of trouble when the text has some distortion.
The idea is to find the contour of the text to be able to undistort the image and then use Tesseract.
The contour is generally a rectangle which has undergone the same distortion as the text, so it no longer appears as a perfect rectangle in your image. Opencv gives you different methods to find it. cv2.minAreaRect() finds the best rotated rectangle, which may be sufficient depending on the distortion of your text. Otherwise, you can use cv2.convexHull() to fit your text more tightly.
The contour should give you the corners of the text that you want to remap to a regular rectangle. You can do that with:
cv2.getAffineTransform(corners, dest_corners) # requires 3 points
cv2.getPerspectiveTransform(corners, dest_corners) # requires 4 points
and then
cv2.warpAffine(...)
cv2.warpPerspective(...)
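For instance, a minimal sketch of the perspective route, assuming thresh is a binary image with white text on a black background and the text forms a single block:
import cv2
import numpy as np
pts = cv2.findNonZero(thresh)  # all text pixel coordinates
rect = cv2.minAreaRect(pts)    # best rotated rectangle around the text
corners = cv2.boxPoints(rect).astype(np.float32)
w, h = int(rect[1][0]), int(rect[1][1])
# boxPoints is documented to return bottom-left, top-left, top-right,
# bottom-right; verify on your images and reorder if needed
dest_corners = np.float32([[0, h], [0, 0], [w, 0], [w, h]])
M = cv2.getPerspectiveTransform(corners, dest_corners)
straight = cv2.warpPerspective(thresh, M, (w, h))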
Also, don't forget to correctly set the page segmentation method Tesseract should use (https://github.com/tesseract-ocr/tesseract/wiki/ImproveQuality). In your case, "6 Assume a single uniform block of text." seems appropriate.
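For example, with pytesseract, assuming straight is the deskewed image from the sketch above:
import pytesseract
text = pytesseract.image_to_string(straight, config='--psm 6')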
I am new to OpenCV and Python. I made a program that finds contours with an area above 500 and saves them into a new image. I used boundingRect as advised on the internet; it runs and does the job well, but I have a problem with the output of one image. It seems that noise beside the region of interest is also saved. As you can see in the image below, there are some tiny shapes beside the ROI. The output is good for other images; I just want to get rid of noise like this. Is there a way to remove this kind of noise from the output?
Here is the output of the program I made:
Here is the input image:
Hide with contouring
This solution uses cv2.drawContours() to simply draw black contours over the noise. I ran the black and white sample image through a few iterations of dilation, filtered contours by area, and then drew black contour lines over the noise. I used the threshold feature because there turned out to be a good bit of minuscule noise in what initially appeared to be a simple black and white image.
Input:
Code:
import cv2
import numpy as np
thresh_value = 10
img = cv2.imread("cells_BW.jpg")
img = cv2.medianBlur(img, 5)
kernel = np.ones((3, 3), np.uint8)  # dilate expects a kernel array, not a bare tuple
dilation = cv2.dilate(img, kernel, iterations=3)
img_gray = cv2.cvtColor(dilation, cv2.COLOR_BGR2GRAY)  # grayscale the dilated image
(T, thresh) = cv2.threshold(img_gray, thresh_value, 255, cv2.THRESH_BINARY)
_, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = [i for i in contours if cv2.contourArea(i) < 5000]  # keep only the small (noise) contours
cv2.drawContours(img, contours, -1, (0,0,0), 10, lineType=8)
cv2.imwrite("cells_BW_CLEAN.jpg", img)
Output:
There could be several solutions, depending on the assumptions about the input data.
Probable Methods
If the ROI has a significantly different color than the others,
1-1. You can threshold the input image using RGB before finding the contour.
If the area of the object you want to find is significantly bigger than the others,
2-1. Fill the holes like this example
2-2. Calculate the size of the blobs, and exclude all blobs except the largest one (example to calculate the size of blobs); a rough sketch follows this list.
If there is an intersection point between the contours of multiple objects, method 2 will surely fail to segment the region of a single cell.
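A rough sketch of method 2-2, assuming thresh is your binary image (the [-2] indexing keeps it working across opencv versions):
import cv2
import numpy as np
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
largest = max(contours, key=cv2.contourArea)  # biggest blob by area
clean = np.zeros_like(thresh)
cv2.drawContours(clean, [largest], -1, 255, -1)  # redraw only the largest blob, filled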
The objective is to create bounding boxes using text recognition methods (e.g. OpenCV) for US floor plan images, which can then be fed into a text reader (e.g. an LSTM or Tesseract).
Several methods have been attempted, including cv2.findContours and cv2.boundingRect, but they have largely failed to generalise to different types of floor plans (there is wide variation in how the floor plans look).
For example, applying grayscale conversion, adaptive thresholds, and erosion and dilation (with various iterations) before cv2.findContours results in the below. Note that Bedroom 2 and Kitchen are not being picked up correctly.
Additional example which fails to find any regions:
Any thoughts on text recognition models or cleaning procedures that will improve the accuracy of the text recognition model, preferably with code examples?
This answer is based on the assumption that the images are similar to one another (in size, thickness of walls, letters, and so on). If they are not, this wouldn't be a good approach, because you would have to change the thresholds for every image. That being said, I would transform the image to binary and search for contours. After that you can add criteria like height and width to filter out the walls. Then you can draw the contours on a mask and dilate the image; that will combine letters close to each other into one contour. Then you can create a bounding box for each contour, which is your ROI, and use any OCR on that region. Hope it helps a bit. Cheers!
Example:
import cv2
import numpy as np
img = cv2.imread('floor.png')
mask = np.zeros(img.shape, dtype=np.uint8)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, threshold = cv2.threshold(gray,150,255,cv2.THRESH_BINARY_INV)
_, contours, hierarchy = cv2.findContours(threshold,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
ROI = []
for cnt in contours:
    x,y,w,h = cv2.boundingRect(cnt)
    if h < 20:
        cv2.drawContours(mask, [cnt], 0, (255,255,255), 1)
kernel = np.ones((7,7),np.uint8)
dilation = cv2.dilate(mask,kernel,iterations = 1)
gray_d = cv2.cvtColor(dilation, cv2.COLOR_BGR2GRAY)
_, threshold_d = cv2.threshold(gray_d,150,255,cv2.THRESH_BINARY)
_, contours_d, hierarchy = cv2.findContours(threshold_d,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
for cnt in contours_d:
    x,y,w,h = cv2.boundingRect(cnt)
    if w > 35:
        cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
        roi_c = img[y:y+h, x:x+w]
        ROI.append(roi_c)
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
I'm trying to develop a script using Python and OpenCV to detect some highlighted regions on a scanned instrumentation diagram and output text using Tesseract's OCR function. My workflow is first to detect the general vicinity of the region of interest, then apply processing steps to remove everything aside from the blocks of text (lines, borders, noise). The processed image is then fed into Tesseract's OCR engine.
This workflow works on about half of the images, but fails on the rest because the text touches the borders. I'll show some examples of what I mean below:
Step 1: Find regions of interest by creating a mask using InRange with the color range of the highlighter.
Step 2: Contour regions of interest, crop and save to file.
--- Referenced code begins here ---
Step 3: Threshold image and apply Canny Edge Detection
Step 4: Contour the edges and filter them into circular shapes using cv2.approxPolyDP, looking at the ones with more than 8 vertices. The first or second largest contour usually corresponds to the inner edge.
Step 5: Using masks and bitwise operations, everything inside the contour is transferred to a white background image. Dilation and erosion are applied to de-noise the image and create the final image that gets fed into the OCR engine.
import cv2
import numpy as np
import pytesseract
pytesseract.pytesseract.tesseract_cmd = 'C:/Program Files (x86)/Tesseract-OCR/tesseract'
d_path = "Test images\\"
img_name = "cropped_12.jpg"
img = cv2.imread(d_path + img_name) # Reads the image
## Resize image before calculating contour
height, width = img.shape[:2]
img = cv2.resize(img,(2*width,2*height),interpolation = cv2.INTER_CUBIC)
img_orig = img.copy() # Makes copy of original image
img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) # Convert to grayscale
# Apply threshold to get binary image and write to file
_, img = cv2.threshold(img,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Edge detection
edges = cv2.Canny(img,100,200)
# Find contours of mask threshold
_, contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Find contours associated w/ polygons with 8 sides or more
cnt_list = []
area_list = [cv2.contourArea(c) for c in contours]
for j in contours:
    poly_pts = cv2.approxPolyDP(j,0.01*cv2.arcLength(j,True),True)
    area = cv2.contourArea(j)
    if (len(poly_pts) > 8) & (area == max(area_list)):
        cnt_list.append(j)
cv2.drawContours(img_orig, cnt_list, -1, (255,0,0), 2)
# Show contours
cv2.namedWindow('Show',cv2.WINDOW_NORMAL)
cv2.imshow("Show",img_orig)
cv2.waitKey()
cv2.destroyAllWindows()
# Zero pixels outside circle
mask = np.zeros(img.shape).astype(img.dtype)
cv2.fillPoly(mask, cnt_list, (255,255,255))
mask_inv = cv2.bitwise_not(mask)
a = cv2.bitwise_and(img,img,mask = mask)
wh_back = np.ones(img.shape).astype(img.dtype)*255
b = cv2.bitwise_and(wh_back,wh_back,mask = mask_inv)
res = cv2.add(a,b)
# Get rid of noise
kernel = np.ones((2, 2), np.uint8)
res = cv2.dilate(res, kernel, iterations=1)
res = cv2.erode(res, kernel, iterations=1)
# Show final image
cv2.namedWindow('result',cv2.WINDOW_NORMAL)
cv2.imshow("result",res)
cv2.waitKey()
cv2.destroyAllWindows()
When code works, these are the images that get outputted:
Working
However, in the instances where the text touches the circular border, the code assumes part of the text is part of the larger contour and ignores the last letter. For example:
Not working
Are there any processing steps that can help me bypass this problem? Or perhaps a different approach? I've tried using Hough Circle Transforms to detect the borders, but they're quite finicky and don't work as well as contouring.
I'm quite new to OpenCV and Python so any help would be appreciated.
If the Hough circle transform didn't work for you, I think your best option will be to approximate the border shape. The best method I know for that is the Douglas-Peucker algorithm, which will simplify your contour by reducing the number of points along its perimeter.
You can check this reference file from OpenCV to see the type of post-processing you can apply to your border. They also mention Douglas-Peucker:
OpenCV border processing
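In OpenCV, Douglas-Peucker is cv2.approxPolyDP; a minimal sketch, assuming cnt is the border contour you already found:
import cv2
epsilon = 0.01 * cv2.arcLength(cnt, True)      # tolerance: 1% of the perimeter
approx = cv2.approxPolyDP(cnt, epsilon, True)  # simplified border polygon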
Just a hunch: after OTSU thresholding, erode and then dilate the image. This will make very thin joints vanish. The code for this is below.
kernel = np.ones((5,5),np.uint8)
th3 = cv2.erode(th3, kernel,iterations=1)
th3 = cv2.dilate(th3, kernel,iterations=1)
Let me know how it goes. I have a couple more ideas if this doesn't work.