I'm using OpenCV to find tabular data within images so that I can run OCR on it. So far I have been able to find the table in the image, find the columns of the table, and then find each cell within each column. It works pretty well, but I'm having an issue with the cell walls getting stuck in my images, and I'm unable to remove them reliably. This is one example that I'm having difficulty with. This would be another example.
I have tried several approaches to clean these images up. I have had the most luck with finding the contours.
import cv2

# img is the input cell image (BGR)
img2gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(img2gray, 180, 255, cv2.THRESH_BINARY)
image_final = cv2.bitwise_and(img2gray, img2gray, mask=mask)
ret, new_img = cv2.threshold(image_final, 180, 255, cv2.THRESH_BINARY_INV)
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (5, 3))
# dilate; more iterations mean more dilation
dilated = cv2.dilate(new_img, kernel, iterations=3)
cv2.imwrite('../test_images/find_contours_dilated.png', dilated)
I have been toying with the kernel size and dilation iterations and have found this to be the best configuration.
Another approach I used was with PIL, but it is only really good if the border is uniform around the whole image, which in my case it is not.
from PIL import Image, ImageChops
import numpy as np

def crop_borders(img):
    copy = Image.fromarray(img)
    try:
        bg = Image.new(copy.mode, copy.size, copy.getpixel((0, 0)))
    except Exception:
        return None
    diff = ImageChops.difference(copy, bg)
    diff = ImageChops.add(diff, diff, 2.0, -100)
    bbox = diff.getbbox()
    if bbox:
        return np.array(copy.crop(bbox))
There were a few other ideas that I tried, but none got me very far. Any help would be appreciated.
You could try finding the contours and "drawing" them out. That is, you can manually draw a border connected with the "walls" using cv2.rectangle on the borders of your image (that will combine all the contours of the walls). The two biggest contours will be the outer and inner lines of your walls, and you can draw those contours in white to remove the border. Then apply the threshold again to remove the rest of the noise. Cheers!
Example:
import cv2
import numpy as np
# Read the image
img = cv2.imread('borders2.png')
# Get image shape
h, w, channels = img.shape
# Draw a rectangle on the border to combine the wall to one contour
cv2.rectangle(img,(0,0),(w,h),(0,0,0),2)
# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Apply binary threshold
_, threshold = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY_INV)
# Search for contours and sort them by size
_, contours, hierarchy = cv2.findContours(threshold,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
# Sort the contours by area, from biggest to smallest
contours = sorted(contours, key=cv2.contourArea, reverse=True)
# Draw out the two biggest contours (the outer and inner wall lines) in white
cv2.drawContours(img, (contours[0], contours[1]), -1, (255,255,255), -1)
# Apply binary threshold again to the new image to remove little noises
_, img = cv2.threshold(img, 180, 255, cv2.THRESH_BINARY)
# Display results
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
I want to retrieve all contours of the image below, but ignore text.
Image:
When I try to find the contours of the current image I get the following:
I have no idea how to go about this as I am new to OpenCV and image processing. I want to ignore the text; how can I achieve this? If ignoring it is not possible, but drawing a single bounding box surrounding the text is, then that would be good too.
Edit:
Criteria that I need to match:
The contours may vary in size and shape.
The colors from the image may differ.
The colors and size of the text inside the image may differ.
Here is one way to do that in Python/OpenCV.
Read the input
Convert to grayscale
Get Canny edges
Apply morphology close to ensure they are closed
Get all contour hierarchy
Filter contours to keep only those above threshold in perimeter
Draw contours on input
Draw each contour on a black background
Save results
Input:
import numpy as np
import cv2
# read input
img = cv2.imread('short_title.png')
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# get canny edges
edges = cv2.Canny(gray, 1, 50)
# apply morphology close to ensure they are closed
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
# get contours
contours = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
contours = contours[0] if len(contours) == 2 else contours[1]
# filter contours to keep only large ones
result = img.copy()
i = 1
for c in contours:
    perimeter = cv2.arcLength(c, True)
    if perimeter > 500:
        cv2.drawContours(result, [c], -1, (0,0,255), 1)
        contour_img = np.zeros_like(img, dtype=np.uint8)
        cv2.drawContours(contour_img, [c], -1, (0,0,255), 1)
        cv2.imwrite("short_title_contour_{0}.jpg".format(i), contour_img)
        i = i + 1
# save results
cv2.imwrite("short_title_gray.jpg", gray)
cv2.imwrite("short_title_edges.jpg", edges)
cv2.imwrite("short_title_contours.jpg", result)
# show images
cv2.imshow("gray", gray)
cv2.imshow("edges", edges)
cv2.imshow("result", result)
cv2.waitKey(0)
Grayscale:
Edges:
All contours on input:
Contour 1:
Contour 2:
Contour 3:
Contour 4:
Here are two options for erasing the text:
Using pytesseract OCR.
Finding white (and small) connected components.
Both solutions build a mask, dilate the mask, and use cv2.inpaint to erase the text.
Using pytesseract:
Find text boxes using pytesseract.image_to_boxes.
Fill the boxes in the mask with 255.
Code sample:
import cv2
import numpy as np
from pytesseract import pytesseract, Output
# Tesseract path
pytesseract.tesseract_cmd = "C:\\Program Files\\Tesseract-OCR\\tesseract.exe"
img = cv2.imread('ShortAndInteresting.png')
# https://stackoverflow.com/questions/20831612/getting-the-bounding-box-of-the-recognized-words-using-python-tesseract
boxes = pytesseract.image_to_boxes(img, lang='eng', config=' --psm 6') # Run tesseract, returning the bounding boxes
h, w, _ = img.shape # assumes color image
mask = np.zeros((h, w), np.uint8)
# Fill the bounding boxes on the image
for b in boxes.splitlines():
    b = b.split(' ')
    mask = cv2.rectangle(mask, (int(b[1]), h - int(b[2])), (int(b[3]), h - int(b[4])), 255, -1)
mask = cv2.dilate(mask, np.ones((5, 5), np.uint8)) # Dilate the boxes in the mask
clean_img = cv2.inpaint(img, mask, 2, cv2.INPAINT_NS) # Remove the text using inpaint (replace the masked pixels with the neighbor pixels).
# Show mask and clean_img for testing
cv2.imshow('mask', mask)
cv2.imshow('clean_img', clean_img)
cv2.waitKey()
cv2.destroyAllWindows()
Mask:
Finding white (and small) connected components:
Use mask = cv2.inRange(img, (230, 230, 230), (255, 255, 255)) for finding the text (assume the text is white).
Finding connected components in the mask using cv2.connectedComponentsWithStats(mask, 4)
Remove large components from the mask - fill components with large area with zeros.
Code sample:
import cv2
import numpy as np
img = cv2.imread('ShortAndInteresting.png')
mask = cv2.inRange(img, (230, 230, 230), (255, 255, 255))
nlabel, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, 4) # Finding connected components with statistics
# Remove large components from the mask (fill components with large area with zeros).
for i in range(1, nlabel):
    area = stats[i, cv2.CC_STAT_AREA]  # Get area
    if area > 1000:
        mask[labels == i] = 0  # Remove large connected components from the mask (fill with zero)
mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))  # Dilate the text in the mask
cv2.imwrite('mask2.png', mask)
clean_img = cv2.inpaint(img, mask, 2, cv2.INPAINT_NS) # Remove the text using inpaint (replace the masked pixels with the neighbor pixels).
# Show mask and clean_img for testing
cv2.imshow('mask', mask)
cv2.imshow('clean_img', clean_img)
cv2.waitKey()
cv2.destroyAllWindows()
Mask:
Clean image:
Note:
My assumption is that you know how to split the image into contours, and the only issue is the presence of the text.
I would recommend using flood fill: find a seed point for each color region, then flood fill it to ignore the text values within. Hope that helps!
Refer to this example of using floodFill: https://www.programcreek.com/python/example/89425/cv2.floodFill
The example below is copied from the link above.
import cv2 as cv
import numpy as np

def fillhole(input_image):
    '''
    Input: gray binary image. Returns the filled image obtained by the floodfill method.
    Note: only holes surrounded by connected regions will be filled.
    :param input_image:
    :return:
    '''
    im_flood_fill = input_image.copy()
    h, w = input_image.shape[:2]
    mask = np.zeros((h + 2, w + 2), np.uint8)
    im_flood_fill = im_flood_fill.astype("uint8")
    cv.floodFill(im_flood_fill, mask, (0, 0), 255)
    im_flood_fill_inv = cv.bitwise_not(im_flood_fill)
    img_out = input_image | im_flood_fill_inv
    return img_out
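The linked example fills holes starting from the image corner; for the per-region idea recommended above, a minimal sketch could look like the following. The filename, seed point, and tolerances are all assumptions you would adapt to your image:

import cv2 as cv
import numpy as np

# Hypothetical input image with uniformly colored regions containing text
img = cv.imread('regions.png')
h, w = img.shape[:2]

# floodFill requires a mask two pixels larger than the image
mask = np.zeros((h + 2, w + 2), np.uint8)

# Hypothetical seed point inside one color region; repeat for each region
seed = (50, 50)
# Refill the region with its own color; the loDiff/upDiff tolerances
# control whether the fill also flows over the text pixels
fill_color = tuple(int(c) for c in img[seed[1], seed[0]])
cv.floodFill(img, mask, seed, fill_color, loDiff=(50, 50, 50), upDiff=(50, 50, 50))

cv.imshow('filled', img)
cv.waitKey(0)
cv.destroyAllWindows()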
I'm using OpenCV 4 with Python 3 to find a specific area in a black & white image.
This area is not a 100% filled shape. It may have some gaps between the white lines.
This is the base image from where I start processing:
This is the rectangle I expect (made with Photoshop):
Results I got with Hough transform lines (not accurate):
So basically, I start from the first image and I expect to find what you see in the second one.
Any idea of how to get the rectangle of the second image?
I'd like to present an approach which might be computationally less expensive than the solution in fmw42's answer, using only NumPy's nonzero function. Basically, all non-zero indices along both axes are found, and then the minima and maxima are obtained. Since we have binary images here, this approach works pretty well.
Let's have a look at the following code:
import cv2
import numpy as np
# Read image as grayscale; threshold to get rid of artifacts
_, img = cv2.threshold(cv2.imread('images/LXSsV.png', cv2.IMREAD_GRAYSCALE), 0, 255, cv2.THRESH_BINARY)
# Get indices of all non-zero elements
nz = np.nonzero(img)
# Find minimum and maximum x and y indices
y_min = np.min(nz[0])
y_max = np.max(nz[0])
x_min = np.min(nz[1])
x_max = np.max(nz[1])
# Create some output
output = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.rectangle(output, (x_min, y_min), (x_max, y_max), (0, 0, 255), 2)
# Show results
cv2.imshow('img', img)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
I borrowed the cropped image from fmw42's answer as input, and my output should be the same (or very similar):
Hope that (also) helps!
In Python/OpenCV, you can use morphology to connect all the white parts of your image and then get the outer contour. Note I have modified your image to remove the parts at the top and bottom from your screen snap.
import cv2
import numpy as np
# read image
img = cv2.imread('blackbox.png')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
_,thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY)
# apply close to connect the white areas
kernel = np.ones((75,75), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# get contours (presumably just one around the outside)
result = img.copy()
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for cntr in contours:
    x,y,w,h = cv2.boundingRect(cntr)
    cv2.rectangle(result, (x, y), (x+w, y+h), (0, 0, 255), 2)
# show thresh and result
cv2.imshow("thresh", thresh)
cv2.imshow("Bounding Box", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save resulting images
cv2.imwrite('blackbox_thresh.png',thresh)
cv2.imwrite('blackbox_result.png',result)
Input:
Image after morphology:
Result:
Here's a slight modification to @fmw42's answer. The idea of connecting the desired regions into a single contour is very similar; however, you can find the bounding rectangle directly since there's only one object. Using the same cropped input image, here's the result.
We can optionally extract the ROI too
import cv2
# Grayscale, threshold, and dilate
image = cv2.imread('3.png')
original = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Connect into a single contour and find rect
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
dilate = cv2.dilate(thresh, kernel, iterations=1)
x,y,w,h = cv2.boundingRect(dilate)
ROI = original[y:y+h,x:x+w]
cv2.rectangle(image, (x, y), (x+w, y+h), (36, 255, 12), 2)
cv2.imshow('image', image)
cv2.imshow('ROI', ROI)
cv2.waitKey()
I am new to OpenCV so I really need your help. I have a bunch of images like this one:
I need to detect the rectangle on the image, extract the text part from it and save it as a new image.
Can you please help me with this?
Thank you!
Just to add to Danyal's answer, here is example code with the steps written in comments. For this image you don't even need to perform morphological opening, but for this kind of noise in the image it is usually recommended. Cheers!
import cv2
import numpy as np
# Read the image and create a blank mask
img = cv2.imread('napis.jpg')
h,w = img.shape[:2]
mask = np.zeros((h,w), np.uint8)
# Transform to gray colorspace and invert Otsu threshold the image
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# ***OPTIONAL FOR THIS IMAGE
### Perform opening (erosion followed by dilation)
#kernel = np.ones((2,2),np.uint8)
#opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
# ***
# Search for contours, select the biggest and draw it on the mask
_, contours, hierarchy = cv2.findContours(thresh, # if you use opening then change "thresh" to "opening"
cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
cv2.drawContours(mask, [cnt], 0, 255, -1)
# Perform a bitwise operation
res = cv2.bitwise_and(img, img, mask=mask)
########### The result is a ROI with some noise
########### Clearing the noise
# Create a new mask
mask = np.zeros((h,w), np.uint8)
# Transform the resulting image to gray colorspace and Otsu threshold the image
gray = cv2.cvtColor(res,cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Search for contours and select the biggest one again
_, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
# Draw it on the new mask and perform a bitwise operation again
cv2.drawContours(mask, [cnt], 0, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)
# If you will use pytesseract it is wise to add an additional white border
# so that the letters aren't on the borders
x,y,w,h = cv2.boundingRect(cnt)
cv2.rectangle(res,(x,y),(x+w,y+h),(255,255,255),1)
# Crop the result
final_image = res[y:y+h+1, x:x+w+1]
# Display the result
cv2.imshow('img', final_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
One way to do this (if the rectangle sizes are somewhat predictable) is:
1) Convert the image to black and white.
2) Invert the image.
3) Perform morphological opening on the image from (2) with a horizontal line / rectangle (I tried with 2x30).
4) Perform morphological opening on the image from (2) with a vertical line (I tried it with 15x2).
5) Add the images from (3) and (4). You should only have a white rectangle now. You can then remove all corresponding rows and columns in the original image that are entirely zero in this image. A sketch of these steps follows below.
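Here is a minimal sketch of those steps, assuming the napis.jpg image used in the other answer; the Otsu threshold and the 2x30 / 15x2 kernel sizes are the values suggested above and may need tuning:

import cv2
import numpy as np

img = cv2.imread('napis.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Steps 1 and 2: binarize and invert in one pass with an inverted Otsu threshold
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Step 3: opening with a horizontal line keeps only long horizontal strokes
horizontal = cv2.morphologyEx(bw, cv2.MORPH_OPEN, np.ones((2, 30), np.uint8))

# Step 4: opening with a vertical line keeps only long vertical strokes
vertical = cv2.morphologyEx(bw, cv2.MORPH_OPEN, np.ones((15, 2), np.uint8))

# Step 5: add them; only the rectangle outline should remain
rectangle = cv2.add(horizontal, vertical)

cv2.imshow('rectangle', rectangle)
cv2.waitKey(0)
cv2.destroyAllWindows()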
I am using OpenCV+Python to detect and extract eyeglasses from a (face) image. I followed the line of reasoning of this post (https://stackoverflow.com/questi...), which is the following:
1) Detect the face
2) Find the largest contour in the face area which must be the outer frame of the glasses
3) Find the second and third largest contours in the face area which must be the frame of the two lenses
4) Extract the area between these contours which essentially represents the eyeglasses
However, findContours() does find the outer frame of the glasses as contour but it does not accurately find the frame of the two lenses as contours.
My results are the following:
Original image, Outer frame,
Left lens
My source code is the following:
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('Luc_Marion.jpg')
RGB_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Detect the face in the image
haar_face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
faces = haar_face_cascade.detectMultiScale(gray_img, scaleFactor=1.1, minNeighbors=8);
# Loop in all detected faces - in our case it is only one
for (x,y,w,h) in faces:
    cv2.rectangle(RGB_img, (x,y), (x+w,y+h), (255,0,0), 1)

    # Focus on the face as a region of interest
    roi = RGB_img[int(y+h/4):int(y+2.3*h/4), x:x+w]
    roi_gray = gray_img[int(y+h/4):int(y+2.3*h/4), x:x+w]

    # Apply smoothing to roi
    roi_blur = cv2.GaussianBlur(roi_gray, (5, 5), 0)

    # Use Canny to detect edges
    edges = cv2.Canny(roi_gray, 250, 300, 3)

    # Dilate and erode to thicken the edges
    kernel = np.ones((3, 3), np.uint8)
    edg_dil = cv2.dilate(edges, kernel, iterations=3)
    edg_er = cv2.erode(edg_dil, kernel, iterations=3)

    # Thresholding instead of Canny does not really make things better
    # ret, thresh = cv2.threshold(roi_blur, 127, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
    # thresh = cv2.adaptiveThreshold(blur_edg, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 115, 1)

    # Find and sort contours by contour area
    cont_img, contours, hierarchy = cv2.findContours(edg_er, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    print("Number of contours: ", len(contours))
    cont_sort = sorted(contours, key=cv2.contourArea, reverse=True)

    # Draw the largest contour on the original roi (which is the outer
    # frame of the eyeglasses)
    cv2.drawContours(roi, cont_sort[0], -1, (0, 255, 0), 2)

    # Draw the second largest contour on the original roi (which is the left
    # lens of the eyeglasses)
    # cv2.drawContours(roi, cont_sort[1], -1, (0, 255, 0), 2)

plt.imshow(RGB_img)
plt.show()
How can I detect exactly the contour of the left lens and of the right lens of the eyeglasses?
I hope that it is clear that my final output must be the eyeglasses themselves as my final goal is to extract the eyeglasses from a face image.
I am trying to detect the circle in the following image.
So I did color thresholding and finally got this result.
Because the lines in the center have been removed, the circle is split into many small parts, so contour detection on this can only give me each piece separately.
But is there a way I can somehow combine the contours so I could get a circle instead of just pieces of it?
Here is my code for color thresholding:
import cv2
import numpy as np

# img is the BGR input image
blurred = cv2.GaussianBlur(img, (9, 9), 9)
ORANGE_MIN = np.array((12, 182, 221), np.uint8)
ORANGE_MAX = np.array((16, 227, 255), np.uint8)
hsv_disk = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
disk_threshed = cv2.inRange(hsv_disk, ORANGE_MIN, ORANGE_MAX)
The task is much easier when performed with the red plane only.
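As a minimal sketch of that idea (the filename and threshold value are assumptions to tune), you could extract the red plane and threshold it directly:

import cv2

# Hypothetical filename for the input image from the question
img = cv2.imread('disk.jpg')

# In OpenCV's BGR ordering, the red plane is channel 2
red = img[:, :, 2]

# The orange disk is bright in the red plane, so a single global
# threshold isolates it; 150 is an assumed value
_, mask = cv2.threshold(red, 150, 255, cv2.THRESH_BINARY)

cv2.imshow('red plane mask', mask)
cv2.waitKey(0)
cv2.destroyAllWindows()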
I guess there was a problem with the thresholds for color segmentation, so the idea here was to generate a binary mask. By inspection, your region of interest seems to be brighter than the other regions of the input image, so thresholding can simply be done on a grayscale image to simplify the context. Note: you may change this step as per your requirement. Once satisfied with the threshold output, you may use cv2.convexHull() to get the convex shape of your contour.
Also keep in mind to select the largest contour and ignore the small contours. The following code can be used to generate the required output:
import cv2
import numpy as np
# Loading the input_image
img = cv2.imread("/Users/anmoluppal/Downloads/3xGG4.jpg")
# Converting the input image to grayScale
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Thresholding the image to get binary mask.
ret, img_thresh = cv2.threshold(img_gray, 145, 255, cv2.THRESH_BINARY)
# Dilating the mask image
kernel = np.ones((3,3),np.uint8)
dilation = cv2.dilate(img_thresh,kernel,iterations = 3)
# Getting all the contours
_, contours, __ = cv2.findContours(dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Finding the largest contour Id
largest_contour_area = 0
largest_contour_area_idx = 0
for i in range(len(contours)):
    if cv2.contourArea(contours[i]) > largest_contour_area:
        largest_contour_area = cv2.contourArea(contours[i])
        largest_contour_area_idx = i
# Get the convex Hull for the largest contour
hull = cv2.convexHull(contours[largest_contour_area_idx])
# Drawing the contours for debugging purposes.
img = cv2.drawContours(img, [hull], 0, [0, 255, 0])
cv2.imwrite("./garbage.png", img)