I tried the code provided below to segment each digit in this image, put a contour around each digit, and then crop it out, but it's giving me bad results. I'm not sure what I need to change or work on.
The best idea I can think of right now is to filter for the 4 largest contours in the image, excluding the contour of the image itself.
The code I'm working with:
import sys
import numpy as np
import cv2

im = cv2.imread('marks/mark28.png')
im3 = im.copy()

gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.adaptiveThreshold(blur, 255, 1, 1, 11, 2)

################# Now finding Contours ###################

contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

samples = np.empty((0, 100))
responses = []
keys = [i for i in range(48, 58)]

for cnt in contours:
    if cv2.contourArea(cnt) > 50:
        [x, y, w, h] = cv2.boundingRect(cnt)

        if h > 28:
            cv2.rectangle(im, (x, y), (x + w, y + h), (0, 0, 255), 2)
            roi = thresh[y:y + h, x:x + w]
            roismall = cv2.resize(roi, (10, 10))
            cv2.imshow('norm', im)
            key = cv2.waitKey(0)

            if key == 27:  # (escape to quit)
                sys.exit()
            elif key in keys:
                responses.append(int(chr(key)))
                sample = roismall.reshape((1, 100))
                samples = np.append(samples, sample, 0)

responses = np.array(responses, np.float32)
responses = responses.reshape((responses.size, 1))
print("training complete")

np.savetxt('generalsamples.data', samples)
np.savetxt('generalresponses.data', responses)
I probably need to change the if condition on height, but more importantly I need conditions to keep only the 4 largest contours in the image. Sadly, I haven't managed to figure out what I'm supposed to be filtering on.
This is the kind of result I get; I'm trying to avoid getting those inner contours on the digit "zero".
Unprocessed images as requested: example 1 example 2
All I need is an idea of what I should filter for - don't write code, please. Thank you, community.
You almost have it. You are getting multiple bounding rectangles on each digit because you are retrieving every contour (external and internal). You are using cv2.findContours in RETR_LIST mode, which retrieves all the contours but doesn't create any parent-child relationships. The parent-child relationship is what discriminates between inner (child) and outer (parent) contours; OpenCV calls this "Contour Hierarchy". Check out the docs for an overview of all the hierarchy modes. Of particular interest is RETR_EXTERNAL mode. This mode fetches only external contours, so you don't get multiple contours and (by extension) multiple bounding boxes for each digit!
Also, it seems that your images have a red border. This will introduce noise while thresholding the image, and the border might be recognized as the top-level outer contour; thus, every other contour (the children of this parent contour) would not be fetched in RETR_EXTERNAL mode. Fortunately, the border position seems constant, and we can eliminate it with a simple flood-fill, which pretty much fills a blob of a target color with a substitute color.
Let's check out the reworked code:
# Imports:
import cv2
import numpy as np
# Set image path
path = "D://opencvImages//"
fileName = "rhWM3.png"
# Read Input image
inputImage = cv2.imread(path+fileName)
# Deep copy for results:
inputImageCopy = inputImage.copy()
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
The first step is to get the binary image with all the target blobs/contours. This is the result so far:
Notice the border is white. We have to delete this; a simple flood-fill at position (x=0, y=0) with black color will suffice:
# Flood-fill border, seed at (0,0) and use black (0) color:
cv2.floodFill(binaryImage, None, (0, 0), 0)
This is the filled image, no more border!
Now we can retrieve the external, outermost contours in RETR_EXTERNAL mode:
# Get each bounding box
# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
Notice you also get each contour's hierarchy as the second return value. This is useful if you want to check whether the current contour is a parent or a child.
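As a quick aside, this is roughly how that hierarchy array is usually inspected - a minimal sketch, assuming a retrieval mode that actually builds the tree, such as RETR_TREE (RETR_EXTERNAL reports no children):

# Sketch only: inspect the hierarchy array, shaped (1, N, 4) with entries
# [next, previous, first child, parent] per contour; -1 means "none":
treeContours, treeHierarchy = cv2.findContours(binaryImage, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for i, c in enumerate(treeContours):
    isOuter = treeHierarchy[0][i][3] == -1   # no parent -> outermost contour
    hasChild = treeHierarchy[0][i][2] != -1  # index of first child, -1 if none
    print("Contour", i, "outer:", isOuter, "has inner contour:", hasChild)

Alright, back to the external contours: let's loop through them and get their bounding boxes. If you want to ignore contours below a minimum area threshold, you can also implement an area filter: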
# Look for the outer bounding boxes (no children):
for _, c in enumerate(contours):

    # Get the bounding rectangle of the current contour:
    boundRect = cv2.boundingRect(c)

    # Get the bounding rectangle data:
    rectX = boundRect[0]
    rectY = boundRect[1]
    rectWidth = boundRect[2]
    rectHeight = boundRect[3]

    # Estimate the bounding rect area:
    rectArea = rectWidth * rectHeight

    # Set a min area threshold
    minArea = 10

    # Filter blobs by area:
    if rectArea > minArea:

        # Draw bounding box:
        color = (0, 255, 0)
        cv2.rectangle(inputImageCopy, (int(rectX), int(rectY)),
                      (int(rectX + rectWidth), int(rectY + rectHeight)), color, 2)
        cv2.imshow("Bounding Boxes", inputImageCopy)

        # Crop bounding box:
        currentCrop = inputImage[rectY:rectY+rectHeight, rectX:rectX+rectWidth]
        cv2.imshow("Current Crop", currentCrop)
        cv2.waitKey(0)
The last three lines of the above snippet crop and show the current digit. This is the result of detected bounding boxes for both of your images (the bounding boxes are colored in green, the red border is part of the input images):
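And if you still want to keep just the four largest digits, as you originally suggested, you can sort the surviving contours by area before drawing. A minimal sketch, assuming the contours list from the RETR_EXTERNAL call above:

# Sketch only: keep the 4 largest external contours by area:
largestFour = sorted(contours, key=cv2.contourArea, reverse=True)[:4]
topFourImage = inputImage.copy()  # draw on a fresh copy
for c in largestFour:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(topFourImage, (x, y), (x + w, y + h), (0, 255, 0), 2)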
Related
I have many x-ray scans and need to crop the scanned object from its background noise.
The files are in .png format and I am planning to use OpenCV Python for this task. I have seen some works with FindContours() but unsure that thresholding will work for this case.
Before Image:
After/Cropped Image:
Any suggested solution/code is appreciated.
Here is one way to do that in Python/OpenCV. It assumes you have the same excess border in all your images so that one can sort contours by area and skip the largest contour to get the second largest one.
Input:
import cv2
import numpy as np

# load image
img = cv2.imread("table_xray.jpg")
hh, ww = img.shape[:2]

# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# median filter
filt = cv2.medianBlur(gray, 15)

# threshold the filtered image and invert
thresh = cv2.threshold(filt, 64, 255, cv2.THRESH_BINARY)[1]
thresh = 255 - thresh

# find contours and store index with area in list
cntrs_info = []
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
index = 0
for cntr in contours:
    area = cv2.contourArea(cntr)
    print(index, area)
    cntrs_info.append((index, area))
    index = index + 1

# sort contours by area
def takeSecond(elem):
    return elem[1]

cntrs_info.sort(key=takeSecond, reverse=True)

# get bounding box of second largest contour, skipping the large border
index_second = cntrs_info[1][0]
x, y, w, h = cv2.boundingRect(contours[index_second])
print(index_second, x, y, w, h)

# crop input image
results = img[y:y+h, x:x+w]

# write result to disk
cv2.imwrite("table_xray_thresholded.png", thresh)
cv2.imwrite("table_xray_extracted.png", results)

cv2.imshow("THRESH", thresh)
cv2.imshow("RESULTS", results)
cv2.waitKey(0)
cv2.destroyAllWindows()
Filtered and Thresholded Image:
Cropped Result:
This is another possible solution. It uses the K-Channel of your input image, once converted to the CMYK color-space. The K (or Key) channel has most of the information of the black color, so it should be useful for segmenting the input image. After that, you can apply a heavy morphological chain to produce a good mask of the object. After that, cropping the object is very straightforward. Let's see the code:
# Imports
import cv2
import numpy as np
# Read image
imagePath = "D://opencvImages//"
inputImage = cv2.imread(imagePath+"jU6QA.jpg")
# Convert to float and divide by 255:
imgFloat = inputImage.astype(np.float64) / 255.  # np.float was removed in newer NumPy; float64 is equivalent
# Calculate channel K:
kChannel = 1 - np.max(imgFloat, axis=2)
# Convert back to uint 8:
kChannel = (255*kChannel).astype(np.uint8)
The first bit of the program converts your image to the CMYK color-space and extracts the K channel. OpenCV has no direct conversion to this color-space, so a manual conversion is necessary. We need to be careful with the data types because there are float operations involved. The resulting image is this:
Pixels with black information are assigned an intensity close to 255. Now, let's threshold this image to get a binary mask. The threshold level is fixed:
# Threshold the image with a fixed thresh level
thresholdLevel = 200
_, binaryImage = cv2.threshold(kChannel, thresholdLevel, 255, cv2.THRESH_BINARY)
This produces the following binary image:
Alright. We need to isolate the object; however, we have both the lines of the background and the "frame" around the image. Let's get rid of the lines first. We will apply a morphological Erosion. Then, we will remove the frame by flood-filling with black color at two locations: the upper left and bottom right corners of the image. After that, we will apply a Dilation to restore the object's original size. I wrapped these OpenCV functions inside custom functions that save me the typing of a couple of lines - these helper functions are presented at the end of this post. This is the approach:
# Perform Small Erosion:
binaryImage = morphoOperation(binaryImage, 3, 5, "Erode")
# Flood-Fill at two locations: Top left corner and bottom right:
(imageHeight, imageWidth) = binaryImage.shape[:2]
floodPositions = [(0, 0),(imageWidth-1, imageHeight-1)]
binaryImage = floodFill(binaryImage, floodPositions, 0)
# Perform Small Dilate:
binaryImage = morphoOperation(binaryImage, 3, 5, "Dilate")
This is the result:
Nice. We can improve the mask by applying a second morphological chain, this time with more iterations. Let's apply a Dilation to try and close the "holes" of the object, followed by an Erosion to, once again, restore the object's original size:
# Perform Big Dilate:
binaryImage = morphoOperation(binaryImage, 3, 10, "Dilate")
# Perform Big Erode:
binaryImage = morphoOperation(binaryImage, 3, 10, "Erode")
This yields the following result:
The gaps inside the object have been filled. Now, let's retrieve the contours on this mask to find the object's contour. I've additionally included an area filter. The mask is pretty clean by this point, so maybe this filter is not too necessary. Once the contour is located, we can crop the object from the original image:
# Find the contours on the binary image:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# BGR image for drawing results:
binaryBGR = cv2.cvtColor(binaryImage, cv2.COLOR_GRAY2BGR)

# Look for the outer bounding boxes (no children):
for _, c in enumerate(contours):

    # Get blob area:
    currentArea = cv2.contourArea(c)

    # Set a min area value:
    minArea = 10000

    if minArea < currentArea:

        # Get the contour's bounding rectangle:
        boundRect = cv2.boundingRect(c)

        # Get the dimensions of the bounding rect:
        rectX = boundRect[0]
        rectY = boundRect[1]
        rectWidth = boundRect[2]
        rectHeight = boundRect[3]

        # Set bounding rect:
        color = (0, 255, 0)
        cv2.rectangle(binaryBGR, (int(rectX), int(rectY)),
                      (int(rectX + rectWidth), int(rectY + rectHeight)), color, 5)
        cv2.imshow("Rects", binaryBGR)

        # Crop original input:
        currentCrop = inputImage[rectY:rectY + rectHeight, rectX:rectX + rectWidth]
        cv2.imshow("Cropped", currentCrop)
        cv2.waitKey(0)
The last step produces the following two images. The first is the object enclosed by a rectangle, the second one is the actual crop:
I also tested the algorithm with your second image, these are the final results:
Wow. Somebody brought a gun to the airport? That's not OK. These are the helper functions used earlier. This first function performs the morphological operations:
def morphoOperation(binaryImage, kernelSize, opIterations, opString):
    # Get the structuring element:
    morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
    # Select the operation:
    if opString == "Dilate":
        op = cv2.MORPH_DILATE
    elif opString == "Erode":
        op = cv2.MORPH_ERODE
    # Perform the operation:
    outImage = cv2.morphologyEx(binaryImage, op, morphKernel, None, None, opIterations,
                                cv2.BORDER_REFLECT101)
    return outImage
The second function performs Flood-Filling given a list of seed-points:
def floodFill(binaryImage, positions, color):
    # Loop through the list of seed positions (tuples):
    for p in range(len(positions)):
        currentSeed = positions[p]
        x = int(currentSeed[0])
        y = int(currentSeed[1])
        # Apply flood-fill:
        cv2.floodFill(binaryImage, mask=None, seedPoint=(x, y), newVal=(color))
    return binaryImage
Given a dental form as input, I need to find all the checkboxes present in the form using image processing. I have posted my current approach as an answer below. Is there a better approach that would also find the checkboxes in low-quality documents?
sample input:
This is one approach with which we can solve the issue:
import cv2
import numpy as np

image = cv2.imread('path/to/image.jpg')

### binarising image
gray_scale = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
th1, img_bin = cv2.threshold(gray_scale, 150, 225, cv2.THRESH_BINARY)
Defining vertical and horizontal kernels
lineWidth = 7
lineMinWidth = 55
kernal1 = np.ones((lineWidth,lineWidth), np.uint8)
kernal1h = np.ones((1,lineWidth), np.uint8)
kernal1v = np.ones((lineWidth,1), np.uint8)
kernal6 = np.ones((lineMinWidth,lineMinWidth), np.uint8)
kernal6h = np.ones((1,lineMinWidth), np.uint8)
kernal6v = np.ones((lineMinWidth,1), np.uint8)
Detect horizontal lines
img_bin_h = cv2.morphologyEx(~img_bin, cv2.MORPH_CLOSE, kernal1h)  # bridge small gaps in horizontal lines
img_bin_h = cv2.morphologyEx(img_bin_h, cv2.MORPH_OPEN, kernal6h)  # keep only horizontal lines by eroding everything else in the horizontal direction
finding vertical lines
## detect vert lines
img_bin_v = cv2.morphologyEx(~img_bin, cv2.MORPH_CLOSE, kernal1v)  # bridge small gaps in vertical lines
img_bin_v = cv2.morphologyEx(img_bin_v, cv2.MORPH_OPEN, kernal6v)  # keep only vertical lines by eroding everything else in the vertical direction
Merge the vertical and horizontal lines to get the blocks, adding a layer of dilation to remove small gaps:
### function to fix image as binary
def fix(img):
    img[img > 127] = 255
    img[img < 127] = 0
    return img

img_bin_final = fix(fix(img_bin_h) | fix(img_bin_v))

finalKernel = np.ones((5, 5), np.uint8)
img_bin_final = cv2.dilate(img_bin_final, finalKernel, iterations=1)
Apply Connected component analysis on the binary image to get the blocks required.
ret, labels, stats, centroids = cv2.connectedComponentsWithStats(~img_bin_final, connectivity=8, ltype=cv2.CV_32S)

### skipping the first two stats as background
for x, y, w, h, area in stats[2:]:
    cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
You can also use contours for this problem.
import cv2
import numpy as np

# Reading the image in grayscale and thresholding it
Image = cv2.imread("findBox.jpg", 0)
ret, Thresh = cv2.threshold(Image, 100, 255, cv2.THRESH_BINARY)
Now perform dilation and erosion twice to join the dotted lines present inside the boxes.
kernel = np.ones((3, 3), dtype=np.uint8)
Thresh = cv2.dilate(Thresh, kernel, iterations=2)
Thresh = cv2.erode(Thresh, kernel, iterations=2)
Find contours in the image with the cv2.RETR_TREE flag to get all contours along with their parent-child relations:
Contours, Hierarchy = cv2.findContours(Thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
Now all the boxes, along with all the letters in the image, are detected. We have to eliminate the detected letters, very small contours (due to noise), and also those boxes which contain smaller boxes inside them.
For this, I run a for loop over all the detected contours, saving 3 values for each contour in 3 different lists:
1st value: Area of the contour (number of pixels the contour encloses)
2nd value: Contour's bounding rectangle info.
3rd value: Ratio of the area of the contour to the area of its bounding rectangle.
Areas = []
Rects = []
Ratios = []
for Contour in Contours:
    # Getting bounding rectangle
    Rect = cv2.boundingRect(Contour)

    # Drawing contour on new image and finding number of white pixels for contour area
    C_Image = np.zeros(Thresh.shape, dtype=np.uint8)
    cv2.drawContours(C_Image, [Contour], -1, 255, -1)
    ContourArea = np.sum(C_Image == 255)

    # Area of the bounding rectangle
    Rect_Area = Rect[2]*Rect[3]

    # Calculating ratio as explained above
    Ratio = ContourArea / Rect_Area

    # Storing data
    Areas.append(ContourArea)
    Rects.append(Rect)
    Ratios.append(Ratio)
Filtering out undesired contours:
Getting the indices of those contours which have an area greater than 3600 (the threshold value for this image) and which have Ratio >= 0.99.
The ratio defines the extent of overlap of a contour with its bounding rectangle. As the desired contours here are rectangular in shape, this ratio is expected to be "1.0" (0.99 keeps a margin for small noise).
BoxesIndices = [i for i in range(len(Contours)) if Ratios[i] >= 0.99 and Areas[i] > 3600]
The final contours are those among the contours at indices "BoxesIndices" which either have no child contour (this extracts the innermost contours), or whose child contour is not itself one of the contours at indices "BoxesIndices".
FinalBoxes = [Rects[i] for i in BoxesIndices if Hierarchy[0][i][2] == -1 or BoxesIndices.count(Hierarchy[0][i][2]) == 0]
Final output image
I am trying to detect the outer boundary of the circular object in the images below:
I tried OpenCV's Hough Circle transform, but the code does not work for every image, even after adjusting parameters such as minRadius and maxRadius.
The aim is to detect the object from the image and crop it.
Expected output:
Source code:
import imutils
import cv2
import numpy as np
from matplotlib import pyplot as plt

image = cv2.imread("path to the image i have provided")
r = 600.0 / image.shape[1]
dim = (600, int(image.shape[0] * r))
resized = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
cv2.imwrite("path to where we want to save downscaled image", resized)

image = cv2.imread('path of downscaled image')
image1 = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
image2 = cv2.GaussianBlur(image1, (5, 5), 0)
edged = cv2.Canny(image2, 30, 150)

img = cv2.medianBlur(image2, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

circles = cv2.HoughCircles(edged, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=30, minRadius=200, maxRadius=280)
circles = np.uint16(np.around(circles))
max_circle = max(circles[0, :], key=lambda x: x[2])
# print(max_circle)

# Create mask
height, width = image1.shape
mask = np.zeros((height, width), np.uint8)
for i in [max_circle]:
    cv2.circle(mask, (i[0], i[1]), i[2], (255, 255, 255), thickness=-1)

masked_data = cv2.bitwise_and(image, image, mask=mask)
_, thresh = cv2.threshold(mask, 1, 255, cv2.THRESH_BINARY)

# Find Contour
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
x, y, w, h = cv2.boundingRect(contours[0])

# Crop masked_data
crop = masked_data[y:y+h, x:x+w]

# Code to close Window
cv2.imshow('OG', image)
cv2.imshow('Cropped ROI', crop)
cv2.imwrite("path to save roi image", crop)
cv2.waitKey(0)
cv2.destroyAllWindows()
Second Answer: an approach based on color segmentation.
While I was editing the question to improve its readability, and was inserting and resizing all the images from the link you shared to make it easier for everyone to visualize what you are trying to do, it occurred to me that this problem might be a better candidate for an approach based on segmentation by color:
This simpler (but clever) approach assumes that the reel appears pretty much in the same location and has more or less the same dimensions every time:
To discover the approximate color of the reel in the image, define a list of Regions of Interest (ROIs) to sample pixels from, and determine the min and max color of that area in the HSV color space. The location and size of each ROI are derived from the size of the image. In the images below, you can see the ROIs drawn as blue-ish rectangles:
Once the min and max HSV colors have been found, a threshold operation with cv2.inRange() can be executed to segment the reel:
Then, iterate through all the contours in the binary image and assume that the largest one represents the reel. Use this contour and draw it in a separate mask to be able to extract the pixels from the original image:
At this stage, it is also possible to compute a bounding box for the contour and extract its precise location to be able to perform a crop operation later and completely isolate the reel in the image:
This approach works for EVERY image shared on the question.
Source code:
import cv2
import numpy as np
import sys

# initialize global H, S, V values
min_global_h = 179
min_global_s = 255
min_global_v = 255
max_global_h = 0
max_global_s = 0
max_global_v = 0

# load input image from the cmd-line
filename = sys.argv[1]
img = cv2.imread(sys.argv[1])
if (img is None):
    print('!!! Failed imread')
    sys.exit(-1)

# create an auxiliary image for debugging purposes
dbg_img = img.copy()

# initialize a list of Regions of Interest that need to be scanned to identify good HSV values to threshold by color
w = img.shape[1]
h = img.shape[0]
roi_w = int(w * 0.10)
roi_h = int(h * 0.10)
roi_list = []
roi_list.append( (int(w*0.25), int(h*0.15), roi_w, roi_h) )
roi_list.append( (int(w*0.25), int(h*0.60), roi_w, roi_h) )

# convert image to HSV color space
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# iterate through the ROIs to determine the min/max HSV color of the reel
for rect in roi_list:
    x, y, w, h = rect
    x2 = x + w
    y2 = y + h
    print('ROI rect=', rect)

    cropped_hsv_img = hsv_img[y:y+h, x:x+w]
    h, s, v = cv2.split(cropped_hsv_img)

    min_h = np.min(h)
    min_s = np.min(s)
    min_v = np.min(v)

    if (min_h < min_global_h):
        min_global_h = min_h
    if (min_s < min_global_s):
        min_global_s = min_s
    if (min_v < min_global_v):
        min_global_v = min_v

    max_h = np.max(h)
    max_s = np.max(s)
    max_v = np.max(v)

    if (max_h > max_global_h):
        max_global_h = max_h
    if (max_s > max_global_s):
        max_global_s = max_s
    if (max_v > max_global_v):
        max_global_v = max_v

    # debug: draw ROI in original image
    cv2.rectangle(dbg_img, (x, y), (x2, y2), (255,165,0), 4) # blue-ish
    cv2.imshow('ROIs', cv2.resize(dbg_img, dsize=(0, 0), fx=0.5, fy=0.5))
    #cv2.waitKey(0)

cv2.imwrite(filename[:-4] + '_rois.png', dbg_img)

# define min/max color for threshold, using the global values gathered from all ROIs
low_hsv = np.array([min_global_h, min_global_s, min_global_v])
max_hsv = np.array([max_global_h, max_global_s, max_global_v])
#print('low_hsv=', low_hsv)
#print('max_hsv=', max_hsv)

# threshold image by color
img_bin = cv2.inRange(hsv_img, low_hsv, max_hsv)
cv2.imshow('binary', cv2.resize(img_bin, dsize=(0, 0), fx=0.5, fy=0.5))
cv2.imwrite(filename[:-4] + '_binary.png', img_bin)
#cv2.imshow('img_bin', cv2.resize(img_bin, dsize=(0, 0), fx=0.5, fy=0.5))
#cv2.waitKey(0)

# create a mask to store the contour of the reel (hopefully)
mask = np.zeros((img_bin.shape[0], img_bin.shape[1]), np.uint8)
crop_x, crop_y, crop_w, crop_h = (0, 0, 0, 0)

# iterate through all the contours in the binary image:
# assume that the first contour with an area larger than 100k belongs to the reel
contours, hierarchy = cv2.findContours(img_bin, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contourIdx, cnt in enumerate(contours):
    area = cv2.contourArea(contours[contourIdx])
    print('contourIdx=', contourIdx, 'area=', area)

    # draw potential reel blob on the mask (in white)
    if (area > 100000):
        crop_x, crop_y, crop_w, crop_h = cv2.boundingRect(cnt)
        centers, radius = cv2.minEnclosingCircle(cnt)
        cv2.circle(mask, (int(centers[0]), int(centers[1])), int(radius), (255), -1) # fill with white
        break

cv2.imshow('mask', cv2.resize(mask, dsize=(0, 0), fx=0.5, fy=0.5))
cv2.imwrite(filename[:-4] + '_mask.png', mask)

# copy just the reel area into its own image
reel_img = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow('reel_img', cv2.resize(reel_img, dsize=(0, 0), fx=0.5, fy=0.5))
cv2.imwrite(filename[:-4] + '_reel.png', reel_img)

# crop the reel to a smaller image
if (crop_w != 0 and crop_h != 0):
    cropped_reel_img = reel_img[crop_y:crop_y+crop_h, crop_x:crop_x+crop_w]
    cv2.imshow('cropped_reel_img', cv2.resize(cropped_reel_img, dsize=(0, 0), fx=0.5, fy=0.5))

    output_filename = filename[:-4] + '_crop.png'
    cv2.imwrite(output_filename, cropped_reel_img)

cv2.waitKey(0)
First answer: an approach based on pre-processing the image and executing an adaptiveThreshold operation.
There might be other ways of solving this problem that are not based on Hough Circles. Here is the result of an approach that is not:
Preprocess the image! Decreasing the size of the image and executing a blur helps with segmentation:
The segmentation method uses cv2.adaptiveThreshold() to create a binary image that preserves the most important objects: the center of the reel and the external edge of the reel. This is an important step since we are only interested in what exists between these two objects. However, life is not perfect and neither is this segmentation: the shadow of the reel on the table became part of the detected binary objects. Also, the outer edge is not fully connected, as you can see in the resulting image on the right (look at the top left of the circumference):
To join broken segments, a morphological operation can be executed:
Finally, the entire reel area can be exposed by iterating through the contours of the image above and discarding those whose area is larger than what is expected for a reel. The resulting binary image (on the left) can then be used as a mask to identify the reel location on the original image:
Keep in mind that I'm not trying to find a universal solution for your problem. I'm merely showing that there might be other solutions that don't depend on Hough Circles.
Also, this code might need some adjustments to work on a larger number of cases.
Source code:
import cv2
import numpy as np
import sys

img = cv2.imread("test_images/reel.jpg")
if (img is None):
    print('!!! Failed imread')
    sys.exit(-1)

# create output image
output_img = img.copy()

# 1. Preprocess the image: downscale to speed up processing and execute a blur
SCALE_FACTOR = 0.5
smaller_img = cv2.resize(img, dsize=(0, 0), fx=SCALE_FACTOR, fy=SCALE_FACTOR)
blur_img = cv2.medianBlur(smaller_img, 9)
cv2.imwrite('reel1_blur_img.png', blur_img)

# 2. Segment the image to identify the 2 most important contours: the center of the reel and the outer edge
gray_img = cv2.cvtColor(blur_img, cv2.COLOR_BGR2GRAY)
img_bin = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 19, 4)
cv2.imwrite('reel2_img_bin.png', img_bin)

green_mask = np.zeros((img_bin.shape[0], img_bin.shape[1]), np.uint8)
#green_mask = cv2.cvtColor(img_bin, cv2.COLOR_GRAY2RGB) # debug

contours, hierarchy = cv2.findContours(img_bin, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contourIdx, cnt in enumerate(contours):
    x, y, w, h = cv2.boundingRect(cnt)
    area = cv2.contourArea(contours[contourIdx])
    #print('contourIdx=', contourIdx, 'w=', w, 'h=', h, 'area=', area)

    # filter out tiny segments
    if (area < 5000):
        #cv2.fillPoly(green_mask, pts=[cnt], color=(0, 0, 255)) # red
        continue

    # draw green contour (filled)
    #cv2.fillPoly(green_mask, pts=[cnt], color=(0, 255, 0)) # green
    cv2.fillPoly(green_mask, pts=[cnt], color=(255)) # white

    # debug:
    #cv2.imshow('green_mask', green_mask)
    #cv2.waitKey(0)

cv2.imshow('green_mask', green_mask)
cv2.imwrite('reel2_green_mask.png', green_mask)

# 3. Fix mask: join segments nearby
kernel = np.ones((3,3), np.uint8)
img_dilation = cv2.dilate(green_mask, kernel, iterations=1)
green_mask = cv2.erode(img_dilation, kernel, iterations=1)
cv2.imshow('fixed green_mask', green_mask)
cv2.imwrite('reel3_img.png', green_mask)

# 4. Extract the reel area from the green mask
reel_mask = np.zeros((green_mask.shape[0], green_mask.shape[1]), np.uint8)
#reel_mask = cv2.cvtColor(green_mask, cv2.COLOR_GRAY2RGB) # debug

contours, hierarchy = cv2.findContours(green_mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contourIdx, cnt in enumerate(contours):
    x, y, w, h = cv2.boundingRect(cnt)
    area = cv2.contourArea(contours[contourIdx])
    print('contourIdx=', contourIdx, 'w=', w, 'h=', h, 'area=', area)

    # filter out larger segments
    if (area > 110000):
        #cv2.fillPoly(reel_mask, pts=[cnt], color=(0, 0, 255)) # red
        continue

    # draw green contour (filled)
    #cv2.fillPoly(reel_mask, pts=[cnt], color=(0, 255, 0)) # green
    cv2.fillPoly(reel_mask, pts=[cnt], color=(255)) # white

    # debug:
    #cv2.imshow('reel_mask', reel_mask)
    #cv2.waitKey(0)

cv2.imshow('reel_mask', reel_mask)
cv2.imwrite('reel4_reel_mask.png', reel_mask)

# 5. Draw the reel area on the original image
contours, hierarchy = cv2.findContours(reel_mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contourIdx, cnt in enumerate(contours):
    centers, radius = cv2.minEnclosingCircle(cnt)

    # rescale these values back to the original image size
    centers_orig = (centers[0] // SCALE_FACTOR, centers[1] // SCALE_FACTOR)
    radius_orig = radius // SCALE_FACTOR
    print('centers=', centers_orig, 'radius=', radius_orig)
    cv2.circle(output_img, (int(centers_orig[0]), int(centers_orig[1])), int(radius_orig), (128,0,255), 5) # magenta

cv2.imshow('output_img', output_img)
cv2.imwrite('reel5_output.png', output_img)

# display just the pixels from the original image
larger_reel_mask = cv2.resize(reel_mask, (int(img.shape[1]), int(img.shape[0])))
output_reel_img = cv2.bitwise_and(img, img, mask=larger_reel_mask)
cv2.imshow('output_reel_img', output_reel_img)
cv2.imwrite('reel5_output_reel.png', output_reel_img)
cv2.waitKey(0)
At this point, it's possible to use larger_reel_mask and compute a minimal enclosing circle, draw it over this mask to make it a little bit more round, and allow us to retrieve the area of the reel more accurately:
But the 4 lines of code that achieve this improvement I leave as an exercise for the reader.
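For the impatient, one possible version of those four lines - a sketch only, which assumes the largest contour in larger_reel_mask is the reel:

# Sketch only: round out the mask with its minimal enclosing circle:
cnts, _ = cv2.findContours(larger_reel_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
(cx, cy), radius = cv2.minEnclosingCircle(max(cnts, key=cv2.contourArea))
cv2.circle(larger_reel_mask, (int(cx), int(cy)), int(radius), 255, -1)
output_reel_img = cv2.bitwise_and(img, img, mask=larger_reel_mask)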
I am using the following OpenCV code in Python:
regions, hierarchy = cv2.findContours(binary_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

for region in regions:
    x, y, w, h = cv2.boundingRect(region)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 1)
This gives some contours within other contours. How do I remove them in Python?
For that, you should take a look at this tutorial on how to use the hierarchy object returned by the method findContours.
The main point is that you should use cv2.RETR_TREE instead of cv2.RETR_LIST to get parent/child relationships between your clusters:
regions, hierarchy = cv2.findContours(binary_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
Then you can check whether a contour with index i is inside another by checking if hierarchy[0,i,3] equals -1 or not. If it is different from -1, then your contour is inside another.
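Put together, a minimal sketch of that filter, assuming the binary_image and image variables from the question:

# Sketch only: keep contours whose hierarchy entry has no parent:
regions, hierarchy = cv2.findContours(binary_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for i, region in enumerate(regions):
    if hierarchy[0, i, 3] == -1:  # -1 means "no parent" -> outer contour
        x, y, w, h = cv2.boundingRect(region)
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 1)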
In order to remove the contours inside a contour:
shapes, hierarchy = cv2.findContours(image=image, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
However, in some cases you may observe that a big contour is formed over the whole image, and applying the above returns only that one big contour.
In order to avoid this, try inverting the image:
image = cv2.imread("Image Path")
image = 255 - image
shapes, hierarchy = cv2.findContours(image=image, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
This will give you the desired result.
UPDATE:
The reason why hierarchy does not work when a big bounding box is approximated on the whole image is that hierarchy[0,iteration,3] is -1 only for that one bounding box drawn on the whole image; all other bounding boxes are inside it, so hierarchy[0,iteration,3] is not equal to -1 for any of them. Thus, inverting the image is required in order to comply with the following:
In OpenCV, finding contours is like finding white object from black background. So remember, object to be found should be white and background should be black.
However, as pointed out by @Jeru, this is not a generalized solution and one must visualize the image before inverting it.
Consider this image:
Running
shapes, hierarchy = cv2.findContours(image=image, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_SIMPLE)
results in
Now, displaying only the contours with hierarchy[0,iteration,3] = -1 results in
which is not correct. If we want to obtain the rectangles containing the shapes and the text, we can do
thresh = 255 - thresh
shapes, hierarchy = cv2.findContours(image=thresh, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
In this case we get:
Code:
import cv2

shape_number = 2

image = cv2.imread("Image Path")
deep_copy = image.copy()

image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(image_gray, 210, 255, cv2.THRESH_BINARY)
thresh = 255 - thresh

shapes, hierarchy = cv2.findContours(image=thresh, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image=deep_copy, contours=shapes, contourIdx=-1, color=(0, 255, 0), thickness=2, lineType=cv2.LINE_AA)

for iteration, shape in enumerate(shapes):
    if hierarchy[0, iteration, 3] == -1:
        print(hierarchy[0, iteration, 3])
        print(iteration)

cv2.imshow('Shapes', deep_copy)
cv2.waitKey(0)
cv2.destroyAllWindows()
img_output, contours, hierarchy = cv2.findContours(blank_image_firstImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
This removes the child contours. (Note the three return values: that is the OpenCV 3.x signature; in OpenCV 4.x, findContours returns just contours and hierarchy.)
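If the code has to run under both signatures, the same guard used in an answer above works here too - a minimal sketch:

result = cv2.findContours(blank_image_firstImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# findContours returns 2 values in OpenCV 2.x/4.x and 3 values in OpenCV 3.x:
contours, hierarchy = result if len(result) == 2 else result[1:]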
I have what seems like a rather simple question - I have an image from which I'm extracting contours using the following code -
import numpy as np
import cv2

def findAndColorShapes(inputFile):
    # Find contours in the image
    im = cv2.imread(inputFile)
    imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(imgray, 127, 255, 0)
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
This finds contours in the image very well, and I then draw them using -
cv2.drawContours(fullIm, [con], -1, (0,255,0), 2)
Some of the shapes are hollow (an outlined circle, for example), while some are filled. I would like to draw the contours the way they appear in the original image, e.g., if the contour is a filled circle, it should be drawn with its filling, and if it's just an outline - as an outline.
I tried many things (among them changing the approximation method in findContours from CHAIN_APPROX_SIMPLE to CHAIN_APPROX_NONE), and changing the 5th parameter in drawContours, but none worked.
Edit: Adding a sample image - the left circle should be drawn empty, while the right square should be drawn filled.
Do you know of any way it could be done?
Thanks!
Dan
If someone ever needs to do something similar one day, this is the code I eventually used. It is not very efficient, but it works well, and time is not a factor in this project (notice the threshold value of 220 I used in cv2.threshold(imgray, 220, 255, 0) - you may want to change that) -
from PIL import Image  # the crops are handled as PIL images

def contour_to_image(con, original_image):
    # Get the rect coordinates of the contour
    lm, tm, rm, bm = rect_for_contour(con)
    con_im = original_image.crop((lm, tm, rm, bm))
    if con_im.size[0] == 0 or con_im.size[1] == 0:
        return None
    con_pixels = con_im.load()
    for x in range(0, con_im.size[0]):
        for y in range(0, con_im.size[1]):
            # If the pixel is already white, don't bother checking it
            if con_im.getpixel((x, y)) == (255, 255, 255):
                continue
            # Check if the pixel is outside the contour. If so, clear it
            if cv2.pointPolygonTest(con, (x + lm, y + tm), False) < 0:
                con_pixels[x, y] = (255, 255, 255)
    return con_im

def findAndColorShapes(input_file, shapes_dest_path):
    im = cv2.imread(input_file)
    imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(imgray, 220, 255, 0)
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    i = 0
    for con in contours:
        con_im = contour_to_image(con, Image.open(input_file))
        if con_im is not None:
            con_im.save(shapes_dest_path + "%d.png" % i)
            i += 1
Where np_to_int() and rect_for_contour() are 2 simple helper functions -
def np_to_int(np_val):
    # np.asscalar() has been removed from NumPy; .item() is the modern equivalent
    return np.int16(np_val).item()

def rect_for_contour(con):
    # Get coordinates of a rectangle around the contour
    leftmost = tuple(con[con[:, :, 0].argmin()][0])
    topmost = tuple(con[con[:, :, 1].argmin()][0])
    bottommost = tuple(con[con[:, :, 1].argmax()][0])
    rightmost = tuple(con[con[:, :, 0].argmax()][0])
    return leftmost[0], topmost[1], rightmost[0], bottommost[1]
You can check the hierarchy parameter to determine whether a contour has a child (not filled) or not (filled).
For example,
vector< Vec4i > hierarchy
where for an i-th contour
hierarchy[i][0] = next contour at the same hierarchical level
hierarchy[i][1] = previous contour at the same hierarchical level
hierarchy[i][2] = denotes its first child contour
hierarchy[i][3] = denotes index of its parent contour
If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative. So for each contour you have to check whether there is a child or not:
And:
if there is a child contour -> draw the contour with thickness=1;
if there is no child contour -> draw the contour with thickness=CV_FILLED;
I think this method will work for images like the one you posted.
Also, the answer here might be helpful.
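In Python, that decision rule might look like this - a sketch only, assuming thresh is the binarized image and im the BGR image from the question:

# Sketch only: fill contours without children, outline contours with children:
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for i, con in enumerate(contours):
    hasChild = hierarchy[0][i][2] != -1  # index of first child, -1 if none
    thickness = 1 if hasChild else cv2.FILLED
    cv2.drawContours(im, [con], -1, (0, 255, 0), thickness)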
This is how you would create a mask image (that is, the filled contours) and then "filter" the source image with that mask to get the result.
In this snippet, "th" is the thresholded image (single channel):
# np comes from: import numpy as np
mask = np.zeros(th.shape, np.uint8)  # create a black base 'image'
mask = cv2.drawContours(mask, contours, -1, 255, cv2.FILLED)  # set everything inside all contours to white

result = np.zeros(th.shape, np.uint8)
result = np.where(mask == 0, result, th)  # set everything where the mask is white to the value of th
Note: in OpenCV versions before 3.2, findContours modifies the given image! You may want to pass a copy (np.copy(th)) to it if you want to use the thresholded image elsewhere.