Braille dot detection in an image using OpenCV - Python

For a project I want to detect braille dots on a plate. I take a picture and run my detection with the connectedComponentsWithStats function. Despite my attempts I can never find a threshold value at which all the dots, and only the dots, are detected; I have the same problem if I try circle detection. On a teacher's advice I'm trying template matching, but I'm also having problems with that detection, since the only factor that influences it is the threshold.
import cv2 as cv
import matplotlib.pyplot as plt

img1 = cv.imread(r"traitement\prod.png")
plt.figure(figsize=(40,40))
plt.subplot(3,1,1)

gray_img = cv.cvtColor(img1, cv.COLOR_BGR2GRAY)
test = cv.adaptiveThreshold(gray_img, 255, cv.ADAPTIVE_THRESH_MEAN_C, cv.THRESH_BINARY_INV, 11, 6)

_, _, boxes, _ = cv.connectedComponentsWithStats(test)
boxes = boxes[1:]
filtered_boxes = []
for x, y, w, h, pixels in boxes:
    if pixels < 1000 and h < 35 and w < 35 and h > 14 and w > 14 and x > 15 and y > 15:
        filtered_boxes.append((x, y, w, h))

for x, y, w, h in filtered_boxes:
    W = int(w) / 2
    H = int(h) / 2
    # print(w)
    cv.circle(img1, (x + int(W), y + int(H)), 2, (0, 255, 0), 20)

cv.imwrite("gray.png", gray_img)
cv.imwrite("test.png", test)
plt.imshow(test)
plt.subplot(3,1,2)
plt.imshow(img1)
import cv2 as cv
import numpy as np
from imutils.object_detection import non_max_suppression
import matplotlib.pyplot as plt

img = cv.imread('traitement/prod.png')
temp_gray = cv.imread('dot.png', 0)
W, H = temp_gray.shape[:2]
thresh = 0.6

img_gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
match = cv.matchTemplate(image=img_gray, templ=temp_gray, method=cv.TM_CCOEFF_NORMED)

(y_points, x_points) = np.where(match >= thresh)
boxes = list()
for (x, y) in zip(x_points, y_points):
    # update our list of rectangles
    boxes.append((x, y, x + W, y + H))

boxes = non_max_suppression(np.array(boxes))

# loop over the final bounding boxes
for (x1, y1, x2, y2) in boxes:
    cv.circle(img, (x1 + int(W/2), y1 + int(H/2)), 2, (255, 0, 0), 15)

plt.figure(figsize=(40,40))
plt.subplot(3,1,1)
plt.imshow(img)
Image with adaptive threshold:
Image with template detection:

I found a solution that may not be better than your solutions, because I had to overfit a few parameters to the given input...
The problem is challenging because the input image was taken under non-uniform illumination conditions (the center part is brighter than the top). Consider taking a better snapshot...
Point of thought:
The dots are arranged in rows, and we are not using that information.
We might get better results by exploiting the fact that the dots are ordered in rows (a minimal sketch of that idea follows below).
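For illustration only, here is a small sketch (not part of the solution below) of how the detected centers could be grouped into rows by sorting on the y coordinate and starting a new row whenever the vertical gap exceeds a tolerance. The centers list and the row_tol value are hypothetical placeholders:

# Hypothetical sketch: group detected dot centers into rows by their y coordinate.
# 'centers' would come from the connected-components step; 'row_tol' is a guess.
def group_into_rows(centers, row_tol=10):
    """Sort centers by y and start a new row whenever the vertical gap is large."""
    rows = []
    for cx, cy in sorted(centers, key=lambda c: c[1]):
        if rows and abs(cy - rows[-1][-1][1]) <= row_tol:
            rows[-1].append((cx, cy))   # same row as the previous dot
        else:
            rows.append([(cx, cy)])     # start a new row
    # sort each row left to right
    return [sorted(row) for row in rows]

# Example usage with made-up centers:
# rows = group_into_rows([(10, 12), (40, 11), (12, 48)], row_tol=10)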
For overcoming the brightness differences we may subtract the median of the surrounding pixels from each pixel (using a large filter radius), and compute the absolute difference:
bg = cv2.medianBlur(gray, 151)  # Background
fg = cv2.absdiff(gray, bg)  # Foreground (use absdiff because the dots are dark but bright at the center).
Apply binary threshold (use THRESH_OTSU for automatic threshold level):
_, thresh = cv2.threshold(fg, 0, 255, cv2.THRESH_OTSU)
The result of thresh is not good enough for finding the dots.
We may use the fact that the dots are dark with a bright center.
That produces strong edges around and inside the dots.
Apply Canny edge detection:
edges = cv2.Canny(gray, threshold1=50, threshold2=100)
Merge edges with thresh (use binary or):
thresh = cv2.bitwise_or(thresh, edges)
Find connected components and continue (filter the components by area).
Code sample:
import numpy as np
import cv2

img1 = cv2.imread('prod.jpg')

gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)  # Convert to grayscale

bg = cv2.medianBlur(gray, 151)  # Compute the background (use a large filter radius for excluding the dots)
fg = cv2.absdiff(gray, bg)  # Compute absolute difference

_, thresh = cv2.threshold(fg, 0, 255, cv2.THRESH_OTSU)  # Apply binary threshold (THRESH_OTSU applies automatic threshold level)

edges = cv2.Canny(gray, threshold1=50, threshold2=100)  # Apply Canny edge detection.

thresh = cv2.bitwise_or(thresh, edges)  # Merge edges with thresh

_, _, boxes, _ = cv2.connectedComponentsWithStats(thresh)

boxes = boxes[1:]
filtered_boxes = []
for x, y, w, h, pixels in boxes:
    #if pixels < 1000 and h < 35 and w < 35 and h > 14 and w > 14 and x > 15 and y > 15 and pixels > 100:
    if pixels < 1000 and x > 15 and y > 15 and pixels > 200:
        filtered_boxes.append((x, y, w, h))

for x, y, w, h in filtered_boxes:
    W = int(w) / 2
    H = int(h) / 2
    cv2.circle(img1, (x + int(W), y + int(H)), 2, (0, 255, 0), 20)

# Show images for testing
cv2.imshow('bg', bg)
cv2.imshow('fg', fg)
cv2.imshow('gray', gray)
cv2.imshow('edges', edges)
cv2.imshow('thresh', thresh)
cv2.imshow('img1', img1)
cv2.waitKey()
cv2.destroyAllWindows()
Result:
There are a few dots that are marked twice.
It is relatively simple to merge the overlapping circles into one circle (a small sketch of that idea follows below).
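As a rough illustration of that merging step (not used in the code above), one could average centers that lie within roughly one dot diameter of each other; the min_dist value here is an assumption:

# Hypothetical sketch: merge detection centers that are closer than min_dist pixels.
def merge_close_centers(centers, min_dist=20):
    merged = []
    for cx, cy in centers:
        for i, (mx, my) in enumerate(merged):
            if (cx - mx) ** 2 + (cy - my) ** 2 < min_dist ** 2:
                # replace the existing center with the midpoint of the two
                merged[i] = ((cx + mx) / 2, (cy + my) / 2)
                break
        else:
            merged.append((cx, cy))
    return merged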
Intermediate results:
thresh (before merging with edges):
edges:
thresh merged with edges:
Update:
As Jeru Luke commented, we may use non-maximum suppression as done in the question.
Here is a code sample:
import numpy as np
import cv2
from imutils.object_detection import non_max_suppression

img1 = cv2.imread('prod.jpg')

gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)  # Convert to grayscale
bg = cv2.medianBlur(gray, 151)  # Compute the background (use a large filter radius for excluding the dots)
fg = cv2.absdiff(gray, bg)  # Compute absolute difference
_, thresh = cv2.threshold(fg, 0, 255, cv2.THRESH_OTSU)  # Apply binary threshold (THRESH_OTSU applies automatic threshold level)
edges = cv2.Canny(gray, threshold1=50, threshold2=100)  # Apply Canny edge detection.
thresh = cv2.bitwise_or(thresh, edges)  # Merge edges with thresh

_, _, boxes, _ = cv2.connectedComponentsWithStats(thresh)
boxes = boxes[1:]
filtered_boxes = []
for x, y, w, h, pixels in boxes:
    if pixels < 1000 and x > 15 and y > 15 and pixels > 200:
        filtered_boxes.append((x, y, x + w, y + h))

filtered_boxes = non_max_suppression(np.array(filtered_boxes), overlapThresh=0.2)

for x1, y1, x2, y2 in filtered_boxes:
    cv2.circle(img1, ((x1 + x2)//2, (y1 + y2)//2), 2, (0, 255, 0), 20)
Result:

Approach: template matching, because the appearance of these dots can't be caught by just thresholding on brightness. These dots have both brighter and darker pixels than the flat surface. At best you'd get fractured components, and given how close these dots are, any morphology operations to fix up the fractured components would run the risk of joining adjacent dots.
So here I do template matching. It works well enough, even though the appearance of these dots changes across the image. Brightness is uneven, but that's not too much of a problem: TM_CCOEFF subtracts the mean of both patches before correlating.
imutils requires you to come up with bounding boxes for its NMS. Instead I'm using a simple and effective NMS formulation for raster data. Comparing floats for equality is okay here because dilate simply replicates the maximum value across the kernel size. It uses the L-infinity distance; for Euclidean distance, or an approximation of it, the kernel would need to be round. One downside: if there are multiple peaks of equal value, this will not remove any of them. To catch adjacent equal peaks, connectedComponents would work instead of findNonZero. For non-adjacent but equal peaks... let's just assume that doesn't happen because the data would make that situation impossible.
# Non-maximum suppression for raster data
def non_maximum_suppression(im, radius):
    dilated = cv.dilate(im, kernel=None, iterations=radius)
    return (im == dilated)
# Read image
im = cv.imread("fK2WOX.jpeg", cv.IMREAD_GRAYSCALE)
# Select template
# (x,y,w,h) = (880, 247, 44, 44)
(x,y,w,h) = cv.selectROI("ROI", 255-im, showCrosshair=False, fromCenter=True) # inverted to see the white rectangle...
cv.destroyWindow("ROI")
print((x,y,w,h))
template = im[y:y+h, x:x+w]
# find instances
scores = cv.matchTemplate(im, template, method=cv.TM_CCOEFF)
scores = scores / scores.max()
You see that the template creates additional peaks to the top and bottom of each dot. That can't be helped because that's the appearance of these dots, they consist of a bright dash surrounded by two darker blobs. These false peaks can be suppressed with NMS quite easily.
# find peaks of sufficient strength and NMS
threshold = 0.2
nmsradius = 20 # proportional to size of template/dot
nmsmask = (scores >= threshold) & non_maximum_suppression(scores, radius=nmsradius)
coords = cv.findNonZero(nmsmask.astype(np.uint8)).reshape((-1, 2))
coords += (w//2, h//2) # shift coordinates to be center of template
# draw result
canvas = cv.cvtColor(im, cv.COLOR_GRAY2BGR)
for pt in coords:
    cv.circle(canvas, pt, radius=5, color=(0,0,255), thickness=cv.FILLED)
There are no overlapping/split detections. You'll have to excuse the false detections around the punched holes near the top. Just crop those out.
As for associating those dots into symbols, that's a new problem. I'd recommend using nearest-neighbor queries. Once the spacing (in the N/S/E/W directions) to other dots has been estimated, you can "probe" in those directions from each dot and check if there is another dot there, and assemble symbols like that, marking dots as "associated" until there's nothing left. You would also want to associate symbols into strings, assuming a certain spacing between symbols. Same approach there, and that gives you the added information of a baseline for the "text".
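For illustration, here is a rough, untested sketch of that probing idea using scipy's cKDTree; the spacing values and tolerance are placeholders you would estimate from the detected dots, not values from the answer:

# Hypothetical sketch: probe N/S/E/W from each detected dot center with a KD-tree.
import numpy as np
from scipy.spatial import cKDTree

def neighbors_in_grid(coords, dx, dy, tol=5.0):
    """For each dot, look for another dot at the estimated spacing in each direction.

    coords: (N, 2) array of dot centers; dx/dy: estimated horizontal/vertical spacing.
    Returns a dict mapping dot index -> {direction: neighbor index}.
    """
    coords = np.asarray(coords, dtype=float)
    tree = cKDTree(coords)
    directions = {"E": (dx, 0), "W": (-dx, 0), "S": (0, dy), "N": (0, -dy)}
    links = {}
    for i, pt in enumerate(coords):
        links[i] = {}
        for name, (ox, oy) in directions.items():
            dist, j = tree.query(pt + (ox, oy))
            if dist <= tol:          # a dot exists roughly where we expect one
                links[i][name] = int(j)
    return links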

Related

How do I find the largest empty space in such images?

I would like to find the empty spaces (black regions) in images similar to the one I've posted below, where I have randomly sized blocks scattered in it.
By empty spaces, I refer to such possible open fields (I have no particular lower bound on the area, but I would like to extract the top 3-4 largest ones present in the image). There is also no restriction on the geometric shape they can take, but these empty spaces must not contain any of the blue blocks.
What is the best way to go about this?
What I've done till now:
My original image actually looks like this. I identified all the points, grouped them based on a certain distance threshold and applied a convex hull around them. I'm unsure how to proceed further. Any help would be greatly appreciated. Thank you!
Here is one way in Python/OpenCV using the distance transform to find the largest Euclidean distance between the Xs.
Input:
import cv2
import numpy as np
import skimage.exposure
# read image
img = cv2.imread('xxx.png')
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold to binary and invert so background is white and xxx are black
thresh = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)[1]
thresh = 255 - thresh
# add black border around threshold image to avoid corner being largest distance
thresh2 = cv2.copyMakeBorder(thresh, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)
h, w = thresh2.shape
# create zeros mask 2 pixels larger in each dimension
mask = np.zeros([h + 2, w + 2], np.uint8)
# apply distance transform
distimg = thresh2.copy()
distimg = cv2.distanceTransform(distimg, cv2.DIST_L2, 5)
# remove excess border
distimg = distimg[1:h-1, 1:w-1]
# get max value and location in distance image
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(distimg)
# scale distance image for viewing
distimg = skimage.exposure.rescale_intensity(distimg, in_range='image', out_range=(0,255))
distimg = distimg.astype(np.uint8)
# draw circle on input
result = img.copy()
centx = max_loc[0]
centy = max_loc[1]
radius = int(max_val)
cv2.circle(result, (centx, centy), radius, (0,0,255), 1)
print('center x,y:', max_loc,'center radius:', max_val)
# save image
cv2.imwrite('xxx_distance.png',distimg)
cv2.imwrite('xxx_radius.png',result)
# show the images
cv2.imshow("thresh", thresh)
cv2.imshow("thresh2", thresh2)
cv2.imshow("distance", distimg)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Distance Transform Image:
Region of Largest Distance to Xs:
Textual Information:
center x,y: (179, 352) radius: 92.5286865234375

How can I improve Watershed segmentation of heterogenous structures in Python?

I'm following a simple approach to segment cells (microscopy images) using the Watershed algorithm in Python. I'm happy with the result 90% of the time, but I have two main problems: (1) the markers/contours are really "spiky" and (2) the algorithm sometimes fails when two cells are too close to each other (i.e. they are segmented together). Can you give some tips on how to improve it?
Here's the code I'm using and an output image showing my 2 issues.
# Imports assumed by this snippet (NP_file and enhanceContrast are defined elsewhere)
import cv2
import numpy as np
import skimage as sk
import skimage.segmentation
import skimage.morphology
from skimage import img_as_ubyte
from scipy.ndimage import binary_fill_holes

# Adjustable parameters for a future function
img_file = NP_file
sigma = 9  # size of the median filter kernel; has to be an odd number
alpha = 0.2  # scaling factor for the distance transform
clear_border = False
remove_small_objects = True

# read image and convert to gray scale
im = cv2.imread(NP_file, 1)
im = enhanceContrast(im)
im_gray = cv2.cvtColor(im.copy(), cv2.COLOR_BGR2GRAY)

# Basic median filter
im_blur = cv2.medianBlur(im_gray, ksize = sigma)

# Threshold image
th, im_seg = cv2.threshold(im_blur, im_blur.mean(), 255, cv2.THRESH_BINARY)

# filling holes in the segmented image
im_filled = binary_fill_holes(im_seg)

# discard cells touching the border
if clear_border == True:
    im_filled = skimage.segmentation.clear_border(im_filled)

# filter small particles
if remove_small_objects == True:
    im_filled = sk.morphology.remove_small_objects(im_filled, min_size = 5000)

# apply distance transform
# labels each pixel of the image with the distance to the nearest obstacle pixel.
# In this case, an obstacle pixel is a boundary pixel in a binary image.
dist_transform = cv2.distanceTransform(img_as_ubyte(im_filled), cv2.DIST_L2, 3)

# get sure foreground area: region near the center of each object
fg_val, sure_fg = cv2.threshold(dist_transform, alpha * dist_transform.max(), 255, 0)

# get sure background area: region far away from the objects
sure_bg = cv2.dilate(img_as_ubyte(im_filled), np.ones((3,3), np.uint8), iterations = 6)

# The remaining regions (borders) are those we don't know whether they are object or background
borders = cv2.subtract(sure_bg, np.uint8(sure_fg))

# use connected components labelling:
# scans an image and groups its pixels into components based on pixel connectivity
# labels the background of the image with 0 and other objects with integers starting from 1.
n_markers, markers1 = cv2.connectedComponents(np.uint8(sure_fg))

# filter small particles again (because of segmentation artifacts)
if remove_small_objects == True:
    markers1 = sk.morphology.remove_small_objects(markers1, min_size = 1000)

# Make sure the background is 1 and not 0,
# and that borders are marked as 0
markers2 = markers1 + 1
markers2[borders == 255] = 0

# run the watershed algorithm: connects markers with the original image
# The label image will be modified and the markers in the border areas will change to -1
im_out = im.copy()
markers3 = cv2.watershed(im_out, markers2)

# generate an extra image with color labels only for visualization
# mark watershed boundary pixels (markers3 == -1)
im_out[markers3 == -1] = [0, 255, 255]
in case you want to try to reproduce my results you can find my .tif file here:
https://drive.google.com/file/d/13KfyUVyHodtEOP_yKAnfFCAhgyoY0BQL/view?usp=sharing
Thanks!
In the past, the best approach for me has been to apply the watershed algorithm "only when needed". It is computationally intensive and not needed for the majority of cells in your image.
This is the code I have used with your image:
# Threshold your image (img is the input image loaded earlier)
# This example worked very well with a threshold value of 1
tv, thresh = cv2.threshold(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 1, 255, cv2.THRESH_BINARY)

# Minimize the holes in the cells to facilitate finding contours
for i in range(5):
    thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, np.ones((3,3)))
    thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, np.ones((3,3)))

# Find contours and keep the ones big enough to be a cell
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = [c for c in contours if cv2.contourArea(c) > 400]

output = np.zeros_like(thresh)
cv2.drawContours(output, contours, -1, 255, -1)
for i, contour in enumerate(contours):
    x, y, w, h = cv2.boundingRect(contour)
    cv2.putText(output, f"{i}", (x, y), cv2.FONT_HERSHEY_PLAIN, 1, 255, 2)
The output of this code is this image:
As you can see, only one pair of cells (contour #7) needs splitting with the watershed algorithm.
Running watershed on just that cell is very fast (a much smaller image to work with) and this is the result:
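For illustration only, a rough sketch of how one might crop out just the flagged contour and run a distance-transform/watershed split on that small region; the helper name, the padding and the threshold factors are assumptions, not taken from the answer:

# Hypothetical sketch: run watershed only on the bounding box of one suspicious contour.
import cv2
import numpy as np

def split_cell(img, thresh, contour, pad=10):
    # crop the original BGR image and the binary mask around the contour
    x, y, w, h = cv2.boundingRect(contour)
    x0, y0 = max(x - pad, 0), max(y - pad, 0)
    roi = img[y0:y + h + pad, x0:x + w + pad].copy()
    roi_mask = thresh[y0:y + h + pad, x0:x + w + pad]

    # classic distance-transform markers, as in the question's code
    dist = cv2.distanceTransform(roi_mask, cv2.DIST_L2, 3)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_bg = cv2.dilate(roi_mask, np.ones((3, 3), np.uint8), iterations=3)
    unknown = cv2.subtract(sure_bg, np.uint8(sure_fg))

    _, markers = cv2.connectedComponents(np.uint8(sure_fg))
    markers = markers + 1
    markers[unknown == 255] = 0
    markers = cv2.watershed(roi, markers)  # roi must be an 8-bit, 3-channel image
    return markers  # labels of the split cells within the ROI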
EDIT
Some of the cell morphology calculations that can be used to assess whether the watershed algorithm should be run on an object in the image:
import math  # needed for math.pi below

# area
area = cv2.contourArea(contour)

# perimeter, with a minimum value of 0.01 to avoid division by zero in other calculations
perimeter = max(0.01, cv2.arcLength(contour, True))

# circularity
circularity = (4 * math.pi * area) / (perimeter ** 2)

# Check if the cell is convex (not smoothly elliptical)
hull = cv2.convexHull(contour)
convexity = cv2.arcLength(hull, True) / perimeter
approx = cv2.approxPolyDP(contour, 0.1 * perimeter, True)
convex = cv2.isContourConvex(approx)
You will need to find the thresholds for each of these measurements in your project. In my project, cells were elliptic, and a blob with a large area and a convex shape usually meant there were two or more cells lumped together (a small sketch of such a decision rule follows below).
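As a hedged illustration of how those measurements might be combined (the threshold values here are made-up and would need tuning for your data; it relies on the cv2 and math imports used above):

# Hypothetical decision rule: flag contours that probably contain more than one cell.
def needs_watershed(contour, area_thresh=4000, circularity_thresh=0.7):
    area = cv2.contourArea(contour)
    perimeter = max(0.01, cv2.arcLength(contour, True))
    circularity = (4 * math.pi * area) / (perimeter ** 2)
    # large blobs that are not close to circular/elliptic are likely lumped cells
    return area > area_thresh and circularity < circularity_thresh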

How to identify the shape of a floorplan?

I'm trying to differentiate between two different styles of houses using a floorplan. I'm very new to cv2, so I'm struggling a bit here. I'm able to identify the exterior of the house using contours with the code below, which is from another Stack Overflow response.
import cv2
import numpy as np

def find_rooms(img, noise_removal_threshold=25, corners_threshold=0.1,
               room_closing_max_length=100, gap_in_wall_threshold=500):

    assert 0 <= corners_threshold <= 1

    # Remove noise left from door removal
    img[img < 128] = 0
    img[img > 128] = 255
    contours, _ = cv2.findContours(~img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(img)
    for contour in contours:
        area = cv2.contourArea(contour)
        if area > noise_removal_threshold:
            cv2.fillPoly(mask, [contour], 255)
    img = ~mask

    # Detect corners (you can play with the parameters here)
    dst = cv2.cornerHarris(img, 2, 3, 0.04)
    dst = cv2.dilate(dst, None)
    corners = dst > corners_threshold * dst.max()

    # Draw lines to close the rooms off by adding a line between corners on the same x or y coordinate
    # This gets some false positives.
    # You could try to disallow drawing through other existing lines for example.
    for y, row in enumerate(corners):
        x_same_y = np.argwhere(row)
        for x1, x2 in zip(x_same_y[:-1], x_same_y[1:]):
            if x2[0] - x1[0] < room_closing_max_length:
                color = 0
                cv2.line(img, (x1, y), (x2, y), color, 1)

    for x, col in enumerate(corners.T):
        y_same_x = np.argwhere(col)
        for y1, y2 in zip(y_same_x[:-1], y_same_x[1:]):
            if y2[0] - y1[0] < room_closing_max_length:
                color = 0
                cv2.line(img, (x, y1), (x, y2), color, 1)

    # Mark the outside of the house as black
    contours, _ = cv2.findContours(~img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour_sizes = [(cv2.contourArea(contour), contour) for contour in contours]
    biggest_contour = max(contour_sizes, key=lambda x: x[0])[1]
    mask = np.zeros_like(mask)
    cv2.fillPoly(mask, [biggest_contour], 255)
    img[mask == 0] = 0

    return biggest_contour, mask

# Read gray image
img = cv2.imread("/content/51626-7-floorplan-2.jpg", cv2.IMREAD_GRAYSCALE)

ext_contour, mask = find_rooms(img.copy())
cv2_imshow(mask)

print('exterior')
epsilon = 0.01 * cv2.arcLength(ext_contour, True)
approx = cv2.approxPolyDP(ext_contour, epsilon, True)
final = cv2.drawContours(img, [approx], -1, (0, 255, 0), 2)
cv2_imshow(final)
These floorplans will only have one of two shapes, a 6 sided shape and a 4 sided shape. Below are the two styles:
I need to ignore any bay windows or small extrusions.
I believe the next step is to get a contour for just the main walls, smooth it, and then count the edges in the array. I'm stuck as to how to do this. Any assistance would be greatly appreciated!
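For what it's worth, a rough, untested sketch of that idea (smooth the exterior contour with a coarser approxPolyDP and count the remaining vertices); the epsilon factor is a guess and would need tuning so that bay windows and small extrusions are absorbed:

# Hypothetical sketch: approximate the exterior contour coarsely and count its corners.
import cv2

def count_main_corners(ext_contour, epsilon_factor=0.05):
    # A larger epsilon smooths away small extrusions such as bay windows.
    epsilon = epsilon_factor * cv2.arcLength(ext_contour, True)
    approx = cv2.approxPolyDP(ext_contour, epsilon, True)
    return len(approx)

# e.g. 4 corners -> four sided plan, 6 corners -> six sided plan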
If you really just need the decision whether it's a four or six sided house, you can simply do the following: grayscale the image, and inverse binary threshold everything which is not nearly white. Then calculate the ratio between the non-zero pixels of that mask and the total number of pixels. That ratio must be larger for four sided houses than for six sided houses. The exact cut-off depends on your data. For the two given examples, one could set the cut-off to 0.9.
Here's some code:
import cv2
from skimage import io  # Only needed for web grabbing images

def house_analysis(image):
    # Grayscale image
    mask = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Inverse binary threshold everything, which is not nearly white
    mask = cv2.threshold(mask, 248, 255, cv2.THRESH_BINARY_INV)[1]

    # Calculate ratio between mask and total number of pixels
    ratio = cv2.countNonZero(mask) / (mask.shape[0] * mask.shape[1])
    print(ratio)

    # Decide with respect to cut-off, if house is four or six sided
    cutoff = 0.9
    if ratio > cutoff:
        print('Four sided house')
    else:
        print('Six sided house')

    cv2.imshow('image', image)
    cv2.imshow('mask', mask)
    cv2.waitKey(0)

house_4 = cv2.cvtColor(io.imread('https://i.stack.imgur.com/vqzZB.jpg'), cv2.COLOR_RGB2BGR)
house_6 = cv2.cvtColor(io.imread('https://i.stack.imgur.com/ZpkQW.jpg'), cv2.COLOR_RGB2BGR)

house_analysis(house_4)
house_analysis(house_6)

cv2.destroyAllWindows()
The print outputs:
0.9533036597428289
Four sided house
0.789531416400426
Six sided house
If you have larger white space around the main walls, one could crop that part to get more robust ratios (a small sketch of that follows below).
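A minimal sketch of that cropping step, assuming the same mask and cv2 import as in the code above:

# Hypothetical sketch: crop to the bounding box of the non-white pixels before computing the ratio.
x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
cropped = mask[y:y + h, x:x + w]
ratio = cv2.countNonZero(cropped) / (cropped.shape[0] * cropped.shape[1])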
Hope that helps!
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.1
OpenCV: 4.1.2
----------------------------------------
Simple contour finding is unlikely to give you a robust solution.
However, your current approach can be improved by first calculating a mask of the white background.
Using the shape of this mask you can determine the layout.
# frame is the BGR floorplan image; select pixels that are nearly white
lower_color_bounds = (220, 220, 220)
upper_color_bounds = (255, 255, 255)
mask = cv2.inRange(frame, lower_color_bounds, upper_color_bounds)
mask_rgb = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)

How to detect largest difference between images in OpenCV Python?

I'm working on a shooting simulator project where I have to detect bullet holes from images. I'm trying to differentiate two images so I can detect the new hole between them, but it's not working as expected. Between the two images, there are minor changes in the previous bullet holes because of slight movement between camera frames.
My first image is here
before.png
and the second one is here
after.png
I tried this code for checking differences
import cv2
import numpy as np

before = cv2.imread("before.png")
after = cv2.imread("after.png")

result = after - before
cv2.imwrite("result.png", result)
the result i'm getting in result.png is the image below
result.png
but this is not what I expected. I only want to detect the new hole,
but the diff also includes some pixels from the previous holes.
The result I'm expecting is
expected.png
Please help me figure it out so it can only detect big differences.
Thanks in advance.
Any new idea will be appreciated.
In order to find the differences between two images, you can utilize the Structural Similarity Index (SSIM) which was introduced in Image Quality Assessment: From Error Visibility to Structural Similarity. This method is already implemented in the scikit-image library for image processing. You can install scikit-image with pip install scikit-image.
Using the skimage.metrics.structural_similarity() function from scikit-image, it returns a score and a difference image, diff. The score represents the structural similarity index between the two input images and can fall in the range [-1,1], with values closer to one representing higher similarity. But since you're only interested in where the two images differ, the diff image is what you're looking for. The diff image contains the actual image differences between the two images.
Next, we find all contours using cv2.findContours() and filter for the largest contour. The largest contour should represent the newly detected difference, as slight differences should be smaller than the added bullet.
Here is the largest detected difference between the two images
Here are the actual differences between the two images. Notice how all of the differences were captured, but since a new bullet is most likely the largest contour, we can filter out all the other slight movements between camera frames.
Note: this method works pretty well if we assume that the new bullet will have the largest contour in the diff image. If the newest hole were smaller, you may have to mask out the existing regions, and whatever new contours appear in the new image would be the new hole (assuming the image will be a uniform black background with white holes). A rough sketch of that masking idea follows below.
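For illustration only (not part of the code below), a sketch of that masking idea; it assumes the before/after images from the question are thresholded so holes are white on black, and the dilation kernel size is a guess:

# Hypothetical sketch: suppress regions that already contained a hole in the "before" image.
import cv2
import numpy as np

before_bin = cv2.threshold(cv2.cvtColor(before, cv2.COLOR_BGR2GRAY), 127, 255, cv2.THRESH_BINARY)[1]
after_bin = cv2.threshold(cv2.cvtColor(after, cv2.COLOR_BGR2GRAY), 127, 255, cv2.THRESH_BINARY)[1]

# grow the old holes a little so slight camera movement does not leave residue
old_holes = cv2.dilate(before_bin, np.ones((15, 15), np.uint8))

# anything white in "after" that is not covered by a grown old hole is a candidate new hole
new_holes = cv2.bitwise_and(after_bin, cv2.bitwise_not(old_holes))
contours, _ = cv2.findContours(new_holes, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)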
from skimage.metrics import structural_similarity
import cv2
# Load images
image1 = cv2.imread('1.png')
image2 = cv2.imread('2.png')
# Convert to grayscale
image1_gray = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
image2_gray = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
# Compute SSIM between the two images
(score, diff) = structural_similarity(image1_gray, image2_gray, full=True)
# The diff image contains the actual image differences between the two images
# and is represented as a floating point data type in the range [0,1],
# so we must convert the array to 8-bit unsigned integers in the range
# [0,255] before we can use it with OpenCV
diff = (diff * 255).astype("uint8")
print("Image Similarity: {:.4f}%".format(score * 100))
# Threshold the difference image, followed by finding contours to
# obtain the regions of the two input images that differ
thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
contour_sizes = [(cv2.contourArea(contour), contour) for contour in contours]
result = image2.copy()
# The largest contour should be the new detected difference
if len(contour_sizes) > 0:
    largest_contour = max(contour_sizes, key=lambda x: x[0])[1]
    x, y, w, h = cv2.boundingRect(largest_contour)
    cv2.rectangle(result, (x, y), (x + w, y + h), (36,255,12), 2)
cv2.imshow('result', result)
cv2.imshow('diff', diff)
cv2.waitKey()
Here's another example with different input images. SSIM is pretty good for detecting differences between images
This is my approach: after we subtract one image from the other, there is still some noise remaining, so I just tried to remove it. I divide the image into sections whose size is a percentage of the image's size, and for each small section I compare before and after, so that only significant chunks of white pixels remain. This algorithm lacks precision when there is occlusion, that is, whenever the new shot overlaps an existing one.
import cv2
import numpy as np

# This is the percentage of the width/height we're gonna cut
# 0.99 < percent < 0.1
percent = 0.01

before = cv2.imread("before.png")
after = cv2.imread("after.png")

result = after - before  # Here, we eliminate the biggest differences between before and after

h, w, _ = result.shape

hPercent = percent * h
wPercent = percent * w

def isBlack(crop):  # Function that tells if the crop is black
    mask = np.zeros(crop.shape, dtype = int)
    return not (np.bitwise_or(crop, mask)).any()

for wFrom in range(0, w, int(wPercent)):  # Here we are gonna remove that noise
    for hFrom in range(0, h, int(hPercent)):
        wTo = int(wFrom + wPercent)
        hTo = int(hFrom + hPercent)
        crop = result[wFrom:wTo, hFrom:hTo]  # Crop the image

        if isBlack(crop):  # If it is black, there is no shot in it
            continue  # We don't need to continue with the algorithm

        beforeCrop = before[wFrom:wTo, hFrom:hTo]  # Crop the image before

        if not isBlack(beforeCrop):  # If the image before is not black, it means there was a shot already there
            result[wFrom:wTo, hFrom:hTo] = [0, 0, 0]  # So, we erase it from the result

cv2.imshow("result", result)
cv2.imshow("before", before)
cv2.imshow("after", after)
cv2.waitKey(0)
As you can see, it worked for the use case you provided. A good next step is to keep an array of the positions of previous shots, so that you can ignore differences where a hole has already been detected (a rough sketch of that idea follows below).
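A hedged sketch of that bookkeeping; the names and the distance threshold are placeholders, not part of the answer's code:

# Hypothetical sketch: remember where shots have already been detected and
# ignore any new detection that lands too close to a known hole.
known_shots = []  # list of (x, y) centers detected in earlier frames

def is_new_shot(center, min_dist=15):
    cx, cy = center
    for kx, ky in known_shots:
        if (cx - kx) ** 2 + (cy - ky) ** 2 < min_dist ** 2:
            return False  # too close to an existing hole
    known_shots.append(center)
    return True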
My code :
from skimage.measure import compare_ssim
import argparse
import imutils
import cv2
import numpy as np
# load the two input images
imageA = cv2.imread('./Input_1.png')
cv2.imwrite("./org.jpg", imageA)
# imageA = cv2.medianBlur(imageA,29)
imageB = cv2.imread('./Input_2.png')
cv2.imwrite("./test.jpg", imageB)
# imageB = cv2.medianBlur(imageB,29)
# convert the images to grayscale
grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY)
##########################################################################################################
difference = cv2.subtract(grayA,grayB)
result = not np.any(difference)
if result is True:
print ("Pictures are the same")
else:
cv2.imwrite("./open_cv_subtract.jpg", difference )
print ("Pictures are different, the difference is stored.")
##########################################################################################################
diff = cv2.absdiff(grayA, grayB)
cv2.imwrite("./tabsdiff.png", diff)
##########################################################################################################
grayB=cv2.resize(grayB,(grayA.shape[1],grayA.shape[0]))
(score, diff) = compare_ssim(grayA, grayB, full=True)
diff = (diff * 255).astype("uint8")
print("SSIM: {}".format(score))
#########################################################################################################
thresh = cv2.threshold(diff, 25, 255,cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
#s = imutils.grab_contours(cnts)
count = 0
# loop over the contours
for c in cnts:
# images differ
count=count+1
(x, y, w, h) = cv2.boundingRect(c)
cv2.rectangle(imageA, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.rectangle(imageB, (x, y), (x + w, y + h), (0, 0, 255), 2)
##########################################################################################################
print (count)
cv2.imwrite("./original.jpg", imageA)
# cv2.imshow("Modified", imageB)
cv2.imwrite("./test_image.jpg", imageB)
cv2.imwrite("./compare_ssim.jpg", diff)
cv2.imwrite("./thresh.jpg", thresh)
cv2.waitKey(0)
Another code :
import subprocess
# -fuzz 5% # ignore minor difference between two images
# -density 300
# miff:- | display
# -metric phash
# -highlight-color White # by default its RED
# -lowlight-color Black
# -compose difference # src
# -threshold 0
# -separate -evaluate-sequence Add
cmd = 'compare -highlight-color black -fuzz 5% -metric AE Input_1.png ./Input_2.png -compose src ./result.png x: '
a = subprocess.call(cmd, shell=True)
The code above shows various image-comparison approaches for finding differences between images using OpenCV, ImageMagick, numpy, skimage, etc.
Hope you find this helpful.

Advanced square detection (with connected region)

If the squares have connected regions in the image, how can I detect them?
I have tested the method mentioned in
OpenCV C++/Obj-C: Advanced square detection
It did not work well.
Any good ideas ?
import cv2
import numpy as np

def angle_cos(p0, p1, p2):
    d1, d2 = (p0-p1).astype('float'), (p2-p1).astype('float')
    return abs( np.dot(d1, d2) / np.sqrt( np.dot(d1, d1)*np.dot(d2, d2) ) )

def find_squares(img):
    squares = []
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # cv2.imshow("gray", gray)

    gaussian = cv2.GaussianBlur(gray, (5, 5), 0)

    temp, bin = cv2.threshold(gaussian, 80, 255, cv2.THRESH_BINARY)
    # cv2.imshow("bin", bin)

    contours, hierarchy = cv2.findContours(bin, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

    cv2.drawContours( gray, contours, -1, (0, 255, 0), 3 )
    # cv2.imshow('contours', gray)

    for cnt in contours:
        cnt_len = cv2.arcLength(cnt, True)
        cnt = cv2.approxPolyDP(cnt, 0.02*cnt_len, True)
        if len(cnt) == 4 and cv2.contourArea(cnt) > 1000 and cv2.isContourConvex(cnt):
            cnt = cnt.reshape(-1, 2)
            max_cos = np.max([angle_cos( cnt[i], cnt[(i+1) % 4], cnt[(i+2) % 4] ) for i in xrange(4)])
            if max_cos < 0.1:
                squares.append(cnt)

    return squares

if __name__ == '__main__':
    img = cv2.imread('123.bmp')
    # cv2.imshow("origin", img)

    squares = find_squares(img)
    print "Found %d squares" % len(squares)
    cv2.drawContours( img, squares, -1, (0, 255, 0), 3 )

    cv2.imshow('squares', img)
    cv2.waitKey()
I used some methods from the OpenCV examples, but the result is not good.
Applying a Watershed Transform based on the Distance Transform will separate the objects:
Handling objects at the border is always problematic, and often discarded, so that pink rectangle at top left not separated is not a problem at all.
Given a binary image, we can apply the Distance Transform (DT) and from it obtain markers for the Watershed. Ideally there would be a ready function for finding regional minima/maxima, but since it isn't there, we can make a decent guess on how we can threshold DT. Based on the markers we can segment using Watershed, and the problem is solved. Now you can worry about distinguishing components that are rectangles from those that are not.
import sys
import cv2
import numpy
import random
from scipy.ndimage import label

def segment_on_dt(img):
    dt = cv2.distanceTransform(img, 2, 3)  # L2 norm, 3x3 mask
    dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(numpy.uint8)
    dt = cv2.threshold(dt, 100, 255, cv2.THRESH_BINARY)[1]
    lbl, ncc = label(dt)

    lbl[img == 0] = lbl.max() + 1
    lbl = lbl.astype(numpy.int32)
    cv2.watershed(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), lbl)
    lbl[lbl == -1] = 0
    return lbl

img = cv2.cvtColor(cv2.imread(sys.argv[1]), cv2.COLOR_BGR2GRAY)
img = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)[1]
img = 255 - img  # White: objects; Black: background

ws_result = segment_on_dt(img)
# Colorize
height, width = ws_result.shape
ws_color = numpy.zeros((height, width, 3), dtype=numpy.uint8)
lbl, ncc = label(ws_result)
for l in xrange(1, ncc + 1):
    a, b = numpy.nonzero(lbl == l)
    if img[a[0], b[0]] == 0:  # Do not color background.
        continue
    rgb = [random.randint(0, 255) for _ in xrange(3)]
    ws_color[lbl == l] = tuple(rgb)

cv2.imwrite(sys.argv[2], ws_color)
From the above image you can consider fitting ellipses in each component to determine rectangles. Then you can use some measurement to define whether the component is a rectangle or not. This approach has a greater chance to work for rectangles that are fully visible, and will likely produce bad results for partially visible ones. The following image shows the result of such approach considering that a component is a rectangle if the rectangle from the fitted ellipse is within 10% of component's area.
# Fit ellipses to determine the rectangles.
wsbin = numpy.zeros((height, width), dtype=numpy.uint8)
wsbin[cv2.cvtColor(ws_color, cv2.COLOR_BGR2GRAY) != 0] = 255

ws_bincolor = cv2.cvtColor(255 - wsbin, cv2.COLOR_GRAY2BGR)
lbl, ncc = label(wsbin)
for l in xrange(1, ncc + 1):
    yx = numpy.dstack(numpy.nonzero(lbl == l)).astype(numpy.int64)
    xy = numpy.roll(numpy.swapaxes(yx, 0, 1), 1, 2)
    if len(xy) < 100:  # Too small.
        continue

    ellipse = cv2.fitEllipse(xy)
    center, axes, angle = ellipse
    rect_area = axes[0] * axes[1]
    if 0.9 < rect_area / float(len(xy)) < 1.1:
        rect = numpy.round(numpy.float64(
            cv2.cv.BoxPoints(ellipse))).astype(numpy.int64)
        color = [random.randint(60, 255) for _ in xrange(3)]
        cv2.drawContours(ws_bincolor, [rect], 0, color, 2)

cv2.imwrite(sys.argv[3], ws_bincolor)
Solution 1:
Dilate your image to delete connected components.
Find contours of the detected components. Eliminate contours which are not rectangles by introducing some measure (e.g. ratio of perimeter to area).
This solution will not detect rectangles connected to the borders.
Solution 2:
Dilate to delete connected components.
Find contours.
Approximate contours to reduce their points (for a rectangle contour it should be 4 points).
Check if the angles between the contour lines are 90 degrees.
Eliminate contours which do not have 90-degree angles.
This should solve the problem with rectangles connected to the borders (a minimal sketch of this pipeline follows below).
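A minimal, untested sketch of the Solution 2 pipeline described above; the kernel iterations, the approximation epsilon, the area cut-off and the angle tolerance are assumptions. Here the white blobs are shrunk with erosion, which is equivalent to dilating the background mentioned in the steps:

# Hypothetical sketch of Solution 2: erode/dilate to split touching squares,
# approximate each contour to 4 points and keep only right-angled quadrilaterals.
import cv2
import numpy as np

def angle_cos(p0, p1, p2):
    d1, d2 = (p0 - p1).astype('float'), (p2 - p1).astype('float')
    return abs(np.dot(d1, d2) / np.sqrt(np.dot(d1, d1) * np.dot(d2, d2)))

def find_separated_squares(binary, iterations=3, max_cos=0.2):
    # shrinking the white blobs breaks the thin connections between squares
    separated = cv2.erode(binary, None, iterations=iterations)
    contours, _ = cv2.findContours(separated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    squares = []
    for cnt in contours:
        approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) > 500 and cv2.isContourConvex(approx):
            pts = approx.reshape(-1, 2)
            # cosine close to 0 means the corner is close to 90 degrees
            if max(angle_cos(pts[i], pts[(i + 1) % 4], pts[(i + 2) % 4]) for i in range(4)) < max_cos:
                squares.append(approx)
    return squares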
You have three problems:
The rectangles are not very strict rectangles (the edges are often somewhat curved)
There are a lot of them.
They are often connected.
It seems that all your rects are essentially the same size(?), and do not greatly overlap, but the pre-processing has connected them.
For this situation the approach I would try is:
dilate your image a few times (as also suggested by @krzych) - this will remove the connections, but result in slightly smaller rects.
Use scipy to label and find_objects - You now know the position and slice for every remaining blob in the image.
Use minAreaRect to find the center, orientation, width and height of each rectangle.
You can use step 3. to test whether the blob is a valid rectangle or not, by its area, dimension ratio or proximity to the edge..
This is quite a nice approach, as we assume each blob is a rectangle, so minAreaRect will find the parameters of our minimum enclosing rectangle. Further, we could test each blob using something like Hu moments if absolutely necessary.
Here is what I was suggesting in action, boundary collision matches shown in red.
Code:
import numpy as np
import cv2
from cv2 import cv
import scipy
from scipy import ndimage
im_col = cv2.imread('jdjAf.jpg')
im = cv2.imread('jdjAf.jpg',cv2.CV_LOAD_IMAGE_GRAYSCALE)
im = np.where(im>100,0,255).astype(np.uint8)
im = cv2.erode(im, None,iterations=8)
im_label, num = ndimage.label(im)
for label in xrange(1, num+1):
points = np.array(np.where(im_label==label)[::-1]).T.reshape(-1,1,2).copy()
rect = cv2.minAreaRect(points)
lines = np.array(cv2.cv.BoxPoints(rect)).astype(np.int)
if any([np.any(lines[:,0]<=0), np.any(lines[:,0]>=im.shape[1]-1), np.any(lines[:,1]<=0), np.any(lines[:,1]>=im.shape[0]-1)]):
cv2.drawContours(im_col,[lines],0,(0,0,255),1)
else:
cv2.drawContours(im_col,[lines],0,(255,0,0),1)
cv2.imshow('im',im_col)
cv2.imwrite('rects.png',im_col)
cv2.waitKey()
I think the Watershed and distanceTransform approach demonstrated by @mmgp is clearly superior for segmenting the image, but this simple approach can be effective depending upon your needs.
