Identifying the densest region/cluster - python

I have an image like so:
I would like to automatically identify the dense white box area in the top left and then fill it and black out the rest of image. Producing something like this:
Essentially, I just want to return the co-ordinates of the densest cluster. I have tried ad-hoc methods such as erosion, dilation and binary closing, but they do not quite suit my needs. I'm not sure if I could use k-means here? I'm looking for an efficient method; any help is appreciated.

You could erode the image a little bit more, to remove more of the noise, and then find the contours and filter them by area. Here is what I would use (not tested):
import cv2
import numpy as np

kernel = np.ones((2, 2), np.uint8)
img = cv2.erode(img, kernel, iterations=2)

# Find contours of the white square:
conts, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in conts:
    area = cv2.contourArea(cnt)
    # filter out more noise
    if area > 200:  # optimize this number
        x1, y1, w, h = cv2.boundingRect(cnt)
        x2 = x1 + w  # (x1, y1) = top-left vertex
        y2 = y1 + h  # (x2, y2) = bottom-right vertex
        rect = cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 0), 2)

A good approach here is to apply a large square averaging filter. If you know approximately the size of the box you're looking for, match the filter size to it. After applying this filter, the largest pixel value in the image will be at the middle of the densest region. Let's call this point p.
Next, apply segmentation and connected-component labeling to your original image. From your example image, it seems that the box you're looking for is connected. You might want to apply some morphological operations to make sure it's connected. You can also paint a reasonably sized blob centered at point p; it will connect many of the small regions that together form a dense area.
Next, remove all connected components except the one containing point p. You can do this by finding the label at pixel p, and comparing all pixels in the labeled image for equality with that label.
This should leave you with a connected, compact region. You can find the bounding box of this region and paint it on your image, if you really want to enforce that the found area be a box.
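A minimal sketch of that pipeline in Python/OpenCV (the filename, filter size, and blob radius are assumptions you would tune to your image):

import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # hypothetical filename
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# large square averaging filter; match the size roughly to the box you expect
box_size = 101  # assumption: tune to the approximate box size
density = cv2.blur(binary, (box_size, box_size))

# p = location of the maximum of the averaged image
_, _, _, p = cv2.minMaxLoc(density)  # p is (x, y) of the densest point

# paint a blob at p to connect nearby fragments, then label the components
connected = binary.copy()
cv2.circle(connected, p, 25, 255, -1)  # assumption: 25 px blob radius
n_labels, labels = cv2.connectedComponents(connected)

# keep only the component that contains p
target = labels[p[1], p[0]]
region = np.uint8(labels == target) * 255

# bounding box of the dense region; black out everything else
x, y, w, h = cv2.boundingRect(region)
result = np.zeros_like(binary)
result[y:y + h, x:x + w] = 255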

Related

Extract most central area in a Binary Image

I am processing binary images, and was previously using this code to find the largest area in the binary image:
# Use the hue value to convert to binary
thresh = 20
thresh, thresh_img = cv2.threshold(h, thresh, 255, cv2.THRESH_BINARY)
cv2.imshow('thresh', thresh_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Finding Contours
# Use a copy of the image since findContours alters the image
contours, _ = cv2.findContours(thresh_img.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#Extract the largest area
c = max(contours, key=cv2.contourArea)
This code isn't really doing what I need it to do; I now think it would be better to extract the most central area in the binary image.
Binary Image
Largest Image
This is what the code currently extracts, but I am hoping to extract the central circle in the first binary image instead.
OpenCV comes with a point-polygon test function (for contours). It even gives a signed distance, if you ask for that.
I'll find the contour that is closest to the center of the picture. That may be a contour actually overlapping the center of the picture.
Timings, on my quadcore from 2012, give or take a millisecond:
findContours: ~1 millisecond
all pointPolygonTests and argmax: ~1 millisecond
import cv2 as cv
import numpy as np

mask = cv.imread("fkljm.png", cv.IMREAD_GRAYSCALE)
(height, width) = mask.shape
ret, mask = cv.threshold(mask, 128, 255, cv.THRESH_BINARY) # required because the sample picture isn't exactly clean
# get contours
contours, hierarchy = cv.findContours(mask, cv.RETR_LIST | cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
center = (np.array([width, height]) - 1) / 2
# find contour closest to center of picture
distances = [
cv.pointPolygonTest(contour, center, True) # looking for most positive (inside); negative is outside
for contour in contours
]
iclosest = np.argmax(distances)
print("closest contour is", iclosest, "with distance", distances[iclosest])
# draw closest contour
canvas = cv.cvtColor(mask, cv.COLOR_GRAY2BGR)
cv.drawContours(image=canvas, contours=[contours[iclosest]], contourIdx=-1, color=(0, 255, 0), thickness=5)
closest contour is 45 with distance 65.19202405202648
A cv.floodFill() on the center point can also quickly yield a labeling of that blob... assuming the mask is positive there. Otherwise, there needs to be a search.
(cx, cy) = center.astype(int)
assert mask[cy,cx], "floodFill not applicable"
# trying cv.floodFill on the image center
mask2 = mask >> 1 # turns everything else gray
cv.floodFill(image=mask2, mask=None, seedPoint=(cx, cy), newVal=255)
# use (mask2 == 255) to identify that blob
This also takes less than a millisecond.
Some practically faster approaches might involve a pyramid scheme (low-res versions of the mask) to quickly identify areas of the picture that are candidates for an exact test (distance/intersection); a rough sketch follows below.
Test the target pixel. Hit (positive)? Done.
Calculate a low-res mask. Per block, if any pixel is positive, the block is positive.
Find the positive blocks, sort them by distance, and examine more closely all those within sqrt(2) * blocksize of the best distance.
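A rough sketch of that block-based shortlisting, assuming the `mask` and `center` from the snippet above and a plain block reduction instead of a full pyramid:

import numpy as np

def candidate_blocks(mask, center, blocksize=32):
    """Shortlist low-res blocks that contain any positive pixel, sorted by distance to center."""
    h, w = mask.shape
    # low-res mask: a block is positive if any pixel inside it is positive
    gh, gw = int(np.ceil(h / blocksize)), int(np.ceil(w / blocksize))
    padded = np.zeros((gh * blocksize, gw * blocksize), dtype=bool)
    padded[:h, :w] = mask > 0
    blocks = padded.reshape(gh, blocksize, gw, blocksize).any(axis=(1, 3))

    ys, xs = np.nonzero(blocks)
    centers = (np.stack([xs, ys], axis=1) + 0.5) * blocksize  # block centers in pixel coords
    dists = np.linalg.norm(centers - np.asarray(center), axis=1)
    order = np.argsort(dists)
    # examine more closely every block within sqrt(2) * blocksize of the best distance
    keep = dists[order] <= dists[order[0]] + np.sqrt(2) * blocksize
    return list(zip(xs[order][keep], ys[order][keep]))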
There are several ways to define "most central." I chose to define it as the region with the closest distance to the point you're searching for. If the point is inside the region, then that distance will be zero.
I also chose to do this with a pixel-based approach rather than a polygon-based approach, like you're doing with findContours().
Here's a step-by-step breakdown of what this code is doing.
Load the image, put it into grayscale, and threshold it. You're already doing these things.
Identify connected components of the image. Connected components are places where there are white pixels which are directly connected to other white pixels. This breaks up the image into regions.
Using np.argwhere(), convert a true/false mask into an array of coordinates.
For each coordinate, compute the Euclidean distance between that point and search_point.
Find the minimum within each region.
Across all regions, find the smallest distance.
import cv2
import numpy as np

img = cv2.imread('test197_img.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh_img = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

n_groups, comp_grouped = cv2.connectedComponents(thresh_img)

components = []
search_point = [600, 150]

for i in range(1, n_groups):
    mask = (comp_grouped == i)
    component_coords = np.argwhere(mask)[:, ::-1]
    min_distance = np.sqrt(((component_coords - search_point) ** 2).sum(axis=1)).min()
    components.append({
        'mask': mask,
        'min_distance': min_distance,
    })

closest = min(components, key=lambda x: x['min_distance'])['mask']
Output:

Rotate incomplete box so it is vertical

I have a dataset of x-ray images that I am trying to clean by rotating the images so the arm is vertical and cropping the image of any excess space. Here are some examples from the dataset:
I am currently working out the best way to determine the angle of the x-ray and rotate the image based on that.
My current approach is to detect the line of the side of the rectangle that the scan is in using the Hough transform, and rotate the image based on that.
I tried to run the Hough transform on the output of a Canny edge detector, but this doesn't work so well for images where the edge of the rectangle is blurred, like in the first image.
I can't use cv's box detection, as sometimes the rectangle around the scan has an edge off screen.
So I currently use adaptive thresholding to find the edge of the box, then median filter it and try to find the longest line in the result, but sometimes the wrong line is the longest and the image gets rotated completely wrong.
Adaptive thresholding is used because some scans have different brightnesses.
The current implementation I have is:
def get_lines(img):
    # threshold
    thresh = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 15, 4.75)
    median = cv2.medianBlur(thresh, 3)
    # detect lines
    lines = cv2.HoughLines(median, 1, np.pi / 180, 175)
    return sorted(lines, key=lambda x: x[0][0], reverse=True)

def rotate(image, angle):
    (h, w) = image.shape[:2]
    (cX, cY) = (w // 2, h // 2)
    M = cv2.getRotationMatrix2D((cX, cY), angle, 1.0)
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    M[0, 2] += (nW / 2) - cX
    M[1, 2] += (nH / 2) - cY
    return cv2.warpAffine(image, M, (nW, nH))

def fix_rotation(input):
    lines = get_lines(input)
    rho, theta = lines[0][0]
    return rotate(input, theta * 180 / np.pi)
and produces the following results:
When it goes wrong:
I was wondering if there are any better techniques to use in order to improve the performance of this, and what the best way would be to crop the images after they have been rotated?
The idea is to use the blob of the arm itself and fit an ellipse around it. Then, extract its major axis. I quickly tested the idea in Matlab – not OpenCV. Here's what I did, you should be able to use OpenCV's equivalent functions to achieve similar outputs.
First, compute a threshold value for your input via Otsu's method. Then adjust that threshold with a bias to get a better segmentation, and use the adjusted value to threshold the image.
In pseudo-code:
//the bias value
threshBias = 0.4;

//get the binary threshold via Otsu:
thresholdLevel = graythresh( grayInput, "otsu" );

//subtract the bias from the original value
thresholdLevel = thresholdLevel - threshBias * thresholdLevel;

//get the fixed binary image:
binaryImage = imbinarize( grayInput, thresholdLevel );
After small blob filtering, this is the output:
Now, get the contours/blobs and fit an ellipse for each contour. Check out the OpenCV example here: https://docs.opencv.org/3.4.9/de/d62/tutorial_bounding_rotated_ellipses.html
You end up with two ellipses:
We are looking for the biggest ellipse: the one with the largest area and the largest major and minor axes. I used the width and height of each ellipse to filter the results. The target ellipse is colored in green. Finally, I get the major axis of the target ellipse, colored here in yellow:
Now, to implement these ideas in OpenCV you have these options:
Use fitEllipse to find the ellipses, as sketched below. The return value of this function is a RotatedRect object; the data stored there are the vertices of the ellipse.
Instead of fitting an ellipse, you could try using minAreaRect, which finds a rotated rectangle of the minimum area enclosing a blob.
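A rough OpenCV sketch of the fitEllipse route; the filename, the bias value, and the "biggest ellipse" criterion are assumptions to adapt:

import cv2
import numpy as np

gray = cv2.imread('xray.png', cv2.IMREAD_GRAYSCALE)  # hypothetical filename

# Otsu threshold, then lower it by a bias to get a fuller segmentation
otsu_level, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
bias = 0.4  # assumption: same bias as in the pseudo-code above
_, binary = cv2.threshold(gray, otsu_level * (1 - bias), 255, cv2.THRESH_BINARY)

# fit an ellipse to each blob and keep the biggest one
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
best = None
for cnt in contours:
    if len(cnt) < 5:  # fitEllipse needs at least 5 points
        continue
    (cx, cy), (w, h), angle = cv2.fitEllipse(cnt)
    if best is None or w * h > best[1][0] * best[1][1]:
        best = ((cx, cy), (w, h), angle)

# the angle of the fitted ellipse gives the rotation to correct
print('ellipse angle:', best[2])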
You can use image moments to calculate the rotation angle.
Using the OpenCV moments function, calculate the second-order central moments to construct a covariance matrix and then obtain the orientation, as shown in the Image moment wiki page.
Obtain the normalized central moments nu20, nu11 and nu02 from OpenCV's moments. The orientation is then calculated as
0.5 * arctan(2 * nu11 / (nu20 - nu02))
Please refer to the given link for details.
You can use the raw image itself or the preprocessed one for the calculation of the orientation; see which one gives you better accuracy and use that.
As for the bounding box: once you rotate the image (assuming you used the preprocessed one), get all the non-zero pixel coordinates of the rotated image and calculate their upright bounding box using OpenCV's boundingRect.
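A short sketch of the moments-based angle and the final crop, assuming `binary` is your preprocessed (thresholded) image and that you rotate with a function like the `rotate` defined in the question:

import cv2
import numpy as np

def orientation_from_moments(binary):
    """Orientation of the blob from the normalized second-order central moments, in degrees."""
    m = cv2.moments(binary, binaryImage=True)
    angle = 0.5 * np.arctan2(2 * m['nu11'], m['nu20'] - m['nu02'])
    return np.degrees(angle)

def crop_to_content(rotated_binary, rotated_image):
    """Upright bounding box of all non-zero pixels after rotation."""
    coords = cv2.findNonZero(rotated_binary)
    x, y, w, h = cv2.boundingRect(coords)
    return rotated_image[y:y + h, x:x + w]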

How can I prevent HoughLines from detecting certain lines multiple times

So I'm working on this piece of code to extract data from some graphs in images. These images are all scanned from a book. Since we're talking about 100+ images here, I would of course like to automate the process. My first step was to make sure that all images are aligned. Because the pages of the book were scanned by hand, the scans are all slightly shifted or rotated relative to each other. Luckily there are some dotted lines on the images, which can be used as reference points to align them. Afterwards I can divide the image into smaller subimages by slicing the image along these dotted lines. That way, all subimages will be equal across all scanned images.
So, first step of course is to detect these dotted lines. My strategy can be described in 4 steps:
turn the dotted lines into solid lines, using Morphological Transformation
detect all edges, using Canny Edge Detection
identify the lines, using HoughLines
draw these lines on a mask for further usage
Now there are several problems which may occur. Sometimes HoughLines will detect a wrong line (such as the fold of the next page in the book), but this could potentially be fixed by cropping the image a little on the right side (better solutions are always welcome). The second (and biggest) problem is that HoughLines sometimes tends to identify a single line as multiple lines. I think this has something to do with Canny Edge Detection being too rough or vague about the edges, so that HoughLines actually sees multiple lines. Is there a way I could "smooth" the output from Canny so that HoughLines detects each line exactly once?
In the case of this specific image, the vertical dotted lines in the middle didn't get identified, whereas the fold of the next page in the book did. Furthermore the vertical dotted lines got identified as multiple lines. (left source image, middle edges detected, right lines detected)
import cv2
import numpy as np

# load image
img_large = cv2.imread("image.png")
# resize for ease of use
img_ori = cv2.resize(img_large, None, fx=0.2, fy=0.2, interpolation=cv2.INTER_CUBIC)
# create grayscale
img = cv2.cvtColor(img_ori, cv2.COLOR_BGR2GRAY)
# create mask for image size
mask = np.zeros((img.shape[:2]), dtype=np.uint8)
# do a morphologic close to merge dotted line
kernel = np.ones((8, 8))
res = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
# detect edges for houghlines
edges = cv2.Canny(res, 50, 50)
# detect lines
lines = cv2.HoughLines(edges, 1, np.pi/180, 200)
# draw detected lines
for line in lines:
    rho, theta = line[0]
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a * rho
    y0 = b * rho
    x1 = int(x0 + 1000 * (-b))
    y1 = int(y0 + 1000 * a)
    x2 = int(x0 - 1000 * (-b))
    y2 = int(y0 - 1000 * a)
    cv2.line(mask, (x1, y1), (x2, y2), 255, 2)
    cv2.line(img, (x1, y1), (x2, y2), 127, 2)
In your script, the pixel-bins and the rotation bins are too fine for the threshold you've set:
lines = cv2.HoughLines(edges, 1, np.pi/180, 200)
So you can tune the threshold parameter (200) to get only one line, or tune the rho (1) and theta (np.pi/180) parameters, or tune all of them. You can select a set of images from your collection that contain only one horizontal or vertical line, then do a grid search to find the parameters that detect exactly one line in your set of test images.
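A minimal sketch of such a grid search, assuming `test_edges` is a list of Canny outputs that each contain exactly one real line; the candidate parameter values are placeholders to adjust:

import itertools
import cv2
import numpy as np

def find_hough_params(test_edges):
    """Return the first (rho, theta, threshold) combination that yields exactly one line per test image."""
    rhos = [1, 2, 3]                                 # assumption: candidate pixel-bin sizes
    thetas = [np.pi / 180, np.pi / 90, np.pi / 45]   # assumption: candidate angle-bin sizes
    thresholds = [150, 200, 250, 300]                # assumption: candidate vote thresholds
    for rho, theta, thresh in itertools.product(rhos, thetas, thresholds):
        counts = []
        for edges in test_edges:
            lines = cv2.HoughLines(edges, rho, theta, thresh)
            counts.append(0 if lines is None else len(lines))
        if all(c == 1 for c in counts):
            return rho, theta, thresh
    return None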

Ellipse accuracy from 2D points

I have an ellipse which I detected from an image using OpenCV, where the ellipse is defined as (x_centre, y_centre), (minor_axis, major_axis), angle. I also have a list of points in the form [(x1, y1), (x2, y2), ...] which define where the ellipse should be in the image.
How can I find the accuracy of the found ellipse compared to the ellipse defined by the points?
Update
For better understanding, this is the result from my actual script:
ellipse detection. The red ellipse was detected from the image and the green dots are just loaded from a file.
Less accurate example: ellipse detection 2
I need some method to validate how accurately the found ellipse matches the outer points.
This answer describes one way to find how accurately a found ellipse matches the ellipse as defined by a list of points.
The first step is to create a mask image, and draw the ellipse on it.
mask = np.zeros((img.shape[0], img.shape[1]), np.uint8)
mask = cv2.ellipse(mask, ellipse, 255, 5)
Next, iterate though the list of points and check if they are in the white part or the black part of the mask image.
hit, miss = 0, 0
for point in cnt:
    if mask[point[0][1], point[0][0]] == 0:
        miss += 1
    else:
        hit += 1
This is an ellipse that fits perfectly:
Here is an ellipse that doesn't fit so well:
The accuracy can also be expressed as an RMSE: the root-mean-square of the distances between the points and the drawn ellipse. This RMSE can be found with the help of the function cv2.pointPolygonTest:
import math

ellipse_contours, hierarchy = cv2.findContours(mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ellipse_contour = ellipse_contours[0]

total_dist = 0
for point in cnt:
    total_dist += cv2.pointPolygonTest(ellipse_contour, tuple(point[0]), True)**2
rmse = math.sqrt(total_dist / len(cnt))

How to close contour over outline rather than edge - OpenCV

Tl;DR: How to measure area enclosed by contour rather than just the contour line itself
I want to find the outline of the object in the image below and have code that works for most cases.
Thresholding and adaptive thresholding do not work reliably as the lighting changes. I use Canny edge detection and check the area to ensure I found the proper contour. However, once in a while, when there is a gap that cannot be closed by morphological closing, the shape is correct but the area is that of the contour line instead of the whole object.
What I usually do is use convexHull, as it returns a contour around the object. However, in this case the object curves inwards along the top and convexHull isn't a good approximation to the area anymore.
I tried using approxPolyDP, but the area that gets returned is that of the contour line rather than the object.
How can I get the approxPolyDP to return a similar closed contour around the object, just like the convexHull function does?
Code illustrating this using the above picture:
import cv2
import numpy as np

img = cv2.imread('Img_0.jpg', 0)
cv2.imshow('Original', img)
edges = cv2.Canny(img, 50, 150)
cv2.imshow('Canny', edges)
contours, hierarchy = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnt = contours[1]  # I have a function to do this, but for simplicity here by hand
M = cv2.moments(cnt)
print('Area = %f \t' % M['m00'], end="")
cntHull = cv2.convexHull(cnt, returnPoints=True)
cntPoly = cv2.approxPolyDP(cnt, epsilon=1, closed=True)
MHull = cv2.moments(cntHull)
MPoly = cv2.moments(cntPoly)
print('Area after Convex Hull = %f \t Area after approxPoly = %f \n' % (MHull['m00'], MPoly['m00']), end="")
x, y =img.shape
size = (w, h, channels) = (x, y, 1)
canvas = np.zeros(size, np.uint8)
cv2.drawContours(canvas, cnt, -1, 255)
cv2.imshow('Contour', canvas)
canvas = np.zeros(size, np.uint8)
cv2.drawContours(canvas, cntHull, -1, 255)
cv2.imshow('Hull', canvas)
canvas = np.zeros(size, np.uint8)
cv2.drawContours(canvas, cntPoly, -1, 255)
cv2.imshow('Poly', canvas)
The output from the code is
Area = 24.500000 Area after Convex Hull = 3960.500000 Area after approxPoly = 29.500000
Here's a very promising ppt from geosensor.net that discusses several algorithms. My recommendation would be to use the swing arm method with a limited radius.
Another completely untested, off-the-wall idea I have is to scan across the image by row and column (more directions increase accuracy) and color in the regions between line intersections:
          _______
         /-------\
        /---------\
--------+---------+------  (fill between 2 intersections)
        |         |
                  |
--------+----------------  (no fill between single intersection)
         \
          -------
The maximum error would then decrease as the number of scanned line directions increases (beyond just 90 and 45 degrees). Getting a final area would then be as simple as a pixel count.
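A quick sketch of that scanline idea, filling between the outermost edge pixels of each row and each column and keeping only the overlap; more scan directions would tighten the result further:

import cv2
import numpy as np

def scanline_fill(edges):
    """Fill between the first and last edge pixel of each row and column, then intersect the two fills."""
    filled_rows = np.zeros_like(edges)
    for y in range(edges.shape[0]):
        xs = np.flatnonzero(edges[y])
        if xs.size >= 2:  # need two intersections to fill between
            filled_rows[y, xs[0]:xs[-1] + 1] = 255
    filled_cols = np.zeros_like(edges)
    for x in range(edges.shape[1]):
        ys = np.flatnonzero(edges[:, x])
        if ys.size >= 2:
            filled_cols[ys[0]:ys[-1] + 1, x] = 255
    filled = cv2.bitwise_and(filled_rows, filled_cols)
    # the enclosed area is then just a pixel count
    return filled, int(np.count_nonzero(filled))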
