I have an image here. The region within the yellow lines is my region of interest, as shown in this image, which is also one of my objectives. Here is my plan / steps:
1. Denoise, color filter, mask, and run Canny edge detection (DONE)
2. Get the coordinates of the edge pixels (DONE)
3. Select coordinates of certain vertices, for example
4. Draw a polygon with those vertices' coordinates
Here's the code:
import cv2
import numpy as np
from matplotlib import pyplot as plt

frame = cv2.imread('realtest.jpg')

# Denoise and convert to HSV for color filtering
denoisedFrame = cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21)
HSVframe = cv2.cvtColor(denoisedFrame, cv2.COLOR_BGR2HSV)

# Mask for the yellow lines
lower_yellowColor = np.array([15, 105, 105])
upper_yellowColor = np.array([25, 255, 255])
yellowMask = cv2.inRange(HSVframe, lower_yellowColor, upper_yellowColor)
maskedFrame = cv2.bitwise_and(denoisedFrame, denoisedFrame, mask=yellowMask)

# Blur to reduce noise, then detect edges on the blurred frame
grayFrame = cv2.cvtColor(maskedFrame, cv2.COLOR_BGR2GRAY)
gaussBlurFrame = cv2.GaussianBlur(grayFrame, (5, 5), 0)
edgedFrame = cv2.Canny(gaussBlurFrame, 100, 200)

# Coordinates of each white pixel that makes up the edges
ans = []
for y in range(edgedFrame.shape[0]):
    for x in range(edgedFrame.shape[1]):
        if edgedFrame[y, x] != 0:
            ans.append([x, y])
ans = np.array(ans)
#print(ans.shape)
#print(ans[0:100, :])

cv2.imshow("edged", edgedFrame)
cv2.waitKey(0)
cv2.destroyAllWindows()
As you can see, I have successfully done step (2), getting the coordinates of each white pixel that makes up the edges. For the next step, step (3), I am stuck. I have tried the code here, but I get an error that says 'ValueError: too many values to unpack (expected 2)'.
Please help me find good vertices for constructing a polygon that follows the yellow lines as closely as possible.
I have split the answer into two parts:
Part 1: Finding the good vertices to construct a polygon
The required vertices around an image containing edges can be found using OpenCV's built-in cv2.findContours() function. In OpenCV 3.x it returns three values: the image, the contours (each an array of vertex coordinates), and the hierarchy of the contours. In OpenCV 2.x and 4.x it returns only the contours and the hierarchy, which is why unpacking the wrong number of return values raises 'ValueError: too many values to unpack'.
One can find vertices of contours in two ways:
cv2.CHAIN_APPROX_NONE stores ALL the coordinates (boundary points) of each contour.
cv2.CHAIN_APPROX_SIMPLE stores ONLY the coordinates necessary to represent each contour; redundant points along straight segments are dropped, keeping just the points that best represent the contour.
In your case, option 2 is the better choice. After finding the edges you can do the following:
image, contours, hier = cv2.findContours(edgedFrame, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# OpenCV 2.x/4.x return two values:
# contours, hier = cv2.findContours(edgedFrame, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours now holds the vertices of every contour found in edgedFrame.
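For instance, to inspect what comes back (a small illustrative sketch):

print(len(contours))        # number of contours found
print(contours[0].shape)    # (N, 1, 2): N vertices, each an (x, y) pair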
Part 2: Constructing the polygon
OpenCV has a built-in function for this as well: cv2.convexHull(). After finding the hull points you can draw them using cv2.drawContours().
for cnt in contours:
    hull = cv2.convexHull(cnt)
    cv2.drawContours(frame, [hull], -1, (0, 255, 0), 2)

cv2.imshow("Polygon", frame)
cv2.waitKey(0)
You can obtain a better approximation of the desired edge by doing some more pre-processing while creating the mask.
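For example, a morphological close can seal small gaps in the mask before edge detection. A minimal sketch, reusing yellowMask from the question's code (the kernel size is an assumption to tune):

kernel = np.ones((5, 5), np.uint8)  # assumed starting size; tune for your image
closedMask = cv2.morphologyEx(yellowMask, cv2.MORPH_CLOSE, kernel)  # fill small holes and gaps
maskedFrame = cv2.bitwise_and(denoisedFrame, denoisedFrame, mask=closedMask)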
Related
I am processing binary images, and was previously using this code to find the largest area in the binary image:
# Use the hue value to convert to binary
thresh = 20
thresh, thresh_img = cv2.threshold(h, thresh, 255, cv2.THRESH_BINARY)
cv2.imshow('thresh', thresh_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Finding Contours
# Use a copy of the image since findContours alters the image
contours, _ = cv2.findContours(thresh_img.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#Extract the largest area
c = max(contours, key=cv2.contourArea)
This code isn't really doing what I need it to do; I now think it would be better to extract the most central area in the binary image.
[Image: binary input]
[Image: largest area currently extracted]
This is currently what the code is extracting, but I am hoping to extract the central circle in the first binary image instead.
OpenCV comes with a point-polygon test function (for contours). It even gives a signed distance, if you ask for that.
I'll find the contour that is closest to the center of the picture. That may be a contour actually overlapping the center of the picture.
Timings, on my quadcore from 2012, give or take a millisecond:
findContours: ~1 millisecond
all pointPolygonTests and argmax: ~1 millisecond
import cv2 as cv
import numpy as np

mask = cv.imread("fkljm.png", cv.IMREAD_GRAYSCALE)
(height, width) = mask.shape

ret, mask = cv.threshold(mask, 128, 255, cv.THRESH_BINARY) # required because the sample picture isn't exactly clean

# get contours; retrieval modes are mutually exclusive, so pick one (RETR_LIST retrieves all contours)
contours, hierarchy = cv.findContours(mask, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)

center = (np.array([width, height]) - 1) / 2

# find contour closest to center of picture
distances = [
    cv.pointPolygonTest(contour, tuple(center), True) # looking for most positive (inside); negative is outside
    for contour in contours
]
iclosest = np.argmax(distances)
print("closest contour is", iclosest, "with distance", distances[iclosest])

# draw closest contour
canvas = cv.cvtColor(mask, cv.COLOR_GRAY2BGR)
cv.drawContours(image=canvas, contours=[contours[iclosest]], contourIdx=-1, color=(0, 255, 0), thickness=5)
closest contour is 45 with distance 65.19202405202648
a cv.floodFill() on the center point can also quickly yield a labeling of that blob... assuming the mask is positive there. Otherwise, a search is needed.
(cx, cy) = center.astype(int)
assert mask[cy, cx], "floodFill not applicable"

# trying cv.floodFill on the image center
mask2 = mask >> 1 # turns everything else gray
cv.floodFill(image=mask2, mask=None, seedPoint=(cx, cy), newVal=255)

# use (mask2 == 255) to identify that blob
This also takes less than a millisecond.
Some practically faster approaches might involve a pyramid scheme (low-res versions of the mask) to quickly identify areas of the picture that are candidates for an exact test (distance/intersection); a rough sketch of steps 2 and 3 follows the list.

1. Test the target pixel. Hit (positive)? Done.
2. Calculate a low-res mask. Per block, if any pixel is positive, the block is positive.
3. Find the positive blocks, sort them by distance, and examine more closely all those within sqrt(2) * blocksize of the best distance.
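A minimal sketch of that idea, assuming an 8-pixel block size and reusing mask, height, width, and np from the code above (the block size and all names are assumptions):

B = 8                                    # assumed block size
H, W = height // B, width // B
# a block is positive if any pixel inside it is positive
blocks = mask[:H*B, :W*B].reshape(H, B, W, B).max(axis=(1, 3)) > 0
cb = (np.array([W, H]) - 1) / 2          # picture center, in block coordinates
ys, xs = np.nonzero(blocks)
d = np.hypot(xs - cb[0], ys - cb[1])     # block distances to the center
# candidate blocks for the exact test: within sqrt(2) blocks of the best distance
candidates = [(xs[i], ys[i]) for i in np.argsort(d) if d[i] <= d.min() + np.sqrt(2)]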
There are several ways to define "most central." I chose to define it as the region with the smallest distance to the point you're searching for. If the point is inside the region, then that distance will be zero.
I also chose to do this with a pixel-based approach rather than a polygon-based approach, like you're doing with findContours().
Here's a step-by-step breakdown of what this code is doing.
1. Load the image, convert it to grayscale, and threshold it. You're already doing these things.
2. Identify the connected components of the image. Connected components are groups of white pixels which are directly connected to other white pixels. This breaks the image up into regions.
3. Using np.argwhere(), convert each region's true/false mask into an array of coordinates.
4. For each coordinate, compute the Euclidean distance between that point and search_point.
5. Find the minimum distance within each region.
6. Across all regions, find the smallest distance.
import cv2
import numpy as np

img = cv2.imread('test197_img.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh_img = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

n_groups, comp_grouped = cv2.connectedComponents(thresh_img)

components = []
search_point = [600, 150]
for i in range(1, n_groups):
    mask = (comp_grouped == i)
    # argwhere returns (row, col) = (y, x); reverse each pair to get (x, y)
    component_coords = np.argwhere(mask)[:, ::-1]
    min_distance = np.sqrt(((component_coords - search_point) ** 2).sum(axis=1)).min()
    components.append({
        'mask': mask,
        'min_distance': min_distance,
    })

closest = min(components, key=lambda x: x['min_distance'])['mask']
Output: [image of the extracted central component]
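To view the chosen component as an image, one could add the following (a small sketch, not part of the original answer):

out = np.zeros_like(thresh_img)
out[closest] = 255                   # paint only the selected region
cv2.imshow('closest component', out)
cv2.waitKey(0)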
UPDATE: it seems like the ratio between the distances I calculate myself and the distances returned by cv2 is exactly 256. This is not surprising, since looking at their code (line 394 here) shows the distances in pixels get multiplied by 256: the depth is stored as a fixed-point value with 8 fractional bits, so dividing by 256.0 recovers the distance in pixels.
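With the defects array from the code below, that conversion is one line (a sketch using the same names):

depths_px = defects[:, 0, 3] / 256.0   # fixed-point depths converted to pixels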
I am using cv2.convexityDefects to find, well, the convex defects of some shape. I wish to compute the depths (distances to the convex hull, as nicely explained here) of the defects.
However, the distances I get are way too large to make sense. I additionally tried to compute the distances manually, using the start and end point output arguments of cv2.convexityDefects (see code below), and get a more reasonable result.
The distances calculated by cv2.convexityDefects are,
>> d
array([21315, 26578, 19093, 56472, 35230, 20476, 26825], dtype=int32)
which make no sense at all in the context of the image, which is ~500 pixels wide. What am I doing wrong?
More info:
This is an image (ehhr I mean work of art) I created for this test,
Here is the code:
import cv2
import numpy as np
from numpy.linalg import norm
from matplotlib import pyplot as pp

# Load an image
img = cv2.imread('buff.png')

# Threshold
ret, img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Detect edges
edges = cv2.Canny(img, 1, 2)

# Find contours
cnts, h = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

# Find convex hull
hull = cv2.convexHull(cnts[0], returnPoints=False)

# Find defects
defects = cv2.convexityDefects(cnts[0], np.sort(np.squeeze(hull)))

# Reshape the output
s = np.reshape(defects, (-1, 4))[:, 0]  # start point idx
e = np.reshape(defects, (-1, 4))[:, 1]  # end point idx
f = np.reshape(defects, (-1, 4))[:, 2]  # defect idx
d = np.reshape(defects, (-1, 4))[:, 3]  # distances as calculated by cv2.convexityDefects

# Draw contours in red, convex hull in blue
img = cv2.drawContours(img, cnts[0], -1, (255, 0, 0), 2)
img = cv2.polylines(img, [cnts[0][np.squeeze(hull)]], True, (0, 0, 255), 3)

# Calculate distances manually and put in a list d2
d2 = list()
for i in range(len(f)):
    # Draw the defects in blue, start points in pink:
    img = cv2.circle(img, tuple(cnts[0][f[i]][0]), 5, (0, 0, 255), -1)
    img = cv2.circle(img, tuple(cnts[0][s[i]][0]), 5, (255, 0, 255), -1)
    # Manually calculate each distance as the distance between the defect (blue point)
    # and the line defined by its pair of start and end points (pink here, for demonstration)
    d2.append(norm(np.cross(cnts[0][s[i]][0] - cnts[0][e[i]][0],
                            cnts[0][e[i]][0] - cnts[0][f[i]][0]))
              / norm(cnts[0][s[i]][0] - cnts[0][e[i]][0]))

pp.imshow(img)
pp.show()
Here is the output of the above code, when applied on the test image:
I am trying to write a script to calculate the angle between two bones given an x-ray.
A sample x-ray would look like the following:
I am trying to calculate the midline of each bone, essentially a line following the midpoints of the two sides of a bone, and then compare the angle between the two midlines.
I have tried using OpenCV to get the outline of the bones, but it does not seem accurate enough and picks up lots of extra data. I am stuck on how to proceed and on how I would calculate the midline. I am quite new to image processing but have experience with Python.
Getting edges using OpenCV results:
Code for OpenCV:
import cv2

# Load the image
img = cv2.imread("xray-3.jpg")

# Find the contours
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(imgray, 60, 200)
im2, contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Note: OpenCV 2.x/4.x return only (contours, hierarchy)
hierarchy = hierarchy[0]  # get the actual inner list of hierarchy descriptions

# Draw all the contours
cv2.drawContours(img, contours, -1, (0, 255, 0), 3)

# Finally show the image
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
If it's not cheating, I'd recommend cropping the image to exclude as much of the labels and scales as possible without removing any areas of interest.
That being said, I think your method of getting the contours will be usable if you do some preprocessing on the image. One algorithm that might do the trick is a Difference of Gaussians (DoG) filter, which will bring out the edges a little more. I slightly modified this code, which computes the DoG filter for a few different sigma and k values.
from skimage import io, feature, color, filters, img_as_float
from matplotlib import pyplot as plt

raw_img = io.imread('xray-3.jpg')
original_image = img_as_float(raw_img)
img = color.rgb2gray(original_image)

k = 1.6

plt.subplot(2, 3, 1)
plt.imshow(original_image)
plt.title('Original Image')

for idx, sigma in enumerate([4.0, 8.0, 16.0, 32.0]):
    s1 = filters.gaussian(img, k * sigma)
    s2 = filters.gaussian(img, sigma)

    # difference of gaussians (can be multiplied by sigma for scale invariance)
    dog = s1 - s2
    plt.subplot(2, 3, idx + 2)
    print("min: {} max: {}".format(dog.min(), dog.max()))
    plt.imshow(dog, cmap='RdBu')
    plt.title('DoG with sigma=' + str(sigma) + ', k=' + str(k))

ax = plt.subplot(2, 3, 6)
blobs_dog = [(x[0], x[1], x[2]) for x in feature.blob_dog(img, min_sigma=4, max_sigma=32, threshold=0.5, overlap=1.0)]
# skimage has a bug in my version where only maxima were returned by the above
blobs_dog += [(x[0], x[1], x[2]) for x in feature.blob_dog(-img, min_sigma=4, max_sigma=32, threshold=0.5, overlap=1.0)]

# remove duplicates
blobs_dog = set(blobs_dog)

img_blobs = color.gray2rgb(img)
for blob in blobs_dog:
    y, x, r = blob
    c = plt.Circle((x, y), r, color='red', linewidth=2, fill=False)
    ax.add_patch(c)
plt.imshow(img_blobs)
plt.title('Detected DoG Maxima')

plt.show()
At first glance, it appears that sigma=8.0, k=1.6 might be your best bet, as this seems to best exaggerate the edges of the lower leg while getting rid of the noise across it, particularly over the subject's left (image right) leg. Give your edge detection another go, play around with k and sigma, and let me know what you get :)
If the results look good, you should be able to get a center point between the detected edges for each leg in every row of the image, then find the line of best fit through those midpoints (a rough sketch follows) and you should be good to go. You will also need to isolate one leg from the other, so again, if it's not cheating, maybe crop the image down the middle into two images.
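As a sketch of that midline idea, assuming edges is a Canny edge image for a crop containing a single leg (the names, and fitting the line as x = m*y + b, are assumptions):

import numpy as np

mid_xs, mid_ys = [], []
for y in range(edges.shape[0]):
    xs = np.nonzero(edges[y])[0]               # column indices of edge pixels in this row
    if len(xs) >= 2:
        mid_xs.append((xs[0] + xs[-1]) / 2.0)  # midpoint between the outermost edges
        mid_ys.append(y)

m, b = np.polyfit(mid_ys, mid_xs, 1)           # best-fit midline, x = m*y + b
angle_deg = np.degrees(np.arctan(m))           # midline tilt away from vertical, in degrees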
I am writing code in Python 2.7.12 using OpenCV '2.4.9.1'.
I have a 2d numpy array containing values in range [0,255].
My aim is to find the largest region containing values in the range [x, y].
I found
How to use python OpenCV to find largest connected component in a single channel image that matches a specific value? to be pretty well explained.
The only catch is that it is meant for OpenCV 3.
I can try to write a function of this type:
[pseudo code]
def get_component(x, y, list):
    append (x, y) to list
    visited[x][y] = 1
    if x+1 < m and visited[x+1][y] == 0:
        get_component(x+1, y, list)
    if y+1 < n and visited[x][y+1] == 0:
        get_component(x, y+1, list)
    if x+1 < m and y+1 < n and visited[x+1][y+1] == 0:
        get_component(x+1, y+1, list)
    return

MAIN
biggest_component = NULL
biggest_component_size = 0
low = lowest_value_in_user_input_range
high = highest_value_in_user_input_range
matrix a = gray image of size m x n
matrix visited = all values '0', of size m x n
for x in range(m):
    for y in range(n):
        list = NULL
        if low <= a[x][y] <= high and visited[x][y] == 0:
            get_component(x, y, list)
            if list.size > biggest_component_size:
                biggest_component = list

Get the maximum x, maximum y, minimum x, and minimum y from the above list containing the coordinates of every point of the largest component, to make the rectangle R.
Mission accomplished!
[/pseudo code]
Such an approach will not be efficient, I think.
Can you suggest functions for doing the same with my setup ?
Thanks.
Happy to see my answer linked! Indeed, connectedComponentsWithStats() and even connectedComponents() are OpenCV 3+ functions, so you can't use them. Instead, the easy thing to do is just use findContours().
You can calculate moments() of each contour, and included in the moments is the area of the contour.
Important note: The OpenCV function findContours() uses 8-way connectivity, not 4-way (i.e. it also checks diagonal connectivity, not just up, down, left, right). If you need 4-way, you'd need to use a different approach. Let me know if that's the case and I can update.
In the spirit of the other post, here's the general approach:

1. Binarize your image with the thresholds you're interested in.
2. Run cv2.findContours() to get the contour of each distinct component in the image.
3. For each contour, calculate the cv2.moments() of the contour and keep the maximum-area contour (m00 in the dict returned from moments() is the area of the contour).
4. Either keep the contour as a list of points if that's what you need, or draw it on a new blank image if you want it as a mask.
I lack creativity today, so you get the cameraman as our example image as you didn't provide one.
import cv2
import numpy as np
img = cv2.imread('cameraman.png', cv2.IMREAD_GRAYSCALE)
Now, let's binarize to get some separated blobs:
bin_img = cv2.inRange(img, 50, 80)
Now let's find the contours.
contours = cv2.findContours(bin_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
# OpenCV 3.x returns (image, contours, hierarchy), so there use:
# contours = cv2.findContours(bin_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[1]
Now for the main bit: looping through the contours and finding the largest one:
max_area = 0
max_contour_index = 0
for i, contour in enumerate(contours):
    contour_area = cv2.moments(contour)['m00']
    if contour_area > max_area:
        max_area = contour_area
        max_contour_index = i
So now we have an index max_contour_index of the largest contour by area, so you can access the largest contour directly just by doing contours[max_contour_index]. You could of course just sort the contours list by the contour area and grab the first (or last, depending on sort order). If you want to make a mask of the one component, you can use
cv2.drawContours(new_blank_image, contours, max_contour_index, color=255, thickness=-1)
Note the -1 will fill the contour as opposed to outlining it. Here's an example drawing the contour over the original image:
Looks about right.
All in one function:
def largest_component_mask(bin_img):
"""Finds the largest component in a binary image and returns the component as a mask."""
contours = cv2.findContours(bin_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
# should be [1] if OpenCV 3+
max_area = 0
max_contour_index = 0
for i, contour in enumerate(contours):
contour_area = cv2.moments(contour)['m00']
if contour_area > max_area:
max_area = contour_area
max_contour_index = i
labeled_img = np.zeros(bin_img.shape, dtype=np.uint8)
cv2.drawContours(labeled_img, contours, max_contour_index, color=255, thickness=-1)
return labeled_img
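A quick usage example for the function above, tying the walkthrough together (the file name and threshold range repeat the assumptions made earlier):

img = cv2.imread('cameraman.png', cv2.IMREAD_GRAYSCALE)
bin_img = cv2.inRange(img, 50, 80)        # binarize with the range of interest
mask = largest_component_mask(bin_img)
cv2.imshow('largest component', mask)
cv2.waitKey(0)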
I'm trying to build a simple image-analyzing tool that will find items that fit in a color range and then find the centers of said objects.
As an example, after masking, I'm analyzing an image like this:
What I'm doing so far code-wise is rather simple:
import cv2
import numpy
bound = 30
inc = numpy.array([225,225,225])
lower = inc - bound
upper = inc + bound
img = cv2.imread("input.tiff")
cv2.imshow("Original", img)
mask = cv2.inRange(img, lower, upper)
cv2.imshow("Range", mask)
contours = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_TC89_L1)
print contours
This, however, gives me a countless number of contours. I'm somewhat at a loss while reading the corresponding manpage. Can I use moments to reasonably analyze the contours? Are contours even the right tool for this?
I found this question, which vaguely covers finding the center of one object, but how would I modify this approach when there are multiple items?
How do I find the centers of the objects in the image? For example, in the above sample image I'm looking to find three points (the centers of the rectangle and the two circles).
Try print len(contours). That will print the number of contours found, which should be much closer to the answer you expect. The output you see is the full representation of the contours, which can be thousands of points.
Try this code:
import cv2
import numpy

img = cv2.imread('inp.png', 0)
_, contours, _ = cv2.findContours(img.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_TC89_L1)
print len(contours)

centres = []
for i in range(len(contours)):
    moments = cv2.moments(contours[i])
    centres.append((int(moments['m10'] / moments['m00']), int(moments['m01'] / moments['m00'])))
    cv2.circle(img, centres[-1], 3, (0, 0, 0), -1)

print centres
cv2.imshow('image', img)
cv2.imwrite('output.png', img)
cv2.waitKey(0)
This gives me 4 centres:
[(474, 411), (96, 345), (58, 214), (396, 145)]
The obvious thing to do here is to also check the area of each contour; if it is too small as a percentage of the image, don't count it as a real contour, it will just be noise. Just add something like this to the top of the for loop:

    if cv2.contourArea(contours[i]) < 100:
        continue
For the return values from findContours: the first value is not present in the C++ version of OpenCV (which is what I use); in OpenCV 3's Python bindings it is simply the image passed back. The second value is obviously just the contours (an array of arrays) and the third value is a hierarchy holding information on the nesting of contours, which can be very handy.
You can use OpenCV's minEnclosingCircle() function on your contours to get the center of each object.
Check out this example; it's in C++, but you can adapt the logic.
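A rough Python adaptation of that idea, reusing the contours list and the area filter from the answer above (a sketch, not the linked example verbatim):

for cnt in contours:
    if cv2.contourArea(cnt) < 100:              # skip tiny noise contours, as suggested above
        continue
    (x, y), radius = cv2.minEnclosingCircle(cnt)
    print (int(x), int(y))                      # center of the object's enclosing circle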