Finding each centroid of multiple connected objects - python

I am SUPER new to Python coding and would like some help. I was able to segment each cell outline within a biological tissue (super cool!) and now I am trying to find the centroid of each cell within the tissue.
This is the code I am using:
import cv2
import imutils

img = cv2.imread('/Users/kate/Desktop/SegmenterTest/SegmentedCells/Seg1.png')
image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(image, 60, 255, cv2.THRESH_BINARY)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
# loop over the contours
for c in cnts:
    # compute the center of the contour
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    # draw the contour and center of the shape on the image
    cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
    cv2.circle(image, (cX, cY), 7, (255, 255, 255), -1)
    cv2.putText(image, "center", (cX - 20, cY - 20),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
# show the image
cv2.imshow("Image", image)
cv2.waitKey(0)
However, when I use this code, it gives me the centroid of the ENTIRE object, not a centroid for each individual cell.
I have no idea where to go from here, so a nudge in the right direction would be greatly appreciated!

In your case you can use the regionprops function from scikit-image's measure module. Here is what I got.
This is the code I used.
import cv2
import matplotlib.pyplot as plt
from skimage import measure
import numpy as np

cells = cv2.imread('cells.png', 0)
ret, thresh = cv2.threshold(cells, 20, 255, cv2.THRESH_BINARY_INV)
labels = measure.label(thresh, background=0)
bg_label = labels[0, 0]
labels[labels == bg_label] = 0  # Assign background label to 0
props = measure.regionprops(labels)
fig, ax = plt.subplots(1, 1)
plt.axis('off')
ax.imshow(cells, cmap='gray')
centroids = np.zeros(shape=(len(np.unique(labels)), 2))  # Access the coordinates of centroids
for i, prop in enumerate(props):
    my_centroid = prop.centroid
    centroids[i, :] = my_centroid
    ax.plot(my_centroid[1], my_centroid[0], 'r.')
# print(centroids)
# fig.savefig('out.png', bbox_inches='tight', pad_inches=0)
plt.show()
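Note that regionprops returns each centroid in (row, column) order, which is why the plot call above uses my_centroid[1] as x and my_centroid[0] as y (also, the centroids array ends up with one extra all-zero row because np.unique counts the background label as well). If you later need the points as (x, y) pairs, a minimal conversion reusing the centroids array from above could be:
xy_centroids = centroids[:, ::-1]  # swap columns so each row is (x, y) instead of (row, col)
print(xy_centroids)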
Good luck with your research!

Problem
cv2.findContours supports several different 'retrieval modes', which affect which contours are returned and how they are returned. This is documented here. The retrieval mode is given as the second argument to findContours. Your code uses cv2.RETR_EXTERNAL, which means findContours returns only the outermost borders of separate objects.
Solution
Changing this argument to cv2.RETR_LIST will give you all the contours in the image (including the outermost border). This is the simplest solution.
E.g.
import cv2
import imutils

img = cv2.imread('/Users/kate/Desktop/SegmenterTest/SegmentedCells/Seg1.png')
image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(image, 60, 255, cv2.THRESH_BINARY)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
# loop over the contours
for c in cnts:
    # compute the center of the contour
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    # draw the contour and center of the shape on the image
    cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
    cv2.circle(image, (cX, cY), 7, (255, 255, 255), -1)
    cv2.putText(image, "center", (cX - 20, cY - 20),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
# show the image
cv2.imshow("Image", image)
cv2.waitKey(0)
Selecting only the innermost objects
To reliably omit the outer contours you can take advantage of the ability of findContours to return a hierarchy of the contours it detects. To do this, you can change the retrieval mode argument once again to RETR_TREE, which will generate a full hierarchy.
The hierarchy is an array containing arrays of 4 values for each contour in the image. Each value is an index of a contour in the contour array. From the docs:
For each i-th contour contours[i], the elements hierarchy[i][0] ,
hierarchy[i][1] , hierarchy[i][2], and hierarchy[i][3] are set to
0-based indices in contours of the next and previous contours at the
same hierarchical level, the first child contour and the parent
contour, respectively. If for the contour i there are no next,
previous, parent, or nested contours, the corresponding elements of
hierarchy[i] will be negative.
When we say 'innermost', what we mean is contours that have no children (no contours inside of them). So we want those contours whose hierarchy entry has a negative third value; that is, contours[i] such that hierarchy[i][2] < 0.
A small wrinkle is that although findContours returns a tuple which includes the hierarchy, imutils.grab_contours discards the hierarchy and returns just the array of contours. All this means is that we have to do the work of grab_contours ourselves if we intend to work with different versions of OpenCV. This is just a simple if/else statement.
res = cv2.findContours(thresh.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# switch for different versions of OpenCV
if len(res) == 3:
    _, cnts, hierarchy = res
else:
    cnts, hierarchy = res
Once you have hierarchy, checking whether a contour cnts[i] is 'innermost' can be done with hierarchy[0][i][2] < 0, which is False for contours that contain other contours.
A full example based on your question's code:
import cv2
import imutils

img = cv2.imread('/Users/kate/Desktop/SegmenterTest/SegmentedCells/Seg1.png')
image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(image, 60, 255, cv2.THRESH_BINARY)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# switch for different versions of OpenCV
if len(cnts) == 3:
    _, cnts, hierarchy = cnts
else:
    cnts, hierarchy = cnts
# loop over the contours
for i, c in enumerate(cnts):
    # check that it is 'innermost'
    if hierarchy[0][i][2] < 0:
        # compute the center of the contour
        M = cv2.moments(c)
        cX = int(M["m10"] / M["m00"])
        cY = int(M["m01"] / M["m00"])
        # draw the contour and center of the shape on the image
        cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
        cv2.circle(image, (cX, cY), 7, (255, 255, 255), -1)
        cv2.putText(image, "center", (cX - 20, cY - 20),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
# show the image
cv2.imshow("Image", image)
cv2.waitKey(0)
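One caveat worth guarding against: cv2.moments returns m00 == 0 for degenerate contours (single points or one-pixel-wide lines), and the division above would then raise a ZeroDivisionError. A small guard you could drop into the loop, as a sketch:
M = cv2.moments(c)
if M["m00"] == 0:
    continue  # skip degenerate contours with zero area
cX = int(M["m10"] / M["m00"])
cY = int(M["m01"] / M["m00"])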

Related

Filling the outside of a selected convex hull with black?

I created a convex hull of a hand from an X-ray. I did this by first creating contours along the edges of the hand. I want to blacken the region outside the convex hull using OpenCV in Python. How do I approach this?
The code below creates the convex hull after doing contours:
import cv2
import numpy as np

img_path = 'sample_image.png'
# Getting the threshold of the image:
image = cv2.imread(img_path)
original = image.copy()
blank = np.zeros(image.shape[:2], dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.threshold(blur, 140, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Drawing the contours along the edges of the hand in the X-ray:
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE,
                                       cv2.CHAIN_APPROX_SIMPLE)
contours = max(contours, key=lambda x: cv2.contourArea(x))
cv2.drawContours(image, [contours], -1, (255, 255, 0), 2)
# Drawing the hull along the digits along the edges:
hull = cv2.convexHull(contours)
cv2.drawContours(image, [hull], -1, (0, 255, 255), 2)
Input image: Sample_image.png
Result image: Output from the code
Update
The answer given by @AKX below proposed drawing the hull as a filled white polygon onto a new blank image and then using bitwise_and. The resulting code:
# Merge text into a single contour
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
dilate = cv2.dilate(thresh, kernel, iterations = 2)
contours, hierarchy = cv2.findContours(dilate, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = max(contours, key = lambda x: cv2.contourArea(x))
cv2.drawContours(image, [contours], -1, (255,255,0), 2)
plt.imshow(image)
# Created a new mask and used bitwise_and to select for contours:
hull = cv2.convexHull(contours)
mask = np.zeros_like(image)
cv2.drawContours(mask, [hull], -1, (255, 255, 255), -1)
masked_image = cv2.bitwise_and(image, mask)
plt.imshow(masked_image)
Masking
Instead of drawing the hull contour as a line, draw it as a filled white polygon onto a blank image, then bitwise_and it with the original so that everything outside the polygon is blanked out.
mask = np.zeros_like(image)
cv2.drawContours(mask, [hull], -1, (255, 255, 255), -1)
masked_image = cv2.bitwise_and(image, mask)
cv2.imwrite(r'dog_masked.jpg', masked_image)
input image
(no idea why I called it dog.jpg, it doesn't look like a dog)
result
(the line below the dog remains since it's inside the hull created by the dog's feet)
Cropping
To crop the image to only include the hull's area, use cv2.boundingRect and slice the image array:
x, y, w, h = cv2.boundingRect(hull)
cropped = image[y:y + h, x:x + w]
cv2.imwrite(r'dog_crop.jpg', cropped)
Result
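If you also want the blacked-out version cropped to the same region, the same bounding rectangle can be sliced out of masked_image; the output file name here is just an example:
x, y, w, h = cv2.boundingRect(hull)
cropped_masked = masked_image[y:y + h, x:x + w]
cv2.imwrite(r'dog_masked_crop.jpg', cropped_masked)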

Increase contour detection accuracy of chess board squares using OpenCV in Python

I wanted to detect the contours of the black squares of a chess board from the following image.
The following code detects only a few of the black squares successfully; how can we increase the accuracy?
import cv2
import numpy as np

imPath = r" "  # <----- image path

def imageResize(orgImage, resizeFact):
    dim = (int(orgImage.shape[1]*resizeFact),
           int(orgImage.shape[0]*resizeFact))  # w, h
    return cv2.resize(orgImage, dim, interpolation=cv2.INTER_AREA)

img = imageResize(cv2.imread(imPath), 0.5)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.inRange(gray, 135, 155)  # to pick only black squares
# find contours
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cntImg = img.copy()
minArea, maxArea = 3000, 3500
valid_cnts = []
for c in cnts:
    area = cv2.contourArea(c)
    if area > minArea and area < maxArea:
        valid_cnts.append(c)
        # draw centers for troubleshooting
        M = cv2.moments(c)
        cX = int(M["m10"] / M["m00"])
        cY = int(M["m01"] / M["m00"])
        cv2.circle(cntImg, (cX, cY), 5, (0, 0, 255), -1)
cv2.drawContours(cntImg, valid_cnts, -1, (0, 255, 0), 2)
cv2.imshow('org', img)
cv2.imshow('threshold', thresh)
cv2.imshow('contour', cntImg)
cv2.waitKey(0)
cv2.destroyAllWindows()
Gives threshold and contour -
[0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 7.0, 7.5, 9.5, 3248.5, 3249.0, 6498.0] are the unique contour areas. Typical areas for the desired black squares are 3248.5 and 3249.0; here's a quick snippet for getting the unique contour areas -
cntAreas = [cv2.contourArea(x) for x in cnts]
print(sorted(set(cntAreas)))
Highly appreciate any help!!
The problem was due to gaps in the Canny edges, which were caused by noise in the grayscale image. Using a dilate morphological operation reduces the noise and gives well-connected Canny edges that form closed contours.
Full code -
import cv2
import numpy as np

imPath = r" "  # <----- image path

def imageResize(orgImage, resizeFact):
    dim = (int(orgImage.shape[1]*resizeFact),
           int(orgImage.shape[0]*resizeFact))  # w, h
    return cv2.resize(orgImage, dim, interpolation=cv2.INTER_AREA)

img = imageResize(cv2.imread(imPath), 0.5)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))  # <-----
morphed = cv2.dilate(gray, kernel, iterations=1)
thresh = cv2.inRange(morphed, 135, 155)  # to pick only black squares
# find canny edge
edged_wide = cv2.Canny(thresh, 10, 200, apertureSize=3)
cv2.waitKey(0)
# find contours
contours, hierarchy = cv2.findContours(
    thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)  # cv2.CHAIN_APPROX_NONE stores all coords unlike SIMPLE, cv2.RETR_EXTERNAL
cntImg = img.copy()
minArea, maxArea = 2000, 4000
valid_cnts = []
for c in contours:
    area = cv2.contourArea(c)
    if area > minArea and area < maxArea:
        valid_cnts.append(c)
        # draw centers
        M = cv2.moments(c)
        cX = int(M["m10"] / M["m00"])
        cY = int(M["m01"] / M["m00"])
        cv2.circle(cntImg, (cX, cY), 5, (0, 0, 255), -1)
cv2.drawContours(cntImg, valid_cnts, -1, (0, 255, 0), 2)
cv2.imshow('threshold', thresh)
cv2.imshow('morphed', morphed)
cv2.imshow('canny edge', edged_wide)
cv2.imshow('contour', cntImg)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here's the contour plot -
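As a quick sanity check (building on the area-printing snippet from the question), you can confirm that every dark square was picked up; a standard chess board has 32 of them:
print(len(valid_cnts))  # should be 32 if all dark squares are visible and detected
print(sorted(cv2.contourArea(c) for c in valid_cnts))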

How to obtain and store centroid coordinates of rooms of a floor plan image?

I have a floor plan image which consists of multiple rooms. Using Python, I want to find the centers of each room and store the coordinates in the form of (x, y) so that I can use them further for mathematical calculations. The existing drawContours and findContours functions help in determining the contours, but how can I store the values obtained in a list?
The image represents a sample floor plan with multiple rooms.
I tried using moments but the function doesn't work properly.
As you can see, this image is obtained from the drawContours function. But then how do I store the x and y coordinates?
Here's my code :
k = []
# Going through every contour found in the image.
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.009 * cv2.arcLength(cnt, True), True)
    # draws boundary of contours.
    cv2.drawContours(img, [approx], -1, (0, 0, 255), 3)
    # Used to flatten the array containing
    # the co-ordinates of the vertices.
    n = approx.ravel()
    i = 0
    x = []
    y = []
    for j in n:
        if (i % 2 == 0):
            x = n[i]
            y = n[i + 1]
            # String containing the co-ordinates.
            string = str(x) + " ," + str(y)
            if (i == 0):
                # text on topmost co-ordinate.
                cv2.putText(img, string, (x, y),
                            font, 0.5, (255, 0, 0))
                k.append(str((x, y)))
            else:
                # text on remaining co-ordinates.
                cv2.putText(img, string, (x, y),
                            font, 0.5, (0, 255, 0))
                k.append(str((x, y)))
        i = i + 1
# Showing the final image.
cv2_imshow(img)
# Exiting the window if 'q' is pressed on the keyboard.
if cv2.waitKey(0) & 0xFF == ord('q'):
    cv2.destroyAllWindows()
Here's a simple approach:
Obtain binary image. Load image, grayscale, and Otsu's threshold.
Remove text. We find contours then filter using contour area to remove contours smaller than some threshold value. We effectively remove these contours by filling them in with cv2.drawContours.
Find rectangular boxes and obtain centroid coordinates. We find contours again then filter using contour area and contour approximation. We then find moments for each contour which gives us the centroid.
Here's a visualization:
Remove text
Result
Coordinates
[(93, 241), (621, 202), (368, 202), (571, 80), (317, 79), (93, 118)]
Code
import cv2
import numpy as np
# Load image, grayscale, Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Remove text
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
area = cv2.contourArea(c)
if area < 1000:
cv2.drawContours(thresh, [c], -1, 0, -1)
thresh = 255 - thresh
result = cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR)
coordinates = []
# Find rectangular boxes and obtain centroid coordinates
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
area = cv2.contourArea(c)
peri = cv2.arcLength(c, True)
approx = cv2.approxPolyDP(c, 0.05 * peri, True)
if len(approx) == 4 and area < 100000:
# cv2.drawContours(result, [c], -1, (36,255,12), 1)
M = cv2.moments(c)
cx = int(M['m10']/M['m00'])
cy = int(M['m01']/M['m00'])
coordinates.append((cx, cy))
cv2.circle(result, (cx, cy), 3, (36,255,12), -1)
cv2.putText(result, '({}, {})'.format(int(cx), int(cy)), (int(cx) -40, int(cy) -10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (36,255,12), 2)
print(coordinates)
cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.imshow('result', result)
cv2.waitKey()
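Since the goal is to use the centroids for further mathematical calculations, it may help to turn the list into a NumPy array once it is filled; the pairwise-distance computation below is just an illustration:
centers = np.array(coordinates, dtype=float)
# pairwise Euclidean distances between room centers
dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
print(dists.round(1))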

How to draw Contours around the circular object and find its ROI using OpenCV Python?

I was trying to draw a contour over a circular object present in the image and find its area and centroid, but I wasn't able to do it because the contour was drawn all over the image, as shown in the figure.
I want to draw contour only over the circular white object as shown in the figure.
Expected Result:
I want to draw the circular contour only over this white object shown in the image, show its centroid and area, and have OpenCV ignore the rest of the image.
Below is the code attached.
Code:
import cv2
import numpy as np
import imutils

cap = cv2.VideoCapture(0)
cap.set(3, 640)
cap.set(4, 480)

while True:
    _, frame = cap.read()
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower_blue = np.array([90, 60, 0])
    upper_blue = np.array([121, 255, 255])
    mask = cv2.inRange(hsv, lower_blue, upper_blue)
    cnts = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    for c in cnts:
        area = cv2.contourArea(c)
        if area > 1500:
            cv2.drawContours(frame, [c], -1, (0, 255, 0), 3)
            M = cv2.moments(c)
            cx = int(M["m10"] / M["m00"])
            cy = int(M["m01"] / M["m00"])
            cv2.circle(frame, (cx, cy), 7, (255, 255, 255), -1)
            cv2.putText(frame, "Centre", (cx - 20, cy - 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
            cv2.imshow("frame", frame)
            print("area is ...", area)
            print("centroid is at ...", cx, cy)
    k = cv2.waitKey(1000)
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
Any help would be appreciated. Thanks in advance.
This can be done in many ways depending on the need. One simple way can be:
Firstly, filter for the ROI. We know three things about it: the ROI is white, it is a circle, and we know its approximate area.
For white color:
def detect_white(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    return thresh
You will need to adapt the white-detection step to your needs, though. Good approaches for white detection include thresholding (like above) or using the HLS colorspace, since whiteness depends closely on lightness (cv2.cvtColor(img, cv2.COLOR_BGR2HLS)), etc.
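For reference, a minimal HLS-based sketch might look like the function below; OpenCV orders the channels as H, L, S with H in [0, 180], and the lower/upper bounds here are just example values you would need to tune for your lighting (this assumes numpy is imported as np, as in your code):
def detect_white_hls(img):
    hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
    lower = np.array([0, 200, 0])     # high lightness...
    upper = np.array([180, 255, 60])  # ...and low saturation roughly means 'white'
    return cv2.inRange(hls, lower, upper)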
So in the code below, first we filter out the ROI by white color, then by the shape and area of those white contours.
image = cv2.imread("path/to/your/image")
white_only = detect_white(image)
contours, hierarchy = cv2.findContours(white_only, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
circles = []
for contour in contours:
    epsilon = 0.01 * cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, epsilon, True)
    area = cv2.contourArea(contour)
    if len(approx) > 12 and area > 1000 and area < 3000:
        circles.append(contour)
# Now the list circles would have all such
# contours satisfying the above conditions.
# If len(circles) != 1 or it isn't the circle you
# desire, more filtering is required.
cv2.drawContours(image, circles, -1, (0, 255, 0), 3)
Choosing the contour relies on the area and the number of vertices (as returned by cv2.approxPolyDP()).
Since in your image the big white circle has a fairly large area and is very close to a circle, I checked for it with len(approx) > 12 and area > 1000 and area < 3000. Tweak this line according to your scenario and tell me if it solves your problem. If it doesn't, we can discuss nicer ways or play with this one to make it more accurate.
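Since you also want the centroid and area of the selected object, here is a small follow-up sketch that reuses the moments pattern from earlier on this page for whatever ends up in circles:
for circle in circles:
    area = cv2.contourArea(circle)
    M = cv2.moments(circle)
    if M["m00"] == 0:
        continue  # skip degenerate contours
    cx = int(M["m10"] / M["m00"])
    cy = int(M["m01"] / M["m00"])
    cv2.circle(image, (cx, cy), 7, (255, 255, 255), -1)
    print("area is ...", area, "centroid is at ...", cx, cy)
cv2.imshow("result", image)
cv2.waitKey(0)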

Displaying recognized shapes with Python opencv

I am still new to opencv, but I found a piece of code that identifies shape contours in an image and indicates their centers. The only issue is that the program displays one contour and one center at a time, and the user has to close the window manually before the center of the next shape, along with the first one, gets displayed.
Is there a way to have a single window indicating all contours and center of shapes at once?
This is quite problematic for me because I'm planning to replace the image with a camera stream later on. So, I would appreciate any additional suggestions for making this code more efficient.
Here's the code (the last 2 lines are the suspects):
import argparse
import imutils
import cv2
image = cv2.imread("shapes3.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY)[1]
# find contours in the thresholded image
cnts = cv2.findContours(thresh.copy(), cv2.RETR_TREE,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if imutils.is_cv2() else cnts[1]
# loop over the contours
for c in cnts:
    print("1")
    # compute the center of the contour
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    # draw the contour and center of the shape on the image
    cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
    cv2.circle(image, (cX, cY), 7, (229, 83, 0), -1)
    cv2.putText(image, "center", (cX - 20, cY - 20),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 59, 174), 2)
    # show the image
    cv2.imshow("Image", image)  # displaying processed image
    cv2.waitKey(0)
Link to the source
(rename it as shapes3.png)
The problem was fixed by displaying the image after the for loop terminates:
# loop over the contours
for c in cnts:
    print("1")
    # compute the center of the contour
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    # draw the contour and center of the shape on the image
    cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
    cv2.circle(image, (cX, cY), 7, (229, 83, 0), -1)
    cv2.putText(image, "center", (cX - 20, cY - 20),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 59, 174), 2)

# show the image
cv2.imshow("Image", image)  # displaying processed image
cv2.waitKey(0)
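Since you mention replacing the image with a camera stream later, here is a rough sketch of how the same per-frame processing could sit inside a cv2.VideoCapture loop (assuming OpenCV 4, where findContours returns two values; the m00 check just skips degenerate contours):
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, image = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    thresh = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY)[1]
    cnts, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    for c in cnts:
        M = cv2.moments(c)
        if M["m00"] == 0:
            continue
        cX = int(M["m10"] / M["m00"])
        cY = int(M["m01"] / M["m00"])
        cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
        cv2.circle(image, (cX, cY), 7, (229, 83, 0), -1)
    cv2.imshow("Image", image)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()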
