OpenCV mean returns 4-element tuple - Python

I have a set of contours defined and filled in OpenCV, and I'm trying to use this as a mask to find the mean intensity in each ROI. I thought I could do this using the cv2.mean function with a defined mask. My code is (im2 is an image read from file):
msk = np.zeros(im2.shape, np.uint8)
im2 = cv2.bilateralFilter(im2, 5, 200, 5)
im2 = cv2.GaussianBlur(im2, (5, 5), 0)
binImg = cv2.adaptiveThreshold(im2, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 55, -5)
contours, heir = cv2.findContours(binImg, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(msk, contours, -1, 255, -1)
print(len(contours))
print(cv2.mean(im2, mask=msk))
This returns:
3361
(155.88012076286788, 0.0, 0.0, 0.0)
I thought that I would get a mean intensity per contour, but it looks like an overall mean intensity for each channel (the image is greyscale). Are my expectations incorrect, or is my code incorrect?

Just to follow up on this (and close it out), I did resolve it by iterating over contours, and using the contours as a mask for the original image. The code is:
msk = np.zeros(im2.shape, np.uint8)
im2 = cv2.bilateralFilter(im2, 5, 200, 5)
im2 = cv2.GaussianBlur(im2, (5, 5), 0)
binImg = cv2.adaptiveThreshold(im2, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 55, -5)
contours, heir = cv2.findContours(binImg, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(msk, contours, -1, 255, -1)
for cnt in contours:
    # Build a per-contour mask from the minimum enclosing circle of the contour
    res = np.zeros(im2.shape, np.uint8)
    (x, y), radius = cv2.minEnclosingCircle(cnt)
    ctr = (int(x), int(y))
    rad = int(radius)
    cv2.circle(res, ctr, rad, 1, -1)
    print("Area: " + str(cv2.contourArea(cnt)), "Mean: " + str(float(cv2.meanStdDev(im2, mask=res)[0])))
It should be noted that I'm using meanStdDev (I did some editing and wanted to return the standard deviation as well) rather than mean, but either should work for finding means. It's still not clear to me why mean seemed to return four results (for four channels?) on a greyscale image in the original example.
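(For reference: cv2.mean wraps cv::mean, which returns a cv::Scalar, and a Scalar always has four elements, so channels the image does not have are reported as 0.0.) A minimal sketch, not from the original post and using dummy data, of the two return shapes on a single-channel image:
import cv2
import numpy as np
# Dummy greyscale image and mask, purely for illustration
gray = np.full((10, 10), 100, np.uint8)
mask = np.zeros((10, 10), np.uint8)
cv2.circle(mask, (5, 5), 3, 255, -1)
print(cv2.mean(gray, mask=mask))             # 4-element scalar, e.g. (100.0, 0.0, 0.0, 0.0)
mean, std = cv2.meanStdDev(gray, mask=mask)
print(mean[0][0], std[0][0])                 # one value per actual channel, e.g. 100.0 0.0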

Related

Wall instance segmentation with cv2.Sobel

I am new to Python. I want to do instance segmentation with Sobel edge detection. The inputs are the original image and a mask image segmented by a U-Net (this network only detects walls; it does not separate instances), and the desired output is an instance-segmented image.
(Images in the original post: input original image, mask image, and the expected output image.)
I wrote some code, but it fails at edge detection. Here is the code:
# Assumed imports (io.imread suggests scikit-image)
import cv2
import numpy as np
from skimage import io
import matplotlib.pyplot as plt

# original image
original_img = io.imread('1.1.png')
# mask
seg_img = io.imread('1.2.png')
# convert mask to HSV
seg_hsv = cv2.cvtColor(seg_img, cv2.COLOR_RGB2HSV)
# create mask
mask = cv2.inRange(seg_hsv, (0, 50, 50), (10, 255, 255))
# extract wall from original image
selected_img = cv2.bitwise_and(original_img, original_img, mask=mask)
grey = cv2.cvtColor(selected_img, cv2.COLOR_RGB2GRAY)
# blur
median_blur = cv2.medianBlur(grey, 9)
# sobel
sobel_x = cv2.Sobel(median_blur, cv2.CV_16S, 1, 0, ksize=-1)
sobel_x_16S = np.absolute(sobel_x)
sobel_x_8U = np.uint8(sobel_x_16S)
sobel_y = cv2.Sobel(median_blur, cv2.CV_16S, 0, 1, ksize=-1)
sobel_y_16S = np.absolute(sobel_y)
sobel_y_8U = np.uint8(sobel_y_16S)
edgexy = np.sqrt(sobel_y ** 2 + sobel_x ** 2)
# find contours
converted = cv2.convertScaleAbs(edgexy)
ret, thresh = cv2.threshold(converted, 10, 20, cv2.THRESH_BINARY_INV)
c, h = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for contour in c:
    cv2.drawContours(selected_img, [contour], -1, (0, 255, 0))
plt.imshow(selected_img, cmap="gray")
plt.show()
I tried Sobel but failed at the edge-detection and contour-finding steps. I also don't want the black boxes and the person to be detected as contours.
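One thing worth checking (a note on the posted code, not a full answer): squaring the CV_16S Sobel outputs with ** 2 keeps the int16 dtype and can overflow, and np.uint8(np.absolute(...)) wraps values above 255. A minimal sketch, under those assumptions, of computing the gradient magnitude in floating point and rescaling it to 8-bit before thresholding (the gradient_magnitude_8u helper name is just an illustration):
import cv2
import numpy as np

def gradient_magnitude_8u(gray):
    # Derivatives as 64-bit floats so squaring and adding cannot overflow
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    # Rescale to 0..255 for cv2.threshold / cv2.findContours
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# e.g. edgexy = gradient_magnitude_8u(median_blur)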

Python OpenCV.fillPoly() not filling my polygons. Why?

Using OpenCV for Python, I am trying to get a mask of the noise elements in an image, to be later used as input for the cv2.inpaint() function.
I am given a greyscale image (2D matrix with values from 0 to 255) in the input_mtx_8u variable, with noise (isolated polygons of very low values).
So far what I did was:
Get the edges where the gradient is above 25:
laplacian = cv2.Laplacian(input_mtx_8u, cv2.CV_8UC1)
lapl_bin, lapl_bin_val = cv2.threshold(laplacian, 25, 255, cv2.THRESH_BINARY)
Get the contours of the artifacts:
contours, _ = cv2.findContours(lapl_bin_val, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
Fill the identified contours:
filled_mtx = input_mtx_8u.copy()
cv2.fillPoly(filled_mtx, contours, (255, 255, 0), 4)
For some reason, my 'filled polygons' are not completely filled (see figure).
What can I be doing wrong?
As pointed out by @fmw42, a solution to get the contours filled is to use drawContours() instead of fillPoly().
The final working code I got is:
import cv2
import numpy as np

# input_mtx_8u = 2D matrix with uint8 values from 0 to 255
laplacian = cv2.Laplacian(input_mtx_8u, cv2.CV_8UC1)
lapl_bin, lapl_bin_val = cv2.threshold(laplacian, 25, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(lapl_bin_val, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
inpaint_mask = np.zeros(input_mtx_8u.shape, dtype=np.uint8)
for contour in contours:
    cv2.drawContours(inpaint_mask, [contour], -1, (255, 0, 0), thickness=-1)
# inpaint_mask can now be used as the mask for cv2.inpaint()
Note that for some reason:
cv2.drawContours(input_mtx_cont, contours, -1, (255, 0, 0), thickness=-1)
does not work. One must loop and draw contour by contour...

How can I get the RGB color values from inside a contour in an image using OpenCV?

I know some similar questions have already been asked here, but they didn't help me solve my problem. I would appreciate any help.
I'm new to OpenCV.
I have an image and apply some code to get contours from it. Now I want to get the RGB color values from the detected contours. How can I do that?
I did some research and found that this could be solved using contours, so I implemented contour detection, and now I finally want to get the color values inside the contours.
Here is my code:
import cv2
import numpy as np

img = cv2.imread('C:/Users/Rizwan/Desktop/example_strip1.jpg')
img_hsv = cv2.cvtColor(255 - img, cv2.COLOR_BGR2HSV)
lower_red = np.array([40, 20, 0])
upper_red = np.array([95, 255, 255])
mask = cv2.inRange(img_hsv, lower_red, upper_red)
contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
color_detected_img = cv2.bitwise_and(img, img, mask=mask)
print(len(contours))
for c in contours:
    area = cv2.contourArea(c)
    x, y, w, h = cv2.boundingRect(c)
    ax = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 0), 2)
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    box = np.int0(box)
    im = cv2.drawContours(color_detected_img, [box], -1, (255, 0, 0), 2)
cv2.imshow("Cropped", color_detected_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
I expect the output should be the RGB values of the detected color inside the contours.
As asked in the comments, here's a possible solution to extract the BGR(!) values from the pixels of an image inside a previously found contour. Properly detecting the desired colored stripes is omitted here, as also discussed in the comments.
Having an image and a filled mask of a contour, for example from cv2.drawContours, we can simply use NumPy's boolean array indexing by converting the (most likely uint8) mask to a bool_ array.
Here's a short code snippet that uses NumPy's savetxt to store all values in a txt file:
import cv2
import numpy as np
# Some dummy image
img = np.zeros((100, 100, 3), np.uint8)
img = cv2.rectangle(img, (0, 0), (49, 99), (255, 0, 0), cv2.FILLED)
img = cv2.rectangle(img, (50, 0), (99, 49), (0, 255, 0), cv2.FILLED)
img = cv2.rectangle(img, (50, 50), (99, 99), (0, 0, 255), cv2.FILLED)
# Mask of some dummy contour
mask = np.zeros((100, 100), np.uint8)
mask = cv2.fillPoly(mask, np.array([[[20, 20], [30, 70], [70, 50], [20, 20]]]), 255)
# Show only for visualization purposes
cv2.imshow('img', img)
cv2.imshow('mask', mask)
# Convert mask to boolean array
mask = np.bool_(mask)
# Use boolean array indexing to get all BGR values from img within mask
values = img[mask]
# For example, save values to txt file
np.savetxt('values.txt', values)
cv2.waitKey(0)
cv2.destroyAllWindows()
The dummy image looks like this:
The dummy contour mask looks like this:
The resulting values.txt has more than 1000 entries; please check yourself. Attention: the values are BGR values; converting the image to RGB beforehand is needed if you want RGB values.
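A minimal follow-up sketch (not part of the original answer), reusing img and the boolean mask from above, for the case where RGB triplets are wanted instead of BGR:
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # swap channel order to RGB
values_rgb = img_rgb[mask]                      # same boolean indexing, now RGB values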
Hope that helps!

Filling contour won't work using drawContours() thickness=-1 [duplicate]

This question already has an answer here:
Contour shows dots rather than a curve when retrieving it from the list, but shows the curve otherwise
(1 answer)
Closed 3 years ago.
I am trying to fill a contour which was obtained by separately thresholding 3 color channels.
image_original = cv2.imread(original_image_path)
image_contours = np.zeros((image_original.shape[0], image_original.shape[1], 1), dtype=np.uint8)
image_contour = np.zeros((image_original.shape[0], image_original.shape[1], 1), dtype=np.uint8)
image_binary = np.zeros((image_original.shape[0], image_original.shape[1], 1), dtype=np.uint8)
image_area = image_original.shape[0] * image_original.shape[1]
for channel in range(image_original.shape[2]):
    ret, image_thresh = cv2.threshold(image_original[:, :, channel], 120, 255, cv2.THRESH_OTSU)
    _, contours, hierarchy = cv2.findContours(image_thresh, 1, 1)
    for index, contour in enumerate(contours):
        if cv2.contourArea(contour) > image_area * background_remove_offset:
            del contours[index]
    cv2.drawContours(image_contours, contours, -1, (255, 255, 255), 3)
_, contours, hierarchy = cv2.findContours(image_contours, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
cv2.drawContours(image_contour, max(contours, key=cv2.contourArea), -1, (255, 255, 255), 1)
cv2.imwrite(output_contour_image_path, image_contour)
cv2.drawContours(image_binary, max(contours, key=cv2.contourArea), -1, (255, 255, 255), thickness=-1)
cv2.imwrite(output_binary_image_path, image_binary)
cv2.imshow("binary", image_binary)
This is supposed to fill the shape when thickness=-1 is set, but it only draws the contour with a thickness of 1, the same as thickness=1, specifically in the following line:
cv2.drawContours(image_binary, max(contours, key = cv2.contourArea), -1, (255, 255, 255), thickness=-1)
The results are as follows; they should be a filled binary image rather than just a contour outline of thickness 1.
Well, solved it. It seems the cv2.drawContours() function needs the contours argument as a list, so just changing the line
cv2.drawContours(image_binary, max(contours, key=cv2.contourArea), -1, 255, thickness=-1)
to
cv2.drawContours(image_binary, [max(contours, key=cv2.contourArea)], -1, 255, thickness=-1)
solves it. (Passing the contour array directly makes drawContours treat each point as a separate contour, which is why only a thin outline of dots is drawn; this is the same issue as in the linked duplicate.)

How to obtain the combined convex hull of multiple separate shapes

I have two shapes (pic 1) and need to find one convexHull of both of them combined (pic 2). More precisely, I am interested in obtaining the external corners (purple circles in pic 2). The shapes are detached. The shape I trace is a square sheet of transparent plastic with two color stripes on the sides. The stripes are very easy to trace (inRange).
One quick and dirty method I am considering is to connect the centers of the stripes with a white line and then obtain the convexHull. I am also thinking of concatenating the lists of vertices of both shapes and obtaining a combined convexHull, but I am not certain whether this method will crash the convexHull function.
Is there a more elegant way to solve this problem?
Please help.
Pic 1
Pic 2
Issue resolved. Works like a charm. Concatenating the points of separate shapes doesn't crash convexHull.
I posted the code on GitHub: https://github.com/wojciechkrukar/OpenCV/blob/master/RectangleDetector/RectangleDetector.ipynb
This is the result:
Here is the most important chunk of code:
_, contours, hier = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# number of contours
length = len(contours)
# concatenate points from all shapes into one array
cont = np.vstack([contours[i] for i in range(length)])
hull = cv2.convexHull(cont)
uni_hull = []
uni_hull.append(hull)  # <- array as first element of list
cv2.drawContours(image, uni_hull, -1, 255, 2)
A second approach computes the convex hull directly from pixel or contour point coordinates rather than from stacked contour arrays; four variants are shown below.
import numpy as np
import cv2

img = cv2.imread('in1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
(_, thresh) = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
thresh = ~thresh

# Variant 1: hull of all non-zero pixel coordinates via np.where on the transposed image
points = np.column_stack(np.where(thresh.transpose() > 0))
hull1 = cv2.convexHull(points)
result1 = cv2.polylines(img.copy(), [hull1], True, (0, 0, 255), 2)
cv2.imshow('result1', result1)

# Variant 2: same idea using np.nonzero, flipping columns into (x, y) order
points2 = np.fliplr(np.transpose(np.nonzero(thresh)))
approx = cv2.convexHull(points2)
result2 = cv2.polylines(img.copy(), [approx], True, (255, 255, 0), 2)
cv2.imshow('result2', result2)

# Variant 3: hull of all points gathered from all contours
(contours, _) = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
points3 = [pt[0] for ctr in contours for pt in ctr]
points3 = np.array(points3).reshape((-1, 1, 2)).astype(np.int32)
hull3 = cv2.convexHull(points3)
result3 = cv2.drawContours(img.copy(), [hull3], -1, (0, 255, 0), 1, cv2.LINE_AA)
cv2.imshow('result3', result3)

# Variant 4: hull of the bright pixels taken straight from the original image
points4 = list(set(zip(*np.where(img >= 128)[1::-1])))
points4 = np.array(points4).reshape((-1, 1, 2)).astype(np.int32)
hull4 = cv2.convexHull(points4)
result4 = cv2.drawContours(img.copy(), [hull4], -1, (0, 255, 255), 1, cv2.LINE_AA)
cv2.imshow('result4', result4)

result = np.hstack([result1, result2, result3, result4])
cv2.imwrite('result.jpg', result)
