Check if a pixel is inside a connected component in OpenCV (Python)

I'm thresholding an image, which gives me some white regions, and I have a pixel location that lies in one of these regions. I'm using OpenCV's connectedComponentsWithStats to get the regions, and I then want to find which of these regions the pixel is in. How can I do that?
On that note, is there a better way of finding which thresholded region the pixel is located in?

numLabels, labelImage, stats, centroids = cv2.connectedComponentsWithStats(thresh, connectivity, cv2.CV_32S)
numLabels = the number of labels (regions) in your thresholded image
labelImage = a matrix of the same size as the input, containing a unique label (1, 2, 3, ...) for each region; the background is labeled 0
stats = a matrix with one row of statistics (bounding box and area) per region
centroids = the centroid of each region
In your case, you can use labelImage to read the label value at the pixel's coordinates and thereby find out which region it lies in.
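For example, a minimal sketch (assuming thresh is your binary image and (px, py) is the pixel in question):

import cv2

numLabels, labelImage, stats, centroids = cv2.connectedComponentsWithStats(thresh, 8, cv2.CV_32S)

label = labelImage[py, px]  # note the (row, col) = (y, x) indexing
if label == 0:
    print("pixel is on the background")
else:
    x, y, w, h, area = stats[label]  # bounding box and area of that region
    print(f"pixel lies in region {label}, area {area}")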

You can use the pointPolygonTest function to check whether a point is inside a contour or not.
So, after thresholding, find the contours in the image using the findContours function. Then you can pass each contour and the point to pointPolygonTest to check whether the point is inside that region.
That said, since you already have the connected components and stats (from connectedComponentsWithStats), the label-image lookup described above is the faster test.
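For comparison, a sketch of the contour-based test (same assumptions; note that findContours returns two values in OpenCV 4 and three in OpenCV 3):

import cv2

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for i, cnt in enumerate(contours):
    # measureDist=False returns +1 inside, 0 on the border, -1 outside
    if cv2.pointPolygonTest(cnt, (float(px), float(py)), False) >= 0:
        print(f"pixel lies inside contour {i}")
        break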

Related

Select remaining points after cropping a point cloud

I am currently facing a problem regarding point cloud cropping.
More specifically, I already know how to crop a point cloud based on Open3D, a package for point cloud processing. There are several ways to do it, for example:
import numpy as np
import open3d as o3d

# append a zero Z coordinate to each 2D polygon vertex of the camera view
newCamView = np.hstack((camView, np.zeros(shape=camView.shape[0]).reshape(3, 1)))
vol = o3d.visualization.SelectionPolygonVolume()
vol.bounding_polygon = o3d.utility.Vector3dVector(newCamView)
vol.orthogonal_axis = "Z"
vol.axis_max = 10
vol.axis_min = -10
pcd_cropped = vol.crop_point_cloud(pcd_raw)
pcd_final = np.asarray(np.hstack((pcd_cropped.points, pcd_cropped.colors)))
But in the context of my problem, I also need to extract the points outside the volume of interest, and even after studying the Open3D documentation and searching the internet I can't find an answer.
I would be interested in some help to either invert the selection in a cropping method, or to extract the indices of the points that lie within the bounding volume, so that I can use select_by_index from o3d.geometry.PointCloud to get both the inliers and the outliers.
You can use the Point Cloud Distance for this task. The following code should give you the points outside the crop:
# distance from each original point to its nearest neighbour in the cropped cloud;
# points that survived the crop have (near-)zero distance
dists = np.asarray(pcd_raw.compute_point_cloud_distance(pcd_cropped))
indices = np.where(dists > 0.00001)[0]
pcd_cropped_inv = pcd_raw.select_by_index(indices)
Another way to crop a point cloud in Open3D is to use a bounding-box object. (This method only supports rectangular volumes, not polygon-based cropping.)
Let's create an arbitrary bounding box with its center at the origin, edge lengths below 1, and rotation R:
R = np.identity(3)
extent = np.ones(3) / 1.5  # edge lengths below 1 unit
center = np.zeros(3)
obb = o3d.geometry.OrientedBoundingBox(center, R, extent)  # or use the axis-aligned bounding box class
Now you can crop your point cloud (pcd) by:
cropped = pcd.crop(obb)
o3d.visualization.draw_geometries([cropped]) #press ESC to close
To get indices of points inside this bounding box:
inliers_indices = obb.get_point_indices_within_bounding_box(pcd.points)
inliers_pcd = pcd.select_by_index(inliers_indices, invert=False) # select inside points = cropped
outliers_pcd = pcd.select_by_index(inliers_indices, invert=True) #select outside points
o3d.visualization.draw_geometries([outliers_pcd])
If you already know the boundaries you want to crop to, you can create a bounding box as above and crop. Alternatively, if you want to crop with respect to the bounding box of another point cloud or object whose pose you know, you can transform that object, compute its bounding box, and use this box to crop the larger point cloud. To get the bounding box of a point cloud:
obb = pcd.get_oriented_bounding_box(robust=False)  # set robust=True for a more robust computation
aabb = pcd.get_axis_aligned_bounding_box()
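For the known-boundaries case, a minimal sketch (the min/max bounds here are made-up placeholders):

import numpy as np
import open3d as o3d

# hypothetical crop boundaries - replace with your own
aabb = o3d.geometry.AxisAlignedBoundingBox(np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0]))
inliers_indices = aabb.get_point_indices_within_bounding_box(pcd.points)
inliers_pcd = pcd.select_by_index(inliers_indices)                # inside the box
outliers_pcd = pcd.select_by_index(inliers_indices, invert=True)  # outside the box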

How to find the right point correspondence between a template and a rotated image?

I have a .dxf file containing a drawing (the template), which is just a piece with holes. From this drawing I successfully extract the coordinates of the holes and their diameters, given as a list [[x1,y1,d1],[x2,y2,d2],...,[xn,yn,dn]].
After this, I take a picture of the piece (the same as the template) and, after some image processing, I obtain the coordinates of the detected holes and their contours. However, the piece in the picture can be rotated with respect to the template.
How do I find the right hole correspondence (between the hole coordinates in the template and the rotated hole coordinates in the image), so that I know which diameter corresponds to each hole in the image?
Is there any point-sorting method that can give me this correspondence?
I'm working with Python and OpenCV.
All answers will be highly appreciated. Thanks!!!
Image of Template: https://ibb.co/VVpWmKx
In the template image, the contours are drawn at the same size as given in the .dxf file, which differs from the size (in pixels) of the contours of the piece taken from the camera.
Processed image taken from the camera, contours of the piece are shown: https://ibb.co/3rjCg5F
I've tried OpenCV's feature-matching functions (the ORB algorithm) to get the angle by which the piece in the picture is rotated with respect to the template, but I still cannot recover this rotation angle. How can I get the rotation angle from image descriptors?
Is this the best approach for this problem, or are there better methods to address it?
Considering the image of the extracted contours, you might not need something as heavy as the feature-matching machinery of the OpenCV library. One approach is to take the outermost contour of the piece and compute its cv::minAreaRect; the resulting rotated rectangle gives you the angle. Then you just have to decide whether the symmetry matches, because the piece might be flipped. That can be done in many ways. One of the simplest (assuming the scale is not off) is to take the outermost contour again, fill it, and count the percentage of points that overlap with the template; the orientation with the right symmetry should match in almost all points, given that the matched piece and the template have the same scale.
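A rough Python sketch of that idea, assuming piece_bin and template_bin are same-size, same-scale binary masks of the piece and the template (both names are placeholders), and ignoring the translation between them (in practice you would align the centroids first):

import cv2
import numpy as np

cnts, _ = cv2.findContours(piece_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outer = max(cnts, key=cv2.contourArea)            # outermost = largest contour
(cx, cy), (w, h), angle = cv2.minAreaRect(outer)  # rect angle, ambiguous mod 90/180

best_angle, best_score = None, -1
for a in (angle, angle + 90, angle + 180, angle + 270):
    M = cv2.getRotationMatrix2D((cx, cy), a, 1.0)
    rotated = cv2.warpAffine(piece_bin, M, piece_bin.shape[::-1])
    # overlap with the filled template resolves the symmetry ambiguity
    score = np.count_nonzero(cv2.bitwise_and(rotated, template_bin))
    if score > best_score:
        best_angle, best_score = a, score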
You should use Hu moments, which give a translation-, scale- and rotation-invariant descriptor for matching.
Hu moments are described at https://en.wikipedia.org/wiki/Image_moment and are implemented in OpenCV.
You can dig up the theory of moment invariants on that wiki page pretty easily.
To use them, you can simply call:
// Calculate moments
Moments m = moments(im, false);
// Calculate Hu moments
double huMoments[7];
HuMoments(m, huMoments);
A sample set of Hu moments will look like:
h[0] = 0.00162663
h[1] = 3.11619e-07
h[2] = 3.61005e-10
h[3] = 1.44485e-10
h[4] = -2.55279e-20
h[5] = -7.57625e-14
h[6] = 2.09098e-20
The moments usually span a huge dynamic range, so they are typically combined with a signed log transform to compress that range for matching:
H[i] = -sign(h[i]) * log10(|h[i]|)
H[0] = 2.78871
H[1] = 6.50638
H[2] = 9.44249
H[3] = 9.84018
H[4] = -19.593
H[5] = -13.1205
H[6] = 19.6797
BTW, you might need to pad the template to extract the edge contour
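A Python equivalent of the above, with the signed log transform written out (the sign term preserves the information carried by negative moments such as h[4] and h[6]); note that cv2.matchShapes performs essentially this comparison internally:

import cv2
import numpy as np

def log_hu_moments(binary_img):
    m = cv2.moments(binary_img, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    # signed log10 compresses the huge dynamic range; the epsilon guards against log(0)
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

Two shapes can then be matched by comparing their descriptors, e.g. with a simple L1 distance.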

Comparing and plotting regions of the same color over a dataset of a few hundred images

A chem student asked me for help with plotting image segmentation:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, yielding around 300 images.
The relevant parts in the setup are two adjacent layers of differently-colored foams observed from the side, a 2-color sandwich shrinking from both sides, basically, except one of the foams evaporates a bit faster.
I'd like to segment each of the images in the way that would let me plot both foam regions' "width" against time.
Here is a "diagram" :)
I want to go from here --> To here
Ideally, given a few hundred such shots in which only the widths change, I'd get back an array of scalars that I can plot. (It's going to look like a harmonic series on either side of the x-axis.)
I have a bit of Python and MATLAB experience, but I have never used OpenCV or the Image Processing Toolbox in MATLAB, and have actually never dealt with computer vision in general. Could you guys throw me a roadmap of which packages/functions to use or which steps to take, and I'll take it from there?
I'm not sure how to address these things:
- selecting the slice along the length of the foams at which the algorithm measures the width (i.e. if the foams are a bit uneven), although this can be ignored;
- which library to use to segment regions of the image based on their color (some k-means shenanigans, probably), and how to selectively store the spatial parameters of the resulting segments;
- how to iterate the above over a number of files.
Thank you kindly in advance!
Assuming the intensity of the layers differs after converting to grayscale (if not, just convert to another color space like HSV or LAB and use one of its components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, threshold your grayscale input into a few bands:
ret, thresh1 = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
ret, thresh2 = cv2.threshold(img, 27, 255, cv2.THRESH_BINARY_INV)
ret, thresh3 = cv2.threshold(img, 77, 255, cv2.THRESH_TRUNC)
ret, thresh4 = cv2.threshold(img, 97, 255, cv2.THRESH_TOZERO)
ret, thresh5 = cv2.threshold(img, 227, 255, cv2.THRESH_TOZERO_INV)
The threshold values should be tuned on your actual data; the ones here are just examples.
Clean up each segmented image using a median filter with a radius of 9 or larger; I do expect some noise. You can also use an ROI here to help remove part of the noise, but personally I'm lazy and just wrote the program to handle all cases and angles.
thresholded_image_smoothed = cv2.medianBlur(thresholded_image, 9)
Each band corresponds to one color (layer). You should now have N segmented images from one source image, where N is the number of layers you wish to track.
Second, use the OpenCV function boundingRect to find the location and width/height of each layer, i.e. run boundingRect on each smoothed sub-segmented image:
C++:    Rect boundingRect(InputArray points)
Python: cv2.boundingRect(points) -> retval
Last, the rect has x, y, width and height properties. You can use a simple sort on the rect attribute x to order the bands, then run through the whole video to obtain a height vs. time graph per band (identified by its x).
Rect API, public attributes:
_Tp height  // this is what you are looking for
_Tp width
_Tp x       // this tells you the position of the band
_Tp y
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need.
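A minimal Python sketch of the pipeline above for one frame (layer_bands is a hypothetical helper; the intensity bands are placeholders to be tuned, as noted):

import cv2

def layer_bands(frame_gray):
    results = []
    for lo, hi in [(20, 80), (120, 200)]:       # placeholder intensity bands
        band = cv2.inRange(frame_gray, lo, hi)  # one mask per layer
        band = cv2.medianBlur(band, 9)          # clean up noise
        cnts, _ = cv2.findContours(band, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if cnts:
            x, y, w, h = cv2.boundingRect(max(cnts, key=cv2.contourArea))
            results.append((x, h))  # x identifies the band, h is its height
    return sorted(results)          # sorted by band position x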
A more correct way is to use a Kalman filter to track the position and height over time, as I would expect some bubbles to occur and interfere with the measured layer heights.
To be honest, I didn't expect a chem student to be good at this. Haha, good luck!
If anything goes wrong, you can find me here, or email me if I'm not watching Stack Overflow.
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions for each image, the result will show the shrinkage over time.
If, for example, you use a 3-pixel-wide ROI, the result for 300 images will be a 900-pixel-wide image, where the left edge is the start of the experiment and the right edge is the end. The following image can help you understand:
Though I have not fully tested it, this code should work. Note that there must only be images in the folder you reference.
import cv2
import numpy as np
import os

# path to the folder that holds the images
path = '.'

# dimensions of the roi
x = 0
y = 0
w = 3
h = 100

# store references to all images and sort them by name
all_images = os.listdir(path)
all_images.sort()

# create an empty result array to stack the rois onto
result = np.empty([h, 0, 3], dtype=np.uint8)

for image in all_images:
    # load image
    img = cv2.imread(path + '/' + image)
    # get the region of interest
    roi = img[y:y+h, x:x+w]
    # add the roi to the previous results
    result = np.hstack((result, roi))

# optional: save result as image
# cv2.imwrite('result.png', result)

# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after question edit:
If the foams have different colors, you can easily separate them by converting the image to HSV and using inRange (example). This creates a mask (a 2D array with values from 0-255, one per pixel) that you can use to calculate the average height and to extract the parameters and area of each foam.
You can find a script to help you find the HSV bounds for separation on GitHub.
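A minimal inRange sketch (the HSV bounds below are placeholders you would tune with such a script):

import cv2
import numpy as np

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# placeholder bounds for one foam color - tune these for your images
mask = cv2.inRange(hsv, np.array([35, 50, 50]), np.array([85, 255, 255]))
# average height of this foam band = white pixels per image column
avg_height = np.count_nonzero(mask) / mask.shape[1]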

Detect Red and Green Circles

I want to detect red and green circles separately in the following image (and a few other similar images)
I'm using opencv and python.
I've tried using HoughCircles, but that wasn't of any help, even after changing the params.
Any suggestion on how to do this would really help a lot.
I would appreciate it if someone could share some code.
You mentioned in the comments that the circles will always have the same size.
Let's take advantage of this fact. My code snippets are in C++, but this should not be a problem, because they are only here to show which OpenCV functions to use (and how)...
TL;DR - do this:
Create a typical circle image - the template image.
Use template matching to get all circle positions.
Check the color of every circle.
Now, let's begin!
Step 1 - The template image
You need an image that shows the circle clearly separated from the background. You have two options (both equally good):
make such an image yourself (computing it if you know the radius), or
simply take one image from the set you are working on and then crop one well-visible circle and save it as a separate image (that's what I did because it was a quicker option)
The circle can be of any color - it is only important that it is distinct from the background.
Step 2 - Template matching
Load the image and template image and convert them to HSV color space. Then split channels so that you will be able to only work with saturation channel:
using namespace std;
using namespace cv;
Mat im_rgb = imread("circles.jpg");
Mat tm_rgb = imread("template.jpg");
Mat im_hsv, tm_hsv;
cvtColor(im_rgb, im_hsv, CV_RGB2HSV);
cvtColor(tm_rgb, tm_hsv, CV_RGB2HSV);
vector<Mat> im_channels, tm_channels;
split(im_hsv, im_channels);
split(tm_hsv, tm_channels);
That's how circles and the template look now:
Next, you have to obtain an image that contains information about the circle borders. Regardless of how you achieve that, you have to apply exactly the same operations to the image and template saturation channels.
I used the Sobel operator to get the job done. The code example only shows the operations I applied to the image saturation channel; the template saturation channel went through exactly the same procedure:
Mat im_f;
im_channels[1].convertTo(im_f, CV_32FC1);
GaussianBlur(im_f, im_f, Size(3, 3), 1, 1);
Mat sx, sy;
Sobel(im_f, sx, -1, 1, 0);
Sobel(im_f, sy, -1, 0, 1);
Mat image_input = abs(sx) + abs(sy);
That's how the circles on the obtained image and the template look now:
Now, perform template matching. I advise that you choose the type of template matching that computes normalized correlation coefficients:
Mat match_result;
matchTemplate(image_input, template_input, match_result, CV_TM_CCOEFF_NORMED);
This is the template matching result:
This image tells you how well the template correlates with the underlying image when the template is placed at different positions on the image. For example, the result value at pixel (0,0) corresponds to the template placed at (0,0) on the input image.
When the template is placed in a position where it matches well with the underlying image, the correlation coefficient is high. Use the threshold method to discard everything except the peaks of the signal (the template matching values lie inside the [-1, 1] interval, and you are only interested in values close to 1):
Mat thresholded;
threshold(match_result, thresholded, 0.8, 1.0, CV_THRESH_BINARY);
Next, determine the positions of the template matching maxima inside each isolated area. I recommend using the thresholded image as a mask for this purpose. Only one maximum should be selected within each area.
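One way to do that, sketched in Python for brevity (match_result and thresholded are the arrays from the C++ snippets above): label each isolated above-threshold area and keep a single maximum per label.

import cv2
import numpy as np

n, labels = cv2.connectedComponents((thresholded > 0).astype(np.uint8))
positions = []
for lbl in range(1, n):  # label 0 is the background
    inside = np.where(labels == lbl, match_result, -1.0)
    y, x = np.unravel_index(np.argmax(inside), inside.shape)
    positions.append((x, y))  # best template position within this blob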
These positions tell you where you have to place the template so that it matches best with the circles. I drew rectangles that start at these points and have the same width/height as the template image:
Step 3: The color of the circle
Now you know where the templates should be positioned so that they cover the circles nicely. But you still have to find out where the circle center is located on the template image. You can do this by computing the center of mass of the template's saturation channel:
On the image, the circle centers are located at these points:
Point circ_center_on_image = template_position + circ_center_on_template;
Now you only have to check whether the red channel intensity at these points is larger than the green channel intensity. If yes, the circle is red; otherwise it is green:
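The final snippet is missing from the answer; a sketch of that check might look like this (in Python for brevity; circle_centers is assumed to hold the circle centers computed above):

import cv2

img = cv2.imread("circles.jpg")  # OpenCV loads images as BGR
for (x, y) in circle_centers:    # template position + center-of-mass offset
    b, g, r = img[y, x]
    print(f"circle at ({x}, {y}) is {'red' if r > g else 'green'}")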

Find edges (graph edges) in a binary image using OpenCV

I have a binary image, and I am trying to represent it as a graph, such that the white parts of the image are the vertices and edges: the big white areas are the vertices, and the edges are the thin white parts that connect the big white areas I detected as vertices.
I managed to find the centers of the big white parts using OpenCV functions such as erosion, findContours and moments (using the moment centroids).
So I have the vertices of the graph.
My next goal is to get the edges, meaning finding lines that are IN WHITE AREAS only, represented by 2 points, (x1,y1) and (x2,y2).
I tried using all kinds of functions, such as:
cv2.Canny()
cv2.findLine
cv2.findContour with different parameters on the binary image
To understand my goal, think of it as a maze, where the start is the biggest white spot in the image, the end is the second biggest white spot, and the places you can walk through are all the white areas of the image.
Some code segments I used in my project:
The first one finds the contours, given a binary image (finalImage), and returns the centroids:
def findCentroids(finalImage):
    _, contours0, hierarchy = cv2.findContours(finalImage.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    moments = [cv2.moments(cnt) for cnt in contours0]
    centroids = []
    for M in moments:
        if M["m00"] != 0:
            cX = int(M["m10"] / M["m00"])
            cY = int(M["m01"] / M["m00"])
            centroids.append((cX, cY))
    return centroids
So, having found these centroids, I want to find more of them (by eroding the image less) and then perhaps find all the edges that connect these centroids. This doesn't seem like a good method, so I hope to get better approaches in the answers.
EDIT
So I thought of another idea, which is to use the connected components method. I tried the connected components function supplied by cv2, like so:
output = cv2.connectedComponentsWithStats(imageForEdges, 8, cv2.CV_32S)
But the outcome is that only black spots are recognized as components, which is the opposite of what I need. I tried using the inverted image and it gave the same results; I assume the algorithm prefers spots that are completely bounded, not the background (which is the white color in my case, and the whole purpose of my using it: it finds areas that are not bounded).
Did you check out Iwanowski's algorithm?
https://pdfs.semanticscholar.org/cd14/22f1e33022b0bede3f4a03844bc7dcc979ed.pdf
"The paper describes a method for the analysis of the content of a binary image in order to find its structure. The class of images it deals with consists of images showing at its foreground groups of objects connected one to another forming a graph-like structure. Described method extract automatically this structure from image bitmap and produces a matrix containing connections between all the objects shown on the input image"
