Select remaining points after cropping a point cloud - python

I am currently facing a problem regarding point cloud cropping.
More specifically, I already know how to crop a point cloud using Open3D, a package for point cloud processing. There are several ways to do it, for example:
newCamView = np.hstack((camView, np.zeros(shape=camView.shape[0]).reshape(3,1)))
vol = o3d.visualization.SelectionPolygonVolume()
vol.bounding_polygon = o3d.utility.Vector3dVector(newCamView)
vol.orthogonal_axis = "Z"
vol.axis_max = 10
vol.axis_min = -10
pcd_cropped = vol.crop_point_cloud(pcd_raw)
pcd_final = np.asarray(np.hstack((pcd_cropped.points,pcd_cropped.colors)))
But in the context of my problem, I also need to extract the points outside the volume of interest. And even after studying the Open3D documentation and searching on the internet I can't find an answer.
I would appreciate some help with either inverting the selection of a cropping method, or extracting the indices of the points that lie within the bounding volume so that I can use select_by_index from o3d.geometry.PointCloud to get both inliers and outliers.

You can use the Point Cloud Distance for this task. The following code should give you the points outside the crop:
dists = np.asarray(pcd_raw.compute_point_cloud_distance(pcd_cropped))
indices = np.where(dists > 0.00001)[0]
pcd_cropped_inv = pcd_raw.select_by_index(indices)

Another way to crop a point cloud in Open3D is to use a bounding-box object. (This only works for rectangular volumes, not polygon-based cropping.)
Let's create an arbitrary bounding box centered at the origin, with edge lengths below 1 and rotation R.
R = np.identity(3)
extent = np.ones(3)/1.5 # trying to create a bounding box below 1 unit
center = np.zeros(3)
obb = o3d.geometry.OrientedBoundingBox(center,R,extent) # or you can use axis aligned bounding box class
Now you can crop your point cloud (pcd) by:
cropped = pcd.crop(obb)
o3d.visualization.draw_geometries([cropped]) #press ESC to close
To get indices of points inside this bounding box:
inliers_indices = obb.get_point_indices_within_bounding_box(pcd.points)
inliers_pcd = pcd.select_by_index(inliers_indices, invert=False) # select inside points = cropped
outliers_pcd = pcd.select_by_index(inliers_indices, invert=True) #select outside points
o3d.visualization.draw_geometries([outliers_pcd])
If you already know the boundaries you want to crop, you can create a bounding box as above and crop. Alternatively, you can crop with respect to the bounding box of another point cloud or object: if you know its pose, transform it, compute its bounding box, and use that box to crop the larger point cloud. To get the bounding box of a point cloud:
obb = pcd.get_oriented_bounding_box(robust=False) #set robust =True for more robust computation.
aabb = pcd.get_axis_aligned_bounding_box()
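For the "crop with respect to another object" case described above, a minimal sketch could look like the following. The names pcd_large (the cloud to crop), pcd_object (the reference cloud) and T (its 4x4 pose) are my own placeholders, not from the answer:
import copy
import open3d as o3d

pcd_object_t = copy.deepcopy(pcd_object)        # keep the original untouched
pcd_object_t.transform(T)                       # move it to its known pose
obb = pcd_object_t.get_oriented_bounding_box()  # bounding box in the large cloud's frame
idx = obb.get_point_indices_within_bounding_box(pcd_large.points)
inside = pcd_large.select_by_index(idx)                # cropped part
outside = pcd_large.select_by_index(idx, invert=True)  # everything else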

Related

Check if pixel is inside a connected component in opencv python

I'm thresholding an image, which gives me some white regions, and I have a pixel location that lies in one of these regions. I'm using OpenCV's connectedComponentsWithStats to get the regions and then find whether the pixel is in any of them. How can I do that?
On that note, is there a better way of finding in which thresholded region that pixel is located?
numLabels, labelImage, stats, centroids = cv2.connectedComponentsWithStats(thresh, connectivity, cv2.CV_32S)
numLabels = number of labels or regions in your thresholded image
labelImage = a matrix/image containing unique labels (1, 2, 3, ...) representing each region; the background is represented as 0 in labelImage.
stats = a matrix of statistics containing information about each region.
centroids = centroid of each region.
In your case, you can look up the label value at your pixel coordinate in labelImage to find out which region it lies in.
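A rough sketch of that lookup, assuming thresh is the binary image and (px, py) is the query pixel (these names are mine, not from the question):
import cv2

num_labels, label_image, stats, centroids = cv2.connectedComponentsWithStats(thresh, 8, cv2.CV_32S)
label = label_image[py, px]          # note: row (y) first, column (x) second
if label == 0:
    print("pixel is on the background")
else:
    x, y, w, h, area = stats[label]  # stats row for that region
    print(f"pixel is in region {label}, area {area}")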
You can use the pointPolygonTest function to check whether a point is inside a contour or not.
So, after thresholding, find the contours in the image using findContours function. Then you can pass the contours and the point to this function to check whether the point is inside the region or not.
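A short sketch of that approach, assuming OpenCV 4.x (where findContours returns two values) and the same thresh/px/py names as above:
import cv2

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for i, cnt in enumerate(contours):
    # measureDist=False returns +1 (inside), 0 (on the edge), -1 (outside)
    if cv2.pointPolygonTest(cnt, (float(px), float(py)), False) >= 0:
        print(f"pixel lies inside contour {i}")
        break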
Since you have the connected components and stats (that you found using connectedComponentsWithStats), you can test faster using this approach.

How to find the right point correspondence between template and a rotated image?

I have a .dxf file containing a drawing (template) which is just a piece with holes, from said drawing I successfully extract the coordinates of the holes and their diameters given in a list [[x1,y1,d1],[x2,y2,d2]...[xn,yn,dn]].
After this, I take a picture of the piece (same as template) and after some image processing, I obtain the coordinates of my detected holes and the contours. However, this piece in the picture can be rotated with respect to the template.
How do I find the right hole correspondence (between the coordinates of holes in the template and the rotated coordinates of holes in the image) so I know which diameter corresponds to each hole in the image?
Is there any point-sorting method that can give me this correspondence?
I'm working with Python and OpenCV.
All answers will be highly appreciated. Thanks!!!
Image of Template: https://ibb.co/VVpWmKx
In the template image, contours are drawn at the same size as given in the .dxf file, which differs from the size (in pixels) of the contours of the piece taken from the camera.
Processed image taken from the camera, contours of the piece are shown: https://ibb.co/3rjCg5F
I've tried OpenCV's feature-matching functions (the ORB algorithm) to get the angle by which the piece in the picture is rotated with respect to the template, but I still cannot recover this rotation angle. How can I get the rotation angle from image descriptors? Is this the best approach for this problem, or are there better methods to address it?
Considering the image of the extracted contours, you might not need something as heavy as the feature-matching algorithms of the OpenCV library. One approach is to take the outermost contour of the piece and get its cv::minAreaRect. The resulting rotated rectangle gives you the angle. You then have to decide whether the symmetry matches, because the piece might be flipped. That can be done in many ways; one of the simplest (ignoring the fact that the scale might be off) is to take the outermost contour again, fill it, and count the percentage of points that overlap with the template. The orientation with the correct symmetry should match in almost all points, provided the scale of the matched piece and the template is the same.
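A brief Python sketch of that idea; binary is assumed to be a black-and-white image of the piece, and the OpenCV 4.x findContours signature is used:
import cv2

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outer = max(contours, key=cv2.contourArea)        # outermost (largest) contour
(cx, cy), (w, h), angle = cv2.minAreaRect(outer)  # rotated rect: center, size, angle in degrees
print("estimated rotation angle:", angle)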
You should use Hu moments, which give a translation-, scale- and rotation-invariant descriptor for matching.
The Hu moments are described at https://en.wikipedia.org/wiki/Image_moment and are implemented in OpenCV.
You can dig up the theory of moment invariants on the wiki page fairly easily.
To use it you can simply call (C++):
// Calculate Moments
Moments moments = moments(im, false);
// Calculate Hu Moments
double huMoments[7];
HuMoments(moments, huMoments);
Sample Hu moments might look like this:
h[0] = 0.00162663
h[1] = 3.11619e-07
h[2] = 3.61005e-10
h[3] = 1.44485e-10
h[4] = -2.55279e-20
h[5] = -7.57625e-14
h[6] = 2.09098e-20
The Hu moments usually span a very large dynamic range, so they are typically passed through a log transform to compress it for matching (here H = -sign(h) * log10(|h|)):
H[0] = 2.78871
H[1] = 6.50638
H[2] = 9.44249
H[3] = 9.84018
H[4] = -19.593
H[5] = -13.1205
H[6] = 19.6797
BTW, you might need to pad the template to extract the edge contour
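Since the question is about Python, a rough equivalent of the C++ snippet above, including the log scaling, might look like this (cv2.moments and cv2.HuMoments are the Python counterparts; the small epsilon is only there to avoid log(0)):
import cv2
import numpy as np

def log_hu_moments(binary_img):
    # seven Hu moments of a binary image, mapped through a signed log10 transform
    m = cv2.moments(binary_img, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# matching idea: np.linalg.norm(log_hu_moments(template) - log_hu_moments(detected))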

Comparing and plotting regions of the same color over a dataset of a few hundred images

A chem student asked me for help with plotting image segmentation:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, yielding roughly 300 images.
The relevant parts in the setup are two adjacent layers of differently-colored foams observed from the side, a 2-color sandwich shrinking from both sides, basically, except one of the foams evaporates a bit faster.
I'd like to segment each of the images in a way that would let me plot both foam regions' "width" against time.
Here is a "diagram" :)
I want to go from here --> To here
Ideally, given a few hundred of such shots, in which only the widths change, I get an array of scalars back that I can plot. (Going to look like a harmonic series on either side of the x-axis)
I have a bit of Python and Matlab experience, but I have never used OpenCV or the Image Processing Toolbox in Matlab, and have never dealt with computer vision in general. Could you throw me a rough roadmap of which packages/functions to use or steps to take, and I'll take it from there?
I'm not sure how to address these things:
- selecting at which slice along the length the algorithm measures the width (i.e. if the foams are a bit uneven), although this can be ignored.
- which library to use to segment regions of the image based on their color (some k-means shenanigans, probably), and how to selectively store the spatial parameters of the resulting segments?
- how to iterate the above over a number of files.
Thank you kindly in advance!
Assume the intensity will be different after converting to grayscale (if not, just convert to another color space such as HSV or LAB and use one of its components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, threshold your grayscale input into a few bands:
ret,thresh1 = cv2.threshold(img,128,255,cv2.THRESH_BINARY)
ret,thresh2 = cv2.threshold(img,27,255,cv2.THRESH_BINARY_INV)
ret,thresh3 = cv2.threshold(img,77,255,cv2.THRESH_TRUNC)
ret,thresh4 = cv2.threshold(img,97,255,cv2.THRESH_TOZERO)
ret,thresh5 = cv2.threshold(img,227,255,cv2.THRESH_TOZERO_INV)
The threshold values should be tuned to your actual data; the ones above are just examples.
Clean up the segmented images using a median filter with a kernel size of 9 or larger, since some noise is to be expected. You can also use an ROI here to help remove part of the noise, but personally I'm lazy and just wrote the program to handle all cases and angles:
thresholded_images_aftersmoothing = cv2.medianBlur(thresholded_images, 9)
Each band corresponds to one color (layer). Now you should have N segmented images from one source, where N is the number of layers you wish to track.
Second, use the OpenCV bounding-rect function to find the location and width/height of each layer, i.e. call boundingRect on each sub-segmented image (each thresholded_images_aftersmoothing):
C++: Rect boundingRect(InputArray points)
Python: cv2.boundingRect(points) → retval
Last, the rect has x, y, width and height attributes. You can use a simple sort to order the layers from top to bottom based on the rect attribute x, then run through the whole video to obtain the x (layer id) and height-vs-time graph.
Rect API (public attributes):
_Tp height  // this is what you are looking for
_Tp width
_Tp x       // this tells you the position of the band
_Tp y
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need.
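A small sketch (my own variable names, not from the answer) of the boundingRect step described above: one call per thresholded band per frame, collecting the rect height for the plot:
import cv2

def band_height(band_mask):
    # height of the largest blob in one thresholded band, 0 if the band is empty
    contours, _ = cv2.findContours(band_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return h

# heights[i] = band_height(band_mask_of_frame_i) gives one height-vs-time series per layer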
A more robust way is to use a Kalman filter to track the positions and heights, as I would expect some bubbles to occur and interfere with the measured height of the layers.
To be honest, I didn't expect a chem student to be doing this. Haha, good luck. If anything goes wrong you can find me here, or email me if I'm not watching Stack Overflow.
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions for each image it will show the shrink over time.
If, for example, you use a 3-pixel-wide ROI, the result for 300 images will be a 900-pixel-wide image, where the left is the start of the experiment and the right is the end. The following image can help you understand:
Though I have not fully tested it, this code should work. Note that there must only be images in the folder you reference.
import cv2
import numpy as np
import os

# path to folder that holds the images
path = '.'

# dimensions of roi
x = 0
y = 0
w = 3
h = 100

# store references to all images
all_images = os.listdir(path)
# sort images
all_images.sort()

# create empty result array
result = np.empty([h, 0, 3], dtype=np.uint8)

for image in all_images:
    # load image
    img = cv2.imread(path + '/' + image)
    # get the region of interest
    roi = img[y:y+h, x:x+w]
    # add the roi to previous results
    result = np.hstack((result, roi))

# optional: save result as image
# cv2.imwrite('result.png', result)

# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after question edit:
If the foams have different colors, you can easily separate them by converting the image to HSV and using inRange (example). This creates a mask (a 2D array with values from 0-255, one per pixel) that you can use to calculate the average height and extract the parameters and area of each region.
You can find a script to help you determine the HSV colors for separation on GitHub.
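A hedged sketch of that idea: convert a frame to HSV, mask one foam with inRange (the bounds below are placeholders you would tune, e.g. with the script mentioned above), and take the mask's horizontal extent as its width:
import cv2
import numpy as np

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)                     # frame = one image of the series
mask = cv2.inRange(hsv, (h_lo, s_lo, v_lo), (h_hi, s_hi, v_hi))  # placeholder HSV bounds
cols = np.where(mask.any(axis=0))[0]                             # columns containing this foam
width = cols.max() - cols.min() + 1 if cols.size else 0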

How do I programmatically find the pixel locations of specific features in an image?

I'm building an automated electricity / gas meter reader using OpenCV and Python. I've got as far as taking shots with a webcam:
I can then use an affine transform to unwarp the image (an adaptation of this example):
def unwarp_image(img):
    rows, cols = img.shape[:2]
    # Source points
    left_top = 12
    left_bottom = left_top + 2
    top_left = 24
    top_right = 13
    bottom = 47
    right = 180
    srcTri = np.array([(left_top, top_left), (right, top_right), (left_bottom, bottom)], np.float32)
    # Corresponding destination points. Remember, both sets are of float32 type
    dst_height = 30
    dstTri = np.array([(0, 0), (cols - 1, 0), (0, dst_height)], np.float32)
    # Affine transformation
    warp_mat = cv2.getAffineTransform(srcTri, dstTri)        # 2x3 affine transform matrix
    dst = cv2.warpAffine(img, warp_mat, (cols, dst_height))  # note dsize is (width, height), i.e. (cols, dst_height)
    #cv2.imshow("crop_img", dst)
    #cv2.waitKey(0)
    return dst
..which gives me an image something like this:
I still need to extract the text using some sort of OCR routine but first I'd like to automate the part that identifies what pixel locations to apply the affine transform to. So if someone knocks the webcam it doesn't stop the software working.
Since your image is pretty much planar, you can look into finding the homography between the image you get from the webcam and the desired image (in the upright position).
Edit: this will rotate the image into the upright position. Once you've registered your image (brought it into the upright position), you could do row-wise or column-wise projections (sum all the pixels along the columns to get one vector, sum all the pixels along the rows to get another). You can use these vectors to figure out where you have a jump in color, and crop there.
Alternatively you can use the Hough transform, which gives you lines in an image. You can probably get away with not registering the image if you do this.
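A minimal numpy sketch of the projection idea above, assuming gray is the registered grayscale image (name is mine); the two largest jumps in each profile are taken as rough crop boundaries:
import numpy as np

row_profile = gray.sum(axis=1).astype(np.int64)   # one value per row
col_profile = gray.sum(axis=0).astype(np.int64)   # one value per column

# the biggest jumps in each profile roughly mark the edges of the bright region
row_edges = np.sort(np.argsort(np.abs(np.diff(row_profile)))[-2:])
col_edges = np.sort(np.argsort(np.abs(np.diff(col_profile)))[-2:])
crop = gray[row_edges[0]:row_edges[1], col_edges[0]:col_edges[1]]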

Python OpenCV - Find black areas in a binary image

Is there any method/function in the Python wrapper of OpenCV that finds black areas in a binary image? (like regionprops in Matlab)
Up to now I load my source image, transform it into a binary image via thresholding, and then invert it to highlight the black areas (which are now white).
I can't use third-party libraries such as cvblobslob or cvblob.
Basically, you use the findContours function, in combination with many other functions OpenCV provides especially for this purpose.
Useful functions used (surprise, surprise, they all appear on the Structural Analysis and Shape Descriptors page in the OpenCV Docs):
findContours
drawContours
moments
contourArea
arcLength
boundingRect
convexHull
fitEllipse
Example code (I have all the properties from Matlab's regionprops except WeightedCentroid and EulerNumber - you could work out EulerNumber by using cv2.RETR_TREE in findContours and looking at the resulting hierarchy, and I'm sure WeightedCentroid wouldn't be that hard either):
# grab contours
cs, _ = cv2.findContours(BW.astype('uint8'), mode=cv2.RETR_LIST, method=cv2.CHAIN_APPROX_SIMPLE)
# set up the 'FilledImage' bit of regionprops.
filledI = np.zeros(BW.shape[0:2]).astype('uint8')
# set up the 'ConvexImage' bit of regionprops.
convexI = np.zeros(BW.shape[0:2]).astype('uint8')
# for each contour c in cs:
# will demonstrate with cs[0] but you could use a loop.
i=0
c = cs[i]
# calculate some things useful later:
m = cv2.moments(c)
# ** regionprops **
Area = m['m00']
Perimeter = cv2.arcLength(c,True)
# bounding box: x,y,width,height
BoundingBox = cv2.boundingRect(c)
# centroid = m10/m00, m01/m00 (x,y)
Centroid = ( m['m10']/m['m00'],m['m01']/m['m00'] )
# EquivDiameter: diameter of circle with same area as region
EquivDiameter = np.sqrt(4*Area/np.pi)
# Extent: ratio of area of region to area of bounding box
Extent = Area/(BoundingBox[2]*BoundingBox[3])
# FilledImage: draw the region on in white
cv2.drawContours( filledI, cs, i, color=255, thickness=-1 )
# calculate indices of that region..
regionMask = (filledI==255)
# FilledArea: number of pixels filled in FilledImage
FilledArea = np.sum(regionMask)
# PixelIdxList : indices of region.
# (np.array of xvals, np.array of yvals)
PixelIdxList = regionMask.nonzero()
# CONVEX HULL stuff
# convex hull vertices
ConvexHull = cv2.convexHull(c)
ConvexArea = cv2.contourArea(ConvexHull)
# Solidity := Area/ConvexArea
Solidity = Area/ConvexArea
# convexImage -- draw on convexI
cv2.drawContours(convexI, [ConvexHull], -1, color=255, thickness=-1)
# ELLIPSE - determine best-fitting ellipse.
centre,axes,angle = cv2.fitEllipse(c)
MAJ = np.argmax(axes) # this is MAJor axis, 1 or 0
MIN = 1-MAJ # 0 or 1, minor axis
# Note: axes length is 2*radius in that dimension
MajorAxisLength = axes[MAJ]
MinorAxisLength = axes[MIN]
Eccentricity = np.sqrt(1-(axes[MIN]/axes[MAJ])**2)
Orientation = angle
EllipseCentre = centre # x,y
# ** if an image is supplied with the BW:
# Max/Min Intensity (only meaningful for a one-channel img..)
MaxIntensity = np.max(img[regionMask])
MinIntensity = np.min(img[regionMask])
# Mean Intensity
MeanIntensity = np.mean(img[regionMask],axis=0)
# pixel values
PixelValues = img[regionMask]
After inverting the binary image to turn the black areas white, apply the cv.FindContours function. It will give you the boundaries of the regions you need.
Later you can use cv.BoundingRect to get the minimum bounding rectangle around a region. Once you have the rectangle vertices, you can find its center, etc.
Or, to find the centroid of a region, use cv.Moments after finding the contours, then use cv.GetSpatialMoment in the x and y directions. This is explained in the OpenCV manual.
To find the area, use the cv.ContourArea function.
Transform it to a binary image using threshold with the CV_THRESH_BINARY_INV flag; that way you get thresholding and inversion in one step.
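In the cv2 Python API the corresponding flag is cv2.THRESH_BINARY_INV; for illustration (the threshold value 127 is arbitrary):
import cv2

_, bw_inv = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)  # black areas become white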
If you can consider using another free library, you could use SciPy. It has a very convenient way of counting areas:
from scipy import ndimage
def count_labels(mask_image):
    """Return the count of labeled regions in a mask image."""
    label_im, nb_labels = ndimage.label(mask_image)
    return nb_labels
If necessary you can use:
import cv2 as opencv
image = opencv.inRange(image, lower_threshold, upper_threshold)
beforehand, to get a mask image that contains only black and white, where white marks the objects in the given range.
I know this is an old question, but for completeness I wanted to point out that cv2.moments() will not always work for small contours. In this case, you can use cv2.minEnclosingCircle(), which will always return the center coordinates (and radius), even if you have only a single point. It is slightly more resource-hungry, though, I think...
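For illustration, with c being a contour as in the code above:
import cv2

(cx, cy), radius = cv2.minEnclosingCircle(c)  # center and radius, even for a single-point contour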
