I am rather new to Python and I think I'm attempting something quite complicated. I have repeatedly tried to search for this and I think I'm missing some base knowledge, as I truly don't understand what I've read.
Effectively I have 3 arrays, which contain x points, y points, and z points. They create an approximately hemispherical shape. I have these points from taking a contour of a 2D axisymmetrical semicircular shape, extracting the x points and y points from it into separate arrays, and creating a "z points" array of zeros of the same length as the previous two arrays.
From this, I then rotate the y points into the z-domain using angles to create a 3D approximation of the 2D contour.
This 3D shape is completely bounded (in that it creates both the bottom and the top), as linked below:
3d Approximation
I've butchered the following code from my much longer program; it's the only part that is pertinent to my question, however:
c, contours, _ = cv2.findContours(WB, mode = cv2.RETR_EXTERNAL, method = cv2.CHAIN_APPROX_NONE) # Find new contours in recently bisected mask
contours = sorted(contours, key = cv2.contourArea, reverse = True) # Sort contours (only two contours remain: plinth/background and droplet)
contour = contours[1] # Second-largest contour is the desired contour
xpointsList = [xpoints[0][0] for xpoints in contour] # Create new array of all x co-ordinates
ypointsList = [ypoints[0][1] for ypoints in contour] # Create new array of all y co-ordinates
zpointsList = np.zeros(len(xpointsList)) # Array of zeros for the z domain; the true contour sits at z = 0 at every point
angles = np.arange(0,180,5, dtype = int) # Angles in 5-degree steps used to sweep the contour into a full drop
i = 0
b = 0
d = 0
qx = np.zeros(len(xpointsList)*len(angles)) # Length of the x points times the number of rotation angles
qy = np.zeros(len(ypointsList)*len(angles)) # Length of the y points times the number of rotation angles
qz = np.zeros(len(zpointsList)*len(angles)) # Length of the z points times the number of rotation angles
ax = plt.axes(projection='3d')
for b in range(0, len(angles)):
    angle = angles[b]
    for i in range(0, len(ypointsList)):
        qx[i+d] = xpointsList[i] - origin[0] # Shift the x values so they sit about the origin
        qz[i+d] = origin[0] + math.cos(math.radians(angle)) * (zpointsList[i]) - math.sin(math.radians(angle)) * (ypointsList[i] - origin[1]) # Create a z value by rotating the y values about the x axis in 5-degree steps
        qy[i+d] = origin[1] + math.sin(math.radians(angle)) * (zpointsList[i]) + math.cos(math.radians(angle)) * (ypointsList[i] - origin[1]) # Adapt the y value to match the newly created z value
    d = d + len(xpointsList)
ax.plot3D(qy, qz, qx, color = 'blue', linewidth = 0.1)
Effectively my question is: how do I use this structure to find the volume enclosed by the co-ordinate arrays? I think I need some kind of scipy.spatial ConvexHull to fill in the areas between my rotations so that the shape has a surface, and work from there?
I don't understand your approach (I'm not that good in Python) so I can't help you fix it, but there was a similar question with a much simpler solution here:
https://stackoverflow.com/a/48114604/1479630
Your situation is even easier because what you basically have is a distorted sphere, and you have a regular grid on its surface. You can iterate over all surface triangles and, for each, compute the volume of the tetrahedron made of this triangle and the center of the object. If you sum up those volumes you'll get the volume of the entire object.
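As a hedged sketch of how this could look in the asker's setup: if the swept drop is approximately convex (which it appears to be), scipy.spatial.ConvexHull of the generated point cloud exposes the enclosed volume directly; for a non-convex surface you would instead sum the signed triangle-plus-centre tetrahedron volumes as described above.
# Minimal sketch, assuming qx, qy, qz are the coordinate arrays built in the
# question's code and that the swept drop is approximately convex.
import numpy as np
from scipy.spatial import ConvexHull

cloud = np.column_stack((qx, qy, qz))
hull = ConvexHull(cloud)
print("approximate volume:", hull.volume)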
I need to sort a selection of 3D coordinates in a winding order as seen in the image below. The bottom-right vertex should be the first element of the array and the bottom-left vertex should be the last element of the array. This needs to work given any direction from which the camera is facing the points and at any orientation of those points. Since "top-left", "bottom-right", etc. are relative, I assume I can use the camera as a reference point? We can also assume all 4 points will be coplanar.
I am using the Blender API (writing a Blender plugin) and have access to the camera's view matrix if that is even necessary. Mathematically speaking, is this even possible, and if so, how? Maybe I am overcomplicating things?
Since the Blender API is in Python I tagged this as Python, but I am fine with pseudo-code or no code at all. I'm mainly concerned with how to approach this mathematically as I have no idea where to start.
Since you assume the four points are coplanar, all you need to do is find the centroid, calculate the vector from the centroid to each point, and sort the points by the angle of the vector.
import numpy as np
def sort_points(pts):
    centroid = np.sum(pts, axis=0) / pts.shape[0]
    vector_from_centroid = pts - centroid
    vector_angle = np.arctan2(vector_from_centroid[:, 1], vector_from_centroid[:, 0])
    sort_order = np.argsort(vector_angle)  # Find the indices that give a sorted vector_angle array
    # Apply sort_order to original pts array.
    # Also returning centroid and angles so I can plot it for illustration.
    return (pts[sort_order, :], centroid, vector_angle[sort_order])
This function calculates the angle assuming that the points are two-dimensional, but if you have coplanar points then it should be easy enough to find the coordinates in the common plane and eliminate the third coordinate.
Let's write a quick plot function to plot our points:
from matplotlib import pyplot as plt
def plot_points(pts, centroid=None, angles=None, fignum=None):
    fig = plt.figure(fignum)
    plt.plot(pts[:, 0], pts[:, 1], 'or')
    if centroid is not None:
        plt.plot(centroid[0], centroid[1], 'ok')
    for i in range(pts.shape[0]):
        lstr = f"pt{i}"
        if angles is not None:
            lstr += f" ang: {angles[i]:.3f}"
        plt.text(pts[i, 0], pts[i, 1], lstr)
    return fig
And now let's test this:
With random points:
pts = np.random.random((4, 2))
spts, centroid, angles = sort_points(pts)
plot_points(spts, centroid, angles)
With points in a rectangle:
pts = np.array([[0, 0],   # pt0
                [10, 5],  # pt2
                [10, 0],  # pt1
                [0, 5]])  # pt3
spts, centroid, angles = sort_points(pts)
plot_points(spts, centroid, angles)
It's easy enough to find the normal vector of the plane containing our points: it's simply the (normalized) cross product of the vectors joining two pairs of points:
plane_normal = np.cross(pts[1, :] - pts[0, :], pts[2, :] - pts[0, :])
plane_normal = plane_normal / np.linalg.norm(plane_normal)
Now, to find the projections of all points in this plane, we need to know the "origin" and basis of the new coordinate system in this plane. Let's assume that the first point is the origin, the x axis joins the first point to the second, and since we know the z axis (plane normal) and x axis, we can calculate the y axis.
new_origin = pts[0, :]
new_x = pts[1, :] - pts[0, :]
new_x = new_x / np.linalg.norm(new_x)
new_y = np.cross(plane_normal, new_x)
Now, the projections of the points onto the new plane are given by this answer:
proj_x = np.dot(pts - new_origin, new_x)
proj_y = np.dot(pts - new_origin, new_y)
Now you have two-dimensional points. Run the code above to sort them.
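To tie the pieces together, here is a small hedged wiring example: the four 3D points in pts3d are illustrative, made-up coplanar values, projected into the plane using the snippets above and then passed to sort_points.
# Hypothetical coplanar points, projected and sorted with the code above.
import numpy as np

pts3d = np.array([[0., 0., 0.],
                  [1., 0., 1.],
                  [1., 1., 2.],
                  [0., 1., 1.]])

plane_normal = np.cross(pts3d[1] - pts3d[0], pts3d[2] - pts3d[0])
plane_normal = plane_normal / np.linalg.norm(plane_normal)

new_origin = pts3d[0]
new_x = pts3d[1] - pts3d[0]
new_x = new_x / np.linalg.norm(new_x)
new_y = np.cross(plane_normal, new_x)

proj = np.column_stack((np.dot(pts3d - new_origin, new_x),
                        np.dot(pts3d - new_origin, new_y)))
sorted_pts, centroid, angles = sort_points(proj)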
After many hours, I finally found a solution. @Pranav Hosangadi's solution worked for the 2D side of things. However, I was having trouble projecting the 3D coordinates to 2D coordinates using the second part of his solution. I also tried projecting the coordinates as described in this answer, but it did not work as intended. I then discovered an API function called location_3d_to_region_2d() (see docs) which, as the name implies, gets the 2D screen coordinates in pixels of the given 3D coordinate. I didn't need to "project" anything into 2D in the first place; getting the screen coordinates worked perfectly fine and is much simpler. From that point, I could sort the coordinates using Pranav's function with some slight adjustments to get them in the order illustrated in the screenshot of my first post, and to return them as a list instead of a NumPy array.
import bpy
from bpy_extras.view3d_utils import location_3d_to_region_2d
import numpy
def sort_points(pts):
    """Sort 4 points in a winding order"""
    pts = numpy.array(pts)
    centroid = numpy.sum(pts, axis=0) / pts.shape[0]
    vector_from_centroid = pts - centroid
    vector_angle = numpy.arctan2(
        vector_from_centroid[:, 1], vector_from_centroid[:, 0])
    # Find the indices that give a descending-sorted vector_angle array
    sort_order = numpy.argsort(-vector_angle)
    # Return just the sort order; it is applied to the original vertices below.
    return list(sort_order)
# Get 2D screen coords of selected vertices
region = bpy.context.region
region_3d = bpy.context.space_data.region_3d
corners2d = []
for corner in selected_verts:
    corners2d.append(location_3d_to_region_2d(
        region, region_3d, corner))
# Sort the 2d points in a winding order
sort_order = sort_points(corners2d)
sorted_corners = [selected_verts[i] for i in sort_order]
Thanks, Pranav for your time and patience in helping me solve this problem!
There is a simpler and faster solution for the Blender case:
1.) The following code sorts 4 planar points in 2D (vertices of the plane object in Blender) very efficiently:
def sort_clockwise(pts):
    rect = np.zeros((4, 2), dtype="float32")
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]
    rect[2] = pts[np.argmax(s)]
    diff = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(diff)]
    rect[3] = pts[np.argmax(diff)]
    return rect
2.) Blender keeps vertex-related data, such as translation, rotation and scale, in the world matrix. If you query vertices.co(ordinates) only, you just get the original coordinates, without translation, rotation and scaling, but that does not affect the order of vertices. That simplifies the problem, because what you get is effectively 2D mesh data (with all z's = 0). If you sort that 2D data (excluding z's) you obtain the sort indices for the 3D sorted data, so you can modify the code above to return the indices from that 2D array. For the plane object of Blender, for some reason the order is always [0,1,3,2], not [0,1,2,3]. The following modified code gives the sorted indices for the vertex data in 2D.
def sorted_ix_clockwise(pts):
    #rect = np.zeros((4, 2), dtype="float32")
    ix = np.array([0, 0, 0, 0])
    s = pts.sum(axis=1)
    #rect[0] = pts[np.argmin(s)]
    #rect[2] = pts[np.argmax(s)]
    ix[0] = np.argmin(s)
    ix[2] = np.argmax(s)
    dif = np.diff(pts, axis=1)
    #rect[1] = pts[np.argmin(dif)]
    #rect[3] = pts[np.argmax(dif)]
    ix[1] = np.argmin(dif)
    ix[3] = np.argmax(dif)
    return ix
You can use these indices to get the actual 3D sorted data, which you can obtain by multiplying the vertex coordinates by the world matrix to include any translation, rotation and scaling.
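A hedged usage sketch (assuming the Blender 2.8+ API, where matrices multiply vectors with the @ operator): take the sort order from the raw 2D vertex coordinates, then apply it to the world-space coordinates obtained through the world matrix.
import numpy as np
import bpy

obj = bpy.context.object                                   # the plane object
verts = obj.data.vertices
local_2d = np.array([(v.co.x, v.co.y) for v in verts])    # original coords, z ignored
order = sorted_ix_clockwise(local_2d)
world = [obj.matrix_world @ v.co for v in verts]           # include translation/rotation/scale
sorted_world = [world[i] for i in order]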
I have the X and Y coordinates of a 2D cloud of points that I want to map onto a 2D uniform grid with a resolution of imageResolution, initially all zeros. I want all pixels in the grid which overlay the 2D cloud of points to contain ones, to produce a binary image.
Please note, there are a very large number of points both in my 2D cloud of points and in the uniform grid, and so loops are not an effective solution here.
I have looked at convex hulls but my points are not necessarily in a convex set.
I have tried the following code, but it's not giving me the correct binary map, since it's only assigning 1s to the grid points nearest to the points in the point cloud (see image below):
X = points[:,0] #1D array of X coordinates
Y = points[:,1] #1D array of Y coordinates
imageResolution = 256
xVec = np.linspace(0,800,imageResolution)
yVec = xVec
def find_index(x, y):
    xi = np.searchsorted(xVec, x)
    yi = np.searchsorted(yVec, y)
    return xi, yi
xIndex, yIndex = find_index(X,Y)
binaryMap = np.zeros((imageResolution,imageResolution))
binaryMap[xIndex,yIndex] = 1
fig = plt.figure(1)
plt.imshow(binaryMap, cmap='jet')
plt.colorbar()
Please see this image which shows my 2D cloud of points, the desired binary map I want, and the current binary map I am getting from the code above. Please note the red pixels are difficult to see in the last image.
How do I create a binary mask on a square grid from a 2D cloud of points in Python?
Thank you
DISCLAIMER :: Untested suggestion
If I've understood correctly, rather than mark an individual pixel with 1, you should be marking a neighborhood of pixels.
You could try inserting the following lines just before binaryMap[xIndex,yIndex] = 1:
DELTA_UPPER = 2  # Param. Needs fine-tuning
delta = np.arange(DELTA_UPPER).reshape(-1,1)
xIndex = xIndex + delta
xIndex[xIndex >= imageResolution] = imageResolution-1
yIndex = yIndex + delta
yIndex[yIndex >= imageResolution] = imageResolution-1
x_many, y_many = np.broadcast_arrays(xIndex[:,None], yIndex)
xIndex = x_many.reshape(-1)
yIndex = y_many.reshape(-1)
Note:
DELTA_UPPER is a parameter that you will have to fine-tune by experimenting. (Maybe start with DELTA_UPPER=3)
UNTESTED CODE
Based on further clarifications, posting this second answer, to better index the binaryMap, given that points contains floats.
imageResolution = 256
MAX_X = # Fill in here the max x value ever possible in `points`
MIN_X = # Fill in here the min x value ever possible in `points`
MAX_Y = # Fill in here the max y value ever possible in `points`
MIN_Y = # Fill in here the min y value ever possible in `points`
SCALE_FAC = imageResolution / max(MAX_X-MIN_X, MAX_Y-MIN_Y)
X = np.around(SCALE_FAC * (points[:,0] - MIN_X)).astype(np.int64)
Y = np.around(SCALE_FAC * (points[:,1] - MIN_Y)).astype(np.int64)
X[X >= imageResolution] = imageResolution-1
Y[Y >= imageResolution] = imageResolution-1
binaryMap[X, Y] = 1
(There's no need for find_index())
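A hedged alternative sketch, not part of the answers above: np.histogram2d bins the whole cloud onto the grid in a single vectorized call, and any bin that receives at least one point becomes 1. The 0-800 extent is taken from the question's xVec.
import numpy as np

imageResolution = 256
counts, _, _ = np.histogram2d(Y, X, bins=imageResolution,
                              range=[[0, 800], [0, 800]])
binaryMap = (counts > 0).astype(np.uint8)  # rows follow Y, columns follow X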
I have a square 2D array data that I would like to add to a larger 2D array frame at some given set of non-integer coordinates coords. The idea is that data will be interpolated onto frame with its center at the new coordinates.
Some toy data:
# A gaussian to add to the frame
x, y = np.meshgrid(np.linspace(-1,1,10), np.linspace(-1,1,10))
data = 50*np.exp(-np.sqrt(x**2+y**2)**2)
# The frame to add the gaussian to
frame = np.random.normal(size=(100,50))
# The desired (x,y) location of the gaussian center on the new frame
coords = 23.4, 22.6
Here's the idea. I want to add this:
to this:
to get this:
If the coordinates were integers (indexes), of course I could simply add them like this:
frame[23:33,22:32] += data
But I want to be able to specify non-integer coordinates so that data is regridded and added to frame.
I've looked into PIL.Image methods but my use case is just for 2D data, not images. Is there a way to do this with just scipy? Can this be done with interp2d or a similar function? Any guidance would be greatly appreciated!
Scipy's shift function from scipy.ndimage.interpolation is what you are looking for, as long as the grid spacings of data and frame match. If not, look to the other answer. The shift function can take floating point numbers as input and will do a spline interpolation. First, I put the data into an array as large as frame, then shift it, and then add it. Make sure to reverse the coordinate list, as x is the rightmost dimension in numpy arrays. One of the nice features of shift is that it sets to zero those values that go out of bounds.
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage.interpolation import shift
# A gaussian to add to the frame.
x, y = np.meshgrid(np.linspace(-1,1,10), np.linspace(-1,1,10))
data = 50*np.exp(-np.sqrt(x**2+y**2)**2)
# The frame to add the gaussian to
frame = np.random.normal(size=(100,50))
x_frame = np.arange(50)
y_frame = np.arange(100)
# The desired (x,y) location of the gaussian center on the new frame.
coords = np.array([23.4, 22.6])
# First, create an array as large as the frame.
data_large = np.zeros(frame.shape)
data_large[:data.shape[0], :data.shape[1]] = data[:,:]
# Subtract half the distance as the bottom left is at 0,0 instead of the center.
# The shift of 4.5 is because data is 10 points wide.
# Reverse the coords array as x is the last coordinate.
coords_shift = -4.5
data_large = shift(data_large, coords[::-1] + coords_shift)
frame += data_large
# Plot the result and add lines to indicate the coordinates
plt.figure()
plt.pcolormesh(x_frame, y_frame, frame, cmap=plt.cm.jet)
plt.axhline(coords[1], color='w')
plt.axvline(coords[0], color='w')
plt.colorbar()
plt.gca().invert_yaxis()
plt.show()
The script gives you the following figure, which has the desired coordinates indicated with white dotted lines.
One possible solution is to use scipy.interpolate.RectBivariateSpline. In the code below, x_0 and y_0 are the coordinates of a feature from data (i.e., the position of the center of the Gaussian in your example) that need to be mapped to the coordinates given by coords. There are a couple of advantages to this approach:
If you need to "place" the same object into multiple locations in the output frame, the spline needs to be computed only once (but evaluated multiple times).
In case you actually need to compute integrated flux of the model over a pixel, you can use the integral method of scipy.interpolate.RectBivariateSpline.
Resample using spline interpolation:
from scipy.interpolate import RectBivariateSpline
x = np.arange(data.shape[1], dtype=float)
y = np.arange(data.shape[0], dtype=float)
kx = 3; ky = 3  # spline degree
spline = RectBivariateSpline(
    x, y, data.T, kx=kx, ky=ky, s=0
)
# Define coordinates of a feature in the data array.
# This can be the center of the Gaussian:
x_0 = (data.shape[1] - 1.0) / 2.0
y_0 = (data.shape[0] - 1.0) / 2.0
# create output grid, shifted as necessary:
yg, xg = np.indices(frame.shape, dtype=np.float64)
xg += x_0 - coords[0] # see below how to account for pixel scale change
yg += y_0 - coords[1] # see below how to account for pixel scale change
# resample and fill extrapolated points with 0:
resampled_data = spline.ev(xg, yg)
extrapol = (((xg < -0.5) | (xg >= data.shape[1] - 0.5)) |
((yg < -0.5) | (yg >= data.shape[0] - 0.5)))
resampled_data[extrapol] = 0
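As a hedged aside on the integral method mentioned earlier: once the spline object above exists, the integrated flux over a single output pixel can be read off directly; the pixel bounds below are illustrative values, not taken from the answer.
# Integral of the spline over one pixel (x in [4.5, 5.5], y in [4.5, 5.5]);
# these bounds are illustrative, chosen near the centre of the 10x10 data array.
pixel_flux = spline.integral(4.5, 5.5, 4.5, 5.5)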
Now plot the frame and resampled data:
plt.figure(figsize=(14, 14))
plt.imshow(frame + resampled_data, cmap=plt.cm.jet,
           origin='upper', interpolation='none', aspect='equal')
plt.show()
If you also want to allow for scale changes, then replace code for computing xg and yg above with:
coords = 20, 80 # change coords to easily identifiable (in plot) values
zoom_x = 2 # example scale change along X axis
zoom_y = 3 # example scale change along Y axis
yg, xg = np.indices(frame.shape, dtype=np.float64)
xg = (xg - coords[0]) / zoom_x + x_0
yg = (yg - coords[1]) / zoom_y + y_0
Most likely this is what you actually want based on your example. Specifically, the coordinates of pixels in data are "spaced" by 0.222(2) distance units. Therefore it actually seems that for your particular example (whether accidental or intentional), you have a zoom factor of 0.222(2). In that case your data image would shrink to almost 2 pixels in the output frame.
Comparison to @Chiel's answer
In the image below, I compare the results from my method (left), @Chiel's method (center), and the difference (right panel):
Fundamentally, the two methods are quite similar and possibly even use the same algorithm (I did not look at the code for shift, but based on the description it also uses splines). From the comparison image it is visible that the biggest differences are at the edges and, for reasons unknown to me, shift seems to truncate the shifted image slightly too soon.
I think the biggest difference is that my method allows for pixel scale changes, and it also allows re-use of the same interpolator to place the original image at different locations in the output frame. @Chiel's method is somewhat simpler, but what I did not like about it is that it requires the creation of a larger array (data_large) into which the original image is placed in the corner.
The other answers have gone into detail, but here's my lazy solution:
xc,yc = 23.4, 22.6
x, y = np.meshgrid(np.linspace(-1,1,10)-xc%1, np.linspace(-1,1,10)-yc%1)
data = 50*np.exp(-np.sqrt(x**2+y**2)**2)
frame = np.random.normal(size=(100,50))
frame[23:33,22:32] += data
And it's just the way you wanted it. As you mentioned, the coordinates of both are the same, so the origin of data is somewhere between the indices. Now simply shift it by the amount you want it to be off a grid point (the remainder modulo one) in the second line, and you're good to go (you might need to flip the sign, but I think this is correct).
Consider a 3D numpy array D of dimension, say, (30 x 40 x 50). For each voxel D[x,y,z] I want to store a vector that contains neighboring voxels within a certain radius (including the D[x,y,z] itself).
(As an example here is a picture of such a sphere of radius 2: https://puu.sh/wwIYW/e3bd63ceae.png)
Is there a simple and fast way to code this?
I have written a function for it, but it is painfully slow and IDLE eventually crashes because the data structure I store the vectors in becomes too large.
Current code:
def searchlight(M_in):
    radius = 4
    [m, n, k] = M_in.shape
    M_out = np.zeros([m, n, k], dtype=object)
    count = 0
    for i in range(m):
        for j in range(n):
            for z in range(k):
                i_interval = list(range((i-4), (i+5)))
                j_interval = list(range((j-4), (j+5)))
                z_interval = list(range((z-4), (z+5)))
                coordinates = list(itertools.product(i_interval, j_interval, z_interval))
                coordinates = [pair for pair in coordinates if ((abs(pair[0]-i)+abs(pair[1]-j)+abs(pair[2]-z))<=radius)]
                coordinates = [pair for pair in coordinates if ((pair[0]>=0) and (pair[1]>=0) and pair[2]>=0) and (pair[0]<m) and (pair[1]<n) and (pair[2]<k)]
                out = []
                for pair in coordinates:
                    out.append(M_in[pair[0], pair[1], pair[2]])
                M_out[i, j, z] = out
                count = count + 1
    return M_out
Here is a way to do that. For efficiency you need to use ndarrays. This only takes complete voxels into account; edges must be managed "by hand".
from pylab import *
import numpy as np

a = rand(100,100,100) # the data
r = 4
ra = range(-r, r+1)
sphere = array([[x,y,z] for x in ra for y in ra for z in ra if np.abs((x,y,z)).sum()<=r])
# the unit "sphere"
indcenters = array(meshgrid(*(range(r, n-r) for n in a.shape), indexing='ij'))
# indexes of the centers of the voxels. edges are cut.
all_inds = (indcenters[newaxis].T + sphere.T).T
# all the indexes.
voxels = np.stack([a[tuple(inds)] for inds in all_inds], -1)
# the voxels.
# voxels.shape is (92, 92, 92, 129)
All the costly operations are vectorized. List comprehensions are preferred for clarity in the outer loop.
You can now perform vectorized operations on voxels, for example finding the brightest voxel:
light=voxels.sum(-1)
print(np.unravel_index(light.argmax(),light.shape))
#(33,72,64)
All of this is of course memory-intensive; you must split your space for big data or many voxels.
Since you say the data structure is too large, you'll likely have to compute the vector on the fly for a given voxel. You can do this pretty quickly though:
import itertools
import numpy as np

class SearchLight(object):
    def __init__(self, M_in, radius):
        self.M_in = M_in
        m, n, k = self.M_in.shape
        # compute the sphere coordinates centered at (0,0,0)
        # just like in your sample code
        i_interval = list(range(-radius, radius+1))
        j_interval = list(range(-radius, radius+1))
        z_interval = list(range(-radius, radius+1))
        coordinates = list(itertools.product(i_interval, j_interval, z_interval))
        coordinates = [pair for pair in coordinates if ((abs(pair[0])+abs(pair[1])+abs(pair[2]))<=radius)]
        # store those indices as a template
        self.sphere_indices = np.array(coordinates)

    def get_vector(self, i, j, k):
        # offset sphere coordinates by the requested centre.
        coordinates = self.sphere_indices + [i, j, k]
        # filter out of bounds coordinates
        coordinates = coordinates[(coordinates >= 0).all(1)]
        coordinates = coordinates[(coordinates < self.M_in.shape).all(1)]
        # use those coordinates to index the initial array.
        return self.M_in[coordinates[:,0], coordinates[:,1], coordinates[:,2]]
To use the object on a given array you can simply do:
sl = SearchLight(M_in, 4)
# get vector of values for voxel i,j,k
vector = sl.get_vector(i,j,k)
This should give you the same vector you would get from
M_out[i,j,k]
in your sample code, without storing all the results at once in memory.
This can also probably be further optimized, particularly in terms of the coordinate filtering, but it may not be necessary. Hope that helps.
This question is related to Transformation between two sets of points. However, this one is better specified, and some assumptions are added.
I have an element image and a model.
I've detected contours on both:
contoursModel0, hierarchyModel = cv2.findContours(model.copy(), cv2.RETR_LIST,
                                                  cv2.CHAIN_APPROX_SIMPLE)
contoursModel = [cv2.approxPolyDP(cnt, 2, True) for cnt in contoursModel0]
contours0, hierarchy = cv2.findContours(canny.copy(), cv2.RETR_LIST,
                                        cv2.CHAIN_APPROX_SIMPLE)
contours = [cv2.approxPolyDP(cnt, 2, True) for cnt in contours0]
Then I matched each contour against every model contour:
modelMassCenters = []
imageMassCenters = []
for cnt in contours:
    for cntModel in contoursModel:
        result = cv2.matchShapes(cnt, cntModel, cv2.cv.CV_CONTOURS_MATCH_I1, 0)
        if result != 0:
            if result < 0.05:
                # Here are matched contours
                momentsModel = cv2.moments(cntModel)
                momentsImage = cv2.moments(cnt)
                massCenterModel = (momentsModel['m10']/momentsModel['m00'],
                                   momentsModel['m01']/momentsModel['m00'])
                massCenterImage = (momentsImage['m10']/momentsImage['m00'],
                                   momentsImage['m01']/momentsImage['m00'])
                modelMassCenters.append(massCenterModel)
                imageMassCenters.append(massCenterImage)
Matched contours are something like features.
Now I want to detect the transformation between these two sets of points.
Assumptions: the element is a rigid body; only rotation, displacement and a scale change are allowed.
Some features may be misdetected; how do I eliminate them? I once used cv2.findHomography, which takes two vectors and calculates the homography between them even if there are some mismatches.
cv2.getAffineTransform takes only three points (it can't cope with mismatches), and here I have multiple features.
The answer to my previous question says how to calculate this transformation, but it does not handle mismatches. I also think it should be possible to return some quality level from the algorithm (by checking how many points are mismatched after computing the transformation from the rest).
And the last question: should I take all contour points to compute the transformation, or treat only the mass centers of these shapes as features?
To show it I've added a simple image. Features in green are good matches; those in red are bad matches. Here the match should be computed from the 3 green features, and the red mismatches should only affect the match quality.
I'm adding fragments of the solution I've figured out so far (but I think it could be done much better):
rotations = []
scales = []
for i in range(0, len(modelMassCenters) - 1):
    for j in range(i + 1, len(modelMassCenters)):
        x1, y1 = modelMassCenters[i]
        x2, y2 = modelMassCenters[j]
        modelVec = (x2 - x1, y2 - y1)
        x1, y1 = imageMassCenters[i]
        x2, y2 = imageMassCenters[j]
        imageVec = (x2 - x1, y2 - y1)
        rotation = angle(modelVec, imageVec)
        rotations.append((i, j, rotation))
        scale = length(modelVec) / length(imageVec)
        scales.append((i, j, scale))
After computing the scale and rotation given by each pair of corresponding lines, I'm going to find the median value and then average the rotations that do not differ from the median by more than some delta. The same goes for scale. Then the points that produced those values will be used to compute the displacement.
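A hedged sketch of that filtering step, assuming the rotations and scales lists built in the snippet above; delta_rot and delta_scale are illustrative tolerances, not values from the question.
import numpy as np

delta_rot = 0.05    # illustrative tolerance on rotation (radians)
delta_scale = 0.05  # illustrative tolerance on scale

rot_values = np.array([r for _, _, r in rotations])
scale_values = np.array([s for _, _, s in scales])

good = ((np.abs(rot_values - np.median(rot_values)) < delta_rot) &
        (np.abs(scale_values - np.median(scale_values)) < delta_scale))

rotation_estimate = rot_values[good].mean()
scale_estimate = scale_values[good].mean()

# index pairs (i, j) that survived both filters; their mass centres can then
# be used to estimate the displacement
good_pairs = [rotations[k][:2] for k in np.flatnonzero(good)]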
Your second step (match contours to each other by doing a pairwise shape comparison) sounds very vulnerable to errors if features have a similar shape, e.g. if you have several similar-sized circular contours. Yet if you have a rigid body with 5 circular features in one quadrant only, you could get a very robust estimate of the affine transform by considering the body and its features as a whole. So don't discard information like a feature's range and direction from the center of the whole body when matching features: those are at least as important in correlating features as the size and shape of the individual contour.
I'd try something like (untested pseudocode):
"""
Convert from rectangular (x,y) to polar (r,w)
r = sqrt(x^2 + y^2)
w = arctan(y/x) = [-\pi,\pi]
"""
def polar(x, y): # w in radians
from math import hypot, atan2, pi
return hypot(x, y), atan2(y, x)
model_features = []
model = params(model_body_contour)  # return tuple (center_x, center_y, area)
for contour in model_feature_contours:
    f = params(contour)
    range, angle = polar(f[0]-model[0], f[1]-model[1])
    model_features.append((angle, range, f[2]))

image_features = []
image = params(image_body_contour)
for contour in image_feature_contours:
    f = params(contour)
    range, angle = polar(f[0]-image[0], f[1]-image[1])
    image_features.append((angle, range, f[2]))
# sort image_features and model_features by angle, range
#
# correlate image_features against model_features across angle offsets
# rotation = angle offset of max correlation
# scale = average(model areas and ranges) / average(image areas and ranges)
If you have very challenging images, such as a ring of 6 equally-spaced similar-sized features, 5 of which have the same shape and one is different (e.g. 5 circles and a star), you could add extra parameters such as eccentricity and sharpness to the list of feature parameters, and include them in the correlation when searching for the rotation angle.
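For what it's worth, here is a hedged sketch of the "correlate across angle offsets" step from the pseudocode above. The cost function is a deliberately simple nearest-feature distance over (angle, range, area); a real implementation would weight and normalise those terms.
import numpy as np

def wrap_angle(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def match_cost(image_features, model_features, offset):
    """Total cost of matching each rotated image feature to its nearest model feature."""
    cost = 0.0
    for ang, rng, area in image_features:
        cost += min(abs(wrap_angle(ang + offset - m_ang)) +
                    abs(rng - m_rng) + abs(area - m_area)
                    for m_ang, m_rng, m_area in model_features)
    return cost

offsets = np.linspace(-np.pi, np.pi, 720, endpoint=False)
rotation = min(offsets, key=lambda o: match_cost(image_features, model_features, o))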