Python implementation of 3D rigid body translation and rotation

I've been trying to work out how to solve the following problem using python:
1. We have points a, b, c, d which form a rigid body
2. Some unknown 3D translation and rotation is applied to the rigid body
3. We now know the coordinates for a, b, c
4. We want to calculate coordinates for d
What I know so far:
Trying to do this with "straightforward" Euler angle calculations seems like a bad idea due to gimbal lock etc.
Step 4 will therefore involve a transformation matrix, and once you know the rotation and translation matrices it looks like this step is easy using one of these:
http://www.lfd.uci.edu/~gohlke/code/transformations.py.html
https://pypi.python.org/pypi/euclid/0.01
What I can't work out is how I can calculate the rotation and translation matrices given the "new" coordinates of a, b, c.
I can see that in the general case (non-rigid body) the rotation part of this is Wahba's problem, but I think that for rigid bodies there should be some faster way of calculating it directly by working out a set of orthogonal unit vectors using the points.

For a set of corresponding points that you're trying to match (with possible perturbation) I've used SVD (singular value decomposition), which appears to exist in numpy.
An example of this technique (in Python even) can be found here, but I haven't evaluated it for correctness.
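For reference, here is a minimal sketch of that SVD approach (the Kabsch algorithm) in numpy; the function name and reflection guard are my own, and it assumes P and Q are corresponding (N, 3) point sets:
import numpy as np

def rigid_transform(P, Q):
    # Find R, t such that R @ P[i] + t is as close as possible to Q[i].
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of the centred points
    U, s, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t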
What you're going for is a "basis transform" or "change of basis" which will be represented as a transformation matrix. Assuming your 3 known points are not collinear, you can create your initial basis by:
Computing the vectors: x=(b-a) and y=(c-a)
Normalize x (x = x / magnitude(x))
Project y onto x (proj_y = x DOT y * x)
Subtract the projection from y (y = y - proj_y)
Normalize y
Compute z = x CROSS y
That gives you an initial x,y,z coordinate basis A. Do the same for your new points, and you get a second basis B. Now you want to find the transform T which will take a point in A and convert it to B (change of basis). That part is easy. You can invert A to transform the points back to the standard basis, then use B to transform into the second one. Since A is orthonormal, you can just transpose A to get the inverse. So the "new d" is equal to d * inverse(A) * B. (Though depending on your representation, you may need to use B * inverse(A) * d.)
You need to have some familiarity with matrices to get all that. Your representation of vectors and matrices will inform you as to which order to multiply the matrices to get T (T is either inverse(A)*B or B*inverse(A)).
To compute your basis matrix from your vectors x=(x1,x2,x3), y=(y1,y2,y3), z=(z1,z2,z3) you populate it as:
| x1 y1 z1 |
| x2 y2 z2 |
| x3 y3 z3 |
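Putting the whole recipe together, here is a rough numpy sketch (my own, not the answerer's code), using column vectors so the combined map is B * inverse(A). It assumes a, b, c, d are the original points and a2, b2, c2 the moved ones, and it accounts for the translation by working with vectors relative to a and adding a2 back at the end:
import numpy as np

def make_basis(a, b, c):
    # Orthonormal basis (columns x, y, z) built from three non-collinear points.
    x = b - a
    x = x / np.linalg.norm(x)
    y = c - a
    y = y - np.dot(x, y) * x        # remove the component of y along x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)
    return np.column_stack((x, y, z))

A = make_basis(a, b, c)             # basis before the motion
B = make_basis(a2, b2, c2)          # basis after the motion
d_new = a2 + B @ A.T @ (d - a)      # A is orthonormal, so inverse(A) = A.T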

Related

In which scenario would one use a matrix other than the identity matrix for finding eigenvalues?

The scipy.linalg.eigh function can take two matrices as arguments: first the matrix a, of which we will find eigenvalues and eigenvectors, but also an optional matrix b, which defaults to the identity matrix when it is left out.
In what scenario would someone like to use this b matrix?
Some more context: I am trying to use xdawn covariances from the pyRiemann package. This uses the scipy.linalg.eigh function with a covariance matrix a and a baseline covariance matrix b. You can find the implementation here. This yields an error, as the b matrix in my case is not positive definite and thus not usable in the scipy.linalg.eigh function. Removing this matrix and just using the identity matrix however solves this problem and yields relatively nice results... The problem is that I do not really understand what I changed, and maybe I am doing something I should not be doing.
This is the code from the pyRiemann package I am using (modified to avoid using functions defined in other parts of the package):
# X are samples (EEG data), y are labels
# shape of X is (1000, 64, 2459)
# shape of y is (1000,)
import numpy as np
import sklearn.covariance
from scipy.linalg import eigh

Ne, Ns, Nt = X.shape
tmp = X.transpose((1, 2, 0))
b = np.matrix(sklearn.covariance.empirical_covariance(tmp.reshape(Ne, Ns * Nt).T))
for c in self.classes_:
    # Prototyped response for each class
    P = np.mean(X[y == c, :, :], axis=0)
    # Covariance matrix of the prototyped response & signal
    a = np.matrix(sklearn.covariance.empirical_covariance(P.T))
    # Spatial filters
    evals, evecs = eigh(a, b)
    # and I am now using the following, disregarding the b matrix:
    # evals, evecs = eigh(a)
If A and B are both symmetric matrices, that does not necessarily imply that inv(A)*B is a symmetric matrix. So if I had to solve a generalised eigenvalue problem Ax = lambda Bx, I would use eig(A,B) rather than eig(inv(A)*B), so that the symmetry isn't lost.
One practical application is in finding the natural frequencies of a dynamic mechanical system from differential equations of the form M (d²x/dt²) + Kx = 0, where M is a positive definite matrix known as the mass matrix, K is the stiffness matrix, x is the displacement vector and d²x/dt² is the acceleration vector (the second derivative of the displacement vector). To find the natural frequencies, x can be substituted with x0 sin(ωt), where ω is a natural frequency. The equation then reduces to Kx = ω²Mx. Now, one could use eig(inv(M)*K), but that might break the symmetry of the resultant matrix, and so I would use eig(K,M) instead.
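As a toy illustration of why the two-argument form is used (a made-up example, not from the question): for a small mass matrix M and stiffness matrix K, eigh(K, M) solves K v = ω² M v directly, while forming inv(M) @ K gives the same eigenvalues but destroys symmetry:
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF system: M is the (positive definite) mass matrix, K the stiffness matrix.
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])
K = np.array([[ 6.0, -2.0],
              [-2.0,  4.0]])

w2, modes = eigh(K, M)                              # generalised symmetric problem K v = w^2 M v
w2_alt = np.linalg.eigvals(np.linalg.inv(M) @ K)    # same eigenvalues, symmetry lost

print(np.sqrt(w2))                 # natural frequencies
print(np.sort(w2_alt.real))        # agrees up to ordering and rounding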
In the generalised problem Ax = lambda*Bx, the eigenvectors x are not expressed in the same basis as the covariance matrix.
If the matrix B is not positive definite, it means that there are vectors that can be flipped by your B.
I hope it was helpful.

Least square on linear N-way-equal problem

Suppose I want to find the "intersection point" of 2 arbitrary high-dimensional lines. The two lines won't actually intersect, but I still want to find the most intersecting point (i.e. a point that is as close to both lines as possible).
Suppose those lines have direction vectors A, B and initial points C, D.
I can find this point by simply setting up a linear least-squares problem: converting the line-intersection equation
Ax + C = By + D
to least-square form
[A, -B] # [[x, y]] = D - C
where # stands for matrix-vector multiplication, and then I can use e.g. np.linalg.lstsq to solve it.
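For concreteness, a small numpy sketch of this two-line setup (the values are made up):
import numpy as np

A = np.array([1.0, 0.0, 0.0])      # direction of line 1
B = np.array([0.0, 1.0, 0.0])      # direction of line 2
C = np.array([0.0, 0.0, 0.0])      # initial point of line 1
D = np.array([1.0, 1.0, 1.0])      # initial point of line 2

M = np.column_stack((A, -B))                       # [A, -B]
x, y = np.linalg.lstsq(M, D - C, rcond=None)[0]    # line parameters
closest = (A * x + C + B * y + D) / 2              # midpoint of the two closest points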
But how can I find the most intersecting point of 3 or more arbitrary lines? If I follow the same rule, I now have
Ax + D = By + E = Cz + F
The only way I can think of is decomposing this into three equations:
Ax + D = By + E
Ax + D = Cz + F
By + E = Cz + F
and converting them to least-square form
[A, -B, 0] [E - D]
[A, 0, -C] # [[x, y, z]] = [F - D]
[0, B, -C] [F - E]
The problem is that the size of the least-squares problem grows quadratically with the number of lines. I'm wondering: is there a more efficient way to solve an n-way-equal least-squares linear problem?
I was also wondering whether the equation By + E = Cz + F above is necessary, given the other two. But since this problem does not have an exact solution (i.e. the lines don't actually intersect), I believe dropping it would put more "weight" on some variables?
Thank you for your help!
EDIT
I just tested pairing the first term with all other terms in the n-way-equality (and no other pairs) using the following code
import numpy as np

def lineIntersect(k, b):
    """k, b: N-by-D matrices describing N D-dimensional lines: k[i] * x + b[i]"""
    # Convert the problem to least-square form `Ax = B`
    # A is temporarily defined 3-dimensional for convenience
    A = np.zeros((len(k)-1, k.shape[1], len(k)), k.dtype)
    A[:, :, 0] = k[0]
    A[range(len(k)-1), :, range(1, len(k))] = -k[1:]
    # Convert to 2-dimensional matrix by flattening first two dimensions
    A = A.reshape(-1, len(k))
    # B should be 1-dimensional vector
    B = (b[1:] - b[0]).ravel()
    x = np.linalg.lstsq(A, B, None)[0]
    return (x[:, None] * k + b).mean(0)
The result below indicates doing so is not correct because the first term in the n-way-equality is "weighted differently".
The first output is the difference between the regular result and the result with a different input order (line order should not matter), where the first term did not change.
The second output is the same comparison, but where the first term did change.
k = np.random.rand(10, 100)
b = np.random.rand(10, 100)
print(np.linalg.norm(lineIntersect(k, b) - lineIntersect(np.r_[k[:1],k[:0:-1]], np.r_[b[:1],b[:0:-1]])))
print(np.linalg.norm(lineIntersect(k, b) - lineIntersect(k[::-1], b[::-1])))
results in
7.889616961715915e-16
0.10702479853076755
Another criterion for the 'almost intersection point' would be a point x such that the sum of the squares of the distances of x to the lines is as small as possible. Like your criterion, if the lines actually do intersect then the almost intersection point will be the actual intersection point. However I think the sum of distances squared criterion makes it straightforward to compute the point in question:
Suppose we represent a line by a point and a unit vector along the line. So if a line is represented by p,t then the points on the line are of the form
p + l*t for scalar l
The distance-squared of a point x from a line p,t is
(x-p)'*(x-p) - square( t'*(x-p))
If we have N lines p[i],t[i] then the sum of the distances squared from a point x is
Sum { (x-p[i])'*(x-p[i]) - square( t[i]'*(x-p[i])) }
Expanding this out I get the above to be
x'*S*x - 2*x'*V + K
where
S = N*I - Sum{ t[i]*t[i]'}
V = Sum{ p[i] - (t[i]'*p[i])*t[i] }
and K does not depend on x
Unless all the lines are parallel, S will be (strictly) positive definite and hence invertible, and in that case our sum of distances squared is
(x-inv(S)*V)'*S*(x-inv(S)*V) + K - V'*inv(S)*V
Thus the minimising x is
inv(S)*V
So the drill is: normalise your 'direction vectors', form S and V as above, and solve
S*x = V for x
This question might be better suited for the math stackexchange. Also, does anyone have a good way of formatting math here? Sorry that it's hard to read, I did my best with unicode.
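Here is one possible numpy translation of that recipe (a sketch of my own; P holds one base point per line as rows, T the corresponding direction vectors):
import numpy as np

def closest_point(P, T):
    # P: (N, D) base points, T: (N, D) direction vectors (not necessarily unit length).
    T = T / np.linalg.norm(T, axis=1, keepdims=True)                  # normalise the directions
    N, D = P.shape
    S = N * np.eye(D) - T.T @ T                                       # S = N*I - Sum t[i]*t[i]'
    V = (P - np.sum(T * P, axis=1, keepdims=True) * T).sum(axis=0)    # Sum p[i] - (t[i]'*p[i])*t[i]
    return np.linalg.solve(S, V)                                      # minimiser of the summed squared distances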
EDIT: I misinterpreted what @ZisIsNotZis meant by the lines Ax+C, so disregard the next paragraph.
I'm not convinced that your method is stated correctly. Would you mind posting your code and a small example of the output (maybe in 2d with 3 or 4 lines so we can plot it)? When you're trying to find the intersection of two lines shouldn't you do Ax+C = Bx+D? If you do Ax+C=By+D you can pick some x on the first line and some y on the second line and satisfy both equations exactly. Because here x and y should be the same size as A and B which is the dimension of the space rather than scalars.
There are many ways to understand the problem of finding a point that is as close to all lines as possible. I think the most natural one is that the sum of squares of euclidian distance to each line is minimized.
Suppose we have a line in R^n given by a point d and a unit direction c (i.e. the points of the form d + t*c), and another point x. Then the shortest vector from x to the line is (I-cc^T)(x-d), so the square of the distance from x to the line is ║(I-cc^T)(x-d)║^2. We can find the closest point to the line by minimizing this distance. Note that this is a standard least squares problem of the form min_x ║b-Ax║_2.
Now, suppose we have lines given by points d_i and unit directions c_i for i=1,...,m. The squared distance from a point x to the i-th line is ║(I-c_i c_i^T)(x-d_i)║_2^2. We now want to solve the problem min_x \sum_{i=1}^{m} ║(I-c_i c_i^T)(x-d_i)║_2^2.
In matrix form we have:
      ║ ⎡ (I-c_1 c_1^T)(x-d_1) ⎤ ║
      ║ | (I-c_2 c_2^T)(x-d_2) | ║
min_x ║ |          ...          | ║
      ║ ⎣ (I-c_m c_m^T)(x-d_m) ⎦ ║_2
This is again in the form min_x ║b - Ax║_2 so there are good solvers available.
Each block has size n (the dimension of the space) and there are m blocks (the number of lines), so the system is mn by n. In particular, it is linear in the number of lines and quadratic in the dimension of the space.
It also has the advantage that if you add a line you simply add another block to the least squares system. This also offers the possibility of updating solutions iteratively as you add lines.
I'm not sure if there are special solvers for this type of least squares system. Note that each block is the identity minus a rank one matrix, so that might give some additional structure which can be used to speed things up. That said, I think using existing solvers will almost always work better than writing your own, unless you have quite a bit of background in numerical analysis or have a very specialized class of systems to solve.
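A possible numpy sketch of that stacked least-squares system (the names are mine; C holds the unit directions c_i as rows and D the points d_i):
import numpy as np

def closest_point_lstsq(C, D):
    # C: (m, n) unit direction vectors, D: (m, n) points on the lines.
    m, n = C.shape
    blocks = np.eye(n)[None, :, :] - C[:, :, None] * C[:, None, :]   # each block is I - c_i c_i^T
    A = blocks.reshape(m * n, n)                                     # stack the blocks vertically
    b = np.einsum('ijk,ik->ij', blocks, D).ravel()                   # right-hand sides (I - c_i c_i^T) d_i
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x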
Not a solution, some thoughts:
If a line in nD space has the parametric equation (with a unit Dir vector)
L(t) = Base + Dir * t
then squared distance from point P to this line is
W = P - Base
Dist^2 = (W - (W.dot.Dir) * Dir)^2
If it is possible to write Min(Sum(Dist[i]^2)) in a form suitable for the LSQ method (take partial derivatives with respect to every coordinate of the point), the resulting system can be solved for the coordinate vector (x1..xn).
(The situation resembles the reverse of the usual LSQ setup of many points and a single line.)
You say that you have two "high-dimensional" lines. This implies that the matrix indicating the lines has many more columns than rows.
If this is the case and you can efficiently find a low-rank decomposition such that A=LRᵀ, then you can rewrite the solution of the least squares problem min ||Ax-y||₂ as x=(Rᵀ RLᵀ L)⁻¹ Lᵀ y.
If m is the number of lines and n the dimension of the lines, then this reduces the least-squares time complexity from O(mn²+nʷ) to O(nr²+mr²) where r=min(m,n).
The problem then is to find such a decomposition.

Computing 3D-homography with 5 3D-points

I've got a set of 3D-points in a projective space and I want to transform them into a metric 3D space so that I could measure distances in meters.
In order to do so, I need a 3D to 3D homography, which is a 4x4 matrix with 15 degrees of freedom (so I need 5 3D-points to get 15 equations).
I have a set of these 5 3D-points from the projective space and their corresponding 5 3D-points aligned in the metric space (which I expect the 5 projective points to be transformed to).
I can't figure out how to estimate the homography matrix. At first I tried:
A=np.vstack([p1101.T, p1111.T, p0101.T, p0001.T, p0011.T])
b=np.array([[1,1,0,1], [1,1,1,1], [0,1,0,1], [0,0,0,1], [0,0,1,1]])
x, _, _, _ = np.linalg.lstsq(A,b)
H = x.T
where p1101 is a [X,Y,Z,1] point which corresponds to [1,1,0,1] in the 3D metric space, etc.
However, this is not correct since I'm in projective space, so I somehow need to set up an equation system in which each row of H is divided by the last row, or something like that.
I thought maybe there is an implemented method that will do it for me, for example in opencv, but didn't find. Any help would be appreciated.
I finally solved this question with a friend, and would like to share the solution.
Since we are in projective space, one needs to solve an equation set in which the homogeneous coordinate of the outcome is the denominator of each other coordinate. I.e., if you want to find a 4x4 homography matrix H, and you have matching 3D points x and b (b is in the metric space), you'll need to optimize the search of H parameters such that H applied to x gives a vector v with 4 coordinates, such that the first three coordinates of v divided by the last coordinate are b. Written in numpy:
v = H.dot(x)
v = v[:3]/v[3]
v == b # True
Mathematically, the optimization is based on constraints of the following form (written for the first coordinate only, for simplicity; the other coordinates are handled the same way):
(h1 · x) / (h4 · x) = b1,   i.e.   h1 · x - b1 * (h4 · x) = 0
where h1 and h4 are the first and fourth rows of H and b1 is the first coordinate of the matching metric point; fixing the last entry of H (H[3][3] = 1) makes each such constraint linear in the remaining 15 unknowns.
So in python one needs to arrange the equations for the solver in the explained manner, with 5 matching points. The way that was proposed in the question is good (it just didn't solve the right problem), and in these terms it becomes an Ax=b least-squares optimization in which A is a 15x15 matrix and b is a 15-dimensional vector.
Each matching point generates 3 equations, so 5 matching points generate the 15 equations built into the matrix A, thus solving for the 15 DOF of the 3D homography H.
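A rough numpy sketch of how those 15 equations could be assembled (my own code, with H[3][3] fixed to 1; X holds the five source points as [X, Y, Z, 1] rows and B the five metric targets as [b1, b2, b3, 1] rows):
import numpy as np

def fit_homography_3d(X, B):
    A = np.zeros((15, 15))
    rhs = np.zeros(15)
    for j in range(5):
        x, b = X[j], B[j]
        for k in range(3):                        # one equation per non-homogeneous coordinate
            row = 3 * j + k
            A[row, 4 * k: 4 * k + 4] = x          # coefficients of H[k, 0..3]
            A[row, 12:15] = -b[k] * x[:3]         # coefficients of H[3, 0..2]
            rhs[row] = b[k] * x[3]                # term coming from the fixed H[3][3] = 1
    h = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.append(h, 1.0).reshape(4, 4)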

Calculate astronomical distance of two set of sky position with theano

I want to compute the angular distance between all points in two different sets, something like cdist of scipy but with a different distance algorithm and using theano. The angular distance between two sources with right ascension (ra) in (0,2pi) and with declination (dec) in (-pi/2, pi/2) is:
theta = arccos(sin(dec1)*sin(dec2)+cos(dec1)*cos(dec2)*cos(ra1-ra2))
suppose that X is a matrix consisting of N sources with their positions (ra, dec):
#RA DEC
54.29 -35.19
54.62 -35.45
...
and W is another set of M different sources. How can I determine the angular separation of all X sources from all W sources?
Inspired by the Euclidean distance:
edist = T.sqrt((X ** 2).sum(1).reshape((X.shape[0], 1)) + (W ** 2).sum(1).reshape((1, W.shape[0])) - 2 * X.dot(W.T))
I have tried with:
d = T.arccos(
    T.sin(X.reshape((X.shape[0], 1, -1))[..., 1]) * T.sin(W.reshape((1, W.shape[0], -1))[..., 1]) +
    T.cos(X.reshape((X.shape[0], 1, -1))[..., 1]) * T.cos(W.reshape((1, W.shape[0], -1))[..., 1]) *
    T.cos(X.reshape((X.shape[0], 1, -1))[..., 0] - W.reshape((1, W.shape[0], -1))[..., 0]))
The resulting d matrix has shape (N, M) instead of (N, M, 2), since I expected to sum over the third axis; furthermore, the numerical result is wrong (I have compared it with TOPCAT, which is an astronomy-oriented software package). Any suggestions?
You need to debug your expression by parts - calculate sin(dec1) first and make sure you get the right shape and the right numerical result. Then the multiplication with sin(dec2) and so on until you get the full arccos expression.
One idea of something that is possibly wrong with your code is the use of * for multiplication - if you want to multiply matrices you should use T.multiply() instead of *.
I have resolved the issue: I simply had to convert right ascension and declination from degrees to radians. Now the method works.
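For comparison, a plain numpy sketch of the same broadcasting pattern with the degree-to-radian conversion included (the function name is mine):
import numpy as np

def angular_separation(X, W):
    # X: (N, 2) and W: (M, 2) arrays of (ra, dec) in degrees; returns (N, M) separations in radians.
    ra1, dec1 = np.radians(X[:, 0])[:, None], np.radians(X[:, 1])[:, None]
    ra2, dec2 = np.radians(W[:, 0])[None, :], np.radians(W[:, 1])[None, :]
    cos_theta = np.sin(dec1) * np.sin(dec2) + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))    # clip guards against rounding just outside [-1, 1]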

Rotated Paraboloid Surface Fitting

I have a set of experimentally determined (x, y, z) points which correspond to a parabola. Unfortunately, the data is not aligned along any particular axis, and hence corresponds to a rotated parabola.
I have the following general surface:
Ax^2 + By^2 + Cz^2 + Dxy + Gyz + Hzx + Ix + Jy + Kz + L = 0
I need to produce a model that can represent the parabola accurately using (I'm assuming) least squares fitting. I cannot seem to figure out how this works. I have thought of rotating the parabola until its central axis lines up with the z-axis, but I do not know what this axis is. Matlab's cftool only seems to fit equations of the form z = f(x, y) and I am not aware of anything in python that can solve this.
I also tried solving for the parameters numerically. When I tried making this into a matrix equation and solving by least squares, the matrix turned out to be invertible and hence my parameters were just all zero. I also am stuck on this and any help would be appreciated. I don't really mind the method as I am familiar with matlab, python and linear algebra if need be.
Thanks
Don't use any toolboxes, GUIs or special functions for this problem. Your problem is very common and the equation you provided may be solved in a very straightforward manner. The solution to the linear least squares problem can be outlined as follows:
The basis of the vector space is x^2, y^2, z^2, xy, yz, zx, x, y, z, 1. Therefore your vector has 10 dimensions.
Your problem may be expressed as Ap=b, where p = [A B C D G H I J K L]^T is the vector containing your parameters. The right-hand side b should be all zeros, but will contain some residual due to model errors, uncertainty in the data or numerical reasons. This residual has to be minimized.
The matrix A has a dimension of N by 10, where N denotes the number of known points on surface of the parabola.
A = [x(1)^2 y(1)^2 ... y(1) z(1) 1
...
x(N)^2 y(N)^2 ... y(N) z(N) 1]
Solve the overdetermined system of linear equations by computing p = A\b.
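Note that with b exactly zero, plain least squares returns the trivial solution p = 0 (which is what the question ran into). One common workaround, sketched below in numpy under the assumption that pts is an (N, 3) array of measured points, is to constrain ||p|| = 1 and take the right singular vector belonging to the smallest singular value:
import numpy as np

# pts is assumed to be an (N, 3) array of measured (x, y, z) points, N >= 10.
x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
A = np.column_stack([x**2, y**2, z**2, x*y, y*z, z*x, x, y, z, np.ones_like(x)])
_, _, Vt = np.linalg.svd(A, full_matrices=False)
p = Vt[-1]          # the 10 coefficients, defined up to scale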
Do you have enough data points to fit all 10 parameters? You will need at least 10.
I also suspect that 10 parameters are too many to describe a general paraboloid, meaning that some of the parameters are dependent. My feeling is that a translated and rotated paraboloid needs 7 parameters (although I'm not really sure).
