N-dimensional high order polynomial interpolation - python

I am looking for some clues about a problem I am facing regarding interpolation in a 4D space.
I have a dataset of 340 points in a 3-dimensional space (three variables A, B, C, each defined by 340 elements). Each point is associated with a value of an output variable, so in general I have
f(A,B,C) = D
I need to interpolate the dataset in order to predict the value of D at any point in the design space. What I did was write a small script to obtain the coefficients m of the polynomial through the numpy method linalg.lstsq:
import itertools
import numpy as np

def polyfit4d(x, y, z, metric, order):
    # one column per monomial x**i * y**j * z**k with 0 <= i, j, k <= order
    ncols = (order + 1)**3
    G = np.zeros((x.size, ncols))
    ij = itertools.product(range(order + 1), repeat=3)
    for w, (i, j, k) in enumerate(ij):
        G[:, w] = x**i * y**j * z**k
    m, residuals, rank, s = np.linalg.lstsq(G, metric, rcond=None)
    return m, residuals
Then I used an evaluation function to obtain the values of the polynomial at all the points of the design space.
import math

def polyval4d(x, y, z, m):
    # recover the polynomial order from the number of coefficients
    order = int(math.ceil(len(m)**(1/3.0) - 1))
    ij = itertools.product(range(order + 1), repeat=3)
    f = np.zeros_like(x)
    for a, (i, j, k) in zip(m, ij):
        f += a * x**i * y**j * z**k
    return f
Since my design space is 3-dimensional, I pass polyval4d three 3D arrays containing all the points X, Y, Z of the design space.
The result f is a 3D array of outputs D: each entry is the value of D obtained by evaluating the polynomial found with polyfit4d at the corresponding point of the design space.
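For reference, this is roughly how the two functions are wired together (the grid construction here is illustrative, assuming the design-space bounds come from the sample arrays A, B, C and outputs D):
m, residuals = polyfit4d(A, B, C, D, order=3)
X, Y, Z = np.meshgrid(np.linspace(A.min(), A.max(), 50),
                      np.linspace(B.min(), B.max(), 50),
                      np.linspace(C.min(), C.max(), 50))
f = polyval4d(X, Y, Z, m)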
What I do then is plot a contour plot of a slice of this 3D design space: I choose one value of Z and plot the 2D X,Y plane with contour levels based on the values of D.
The problem is that the result is not what I expect. The contour plot is almost uniformly one colour, with some variation in one corner.
I searched everywhere on the internet, and even the Python wiki only suggests functions that work for the 2D case.
Has anyone ever faced this kind of problem? Am I missing something in the evaluation/definition of this N-dimensional polynomial?
Thanks a lot for your kind attention.
Federico


In which scenario would one use another matrix than the identity matrix for finding eigenvalues?

The scipy.linalg.eigh function can take two matrices as arguments: first the matrix a, of which we will find eigenvalues and eigenvectors, but also an optional matrix b, which defaults to the identity matrix if left out.
In what scenario would someone want to use this b matrix?
Some more context: I am trying to use xdawn covariances from the pyRiemann package. This uses the scipy.linalg.eigh function with a covariance matrix a and a baseline covariance matrix b. You can find the implementation here. This yields an error, as the b matrix in my case is not positive definite and thus not usable in the scipy.linalg.eigh function. Removing this matrix and just using the identity matrix instead solves the problem and yields relatively nice results... The problem is that I do not really understand what I changed, and maybe I am doing something I should not be doing.
This is the code from the pyRiemann package I am using (modified to avoid using functions defined in other parts of the package):
# X are samples (EEG data), y are labels
# shape of X is (1000, 64, 2459)
# shape of y is (1000,)
import numpy as np
import sklearn.covariance
from scipy.linalg import eigh

Ne, Ns, Nt = X.shape
tmp = X.transpose((1, 2, 0))
b = np.matrix(sklearn.covariance.empirical_covariance(tmp.reshape(Ne, Ns * Nt).T))
for c in self.classes_:
    # Prototyped response for each class
    P = np.mean(X[y == c, :, :], axis=0)
    # Covariance matrix of the prototyped response & signal
    a = np.matrix(sklearn.covariance.empirical_covariance(P.T))
    # Spatial filters
    evals, evecs = eigh(a, b)
    # and I am now using the following, disregarding the b matrix:
    # evals, evecs = eigh(a)
If A and B are both symmetric matrices, that does not necessarily imply that inv(A)*B is a symmetric matrix. So if I had to solve a generalised eigenvalue problem Ax = lambda Bx, I would use eig(A,B) rather than eig(inv(A)*B), so that the symmetry isn't lost.
One practical application is finding the natural frequencies of a dynamic mechanical system from differential equations of the form M (d²x/dt²) = Kx, where M is a positive definite matrix known as the mass matrix, K is the stiffness matrix, x is the displacement vector and d²x/dt² is the acceleration vector (the second derivative of the displacement vector). To find the natural frequencies, x can be substituted with x0 sin(ωt), where ω is the natural frequency. The equation reduces to Kx = ω²Mx. Now, one could use eig(inv(K)*M), but that might break the symmetry of the resultant matrix, so I would use eig(K,M) instead.
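As a minimal illustration of that use case (the system below is made up), scipy.linalg.eigh solves exactly this generalized problem when given both matrices:
import numpy as np
from scipy.linalg import eigh
# hypothetical 2-DOF mass-spring chain: wall - k1 - m1 - k2 - m2 - k3 - wall
k1, k2, k3 = 10.0, 5.0, 10.0
m1, m2 = 1.0, 2.0
K = np.array([[k1 + k2, -k2],
              [-k2, k2 + k3]])  # stiffness matrix (symmetric)
M = np.diag([m1, m2])           # mass matrix (positive definite)
evals, evecs = eigh(K, M)       # solves K x = w^2 M x
natural_frequencies = np.sqrt(evals)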
In the generalised problem (A - lambda B)x = 0, the eigenvector x is not expressed in the same basis as the covariance matrix.
If the matrix B is not positive definite, it means that there are vectors that can be flipped by your B.
I hope it was helpful.

Least square on linear N-way-equal problem

Suppose I want to find the "intersection point" of 2 arbitrary high-dimensional lines. The two lines won't actually intersect, but I still want to find the "most intersect" point (i.e. a point that is as close to all lines as possible).
Suppose those lines have direction vectors A, B and initial points C, D,
I can find the most intersect point by simply set up a linear least square problem: converting the line-intersection equation
Ax + C = By + D
to least-square form
[A, -B] # [x, y] = D - C
where # stands for matrix-vector multiplication, and then I can use e.g. np.linalg.lstsq to solve it.
But how can I find the "most intersect point" of 3 or more arbitrary lines? If I follow the same rule, I now have
Ax + D = By + E = Cz + F
The only way I can think of is decomposing this into three equations:
Ax + D = By + E
Ax + D = Cz + F
By + E = Cz + F
and converting them to least-square form
[A, -B,  0]                 [E - D]
[A,  0, -C] # [x, y, z]  =  [F - D]
[0,  B, -C]                 [F - E]
The problem is that the size of the least-squares problem grows quadratically with the number of lines. I'm wondering, is there a more efficient way to solve an n-way-equal least-squares linear problem?
I was also thinking about whether the equation By + E = Cz + F above is necessary given the other two. But since this problem does not have an exact solution (i.e. the lines don't actually intersect), I believe dropping it would put more "weight" on some variables?
Thank you for your help!
EDIT
I just tested pairing the first term with all other terms in the n-way-equality (and no other pairs) using the following code
def lineIntersect(k, b):
    "k, b: N-by-D matrices describing N D-dimensional lines: k[i] * x + b[i]"
    # Convert the problem to least-square form `Ax = B`
    # A is temporarily defined 3-dimensional for convenience
    A = np.zeros((len(k)-1, k.shape[1], len(k)), k.dtype)
    A[:, :, 0] = k[0]
    A[range(len(k)-1), :, range(1, len(k))] = -k[1:]
    # Convert to 2-dimensional matrix by flattening first two dimensions
    A = A.reshape(-1, len(k))
    # B should be 1-dimensional vector
    B = (b[1:] - b[0]).ravel()
    x = np.linalg.lstsq(A, B, rcond=None)[0]
    return (x[:, None] * k + b).mean(0)
The result below indicates doing so is not correct because the first term in the n-way-equality is "weighted differently".
The first output is the difference between the regular result and the result for a different input order (line order should not matter) in which the first term did not change.
The second output is the same, but with the first term changed.
k = np.random.rand(10, 100)
b = np.random.rand(10, 100)
print(np.linalg.norm(lineIntersect(k, b) - lineIntersect(np.r_[k[:1],k[:0:-1]], np.r_[b[:1],b[:0:-1]])))
print(np.linalg.norm(lineIntersect(k, b) - lineIntersect(k[::-1], b[::-1])))
results in
7.889616961715915e-16
0.10702479853076755
Another criterion for the 'almost intersection point' would be a point x such that the sum of the squares of the distances of x to the lines is as small as possible. Like your criterion, if the lines actually do intersect then the almost intersection point will be the actual intersection point. However I think the sum of distances squared criterion makes it straightforward to compute the point in question:
Suppose we represent a line by a point and a unit vector along the line. So if a line is represented by p,t then the points on the line are of the form
p + l*t for scalar l
The distance-squared of a point x from a line p,t is
(x-p)'*(x-p) - square( t'*(x-p))
If we have N lines p[i],t[i] then the sum of the distances squared from a point x is
Sum { (x-p[i])'*(x-p[i]) - square( t[i]'*(x-p[i])) }
Expanding this out I get the above to be
x'*S*x - 2*x'*V + K
where
S = N*I - Sum{ t[i]*t[i]'}
V = Sum{ p[i] - (t[i]'*p[i])*t[i] }
and K does not depend on x
Unless all the lines are parallel, S will be (strictly) positive definite and hence invertible, and in that case our sum of distances squared is
(x-inv(S)*V)'*S*(x-inv(S)*V) + K - V'*inv(S)*V
Thus the minimising x is
inv(S)*V
So the drill is: normalise your direction vectors, form S and V as above, and solve
S*x = V for x
This question might be better suited for the math stackexchange. Also, does anyone have a good way of formatting math here? Sorry that it's hard to read, I did my best with unicode.
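A minimal numpy sketch of that recipe (the function name is mine; p and t are N-by-D arrays holding a point on each line and its direction vector):
import numpy as np
def closest_point_to_lines(p, t):
    N, D = p.shape
    t = t / np.linalg.norm(t, axis=1, keepdims=True)  # normalise the directions
    S = N * np.eye(D) - t.T @ t                       # S = N*I - Sum{ t[i]*t[i]' }
    V = (p - np.einsum('ij,ij->i', t, p)[:, None] * t).sum(axis=0)
    return np.linalg.solve(S, V)                      # solve S*x = V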
EDIT: I misinterpreted what @ZisIsNotZis meant by the lines Ax+C, so disregard the next paragraph.
I'm not convinced that your method is stated correctly. Would you mind posting your code and a small example of the output (maybe in 2D with 3 or 4 lines so we can plot it)? When you're trying to find the intersection of two lines, shouldn't you do Ax+C = Bx+D? If you do Ax+C = By+D you can pick some x on the first line and some y on the second line and satisfy both equations exactly, because here x and y should be the same size as A and B, which is the dimension of the space, rather than scalars.
There are many ways to understand the problem of finding a point that is as close to all lines as possible. I think the most natural one is that the sum of squares of Euclidean distances to each line is minimized.
Suppose we have a line in R^n through the point d with unit direction c, i.e. the points d + tc for scalar t, and another point x. Then the shortest vector from x to the line is (I-cc^T)(x-d), so the square of the distance from x to the line is ║(I-cc^T)(x-d)║^2. We can find the closest point to the line by minimizing this distance. Note that this is a standard least squares problem of the form min_x ║b-Ax║_2.
Now, suppose we have lines given by points d_i and unit directions c_i for i=1,...,m. The squared distance r_i^2 from a point x to the i-th line is r_i^2 = ║(I-c_i c_i^T)(x-d_i)║_2^2. We now want to solve the problem min_x \sum_{i=1}^{m} r_i^2.
In matrix form we have:
      ║ ⎡ (I-c_1 c_1^T)(x-d_1) ⎤ ║
      ║ | (I-c_2 c_2^T)(x-d_2) | ║
min_x ║ |          ...         | ║
      ║ ⎣ (I-c_m c_m^T)(x-d_m) ⎦ ║_2
This is again in the form min_x ║b - Ax║_2 so there are good solvers available.
Each block has size n (the dimension of the space) and there are m blocks (the number of lines), so the system is mn by n. In particular, it is linear in the number of lines and quadratic in the dimension of the space.
It also has the advantage that if you add a line you simply add another block to the least squares system. This also offers the possibility of updating solutions iteratively as you add lines.
I'm not sure if there are special solvers for this type of least squares system. Note that each block is the identity minus a rank one matrix, so that might give some additional structure which can be used to speed things up. That said, I think using existing solvers will almost always work better than writing your own, unless you have quite a bit of background in numerical analysis or have a very specialized class of systems to solve.
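A sketch of how the stacked system above could be assembled with numpy (the naming is mine; p holds a point on each line, c the direction vectors):
import numpy as np
def closest_point_stacked(p, c):
    m, n = p.shape
    c = c / np.linalg.norm(c, axis=1, keepdims=True)
    # one (I - c_i c_i^T) projector block per line, shape (m, n, n)
    blocks = np.eye(n)[None, :, :] - c[:, :, None] * c[:, None, :]
    A = blocks.reshape(m * n, n)
    b = np.einsum('ijk,ik->ij', blocks, p).reshape(m * n)
    return np.linalg.lstsq(A, b, rcond=None)[0]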
Not a solution, some thoughts:
If a line in nD space has the parametric equation (with a unit Dir vector)
L(t) = Base + Dir * t
then the squared distance from a point P to this line is
W = P - Base
Dist^2 = (W - (W.dot.Dir) * Dir)^2
If it is possible to write Min(Sum(Dist[i]^2)) in a form suitable for the LSQ method (taking partial derivatives with respect to every coordinate of the point), then the resulting system might be solved for the coordinate vector (x1..xn).
(The situation resembles the reversal of the usual LSQ: many points and a single line.)
You say that you have two "high-dimensional" lines. This implies that the matrix indicating the lines has many more columns than rows.
If this is the case and you can efficiently find a low-rank decomposition A = LRᵀ, then you can rewrite the solution of the least squares problem min ║Ax−y║₂ as x = R(RᵀR)⁻¹(LᵀL)⁻¹Lᵀy.
If m is the number of lines and n the dimension of the lines, then this reduces the least-squares time complexity from O(mn²+nʷ) to O(nr²+mr²) where r=min(m,n).
The problem then is to find such a decomposition.
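As a sanity check of the formula, here is a sketch using a thin SVD as one possible decomposition (L = U*diag(s), R = V):
import numpy as np
m, n = 50, 5000                                    # few rows, many columns
A = np.random.randn(m, n)
y = np.random.randn(m)
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # thin SVD: A = U @ diag(s) @ Vt
L, R = U * s, Vt.T                                 # so A = L @ R.T
u = np.linalg.solve(L.T @ L, L.T @ y)              # (L'L)^-1 L'y
x = R @ np.linalg.solve(R.T @ R, u)                # R (R'R)^-1 u
# agrees with np.linalg.lstsq(A, y, rcond=None)[0] up to numerical error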

How to use scipy.optimize to minimize multiple scalar-valued functions simultaneously

In this question I asked for a way to compute the closest projected point to a hyperbolic paraboloid using python.
Thanks to the answer, I was able to use the code below to calculate the closest point to multiple paraboloids.
import numpy as np
from scipy.optimize import minimize

# This function calculates the closest projection on a hyperbolic paraboloid
# As answered by @Jaime: https://stackoverflow.com/questions/18858448/speeding-up-a-closest-point-on-a-hyperbolic-paraboloid-algorithm
def fun_single(x, p0, p1, p2, p3, p):
    u, v = x
    s = u*(p1-p0) + v*(p3-p0) + u*v*(p2-p3-p1+p0) + p0
    return np.linalg.norm(p-s)

# Example use case:
# Generate some random data for 3 random hyperbolic paraboloids.
# A real life use case will count in the tens of thousands.
COUNT = 3
p0 = np.random.random_sample((COUNT,3))
p1 = np.random.random_sample((COUNT,3))
p2 = np.random.random_sample((COUNT,3))
p3 = np.random.random_sample((COUNT,3))
p = np.random.random_sample(3)
uv = []
for i in range(COUNT):  # xrange in the original Python 2 code
    uv.append(minimize(fun_single, (0.5, 0.5), (p0[i], p1[i], p2[i], p3[i], p)).x)
uv = np.array(uv)
# UV projections for my random data
# [[ 0.34109572  4.39237344]
#  [-0.2720813   0.17083423]
#  [ 0.48993333 -0.99415568]]
Now that I have a projection for each item, it's possible to find more useful info, such as which of the given items is closest to the query point, its array index, and more data derived from it.
The problem with calling minimize for each item is that it becomes very slow when dealing with hundreds of thousands of items. So to try to resolve the issue I took a crack at changing the function to work with many inputs.
from numpy.core.umath_tests import inner1d  # note: deprecated/removed in recent numpy; np.einsum is an alternative

# This function calculates the closest projection to many hyperbolic paraboloids
def fun_array(x, p0, p1, p2, p3, p):
    u, v = x
    s = u*(p1-p0) + v*(p3-p0) + u*v*(p2-p3-p1+p0) + p0
    V = p-s
    return np.min(np.sqrt(inner1d(V,V)))

# Let's pass all the data to minimize
uv = minimize(fun_array, (0.5, 0.5), (p0, p1, p2, p3, p)).x
# Result: [ 0.25090064,  1.19732181]
# This corresponds to index 2 of my random data,
# which is the closest projection.
Minimizing the function fun_array is much faster than the iterative approach, but it only returns the single closest projection, not all projections.
QUESTION
Is it possible to use minimize to return all projections as with the iterative approach? And if not, is it at least possible to get the index of the "winning" array element?
The strict answer
You have to be tricky, but it's not that difficult to trick minimize. The point is that minimize only works with scalar cost functions. But we can get away with summing up all your distances, since they are naturally nonnegative quantities and the global minimum is defined by the configuration where each distance is minimal. So instead of asking for the minimum points of COUNT bivariate scalar functions, we ask for the minimum of a single scalar function of COUNT*2 variables, which just happens to be the sum of the COUNT bivariate functions. But note that I'm not convinced this will be faster, because I can imagine higher-dimensional minimum searches being less stable than a corresponding set of lower-dimensional independent minimum searches.
What you should definitely do is pre-allocate memory for uv and insert values into it, rather than growing a list item by item:
uv = np.empty((COUNT,2))
for i in range(COUNT):
    uv[i,:] = minimize(fun_single, (0.5, 0.5), (p0[i], p1[i], p2[i], p3[i], p)).x
Anyway, in order to use a single call to minimize we only need to vectorize your function, which is easier than you'd think:
def fun_vect(x, p0, p1, p2, p3, p):
    x = x.reshape(-1,2)    # dimensions are mangled by minimize() call
    u,v = x.T[...,None]    # u,v shaped (COUNT,1) for broadcasting
    s = u*(p1-p0) + v*(p3-p0) + u*v*(p2-p3-p1+p0) + p0  # shape (COUNT,3)
    return np.linalg.norm(p-s, axis=1).sum()  # sum up distances for overall cost

x0 = 0.5*np.ones((COUNT,2))
uv_vect = minimize(fun_vect, x0, (p0, p1, p2, p3, p)).x.reshape(-1,2)
This function, as you see, extends the scalar one along columns. Each row corresponds to an independent minimization problem (consistent with your definition of the points). The vectorization is straightforward; the only nontrivial part is that we need to play around with the dimensions to make sure that everything broadcasts nicely, and we have to reshape x inside the cost function because minimize flattens the array-valued input position. Of course the final result has to be reshaped again. Correspondingly, an array of shape (COUNT,2) has to be provided as x0; this is the only feature from which minimize can deduce the dimensionality of your problem.
Comparison for my random data:
>>> uv
array([[-0.13386872, 0.14324999],
[ 2.42883931, 0.55099395],
[ 1.03084756, 0.35847593],
[ 1.47276203, 0.29337082]])
>>> uv_vect
array([[-0.13386898, 0.1432499 ],
[ 2.42883952, 0.55099405],
[ 1.03085143, 0.35847888],
[ 1.47276244, 0.29337179]])
Note that I changed COUNT to be 4, because I like to keep every dimension distinct when testing. This way I can be sure that I run into an error if I mess up my dimensions. Also note that in general you might want to keep the complete object returned by minimize just to make sure that everything went fine and converged.
A more useful solution
As we discussed in the comments, the above solution, while it answers the question as asked, is not particularly practical, since it takes too long to run, much longer than doing each minimization separately. The problem was interesting enough that it got me thinking: why not try to solve the problem as exactly as possible?
What you're trying to do (now considering a single hyperboloid and a query point q) is finding the point s(u,v), with the parametrization by @Jaime
s(u,v) = p0 + u * (p1 - p0) + v * (p3 - p0) + u * v * (p2 - p3 - p1 + p0)
for which the distance d(s,q) is minimal. Since the distance is a proper metric (in particular, it is non-negative), this is equivalent to minimizing d(s,q)^2. So far so good.
Let's rewrite the parametrized equation of s by introducing a few constant vectors in order to simplify the derivation:
s(u,v)   = p0 + u*a + v*b + u*v*c
s - q    = p0 - q + u*a + v*b + u*v*c
         = d + u*a + v*b + u*v*c
d(s,q)^2 = (s-q)^2
(In this section ^ denotes a power, because this is linear algebra.) Now, the minimum of the distance function is a stationary point, so at the point (u_min, v_min) we're looking for, the gradient of d(s,q)^2 with respect to u and v is zero. In other words, the derivative of d(s,q)^2 with respect to both u and v has to be simultaneously zero; this gives us two nonlinear equations with the unknowns u and v:
2*(s-q)*ds/du = 0 (1)
2*(s-q)*ds/dv = 0 (2)
Expanding these two equations is a somewhat tedious job. The first equation happens to be linear in u, the second in v. I collected all the terms containing u in the first equation, which gave me the relationship
u(v) = (-v^2*b.c - v*(c.d + a.b) - a.d)/(a + v*c)^2
where . represents the dot product. The above equation tells us that for whatever v we choose, equation (1) will exactly be satisfied if u is chosen thus. So we have to solve equation (2).
What I did was expand all the terms in equation (2) and substitute u(v) for u. The original equation had polynomial terms 1, u, v, uv, u^2, u^2v, so I can tell you this is not pretty. With some minor assumptions of no divergence (the divergences would probably correspond to the equivalent of vertical lines in a line fitting problem), we can arrive at the following beautiful equation:
(b.d + v*b^2)*f^2 - (c.d + a.b + 2*v*b.c)*e*f + (a.c + v*c^2)*e^2 = 0
with the new scalars defined as
e = v^2*b.c + v*(c.d + a.b) + a.d
f = (a + v*c)^2 = (a^2 + 2*v*a.c + v^2*c^2)
Whatever v solves this equation, the corresponding point (u(v),v) will correspond to a stationary point of the distance. We should first note that this equation amounts to finding the roots of a fifth-order polynomial in v. There's guaranteed to be at least one real root, and in the worst case there can be as many as 5 real roots. Whether these correspond to minima, maxima, or (in unlikely cases) saddle points is open for discussion.
The real benefit of the above result is that we have a fighting chance of finding all the roots of the equation! This is a huge deal, since nonlinear root searching/minimization will in general give you only one root at a time, without being able to tell you if you've missed any. Enter numpy.polynomial.polynomial.polyroots. Despite all the linear algebra fluff surrounding it, we're only looking for the (at most 5!) roots of a polynomial, for which we can test the distances and choose the global minimum (if necessary). If there's only one root, we can be sure that it's the minimum based on geometrical considerations.
Note that I haven't mentioned a caveat yet: the polynomial library can only work with one polynomial at a time. We will still have to loop over each hyperboloid manually. But here's the deal: we will be able to guarantee that we're finding the exact minimum, rather than unknowingly accepting local distance minima. And it might even be faster than minimize. Let's see:
import numpy as np

# generate dummy inputs
COUNT = 100
p0 = np.random.random_sample((COUNT,3))
p1 = np.random.random_sample((COUNT,3))
p2 = np.random.random_sample((COUNT,3))
p3 = np.random.random_sample((COUNT,3))
p = np.random.random_sample(3)

def mydot(v1,v2):
    """generalized dot product for multidimensional arrays: (...,N,3)x(...,N,3) -> (...,N,1)"""
    # (used in u_from_v for vectorized dot product)
    return np.einsum('...j,...j->...',v1,v2)[...,None]

def u_from_v(v, a, b, c, d):
    """return u(v) corresponding to zero of gradient"""
    # use mydot() instead of dot to enable array-valued v input
    res = (- v**2*mydot(b,c) - v*(mydot(c,d)+mydot(a,b)) - mydot(a,d))/np.linalg.norm(a+v*c, axis=-1, keepdims=True)**2
    return res.squeeze()

def check_distance(uv, p0, p1, p2, p3, p):
    """compute the distance from optimization results to query point"""
    u,v = uv.T[...,None]
    s = u*(p1-p0) + v*(p3-p0) + u*v*(p2-p3-p1+p0) + p0
    return np.linalg.norm(p-s, axis=-1)

def poly_for_v(a, b, c, d):
    """return polynomial representation of derivative of d(s,p)^2 for the parametrized s(u(v),v) point"""
    # only works with a scalar problem:( one polynomial at a time
    # v is scalar, a-b-c-d are 3-dimensional vectors (for a given paraboloid)
    # precompute scalar products appearing multiple times in the formula
    ab = a.dot(b)
    ac = a.dot(c)
    cc = c.dot(c)
    cd = c.dot(d)
    bc = b.dot(c)
    Poly = np.polynomial.polynomial.Polynomial
    e = Poly([a.dot(d), cd+ab, bc])
    f = Poly([a.dot(a), 2*ac, cc])
    res = Poly([b.dot(d), b.dot(b)])*f**2 - Poly([cd+ab,2*bc])*e*f + Poly([ac,cc])*e**2
    return res

def minimize_manually(p0, p1, p2, p3, p):
    """numpy polynomial version for the minimization problem"""
    # auxiliary arrays, shape (COUNT,3)
    a = p1 - p0
    b = p3 - p0
    c = p2 - p3 - p1 + p0
    d = p0 - p
    # preallocate for collected result
    uv_min = np.empty((COUNT,2))
    for k in range(COUNT):
        # collect length-3 vectors needed for a given surface
        aa,bb,cc,dd = (x[k,:] for x in (a,b,c,d))
        # compute 5 complex roots of the derivative distance
        roots = poly_for_v(aa, bb, cc, dd).roots()
        # keep exactly real roots
        vroots = roots[roots.imag==0].real
        if vroots.size == 1:
            # we're done here
            vval, = vroots
            uval = u_from_v(vval, aa, bb, cc, dd)
            uv_min[k,:] = uval,vval
        else:
            # need to find the root with minimal distance
            uvals = u_from_v(vroots[:,None], aa, bb, cc, dd)
            uvtmp = np.stack((uvals,vroots),axis=-1)
            dists = check_distance(uvtmp, p0[k,:], p1[k,:], p2[k,:], p3[k,:], p)
            winner = np.argmin(dists)  # index of (u,v) pair of minimum
            uv_min[k,:] = uvtmp[winner,:]
    return uv_min

uv_min = minimize_manually(p0, p1, p2, p3, p)
# for comparison with the minimize-based approaches:
# distances = check_distance(uv_min, p0, p1, p2, p3, p)
The above example has COUNT = 100, but if you start with COUNT = 1 and keep running both the minimize version and the exact version above, you'll see roughly once in every 10-20 runs that the minimize-based approach misses the real minimum. So the above is safer: it's guaranteed to find the proper minima.
I also did some timing checks with COUNT=100: around 100 ms for the polynomial-based solution, around 200 ms for the minimize-based looping version. With COUNT=1000: 1 second for the polynomial version, 2 seconds for the looping minimize-based one. Considering that even for larger problems the above is both more precise and more efficient, I see no reason not to use it instead.

Plane fitting in a 3d point cloud

I am trying to find planes in a 3D point cloud, using the regression formula Z = aX + bY + C.
I implemented least squares and RANSAC solutions,
but the 3-parameter equation limits the plane fitting to 2.5D: the formula cannot be applied to planes parallel to the Z-axis.
My question is: how can I generalize the plane fitting to full 3D?
I want to add the fourth parameter in order to get the full equation
aX + bY + cZ + d = 0
How can I avoid the trivial (0,0,0,0) solution?
Thanks!
The code I'm using:
from sklearn import linear_model

def local_regression_plane_ransac(neighborhood):
    """
    Computes parameters for a local regression plane using RANSAC
    """
    XY = neighborhood[:,:2]
    Z = neighborhood[:,2]
    ransac = linear_model.RANSACRegressor(
        linear_model.LinearRegression(),
        residual_threshold=0.1
    )
    ransac.fit(XY, Z)
    inlier_mask = ransac.inlier_mask_
    coeff = ransac.estimator_.coef_
    intercept = ransac.estimator_.intercept_
    return coeff, intercept, inlier_mask
Update
This functionality is now integrated in https://github.com/daavoo/pyntcloud and makes the plane fitting process much simpler.
Given a point cloud, you just need to add a scalar field like this:
is_floor = cloud.add_scalar_field("plane_fit")
This will add a new column with value 1 for the points of the fitted plane. You can then visualize the scalar field.
Old answer
I think that you could easily use PCA to fit the plane to the 3D points instead of regression.
Here is a simple PCA implementation:
import numpy as np

def PCA(data, correlation=False, sort=True):
    """ Applies Principal Component Analysis to the data

    Parameters
    ----------
    data: array
        The array containing the data. The array must have NxM dimensions, where each
        of the N rows represents a different individual record and each of the M columns
        represents a different variable recorded for that individual record.
            array([
                [V11, ... , V1m],
                ...,
                [Vn1, ... , Vnm]])

    correlation(Optional) : bool
        Set the type of matrix to be computed (see Notes):
            If True compute the correlation matrix.
            If False(Default) compute the covariance matrix.

    sort(Optional) : bool
        Set the order that the eigenvalues/vectors will have
            If True(Default) they will be sorted (from higher value to less).
            If False they won't.

    Returns
    -------
    eigenvalues: (1,M) array
        The eigenvalues of the corresponding matrix.

    eigenvector: (M,M) array
        The eigenvectors of the corresponding matrix.

    Notes
    -----
    The correlation matrix is a better choice when there are different magnitudes
    representing the M variables. Use covariance matrix in other cases.
    """
    mean = np.mean(data, axis=0)
    data_adjust = data - mean
    #: the data is transposed due to np.cov/corrcoef syntax
    if correlation:
        matrix = np.corrcoef(data_adjust.T)
    else:
        matrix = np.cov(data_adjust.T)
    eigenvalues, eigenvectors = np.linalg.eig(matrix)
    if sort:
        #: sort eigenvalues and eigenvectors
        sort = eigenvalues.argsort()[::-1]
        eigenvalues = eigenvalues[sort]
        eigenvectors = eigenvectors[:,sort]
    return eigenvalues, eigenvectors
And here is how you could fit the points to a plane:
def best_fitting_plane(points, equation=False):
    """ Computes the best fitting plane of the given points

    Parameters
    ----------
    points: array
        The x,y,z coordinates corresponding to the points from which we want
        to define the best fitting plane. Expected format:
            array([
                [x1,y1,z1],
                ...,
                [xn,yn,zn]])

    equation(Optional) : bool
        Set the output plane format:
            If True return the a,b,c,d coefficients of the plane.
            If False(Default) return 1 Point and 1 Normal vector.

    Returns
    -------
    a, b, c, d : float
        The coefficients solving the plane equation.

    or

    point, normal: array
        The plane defined by 1 Point and 1 Normal vector. With format:
        array([Px,Py,Pz]), array([Nx,Ny,Nz])
    """
    w, v = PCA(points)
    #: the normal of the plane is the last eigenvector
    normal = v[:,2]
    #: get a point from the plane
    point = np.mean(points, axis=0)
    if equation:
        a, b, c = normal
        d = -(np.dot(normal, point))
        return a, b, c, d
    else:
        return point, normal
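A quick usage sketch (synthetic points near the plane z = 0, so the recovered normal should be close to (0, 0, ±1)):
pts = np.random.rand(100, 3)
pts[:, 2] = 0.01 * np.random.randn(100)    # almost flat in z
point, normal = best_fitting_plane(pts)
a, b, c, d = best_fitting_plane(pts, equation=True)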
However, as this method is sensitive to outliers, you could use RANSAC to make the fit robust to outliers.
There is a Python implementation of RANSAC here.
You should only need to define a Plane Model class in order to use it for fitting planes to 3D points.
In any case, if you can clean the 3D points of outliers (maybe using a KD-Tree S.O.R. filter for that), you should get pretty good results with PCA.
Here is an implementation of an S.O.R:
import numpy as np
from scipy import stats

def statistical_outlier_removal(kdtree, k=8, z_max=2):
    """ Compute a Statistical Outlier Removal filter on the given KDTree.

    Parameters
    ----------
    kdtree: scipy's KDTree instance
        The KDTree's structure which will be used to
        compute the filter.

    k(Optional): int
        The number of nearest neighbors which will be used to estimate the
        mean distance from each point to its nearest neighbors.
        Default : 8

    z_max(Optional): int
        The maximum Z score which determines if the point is an outlier
        or not.

    Returns
    -------
    sor_filter : boolean array
        The boolean mask indicating whether a point should be kept or not.
        The size of the boolean mask will be the same as the number of points
        in the KDTree.

    Notes
    -----
    The 2 optional parameters (k and z_max) should be used in order to adjust
    the filter to the desired result.

    A HIGHER 'k' value will result (normally) in a HIGHER number of points trimmed.

    A LOWER 'z_max' value will result (normally) in a HIGHER number of points trimmed.
    """
    # n_jobs was renamed to workers in newer scipy versions
    distances, i = kdtree.query(kdtree.data, k=k, n_jobs=-1)
    z_distances = stats.zscore(np.mean(distances, axis=1))
    sor_filter = abs(z_distances) < z_max
    return sor_filter
You could feed the function with a KDTree of your 3D points, computed maybe using this implementation.
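For instance, a minimal sketch with scipy's cKDTree (assuming points is an (n, 3) numpy array):
from scipy.spatial import cKDTree
tree = cKDTree(points)
mask = statistical_outlier_removal(tree, k=8, z_max=2)
clean_points = points[mask]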
import pcl
cloud = pcl.PointCloud()
cloud.from_array(points)
seg = cloud.make_segmenter_normals(ksearch=50)
seg.set_optimize_coefficients(True)
seg.set_model_type(pcl.SACMODEL_PLANE)
seg.set_normal_distance_weight(0.05)
seg.set_method_type(pcl.SAC_RANSAC)
seg.set_max_iterations(100)
seg.set_distance_threshold(0.005)
inliers, model = seg.segment()
You need to install python-pcl first. Feel free to play with the parameters. points here is an n×3 numpy array with n 3D points. The model will be [a, b, c, d] such that ax + by + cz + d = 0.

Rotated Paraboloid Surface Fitting

I have a set of experimentally determined (x, y, z) points which correspond to a paraboloid. Unfortunately, the data is not aligned along any particular axis, and hence corresponds to a rotated paraboloid.
I have the following general surface:
Ax^2 + By^2 + Cz^2 + Dxy + Gyz + Hzx + Ix + Jy + Kz + L = 0
I need to produce a model that can represent the paraboloid accurately using (I'm assuming) least squares fitting. I cannot seem to figure out how this works. I have thought of rotating the paraboloid until its central axis lines up with the z-axis, but I do not know what this axis is. Matlab's cftool only seems to fit equations of the form z = f(x, y), and I am not aware of anything in Python that can solve this.
I also tried solving for the parameters numerically. When I tried making this into a matrix equation and solving by least squares, the right-hand side was all zeros and hence my parameters came out as all zeros. I am also stuck on this and any help would be appreciated. I don't really mind the method, as I am familiar with Matlab, Python and linear algebra if need be.
Thanks
Don't use any toolboxes, GUIs or special functions for this problem. Your problem is very common and the equation you provided may be solved in a very straightforward manner. The solution to the linear least squares problem can be outlined as follows:
The basis of the vector space is x^2, y^2, z^2, xy, yz, zx, x, y, z, 1. Therefore your vector has 10 dimensions.
Your problem may be expressed as Ap = b, where p = [A B C D G H I J K L]^T is the vector containing your parameters. The right hand side b should be all zeros, but will contain some residual due to model errors, uncertainty in the data or numerical reasons. This residual has to be minimized.
The matrix A has dimension N by 10, where N denotes the number of known points on the surface of the paraboloid.
A = [x(1)^2 y(1)^2 ... y(1) z(1) 1
     ...
     x(N)^2 y(N)^2 ... y(N) z(N) 1]
Solve the overdetermined system of linear equations by computing p = A\b.
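One note on the all-zero right hand side: a plain least-squares solve with b = 0 returns the trivial p = 0 the asker ran into. A common remedy (not part of the answer above) is to constrain ║p║ = 1 and take the right singular vector of A belonging to the smallest singular value; a minimal numpy sketch:
import numpy as np
def fit_quadric(x, y, z):
    # 10-column design matrix: x^2, y^2, z^2, xy, yz, zx, x, y, z, 1
    A = np.column_stack([x*x, y*y, z*z, x*y, y*z, z*x,
                         x, y, z, np.ones_like(x)])
    # minimise ||A p|| subject to ||p|| = 1: last right singular vector
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[-1]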
Do you have enough data points to fit all 10 parameters? You will need at least 10.
I also suspect that 10 parameters are too many to describe a general paraboloid, meaning that some of the parameters are dependent. My feeling is that a translated and rotated paraboloid needs 7 parameters (although I'm not really sure).
