The scipy.linalg.eigh function can take two matrices as arguments: the matrix a, whose eigenvalues and eigenvectors we want to find, and an optional matrix b, which defaults to the identity matrix when it is left out.
In what scenario would someone like to use this b matrix?
Some more context: I am trying to use xdawn covariances from the pyRiemann package. This uses the scipy.linalg.eigh function with a covariance matrix a and a baseline covariance matrix b. You can find the implementation here. This yields an error, as the b matrix in my case is not positive definite and thus not usable in scipy.linalg.eigh. However, removing this matrix and just using the identity matrix solves the problem and yields relatively nice results... The problem is that I do not really understand what I changed, and maybe I am doing something I should not be doing.
This is the code from the pyRiemann package I am using (modified to avoid using functions defined in other parts of the package):
# X are samples (EEG data), y are labels
# shape of X is (1000, 64, 2459)
# shape of y is (1000,)
import numpy as np
import sklearn.covariance
from scipy.linalg import eigh

Ne, Ns, Nt = X.shape
tmp = X.transpose((1, 2, 0))
b = np.matrix(sklearn.covariance.empirical_covariance(tmp.reshape(Ne, Ns * Nt).T))
for c in self.classes_:
    # Prototyped response for each class
    P = np.mean(X[y == c, :, :], axis=0)
    # Covariance matrix of the prototyped response & signal
    a = np.matrix(sklearn.covariance.empirical_covariance(P.T))
    # Spatial filters
    evals, evecs = eigh(a, b)
    # and I am now using the following, disregarding the b matrix:
    # evals, evecs = eigh(a)
If A and B are both symmetric matrices, that does not imply that inv(B)*A is symmetric. So if I had to solve a generalised eigenvalue problem Ax = lambda Bx, I would use eig(A, B) rather than eig(inv(B)*A), so that the symmetry isn't lost.
One practical application is finding the natural frequencies of a dynamic mechanical system from differential equations of the form M (d²x/dt²) + Kx = 0, where M is a positive definite matrix known as the mass matrix, K is the stiffness matrix, x is the displacement vector and d²x/dt² is the acceleration vector (the second derivative of the displacement). To find the natural frequencies, x can be substituted with x0 sin(ωt), where ω is the natural frequency. The equation then reduces to Kx = ω²Mx. Now, one could use eig(inv(M)*K), but that might break the symmetry of the resulting matrix, so I would use eig(K, M) instead.
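For concreteness, here is a minimal sketch (my own toy example, not from the original answer) of solving K x = ω² M x with the generalized form of scipy.linalg.eigh, keeping both matrices symmetric instead of forming inv(M)*K:
import numpy as np
from scipy.linalg import eigh
# toy mass and stiffness matrices: both symmetric, M positive definite
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])
w2, modes = eigh(K, M)   # generalized problem K x = w^2 M x
freqs = np.sqrt(w2)      # natural frequencies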
In the generalized problem (A - lambda B) x = 0, the eigenvectors x are no longer expressed in the same basis as the covariance matrix.
If B is not positive definite, it means that there are vectors that can be flipped by your B (directions x with x^T B x <= 0).
I hope this helps.
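Not part of either answer, but a common workaround for the situation in the question: if b fails to be positive definite only because of rank deficiency or numerical noise, shrinking it slightly toward a scaled identity usually restores definiteness without throwing away the baseline covariance entirely. A minimal sketch (the alpha value is just an assumption to tune):
import numpy as np
from scipy.linalg import eigh

def regularized_eigh(a, b, alpha=1e-6):
    # shrink b toward a scaled identity so it becomes positive definite
    n = b.shape[0]
    b_reg = (1 - alpha) * b + alpha * (np.trace(b) / n) * np.eye(n)
    return eigh(a, b_reg)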
I have been struggling for the last few days trying to compute the degrees of freedom of two vectors (x and y) following the reference Chelton (1983), which is:
[formula: degrees of freedom according to Chelton (1983)]
I can't find a proper way to calculate the normalized cross-correlation function using np.correlate; I always get an output that isn't between -1 and 1.
Is there any easy way to get the cross-correlation function normalized in order to compute the degrees of freedom of the two vectors?
Nice question. There is no direct way, but you can "normalize" the input vectors before using np.correlate like this, and reasonable values will be returned within the range [-1, 1]:
Here I define the correlation as it is generally defined in signal processing textbooks:
c'_{ab}[k] = sum_n a[n] conj(b[n+k])
CODE: If a and b are the vectors:
import numpy as np
a = (a - np.mean(a)) / (np.std(a) * len(a))
b = (b - np.mean(b)) / np.std(b)
c = np.correlate(a, b, 'full')
References:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.correlate.html
https://en.wikipedia.org/wiki/Cross-correlation
MATLAB's normalized cross-correlation, xcorr(a, b, 'normalized'), implemented in Python:
import numpy as np
a = np.array([1, 2, 3, 4])
b = np.array([2, 4, 6, 8])
norm_a = np.linalg.norm(a)
a = a / norm_a
norm_b = np.linalg.norm(b)
b = b / norm_b
c = np.correlate(a, b, mode='full')
If you are interested in the normalized correlation when the sequences are aligned (not the full correlation function over all time offsets), the function numpy.corrcoef does this directly: it computes the covariance matrix of x and y and then normalizes it by the standard deviation of x and the standard deviation of y.
https://numpy.org/doc/stable/reference/generated/numpy.corrcoef.html#numpy.corrcoef
This is the Pearson correlation coefficient and will have a range of +/-1.
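A quick sketch of that zero-lag route, with toy data of my own choosing:
import numpy as np
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 6.0, 8.2, 10.0])
r = np.corrcoef(x, y)[0, 1]   # Pearson correlation coefficient, always in [-1, 1]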
This is my idea, but it will normalize the result to the range 0-1:
# normalize by the correlation of the absolute values
a = np.correlate(abs(var1), abs(var2), 'full')
b = np.correlate(var1, var2, 'full')
c = b / a
In this question I asked for a way to compute the closest projected point to a hyperbolic paraboloid using Python.
Thanks to the answer, I was able to use the code below to calculate the closest point to multiple paraboloids.
import numpy as np
from scipy.optimize import minimize

# This function calculates the distance to the closest projection on a hyperbolic paraboloid,
# as answered by Jaime: https://stackoverflow.com/questions/18858448/speeding-up-a-closest-point-on-a-hyperbolic-paraboloid-algorithm
def fun_single(x, p0, p1, p2, p3, p):
    u, v = x
    s = u*(p1-p0) + v*(p3-p0) + u*v*(p2-p3-p1+p0) + p0
    return np.linalg.norm(p-s)
# Example use case:
# Generate some random data for 3 random hyperbolic paraboloids
# A real life use case will count in the tens of thousands.
import numpy as np
COUNT = 3
p0 = np.random.random_sample((COUNT,3))
p1 = np.random.random_sample((COUNT,3))
p2 = np.random.random_sample((COUNT,3))
p3 = np.random.random_sample((COUNT,3))
p = np.random.random_sample(3)
uv = []
for i in range(COUNT):
    uv.append(minimize(fun_single, (0.5, 0.5), (p0[i], p1[i], p2[i], p3[i], p)).x)
uv = np.array(uv)
# UV projections for my random data
#[[ 0.34109572 4.39237344]
# [-0.2720813 0.17083423]
# [ 0.48993333 -0.99415568]]
Now that I have a projection for each item it's possible to find more useful info, such as which of the given items is closest to the query point, find its array index and derive more data from it, etc...
The problem with calling minimize for each item is that it becomes very slow when dealing with hundreds of thousands of items. So to try to resolve the issue I took a crack at changing the function to work with many inputs.
from numpy.core.umath_tests import inner1d

# This function calculates the closest projection to many hyperbolic paraboloids
def fun_array(x, p0, p1, p2, p3, p):
    u, v = x
    s = u*(p1-p0) + v*(p3-p0) + u*v*(p2-p3-p1+p0) + p0
    V = p-s
    return np.min(np.sqrt(inner1d(V,V)))
# Let's pass all the data to minimize
uv = minimize(fun_array, (0.5, 0.5), (p0, p1, p2, p3, p)).x
# Result: [ 0.25090064, 1.19732181]
# This corresponds to index 2 of my random data,
# which is the closest projection.
Minimizing the function fun_array is much faster than the iterative approach, but it only returns the single closest projection, not all projections.
QUESTION
Is it possible to use minimize to return all projections as with the iterative approach? And if not, is it at least possible to get the index of the "winning" array element?
The strict answer
You have to be tricky but it's not that difficult to trick minimize. The point is that minimize only works for scalar cost functions. But we can get away with summing up all your distances, since they are naturally nonnegative quantities and the global minimum is defined by the configuration where each distance is minimal. So instead of asking for the minimum points of COUNT bivariate scalar functions, instead we ask for the minimum of a single scalar function of COUNT*2 variables. This just happens to be the sum of COUNT bivariate functions. But note that I'm not convinced that this will be faster, because I can imagine higher-dimensional minimum searches to be less stable than a corresponding set of lower-dimensional independent minimum searches.
What you should definitely do is pre-allocate memory for uv and insert values into that, rather than growing a list item by item a lot of times:
uv = np.empty((COUNT,2))
for i in range(COUNT):
uv[i,:] = minimize(fun_single, (0.5, 0.5), (p0[i], p1[i], p2[i], p3[i], p)).x
Anyway, in order to use a single call to minimize we only need to vectorize your function, which is easier than you'd think:
def fun_vect(x, p0, p1, p2, p3, p):
    x = x.reshape(-1,2) # dimensions are mangled by minimize() call
    u,v = x.T[...,None] # u,v shaped (COUNT,1) for broadcasting
    s = u*(p1-p0) + v*(p3-p0) + u*v*(p2-p3-p1+p0) + p0 # shape (COUNT,3)
    return np.linalg.norm(p-s, axis=1).sum() # sum up distances for overall cost
x0 = 0.5*np.ones((COUNT,2))
uv_vect = minimize(fun_vect, x0, (p0, p1, p2, p3, p)).x.reshape(-1,2)
This function, as you can see, extends the scalar one along columns. Each row corresponds to an independent minimization problem (consistent with your definition of the points). The vectorization is straightforward; the only nontrivial part is that we need to play around with the dimensions to make sure that everything broadcasts nicely, and we should take care to reshape x on input because minimize has a habit of flattening the array-valued input position. And of course the final result has to be reshaped again. Correspondingly, an array of shape (COUNT,2) has to be provided as x0; this is the only feature from which minimize can deduce the dimensionality of your problem.
Comparison for my random data:
>>> uv
array([[-0.13386872, 0.14324999],
[ 2.42883931, 0.55099395],
[ 1.03084756, 0.35847593],
[ 1.47276203, 0.29337082]])
>>> uv_vect
array([[-0.13386898, 0.1432499 ],
[ 2.42883952, 0.55099405],
[ 1.03085143, 0.35847888],
[ 1.47276244, 0.29337179]])
Note that I changed COUNT to be 4, because I like to keep every dimension distinct when testing. This way I can be sure that I run into an error if I mess up my dimensions. Also note that in general you might want to keep the complete object returned by minimize just to make sure that everything went fine and converged.
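For instance (a small sketch of my own, not in the original answer), instead of grabbing .x directly:
res = minimize(fun_vect, x0, (p0, p1, p2, p3, p))
if not res.success:
    print("minimize did not converge:", res.message)
uv_vect = res.x.reshape(-1,2)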
A more useful solution
As we discussed in the comments, the above solution, while it answers the question perfectly, is not particularly practical, since it takes far too long to run, much longer than doing each minimization separately. The problem was interesting enough that it got me thinking: why not try to solve the problem as exactly as possible?
What you're trying to do (now considering a single paraboloid and a query point q) is finding the point s(u,v), with the parametrization given by Jaime,
s(u,v) = p0 + u * (p1 - p0) + v * (p3 - p0) + u * v * (p2 - p3 - p1 + p0)
for which the distance d(s,q) is minimal. Since the distance is a proper metric (in particular, it is non-negative), this is equivalent to minimizing d(s,q)^2. So far so good.
Let's rewrite the parametrized equation of s by introducing a few constant vectors in order to simplify the derivation, namely a = p1-p0, b = p3-p0, c = p2-p3-p1+p0 and d = p0-q:
s(u,v) = p0 + u*a + v*b + u*v*c
s - q = p0-q + u*a + v*b + u*v*c
      = d + u*a + v*b + u*v*c
d(s,q)^2 = (s-q)^2
(In this section ^ will represent the power, since everything else here is linear algebra.) Now, the minimum of the distance function is a stationary point, so at the point (u_min, v_min) we need the gradient of d(s,q)^2 with respect to u and v to be zero. This is equivalent to saying that the derivative of d(s,q)^2 with respect to both u and v has to be simultaneously zero; this gives us two nonlinear equations with the unknowns u and v:
2*(s-q)*ds/du = 0 (1)
2*(s-q)*ds/dv = 0 (2)
Expanding these two equations is a somewhat tedious job. The first equation happens to be linear in u, the second in v. I collected all the terms containing u in the first equation, which gave me the relationship
u(v) = (-v^2*b.c - v*(c.d + a.b) - a.d)/(a + v*c)^2
where . represents the dot product. The above equation tells us that for whatever v we choose, equation (1) will exactly be satisfied if u is chosen thus. So we have to solve equation (2).
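A quick numerical sanity check of this relationship (my own sketch with random vectors, not part of the answer's code): pick a, b, c, d and v at random, compute u from the formula, and verify that the u-derivative of the squared distance vanishes.
import numpy as np
rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))
v = rng.standard_normal()
# u(v) from the formula above
u = (-(v**2)*b.dot(c) - v*(c.dot(d) + a.dot(b)) - a.dot(d)) / np.dot(a + v*c, a + v*c)
# derivative of |d + u*a + v*b + u*v*c|^2 with respect to u; should be ~0
print(2 * np.dot(d + u*a + v*b + u*v*c, a + v*c))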
What I did was expand all the terms in equation (2) and substitute u(v) for u. The original equation has polynomial terms in 1, u, v, uv, u^2 and u^2v, so I can tell you this is not pretty. Under some minor assumptions of no divergence (the divergences would probably correspond to the equivalent of vertical lines in a line-fitting problem), we can arrive at the following beautiful equation:
(b.d + v*b^2)*f^2 - (c.d + a.b + 2*v*b.c)*e*f + (a.c + v*c^2)*e^2 = 0
with the new scalars defined as
e = v^2*b.c + v*(c.d + a.b) + a.d
f = (a + v*c)^2 = (a^2 + 2*v*a.c + v^2*c^2)
Whatever v solves this equation, the corresponding (u(v),v) point will correspond to a stationary point of the distance. We should first note that this equation is the root condition of a fifth-order polynomial in v. There's guaranteed to be at least one real root, and in the worst case there can be as many as 5 real roots. Whether these correspond to minima, maxima, or (in unlikely cases) saddle points is open for discussion.
The real benefit of the above result is that we have a fighting chance of finding all the roots of the equation! This is a huge deal, since nonlinear root searching/minimization will in general give you only one root at a time, without being able to tell you if you've missed any. Enter numpy.polynomial.polynomial.polyroots. Despite all the linear algebra fluff surrounding it, we're only looking for the roots (at most 5 of them!) of a polynomial, for which we can test the distances and choose the global minimum (if necessary). If there's only one real root, we can be sure that it's the minimum based on geometrical considerations.
Note that I haven't mentioned a caveat yet: the polynomial library can only work with one polynomial at a time, so we will still have to loop over each paraboloid manually. But here's the deal: we will be able to guarantee that we're finding the exact minimum, rather than unknowingly accepting a local distance minimum. And it might even be faster than minimize. Let's see:
import numpy as np
# generate dummy inputs
COUNT = 100
p0 = np.random.random_sample((COUNT,3))
p1 = np.random.random_sample((COUNT,3))
p2 = np.random.random_sample((COUNT,3))
p3 = np.random.random_sample((COUNT,3))
p = np.random.random_sample(3)
def mydot(v1,v2):
    """generalized dot product for multidimensional arrays: (...,N,3)x(...,N,3) -> (...,N,1)"""
    # (used in u_from_v for vectorized dot product)
    return np.einsum('...j,...j->...',v1,v2)[...,None]

def u_from_v(v, a, b, c, d):
    """return u(v) corresponding to zero of gradient"""
    # use mydot() instead of dot to enable array-valued v input
    res = (- v**2*mydot(b,c) - v*(mydot(c,d)+mydot(a,b)) - mydot(a,d))/np.linalg.norm(a+v*c, axis=-1, keepdims=True)**2
    return res.squeeze()

def check_distance(uv, p0, p1, p2, p3, p):
    """compute the distance from optimization results to query point"""
    u,v = uv.T[...,None]
    s = u*(p1-p0) + v*(p3-p0) + u*v*(p2-p3-p1+p0) + p0
    return np.linalg.norm(p-s, axis=-1)

def poly_for_v(a, b, c, d):
    """return polynomial representation of derivative of d(s,p)^2 for the parametrized s(u(v),v) point"""
    # only works with a scalar problem:( one polynomial at a time
    # v is scalar, a-b-c-d are 3-dimensional vectors (for a given paraboloid)
    # precompute scalar products appearing multiple times in the formula
    ab = a.dot(b)
    ac = a.dot(c)
    cc = c.dot(c)
    cd = c.dot(d)
    bc = b.dot(c)
    Poly = np.polynomial.polynomial.Polynomial
    e = Poly([a.dot(d), cd+ab, bc])
    f = Poly([a.dot(a), 2*ac, cc])
    res = Poly([b.dot(d), b.dot(b)])*f**2 - Poly([cd+ab,2*bc])*e*f + Poly([ac,cc])*e**2
    return res

def minimize_manually(p0, p1, p2, p3, p):
    """numpy polynomial version for the minimization problem"""
    # auxiliary arrays, shape (COUNT,3)
    a = p1 - p0
    b = p3 - p0
    c = p2 - p3 - p1 + p0
    d = p0 - p
    # preallocate for collected result
    uv_min = np.empty((COUNT,2))
    for k in range(COUNT):
        # collect length-3 vectors needed for a given surface
        aa,bb,cc,dd = (x[k,:] for x in (a,b,c,d))
        # compute 5 complex roots of the derivative distance
        roots = poly_for_v(aa, bb, cc, dd).roots()
        # keep exactly real roots
        vroots = roots[roots.imag==0].real
        if vroots.size == 1:
            # we're done here
            vval, = vroots
            uval = u_from_v(vval, aa, bb, cc, dd)
            uv_min[k,:] = uval,vval
        else:
            # need to find the root with minimal distance
            uvals = u_from_v(vroots[:,None], aa, bb, cc, dd)
            uvtmp = np.stack((uvals,vroots),axis=-1)
            dists = check_distance(uvtmp, p0[k,:], p1[k,:], p2[k,:], p3[k,:], p)
            winner = np.argmin(dists) # index of (u,v) pair of minimum
            uv_min[k,:] = uvtmp[winner,:]
    return uv_min
uv_min = minimize_manually(p0, p1, p2, p3, p)
# for comparison with the minimize-based approaches:
# distances = check_distance(uv_min, p0, p1, p2, p3, p)
The above example has COUNT set to 100, but if you start with COUNT=1 and keep running both the minimize version and the above exact version, you'll see roughly once in every 10-20 runs that the minimize-based approach misses the real minimum. So the above is safer: it's guaranteed to find the proper minima.
I also did some timing checks with COUNT=100: around 100 ms for the polynomial-based solution, around 200 ms for the minimize-based looping version. With COUNT=1000: 1 second for the polynomial version, 2 seconds for the looping minimize-based one. Considering that even for larger problems the above is both more precise and more efficient, I see no reason not to use it instead.
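If you want to reproduce that kind of comparison yourself, a rough timing sketch along these lines should do (assuming fun_single and minimize from the question's first snippet are still in scope; the absolute numbers will of course depend on your hardware):
import time
t0 = time.perf_counter()
uv_poly = minimize_manually(p0, p1, p2, p3, p)
t1 = time.perf_counter()
uv_loop = np.empty((COUNT, 2))
for i in range(COUNT):
    uv_loop[i, :] = minimize(fun_single, (0.5, 0.5), (p0[i], p1[i], p2[i], p3[i], p)).x
t2 = time.perf_counter()
print("polynomial: %.3f s, looping minimize: %.3f s" % (t1 - t0, t2 - t1))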
I am trying to find planes in a 3D point cloud, using the regression formula Z = aX + bY + C.
I implemented least squares and RANSAC solutions, but the 3-parameter equation limits the plane fitting to 2.5D: the formula cannot be applied to planes parallel to the Z-axis.
My question is: how can I generalize the plane fitting to full 3D? I want to add the fourth parameter in order to get the full equation
aX + bY + cZ + d = 0
How can I avoid the trivial (0, 0, 0, 0) solution?
Thanks!
The Code I'm using:
from sklearn import linear_model
def local_regression_plane_ransac(neighborhood):
    """
    Computes parameters for a local regression plane using RANSAC
    """
    XY = neighborhood[:, :2]
    Z = neighborhood[:, 2]
    ransac = linear_model.RANSACRegressor(
        linear_model.LinearRegression(),
        residual_threshold=0.1
    )
    ransac.fit(XY, Z)
    inlier_mask = ransac.inlier_mask_
    coeff = ransac.estimator_.coef_
    intercept = ransac.estimator_.intercept_
    return coeff, intercept, inlier_mask
Update
This functionality is now integrated in https://github.com/daavoo/pyntcloud and makes the plane-fitting process much simpler:
Given a point cloud:
You just need to add a scalar field like this:
is_floor = cloud.add_scalar_field("plane_fit")
This will add a new column with value 1 for the points that belong to the fitted plane.
You can visualize the scalar field:
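For reference, a minimal end-to-end sketch (the file name is just a placeholder; the call assumes the pyntcloud API described above):
from pyntcloud import PyntCloud
cloud = PyntCloud.from_file("point_cloud.ply")    # hypothetical input file
is_floor = cloud.add_scalar_field("plane_fit")    # adds a 0/1 column marking the plane's points
print(cloud.points.head())                        # the new scalar field appears as a column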
Old answer
I think that you could easily use PCA to fit the plane to the 3D points instead of regression.
Here is a simple PCA implementation:
import numpy as np

def PCA(data, correlation=False, sort=True):
    """ Applies Principal Component Analysis to the data

    Parameters
    ----------
    data: array
        The array containing the data. The array must have NxM dimensions, where each
        of the N rows represents a different individual record and each of the M columns
        represents a different variable recorded for that individual record.
            array([
                [V11, ... , V1m],
                ...,
                [Vn1, ... , Vnm]])

    correlation(Optional) : bool
        Set the type of matrix to be computed (see Notes):
            If True compute the correlation matrix.
            If False(Default) compute the covariance matrix.

    sort(Optional) : bool
        Set the order that the eigenvalues/vectors will have
            If True(Default) they will be sorted (from highest to lowest value).
            If False they won't.

    Returns
    -------
    eigenvalues: (1,M) array
        The eigenvalues of the corresponding matrix.

    eigenvectors: (M,M) array
        The eigenvectors of the corresponding matrix.

    Notes
    -----
    The correlation matrix is a better choice when there are different magnitudes
    representing the M variables. Use the covariance matrix in other cases.

    """

    mean = np.mean(data, axis=0)

    data_adjust = data - mean

    #: the data is transposed due to np.cov/corrcoef syntax
    if correlation:
        matrix = np.corrcoef(data_adjust.T)
    else:
        matrix = np.cov(data_adjust.T)

    eigenvalues, eigenvectors = np.linalg.eig(matrix)

    if sort:
        #: sort eigenvalues and eigenvectors
        sort = eigenvalues.argsort()[::-1]
        eigenvalues = eigenvalues[sort]
        eigenvectors = eigenvectors[:, sort]

    return eigenvalues, eigenvectors
And here is how you could fit the points to a plane:
def best_fitting_plane(points, equation=False):
    """ Computes the best fitting plane of the given points

    Parameters
    ----------
    points: array
        The x,y,z coordinates corresponding to the points from which we want
        to define the best fitting plane. Expected format:
            array([
                [x1,y1,z1],
                ...,
                [xn,yn,zn]])

    equation(Optional) : bool
        Set the output plane format:
            If True return the a,b,c,d coefficients of the plane.
            If False(Default) return 1 Point and 1 Normal vector.

    Returns
    -------
    a, b, c, d : float
        The coefficients solving the plane equation.

    or

    point, normal: array
        The plane defined by 1 Point and 1 Normal vector. With format:
        array([Px,Py,Pz]), array([Nx,Ny,Nz])

    """

    w, v = PCA(points)

    #: the normal of the plane is the last eigenvector
    normal = v[:, 2]

    #: get a point from the plane
    point = np.mean(points, axis=0)

    if equation:
        a, b, c = normal
        d = -(np.dot(normal, point))
        return a, b, c, d
    else:
        return point, normal
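As a quick usage sketch (synthetic data of my own, not from the original answer), sampling noisy points from the plane z = 0.5x - 0.2y + 3 and recovering its coefficients:
rng = np.random.default_rng(42)
xy = rng.uniform(-5, 5, size=(500, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 3 + rng.normal(scale=0.05, size=500)
pts = np.column_stack((xy, z))
a, b, c, d = best_fitting_plane(pts, equation=True)
print(a / c, b / c, d / c)   # roughly -0.5, 0.2, -3 (the coefficients are defined up to scale)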
However, as this method is sensitive to outliers, you could use RANSAC to make the fit robust to them.
There is a Python implementation of RANSAC here.
You should only need to define a Plane Model class in order to use it for fitting planes to 3D points.
In any case, if you can clean the 3D points of outliers (maybe you could use a KD-tree S.O.R. filter for that), you should get pretty good results with PCA.
Here is an implementation of an S.O.R. filter:
from scipy import stats

def statistical_outilier_removal(kdtree, k=8, z_max=2):
    """ Compute a Statistical Outlier Removal filter on the given KDTree.

    Parameters
    ----------
    kdtree: scipy's KDTree instance
        The KDTree's structure which will be used to
        compute the filter.

    k(Optional): int
        The number of nearest neighbors which will be used to estimate the
        mean distance from each point to its nearest neighbors.
        Default : 8

    z_max(Optional): int
        The maximum Z score which determines if the point is an outlier or
        not.

    Returns
    -------
    sor_filter : boolean array
        The boolean mask indicating whether a point should be kept or not.
        The size of the boolean mask will be the same as the number of points
        in the KDTree.

    Notes
    -----
    The 2 optional parameters (k and z_max) should be used in order to adjust
    the filter to the desired result.

    A HIGHER 'k' value will result (normally) in a HIGHER number of points trimmed.

    A LOWER 'z_max' value will result (normally) in a HIGHER number of points trimmed.

    """

    distances, i = kdtree.query(kdtree.data, k=k, n_jobs=-1)

    z_distances = stats.zscore(np.mean(distances, axis=1))

    sor_filter = abs(z_distances) < z_max

    return sor_filter
You could feed the function with a KD-tree of your 3D points, computed maybe using this implementation.
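For example, a hedged usage sketch with scipy.spatial.cKDTree (the variable names are mine; note that the filter above passes n_jobs to query, which newer SciPy versions call workers):
import numpy as np
from scipy.spatial import cKDTree
points = np.random.random_sample((1000, 3))   # stand-in for your n x 3 point cloud
kdtree = cKDTree(points)
mask = statistical_outilier_removal(kdtree, k=8, z_max=2)
clean_points = points[mask]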
import pcl
cloud = pcl.PointCloud()
cloud.from_array(points)
seg = cloud.make_segmenter_normals(ksearch=50)
seg.set_optimize_coefficients(True)
seg.set_model_type(pcl.SACMODEL_PLANE)
seg.set_normal_distance_weight(0.05)
seg.set_method_type(pcl.SAC_RANSAC)
seg.set_max_iterations(100)
seg.set_distance_threshold(0.005)
inliers, model = seg.segment()
You need to install python-pcl first. Feel free to play with the parameters. points here is an n x 3 numpy array of n 3D points. model will be [a, b, c, d] such that ax + by + cz + d = 0.
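As a small follow-up sketch of my own, once model = [a, b, c, d] is available you can compute the signed distance of every point to the fitted plane, which is handy for thresholding or sanity checks:
import numpy as np
a, b, c, d = model
normal = np.array([a, b, c])
signed_distances = (points @ normal + d) / np.linalg.norm(normal)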