(CVXPY) Maximum diagonal entry of the inverse matrix - python

My setup is: S(v) is a d-by-d positive definite matrix determined by the d-dimensional vector v.
I want to minimize the maximum diagonal entry of the inverse of S(v), subject to
the entries of v summing to 1.
(See https://mathoverflow.net/questions/416095/cvxpy-maximum-diagonal-entry-of-the-inverse-matrix for details)
This is a convex problem, but CVXPY does not support a matrix inverse including a variable, as far as I know.
Using matrix_frac with a canonical basis vector e (which returns e^T S(v)^{-1} e, i.e. one diagonal entry of the inverse) and
taking the maximum over the d canonical vectors will work,
but it introduces d separate inverses of a d-by-d matrix per evaluation, which is really heavy.
Any other good solution?
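For reference, here is a minimal sketch of that matrix_frac formulation (purely illustrative: it assumes S(v) is affine in v, here S(v) = sum_i v[i] * M[i] with given positive definite matrices M[i], and that v is nonnegative so S(v) stays positive definite):
import numpy as np
import cvxpy as cp

d = 4
rng = np.random.default_rng(0)
# Made-up data: each M[i] is positive definite, so S(v) is affine in v
# and matrix_frac remains convex.
M = []
for _ in range(d):
    G = rng.standard_normal((d, d))
    M.append(G @ G.T + d * np.eye(d))

v = cp.Variable(d, nonneg=True)   # nonnegativity keeps S(v) positive definite here
S = sum(v[i] * M[i] for i in range(d))

# matrix_frac(e_k, S) == e_k.T @ inv(S) @ e_k, the k-th diagonal entry of inv(S)
diag_inv = [cp.matrix_frac(np.eye(d)[:, k], S) for k in range(d)]

prob = cp.Problem(cp.Minimize(cp.maximum(*diag_inv)), [cp.sum(v) == 1])
prob.solve()
print(prob.value, v.value)
This still builds one matrix_frac atom per diagonal entry, so it does not avoid the cost the question complains about; it only shows the formulation that is known to work.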

In which scenario would one use another matrix than the identity matrix for finding eigenvalues?

The scipy.linalg.eigh function can take two matrices as arguments: first the matrix a, of which we will find eigenvalues and eigenvectors, but also the matrix b, which is optional and chosen as the identity matrix in case it is left blank.
In what scenario would someone like to use this b matrix?
Some more context: I am trying to use xdawn covariances from the pyRiemann package. This uses the scipy.linalg.eigh function with a covariance matrix a and a baseline covariance matrix b. You can find the implementation here. This yields an error, as the b matrix in my case is not positive definite and thus not usable in the scipy.linalg.eigh function. Removing this matrix and just using the identity matrix instead solves this problem and yields relatively nice results... The problem is that I do not really understand what I changed, and maybe I am doing something I should not be doing.
This is the code from the pyRiemann package I am using (modified to avoid using functions defined in other parts of the package):
# X are samples (EEG data), y are labels
# shape of X is (1000, 64, 2459)
# shape of y is (1000,)
import numpy as np
import sklearn.covariance
from scipy.linalg import eigh

Ne, Ns, Nt = X.shape
tmp = X.transpose((1, 2, 0))
b = np.matrix(sklearn.covariance.empirical_covariance(tmp.reshape(Ne, Ns * Nt).T))
for c in self.classes_:
    # Prototyped response for each class
    P = np.mean(X[y == c, :, :], axis=0)
    # Covariance matrix of the prototyped response & signal
    a = np.matrix(sklearn.covariance.empirical_covariance(P.T))
    # Spatial filters
    evals, evecs = eigh(a, b)
    # and I am now using the following, disregarding the b matrix:
    # evals, evecs = eigh(a)
If A and B are both symmetric matrices, that does not necessarily imply that inv(A)*B is symmetric. So if I had to solve a generalised eigenvalue problem Ax = lambda Bx, I would use eig(A, B) rather than eig(inv(A)*B), so that the symmetry isn't lost.
One practical application is finding the natural frequencies of a dynamic mechanical system from differential equations of the form M (d²x/dt²) = -Kx, where M is a positive definite matrix known as the mass matrix, K is the stiffness matrix, x is the displacement vector, and d²x/dt² is the acceleration vector (the second derivative of the displacement). To find the natural frequencies, substitute x = x0 sin(ωt), where ω is a natural frequency. The equation then reduces to Kx = ω²Mx. One could use eig(inv(K)*M), but that might break the symmetry of the resultant matrix, so I would use eig(K, M) instead.
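To make the point concrete, here is a small sketch with made-up K and M (both symmetric positive definite), comparing the generalized call with the inverse-based reduction:
import numpy as np
from scipy.linalg import eigh

# Made-up stiffness and mass matrices, both symmetric positive definite.
K = np.array([[4.0, -1.0],
              [-1.0, 3.0]])
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Generalized problem K x = w^2 M x, solved without forming any inverse.
w2, modes = eigh(K, M)
print(np.sqrt(w2))            # natural frequencies

# Reducing to a standard problem loses symmetry:
A = np.linalg.inv(M) @ K
print(np.allclose(A, A.T))    # False in general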
In the generalized problem (A - lambda B)x = 0, the eigenvectors x are not expressed in the same basis as the covariance matrix.
If the B matrix is not positive definite, it means there are vectors that can be flipped by your B.
I hope it was helpful.

Solve linear equations on a Gram matrix with numpy

Consider a case where, given an MxM matrix A and a vector b, I want to solve something of the form inv(A @ A.T) @ b (where I know A is invertible).
As far as I know, it is always faster to use solve_* rather than inv. There are also variants for more efficient solving for PSD matrices (which A @ A.T must be), using Cholesky factorization.
My question - since I'm constructing the matrix A @ A.T just to immediately throw it away - is there a more specialized procedure for solving linear equations with the Gram matrix of A without having to construct it?
You can compute the factorization of A and then use that to solve your system.
Assume we want to solve
A A^T x = b
for x.
Compute the factorization of A=LU.
Then solve Ay=b for y.
Then solve A^T x = y for x.
This way you don't have to form the matrix A A^T.
Note that if one has a factorization of A=LU then one can solve Ax=b as well as A^T x=b efficiently for x.
This is because A^T=U^T L^T which is again a factorization of a lower times an upper triangular matrix.
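A minimal sketch of that idea with SciPy (the trans keyword of lu_solve handles the A^T solve; the matrix here is random placeholder data):
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))        # assumed invertible
b = rng.standard_normal(n)

# One LU factorization of A serves both triangular solves.
lu, piv = lu_factor(A)
y = lu_solve((lu, piv), b)             # solve A y = b
x = lu_solve((lu, piv), y, trans=1)    # solve A.T x = y

# Same result as forming the Gram matrix explicitly.
print(np.allclose(x, np.linalg.solve(A @ A.T, b)))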

Computing 3D-homography with 5 3D-points

I've got a set of 3D-points in a projective space and I want to transform them into a metric 3D space so that I could measure distances in meters.
In order to do so, I need a 3D to 3D homography, which is a 4x4 matrix with 15 degrees of freedom (so I need 5 3D-points to get 15 equations).
I have a set of these 5 3D-points from the projective space and their corresponding 5 3D-points aligned in the metric space (which I expect the 5 projective points to be transformed to).
I can't figure out how to estimate the homography matrix. At first I tried:
A = np.vstack([p1101.T, p1111.T, p0101.T, p0001.T, p0011.T])
b = np.array([[1,1,0,1], [1,1,1,1], [0,1,0,1], [0,0,0,1], [0,0,1,1]])
x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
H = x.T
where p1101 is a [X,Y,Z,1] point which corresponds to [1,1,0,1] in the 3D metric space, etc..
However, this is not correct since I'm in projective space, so I somehow need to set up an equation system in which each row of H applied to a point gets divided by the last coordinate, or something like that.
I thought maybe there is an implemented method that will do it for me, for example in opencv, but didn't find. Any help would be appreciated.
I finally solved this question with a friend, and would like to share the solution.
Since we are in projective space, one needs to solve an equation set in which the homogeneous coordinate of the outcome is the denominator of every other coordinate. That is, if you want to find a 4x4 homography matrix H, and you have matching 3D points x and b (b is in the metric space), you need to optimize the parameters of H such that H applied to x gives a vector v with 4 coordinates, whose first three coordinates, each divided by the last coordinate, equal b. Written in numpy:
v = H.dot(x)
v = v[:3]/v[3]
v == b # True
Mathematically, the optimization is based on the constraint b1 = (h1 · x) / (h4 · x), where h1, ..., h4 are the rows of H (shown for the first coordinate only, for simplicity; the other coordinates are handled the same way). Rearranging gives the linear equation h1 · x - b1 (h4 · x) = 0.
So in Python one needs to arrange these equations for the solver in the explained manner, with 5 matching points. The approach proposed in the question is fine (it just solved the wrong problem); in these terms it becomes an Ax=b least-squares problem in which A is a 15x15 matrix and b is a 15-dimensional vector.
Each matching point generates 3 equations, then 5 matching points will generate 15 equations built into the matrix A, thus solving the 15 DOF of the 3D homography H.
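A sketch of how those 15 equations can be assembled in numpy (the function name and the choice to fix H[3, 3] = 1 are mine, not from the original post):
import numpy as np

def estimate_3d_homography(src_pts, dst_pts):
    # src_pts, dst_pts: (5, 4) arrays of homogeneous 3D points;
    # dst_pts are assumed normalized so their last coordinate is 1.
    rows, rhs = [], []
    for x, b in zip(src_pts, dst_pts):
        for i in range(3):
            row = np.zeros(15)
            row[4 * i: 4 * i + 4] = x        # h_i . x
            row[12:15] = -b[i] * x[:3]       # -b_i * (h_4 . x), excluding H[3, 3]
            rows.append(row)
            rhs.append(b[i] * x[3])          # the fixed H[3, 3] * x3 term moves to the rhs
    h, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    H = np.ones((4, 4))
    H[:3, :] = h[:12].reshape(3, 4)
    H[3, :3] = h[12:]
    return H
Applying the returned H to a projective point and dividing by the last coordinate should then reproduce the corresponding metric point, as in the check above.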

Find eigenvectors with specific eigenvalue of sparse matrix in python

I have a large sparse matrix and I want to find its eigenvectors with specific eigenvalue. In scipy.sparse.linalg.eigs, it says the required argument k:
"k is the number of eigenvalues and eigenvectors desired. k must be smaller than N-1. It is not possible to compute all eigenvectors of a matrix".
The problem is that I don't know how many eigenvectors corresponding to the eigenvalue I want. What should I do in this case?
I'd suggest using the Singular Value Decomposition (SVD) instead. SciPy provides it for sparse matrices via from scipy.sparse.linalg import svds. You can find the eigenvalues (in this case they will be singular values) and eigenvectors as follows:
U, Sigma, VT = svds(X, k=n_components, tol=tol)
where X can be a sparse CSR matrix, and U and VT are the left and right singular vectors corresponding to the singular values in Sigma. Here you can control the number of components. I'd say start with a small n_components first and then increase it. You can rank your Sigma and look at the distribution of the singular values: there will be a few large ones, and then they drop off quickly. You can set a threshold on the singular values to decide how many eigenvectors you want to keep.
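For example (the matrix below is random placeholder data; substitute your own CSR matrix):
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

X = sparse_random(1000, 500, density=0.01, format="csr", random_state=0)

n_components = 20
U, Sigma, VT = svds(X, k=n_components)

# svds returns the singular values in ascending order; sort them descending
# so the spectrum is easier to inspect.
order = np.argsort(Sigma)[::-1]
Sigma, U, VT = Sigma[order], U[:, order], VT[order, :]

print(Sigma)   # look for where the values drop off to pick a cutoff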
If you want to use scikit-learn, there is a class sklearn.decomposition.TruncatedSVD that lets you do what I explained.

Calculating eigen values of very large sparse matrices in python

I have a very large sparse matrix which represents a transition matrix in a Markov chain, i.e. the sum of each row of the matrix equals one, and I'm interested in finding the largest eigenvalue smaller than one and its corresponding eigenvector. I know that the eigenvalues are bounded in the interval [-1, 1] and they are all real (non-complex).
I am trying to calculate them using scipy.sparse.linalg.eigs; however, one of the parameters of the function is the number of eigenvalues/vectors to estimate, and every time I've increased that number, the number of eigenvalues which are exactly one grew as well.
Needless to say, I am using the which parameter with the value 'LR' in order to get the k largest eigenvalues, with k being the number of values to estimate.
Does anyone have an idea how to solve this problem (finding the first eigenvalue smaller than one and its corresponding vector)?
I agree with @pv. If your matrix S were symmetric, you could see I - S as a Laplacian matrix. The number of connected components of the corresponding graph is the number of zero eigenvalues of that matrix (i.e., the dimension of the eigenspace of S associated with eigenvalue 1). For a start, you could check the number of connected components of the graph whose similarity matrix is I - S*S', e.g. with scipy.sparse.csgraph.connected_components.
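A sketch of that idea (the block-diagonal chain is made-up data; for simplicity I count components directly on the sparsity pattern of S, which for this toy example gives the multiplicity of eigenvalue 1):
import numpy as np
from scipy import sparse
from scipy.sparse.csgraph import connected_components
from scipy.sparse.linalg import eigs

# Made-up transition matrix: two independent 3-state chains,
# so eigenvalue 1 has multiplicity 2.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])
S = sparse.csr_matrix(np.kron(np.eye(2), P))

# Count strongly connected components to learn how many eigenvalues
# equal to 1 to expect (and later skip).
n_ones, _ = connected_components(S, directed=True, connection="strong")

# Ask for a couple more eigenvalues than that and drop the ones at 1.
vals, vecs = eigs(S, k=n_ones + 2, which="LR")
below_one = np.abs(vals - 1.0) > 1e-8
print(vals[below_one])   # candidates for the largest eigenvalue below one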
