Solve linear equations on a Gram matrix with numpy - python

Consider a case where, given an MxM matrix A and a vector b, I want to solve something of the form inv(A @ A.T) @ b (where I know A is invertible).
As far as I know, it is always faster to use solve_* rather than inv. There are also variants for more efficient solving for PSD matrices (which A @ A.T must be), using Cholesky factorization.
My question - since I'm constructing the matrix A @ A.T just to immediately throw it away - is there a more specialized procedure for solving linear equations with the Gram matrix of A without having to construct it?

You can compute the factorization of A and then use that to solve your system.
Assume we want to solve
A A^T x = b
for x.
Compute the factorization of A=LU.
Then solve Ay=b for y.
Then solve A^T x = y for x.
This way you don't have to form the matrix A A^T at all.
Note that if one has a factorization of A=LU then one can solve Ax=b as well as A^T x=b efficiently for x.
This is because A^T=U^T L^T which is again a factorization of a lower times an upper triangular matrix.
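For example, with SciPy's lu_factor/lu_solve this takes one factorization and two solves; a minimal sketch (the random A and b are stand-ins for your data):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
M = 5
A = rng.standard_normal((M, M))       # assumed invertible
b = rng.standard_normal(M)

lu, piv = lu_factor(A)                # factor A = P L U once
y = lu_solve((lu, piv), b)            # solve A y = b
x = lu_solve((lu, piv), y, trans=1)   # solve A.T x = y, reusing the factors

assert np.allclose(A @ A.T @ x, b)    # matches solving (A @ A.T) x = b directly

Both solves reuse the same factorization, so the Gram matrix is never formed.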

Related

How to make a Toeplitz matrix constraint in cvxpy?

Using cvxpy (a convex optimization solver in Python), there are options to make a matrix variable symmetric or positive semidefinite, but I need the matrix to be Toeplitz (every left-to-right diagonal holds the same element in each of its entries). Is there an efficient way of doing this?
One way would be to add the constraint:
a[i, j] == a[i-1, j-1]
for all i, j > 0. The presolver (part of a solver) would exploit this and reduce the size of the model.
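A minimal sketch of that constraint in cvxpy (the size n and the objective here are placeholders):

import cvxpy as cp

n = 4
A = cp.Variable((n, n))
# every left-to-right diagonal shares a single value
toeplitz = [A[i, j] == A[i - 1, j - 1]
            for i in range(1, n)
            for j in range(1, n)]
prob = cp.Problem(cp.Minimize(cp.sum_squares(A)), toeplitz)
prob.solve()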

Python Matrix equation solving method

What method should I use?
a and b are n-dimensional vectors (arrays), and X is an (n x n) matrix. I'm using numpy for this.
I have the matrix-vector equation
X^T X a = X^T b
where X, X^T, and b are known and the unknown is a.
I have tried computing X^T X as X.T @ X = z and taking z^-1, then
z^-1 @ X.T = g and doing np.linalg.solve(g, b). Is there some basic linear algebra I'm doing wrong here?
Is there a specific python code for these types of equations?
"Is there a specific python code for these types of equations?"
Yes. The problem that you are solving is ordinary least squares (see also linear least squares).
NumPy has the function numpy.linalg.lstsq for solving such problems. In your case, to compute a given X and b, you would use
a, residuals, rank, singvals = np.linalg.lstsq(X, b, rcond=None)
residuals, rank and singvals are additional information returned by lstsq, as explained in the docstring.
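For illustration, a small sketch (random X and b as stand-ins) checking that lstsq agrees with the normal equations you were assembling by hand:

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))
b = rng.standard_normal(10)

a, residuals, rank, singvals = np.linalg.lstsq(X, b, rcond=None)

# same solution via the normal equations X^T X a = X^T b
# (shown only for comparison; lstsq is more numerically stable)
a_normal = np.linalg.solve(X.T @ X, X.T @ b)
assert np.allclose(a, a_normal)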

In which scenario would one use another matrix than the identity matrix for finding eigenvalues?

The scipy.linalg.eigh function can take two matrices as arguments: first the matrix a, whose eigenvalues and eigenvectors we want, and optionally the matrix b, which defaults to the identity matrix when omitted.
In what scenario would someone want to use this b matrix?
Some more context: I am trying to use xdawn covariances from the pyRiemann package. This uses the scipy.linalg.eigh function with a covariance matrix a and a baseline covariance matrix b. You can find the implementation here. This yields an error, as the b matrix in my case is not positive definite and thus not usable in the scipy.linalg.eigh function. Removing this matrix and just using the identity matrix instead solves the problem and yields relatively nice results... The problem is that I do not really understand what I changed, and maybe I am doing something I should not be doing.
This is the code from the pyRiemann package I am using (modified to avoid using functions defined in other parts of the package):
# X are samples (EEG data), y are labels
# shape of X is (1000, 64, 2459)
# shape of y is (1000,)
import numpy as np
import sklearn.covariance
from scipy.linalg import eigh

Ne, Ns, Nt = X.shape
tmp = X.transpose((1, 2, 0))
b = np.matrix(sklearn.covariance.empirical_covariance(tmp.reshape(Ne, Ns * Nt).T))
for c in self.classes_:
    # Prototyped response for each class
    P = np.mean(X[y == c, :, :], axis=0)
    # Covariance matrix of the prototyped response & signal
    a = np.matrix(sklearn.covariance.empirical_covariance(P.T))
    # Spatial filters
    evals, evecs = eigh(a, b)
    # and I am now using the following, disregarding the b matrix:
    # evals, evecs = eigh(a)
If A and B are both symmetric matrices, that does not imply that inv(B)*A is symmetric. So if I had to solve a generalized eigenvalue problem Ax = lambda Bx, I would use eig(A, B) rather than eig(inv(B)*A), so that the symmetry isn't lost.
One practical application is finding the natural frequencies of a dynamic mechanical system from differential equations of the form M (d²x/dt²) + Kx = 0, where M is a positive definite matrix known as the mass matrix, K is the stiffness matrix, x is the displacement vector, and d²x/dt² is the acceleration vector, the second derivative of the displacement. To find the natural frequencies, substitute x = x0 sin(ωt), where ω is a natural frequency. The equation reduces to Kx = ω²Mx. One could use eig(inv(M)*K), but that breaks the symmetry of the resulting matrix, so I would use eig(K, M) instead.
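A small sketch of that generalized problem with scipy.linalg.eigh (the random symmetric positive definite K and M below are stand-ins):

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
R = rng.standard_normal((3, 3))
M = R @ R.T + 3 * np.eye(3)           # mass matrix (symmetric positive definite)
S = rng.standard_normal((3, 3))
K = S @ S.T + 3 * np.eye(3)           # stiffness matrix (symmetric positive definite)

w2, modes = eigh(K, M)                # solves K v = w^2 M v, keeping the symmetry
w2_naive = np.linalg.eigvals(np.linalg.inv(M) @ K)  # non-symmetric detour
print(np.sort(w2))
print(np.sort(w2_naive.real))         # same eigenvalues, up to floating-point error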
In the form (A - lambda B) x = 0, it means that x is not expressed in the same basis as the covariance matrix. If the matrix B is not positive definite, it means there are vectors that can be flipped (have their sign reversed) by your B.
I hope this is helpful.

Find determinant and inverse matrix, when coefficients are modulo 26 in Python

I am trying to implement the Hill cipher in Python, but I have some problems.
I am working with matrices whose coefficients are integers modulo 26.
How do I write a program that calculates the determinant of such a matrix?
And a second question:
How do I find the inverse of that matrix (if it exists)?
Regards
If you can use Sagemath (run your code in Sage or import Sage into Python), you can use:
M = Matrix(Zmod(26), your_numpy_matrix)
determinant = M.det()
inverse = M.inverse()
Theoretically, you can compute the whole determinant over the integers and then apply the modulo, but this leads to problems: the intermediate results get huge.
I tried sympy, but did not manage to get a working solution for larger dimensions.
M = sympy.Matrix(your_numpy_matrix)
determinant = M.det() % 26  # huge intermediate results/crashes
inverse = M.inv_mod(26)
numpy will not work, as you will encounter numerical problems.
PS: I am aware, the question is old, but I wanted to document possible solutions.
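For small matrices there is also a plain sympy route via the adjugate: inverse = det^(-1) * adj(M) (mod 26), which only works when gcd(det, 26) == 1. A sketch with a small invertible example key (pow(det, -1, 26) needs Python 3.8+):

import sympy

M = sympy.Matrix([[3, 3], [2, 5]])
det = int(M.det()) % 26               # 9 here
det_inv = pow(det, -1, 26)            # 3; raises ValueError if not invertible mod 26
M_inv = (det_inv * M.adjugate()) % 26
print(M_inv)                          # Matrix([[15, 17], [20, 9]])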

Finding the closest solution to a system of linear equations

I have a matrix A of size MxN where M > N and a vector b of size M. I want to solve Ax = b as closely as possible, given that I know the system is not exactly solvable. In other words, I want to find the x whose image Ax is closest to b. Looking online, it seems like I could reduce A down to its basis (linearly independent columns) and then find the projection of b onto that basis. However, I am not sure how to do this in Python. I understand it would have something to do with QR decomposition, but I am not sure what the next step would be, or how to recover x.
You can compute a least-squares solution via np.linalg.lstsq:
x = np.linalg.lstsq(A, b, rcond=None)[0]
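A minimal sketch (random A and b as stand-ins for an overdetermined system); the vector closest to b is then just A @ x:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))       # M > N
b = rng.standard_normal(8)

x, residuals, rank, singvals = np.linalg.lstsq(A, b, rcond=None)
b_closest = A @ x                     # projection of b onto the column space of A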
