Numpy: Raise diagonalizable square matrix to infinite power - python

Consider a Markovian process with a diagonalizable transition matrix A such that A=PDP^-1, where D is a diagonal matrix with eigenvalues of A, and P is a matrix whose columns are eigenvectors of A.
To compute, for each state, the likelihood of ending up in each absorbing state, I'd like to raise the transition matrix to the power of n, with n approaching infinity:
A^n=PD^nP^-1
What is the pythonic way of doing this in Numpy? I could naively compute the eigenvalues and eigenvectors of A and raise the eigenvalues to the power of infinity. Since I assume A is a transition matrix, the only eigenvalues are those equal to one (which remain one) and those between 0 and 1, which become zero (inspired by this answer):
import numpy as np
from scipy import linalg
# compute eigenvalues and left eigenvectors
eigenvalues, leftEigenvectors = linalg.eig(transitionMatrix, right=False, left=True)
# for stationary distribution, eigenvalues and vectors are real (up to numerical precision)
eigenvalues = eigenvalues.real
leftEigenvectors = leftEigenvectors.real
# create a filter to collect the eigenvalues close to one
absoluteTolerance = 1e-10
mask = abs(eigenvalues - 1) < absoluteTolerance
# raise eigenvalues to the power of infinity
eigenvalues[mask] = 1
eigenvalues[~mask] = 0
D_raised = np.diag(eigenvalues)
A_raised = leftEigenvectors @ D_raised @ linalg.inv(leftEigenvectors)
Would this be the recommended approach, i.e., one that is both numerically stable and efficient?
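As a cross-check (my own sketch, using a made-up 3-state absorbing chain and the right eigendecomposition rather than the left one from the question), the eigenvalue-thresholding idea can be compared against brute-force matrix powering:

```python
import numpy as np
from scipy import linalg

# Hypothetical absorbing chain: states 0 and 1 are absorbing,
# state 2 moves to 0 or 1 or stays put.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.3, 0.5, 0.2]])

# Right eigendecomposition: A = P D P^-1
eigenvalues, P = linalg.eig(A)
eigenvalues = eigenvalues.real

# Raise the eigenvalues "to the power of infinity":
# keep those equal to 1, send those inside the unit circle to 0
mask = np.abs(eigenvalues - 1) < 1e-10
eigenvalues[mask] = 1
eigenvalues[~mask] = 0

A_inf = (P @ np.diag(eigenvalues) @ linalg.inv(P)).real
print(A_inf)

# Cross-check against brute-force powering
assert np.allclose(A_inf, np.linalg.matrix_power(A, 200))
```

Row 2 of `A_inf` then holds the absorption probabilities from the transient state (0.3/0.8 and 0.5/0.8 for this particular matrix).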

Related

Matrix Multiplication should converge, but it doesn't

I'm trying to calculate the eigenvalues of multiplication between matrices of this kind:
import numpy as np
μ=1.5
σ=0.5
m=np.random.normal(μ,σ)
P=[[-1, m, 0.1, 0.1],[1,0,0,0],[0,1,0,0],[0,0,1,0]] #matrix
n=200
for i in range(n):
    m=np.random.normal(μ,σ)
    T=[[-1, m, 0.1, 0.1],[1,0,0,0],[0,1,0,0],[0,0,1,0]]
    P=np.dot(T,P)
l,v=np.linalg.eig(T)
λ,w=np.linalg.eig(P)
print(l)
print(λ)
Since three of the eigenvalues of the matrix T are smaller than 1 in modulus, I expect something similar from the eigenvalues of the matrix product P; in particular, that the three eigenvalues smaller than 1 will correspond to eigenvalues of P which decrease and converge to 0 as n increases. In fact, this is true until n=40. Then it doesn't work anymore.
I'm sure there is something wrong in the algorithm and not in the math, because for σ=0, P would be the product of n identical matrices with eigenvalues smaller than 1. But the eigenvalues of P diverge.
Your T[i] is not guaranteed to have eigenvalues less than 1. Add an assertion and you will see when it violates the assumption:
import numpy as np
μ=1.5
σ=0.5
m=np.random.normal(μ,σ)
P=[[-1, m, 0.1, 0.1],[1,0,0,0],[0,1,0,0],[0,0,1,0]] #matrix
n=200
for i in range(n):
    m=np.random.normal(μ,σ)
    T=[[-1, m, 0.1, 0.1],[1,0,0,0],[0,1,0,0],[0,0,1,0]]
    assert np.all(abs(np.linalg.eigvals(T)) < 1)
    P=np.dot(T,P)
l,v=np.linalg.eig(T)
λ,w=np.linalg.eig(P)
print(abs(l))
print(λ)
Since this is a stochastic system, it can still converge even if the constraint is sometimes violated, but showing that requires a careful analysis of the expected values of the sequence.
The matrix structure resembles the controllable canonical form in linear dynamic systems [1].
But since the matrix T sometimes has eigenvalues larger than 1, the reasoning you suggested does not apply in this case.
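A quick Monte Carlo sketch (my own, reusing the question's parameters) can estimate how often a draw of T violates the |λ| < 1 assumption:

```python
import numpy as np

# Count how often the randomly drawn T has spectral radius >= 1
# (parameters taken from the question; trial count is mine)
rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.5
trials = 1000
violations = 0
for _ in range(trials):
    m = rng.normal(mu, sigma)
    T = np.array([[-1, m, 0.1, 0.1],
                  [1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0]])
    if np.max(np.abs(np.linalg.eigvals(T))) >= 1:
        violations += 1
print(f"{violations}/{trials} draws violate the assumption")
```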

In which scenario would one use another matrix than the identity matrix for finding eigenvalues?

The scipy.linalg.eigh function can take two matrices as arguments: the matrix a, whose eigenvalues and eigenvectors we will find, and the optional matrix b, which is taken to be the identity matrix when left out.
In what scenario would someone like to use this b matrix?
Some more context: I am trying to use xdawn covariances from the pyRiemann package. This uses the scipy.linalg.eigh function with a covariance matrix a and a baseline covariance matrix b. You can find the implementation here. This yields an error, as the b matrix in my case is not positive definite and thus not usable in the scipy.linalg.eigh function. Removing this matrix and just using the identity matrix, however, solves this problem and yields relatively nice results... The problem is that I do not really understand what I changed, and maybe I am doing something I should not be doing.
This is the code from the pyRiemann package I am using (modified to avoid using functions defined in other parts of the package):
# X are samples (EEG data), y are labels
# shape of X is (1000, 64, 2459)
# shape of y is (1000,)
from scipy.linalg import eigh
Ne, Ns, Nt = X.shape
tmp = X.transpose((1, 2, 0))
b = np.matrix(sklearn.covariance.empirical_covariance(tmp.reshape(Ne, Ns * Nt).T))
for c in self.classes_:
    # Prototyped response for each class
    P = np.mean(X[y == c, :, :], axis=0)
    # Covariance matrix of the prototyped response & signal
    a = np.matrix(sklearn.covariance.empirical_covariance(P.T))
    # Spatial filters
    evals, evecs = eigh(a, b)
    # and I am now using the following instead, disregarding the b matrix:
    # evals, evecs = eigh(a)
If A and B are both symmetric matrices, that doesn't necessarily imply that inv(A)*B is a symmetric matrix. So if I had to solve a generalized eigenvalue problem Ax = lambda Bx, I would use eig(A,B) rather than eig(inv(A)*B), so that the symmetry isn't lost.
One practical application is finding the natural frequencies of a dynamic mechanical system from differential equations of the form M (d²x/dt²) = Kx, where M is a positive definite matrix known as the mass matrix, K is the stiffness matrix, x is the displacement vector, and d²x/dt² is the acceleration vector, the second derivative of the displacement vector. To find the natural frequencies, x can be substituted with x0 sin(ωt), where ω is the natural frequency. The equation then reduces to Kx = ω²Mx. Now, one could use eig(inv(K)*M), but that might break the symmetry of the resulting matrix, so I would use eig(K,M) instead.
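A minimal sketch of that use case, with a made-up two-mass system (the M and K values are mine, not from the answer):

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF mass-spring chain: M x'' = -K x.
# The generalized problem K x = w^2 M x goes to eigh(K, M),
# which exploits the symmetry of K and positive definiteness of M.
M = np.diag([2.0, 1.0])            # mass matrix (positive definite)
K = np.array([[3.0, -1.0],
              [-1.0, 1.0]])        # stiffness matrix (symmetric)

omega_sq, modes = eigh(K, M)       # solves K v = w^2 M v
natural_frequencies = np.sqrt(omega_sq)
print(natural_frequencies)
```

Each column of `modes` satisfies the generalized eigenvalue equation, which can be verified directly with `K @ v` against `w2 * (M @ v)`.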
In the generalized problem (A - lambda B)x = 0, the eigenvector x is not expressed in the same basis as in the standard problem with the covariance matrix alone.
If the matrix B is not positive definite, it means there are vectors that can be flipped by your B.
I hope this was helpful.

Find eigenvectors with specific eigenvalue of sparse matrix in python

I have a large sparse matrix and I want to find its eigenvectors with specific eigenvalue. In scipy.sparse.linalg.eigs, it says the required argument k:
"k is the number of eigenvalues and eigenvectors desired. k must be smaller than N-1. It is not possible to compute all eigenvectors of a matrix".
The problem is that I don't know how many eigenvectors corresponding to the eigenvalue I want. What should I do in this case?
I'd suggest using Singular Value Decomposition (SVD) instead. SciPy provides a function for this that can handle sparse matrices: from scipy.sparse.linalg import svds. You can find the singular values (which here play the role of eigenvalues) and singular vectors as follows:
U, Sigma, VT = svds(X, k=n_components, tol=tol)
where X can be a sparse CSR matrix, and U and VT are the sets of left and right singular vectors corresponding to the singular values in Sigma. Here you can control the number of components. I'd say start with a small n_components first and then increase it. You can rank your Sigma values and look at the distribution of singular values you have: there will be a few large ones that drop off quickly. You can then set a threshold on how many singular vectors you want to keep.
If you want to use scikit-learn, the class sklearn.decomposition.TruncatedSVD lets you do what I explained.
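A minimal sketch of this workflow, with made-up matrix sizes:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Hypothetical data: a random sparse CSR matrix (sizes are made up)
X = sparse_random(1000, 300, density=0.01, format="csr", random_state=0)

k = 10  # number of singular triplets to compute; must be < min(X.shape)
U, Sigma, VT = svds(X, k=k)

# Sort singular values in descending order for inspection
order = np.argsort(Sigma)[::-1]
Sigma, U, VT = Sigma[order], U[:, order], VT[order, :]
print(Sigma)
```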

Find SVD of a symmetric matrix in Python

I know np.linalg.svd(A) would return the SVD of matrix A.
A=u * np.diag(s) * v
However, if it is a symmetric matrix, you only need one unitary matrix:
A=v.T * np.diag(s) * v
In R, we can use La.svd(A,nu=0), but is there any function in Python to accelerate the SVD process for a symmetric matrix?
In the SVD of a symmetric matrix, U=V would be valid only if the eigenvalues of A are all positive. Otherwise V would be almost equal to U but not exactly: the columns of V corresponding to negative eigenvalues would be the negatives of the corresponding columns of U. And so A=v.T * np.diag(s) * v is valid only if A has all its eigenvalues positive.
Assuming the symmetric matrix is a real matrix, the U matrix of the singular value decomposition is the eigenvector matrix itself, the singular values are the absolute values of the eigenvalues of the matrix, and the V matrix is the same as U except that the columns corresponding to negative eigenvalues are the negated columns of U.
And so, by finding out the eigenvalues and the eigenvectors, SVD can be computed.
Python code:
import numpy as np
def svd_symmetric(A):
    [s, u] = np.linalg.eig(A)  # eigenvalues and eigenvectors
    v = u.copy()
    v[:, s < 0] = -u[:, s < 0]  # negate the columns for negative eigenvalues
    s = abs(s)
    return [u, s, v.T]

n = 5
A = np.matrix(np.random.rand(n, n))  # a matrix of random numbers
A = A.T + A  # making it a symmetric matrix
[u, s, vT] = svd_symmetric(A)
print(u * np.diag(s) * vT)
This prints the same matrix as A.
(Note: I don't know whether it works for complex matrices or not.)
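A quick way to verify this construction (my own sketch, using np.linalg.eigh instead of eig since the input is real and symmetric) is to rebuild A and compare the singular values against np.linalg.svd:

```python
import numpy as np

# Build a random real symmetric matrix
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = A + A.T  # symmetrize

# eigh is the symmetric-specialized solver; eigenvalues are guaranteed real
w, U = np.linalg.eigh(A)
V = U.copy()
V[:, w < 0] = -U[:, w < 0]  # negate columns for negative eigenvalues
s = np.abs(w)

# The decomposition reproduces A...
assert np.allclose(U @ np.diag(s) @ V.T, A)
# ...and the singular values agree with np.linalg.svd (after sorting)
assert np.allclose(np.sort(s)[::-1], np.linalg.svd(A, compute_uv=False))
```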

numpy: Possible for zero determinant matrix to be inverted?

By definition, a square matrix that has a zero determinant should not be invertible. However, for some reason, after generating a covariance matrix, I take the inverse of it successfully, but taking the determinant of the covariance matrix ends up with an output of 0.0.
What could be potentially going wrong? Should I not trust the determinant output, or should I not trust the inverse covariance matrix? Or both?
Snippet of my code:
cov_matrix = np.cov(data)
adjusted_cov = cov_matrix + weight*np.identity(cov_matrix.shape[0]) # add small weight to ensure cov_matrix is non-singular
inv_cov = np.linalg.inv(adjusted_cov) # runs with no error, outputs a matrix
det = np.linalg.det(adjusted_cov) # ends up being 0.0
The numerical inversion of matrices does not involve computing the determinant. (Cramer's formula for the inverse is not practical for large matrices.) So, the fact that determinant evaluates to 0 (due to insufficient precision of floats) is not an obstacle for the matrix inversion routine.
Following up on the comments by BobChao87, here is a simplified test case (Python 3.4 console, numpy imported as np)
A = 0.2*np.identity(500)
np.linalg.inv(A)
Output: a matrix with 5 on the main diagonal, which is the correct inverse of A.
np.linalg.det(A)
Output: 0.0, because the determinant (0.2^500) is too small to be represented in double precision.
A possible solution is a kind of pre-conditioning (here, just rescaling): before computing the determinant, multiply the matrix by a factor that will make its entries closer to 1 on average. In my example, np.linalg.det(5*A) returns 1.
Of course, using the factor of 5 here is cheating, but np.linalg.det(3*A) also returns a nonzero value (about 1.19e-111). If you try np.linalg.det(2**k*A) for k running through modest positive integers, you will likely hit one that will return nonzero. Then you will know that the determinant of the original matrix was approximately 2**(-k*n) times the output, where n is the matrix size.
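The scaling search described above can be sketched as a loop (my own code, reusing the 0.2*identity example), recovering the log-determinant of the original matrix from the scaled one:

```python
import numpy as np

# Matrix whose determinant (0.2^500) underflows double precision
A = 0.2 * np.identity(500)
n = A.shape[0]

# Scale by powers of 2 until the determinant becomes representable
for k in range(64):
    d = np.linalg.det(2**k * A)
    if d != 0 and np.isfinite(d):
        break

# det(A) = 2**(-k*n) * det(2**k * A); report it in log10 to avoid underflow
log10_det = np.log10(d) - k * n * np.log10(2)
print(f"k={k}, log10(det(A)) ~= {log10_det:.1f}")
```

For this matrix the true value is 500*log10(0.2), roughly -349.5, which the rescaled computation recovers even though np.linalg.det(A) itself returns 0.0.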
