I have a large sparse matrix and I want to find its eigenvectors corresponding to a specific eigenvalue. The documentation for scipy.sparse.linalg.eigs says about the required argument k:
"k is the number of eigenvalues and eigenvectors desired. k must be smaller than N-1. It is not possible to compute all eigenvectors of a matrix".
The problem is that I don't know how many eigenvectors correspond to the eigenvalue I want. What should I do in this case?
I'd suggest using the Singular Value Decomposition (SVD) instead. SciPy has a function for this, from scipy.sparse.linalg import svds, and it can handle sparse matrices. You can find the eigenvalues (in this case they will be singular values) and eigenvectors as follows:
U, Sigma, VT = svds(X, k=n_components, tol=tol)
where X can be a sparse CSR matrix, and U and VT are the sets of left and right singular vectors corresponding to the singular values in Sigma. Here you can control the number of components. I'd start with a small n_components first and then increase it. You can sort your Sigma and look at the distribution of singular values you have: there will be a few large ones, and then they drop off quickly. You can put a threshold on the singular values to decide how many vectors to keep.
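A minimal sketch of this workflow (the matrix X, the value of n_components, and the 10% cutoff are placeholders for illustration, not values from the question):

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Placeholder sparse CSR matrix; substitute your own data here
X = sparse_random(1000, 500, density=0.01, format='csr', random_state=0)

n_components = 20                      # start small, increase if needed
U, Sigma, VT = svds(X, k=n_components)

# svds returns singular values in ascending order; sort them descending
order = np.argsort(Sigma)[::-1]
Sigma, U, VT = Sigma[order], U[:, order], VT[order, :]
print(Sigma)                           # inspect the drop-off in the spectrum

# Keep only components whose singular value exceeds some threshold,
# e.g. 10% of the largest one (an arbitrary illustrative cutoff)
keep = Sigma > 0.1 * Sigma[0]
U_kept, Sigma_kept, VT_kept = U[:, keep], Sigma[keep], VT[keep, :]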
If you want to use scikit-learn, there is a class sklearn.decomposition.TruncatedSVD that lets you do what I explained.
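For example (a rough sketch; the 50 components are an arbitrary choice and X stands for the same placeholder sparse matrix as above):

from sklearn.decomposition import TruncatedSVD

svd = TruncatedSVD(n_components=50, random_state=0)
X_reduced = svd.fit_transform(X)       # works directly on a sparse CSR matrix
print(svd.singular_values_)            # inspect the spectrum to choose a cutoff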
Related
Here is what I am currently doing: S(v) is a d-by-d positive definite matrix determined by the d-dimensional vector v.
I want to optimize the maximum diagonal entry of the inverse of S(v), subject to the entrywise sum of v being equal to 1.
(See https://mathoverflow.net/questions/416095/cvxpy-maximum-diagonal-entry-of-the-inverse-matrix for details)
This is a convex problem, but as far as I know CVXPY does not support taking the inverse of a matrix that contains a variable.
Using matrix_frac with a canonical vector e (which returns e @ inv(S(v)) @ e) and taking the maximum over the d canonical vectors will work, but it effectively creates d inversions of a d-by-d matrix, one for each diagonal entry, which is really heavy.
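For illustration, a sketch of that matrix_frac formulation might look like this (the A[i] matrices defining an affine S(v) = v[0]*A[0] + ... + v[d-1]*A[d-1] are made up here, and the objective is assumed to be minimized):

import cvxpy as cp
import numpy as np

d = 5
rng = np.random.default_rng(0)
# Made-up positive definite matrices defining an affine S(v) = sum_i v[i] * A[i]
A = [np.eye(d) + 0.1 * (M @ M.T) for M in rng.standard_normal((d, d, d))]

v = cp.Variable(d)
S = sum(v[i] * A[i] for i in range(d))           # affine in v

# matrix_frac(e_j, S) == e_j @ inv(S) @ e_j, the j-th diagonal entry of inv(S)
diag_entries = [cp.matrix_frac(np.eye(d)[:, j], S) for j in range(d)]
prob = cp.Problem(cp.Minimize(cp.maximum(*diag_entries)), [cp.sum(v) == 1])
prob.solve()
print(prob.value, v.value)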
Any other good solution?
How can I implement the power method in Python to find the eigenvectors corresponding to the two eigenvalues of largest magnitude, while ensuring that the second vector stays orthogonal to the first one? For the simple case, the given matrix will be small and symmetric.
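For illustration, one possible sketch (not from the question) uses deflation: during the second run, project out the first eigenvector at every iteration, which is valid for a symmetric matrix:

import numpy as np

def power_method_top2(A, num_iter=2000, seed=0):
    # Power iteration for the two largest-magnitude eigenpairs of a small
    # symmetric matrix; the second iterate is re-orthogonalized against the
    # first eigenvector at every step.
    rng = np.random.default_rng(seed)
    n = A.shape[0]

    v1 = rng.standard_normal(n)
    v1 /= np.linalg.norm(v1)
    for _ in range(num_iter):          # first eigenvector: plain power iteration
        v1 = A @ v1
        v1 /= np.linalg.norm(v1)
    lam1 = v1 @ A @ v1                 # Rayleigh quotient

    v2 = rng.standard_normal(n)
    for _ in range(num_iter):          # second eigenvector: deflated iteration
        v2 = A @ v2
        v2 -= (v1 @ v2) * v1           # keep v2 orthogonal to v1
        v2 /= np.linalg.norm(v2)
    lam2 = v2 @ A @ v2
    return (lam1, v1), (lam2, v2)

# Small symmetric example matrix
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(power_method_top2(A))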
I would like to find all the eigenvectors within a specific eigenvalue range. The matrix I am dealing with is a large sparse symmetric matrix of size (162000, 162000). I am using
Eigenvalue, Eigenvector = scipy.sparse.linalg.eigsh(A, k=10, sigma="""105 to 205""")
I know I cannot actually use sigma = 105 to 205; I have written it that way only to convey my purpose.
Question 1: I want all the eigenvectors in the eigenvalue range of sigma = 105 to 205. How can I do this in python?
Question 2: Is it possible to calculate all the eigenvalues and eigenvectors of this large matrix in Python?
The application is structural dynamics, where the stiffness and mass matrices are both of size (162000, 162000).
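For Question 1, one common workaround is a shift-invert around the middle of the target range, discarding the eigenpairs that fall outside it. The following is only a sketch: k is a guessed batch size, and shift-invert requires factorizing a shifted matrix, which is expensive at this problem size.

import numpy as np
from scipy.sparse.linalg import eigsh

lo, hi = 105.0, 205.0
k = 50                                             # guessed batch size
vals, vecs = eigsh(A, k=k, sigma=0.5 * (lo + hi), which='LM')  # shift-invert mode
inside = (vals >= lo) & (vals <= hi)
vals_in, vecs_in = vals[inside], vecs[:, inside]
# If all k eigenvalues fall inside [lo, hi], the range likely holds more than k
# eigenpairs: increase k and repeat. For the generalized structural-dynamics
# problem, the mass matrix can be passed as eigsh(K, k=k, M=M, sigma=..., which='LM').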
I have a very large sparse matrix which represents a transition matrix in a Markov chain, i.e. the sum of each row of the matrix equals one, and I'm interested in finding the largest eigenvalue that is smaller than one, together with its corresponding eigenvector. I know that the eigenvalues are bounded in the interval [-1, 1] and that they are all real (non-complex).
I am trying to calculate them using SciPy's scipy.sparse.linalg.eigs function; however, one of the parameters of the function is the number of eigenvalues/eigenvectors to estimate, and every time I increased that number, the number of eigenvalues which are exactly one grew as well.
Needless to say, I am using the which parameter with the value 'LR' in order to get the k largest eigenvalues, with k being the number of values to estimate.
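For illustration, such an attempt might look roughly like this (P and k are placeholders, not values from the question):

import numpy as np
from scipy.sparse.linalg import eigs

k = 20                                   # guessed number of eigenpairs to request
vals, vecs = eigs(P, k=k, which='LR')
vals = vals.real                         # the eigenvalues are known to be real here
below_one = vals < 1.0 - 1e-10           # drop the (possibly repeated) eigenvalue 1
# if nothing below one was returned, increase k and try again
idx = int(np.argmax(np.where(below_one, vals, -np.inf)))
second_val, second_vec = vals[idx], vecs[:, idx]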
Does anyone have an idea how to solve this problem (finding the largest eigenvalue smaller than one and its corresponding eigenvector)?
I agree with @pv. If your matrix S were symmetric, you could view I - S as a Laplacian matrix. The number of connected components of the corresponding graph is the number of zero eigenvalues of that matrix (i.e., the dimension of the eigenspace of S associated with eigenvalue 1). For a start, you could check the number of connected components of the graph whose similarity matrix is I - S*S', e.g. with scipy.sparse.csgraph.connected_components.
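For illustration, such a check might look like this (a sketch assuming S is the sparse matrix and, as stated, symmetric):

from scipy.sparse.csgraph import connected_components

# Each connected component of the graph whose adjacency pattern is S contributes
# one eigenvalue equal to 1, so this count is the multiplicity of eigenvalue 1.
n_ones, labels = connected_components(S, directed=False)
print(n_ones)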
By definition, a square matrix that has a zero determinant should not be invertible. However, for some reason, after generating a covariance matrix, I take the inverse of it successfully, but taking the determinant of the covariance matrix ends up with an output of 0.0.
What could be potentially going wrong? Should I not trust the determinant output, or should I not trust the inverse covariance matrix? Or both?
Snippet of my code:
cov_matrix = np.cov(data)
adjusted_cov = cov_matrix + weight*np.identity(cov_matrix.shape[0]) # add small weight to ensure cov_matrix is non-singular
inv_cov = np.linalg.inv(adjusted_cov) # runs with no error, outputs a matrix
det = np.linalg.det(adjusted_cov) # ends up being 0.0
The numerical inversion of matrices does not involve computing the determinant. (Cramer's formula for the inverse is not practical for large matrices.) So the fact that the determinant evaluates to 0 (it underflows, being too small to represent as a float) is not an obstacle for the matrix inversion routine.
Following up on the comments by BobChao87, here is a simplified test case (Python 3.4 console, numpy imported as np)
A = 0.2*np.identity(500)
np.linalg.inv(A)
Output: a matrix with 5 on the main diagonal, which is the correct inverse of A.
np.linalg.det(A)
Output: 0.0, because the determinant (0.2^500) is too small to be represented in double precision.
A possible solution is a kind of pre-conditioning (here, just rescaling): before computing the determinant, multiply the matrix by a factor that will make its entries closer to 1 on average. In my example, np.linalg.det(5*A) returns 1.
Of course, using the factor of 5 here is cheating, but np.linalg.det(3*A) also returns a nonzero value (about 1.19e-111). If you try np.linalg.det(2**k*A) for k running through modest positive integers, you will likely hit one that will return nonzero. Then you will know that the determinant of the original matrix was approximately 2**(-k*n) times the output, where n is the matrix size.
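For illustration, that search over scaling factors might look like this (the loop bound of 64 is arbitrary):

import numpy as np

A = 0.2 * np.identity(500)
n = A.shape[0]
for k in range(64):
    d = np.linalg.det(2**k * A)
    if d != 0.0 and np.isfinite(d):
        # det(2**k * A) = 2**(k*n) * det(A), so recover log|det(A)| on a log scale
        log_det_A = np.log(abs(d)) - k * n * np.log(2)
        print(k, log_det_A)            # log|det(A)| = 500*log(0.2), about -804.7
        break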