Matching Largest Eigenvalues to Eigenvectors - python

In Python I've calculated the eigenvectors and eigenvalues of my data matrix X through eig(). I'm looking to find the top 2 principal components of the data (U = [u1 u2]). I know the top 2 components are the 2 eigenvectors corresponding to the 2 largest eigenvalues, but I'm not sure how to calculate that information with the data at hand (eigenvalues, eigenvectors, and X).
Eigenvectors and eigenvalues calculated:
Eigenvectors = [[-0.68065502 -0.72805308 -0.08153196]
[-0.71680551 0.68482721 -0.13115467]
[-0.15132287 0.03082853 0.98800354]]
Eigenvalues = [2.84217094e-14 2.15257831e+02 8.95193455e+02]

Given the eigenvalues you got, Eigenvalues = [2.84217094e-14 2.15257831e+02 8.95193455e+02], your two largest eigenvalues are 8.95193455e+02 and 2.15257831e+02.
The sum of your eigenvalues is about 1110.45, which corresponds to 100% of the information.
So your largest eigenvalue, 8.95193455e+02, carries about 80.6% of the information. The second eigenvalue, 2.15257831e+02, carries the remaining 19.4%, and the last eigenvalue, 2.84217094e-14, is numerically zero and can be treated as noise.
As for matching eigenvectors to these eigenvalues: each column of your Eigenvectors matrix is associated with one eigenvalue, and they appear in the same order. For example, your largest eigenvalue, 8.95193455e+02, is the third entry of Eigenvalues, so it is associated with the third column of the eigenvector matrix:
[[-0.08153196]
[-0.13115467]
[ 0.98800354]]
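The column-matching step above can be sketched in NumPy using the values from the question (np.linalg.eig returns eigenvalues in no particular order, with the matching eigenvectors as columns):

```python
import numpy as np

# Values from the question; eig() returns eigenvalues in no particular
# order, with the matching eigenvectors as *columns*.
eigenvalues = np.array([2.84217094e-14, 2.15257831e+02, 8.95193455e+02])
eigenvectors = np.array([[-0.68065502, -0.72805308, -0.08153196],
                         [-0.71680551,  0.68482721, -0.13115467],
                         [-0.15132287,  0.03082853,  0.98800354]])

# Column indices sorted by eigenvalue, largest first
order = np.argsort(eigenvalues)[::-1]

# U = [u1 u2]: the eigenvector columns for the 2 largest eigenvalues
U = eigenvectors[:, order[:2]]
```

Here U's first column is the eigenvector for 8.95193455e+02 and its second column the one for 2.15257831e+02.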

Related

Eigenvectors to transformation matrix

The following are my eigenvalues and eigenvectors.
(array([2.10968950e+08, 1.34010152e+07, 1.29648732e+06]),
matrix([[-0.55967132, -0.66160005, 0.49905249],
[-0.55140969, -0.15224734, -0.82022442],
[-0.61863993, 0.73423847, 0.2796042 ]]))
How do I create a 3x3 transformation matrix whose rows are the three normalized eigenvectors, sorted from largest to smallest eigenvalue?
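One way to sketch this in NumPy, using the decomposition above: the eigenvectors come back as columns, so after sorting and normalizing, transpose to put them into rows.

```python
import numpy as np

# Eigendecomposition from the question (eigenvectors are columns)
vals = np.array([2.10968950e+08, 1.34010152e+07, 1.29648732e+06])
vecs = np.array([[-0.55967132, -0.66160005,  0.49905249],
                 [-0.55140969, -0.15224734, -0.82022442],
                 [-0.61863993,  0.73423847,  0.2796042 ]])

order = np.argsort(vals)[::-1]                      # largest eigenvalue first
sorted_vecs = vecs[:, order]
sorted_vecs /= np.linalg.norm(sorted_vecs, axis=0)  # normalize each column
T = sorted_vecs.T                                   # rows = sorted eigenvectors
```

For these particular values the eigenvalues are already in descending order, so T is simply the normalized transpose of the eigenvector matrix.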

Power method to find eigenvectors of largest eigenvalues

How can I implement the power method in Python to find the eigenvectors corresponding to the two eigenvalues of largest magnitude, ensuring that the second vector remains orthogonal to the first one? For the simple case, the given matrix will be small and symmetric.
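A minimal sketch for a small symmetric matrix: plain power iteration for the dominant eigenpair, then a second iteration that projects out the first eigenvector at every step so the iterate stays orthogonal to it. The matrix A below is just an illustrative example.

```python
import numpy as np

def power_method(A, n_iter=1000):
    """Dominant eigenpair of a symmetric matrix via power iteration."""
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(n_iter):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v, v            # Rayleigh quotient, unit eigenvector

def power_method_deflated(A, u, n_iter=1000):
    """Second eigenpair: keep the iterate orthogonal to the first vector u."""
    v = np.random.default_rng(1).standard_normal(A.shape[0])
    for _ in range(n_iter):
        v = A @ v
        v -= (u @ v) * u           # project out the first eigenvector
        v /= np.linalg.norm(v)
    return v @ A @ v, v

# Example symmetric matrix (eigenvalues 3 + sqrt(3), 3, 3 - sqrt(3))
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
l1, u1 = power_method(A)
l2, u2 = power_method_deflated(A, u1)
```

Re-orthogonalizing on every iteration (rather than only once at the end) keeps rounding errors from re-introducing a component along u1.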

How to find the eigenvectors within a specific eigenvalue range for a large sparse symmetric matrix (162000 by 162000)?

I would like to find all the eigenvectors within a specific eigenvalue range. The matrix I am dealing with is a large sparse symmetric matrix of size (162000, 162000). I am using
Eigenvalue, Eigenvector = scipy.sparse.linalg.eigsh(A, k=10, sigma="105 to 205")
I cannot actually pass sigma = "105 to 205"; I have written it that way just to illustrate my purpose.
Question 1: I want all the eigenvectors in the eigenvalue range of sigma = 105 to 205. How can I do this in python?
Question 2: Is it possible to calculate all the eigenvalues and eigenvectors of this large matrix in Python?
The application is structural dynamics, where the stiffness and mass matrices are both of size (162000, 162000).
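For Question 1, one practical sketch (with a small diagonal matrix standing in for the real 162000x162000 one): eigsh accepts only a single shift sigma, so center it in the target range and grow k until the returned eigenvalues bracket the whole interval, then filter.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Small stand-in matrix with known eigenvalues 0.7, 1.7, ..., 299.7
A = diags(np.arange(300) + 0.7).tocsc()

lo, hi = 105.0, 205.0
sigma = (lo + hi) / 2.0        # single shift: eigsh finds eigenvalues near it
k = 20
while True:
    vals, vecs = eigsh(A, k=k, sigma=sigma)
    if vals.min() < lo and vals.max() > hi:
        break                  # the returned window now covers [lo, hi]
    k *= 2                     # not wide enough yet: ask for more pairs

keep = (vals >= lo) & (vals <= hi)
in_range_vals = vals[keep]
in_range_vecs = vecs[:, keep]
```

Note that shift-invert mode factorizes A - sigma*I, which for a 162000x162000 matrix is itself expensive; computing *all* eigenpairs of a matrix that size (Question 2) is generally not feasible with dense methods, and eigsh requires k < N.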

Find eigenvectors with specific eigenvalue of sparse matrix in python

I have a large sparse matrix and I want to find its eigenvectors with a specific eigenvalue. The documentation for scipy.sparse.linalg.eigs says of the required argument k:
"k is the number of eigenvalues and eigenvectors desired. k must be smaller than N-1. It is not possible to compute all eigenvectors of a matrix".
The problem is that I don't know how many eigenvectors corresponding to the eigenvalue I want. What should I do in this case?
I'd suggest using singular value decomposition (SVD) instead. SciPy provides a function for this (from scipy.sparse.linalg import svds) that can handle sparse matrices. You can find the eigenvalues (in this case singular values) and eigenvectors as follows:
U, Sigma, VT = svds(X, k=n_components, tol=tol)
where X can be a sparse CSR matrix, and U and VT are the sets of left and right singular vectors corresponding to the singular values in Sigma. Here you can control the number of components: I'd say start with a small n_components first and then increase it. You can rank your Sigma values and look at their distribution; there will be a few large ones that drop off quickly. From the singular values you can set a threshold on how many eigenvectors you want to keep.
If you want to use scikit-learn, the class sklearn.decomposition.TruncatedSVD lets you do what I explained.
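A minimal sketch of the svds workflow described above, on a hypothetical random sparse matrix (the shapes and n_components are placeholders):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Hypothetical sparse CSR data matrix standing in for X
X = sparse_random(1000, 200, density=0.01, format='csr', random_state=0)

n_components = 10
U, Sigma, VT = svds(X, k=n_components)

# svds returns singular values in *ascending* order; flip for ranking
Sigma = Sigma[::-1]
U = U[:, ::-1]
VT = VT[::-1, :]
```

After inspecting the sorted Sigma, a drop-off point suggests how many components are worth keeping.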

Calculating eigen values of very large sparse matrices in python

I have a very large sparse matrix representing the transition matrix of a Markov chain, i.e. each row of the matrix sums to one, and I'm interested in finding the largest eigenvalue that is smaller than one, together with its corresponding eigenvector. I know the eigenvalues are bounded in the interval [-1, 1] and are all real (non-complex).
I am trying to calculate them using SciPy's scipy.sparse.linalg.eigs function; however, one of its parameters is the number of eigenvalues/eigenvectors to estimate, and every time I've increased that number, the number of eigenvalues which are exactly one has grown as well.
Needless to say, I am using the which parameter with the value 'LR' in order to get the k eigenvalues of largest real part, with k being the number of values to estimate.
Does anyone have an idea how to solve this problem (finding the largest eigenvalue smaller than one and its corresponding eigenvector)?
I agree with @pv. If your matrix S were symmetric, you could see I - S as the Laplacian matrix of a graph. The number of zero eigenvalues of I - S (i.e., the dimension of the eigenspace associated with eigenvalue 1 of S) is the number of connected components of that graph. As a start, you could check the number of connected components of the graph whose similarity matrix is I - S*S', e.g. with scipy.sparse.csgraph.connected_components.
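As a rough sketch of the component-counting idea, here applied directly to the graph of a toy block-diagonal transition matrix (rather than the similarity matrix suggested above): each disjoint irreducible block contributes one copy of eigenvalue 1, so counting strongly connected components recovers its multiplicity.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Toy transition matrix with two disjoint irreducible blocks:
# eigenvalue 1 therefore has multiplicity 2
S = csr_matrix(np.array([[0.5, 0.5, 0.0, 0.0],
                         [0.4, 0.6, 0.0, 0.0],
                         [0.0, 0.0, 0.7, 0.3],
                         [0.0, 0.0, 0.2, 0.8]]))

n_comp, labels = connected_components(S, directed=True, connection='strong')
```

Knowing n_comp tells you how many of the 'LR' eigenvalues returned by eigs to skip before reaching the first eigenvalue strictly smaller than one.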
