Python sparse matrix inverse and Laplacian calculation

I have two sparse matrices, A (an affinity matrix) and D (a diagonal matrix), each of dimension 100000*100000. I have to compute the Laplacian matrix L = D^(-1/2)*A*D^(-1/2). I am using scipy's CSR format for the sparse matrices.
I didn't find any method to compute the inverse of a sparse matrix. How do I find L and the inverse of a sparse matrix? Also, is it efficient to do this in Python, or should I call a MATLAB function to calculate L?

In general the inverse of a sparse matrix is not sparse, which is why you won't find sparse matrix inverters in linear algebra libraries. Since D is diagonal, D^(-1/2) is trivial, and the Laplacian matrix calculation is thus trivial to write down: L has the same sparsity pattern as A, but each value A_{ij} is multiplied by (D_i*D_j)^(-1/2).
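For instance, here is a minimal sketch of that diagonal scaling in scipy (the 3x3 matrix and the vector d are toy stand-ins for the real A and diag(D)):

import numpy as np
import scipy.sparse as sparse

# Toy affinity matrix A in CSR format and the diagonal of D as a 1-D array d.
A = sparse.csr_matrix(np.array([[0., 1., 0.],
                                [1., 0., 2.],
                                [0., 2., 0.]]))
d = np.asarray(A.sum(axis=1)).ravel()         # row sums, standing in for diag(D)

D_inv_sqrt = sparse.diags(1.0 / np.sqrt(d))   # D^(-1/2) is trivial: D is diagonal

# L keeps the sparsity pattern of A; each A_ij is scaled by (D_i*D_j)^(-1/2).
L = D_inv_sqrt * A * D_inv_sqrt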
Regarding the inverse, the standard approach is always to avoid calculating the inverse itself. Instead of calculating L^-1, repeatedly solve Lx = b for the unknown x. Any good sparse solver will let you decompose L once (which is expensive) and then back-substitute (which is cheap) for each value of b.
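A sketch of that decompose-once, back-substitute-many pattern using scipy's factorized (the 3x3 system is a placeholder for the real L):

import numpy as np
import scipy.sparse as sparse
from scipy.sparse.linalg import factorized

# Placeholder sparse system; factorized() expects CSC format.
L = sparse.csc_matrix(np.array([[4., 1., 0.],
                                [1., 3., 1.],
                                [0., 1., 2.]]))

solve = factorized(L)                  # LU decomposition, done once (expensive)
for b in (np.array([1., 0., 0.]),
          np.array([0., 1., 0.])):
    x = solve(b)                       # cheap back-substitution per right-hand side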

Related

How to determine adjoint of matrix in non-standard inner product space (in python)

I'm trying to compute the adjoint of a 10 000 x 10 000 sparse matrix in a non-standard inner product space.
(For the physicists out there: the matrix is a representation of the Fokker-Planck operator and in this case is not self-adjoint.)
The inner product space in which I'd like to compute the adjoint is weighted by the eigenvector corresponding to the first eigenvalue of the matrix (which corresponds to the equilibrium probability density).
Neither scipy.sparse.linalg.LinearOperator nor numpy.matrix.H seems to have an option for a different inner product space.
Computing the adjoint of a matrix A turns out to be quite simple.
If your inner product space is weighted by a matrix M, you simply have to compute M^-1 A^T M, where ^T is the (conjugate) transpose and M^-1 is the inverse of M.
In code, for a sparse matrix A this would be:
import scipy.sparse as sparse

def computeAdjoint(A, measure):
    """Compute the adjoint of A in our inner product space by multiplying
    by the inverse of the kernel of integration/weighting function on the
    left and by the weighting function itself on the right."""
    Minv = sparse.diags(1. / measure)   # M^-1 (measure is a 1-D array of positive weights)
    M = sparse.diags(measure)           # M
    return Minv * A.transpose() * M     # M^-1 A^T M
(Also note that * here stands for matrix multiplication; A.multiply(M) would be component-wise multiplication.)
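As a quick sanity check, the adjoint should satisfy <Ax, y>_M = <x, A_adj y>_M. Here is a small test reusing computeAdjoint from above (the random matrix and weights are made up for illustration):

import numpy as np
import scipy.sparse as sparse

rng = np.random.default_rng(0)
n = 5
A = sparse.random(n, n, density=0.5, format="csr", random_state=0)
measure = rng.random(n) + 0.1           # strictly positive weights
A_adj = computeAdjoint(A, measure)

x, y = rng.random(n), rng.random(n)
M = np.diag(measure)
lhs = (A @ x) @ M @ y                   # <A x, y>_M
rhs = x @ M @ (A_adj @ y)               # <x, A_adj y>_M
print(np.isclose(lhs, rhs))             # True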

Find eigenvectors with specific eigenvalue of sparse matrix in python

I have a large sparse matrix and I want to find its eigenvectors with a specific eigenvalue. In scipy.sparse.linalg.eigs, the documentation says of the required argument k:
"k is the number of eigenvalues and eigenvectors desired. k must be smaller than N-1. It is not possible to compute all eigenvectors of a matrix".
The problem is that I don't know how many eigenvectors correspond to the eigenvalue I want. What should I do in this case?
I'd suggest using the Singular Value Decomposition (SVD) instead. scipy provides svds (from scipy.sparse.linalg import svds), which can handle sparse matrices. You can find the eigenvalues (here, singular values) and eigenvectors as follows:
U, Sigma, VT = svds(X, k=n_components, tol=tol)
where X can be a sparse CSR matrix, and U and VT are the sets of left and right singular vectors corresponding to the singular values in Sigma. Here you can control the number of components: I'd say start with a small n_components first and then increase it. You can sort Sigma and look at the distribution of singular values; there will be some large ones that drop off quickly, and you can set a threshold on the singular values to decide how many vectors to keep.
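A small sketch of that workflow (the random matrix, k, and the 10% cutoff are all illustrative choices):

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

X = sparse_random(1000, 1000, density=0.01, format="csr", random_state=0)
U, Sigma, VT = svds(X, k=20)            # the 20 largest singular values
Sigma = Sigma[::-1]                     # svds returns them in ascending order
keep = Sigma > 0.1 * Sigma[0]           # e.g. keep values above 10% of the largest
print(Sigma)
print(keep.sum(), "components kept")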
If you want to use scikit-learn, there is a class sklearn.decomposition.TruncatedSVD that lets you do what I explained.
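For example (parameters are illustrative):

from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

X = sparse_random(1000, 1000, density=0.01, format="csr", random_state=0)
svd = TruncatedSVD(n_components=20, random_state=0)
X_reduced = svd.fit_transform(X)        # rows projected onto the top components
print(svd.singular_values_)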

Multiplying a sparse matrix with sparse vector (efficient way)

I have a sparse matrix M of size N*N with Nd non-zero elements and a sparse vector A of size N*1 with Na non-zero elements (N is large).
I want to calculate the matrix product B = MA.
I use the sparse matrix representation in scipy.sparse: P = csr_matrix(M). Then I do B = P.dot(A).
I know the complexity of this operation is O(Nd). It seems that A is treated as a dense vector in the calculation, because when I change Na, the computation time of the multiplication does not change. But the vector A is also sparse. Is there an efficient way to perform this multiplication in less computation time?
In my simulation, M is fixed. The vectors A differ, but they are all sparse.
Thank you very much.
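For reference, a minimal reproduction of the setup described above (sizes and densities are toy values):

from scipy.sparse import csr_matrix, random as sparse_random

N = 10000
M = sparse_random(N, N, density=1e-3, format="csr", random_state=0)  # ~Nd nonzeros
A = sparse_random(N, 1, density=1e-4, format="csr", random_state=1)  # ~Na nonzeros

P = csr_matrix(M)
B = P.dot(A)    # the multiplication in question; B is sparse when A is sparse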

Linearly dependent rows: Huge Sparse Matrix

I have a huge sparse matrix A
<5000x5000 sparse matrix of type '<type 'numpy.float64'>'
with 14979 stored elements in Compressed Sparse Column format>
for which I need to delete linearly dependent rows. I have a prior that j rows will be dependent. I need to:
1. find out which sets of rows are linearly dependent
2. for each set, keep one arbitrary row and remove the others
I was trying to follow this question, but the corresponding method for sparse matrices, scipy.sparse.linalg.eigs, says that
k: The number of eigenvalues and eigenvectors desired. k must be smaller than N. It is not possible to compute all eigenvectors of a matrix.
How should I proceed?
scipy.sparse.linalg.eigs uses implicitly restarted Arnoldi iteration. The algorithm is meant for finding a few eigenvectors quickly, and can't find all of them.
5000x5000, however, is not that large. Have you considered just using numpy.linalg.eig or scipy.linalg.eig? It will probably take a few minutes, but it isn't completely infeasible. You don't gain anything by using a sparse matrix, but I'm not sure there's an algorithm for efficiently finding all eigenvectors of a sparse matrix.
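A sketch of that dense route, shrunk to a toy size (the tolerance is an assumption): eigenvectors of A.T with numerically zero eigenvalue give coefficient vectors c with c @ A = 0, i.e. vanishing linear combinations of the rows of A.

import numpy as np
import scipy.sparse as sparse

A = sparse.random(200, 200, density=0.01, format="csc", random_state=0)
w, V = np.linalg.eig(A.toarray().T)     # dense eigendecomposition of A^T
dependent = np.abs(w) < 1e-10           # eigenvalues that are numerically zero
coeffs = V[:, dependent]                # each column c satisfies c @ A ~ 0
print(dependent.sum(), "vanishing row combinations found")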

proximity matrix in python

What is the best way to compute the distance/proximity matrix for very large sparse vectors?
For example, you are given the following design matrix, where each row is a 68771-dimensional sparse vector.
designMatrix
<5830x68771 sparse matrix with 1229041 stored elements in Compressed Sparse Row format>
Have you tried the routines in scipy.spatial.distance?
http://docs.scipy.org/doc/scipy/reference/spatial.distance.html
If this forces you to go to a dense representation, then you may be better off rolling your own, depending on the density of nonzero elements. You could squeeze out the zeros while retaining a map between the new and original indices, calculate the pairwise distances on the remaining nonzero elements and then use the indexing to map things back.
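A minimal sketch of the scipy.spatial.distance route, which does require densifying first (sizes here are toy values):

from scipy.sparse import random as sparse_random
from scipy.spatial.distance import pdist, squareform

X = sparse_random(100, 500, density=0.02, format="csr", random_state=0)
# pdist needs a dense array; feasible only if the densified matrix fits in memory.
D = squareform(pdist(X.toarray(), metric="euclidean"))  # 100 x 100 distance matrix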
