How to find eigenvalues and eigenvectors in Python without the NumPy library?

I am trying to find eigenvalues and eigenvectors in Python without NumPy, but I don't know how to calculate det(A - λI) to find the eigenvalues.

If you really want to avoid using numpy.linalg (why? This must be some sort of a learning problem, surely), you can e.g. implement a variant of the QR algorithm: factorize A into the product Q @ R, multiply R @ Q, and repeat until convergence (see the sketch below).
However, if it is indeed a learning exercise, your best bet is to pick up a textbook on numerical linear algebra.
And if it is not, then keep in mind that you are very unlikely to outperform (by any definition of performance) the tried-and-tested LAPACK routines that numpy.linalg wraps.
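To make that concrete, here is a minimal pure-Python sketch of the unshifted QR iteration (no NumPy), using modified Gram-Schmidt for the factorization. It is illustrative only: production implementations use Householder reflections, shifts, and deflation, all of which this sketch omits.

    # Minimal sketch of the unshifted QR algorithm in pure Python.
    # Real implementations add Householder reflections, shifts and deflation.

    def qr_decompose(A):
        """Factor a square matrix A (list of lists) into Q, R via
        modified Gram-Schmidt. Assumes A has full rank."""
        n = len(A)
        cols = [[A[i][j] for i in range(n)] for j in range(n)]  # columns of A
        q_cols = []
        R = [[0.0] * n for _ in range(n)]
        for j in range(n):
            v = cols[j][:]
            for i, q in enumerate(q_cols):
                R[i][j] = sum(q[k] * v[k] for k in range(n))
                v = [v[k] - R[i][j] * q[k] for k in range(n)]
            R[j][j] = sum(x * x for x in v) ** 0.5
            q_cols.append([x / R[j][j] for x in v])
        Q = [[q_cols[j][i] for j in range(n)] for i in range(n)]
        return Q, R

    def mat_mul(A, B):
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def qr_eigenvalues(A, iterations=500):
        """Repeat A <- R @ Q; for matrices with real eigenvalues (e.g.
        symmetric A) the diagonal converges to the eigenvalues."""
        for _ in range(iterations):
            Q, R = qr_decompose(A)
            A = mat_mul(R, Q)
        return [A[i][i] for i in range(len(A))]

    print(qr_eigenvalues([[2.0, 1.0], [1.0, 2.0]]))  # ~[3.0, 1.0]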

Related

Large sparse matrix inversion in Python

I'm currently working on a least-squares algorithm in Python for some geodetic calculations.
I chose Python (which is not the fastest language) and it works pretty well. However, in my code I have to invert large sparse symmetric matrices (non-positive definite, so I can't use Cholesky). I currently use np.linalg.inv(), which is based on LU decomposition.
I'm pretty sure there is room for optimization in terms of speed.
I thought about using the Cuthill-McKee algorithm to rearrange the matrix and then take its inverse. Do you have any ideas or advice?
Thank you very much for your answers!
The good news is that if you're using any of the popular Python libraries for linear algebra (namely, NumPy), the speed of Python really doesn't matter for the math – it's all done natively inside the library.
For example, when you write matrix_prod = matrix_a @ matrix_b, that doesn't trigger a bunch of Python loops to multiply the two matrices; it uses NumPy's internal implementation (which, I think, calls the Fortran BLAS/LAPACK libraries).
The scipy.sparse.linalg module has you covered when it comes to solving sparse systems of equations (which is effectively what you do when you apply the inverse of a matrix). If you want to use sparse matrices, that's your way to go – note that there are matrices that are sparse in the mathematical sense (i.e., most entries are 0), and matrices that are stored in a sparse format, which means you avoid storing millions of zeros. NumPy itself doesn't have sparsely stored matrices, but SciPy does.
If your matrix is densely stored but mathematically sparse, i.e. you're using standard NumPy ndarrays to store it, then you won't get any faster by implementing anything in Python yourself. The theoretical complexity gains would be outweighed by the practical slowness of Python compared to the highly optimized built-in inversion.
Inverting a sparse matrix usually destroys the sparsity. Also, you should never invert a matrix if you can avoid it at all! For a sparse matrix, solving the linear system Ax = b for x, with A your matrix and b a known vector, is much faster than computing A⁻¹! So,
I'm currently working on a least-squares algorithm in Python for some geodetic calculations.
since least squares says you don't need the inverse matrix, simply don't calculate it, ever. The point of least squares is finding a solution that's as close as it gets, even if your matrix isn't invertible – which can very well be the case for sparse matrices!
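A minimal sketch of what that looks like in code (the matrix here is a made-up stand-in for your system):

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Hypothetical sparse system standing in for the real one.
    n = 1000
    A = sp.random(n, n, density=0.001, format="csc") + sp.eye(n, format="csc")
    b = np.random.rand(n)

    x = spla.spsolve(A, b)   # direct sparse solve; no inverse is ever formed
    # For least-squares problems (rectangular or singular A), use an
    # iterative least-squares solver instead:
    x_ls = spla.lsqr(A, b)[0]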

Where can I find documents describing exact algorithms many numerical linear algebra packages use?

My (general) question is whether I can find documents describing the exact algorithms that many numerical linear algebra packages use, and if so, how (or where) I can find them.
For example, I want to know which algorithm the Python function numpy.linalg.eigh uses. This reference doesn't describe or even mention specific algorithms, but says that it (internally) uses the LAPACK routines _syevd and _heevd. So where can I find out which algorithms _syevd and _heevd use?
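(For what it's worth, those routines are also exposed directly through scipy.linalg.lapack, so you can at least check that, e.g., dsyevd reproduces numpy.linalg.eigh – a small illustrative check:)

    import numpy as np
    from scipy.linalg import lapack

    A = np.array([[2.0, 1.0], [1.0, 2.0]])
    w, v, info = lapack.dsyevd(A)   # "_syevd" specialized to float64
    w_np, v_np = np.linalg.eigh(A)
    print(np.allclose(w, w_np))    # True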
I know that you can use, e.g., Jacobi methods for the eigenvalue decomposition of symmetric matrices, and, e.g., Lanczos methods for the eigenvalue decomposition of positive definite matrices. But I can't find documents describing specifically which algorithms each of those functions uses.
Thank you!

Fast matrix inversion without a package

Assume that I have a square matrix M and that I would like to invert it.
I am trying to use the fractions mpq class from gmpy2 as the entries of my matrix M. If you are not familiar with these fractions, they are functionally similar to Python's built-in fractions module. The only problem is that no package will invert my matrix unless I take the entries out of fraction form, and I require the numbers and the answers in fraction form. So I will have to write my own function to invert M.
There are known algorithms that I could program, such as Gaussian elimination. However, performance is an issue, so my question is as follows:
Is there a computationally fast algorithm that I could use to calculate the inverse of a matrix M?
Is there anything else you know about these matrices? For example, for symmetric positive definite matrices, Cholesky decomposition lets you invert faster than the standard Gauss-Jordan method you mentioned.
For general matrix inversion, the Strassen algorithm will give you an asymptotically faster result than Gauss-Jordan, but slower than Cholesky.
It seems like you want exact results, but if you're fine with approximate inverses, there are algorithms that approximate the inverse much faster than the previously mentioned ones.
However, you might want to ask yourself whether you need the entire matrix inverse for your specific application. Depending on what you are doing, it might be faster to use another matrix property instead. In my experience, computing the matrix inverse is often an unnecessary step.
I hope that helps!
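For concreteness, here is a minimal sketch of exact Gauss-Jordan inversion over rationals, using the standard-library fractions.Fraction (gmpy2's mpq should drop in the same way):

    from fractions import Fraction

    def invert(M):
        """Invert a square matrix via Gauss-Jordan elimination with exact
        rational arithmetic. Raises StopIteration if M is singular."""
        n = len(M)
        # Augment M with the identity: [M | I].
        aug = [[Fraction(x) for x in row]
               + [Fraction(int(i == j)) for j in range(n)]
               for i, row in enumerate(M)]
        for col in range(n):
            # Any nonzero pivot works in exact arithmetic.
            pivot = next(r for r in range(col, n) if aug[r][col] != 0)
            aug[col], aug[pivot] = aug[pivot], aug[col]
            p = aug[col][col]
            aug[col] = [x / p for x in aug[col]]
            for r in range(n):
                if r != col and aug[r][col] != 0:
                    f = aug[r][col]
                    aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
        return [row[n:] for row in aug]

    print(invert([[1, 2], [3, 5]]))  # [[-5, 2], [3, -1]], all Fractions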

Limits on Complex Sparse Linear Algebra in Python

I am prototyping numerical algorithms for linear programming and matrix manipulation with very large (100,000 x 100,000) very sparse (0.01% fill) complex (a+b*i) matrices with symmetric structure and asymmetric values. I have been happily using MATLAB for seven years, but have been receiving suggestions to switch to Python since it is open source.
I understand that there are many different Python numeric packages available, but does Python have any limits for handling these types of matrices and solving linear optimization problems in real time at high speed? Does Python have a sparse complex matrix solver comparable in speed to MATLAB's backslash A\b operator? (I have written Gaussian elimination and LU codes, but A\b is always at least 5 times faster than anything else I have tried and scales roughly linearly with matrix size.)
Your sparse solvers were probably slower than A\b at least in part due to the interpreter overhead of MATLAB scripts. Internally, MATLAB uses UMFPACK's multifrontal solver for the lu() function and the A\b operator (see the UMFPACK manual).
You should try the scipy.sparse package together with scipy.sparse.linalg for the assortment of solvers available. In particular, the spsolve() function has an option to call the UMFPACK routines instead of the built-in SuperLU solver.
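A minimal sketch (the matrix contents are made up; use_umfpack=True only takes effect if the scikit-umfpack package is installed, otherwise the built-in SuperLU solver is used):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    # Small stand-in for a large, very sparse complex system.
    n = 10_000
    re = sp.random(n, n, density=1e-4, format="csc")
    im = sp.random(n, n, density=1e-4, format="csc")
    A = re + 1j * im + sp.eye(n, format="csc")  # identity keeps A nonsingular
    b = np.random.rand(n) + 1j * np.random.rand(n)

    x = spsolve(A, b, use_umfpack=True)  # SciPy's analogue of MATLAB's A\b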
... solving linear optimization problems in real time at high speed?
Since you have time constraints you might want to consider iterative solvers instead of direct ones.
You can get an idea of the performance of the SuperLU implementation in spsolve, and of the iterative solvers available in SciPy, from another post on this site.

Diagonalizing large sparse matrix with Python/Scipy

I am working with a large (complex) Hermitian matrix and I am trying to diagonalize it efficiently using Python/Scipy.
Using the eigh function from scipy.linalg, it takes about 3 s to generate and diagonalize a roughly 800x800 matrix and compute all the eigenvalues and eigenvectors.
The eigenvalues in my problem are symmetrically distributed around 0 and range from roughly -4 to 4. I only need the eigenvectors corresponding to the negative eigenvalues, though, which turns the range I am looking to calculate into [-4,0).
My matrix is sparse, so it's natural to use the scipy.sparse package and its functions to calculate the eigenvectors via eigsh, since it uses much less memory to store the matrix.
I can also tell the program to calculate only the negative eigenvalues via which='SA'. The problem with this method is that it now takes roughly 40 s to compute half of the eigenvalues/eigenvectors. I know that the ARPACK algorithm is very inefficient at computing the smallest eigenvalues, but I can't think of any other way to compute all the eigenvectors that I need.
Is there any way to speed up the calculation? Maybe by using shift-invert mode? I will have to do many, many diagonalizations, and eventually increase the size of the matrix as well, so I am a bit lost at the moment.
I would really appreciate any help!
This question is probably better asked on http://scicomp.stackexchange.com, as it's more of a general math question than something specific to SciPy or programming.
If you need all the eigenvectors, it does not make much sense to use ARPACK. Since you need N/2 eigenvectors, your memory requirement is at least N*N/2 floats, and probably more in practice. Using eigh requires N*N + 3*N floats. eigh is thus within a factor of 2 of the minimum requirement, so the easiest solution is to stick with it.
If you can process the eigenvectors "online", so that you can throw the previous one away before processing the next, there are other approaches; look at the answers to similar questions on scicomp.
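One more option worth knowing about: recent SciPy versions (1.5+) let the dense eigh restrict the computation to part of the spectrum, which matches the negative-eigenvalue requirement here. A minimal sketch:

    import numpy as np
    from scipy.linalg import eigh

    n = 800
    H = np.random.rand(n, n) + 1j * np.random.rand(n, n)
    H = (H + H.conj().T) / 2          # Hermitian test matrix

    # Compute only eigenpairs with eigenvalues in the half-open
    # interval (-inf, 0]; this dispatches to LAPACK's ?heevr driver.
    w, v = eigh(H, subset_by_value=(-np.inf, 0))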
