Fast matrix inversion without a package - python

Assume that I have a square matrix M that I would like to invert.
I am trying to use the mpq fraction class from gmpy2 for the entries of my matrix M. If you are not familiar with it, it is functionally similar to Python's built-in fractions module. The problem is that no existing package will invert my matrix unless I take the entries out of fraction form, and I need both the inputs and the answers to stay in fraction form. So I will have to write my own function to invert M.
There are known algorithms that I could program, such as Gaussian elimination. However, performance is an issue, so my question is as follows:
Is there a computationally fast algorithm that I could use to calculate the inverse of a matrix M?

Is there anything else you know about these matrices? For example, for symmetric positive definite matrices, Cholesky decomposition allows you to invert faster than the standard Gauss-Jordan method you mentioned.
For general matrix inversions, the Strassen algorithm will give you a faster result than Gauss-Jordan but slower than Cholesky.
It seems like you want exact results, but if you're fine with approximate inversions, then there are algorithms which approximate the inverse much faster than the previously mentioned algorithms.
However, you might want to ask yourself whether you need the entire matrix inverse for your specific application. Depending on what you are doing, it might be faster to use another matrix property. In my experience, computing the matrix inverse is often an unnecessary step.
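If you do end up writing your own, here is a minimal sketch of exact Gauss-Jordan inversion using Python's built-in fractions.Fraction (gmpy2's mpq should drop in the same way). It assumes M is square and invertible, and the matrix values are made up:

    from fractions import Fraction  # gmpy2.mpq works the same way

    def invert(M):
        """Exact Gauss-Jordan inversion of a square matrix of Fractions."""
        n = len(M)
        # Augment M with the identity matrix on the right.
        A = [list(row) + [Fraction(i == j) for j in range(n)]
             for i, row in enumerate(M)]
        for col in range(n):
            # Pivoting: find a row at or below the diagonal with a nonzero pivot.
            pivot = next(r for r in range(col, n) if A[r][col] != 0)
            A[col], A[pivot] = A[pivot], A[col]
            scale = Fraction(1) / A[col][col]
            A[col] = [x * scale for x in A[col]]
            # Eliminate this column from every other row.
            for r in range(n):
                if r != col and A[r][col] != 0:
                    factor = A[r][col]
                    A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
        # The right half of the augmented matrix is now M^-1.
        return [row[n:] for row in A]

    M = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]]
    print(invert(M))  # [[Fraction(-2, 1), Fraction(1, 1)], [Fraction(3, 2), Fraction(-1, 2)]]

This is O(n³) like any elimination method; with exact fractions, the real cost is that numerators and denominators grow as the elimination proceeds, and that growth can dominate the running time.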
I hope that helps!

Related

Large sparse matrix inversion on Python

I'm currently working with a least-squares algorithm in Python for some geodetic calculations.
I chose Python (which is not the fastest language), and it works pretty well. However, my code has to invert a large sparse symmetric matrix (not positive definite, so Cholesky is out). I currently use np.linalg.inv(), which is based on LU decomposition.
I'm pretty sure there is room for optimization in terms of speed.
I thought about using the Cuthill-McKee algorithm to rearrange the matrix before taking its inverse. Do you have any ideas or advice?
Thank you very much for your answers!
The good news is that if you're using any of the popular Python libraries for linear algebra (namely, numpy), the speed of Python really doesn't matter for the math – it's all done natively inside the library.
For example, when you write matrix_prod = matrix_a @ matrix_b, that's not triggering a bunch of Python loops to multiply the two matrices, but numpy's internal implementation (which relies on optimized BLAS/LAPACK routines).
The scipy.sparse.linalg module has you covered when it comes to solving sparse systems of equations (which is what you should be doing rather than inverting a matrix). If you want to use sparse matrices, that's your way to go – note that there are matrices that are sparse in the mathematical sense (i.e., most entries are 0) and matrices that are stored in a sparse format, which means you avoid storing millions of zeros. Numpy itself doesn't have sparse storage, but scipy does.
If your matrix is densely stored but mathematically sparse, i.e. you're using standard numpy ndarrays to hold it, then you won't gain any speed by implementing anything in Python yourself. The theoretical complexity gains will be outweighed by the practical slowness of Python compared to highly optimized native inversion.
Inverting a sparse matrix usually destroys its sparsity. Also, you should never invert a matrix if you can avoid it at all! For a sparse matrix, solving the linear system Ax = b for x, with A your matrix and b a known vector, is much faster than computing A⁻¹! So,
I'm currently working with a least-squares algorithm in Python for some geodetic calculations.
since least squares says you don't need the inverse matrix, simply don't calculate it, ever. The point of least squares is finding a solution that's as close as it gets even if your matrix isn't invertible – which can very well be the case for sparse matrices!
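As an illustration (the matrix values here are made up), a minimal sketch of solving instead of inverting with scipy.sparse:

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import spsolve

    # Toy system; in practice A would come from the least-squares setup.
    A = csr_matrix(np.array([[4.0, 1.0, 0.0],
                             [1.0, 3.0, 1.0],
                             [0.0, 1.0, 2.0]]))
    b = np.array([1.0, 2.0, 3.0])

    # Solves A x = b directly; A^-1 is never formed.
    x = spsolve(A, b)

If you have many right-hand sides, scipy.sparse.linalg.splu lets you factorize A once and reuse the factorization for each solve.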

how to get the inverse of distance matrix?

I have a huge distance matrix.
Example: 10000 × 10000.
Is there an efficient way to find its inverse?
I've tried numpy.linalg.inv(), but it's too slow.
Is there a more effective way?
You can try using Singular Value Decomposition https://numpy.org/doc/stable/reference/generated/numpy.linalg.svd.html
Inverting the decomposed form might take less time.
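For instance, a rough sketch of inverting via the SVD factors, assuming the matrix is square and none of its singular values are zero:

    import numpy as np

    A = np.random.rand(100, 100)   # stand-in for the distance matrix
    U, s, Vt = np.linalg.svd(A)
    # A = U @ diag(s) @ Vt, so the inverse is Vt.T @ diag(1/s) @ U.T.
    A_inv = (Vt.T / s) @ U.T

Whether this actually beats numpy's LU-based inv depends on the matrix; computing the SVD itself is not cheap.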
You probably don't actually need the inverse matrix.
There are a lot of numeric techniques that let people solve matrix problems without computing the inverse. Unfortunately, you have not described what your problem is, so there is no way to know which of those techniques might be useful to you.
For such a large matrix (10k × 10k), you probably want to look for some kind of iterative technique. Alternatively, it might be better to look for some way to avoid constructing such a large matrix in the first place – e.g., try using the source data in some other way.
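Purely as an illustration of the iterative route, here is what a GMRES solve looks like in scipy. The matrix here is synthetic and made diagonally dominant so the iteration converges; a real distance matrix may need a preconditioner:

    import numpy as np
    from scipy.sparse.linalg import gmres

    n = 2000
    rng = np.random.default_rng(0)
    A = rng.random((n, n)) + n * np.eye(n)  # synthetic, diagonally dominant
    b = rng.random(n)

    # Solve A x = b iteratively; no 10000 x 10000 inverse is ever formed.
    x, info = gmres(A, b, atol=1e-10)
    assert info == 0  # info == 0 means the iteration converged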

Python: Is there a matlab-like backslash operator?

Matlab and Julia have the backslash operator that solves linear systems. I don't really know what Matlab does internally, but Julia does not compute the inverse; it computes the effect the inverse has on a given vector, which is computationally cheaper.
I have a numpy sparse matrix and I want to apply its pseudo-inverse to a vector. Does Python have to compute the pseudo-inverse first or is there a backslash-like operator I can use?
Edit: In a sense I want to solve a linear system Ax = b. However, the matrix A does not have full rank and the vector b is not in A's range, so the system does not have a solution. In practice I want the vector x that minimises the norm of Ax − b, which is exactly what the pseudo-inverse gives. My question is whether there is a function that will give me that without having to compute the pseudo-inverse first.
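A sketch of what is being asked for, using scipy.sparse.linalg.lsqr, which minimises the norm of Ax − b without ever forming the pseudo-inverse (the matrix here is a made-up rank-deficient example):

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import lsqr

    # Rank-deficient A (second row is twice the first), b outside A's range.
    A = csr_matrix(np.array([[1.0, 2.0],
                             [2.0, 4.0],
                             [0.0, 0.0]]))
    b = np.array([1.0, 0.0, 1.0])

    # lsqr minimises ||Ax - b||, like applying pinv(A) to b,
    # without computing pinv(A) itself.
    x = lsqr(A, b)[0]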

Diagonalizing large sparse matrix with Python/Scipy

I am working with a large (complex) Hermitian matrix and I am trying to diagonalize it efficiently using Python/Scipy.
Using the eigh function from scipy.linalg, it takes about 3 s to generate and diagonalize a roughly 800×800 matrix and compute all the eigenvalues and eigenvectors.
The eigenvalues in my problem are symmetrically distributed around 0 and range from roughly -4 to 4. I only need the eigenvectors corresponding to the negative eigenvalues, though, which turns the range I am looking to calculate into [-4,0).
My matrix is sparse, so it's natural to use the scipy.sparse package and its functions to calculate the eigenvectors via eigsh, since it uses much less memory to store the matrix.
I can also tell the program to calculate only the negative eigenvalues via which='SA'. The problem with this method is that it now takes roughly 40 s to compute half the eigenvalues/eigenvectors. I know that the ARPACK algorithm is very inefficient when computing small eigenvalues, but I can't think of any other way to compute all the eigenvectors that I need.
Is there any way to speed up the calculation? Maybe by using shift-invert mode? I will have to do many, many diagonalizations and eventually increase the size of the matrix as well, so I am a bit lost at the moment.
I would really appreciate any help!
This question is probably better asked on http://scicomp.stackexchange.com, as it's more of a general math question than one specific to Scipy or programming.
If you need all the eigenvectors, it does not make very much sense to use ARPACK. Since you need N/2 eigenvectors, your memory requirement is at least N*N/2 floats, and probably more in practice. Using eigh requires N*N + 3*N floats. eigh is then within a factor of 2 of the minimum requirement, so the easiest solution is to stick with it.
If you can process the eigenvectors "on-line" so that you can throw the previous one away before processing the next, there are other approaches; look at the answers to similar questions on scicomp.
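If the matrix still fits in memory densely, one more option: recent SciPy's eigh can restrict the computation to part of the spectrum. A sketch, assuming SciPy >= 1.5 for subset_by_value and using a synthetic Hermitian matrix:

    import numpy as np
    from scipy.linalg import eigh

    n = 800
    rng = np.random.default_rng(1)
    H = rng.random((n, n)) + 1j * rng.random((n, n))
    H = H + H.conj().T                            # synthetic Hermitian matrix
    # Only eigenpairs with eigenvalues in (-inf, 0] are computed.
    w, v = eigh(H, subset_by_value=(-np.inf, 0))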

How do I do matrix computations in python without rounding?

I have some integer matrices of moderate size (a few hundred rows). I need to solve equations of the form Ax = b where b is a standard basis vector and A is one of my matrices. I have been using numpy.linalg.lstsq for this purpose, but the rounding errors end up being too significant.
How can I carry out an exact symbolic computation?
(PS I don't really need the code to be efficient; I'm more concerned about ease of coding.)
If your only option is to use free tools written in Python, sympy might work, but it could well be simpler to use Mathematica.
Note that if you're serious about your comment that you require your solution vector to be integer, then you're looking at something called the "integer least squares problem", which is believed to be NP-hard. There are some heuristic solvers, but it all gets very complicated.
The mpmath library has support for arbitrary-precision floating-point numbers, and supports matrix algebra: http://mpmath.googlecode.com/svn/tags/0.17/doc/build/matrices.html
Using sympy to do the computation exactly is then a second option.
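A quick sketch of the sympy route, which keeps everything in exact rational arithmetic (the matrix here is a made-up stand-in, assumed invertible):

    from sympy import Matrix

    A = Matrix([[2, 1],
                [4, 3]])      # stand-in for one of the integer matrices
    b = Matrix([1, 0])        # a standard basis vector
    x = A.solve(b)            # exact; entries stay sympy Rationals
    print(x)                  # Matrix([[3/2], [-2]])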
