Matlab and Julia have the backslash operator for solving linear systems. I don't know exactly what Matlab does internally, but Julia does not compute the inverse; it computes the effect the inverse has on a given vector, which is computationally cheaper.
I have a numpy sparse matrix and I want to apply its pseudo-inverse to a vector. Does Python have to compute the pseudo-inverse first or is there a backslash-like operator I can use?
Edit: In a sense I want to solve a linear system Ax = b. However, the matrix A does not have full rank and the vector b is not in A's range, so the system does not have an exact solution. In practice I want the vector x that minimises the norm of Ax - b, which is exactly what applying the pseudo-inverse gives. My question is whether there is a function that will give me that without having to compute the pseudo-inverse first.
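For illustration, here is a minimal sketch of the kind of thing I am after, using scipy.sparse.linalg.lsqr (which, as far as I understand, iteratively minimises the norm of Ax - b without ever forming the pseudo-inverse); the rank-deficient matrix here is just a made-up example:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

# Made-up rank-deficient example: the third column is the sum of the first two,
# so A does not have full column rank.
A = sparse.csr_matrix(np.array([[1., 0., 1.],
                                [0., 1., 1.],
                                [1., 1., 2.],
                                [2., 0., 2.]]))
b = np.array([1., 2., 0., 5.])   # deliberately not in A's range

# lsqr minimises ||Ax - b||_2 using only matrix-vector products,
# i.e. it applies the effect of the pseudo-inverse without forming it.
x = lsqr(A, b)[0]

print(x)
print(np.linalg.norm(A @ x - b))  # residual norm of the least-squares fit
```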
Related
I'm currently working with a least-squares algorithm in Python, regarding some geodetic calculations.
I chose Python (which is not the fastest) and it works pretty well. However, in my code I have inversions of a large sparse symmetric matrix (not positive definite, so I can't use Cholesky) to execute. I currently use np.linalg.inv(), which uses the LU decomposition method.
I'm pretty sure there is room for optimization in terms of speed.
I thought about using the Cuthill-McKee algorithm to rearrange the matrix before taking its inverse. Do you have any ideas or advice?
Thank you very much for your answers!
Good news is that if you're using any of the popular python libraries for linear algebra (namely, numpy), the speed of python really doesn't matter for the math – it's all done natively inside the library.
For example, when you write matrix_prod = matrix_a @ matrix_b, that's not triggering a bunch of Python loops to multiply the two matrices, but using numpy's internal implementation (which I think uses the Fortran BLAS/LAPACK libraries).
The scipy.sparse.linalg module has you covered when it comes to solving sparsely stored systems of equations (which is what you would otherwise use the inverse of a matrix for). If you want to use sparse matrices, that's your way to go – note that there are matrices that are sparse in the mathematical sense (i.e., most entries are 0), and matrices that are stored in a sparse format, which means you avoid storing millions of zeros. Numpy itself doesn't have sparsely stored matrices, but scipy does.
If your matrix is densely stored but mathematically sparse, i.e. you're using standard numpy ndarrays to store it, then you won't gain any speed by implementing anything in Python. The theoretical complexity gains will be outweighed by the practical slowness of Python compared to a highly optimized compiled inversion.
Inverting a sparse matrix usually destroys the sparsity. Also, you should never invert a matrix if you can avoid it at all! For a sparse matrix, solving the linear equation system Ax = b (with A your matrix and b a known vector) for x is much faster than computing A⁻¹ first! So,
I'm currently working with a least-squares algorithm in Python, regarding some geodetic calculations.
since least squares says you don't need the inverse matrix, simply don't calculate it, ever. The point of least squares is finding a solution that's as close as it gets, even if your matrix isn't invertible, which can very well be the case for sparse matrices!
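A minimal sketch of what that looks like in practice (the matrix below is just a toy stand-in for your sparse symmetric system; spsolve and splu are scipy's sparse direct solvers):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve, splu

# Toy sparse symmetric (indefinite) system standing in for the normal equations.
N = sparse.csc_matrix(np.array([[ 4.,  1., 0.],
                                [ 1., -2., 1.],
                                [ 0.,  1., 3.]]))
b = np.array([1., 0., 2.])

# Instead of x = np.linalg.inv(N) @ b, solve N x = b directly:
x = spsolve(N, b)

# If the same matrix is reused with many right-hand sides, factor it once:
lu = splu(N)          # sparse LU factorisation (SuperLU)
x_again = lu.solve(b)
```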
In most of the answers linked below for complex matrix differential equations, the odeintw package has been suggested.
https://stackoverflow.com/a/45970853/7952027
https://stackoverflow.com/a/26320130/7952027
https://stackoverflow.com/a/26747232/7952027
https://stackoverflow.com/a/26582411/7952027
I want to know the theory behind the manipulations done in the code of odeintw.
For example, why one has to build that banded Jacobian, and the idea behind functions such as _complex_to_real_jac and _transform_banded_jac.
The answer is in the comments.
A complex matrix space is also a real vector space, so a complex matrix can be represented by an array of real numbers while preserving this linear structure. All odeintw has to do is wrap odeint (or rather, the right-hand-side function given to it) with this basis transformation, forward and backward.
Now if you want to speed up the computation by providing the Jacobian, it also needs to be translated into this real form. In the method of lines, for example, you get banded Jacobians, and the translation has to preserve that property for efficiency reasons.
M-o-L is a common method for solving PDEs of the heat or wave equation type. Essentially, it discretizes the space dimension(s) while leaving the time dimension continuous, resulting in a high-dimensional ODE system in the time direction. The resulting Jacobians are non-zero only at nearest-neighbor interactions, thus very sparse, and have a banded structure if the discretization is done on a regular grid.
lutz-lehmann
The nontrivial part arises when you want to specify the Jacobian via the Dfun argument. The complex Jacobian requires that the right-hand side of the equation be complex differentiable (i.e. holomorphic). For example, the function f(z) = z* (the complex conjugate) is not complex differentiable, so you can't specify a complex Jacobian for the equation dz/dt = z*. You would have to rewrite it as a system of real equations. (This example is in the docstring of odeintw.)
If the right-hand side is complex differentiable, then you can give a complex Jacobian via Dfun.
warren-weckesser
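To make the comments concrete, here is a minimal sketch (not odeintw's actual code) of the real-form rewrite for the dz/dt = z* example mentioned above: write z = x + iy, so that dx/dt = x and dy/dt = -y, and integrate the real system with odeint:

```python
import numpy as np
from scipy.integrate import odeint

# dz/dt = conj(z) is not holomorphic, so it has no complex Jacobian.
# Write z = x + i*y; then conj(z) = x - i*y, giving the real system
#   dx/dt =  x
#   dy/dt = -y
def rhs_real(u, t):
    x, y = u
    return [x, -y]

t = np.linspace(0.0, 1.0, 11)
u0 = [1.0, 0.5]                # z(0) = 1 + 0.5j split into (x, y)
u = odeint(rhs_real, u0, t)

z = u[:, 0] + 1j * u[:, 1]     # reassemble the complex solution
```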
I'm attempting to write a simple implementation of the Newton-Raphson method in Python. I've already done so using the SymPy library; however, what I'm working on now will (ultimately) end up running in an environment where only Numpy is available.
For those unfamiliar with the algorithm, it works (in my case) as follows:
I have a system of symbolic equations, which I "stack" to form a matrix F. The unknowns are X, Y, Z, T (which I wish to determine). Some additional values are initially unknown until passed to my solver, which substitutes these now-known values for the corresponding variables in the symbolic expressions.
Now, the Jacobian matrix (J) of F is computed. This, too, is a matrix of symbolic expressions.
Now, I iterate over some range (max_iter). With each iteration, I form a matrix A by substituting the current estimates (starting with some initial values) for the unknowns X, Y, Z, T into the Jacobian J. Similarly, I form a vector b by substituting the current estimates for X, Y, Z, T into F.
I then obtain new estimates by solving the matrix equation Ax = b for x. This vector x holds dT, dX, dY, dZ. I then add these to the current estimates for T, X, Y, Z, and iterate again.
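In NumPy terms, one iteration of that loop looks roughly like this (a self-contained sketch with a toy two-variable system standing in for my symbolic F and J):

```python
import numpy as np

# Toy stand-in for the real problem: two equations in two unknowns.
def F_num(est):
    x, y = est
    return np.array([x**2 + y**2 - 4.0,   # each entry is one residual equation
                     x - y])

def J_num(est):
    x, y = est
    return np.array([[2.0 * x, 2.0 * y],
                     [1.0, -1.0]])

est = np.array([1.0, 0.5])            # initial estimates
for _ in range(50):                   # max_iter
    A = J_num(est)                    # Jacobian at the current estimates
    b = -F_num(est)                   # negative residual vector
    delta = np.linalg.solve(A, b)     # holds the updates (dX, dY here)
    est = est + delta
    if np.linalg.norm(delta) < 1e-12:
        break

print(est)   # converges to (sqrt(2), sqrt(2))
```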
Thus far, I've found my largest issue to be computing the Jacobian matrix. I only need to do this once; however, it will be different depending on the coefficients fed to the solver (these are not unknowns, but they are only known once fed to the solver, so I can't simply hard-code the Jacobian).
While I'm not terribly familiar with Numpy, I know that it offers numpy.gradient. I'm not sure, however, that this is the same as SymPy's .jacobian.
How can the Jacobian matrix be found, either in "pure" Python, or with Numpy?
EDIT:
Should it be useful to you, more information on the problem can be found [here]. It can be formulated a few different ways; however, (as of now) I'm writing it as 4 equations of the form:
\sqrt{(X - x_i)^2 + (Y - y_i)^2 + (Z - z_i)^2} = c \cdot (t_i - T)
where X, Y, Z, and T are unknown.
This describes the solution to a localization problem, where we know (a) the location of n >= 4 observers in a 3-dimensional space, (b) the time at which each observer "saw" some signal, and (c) the velocity of the signal. The goal is to determine the coordinates of the signal source X,Y,Z (and, as a side effect, the time of emission, T).
Notice that I've tried (many) other approaches to solving this problem, and all leads point toward a combination of Newton-Raphson with regression.
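For reference, one way I could imagine getting the Jacobian numerically in plain NumPy is a forward-difference approximation of the residuals above (observer positions, arrival times, and the signal speed below are made up):

```python
import numpy as np

c = 299792458.0                          # made-up signal speed (speed of light)
obs = np.array([[ 0.0,  0.0,  0.0],      # made-up observer positions (n >= 4)
                [10.0,  0.0,  0.0],
                [ 0.0, 10.0,  0.0],
                [ 0.0,  0.0, 10.0]])
t_obs = np.array([3.4e-8, 2.1e-8, 2.5e-8, 3.0e-8])  # made-up arrival times

def residuals(p):
    """F_i = sqrt((X-x_i)^2 + (Y-y_i)^2 + (Z-z_i)^2) - c*(t_i - T)."""
    X, Y, Z, T = p
    d = np.sqrt(np.sum((np.array([X, Y, Z]) - obs) ** 2, axis=1))
    return d - c * (t_obs - T)

def numerical_jacobian(f, p, h=1e-6):
    """Forward-difference Jacobian of f at p: J[i, j] = dF_i / dp_j."""
    p = np.asarray(p, dtype=float)
    f0 = f(p)
    J = np.zeros((f0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (f(p + dp) - f0) / h
    return J

J = numerical_jacobian(residuals, [1.0, 1.0, 1.0, 0.0])
```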
I would like to get all of the eigenvalues and eigenvectors for a particular real symmetric matrix. This is obviously possible with numpy.linalg.eigh; however, this matrix has a particular sparse structure which allows a linear-scaling dot product with a vector. For this reason, I would like to use scipy.sparse.linalg.eigsh, which accepts a LinearOperator in place of the input array and uses the implicitly restarted Lanczos method.
My problem is that scipy.sparse.linalg.eigsh does not allow calculation of all eigenvalues and eigenvectors (i.e. k=n), and the rank of my input matrix is typically equal to n. Is there any way to get around this, or does any other function allow similar functionality?
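For reference, this is the kind of setup I mean, with the linear-scaling dot product wrapped in a LinearOperator (a toy tridiagonal matvec here); as far as I can tell, eigsh insists on k < n:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 100

def matvec(v):
    """Linear-scaling product with a toy tridiagonal matrix
    (2 on the diagonal, -1 on the off-diagonals)."""
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

A = LinearOperator((n, n), matvec=matvec, dtype=float)

# eigsh requires k < n, so the most it returns directly is n - 1 eigenpairs.
vals, vecs = eigsh(A, k=n - 1, which='LM')
```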
Assume that I have a square matrix M and that I would like to invert it.
I am trying to use the fractions mpq class within gmpy2 as the members of my matrix M. If you are not familiar with these fractions, they are functionally similar to Python's built-in fractions package. The only problem is, there are no packages that will invert my matrix unless I take the entries out of fraction form. I require the numbers and the answers in fraction form, so I will have to write my own function to invert M.
There are known algorithms that I could program, such as Gaussian elimination. However, performance is an issue, so my question is as follows:
Is there a computationally fast algorithm that I could use to calculate the inverse of a matrix M?
Is there anything else you know about these matrices? For example, for symmetric positive definite matrices, Cholesky decomposition allows you to invert faster than the standard Gauss-Jordan method you mentioned.
For general matrix inversions, the Strassen algorithm will give you a faster result than Gauss-Jordan but slower than Cholesky.
It seems like you want exact results, but if you're fine with approximate inversions, then there are algorithms which approximate the inverse much faster than the previously mentioned algorithms.
However, you might want to ask yourself if you need the entire matrix inverse for your specific application. Depending on what you are doing it might be faster to use another matrix property. In my experience computing the matrix inverse is an unnecessary step.
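If it helps, here is a minimal sketch of exact Gauss-Jordan inversion using Python's built-in fractions.Fraction (gmpy2's mpq should drop in the same way); it is O(n^3), so it is only the baseline you already mentioned, not a faster algorithm:

```python
from fractions import Fraction

def invert_exact(M):
    """Gauss-Jordan inversion with exact rational arithmetic (O(n^3))."""
    n = len(M)
    # Augment [M | I], with every entry converted to a Fraction.
    A = [[Fraction(M[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Pivoting: find a row at or below `col` with a non-zero pivot.
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]          # normalize the pivot row
        for r in range(n):
            if r != col and A[r][col] != 0:       # eliminate the column elsewhere
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]                 # right half is M^{-1}

M = [[2, 1], [1, 3]]
print(invert_exact(M))   # [[3/5, -1/5], [-1/5, 2/5]]
```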
I hope that helps!