Limits on Complex Sparse Linear Algebra in Python

I am prototyping numerical algorithms for linear programming and matrix manipulation with very large (100,000 x 100,000) very sparse (0.01% fill) complex (a+b*i) matrices with symmetric structure and asymmetric values. I have been happily using MATLAB for seven years, but have been receiving suggestions to switch to Python since it is open source.
I understand that there are many different Python numeric packages available, but does Python have any limits for handling these types of matrices and solving linear optimization problems in real time at high speed? Does Python have a sparse complex matrix solver comparable in speed to MATLAB's backslash A\b operator? (I have written Gaussian elimination and LU codes, but A\b is always at least 5 times faster than anything else I have tried and scales linearly with matrix size.)

Probably your sparse solvers were slower than A\b at least in part due to the interpreter overhead of MATLAB scripts. Internally, MATLAB uses UMFPACK's multifrontal solver for the lu() function and the A\b operator (see the UMFPACK manual).
You should try the scipy.sparse package with scipy.sparse.linalg for the assortment of solvers available. In particular, the spsolve() function has an option to call a UMFPACK routine instead of the built-in SuperLU solver.
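For illustration, here is a minimal sketch of solving a complex sparse system with spsolve(); the matrix is a random placeholder, and use_umfpack=True only takes effect if the scikit-umfpack bindings are installed (otherwise SciPy falls back to SuperLU):
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve
n = 10000  # scaled down from 100,000 to keep the sketch light
A = sp.random(n, n, density=1e-4, format='csc').astype(complex)  # 0.01% fill
A.data *= np.exp(1j * np.random.uniform(0, 2 * np.pi, A.nnz))    # random complex values
A = A + sp.identity(n, dtype=complex, format='csc')              # keep it nonsingular
b = np.random.rand(n) + 1j * np.random.rand(n)
x = spsolve(A, b, use_umfpack=True)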
... solving linear optimization problems in real time at high speed?
Since you have time constraints, you might want to consider iterative solvers instead of direct ones.
You can get an idea of the performance of the SuperLU implementation behind spsolve, and of the iterative solvers available in SciPy, from another post on this site.
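As a hedged sketch of the iterative route: gmres() handles complex, nonsymmetric systems and only needs matrix-vector products, and an incomplete-LU preconditioner is a common (though not universal) way to speed up its convergence:
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator
n = 10000  # placeholder system, as in the spsolve sketch above
A = sp.random(n, n, density=1e-4, format='csc').astype(complex)
A = A + sp.identity(n, dtype=complex, format='csc')
b = np.ones(n, dtype=complex)
ilu = spilu(A)  # incomplete LU factorization, used as a preconditioner
M = LinearOperator(shape=A.shape, matvec=ilu.solve, dtype=complex)
x, info = gmres(A, b, M=M)  # info == 0 means the iteration converged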

Related

Large sparse matrix inversion on Python

I'm currently working with a least-square algorithm on Python, regarding some geodetic calculations.
I chose Python (which is not the fastest) and it works pretty well. However, in my code, I have to invert a large sparse symmetric matrix (non-positive definite, so I can't use Cholesky). I currently use np.linalg.inv(), which uses the LU decomposition method.
I'm pretty sure there is room for optimization in terms of speed.
I thought about the Cuthill-McKee algorithm to rearrange the matrix and take its inverse. Do you have any ideas or advice?
Thank you very much for your answers!
The good news is that if you're using any of the popular Python libraries for linear algebra (namely, NumPy), the speed of Python really doesn't matter for the math: it's all done natively inside the library.
For example, when you write matrix_prod = matrix_a @ matrix_b, that's not triggering a bunch of Python loops to multiply the two matrices; it uses NumPy's internal implementation (which, as far as I know, calls into the Fortran LAPACK library).
The scipy.sparse.linalg module has you covered when it comes to solving sparsely stored systems of equations (which is what you are effectively doing when you work with the inverse of a matrix). If you want to use sparse matrices, that's your way to go. Note that there are matrices that are sparse in the mathematical sense (i.e., most entries are 0), and matrices that are stored in a sparse format, which means you avoid storing millions of zeros. NumPy itself doesn't have sparsely stored matrices, but SciPy does.
If your matrix is densely stored but mathematically sparse, i.e. you're using standard NumPy ndarrays to store it, then you won't get any faster by implementing anything in Python: the theoretical complexity gains will be outweighed by the practical slowness of Python compared to a highly optimized native inversion.
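For instance (a minimal sketch; A_dense stands in for whatever densely stored array you already have), converting to a sparse storage format is a one-liner:
import numpy as np
import scipy.sparse as sp
A_dense = np.eye(1000)      # placeholder: mathematically sparse, densely stored
A = sp.csr_matrix(A_dense)  # now only the nonzero entries are stored
print(A.nnz, "stored entries instead of", A_dense.size)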
Inverting a sparse matrix usually destroys the sparsity. Also, you should never invert a matrix if you can avoid it at all! For a sparse matrix, solving the linear equation system Ax = b for x, with A your matrix and b a known vector, is much faster done directly than by computing A⁻¹ first. So,
I'm currently working with a least-square algorithm on Python, regarding some geodetic calculations.
since least squares says you don't need the inverse matrix, simply don't calculate it, ever. The point of least squares is finding a solution that's as close as it gets, even if your matrix isn't invertible, which can very well be the case for sparse matrices!
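A hedged sketch of that advice (the matrix and right-hand side are placeholders): solve the system with spsolve(), or, for the least-squares setting, use an inverse-free routine such as lsqr():
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve, lsqr
n = 5000
A = sp.random(n, n, density=1e-3, format='csc')
A = (A + A.T + sp.identity(n)).tocsc()  # symmetric, nonsingular placeholder
b = np.ones(n)
x = spsolve(A, b)     # direct solve: A^-1 is never formed
x_ls = lsqr(A, b)[0]  # least-squares solution, also inverse-free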

Eigenvectors of a large sparse matrix in tensorflow

I was wondering if there is a way to calculate the first few eigenvectors of a very large sparse matrix in TensorFlow, hoping that it might be faster than SciPy's implementation of ARPACK, which doesn't seem to support parallel computing. At least, as far as I have noticed.
I believe you should rather look into petsc4py and slepc4py. They are Python bindings of PETSc (Portable, Extensible Toolkit for Scientific Computation) and SLEPc (Scalable Library for Eigenvalue Problem Computations), respectively.
PETSc and SLEPc support MPI, and therefore petsc4py and slepc4py do too.
I believe you will find useful examples in the demos distributed with slepc4py.
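A minimal sketch of that route, assuming petsc4py and slepc4py are installed; the diagonal matrix stands in for a real sparse operator:
import sys
import slepc4py
slepc4py.init(sys.argv)
from petsc4py import PETSc
from slepc4py import SLEPc
n = 1000000
A = PETSc.Mat().createAIJ([n, n], nnz=25)  # sparse AIJ matrix, ~25 nonzeros per row
rstart, rend = A.getOwnershipRange()       # only this rank's rows under MPI
for i in range(rstart, rend):
    A.setValue(i, i, float(i + 1))         # placeholder entries
A.assemble()
E = SLEPc.EPS().create()                   # eigenvalue problem solver object
E.setOperators(A)
E.setProblemType(SLEPc.EPS.ProblemType.HEP)  # Hermitian problem
E.setDimensions(nev=5)                     # ask for the first few eigenpairs
E.solve()
xr, xi = A.createVecs()
for i in range(E.getConverged()):
    k = E.getEigenpair(i, xr, xi)          # eigenvector returned in xr/xi
    print(k.real)
Run under mpiexec, the same script distributes both the matrix and the eigensolve across processes.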

Speedups for arbitrary decimal precision singular value decomposition and matrix inversion

I am using mpmath for arbitrary decimal precision. I am creating large square matrices (30 x 30 and 100 x 100). For my code, I am executing singular value decomposition and matrix inversion using mpmath's built-in packages.
My problem is that mpmath is slow, even with a gmpy back-end. I need precision up to 50 decimal places (if the solution is fast, I would prefer it to scale to more).
Is there a solution to speeding up these linear algebra problems in python?
Someone asked a similar question here, but there are two differences:
The answers did not address singular value decomposition.
The answers gave methods of approximating the inverse, but did not show that approaching the true answer that way is faster than mpmath's method. I have tried the solution given in that post, and I have found it to be slower than mpmath's internal algorithm.
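For concreteness, a minimal sketch of the baseline being described (the matrix contents are random placeholders):
import mpmath as mp
mp.mp.dps = 50             # 50 decimal places of working precision
A = mp.randmatrix(30, 30)  # placeholder 30 x 30 matrix
U, S, V = mp.svd(A)        # arbitrary-precision singular value decomposition
A_inv = mp.inverse(A)      # arbitrary-precision inverse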
One way to go here is to rewrite the code that needs to be accelerated in pure C/C++, using the fastest algorithms; for example, try to use the GMP library in C++ directly, without Python wrappers. After that, connect this code to your Python code using pybind11.
Example with pybind11: https://github.com/pybind/python_example

Scipy: Linear programming with sparse matrices

I want to solve a linear program in python. The number of variables (I will call it N from now on) is very large (~50000) and in order to formulate the problem in the way scipy.optimize.linprog requires it, I have to construct two N x N matrices (A and B below). The LP can be written as
minimize: c.x
subject to:
A.x <= a
B.x = b
x_i >= 0 for all i in {0, ..., N-1}
whereby . denotes the dot product and a, b, and c are vectors with length N.
My experience is that constructing such large matrices (A and B both have approx. 50000 x 50000 = 2.5*10^9 entries) comes with some issues: if the hardware is not very strong, NumPy may refuse to construct such big matrices at all (see for example Very large matrices using Python and NumPy), and even if NumPy creates the matrix without problems, there is a huge performance issue. This is natural given the huge amount of data NumPy has to deal with.
However, even though my linear program comes with N variables, the matrices I work with are very sparse. One of them has only entries in the very first row, the other one only in the first M rows, with M < N/2. Of course I would like to exploit this fact.
As far as I have read (e.g. Trying to solve a Scipy optimization problem using sparse matrices and failing), scipy.optimize.linprog does not work with sparse matrices. Therefore, I have the following questions:
Is it actually true that SciPy does not offer any possibility to solve a linear program with sparse matrices? (If not, how can I do it?)
Do you know of any alternative library that will solve the problem more effectively than SciPy with non-sparse matrices? (The library suggested in the thread above does not seem flexible enough for my purposes, as far as I understand its documentation.)
Can it be expected that a new implementation of the simplex algorithm (using plain Python, no C) that exploits the sparsity of the matrices will be more efficient than SciPy with non-sparse matrices?
I would say forming a dense matrix (or two) to solve a large sparse LP is probably not the right thing to do. When solving a large sparse LP, it is important to use a solver that has facilities to handle such problems, and also to generate the model in a way that does not explicitly create any of these zero elements.
Writing a stable, fast, sparse simplex LP solver in Python as a replacement for the SciPy dense solver is not a trivial exercise. Moreover, a solver written in pure Python may not perform that well.
For the size you indicate, although not very, very large (perhaps "large medium-sized" would be a fair classification), you may want to consider a commercial solver like CPLEX, Gurobi, or Mosek. These solvers are very fast and very reliable (they solve basically any LP problem you throw at them). They all have Python APIs, and they are free or very cheap for academics.
If you want to use an Open Source solver, you may want to look at the COIN CLP solver. It also has a Python interface.
If your model is more complex then you also may want to consider to use a Python modeling tool such as Pulp or Pyomo (Gurobi also has good modeling support in Python).
I can't believe nobody has pointed you in the direction of PuLP! You will be able to create your problem efficiently, like so:
import pulp

prob = pulp.LpProblem("test problem", pulp.LpMaximize)
# Variables are created individually, so no dense N x N matrix is ever built.
x = pulp.LpVariable.dicts('x', range(5), lowBound=0.0)
# Objective: maximize the sum of (i + 1) * x_i.
prob += pulp.lpSum([(ix + 1) * x[ix] for ix in range(5)]), "objective"
# A single sparse constraint.
prob += pulp.lpSum(x) <= 3, "capacity"
prob.solve()
for k, v in prob.variablesDict().items():  # .iteritems() was Python 2 only
    print(k, v.value())
PuLP is fantastic, comes with a very decent solver (CBC) and can be hooked up to open source and commercial solvers. I am currently using it in production for a forestry company and exploring Dippy for the hardest (integer) problems we have. Best of luck!

Computing generalized eigenvalues for sparse matrices in Python

I am using scipy.sparse.linalg.eigsh to solve the generalized eigenvalue problem for a very sparse matrix and running into memory problems. The matrix is a square matrix with 1 million rows/columns, but each row has only about 25 non-zero entries. Is there a way to solve the problem without reading the entire matrix into memory, i.e. working with only blocks of the matrix in memory at a time?
It's ok if the solution involves using a different library in python or in java.
For ARPACK, you only need to code up a routine that computes certain matrix-vector products. Such a routine can be implemented in any way you like, for instance by reading the matrix from disk in blocks.
from scipy.sparse.linalg import LinearOperator, eigsh

def my_matvec(x):
    # Compute y = A @ x here, e.g. by streaming blocks of A from disk
    # so that the full matrix never sits in memory.
    y = ...  # problem-specific
    return y

A = LinearOperator(matvec=my_matvec, shape=(1000000, 1000000))
vals, vecs = eigsh(A)
Check the scipy.sparse.linalg.eigsh documentation for what is needed in the generalized eigenvalue problem case.
The SciPy ARPACK interface exposes more or less the complete ARPACK interface, so I doubt you will gain much by switching to Fortran or some other way of accessing ARPACK.
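As a hedged sketch of the generalized case A x = lambda M x with matrix-free operators (the diagonal operators below are placeholders; M must be positive definite, and supplying Minv, which applies M⁻¹, spares ARPACK an inner solve):
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh
n = 100000
d = np.arange(1, n + 1, dtype=float)
# Placeholder diagonal operators; a real problem would stream data from disk.
A = LinearOperator(shape=(n, n), matvec=lambda x: d * x.ravel(), dtype=float)
M = LinearOperator(shape=(n, n), matvec=lambda x: 2.0 * x, dtype=float)
Minv = LinearOperator(shape=(n, n), matvec=lambda x: 0.5 * x, dtype=float)
vals, vecs = eigsh(A, k=5, M=M, Minv=Minv)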
