I have some integer matrices of moderate size (a few hundred rows). I need to solve equations of the form Ax = b where b is a standard basis vector and A is one of my matrices. I have been using numpy.linalg.lstsq for this purpose, but the rounding errors end up being too significant.
How can I carry out an exact symbolic computation?
(PS I don't really need the code to be efficient; I'm more concerned about ease of coding.)
If your only option is to use free tools written in Python, SymPy might work, but it could well be simpler to use Mathematica.
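For the SymPy route, the whole thing is only a few lines. A minimal sketch with a made-up 3x3 integer matrix (substitute your own A and b):

from sympy import Matrix

# Hypothetical small example; A would be one of your integer matrices.
A = Matrix([[2, 1, 0],
            [1, 3, 1],
            [0, 1, 2]])
b = Matrix([1, 0, 0])  # standard basis vector e_1

x = A.LUsolve(b)  # exact rational arithmetic, no rounding error
print(x)          # entries come back as SymPy Rationals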
Note that if you're serious about your comment that you require the solution vector to be integer, then you're looking for something called the "integer least squares problem", which is believed to be NP-hard. There are some heuristic solvers, but it all gets very complicated.
The mpmath library supports arbitrary-precision floating-point numbers and matrix algebra: http://mpmath.googlecode.com/svn/tags/0.17/doc/build/matrices.html
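A minimal sketch of solving such a system at high precision with mpmath (hypothetical 2x2 system; mp.dps controls the working precision):

from mpmath import mp, matrix, lu_solve

mp.dps = 50  # work with 50 significant decimal digits

A = matrix([[2, 1],
            [1, 3]])
b = matrix([1, 0])  # standard basis vector e_1

x = lu_solve(A, b)
print(x)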
Using SymPy to do the computation exactly is then a second option.
Related
Dealing with numerical calculations involving exponential terms often becomes painful, thanks to overflow errors. For example, suppose you have a probability density P(x)=C*exp(f(x)/k), where k is a very small number, say of the order of 10^(-5).
To find the value of C one has to integrate P(x), and this is where the overflow error appears. I know it also depends on the form of f(x), but for the moment let us assume f(x)=sin(x).
How to deal with such problems?
What are the tricks we may use to avoid them?
Is the severity of such problems language dependent? If yes, in which language should one write one's code?
As I mentioned in the comments, I strongly advise using analytical methods as far as you can. However, if you want to compute integrals of the form
I=Integral[Exp[f(x)],{x,a,b}]
where f(x) could potentially overflow the exponent, you might want to renormalize the system a bit in the following way:
Assume that f(c) is the maximum of f(x) on the domain [a,b]; then you can write:
I=Exp[f(c)] Integral[Exp[f(x)-f(c)],{x,a,b}]
It's an ugly trick, but at least your exponents will be small in the integral.
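Here is a minimal sketch of the trick in Python, assuming SciPy is available and using the f(x)=sin(x) example from the question:

import numpy as np
from scipy.integrate import quad

# Sketch for I = Integral[Exp[f(x)],{x,a,b}] with f(x) = sin(x)/k, k = 1e-5.
# exp(f(x)) itself would overflow a double near the peak.
k = 1e-5
f = lambda x: np.sin(x) / k
a, b = 0.0, np.pi

c = np.pi / 2   # maximum of sin(x) on [0, pi]; in general, find c numerically
fc = f(c)

# The shifted integrand exp(f(x) - f(c)) lies in (0, 1], so it cannot overflow.
val, _ = quad(lambda x: np.exp(f(x) - fc), a, b, points=[c])

# Report the result as a logarithm, since exp(fc) alone would overflow.
log_I = fc + np.log(val)
print(log_I)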
Note: I just realized this is the same trick roygvib suggested in the comments.
One option is to use GSL, the GNU Scientific Library (Python and Fortran wrappers are available).
There is a function gsl_sf_exp_e10_e that, according to the documentation,
computes the exponential \exp(x) using the gsl_sf_result_e10 type to return a result with extended range. This function may be useful if the value of \exp(x) would overflow the numeric range of double.
However, I would like to note that it is slow due to the additional checks performed during evaluation.
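If you want to try it from Python without a dedicated wrapper, ctypes will do. A minimal sketch, assuming libgsl is installed (the library name and path are system-dependent):

import ctypes

# Mirrors gsl_sf_result_e10 from <gsl/gsl_sf_result.h>:
# mantissa in val, error estimate in err, base-10 exponent in e10.
class GslSfResultE10(ctypes.Structure):
    _fields_ = [("val", ctypes.c_double),
                ("err", ctypes.c_double),
                ("e10", ctypes.c_int)]

gsl = ctypes.CDLL("libgsl.so")  # adjust the name/path for your platform
gsl.gsl_sf_exp_e10_e.argtypes = [ctypes.c_double, ctypes.POINTER(GslSfResultE10)]
gsl.gsl_sf_exp_e10_e.restype = ctypes.c_int

res = GslSfResultE10()
gsl.gsl_sf_exp_e10_e(1000.0, ctypes.byref(res))  # exp(1000) overflows a double
print(f"{res.val} x 10^{res.e10}")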
P.S. As noted earlier, it is better to use analytical solutions where possible.
Assume that I have a square matrix M that I would like to invert.
I am trying to use the fractions mpq class within gmpy2 for the entries of my matrix M. If you are not familiar with these fractions, they are functionally similar to Python's built-in fractions module. The only problem is that no existing package will invert my matrix unless I take the entries out of fraction form, and I require both the entries and the answers in fraction form. So I will have to write my own function to invert M.
There are known algorithms that I could implement, such as Gaussian elimination. However, performance is an issue, so my question is as follows:
Is there a computationally fast algorithm that I could use to calculate the inverse of a matrix M?
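For reference, the baseline I would otherwise write myself is a fraction-exact Gauss-Jordan elimination, along these lines (sketch shown with the standard-library Fraction; gmpy2's mpq supports the same operators):

from fractions import Fraction  # gmpy2.mpq can be substituted here

def invert(M):
    # Gauss-Jordan elimination on the augmented matrix [M | I],
    # carried out exactly over fractions.
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Any nonzero pivot works, since the arithmetic is exact.
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        A[col], A[pivot] = A[pivot], A[col]
        scale = A[col][col]
        A[col] = [x / scale for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

print(invert([[2, 1], [1, 1]]))  # [[1, -1], [-1, 2]]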
Is there anything else you know about these matrices? For example, for symmetric positive definite matrices, Cholesky decomposition allows you to invert faster than the standard Gauss-Jordan method you mentioned.
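For instance, if you can tolerate floating point for a moment, a Cholesky-based inversion with SciPy looks like the sketch below (hypothetical SPD matrix). Note that textbook Cholesky takes square roots, so a fully exact rational version would use the square-root-free LDL^T variant instead.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Hypothetical symmetric positive definite matrix; replace with your own.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

c, low = cho_factor(A)                       # Cholesky factorization of A
A_inv = cho_solve((c, low), np.eye(len(A)))  # solve A X = I for X
print(A_inv)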
For general matrices, the Strassen algorithm will give you a faster result than Gauss-Jordan, though it is still slower than a Cholesky-based inversion where Cholesky applies.
It seems like you want exact results, but if you're fine with approximate inversions, there are algorithms that compute an approximate inverse much faster than any of the methods mentioned above.
However, you might want to ask yourself whether you need the entire matrix inverse for your specific application. Depending on what you are doing, it might be faster to use another matrix property; in my experience, computing the full inverse is often an unnecessary step.
I hope that helps!
I am using mpmath for arbitrary decimal precision. I am creating large square matrices (30 x 30 and 100 x 100). For my code, I am executing singular value decomposition and matrix inversion using mpmath's built-in routines.
My problem is that mpmath is slow, even with a gmpy backend. I need precision up to 50 decimal places (and if the solution is fast, I would prefer that it scale to more).
Is there a solution to speeding up these linear algebra problems in python?
Someone asked a similar question here, but there are two differences:
1. The answers did not address singular value decomposition.
2. The answers gave methods of approximating the inverse, but did not show that refining those approximations to the true answer is faster than mpmath's method. I have tried the solution given in that post, and I found it to be slower than mpmath's internal algorithm.
One way forward in this case is to rewrite the code that needs to be accelerated in pure C/C++ using the fastest algorithms available. For example, try using the GMP library from C++ directly, without Python wrappers, and then connect that code to your Python code using pybind11.
Example with pybind11: https://github.com/pybind/python_example
I am doing convolution operations involving some very small numbers, and am encountering a lot of underflow on the way.
You can take a look at the decimal package in the standard library. If it doesn't work for you, there are other alternatives such as mpmath or bigfloat.
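A quick illustration of why decimal helps here: its exponent range is far wider than a double's, so products of tiny numbers do not flush to zero:

from decimal import Decimal, getcontext

getcontext().prec = 30      # 30 significant digits for arithmetic

tiny = Decimal("1e-400")    # a float would already have underflowed to zero
print(float("1e-400"))      # 0.0
print(tiny * tiny)          # 1E-800: still representable exactly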
I have started using NumPy along with the PySparse package, which interfaces to UMFPACK; however, there is a problem with the floating-point results in NumPy. By the way, this is a Lanczos eigenvalue solver for structural problems.
When I do the same operations in MATLAB I get different results. The values are on the order of 1e-6 to 1e-8, and with MATLAB's representation I get the right eigenvalues. The NumPy and PySparse results are not far off either, at least at the order-of-magnitude level, but using them to build the tridiagonal matrix whose eigenvalues I then compute is the source of the problem. I could not work out what is going wrong; the issue is the floating-point representation, but how can I fix this, if that is possible? I tried using float64 as my datatype, but that did not change the results. For example:
q = ones(n, dtype='float64')
One more question: what is the most mature sparse package for Python, and what kinds of interfaces does it provide, if any? As I said, PySparse seemed fine to me at first sight...
float64 is the default data type in NumPy. You could try using float128 for more precision, but be warned that certain functions will coerce it back to float64 anyway, and on Windows np.longdouble is typically just an alias for float64, so you gain nothing there.
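You can check what longdouble actually gives you on your platform before committing to it:

import numpy as np

# On most x86 Linux builds longdouble is 80-bit extended precision;
# on Windows it is usually the same 64 bits as float64.
print(np.finfo(np.longdouble))
print(np.longdouble(1) / np.longdouble(3))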
I would recommend using scipy.sparse for your sparse eigenvector problems. I have tried both PySparse and scipy.sparse, and I would conclude that although PySparse is easier to use, scipy.sparse is more mature.
Here's the sparse linear algebra documentation: http://docs.scipy.org/doc/scipy/reference/sparse.linalg.html
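For example, a small Lanczos-style eigenvalue computation on a sparse tridiagonal matrix (a hypothetical stand-in for your structural problem) looks like this:

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Hypothetical tridiagonal test matrix standing in for your problem.
n = 1000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

# Six eigenvalues nearest zero via ARPACK's implicitly restarted Lanczos;
# sigma=0 enables shift-invert, which converges quickly for the smallest ones.
vals, vecs = eigsh(A, k=6, sigma=0)
print(vals)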