How can one perform high precision convolution in Python?

I am doing convolution operations involving some very small numbers, and I am encountering a lot of underflow along the way.

You can take a look at the decimal package in the standard library. If it doesn't work for you, there are other alternatives such as mpmath or bigfloat.
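If you go the decimal route, a direct convolution over Decimal values is short to write by hand. A minimal sketch (the 50-digit precision and the plain O(n·m) double loop are illustrative choices, not a tuned implementation):

    from decimal import Decimal, getcontext

    getcontext().prec = 50  # 50 significant digits for all Decimal arithmetic

    def convolve(signal, kernel):
        """Full discrete convolution of two sequences of Decimals."""
        n, m = len(signal), len(kernel)
        out = [Decimal(0)] * (n + m - 1)
        for i in range(n):
            for j in range(m):
                out[i + j] += signal[i] * kernel[j]
        return out

    # Values this small underflow to 0.0 as doubles, but survive as Decimals.
    signal = [Decimal("1e-200"), Decimal("2e-200")]
    kernel = [Decimal("3e-200"), Decimal("4e-200")]
    print(convolve(signal, kernel))  # terms on the order of 1e-400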

Related

Is there a way of setting a default precision that differs from double in Python?

I'm aware of Decimal, however I am working with a lot of code written by someone else, and I don't want to go through a large amount of code to change every initialization of a floating point number to Decimal. It would be more convenient if there was some kind of package where I could put SetPrecision(128) or such at the top of my scripts and be off to the races. I suspect no such thing exists but I figured I would ask just in case I'm wrong.
To head off XY Problem comments, I'm solving differential equations which are supposed to be positive invariant, and one quantity which has an equilibrium on the order of 1e-12 goes negative regardless of the error tolerance I specify (using scipy's interface to LSODA).
Yes, but no. From the documentation at https://pythonhosted.org/bigfloat:

The bigfloat package is a Python wrapper for the GNU MPFR library for arbitrary-precision floating-point reliable arithmetic. The MPFR library is a well-known portable C library for arbitrary-precision arithmetic on floating-point numbers. It provides precise control over precisions and rounding modes and gives correctly-rounded reproducible platform-independent results.
You would then need to coerce the built-in float to bigfloat everywhere, which would likely be non-trivial.
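For reference, basic bigfloat usage looks like this (the 200-bit precision is an arbitrary choice; BigFloat, precision and sqrt are part of bigfloat's documented API):

    from bigfloat import BigFloat, precision, sqrt

    # All arithmetic inside the context manager runs at 200 bits of precision.
    with precision(200):
        x = sqrt(BigFloat(2))
        print(x)

    # Ordinary floats elsewhere are unaffected, which is exactly why every
    # float that matters would have to be coerced to BigFloat by hand.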
LSODA exposed through scipy.integrate is double precision only.
You might want to look into rescaling your variables, so that the quantity whose equilibrium is on the order of 1e-12 becomes closer to unity.
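To illustrate the rescaling idea with a made-up example: if y has an equilibrium near 1e-12, substitute z = y / 1e-12 and integrate z instead, so that the solver's error control operates on numbers near unity. The toy logistic dynamics below are purely for demonstration:

    from scipy.integrate import solve_ivp

    SCALE = 1e-12  # order of magnitude of the equilibrium

    def rhs_scaled(t, z):
        # Toy dynamics dy/dt = y * (SCALE - y), equilibria at 0 and SCALE.
        # In the scaled variable z = y / SCALE this becomes:
        #     dz/dt = SCALE * z * (1 - z)
        return SCALE * z * (1.0 - z)

    sol = solve_ivp(rhs_scaled, (0.0, 5e12), [0.5], method="LSODA",
                    rtol=1e-10, atol=1e-10)
    y = sol.y * SCALE   # map back to the original variable
    print(y[:, -1])     # approaches 1e-12 from below, staying positive

    # Note: atol=1e-10 on z corresponds to an absolute tolerance of 1e-22
    # on y, far tighter than LSODA could honor on the unscaled problem.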
EDIT. In the comments, you indicated
As I've stated three times, I am open to rewriting to avoid LSODA
Then what you can try doing is to look over the code of solve_ivp, which is pure python. Feed it with decimals or mpmath high-precision floats. Observe where it fails, look for where it assumes double precision. Rewrite, remove this assumption. Rinse and repeat. Whether it'll work in the end, I don't know. Whether it's worth it, I suspect not, but YMMV.
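To give a flavor of what the rewritten core might look like, here is a hand-rolled fixed-step RK4 (deliberately not solve_ivp itself) that carries mpmath numbers all the way through, so no intermediate ever drops to double precision:

    from mpmath import mp, mpf

    mp.dps = 50  # 50 decimal digits throughout

    def rk4(f, t0, y0, t1, steps):
        """Classical fixed-step RK4 in mpmath arithmetic."""
        h = (mpf(t1) - mpf(t0)) / steps
        t, y = mpf(t0), mpf(y0)
        for _ in range(steps):
            k1 = f(t, y)
            k2 = f(t + h/2, y + h*k1/2)
            k3 = f(t + h/2, y + h*k2/2)
            k4 = f(t + h, y + h*k3)
            y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
            t += h
        return y

    # dy/dt = -y with y(0) = 1; the exact value at t = 1 is exp(-1).
    print(rk4(lambda t, y: -y, 0, 1, 1, 1000))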

Speedups for arbitrary decimal precision singular value decomposition and matrix inversion

I am using mpmath for arbitrary decimal precision. I am creating large square matrices (30 x 30 and 100 x 100). For my code, I am executing singular value decomposition and matrix inversion using mpmath's built-in packages.
My problem is that mpmath is slow, even with a gmpy back-end. I need precision up to 50 decimal digits (if the solution is fast, I would prefer it to scale to more digits).
Is there a solution for speeding up these linear algebra problems in Python?
Someone asked a similar question here, but there are two differences:
The answers did not address singular value decomposition.
The answers gave methods of estimating the inverse, but they did not show that converging to the true answer that way is faster than mpmath's method. I have tried the solution given in that post, and I found it to be slower than mpmath's internal algorithm.
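For concreteness, a sketch of the kind of computation involved (the random entries are placeholders; mp.svd returns U, the singular values, and V):

    from mpmath import mp, matrix, rand

    mp.dps = 50  # 50 significant decimal digits

    n = 30
    A = matrix(n, n)
    for i in range(n):
        for j in range(n):
            A[i, j] = rand()  # placeholder entries

    U, S, V = mp.svd(A)  # singular value decomposition at working precision
    Ainv = A ** -1       # matrix inversion (LU-based) at working precision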
The way to go in this case is to rewrite the code that needs to be accelerated in pure C/C++ using the fastest algorithms. For example, try to use the GMP library in C++ directly, without Python wrappers. Then connect this code to your Python code using pybind11.
Example with pybind11: https://github.com/pybind/python_example

Unlimited-precision decimal math support in Python itself

I have a question about the pure math support in Python. As far as I know, at some point calculations in Python stop being precise enough, given that the operands are large enough.
Is there some kind of unlimited-precision support in the Python language itself, rather than from importing a library like numpy? I'm asking for something like BigDecimal in Java, which supports unlimited precision for decimal calculation.
For a quantitative performance comparison of Python's native decimal.Decimal and fractions.Fraction once harnessed inside numpy dense-matrix computations (sparse-matrix estimated), see https://stackoverflow.com/a/26248202/3666197
Have you looked at the decimal module? https://docs.python.org/2/library/decimal.html
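A short sketch of what the standard library already gives you (the 60-digit setting is arbitrary). Note that Python's built-in int is already arbitrary-precision; Decimal and Fraction cover the non-integer cases:

    from decimal import Decimal, getcontext
    from fractions import Fraction

    getcontext().prec = 60                 # 60 significant digits for Decimal
    print(Decimal(1) / Decimal(7))         # 0.1428571428... to 60 digits

    # Fraction is exact rational arithmetic with no precision limit at all.
    print(Fraction(1, 3) + Fraction(1, 6)) # exactly 1/2, no rounding

    print(2 ** 1000)                       # built-in ints never overflow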

Python numpy type decimal places

I was wondering if there is a type in Numpy that allows numbers with around 20 decimal places, besides the type "decimal".
If not, do you have a suggestion for achieving speed comparable to what I get when using floats?
mpmath at https://code.google.com/p/mpmath/
mpmath is a pure-Python library for multiprecision floating-point arithmetic. It provides an extensive set of transcendental functions, unlimited exponent sizes, complex numbers, interval arithmetic, numerical integration and differentiation, root-finding, linear algebra, and much more. Almost any calculation can be performed just as well at 10-digit or 1000-digit precision, and in many cases mpmath implements asymptotically fast algorithms that scale well for extremely high precision work. mpmath internally uses Python's builtin long integers by default, but automatically switches to GMP/MPIR for much faster high-precision arithmetic if gmpy is installed or if mpmath is imported from within Sage.
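Basic usage is one precision setting plus ordinary arithmetic; the 25-digit setting below comfortably covers the roughly 20 decimal places asked about:

    from mpmath import mp, mpf, sqrt, pi

    mp.dps = 25               # working precision: 25 significant digits
    print(sqrt(mpf(2)))       # 1.414213562373095048801689
    print(+pi)                # 3.141592653589793238462643
    print(mpf(1) / mpf(7))    # 0.1428571428571428571428571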

How do I do matrix computations in python without rounding?

I have some integer matrices of moderate size (a few hundred rows). I need to solve equations of the form Ax = b where b is a standard basis vector and A is one of my matrices. I have been using numpy.linalg.lstsq for this purpose, but the rounding errors end up being too significant.
How can I carry out an exact symbolic computation?
(PS I don't really need the code to be efficient; I'm more concerned about ease of coding.)
If your only option is to use free tools written in Python, sympy might work, but it could well be simpler to use Mathematica.
Note that if you're serious about your comment that you require your solution vector to be integer, then you're looking for the "integer least squares problem", which is believed to be NP-hard. There are some heuristic solvers, but it all gets very complicated.
The mpmath library supports arbitrary-precision floating-point numbers and matrix algebra: http://mpmath.googlecode.com/svn/tags/0.17/doc/build/matrices.html
Using sympy to do the computation exactly is then a second option.
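A sketch of the sympy route (the 3x3 integer matrix stands in for your few-hundred-row matrices):

    from sympy import Matrix, eye

    # Integer matrix; sympy keeps all arithmetic as exact rationals.
    A = Matrix([[2, 1, 0],
                [1, 3, 1],
                [0, 1, 2]])
    b = eye(3)[:, 0]      # standard basis vector e1

    x = A.solve(b)        # exact solution of A x = b, entries are Rationals
    assert A * x == b     # verifies exactly, not merely within a tolerance
    print(x)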
