I was wondering if there is a type in Numpy that allows numbers with around 20 decimal places, besides the type "decimal".
If not, do you have a suggestion for how to achieve something close to the speed I would get when calculating with floats?
Thanks,
Blaise
mpmath at https://code.google.com/p/mpmath/
mpmath is a pure-Python library for multiprecision floating-point arithmetic. It provides an extensive set of transcendental functions,
unlimited exponent sizes, complex numbers, interval arithmetic,
numerical integration and differentiation, root-finding, linear
algebra, and much more. Almost any calculation can be performed just
as well at 10-digit or 1000-digit precision, and in many cases
mpmath implements asymptotically fast algorithms that scale well for extremely high precision work. mpmath internally uses Python's
builtin long integers by default, but automatically switches to
GMP/MPIR for much faster high-precision arithmetic if gmpy is
installed or if mpmath is imported from within Sage.
Related
math is the Python module many use for more advanced mathematical functions, while the decimal module provides exact decimal arithmetic. With plain floats, 1.2 - 1.1 = 0.0999..., but with the Decimal type the result is exactly 0.1.
My problem is that these two modules don't work well with each other. For example, math.log(1000, 10) = 2.9999..., and passing a Decimal gives the same result, because the math functions convert their arguments to float. How can I make these two work together? Do I need to implement my own functions, or is there another way?
You have Decimal.log10, Decimal.ln and Decimal.logb methods on each Decimal instance, and many more (max, sqrt):
from decimal import Decimal

print(Decimal('0.001').log10())
# -3
print(Decimal('0.001').ln())
# -6.907755278982137052053974364
There are no trigonometry functions, though.
A more advanced alternative for arbitrary-precision calculations is mpmath, available on PyPI. Decimal won't give you sin or tan, because sin(x) is irrational for most x values, so storing an exact result as a Decimal doesn't make sense; given a fixed precision, however, mpmath can compute these functions for you (internally via Taylor series and related techniques). You can also do Decimal(math.sin(0.17)) to get a Decimal holding something close to the sine of 0.17 radians, limited to double precision.
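A small sketch of mpmath's trigonometry at elevated precision (assuming mpmath is installed):

from mpmath import mp, mpf, sin

mp.dps = 50                   # work with 50 significant decimal digits
print(sin(mpf('0.17')))       # ~0.1691823..., rounded to 50 digits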
Also refer to the official Decimal recipes in the Python documentation.
There is a log10 method in Decimal.
from decimal import Decimal

a = Decimal('1000')
print(a.log10())
# 3
However, if you're trying to solve logarithms with exact integer solutions, it would make more sense to use a function that calculates the exact value; logarithm functions are generally expected to return an irrational result in typical usage. You could instead use a loop with repeated division, as sketched below.
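A minimal sketch of that idea (the helper name int_log is made up for illustration):

def int_log(n, base):
    # Exact integer logarithm by repeated division.
    # Returns k such that base**k == n; raises if n is not an exact power.
    k = 0
    while n > 1 and n % base == 0:
        n //= base
        k += 1
    if n != 1:
        raise ValueError("n is not an exact power of base")
    return k

print(int_log(1000, 10))  # 3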
I'm aware of Decimal, however I am working with a lot of code written by someone else, and I don't want to go through a large amount of code to change every initialization of a floating point number to Decimal. It would be more convenient if there was some kind of package where I could put SetPrecision(128) or such at the top of my scripts and be off to the races. I suspect no such thing exists but I figured I would ask just in case I'm wrong.
To head off XY Problem comments, I'm solving differential equations which are supposed to be positive invariant, and one quantity which has an equilibrium on the order of 1e-12 goes negative regardless of the error tolerance I specify (using scipy's interface to LSODA).
Yes, but no:

The bigfloat package is a Python wrapper for the GNU MPFR library for arbitrary-precision floating-point reliable arithmetic. The MPFR library is a well-known portable C library for arbitrary-precision arithmetic on floating-point numbers. It provides precise control over precisions and rounding modes and gives correctly-rounded reproducible platform-independent results.

https://pythonhosted.org/bigfloat
You would then need to coerce the builtin float to be bigfloat everywhere, which would likely be non-trivial.
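For illustration, a minimal sketch with bigfloat (assuming the package and the underlying MPFR library are installed):

from bigfloat import BigFloat, precision, sqrt

# Operations inside the context use 128 bits of precision;
# outside it, the default 53-bit (double-like) context applies again.
with precision(128):
    x = sqrt(BigFloat(2))
    print(x)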
LSODA exposed through scipy.integrate is double precision only.
You might want to look into rescaling your variables, so that the quantity of order 1e-12 becomes closer to unity, as sketched below.
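A minimal sketch of that rescaling, using a made-up linear decay dy/dt = -k*y (the rate k and the scale S are hypothetical):

from scipy.integrate import solve_ivp

S = 1e-12     # characteristic scale of the problematic variable
k = 2.0       # hypothetical rate constant

# Original ODE: dy/dt = -k*y with y of order 1e-12 near equilibrium.
# Substituting z = y/S gives dz/dt = -k*z with z of order unity,
# so the solver's absolute tolerance becomes meaningful again.
def rhs(t, z):
    return -k * z

sol = solve_ivp(rhs, (0.0, 5.0), [1.0], rtol=1e-10, atol=1e-12)
y = S * sol.y[0]      # map the solution back to the original variable
print(y[-1])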
EDIT. In the comments, you indicated
As I've stated three times, I am open to rewriting to avoid LSODA
Then what you can try is looking over the code of solve_ivp, which is pure Python. Feed it decimals or mpmath high-precision floats, observe where it fails, look for where it assumes double precision, rewrite to remove that assumption, and repeat. Whether it'll work in the end, I don't know. Whether it's worth it, I suspect not, but YMMV.
I have a question about Python's native math support. As far as I know, at some point calculations in Python stop being precise enough, given that the operands are large enough.
Is there some kind of unlimited-precision support in the Python language itself, rather than imported from a library like numpy? I'm asking for something like BigDecimal in Java, which supports unlimited precision for decimal calculations.
For a quantitative performance comparison of Python's native decimal.Decimal and fractions.Fraction once harnessed inside numpy dense-matrix computations (sparse-matrix estimated), see https://stackoverflow.com/a/26248202/3666197
Have you looked at the decimal module? https://docs.python.org/2/library/decimal.html
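A minimal sketch (note that Python's built-in int is already arbitrary-precision; the decimal module covers the fractional side):

from decimal import Decimal, getcontext

getcontext().prec = 50                  # 50 significant digits from here on
print(Decimal(1) / Decimal(7))          # 0.142857... carried out to 50 digits
print(Decimal('1.1') + Decimal('2.2'))  # 3.3 exactly, unlike binary floats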
I am doing convolution operations involving some very small numbers, and am encountering a lot of underflow on the way.
You can take a look at the decimal package in the standard library. If it doesn't work for you, there are other alternatives like mpmath or bigfloat.
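For instance, a naive convolution carried out in mpmath arithmetic (a sketch; mpf values have an essentially unbounded exponent range, so tiny products no longer flush to zero):

from mpmath import mp, mpf

mp.dps = 30    # 30 significant digits

def conv(a, b):
    # Naive O(len(a) * len(b)) convolution in mpmath arithmetic.
    out = [mpf(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# 1e-200 * 1e-200 underflows to 0.0 with doubles, but not here.
print(conv([mpf('1e-200')] * 3, [mpf('1e-200')] * 3))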
I have some integer matrices of moderate size (a few hundred rows). I need to solve equations of the form Ax = b where b is a standard basis vector and A is one of my matrices. I have been using numpy.linalg.lstsq for this purpose, but the rounding errors end up being too significant.
How can I carry out an exact symbolic computation?
(PS I don't really need the code to be efficient; I'm more concerned about ease of coding.)
If your only option is to use free tools written in Python, sympy might work, but it could well be simpler to use Mathematica.
Note that if you're serious about your comment that you require the solution vector to be integer, then you're looking at the "integer least squares problem", which is believed to be NP-hard. There are some heuristic solvers, but it all gets very complicated.
The mpmath library has support for arbitrary-precision floating-point numbers, and supports matrix algebra: http://mpmath.googlecode.com/svn/tags/0.17/doc/build/matrices.html
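A small sketch of solving such a system with mpmath at elevated precision (assuming mpmath is installed; the example matrix is made up):

from mpmath import mp, matrix, lu_solve

mp.dps = 50                      # 50 significant decimal digits
A = matrix([[2, 1], [5, 3]])
b = matrix([1, 0])               # a standard basis vector
print(lu_solve(A, b))            # [3.0, -5.0], computed to 50 digits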
Using sympy to do the computation exactly is then a second option, as sketched below.
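A minimal sympy sketch for the same system; the computation runs over exact rationals, so there is no rounding at all:

from sympy import Matrix

A = Matrix([[2, 1], [5, 3]])
b = Matrix([1, 0])       # a standard basis vector
x = A.solve(b)           # exact symbolic/rational arithmetic
print(x)                 # Matrix([[3], [-5]])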