math is a Python module that many people use for somewhat more advanced mathematical functions, and there is also the decimal module. With plain floats, 1.2 - 1.1 gives 0.0999..., but with the Decimal type the result is exactly 0.1.
My problem is that these two modules don't work well with each other. For example, math.log(1000, 10) gives 2.9999..., and passing a Decimal produces the same result. How can I make these two work together? Do I need to implement my own functions, or is there another way?
Each Decimal instance has Decimal.log10, Decimal.ln and Decimal.logb methods, and many more (max, sqrt, ...):
from decimal import Decimal
print(Decimal('0.001').log10())
# Decimal('-3')
print(Decimal('0.001').ln())
# Decimal('-6.907755278982137052053974364')
There are no trigonometry functions, though.
A more advanced alternative for arbitrary-precision calculations is mpmath, available from PyPI. It won't give you exact values of sin or tan either, of course, because sin(x) is irrational for most x, so storing the exact value as a Decimal makes no sense. However, at a fixed precision you can compute these functions (internally via Taylor series and the like) with mpmath's help. You can also do Decimal(math.sin(0.17)) to get a Decimal holding something close to the sine of 0.17 radians.
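For illustration, here is a small sketch (assuming mpmath is installed; the chosen precision and input are arbitrary) of computing a sine at higher precision, next to the quick-and-dirty Decimal-from-float route mentioned above:
from decimal import Decimal
import math

import mpmath

mpmath.mp.dps = 30                       # work with 30 significant digits
print(mpmath.sin(mpmath.mpf('0.17')))    # sin(0.17) printed to ~30 digits

# quick-and-dirty: a Decimal built from a double-precision float,
# so only ~15-17 of its digits are meaningful
print(Decimal(math.sin(0.17)))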
Also refer to official Decimal recipes.
There is a log10 method in Decimal.
from decimal import *
a = Decimal('1000')
print(a.log10())
However, if you're trying to solve logarithms that have exact integer solutions, it would make more sense to use a function that computes that exact value directly; logarithm functions are generally expected to return an irrational, approximate result in typical usage. You could instead use a loop with repeated division, as sketched below.
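A minimal sketch of the repeated-division idea (the function name is just for illustration):
def int_log(value, base):
    # exact integer logarithm, assuming value is an exact power of base
    exponent = 0
    while value > 1 and value % base == 0:
        value //= base
        exponent += 1
    if value != 1:
        raise ValueError("value is not an exact power of base")
    return exponent

print(int_log(1000, 10))
# 3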
Related
I'm aware of Decimal, however I am working with a lot of code written by someone else, and I don't want to go through a large amount of code to change every initialization of a floating point number to Decimal. It would be more convenient if there was some kind of package where I could put SetPrecision(128) or such at the top of my scripts and be off to the races. I suspect no such thing exists but I figured I would ask just in case I'm wrong.
To head off XY Problem comments, I'm solving differential equations which are supposed to be positive invariant, and one quantity which has an equilibrium on the order of 1e-12 goes negative regardless of the error tolerance I specify (using scipy's interface to LSODA).
Yes, but no.

The bigfloat package is a Python wrapper for the GNU MPFR library for arbitrary-precision floating-point reliable arithmetic. The MPFR library is a well-known portable C library for arbitrary-precision arithmetic on floating-point numbers. It provides precise control over precisions and rounding modes and gives correctly-rounded reproducible platform-independent results.

https://pythonhosted.org/bigfloat
You would then need to coerce the builtin float to be bigfloat everywhere, which would likely be non-trivial.
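If memory serves, basic usage looks roughly like this (a sketch, not checked against the current bigfloat release):
from bigfloat import BigFloat, precision, sqrt

# compute sqrt(2) with 128 bits of precision instead of the usual 53
with precision(128):
    print(sqrt(BigFloat(2)))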
LSODA, as exposed through scipy.integrate, is double precision only.
You might want to look into rescaling your variables, so that the quantity whose equilibrium is around 1e-12 becomes closer to unity; a rough sketch follows.
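For instance, along these lines (a hypothetical two-component system where the second variable lives near 1e-12; the names and numbers are made up for illustration):
from scipy.integrate import solve_ivp

SCALE = 1e-12  # assumed equilibrium scale of the troublesome variable

def rhs(t, y):
    # original system: y[0] is O(1), y[1] has an equilibrium near SCALE
    return [-y[0], SCALE * y[0] - y[1]]

def rhs_scaled(t, z):
    # same system written for z = [y[0], y[1] / SCALE], so both components are O(1)
    return [-z[0], z[0] - z[1]]

sol = solve_ivp(rhs_scaled, (0.0, 10.0), [1.0, 0.0], method='LSODA', rtol=1e-10)
y1 = SCALE * sol.y[1]  # convert back to the original units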
EDIT. In the comments, you indicated
As I've stated three times, I am open to rewriting to avoid LSODA
Then what you can try is looking over the code of solve_ivp, which is pure Python. Feed it Decimals or mpmath high-precision floats, observe where it fails, look for where it assumes double precision, rewrite to remove that assumption, rinse and repeat. Whether it'll work in the end, I don't know. Whether it's worth it, I suspect not, but YMMV.
With Python's math library, an expression for an irrational number like
math.sin(math.pi/3)
will return a decimal approximation.
I've been looking at the documentation for a way to keep it at the exact value (i.e. returning sqrt(3)/2 instead of 0.8660254037844386), but I haven't found anything. I'm pretty new to Python, so I apologize if I missed it.
Symbolic mathematics is not something that most general-purpose programming languages do. Python works with integers and (real and complex) floating point values. For example the symbol math.pi is not a representation of the mathematical constant π, but just a Python variable containing a finite-precision floating point approximation of π.
However, some people do sometimes want to do symbolic mathematics, so tools exist as third-party libraries. For example, you can do symbolic maths in Python by installing the SymPy package.
Here is your answer using SymPy:
import sympy
sympy.sin(sympy.pi/3)
#> sqrt(3)/2
I was wondering if there is a type in Numpy that allows numbers with around 20 decimal places, besides the type "decimal".
If not, do you have a suggestion for achieving speed comparable to what I would get when performing the calculations with floats?
mpmath at https://code.google.com/p/mpmath/
mpmath is a pure-Python library for multiprecision floating-point arithmetic. It provides an extensive set of transcendental functions, unlimited exponent sizes, complex numbers, interval arithmetic, numerical integration and differentiation, root-finding, linear algebra, and much more. Almost any calculation can be performed just as well at 10-digit or 1000-digit precision, and in many cases mpmath implements asymptotically fast algorithms that scale well for extremely high precision work. mpmath internally uses Python's builtin long integers by default, but automatically switches to GMP/MPIR for much faster high-precision arithmetic if gmpy is installed or if mpmath is imported from within Sage.
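For roughly 20 significant digits, a minimal sketch (the precision value is just an example) might look like:
import mpmath

mpmath.mp.dps = 25            # ask for 25 significant decimal digits
print(mpmath.mpf(1) / 3)      # 1/3 printed to 25 digits
print(mpmath.sqrt(2))         # sqrt(2) printed to 25 digits
Note that, like decimal, this will still be considerably slower than NumPy's native float64 operations.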
Is there a faster equivalent of the fractions module, something like a cFractions module, just as there is a cDecimal module, which is a faster equivalent of the Decimal module ? The fractions module is too slow.
Use http://code.google.com/p/gmpy/
It uses the GMP multiple-precision library for fast integer and rational arithmetic.
Note: I'm also the maintainer.
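A small usage sketch (assuming the newer gmpy2 package, whose mpq type plays the role of Fraction):
from gmpy2 import mpq

a = mpq(1, 3)
b = mpq(22, 7)
print(a + b)         # 73/21
print(float(a * b))  # approximately 1.0476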
I was struggling with the lack of such a package as well and decided to implement one called cfractions (source code available on GitHub).
The only thing we need is to install it
/path/to/python3 -m pip install cfractions
and after that replace fractions with cfractions in your modules, as easy as that.
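For example, a minimal drop-in sketch (assuming the package is installed as above):
from cfractions import Fraction

x = Fraction(1, 3) + Fraction(1, 6)
print(x)         # 1/2
print(float(x))  # 0.5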
Main features include
less memory
>>> from cfractions import Fraction
>>> import sys
>>> sys.getsizeof(Fraction())
32
compared to
>>> from fractions import Fraction
>>> import sys
>>> sys.getsizeof(Fraction())
48
so it's basically a plain Python object + 2 pointers for numerator & denominator.
more speed:
construction from a pair of ints
construction from a single float
construction from a str
sum of n instances
product of n instances
Looking at relative performance, we can see fractions.Fraction skyrocketing, yay!
Note: I'm using the perfplot package; all benchmarks were run on Python 3.9.4.
Python 3.5+ support,
plain Python C API, no additional dependencies,
constructing from numerator/denominator pair, single int/float/any numbers.Rational value, str (from version 1.4.0),
full spectrum of arithmetic & comparison operations,
string representation (both __repr__ & __str__),
pickling and copying,
immutability & hashability,
operating with int and float (with conversion of Fraction instance to float for the latter, as it is for fractions.Fraction),
PyPy support (by falling back to fractions.Fraction proxy),
property-based tests of all operations using Hypothesis framework.
What it doesn't include
operating with complex.
Unfortunately, there's no C equivalent available that doesn't require a compiled external dependency. Depending on your needs, the gist I've made (https://gist.github.com/mscuthbert/f22942537ebbba2c31d4) may help.
It exposes a function opFrac(num) that optionally converts an int, float, or Fraction into a float or Fraction with a denominator limit (I use 65535 because I'm working with small fractions); if the float can be exactly represented in binary (i.e., it's a multiple of some power of two denominator), it leaves it alone. Otherwise it converts it to a Fraction. Similarly, if the Fraction is exactly representable in binary we convert it to a float; otherwise we leave it alone.
The Fraction(float).limit_denominator(x) call is extracted out into a helper function, _preFracLimitDenominator, that only creates one Fraction object rather than the three normally created with the call.
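A rough sketch of the behaviour described above (not the gist itself; the names and the denominator cap are illustrative):
from fractions import Fraction

DENOM_LIMIT = 65535  # denominator cap, as mentioned above

def op_frac(num):
    # keep values that are exact small binary fractions as fast floats,
    # fall back to a limited-denominator Fraction otherwise
    if isinstance(num, int):
        return num
    if isinstance(num, float):
        _, den = num.as_integer_ratio()
        if den <= DENOM_LIMIT:
            return num  # exactly representable in binary with a small denominator
        return Fraction(num).limit_denominator(DENOM_LIMIT)
    if isinstance(num, Fraction):
        den = num.denominator
        if den <= DENOM_LIMIT and (den & (den - 1)) == 0:
            return float(num)  # power-of-two denominator: exact as a float
        return num
    return num

print(op_frac(0.5))    # 0.5 stays a float
print(op_frac(1 / 3))  # becomes Fraction(1, 3)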
The use cases for this gist are pretty few, but where they exist, the results are spectacular. For my project, music21, we work mostly with notes that are generally placed on a beat (integer) or on a half, quarter, eighth, etc. beat (exactly representable in binary), but on the rarer occasions when notes have placement (offset) or duration that is, say, 1/3 or 1/5 of a beat, we were running into big floating point conversion problems that led to obscure bugs. Our test suite was running in 350 seconds using floating point offsets and durations. Switching everything to Fractions ballooned the time to 1100 seconds -- totally unacceptable. Switching to optional Fractions with fast Fraction creation brought the time back to 360 seconds, or only a 3% performance hit.
If you can deal with sometimes working with floats and sometimes Fractions, this may be the way to go.
I couldn't find anything.
You could make one: http://docs.python.org/extending/extending.html
A quick search on fractions in C gave me http://www.spiration.co.uk/post/1400/fractions-in-c---a-rational-arithmetic-library. Use the 2nd post, it also handles negative numbers.
But that may not be what you need, and you may find something else. If you don't want to extend Python, you have to stick with the fractions module unless you can find someone who has written a cFractions module. I'm sorry.
I was looking at the Golden Ratio formula for finding the nth Fibonacci number, and it made me curious.
I know Python handles arbitrarily large integers, but what sort of precision do you get with fractional values? Is it just layered straight on top of a C double, or does it use a more accurate modified implementation? (Obviously not with arbitrary accuracy. ;D)
Almost all platforms map Python floats to IEEE-754 “double precision”.
http://docs.python.org/tutorial/floatingpoint.html#representation-error
There's also the decimal module for arbitrary-precision floating-point math.
Python floats use the double type of the underlying C compiler. As Bwmat says, this is generally IEEE-754 double precision.
However if you need more precision than that you can use the Python decimal module which was added in Python 2.4.
Python 2.6 also added the fractions module, which may be a better fit for some problems.
Both of these are going to be slower than using the float type, but that is the price for more precision.
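A small standard-library illustration of the trade-off (the chosen precision is arbitrary):
import sys
from decimal import Decimal, getcontext
from fractions import Fraction

print(sys.float_info.mant_dig)   # 53 bits of mantissa in an IEEE-754 double
print(0.1 + 0.2)                 # 0.30000000000000004

getcontext().prec = 50           # 50 significant decimal digits
print(Decimal(1) / Decimal(7))   # 1/7 to 50 digits

print(Fraction(1, 10) + Fraction(2, 10))  # exactly 3/10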