Does anyone know of a faster decimal implementation in python?
As the example below demonstrates, the standard library's decimal module is ~100 times slower than float.
from timeit import Timer

def run(val, the_class):
    test = the_class(1)
    for c in xrange(10000):
        d = the_class(val)
        d + test
        d - test
        d * test
        d / test
        d ** test
        str(d)
        abs(d)

if __name__ == "__main__":
    a = Timer("run(123.345, float)", "from decimal_benchmark import run")
    print "FLOAT", a.timeit(1)
    a = Timer("run('123.345', Decimal)", "from decimal_benchmark import run; from decimal import Decimal")
    print "DECIMAL", a.timeit(1)
Outputs:
FLOAT 0.040635041427
DECIMAL 3.39666790146
You can try cdecimal:
from cdecimal import Decimal
As of Python 3.3, the cdecimal implementation is now the built-in implementation of the decimal standard library module, so you don't need to install anything. Just use decimal.
For Python 2.7, installing cdecimal and using it instead of decimal should provide a speedup similar to what Python 3 gets by default.
The GMP library is one of the best arbitrary-precision math libraries around, and there is a Python binding for it, GMPY. I would try that.
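For example, here is a rough sketch using gmpy2, the current GMP binding (note that its precision is set in bits, not decimal digits):

import gmpy2
from gmpy2 import mpfr

gmpy2.get_context().precision = 200   # about 60 decimal digits
x = mpfr('123.345')
print(x + 1)
print(x * x)
print(gmpy2.sqrt(x))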
You should compare Decimal to long integer performance, not to floating point: floating point is mostly done in hardware these days. Decimal is for exact decimal precision, while floating point trades exactness for a wider range. Use the decimal package for monetary calculations.
To quote the decimal package manual:
Decimal numbers can be represented exactly. In contrast, numbers like 1.1 do not have an exact representation in binary floating point. End users typically would not expect 1.1 to display as 1.1000000000000001 as it does with binary floating point.
The exactness carries over into arithmetic. In decimal floating point, "0.1 + 0.1 + 0.1 - 0.3" is exactly equal to zero. In binary floating point, the result is 5.5511151231257827e-017. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal is preferred in accounting applications which have strict equality invariants.
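A quick illustration of both points (the prices in the last part are made up):

from decimal import Decimal, ROUND_HALF_UP

# Binary float: the sum is close to zero, but not exactly zero
print(0.1 + 0.1 + 0.1 - 0.3)                  # 5.551115123125783e-17

# Decimal: exact, so equality tests behave as expected
print(Decimal('0.1') * 3 - Decimal('0.3'))    # 0.0

# Typical monetary use: compute, then quantize to cents with explicit rounding
total = Decimal('19.99') * Decimal('1.0825')
print(total.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))   # 21.64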
Use cDecimal.
Adding the following to your benchmark:
a = Timer("run('123.345', Decimal)", "import sys; import cdecimal; sys.modules['decimal'] = cdecimal; from decimal_benchmark import run; from decimal import Decimal")
print "CDECIMAL", a.timeit(1)
My results are:
FLOAT 0.0257983528473
DECIMAL 2.45782495288
CDECIMAL 0.0687125069413
(Python 2.7.6/32, Win7/64, AMD Athlon II 2.1GHz)
Python's Decimal is very slow; one can use float instead, or a faster Decimal implementation such as cdecimal.
I am still a beginner at Python, and when using the built-in decimal module, the result is "slightly" off.
Here's the code:
import decimal
print(decimal.Decimal(0.02))
And here's the output:
0.0200000000000000004163336342344337026588618755340576171875
I begin to wonder, how is this possible? 0.02 is not 0.0200000000000000004163336342344337026588618755340576171875, but the decimal module perceives 0.02 as 0.0200000000000000004163336342344337026588618755340576171875. Is the decimal module bugged, or am I doing something wrong?
Decimal can't fix this if your input is already a float: the rounding happened when the float literal 0.02 was created, before Decimal ever saw it. To avoid it, give Decimal a string instead:
print(decimal.Decimal("0.02"))
The decimal module implements fixed and floating point arithmetic using the model familiar to most people, rather than the IEEE floating point version implemented by most computer hardware. A Decimal instance can represent any number exactly, round up or down, and apply a limit to the number of significant digits.
Decimal values are represented as instances of the Decimal class. The constructor takes as argument an integer, or a string. Floating point numbers must be converted to a string before being used to create a Decimal, letting the caller explicitly deal with the number of digits for values that cannot be expressed exactly using hardware floating point representations.
So, in your case, pass a string to the Decimal constructor:
import decimal
print(decimal.Decimal('0.02'))
Check the decimal module documentation for details.
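If you ever do want the exact value of the float, Decimal.from_float makes that intent explicit:

from decimal import Decimal

print(Decimal.from_float(0.02))   # the same long value shown above
print(Decimal('0.02'))            # 0.02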
Is there some type, like long in Python 2, that can handle very small numbers like 8.5e-350?
Or is there any way around this situation, given that Python squashes floats to 0 somewhere past 320 decimal places?
The standard library comes with the decimal module
The decimal module provides support for fast correctly-rounded decimal
floating point arithmetic. It offers several advantages over the float
datatype.
...
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as
large as needed for a given problem:
>>> from decimal import *
>>> getcontext().prec = 500
>>> Decimal(10) / Decimal(3)
Decimal('3.3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333')
>>> len('3.3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333')
501
See the decimal module's quick-start tutorial for more.
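For instance, a value like 8.5e-350 underflows to zero as a float but is no problem for Decimal:

from decimal import Decimal

print(float('8.5e-350'))           # 0.0 -- doubles bottom out near 5e-324
tiny = Decimal('8.5e-350')
print(tiny)                        # 8.5E-350
print(tiny * 2)                    # 1.70E-349
print(tiny / Decimal('1e-350'))    # 8.5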
Python uses IEEE 754 doubles, so you don't get 350 decimal places: you get about 15-16 significant figures and an exponent limited to roughly ±308. If that number of significant figures is OK, and you don't need more dynamic range (i.e. all your values are similarly small), express your quantities in different units so the values fall in a representable range.
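A minimal sketch of that rescaling idea (the factor 1e-300 is just an illustrative choice):

# Track a common scale factor instead of storing the tiny values directly.
SCALE_EXP = -300        # every stored value is understood as x * 10**SCALE_EXP
a = 8.5e-50             # represents 8.5e-350
b = 1.7e-49             # represents 1.7e-349
print(b / a)            # ratios are unaffected by the shared scale: 2.0
print(a + b)            # 2.55e-49, i.e. 2.55e-349 once the scale is applied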
I'm writing a program where I need to pass in very accurate decimal representations of fractions (i.e. accurate to over 200 decimal places). However simply telling python to include more decimal places (using %.50f, for instance) often simply adds a bunch of 0s to the ends of certain decimals.
Is there a way to get python to display accurately an arbitrary number of decimal places for a fraction? Do I need to install a package/module?
Python 2.7 and above can do the following, using the decimal module from the standard library:
from decimal import *

with localcontext() as context:
    context.prec = 200                # your precision here
    print(Decimal(1) / Decimal(7))    # your calculation here
Check the decimal module, which is included with Python. It can show the exact decimal representation of any value stored in a float or decimal variable.
This is not quite the same as showing arbitrarily many decimal places for a fraction, but see if it meets your needs.
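For the original goal of printing a fraction p/q to hundreds of places, one way (a sketch; the helper name and the 200-digit default are just illustrative) is:

from decimal import Decimal, localcontext

def fraction_to_decimal_string(p, q, digits=200):
    # prec counts significant digits, not places after the decimal point
    with localcontext() as ctx:
        ctx.prec = digits
        return str(Decimal(p) / Decimal(q))

print(fraction_to_decimal_string(1, 7))   # 0.142857142857... to ~200 digits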
I need a float variable rounded to 2 decimal places, with the result stored in a new variable (or the same one as before, it doesn't matter), but this is what happens:
>>> a
981.32000000000005
>>> b= round(a,2)
>>> b
981.32000000000005
I need this result, but stored in a variable that cannot be a string, since I need to use it as a float...
>>> print b
981.32
Actually, truncating would also work; I don't need extreme precision in this case.
What you are trying to do is in fact impossible. That's because 981.32 is not exactly representable as a binary floating point value. The closest double precision binary floating point value is:
981.3200000000000500222085975110530853271484375
I suspect that this may come as something of a shock to you. If so, then I suggest that you read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
You might choose to tackle your problem in one of the following ways:
Accept that binary floating point numbers cannot represent such values exactly, and continue to use them. Don't do any rounding at all, and keep the full value. When you wish to display the value as text, format it so that only two decimal places are emitted.
Use a data type that can represent your number exactly. That means a decimal rather than binary type. In Python you would use decimal.
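A short sketch of both options:

from decimal import Decimal, ROUND_HALF_UP

a = 981.32000000000005

# Option 1: keep the float, round only when displaying it
print('%.2f' % a)         # 981.32
print(format(a, '.2f'))   # 981.32

# Option 2: switch to Decimal, where 981.32 is representable exactly
b = Decimal('981.32000000000005').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
print(b)                  # 981.32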
Try this:
Round = lambda x, n: eval('"%.' + str(int(n)) + 'f" % ' + repr(x))

print Round(0.1, 2)
0.10
print Round(0.1, 4)
0.1000
print Round(981.32000000000005, 2)
981.32
Just indicate the number of digits you want as the second argument. Note that the result is a string, not a float.
I wrote a solution to this problem. Please try:
from decimal import *
from autorounddecimal.core import adround,decimal_round_digit
decimal_round_digit(Decimal("981.32000000000005")) #=> Decimal("981.32")
adround(981.32000000000005) # just wrap decimal_round_digit
More detail can be found at https://github.com/niitsuma/autorounddecimal
There is a difference between the way Python prints floats and the way it stores floats. For example:
>>> a = 1.0/5.0
>>> a
0.20000000000000001
>>> print a
0.2
It's not actually possible to store an exact representation of many floats, as David Heffernan points out. It can be done if, looking at the float as a fraction, the denominator is a power of 2 (such as 1/4, 3/8, 5/64). Otherwise, due to the inherent limitations of binary, it has to make do with an approximation.
Python recognizes this, and when you use the print function, it will use the nicer representation seen above. This may make you think that Python is storing the float exactly, when in fact it is not, because it's not possible with the IEEE standard float representation. The difference in calculation is pretty insignificant, though, so for most practical purposes it isn't a problem. If you really really need those significant digits, though, use the decimal package.
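You can see this directly with float.as_integer_ratio():

# Exactly representable: the denominator is a power of two
print((0.25).as_integer_ratio())   # (1, 4)

# 0.2 is actually stored as the nearest representable binary fraction
print((0.2).as_integer_ratio())    # (3602879701896397, 18014398509481984)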
I'm just learning the basics of Python at the moment and I thought that, as a learning exercise, I'd try writing something that would approximate the number "e". Anyway, it always gives the answer to 11 decimal places and I want it to give something more like 1000 decimal places. How do I do this?
Are you sure you need to make them "more precise"? Or do you just need to see more digits than Python shows by default?
>>> import math
>>> math.pi
3.141592653589793
>>>
>>> '{0:0.2f}'.format(math.pi)
'3.14'
>>>
>>> '{0:0.30f}'.format(math.pi)
'3.141592653589793115997963468544'
>>>
>>> '{0:0.60f}'.format(math.pi)
'3.141592653589793115997963468544185161590576171875000000000000'
However, note that
Floating point numbers are usually implemented using double in C; information about the precision and internal representation of floating point numbers for the machine on which your program is running is available in sys.float_info
I assure you that pi doesn't go to zero after 48 digits :-)
Almost all machines today use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”.
An IEEE-754 double has 64 bits (8 bytes); with the 52 explicitly stored bits of the fraction (significand), the total precision is approximately 16 decimal digits.
So to represent a number with higher precision than that, you should use Decimal:
import decimal
decimal.getcontext().prec = 100
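For example, here is a rough sketch of approximating e by summing the series 1/n! with Decimal (the precision values are just illustrative):

from decimal import Decimal, getcontext

getcontext().prec = 1010            # a few guard digits beyond the 1000 we want
e = Decimal(0)
term = Decimal(1)                   # holds 1/n!
n = 0
while term > Decimal('1e-1005'):    # stop once further terms cannot change the sum
    e += term
    n += 1
    term /= n

getcontext().prec = 1001            # re-round to "2." followed by 1000 digits
print(+e)                           # unary + applies the current precision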
If you want it to be a number with a precision of a thousand digits, the short answer is that you can't.
A workaround is, you can use the decimal module. Here is an example:
import decimal
a = decimal.Decimal('2387324895172987120570935712093570921579217509185712093')
In this case, however, a is not a number anymore. It's just an instance of the decimal.Decimal class. Well, you can still do some math operations with it.
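For example (a sketch; the precision value is illustrative):

from decimal import Decimal, getcontext

getcontext().prec = 60
a = Decimal('2387324895172987120570935712093570921579217509185712093')
print(a + 1)
print(a * 2)
print(a.sqrt())   # Decimal also provides exp(), ln() and log10()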