I know this has been asked numerous times and I've come across many blogs and SO answers, but this one's making me pull my hair out. I just want to multiply a number with two decimal places by 100 to get rid of its decimals:
>>> 4321.90 * 100
432189.99999999994
>>> Decimal(4321.90) * Decimal(100)
Decimal('432189.9999999999636202119291')
I'm scared to use rounding for such a seemingly trivial operation. Would it be safe? What if the precision problem plays tricks on me and the result is close to xxx.5? Can that happen? I do understand the problem at the binary level, but I come from C#, and I don't have this problem with .NET's decimal type:
decimal x = 4321.90m;
decimal y = 100m;
Console.WriteLine(x * y);
432190,00
I thought Python's decimal module was supposed to fix that. I'm about to convert the initial value to string and do the math with string manipulations, and I feel bad about it...
The main reason it fails with Python is that 4321.90 is interpreted as a float (you lose precision at that point) and then cast to Decimal at runtime. With C#, 4321.90m is interpreted as a decimal to begin with. Python simply doesn't have a built-in decimal literal.
But there's an easy way to fix that with Python. Simply use strings:
>>> Decimal('4321.90') * Decimal('100')
Decimal('432190.00')
"I'm about to convert the initial value to string"
Yes! (but don't do it by calling str; use a string literal)
"and do the math with string manipulations"
No!
When hardcoding a decimal value into your source code, you should initialize it from a string literal, not a float literal. With 4321.90, floating-point rounding has already occurred, and building a Decimal won't undo that. With "4321.90", Decimal has the original text you wrote available to perform an exact initialization:
Decimal('4321.90')
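To see the difference, here's a quick sketch (the trailing digits in the first comment are abbreviated):

from decimal import Decimal

# The float literal is rounded to the nearest binary double before Decimal
# ever sees it; Decimal then records that already-rounded value exactly.
print(Decimal(4321.90))    # 4321.89999999999963620211929...
print(Decimal('4321.90'))  # 4321.90 - parsed from the text, exact

# With string (or integer) inputs the arithmetic is exact:
print(Decimal('4321.90') * 100)   # 432190.00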
Floating point inaccuracy again.
Decimal(number) doesn't change a thing: the value has already been mangled before it ever reaches Decimal.
You can avoid that by passing strings to Decimal, though:
Decimal("4321.90") * Decimal("100")
result:
Decimal('432190.00')
(so Decimal works on the decimal digits directly, without ever using binary floating-point registers or operations)
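As for the fear about rounding: if a value has already been through a float, you can still round it predictably with Decimal.quantize. A minimal sketch (the choice of ROUND_HALF_UP is an assumption about the rounding you want, not something from the question):

from decimal import Decimal, ROUND_HALF_UP

# Round an already-degraded float value to a whole number, with an
# explicit, deterministic rounding mode:
result = Decimal(4321.90) * 100
print(result.quantize(Decimal('1'), rounding=ROUND_HALF_UP))   # 432190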
I would need to have a float variable rounded to 2 decimal places and store the result into a new variable (or the same one as before, it doesn't matter), but this is what happens:
>>> a
981.32000000000005
>>> b= round(a,2)
>>> b
981.32000000000005
I would need this result, but in a variable that cannot be a string, since I need to insert it as a float...
>>> print b
981.32
Actually, truncation would also work; I don't need extreme precision in this case.
What you are trying to do is in fact impossible. That's because 981.32 is not exactly representable as a binary floating point value. The closest double precision binary floating point value is:
981.3200000000000500222085975110530853271484375
I suspect that this may come as something of a shock to you. If so, then I suggest that you read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
You might choose to tackle your problem in one of the following ways:
Accept that binary floating point numbers cannot represent such values exactly, and continue to use them. Don't do any rounding at all, and keep the full value. When you wish to display the value as text, format it so that only two decimal places are emitted.
Use a data type that can represent your number exactly. That means a decimal rather than binary type. In Python you would use decimal.
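A minimal sketch of both options (variable names are illustrative):

from decimal import Decimal

a = 981.32000000000005

# Option 1: keep the float and round only when displaying it
print('{0:.2f}'.format(a))   # 981.32

# Option 2: use a decimal type that can hold the value exactly
b = Decimal('981.32')
print(b)                     # 981.32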
Try this:
Round = lambda x, n: '%.*f' % (int(n), x)
print(Round(0.1, 2))
0.10
print(Round(0.1, 4))
0.1000
print(Round(981.32000000000005, 2))
981.32
Just indicate the number of digits you want as the second argument (note that the result is a string).
I wrote a solution to this problem. Please try:
from decimal import *
from autorounddecimal.core import adround, decimal_round_digit
decimal_round_digit(Decimal("981.32000000000005")) #=> Decimal("981.32")
adround(981.32000000000005) # just wrap decimal_round_digit
More detail can be found at https://github.com/niitsuma/autorounddecimal
There is a difference between the way Python prints floats and the way it stores floats. For example:
>>> a = 1.0/5.0
>>> a
0.20000000000000001
>>> print a
0.2
It's not actually possible to store an exact representation of many floats, as David Heffernan points out. It can be done if, looking at the float as a fraction, the denominator is a power of 2 (such as 1/4, 3/8, 5/64). Otherwise, due to the inherent limitations of binary, it has to make do with an approximation.
Python recognizes this, and when you print the value, it uses the nicer representation seen above. This may make you think that Python is storing the float exactly, when in fact it is not, because that's not possible with the IEEE standard float representation. The difference in calculation is pretty insignificant, though, so for most practical purposes it isn't a problem. If you really need those significant digits, though, use the decimal module.
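For example, the decimal module can represent 1/5 exactly, since it works in base 10 (a small illustration):

from decimal import Decimal

a = Decimal(1) / Decimal(5)
print(a)            # 0.2 - exact, because 1/5 is a finite fraction in base 10
print(a * 5 == 1)   # True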
I'm in the process of converting a programme I've made from using floats to decimals.
Obviously the main reason I'm doing this is for accuracy.
I haven't used decimal before so thought I'd have a play first. The first thing I did was this:
>>> x = Decimal(7.2)
>>> x
Decimal('7.20000000000000017763568394002504646778106689453125')
Now, considering decimals are meant to be accurate and to avoid the long trailing digits floats have, I was pretty surprised to see that happen. It's also gone to 50 decimal places, despite the standard preset of 28 (and it doesn't matter what you set the precision to).
Is this a bug (or a feature)? And why is it happening?
Decimal(7.2) will create a Decimal from the exact value of the float 7.2. Since the float is not precise, while Decimal is, creating the Decimal will carry the inaccuracy of the float over into the Decimal, yielding the result you see there. (That's also why the context precision doesn't matter here: it applies only to the results of arithmetic operations, not to construction, so the constructor keeps all 50 digits.)
To create the exact decimal of 7.2, you need to specify it as a string:
Decimal('7.2')
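A quick check of the difference (outputs shown as comments):

from decimal import Decimal

print(Decimal(7.2))    # 7.20000000000000017763568394002504646778106689453125
print(Decimal('7.2'))  # 7.2
print(Decimal('7.2') == Decimal(7.2))   # False - genuinely different numbers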
This happens because you feed it a float literal that cannot be represented accurately in binary. You should provide a string:
Decimal('7.2')
or use integers:
Decimal(72) / 10
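Both forms give the same exact value (a quick check; the division here is exact because the result fits well within the default 28-digit context precision):

from decimal import Decimal

print(Decimal(72) / 10)                     # 7.2
print(Decimal('7.2') == Decimal(72) / 10)   # True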
I'm just learning the basics of Python at the moment, and I thought that, as a learning exercise, I'd try writing something that would approximate the number e. Anyway, it always gives the answer to 11 decimal places, and I want it to give something more like 1000 decimal places. How do I do this?
Are you sure you need to make them "more precise"? Or do you just need to see more digits than Python shows by default?
>>> import math
>>> math.pi
3.141592653589793
>>>
>>> '{0:0.2f}'.format(math.pi)
'3.14'
>>>
>>> '{0:0.30f}'.format(math.pi)
'3.141592653589793115997963468544'
>>>
>>> '{0:0.60f}'.format(math.pi)
'3.141592653589793115997963468544185161590576171875000000000000'
However, note that
Floating point numbers are usually implemented using double in C; information about the precision and internal representation of floating point numbers for the machine on which your program is running is available in sys.float_info
I assure you that pi doesn't go to zero after 48 digits :-)
Almost all machines today use IEEE-754 floating-point arithmetic, and almost all platforms map Python floats to IEEE-754 double precision.
An IEEE-754 double has 64 bits (8 bytes); with the 52 fraction bits of the significand that appear in the memory format (plus one implicit leading bit), the total precision is approximately 16 decimal digits.
So to represent a number with higher precision than that, you should use Decimal.
import decimal
decimal.getcontext().prec = 100
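Building on that, here's a minimal sketch that approximates e to roughly 1000 digits by summing the series e = 1/0! + 1/1! + 1/2! + ... (the term count and guard digits are my assumptions, not from the question):

from decimal import Decimal, getcontext

getcontext().prec = 1010     # 1000 digits plus a few guard digits

e = Decimal(0)
term = Decimal(1)            # starts as 1/0!
for n in range(1, 600):      # by n ~ 470, 1/n! is already below 1e-1000
    e += term
    term /= n                # term is now 1/n!
print(e)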
If you want it to be a built-in float with a precision of a thousand digits, the short answer is: you can't.
A workaround is, you can use the decimal module. Here is an example:
import decimal
a = decimal.Decimal('2387324895172987120570935712093570921579217509185712093')
In this case, however, a is not a float anymore. It's an instance of the decimal.Decimal class. You can still do math operations with it, though.
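For instance (illustrative operations; note that arithmetic results are rounded to the context precision, so it is raised here to keep them exact):

import decimal

decimal.getcontext().prec = 60    # enough digits for exact results below

a = decimal.Decimal('2387324895172987120570935712093570921579217509185712093')
print(a + 1)    # exact, since 60 digits of precision cover the result
print(a * 2)
print(a % 7)    # remainder is exact too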
I am depending on some code that uses the Decimal class because it needs precision to a certain number of decimal places. Some of the functions allow inputs to be floats because of the way that it interfaces with other parts of the codebase. To convert them to decimal objects, it uses things like
mydec = decimal.Decimal(str(x))
where x is the float taken as input. My question is, does anyone know what the standard is for the 'str' method as applied to floats?
For example, take the number 2.1234512. It is stored internally as 2.12345119999999999 because of how floats are represented.
>>> x = 2.12345119999999999
>>> x
2.1234511999999999
>>> str(x)
'2.1234512'
Ok, str(x) in this case is doing something like '%.6f' % x. This is a problem with the way my code converts to decimals. Take the following:
>>> d = decimal.Decimal('2.12345119999999999')
>>> ds = decimal.Decimal(str(2.12345119999999999))
>>> d - ds
Decimal('-1E-17')
So if I have the float 2.12345119999999999 and I want to pass it to Decimal, converting it to a string using str() gets me the wrong answer. I need to know the rules for str(x) that determine what the formatting will be, because I need to determine whether this code needs to be rewritten to avoid this error (note that it might be OK because, for example, the code might round to the 10th decimal place once it has a Decimal object).
There must be some set of rules in Python's docs that hopefully someone here can point me to. Thanks!
In the Python source, look in "Include/floatobject.h". The precision for the string conversion is set a few lines from the top, after a comment with some explanation of the choice:
/* The str() precision PyFloat_STR_PRECISION is chosen so that in most cases,
the rounding noise created by various operations is suppressed, while
giving plenty of precision for practical use. */
#define PyFloat_STR_PRECISION 12
You have the option of rebuilding, if you need something different. Any changes will change formatting of floats and complex numbers. See ./Objects/complexobject.c and ./Objects/floatobject.c. Also, you can compare the difference between how repr and str convert doubles in these two files.
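In other words, on a CPython version with this default, str() behaves roughly like formatting with 12 significant digits (a sketch of the effect, not the actual implementation):

x = 2.12345119999999999

print('%.12g' % x)   # 2.1234512 - what str() showed here
print('%.17g' % x)   # 2.1234511999999999 - what repr() showed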
There are a couple of issues worth discussing here, but the summary is: you cannot extract information that is not stored on your system in the first place.
If you've taken a decimal number and stored it as a floating point, you'll have lost information, since most decimal (base 10) numbers with a finite number of digits cannot be stored using a finite number of digits in base 2 (binary).
As was mentioned, str(a_float) will really call a_float.__str__(). As the documentation states, the purpose of that method is to
return a string containing a nicely printable representation of an object
There's no particular definition for the float case. My opinion is that, for your purposes, you should consider __str__'s behavior to be undefined, since there's no official documentation on it - the current implementation can change anytime.
If you don't have the original strings, there's no way to extract the missing digits of the decimal representation from the float objects. All you can do is round predictably, using string formatting (which you mention):
Decimal("{0:.5f}".format(a_float))
You can also remove 0s on the right with resulting_string.rstrip("0").
Again, this method does not recover the information that has been lost.
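A short sketch of that approach (the value is illustrative):

from decimal import Decimal

a_float = 2.1000000001

s = "{0:.5f}".format(a_float)   # round predictably: '2.10000'
d = Decimal(s.rstrip("0"))      # trailing zeros stripped: Decimal('2.1')
print(d)                        # 2.1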