>>> sum([0.3, 0.1, 0.2])
0.6000000000000001
>>> sum([0.3, 0.1, 0.2]) == 0.6
False
What can I do to make the result be exactly 0.6?
I don't want to round the result to a certain number of decimal digits because then I could lose precision for other list instances.
A float is inherently imprecise in pretty much every language, because most decimal fractions cannot be represented exactly in binary.
If you need exact precision, use the Decimal class:
from decimal import Decimal
num1 = Decimal("0.3")
num2 = Decimal("0.2")
num3 = Decimal("0.1")
print(sum([num1, num2, num3]))
The sum returns the very pleasing result of:
Decimal('0.6')
which print() displays as 0.6. This is still a Decimal object you can keep working with; call float() on it if you need a plain float again.
Use math.fsum() instead of sum(); it tracks the intermediate partial sums exactly and returns the correctly rounded float sum.
Related
I tried to get the integer ratio of a variable and got an unexpected result. Can somebody explain this?
>>> value = 3.2
>>> ratios = value.as_integer_ratio()
>>> ratios
(3602879701896397, 1125899906842624)
>>> ratios[0] / ratios[1]
3.2
I am using Python 3.3.
But I think that (16, 5) would be a much better answer.
And why is it correct for 2.5?
>>> value = 2.5
>>> value.as_integer_ratio()
(5, 2)
Use the fractions module to simplify fractions:
>>> from fractions import Fraction
>>> Fraction(3.2)
Fraction(3602879701896397, 1125899906842624)
>>> Fraction(3.2).limit_denominator()
Fraction(16, 5)
From the Fraction.limit_denominator() documentation:
Finds and returns the closest Fraction to self that has denominator at most max_denominator. This method is useful for finding rational approximations to a given floating-point number.
Floating point numbers are limited in precision and cannot represent many numbers exactly; what you see is a rounded representation, but the real number is:
>>> format(3.2, '.50f')
'3.20000000000000017763568394002504646778106689453125'
because a floating point number is represented as a sum of binary fractions; 1/5 can only be represented by adding up 1/8 + 1/16 + 1/128 + 1/256 + ... and further binary fractions with increasing powers of two.
It's not 16/5 because 3.2 isn't stored as exactly 3.2; what's stored is a rough floating-point approximation of it: 3.20000000000000017764...
When using the fractions module, it is better to provide a string instead of a float, to avoid floating point representation issues.
For example, if you pass '3.2' instead of 3.2 you get your desired result:
In : fractions.Fraction('3.2')
Out: Fraction(16, 5)
If you already have the value stored in a variable, you can use string formatting as well.
In : value = 3.2
In : fractions.Fraction(f'{value:.2f}')
Out: Fraction(16, 5)
This I imagine is extremely simple, but why in the following are the two values for y not == 0? I thought the whole point of the decimal module was to get rid of the float dust... The following is an extremely simplified version of a mathematical routine that passes numbers around as variables.
from decimal import *
getcontext().prec = 2
q = Decimal(0.01)
x = Decimal(0.10) * Decimal(0.10)
y = Decimal(x) - Decimal(q)
print(x,y, Decimal(y))
'''
x== 0.010
y== -2.1E-19
Decimal(y) == -2.1E-19
'''
Try specifying the numbers as strings:
>>> Decimal('0.10') * Decimal('0.10') - Decimal('0.0100')
Decimal('0.000')
The float literal 0.10 is not precisely the mathematical number 0.10, so using it to initialize Decimal doesn't avoid the float precision problem. Initializing Decimal from strings instead gives the expected result:
x = Decimal('0.10') * Decimal('0.10')
y = Decimal(x) - Decimal('0.010')
This is a more detailed explanation of the point made in existing answers.
You really do need to get rid of the numeric literals such as 0.1 if you want exact decimal arithmetic. The numeric literals will typically be represented by IEEE 754 64-bit binary floating point numbers.
The closest such number to 0.1 is 0.1000000000000000055511151231257827021181583404541015625. Its square is 0.01000000000000000111022302462515657123851077828659396139564708135883709660962637144621112383902072906494140625, which is not the same as the closest to 0.01, 0.01000000000000000020816681711721685132943093776702880859375.
You can get a clearer view of what is going on by removing the prec = 2 setting, which allows more precise output:
from decimal import *
q = Decimal(0.01)
x = Decimal(0.10) * Decimal(0.10)
y = Decimal(x) - Decimal(q)
print(q)
print(x)
print(y)
Output:
0.01000000000000000020816681711721685132943093776702880859375
0.01000000000000000111022302463
9.020562075127831486705690622E-19
If you had used string literals, as suggested by the other responses, the conversion to Decimal would have been done directly, without going through binary floating point. Both 0.1 and 0.01 are exactly representable in Decimal, so there would be no rounding error.
Parts of this question have been addressed elsewhere (e.g. "Is floating point math broken?").
The following reveals a difference in the way numbers are generated by division vs multiplication:
>>> listd = [i/10 for i in range(6)]
>>> listm = [i*0.1 for i in range(6)]
>>> print(listd)
[0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
>>> print(listm)
[0.0, 0.1, 0.2, 0.30000000000000004, 0.4, 0.5]
In the second case, 0.3 has a rounding error of about 1e-16, on the order of double-precision machine epsilon.
But I don't understand three things about the output:
Since the only numbers here exactly representable in binary are 0.0 and 0.5, why aren't those the only exact numbers printed above?
Why do the two list comprehensions evaluate differently?
Why are the two string representations of the numbers different, but not their binary representations?
>>> import struct
>>> def bf(x):
...     return bin(struct.unpack('@i', struct.pack('!f', float(x)))[0])
...
>>> x1 = 3/10
>>> x2 = 3*0.1
>>> print(repr(x1).ljust(20), "=", bf(x1))
0.3                  = -0b1100101011001100110011011000010
>>> print(repr(x2).ljust(20), "=", bf(x2))
0.30000000000000004  = -0b1100101011001100110011011000010
Answering each question:
Since the only numbers here exactly representable in binary are 0.0 and 0.5, why aren't those the only exact numbers printed above?
Python rounds off the display of any floating point number to the shortest literal that produces the same value when evaluated. So yes, many of those printed numbers aren't actually the same as the values they represent, but if you typed them into Python, you'd get that (slightly inaccurate) value without doing the math.
Why do the two list comprehensions evaluate differently?
0.1 is already inaccurate, as you've stated, so multiplying by it is not exactly equivalent to dividing by 10 (where at least both inputs are precise integers). Sometimes that inaccuracy means the result is not the same as dividing by 10; after all, you multiplied by "just over one tenth", not "one tenth".
The critical point here is that 10 is represented exactly in binary, whereas 0.1 is not. Dividing by 10 gets you the closest possible representation for each fraction; multiplying by the inexact conversion of 0.1 does not guarantee precision. Sometimes you get "close enough" to round off the result to a single decimal place, sometimes not.
Is that enough rationale?