This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 2 years ago.
Parts of this question have been addressed elsewhere (e.g. is floating point math broken?).
The following reveals a difference in the way numbers are generated by division vs multiplication:
>>> listd = [i/10 for i in range(6)]
>>> listm = [i*0.1 for i in range(6)]
>>> print(listd)
[0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
>>> print(listm)
[0.0, 0.1, 0.2, 0.30000000000000004, 0.4, 0.5]
In the second case, 0.3 has a rounding error of about 1e-16, which is roughly the precision limit of a double-precision float.
But I don't understand three things about the output:
Since the only numbers here exactly representable in binary are 0.0 and 0.5, why aren't those the only exact numbers printed above?
Why do the two list comprehensions evaluate differently?
Why are the two string representations of the numbers different, but not their binary representations?
>>> import struct
>>> def bf(x):
...     # pack as a 32-bit float, then reinterpret the same four bytes as an integer
...     # ('!f' packs big-endian, '<i' unpacks little-endian, hence the minus sign below)
...     return bin(struct.unpack('<i', struct.pack('!f', float(x)))[0])
>>> x1 = 3/10
>>> x2 = 3*0.1
>>> print(repr(x1).ljust(20), "=", bf(x1))
>>> print(repr(x2).ljust(20), "=", bf(x2))
0.3 = -0b1100101011001100110011011000010
0.30000000000000004 = -0b1100101011001100110011011000010
Answering each question:
Since the only numbers here exactly representable in binary are 0.0 and 0.5, why aren't those the only exact numbers printed above?
Python displays any floating point number as the shortest literal that evaluates back to the same value. So yes, many of those printed numbers aren't the same as the actual values they stand for, but if you typed them into Python you'd get that same (slightly inaccurate) value without doing the math.
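For illustration, comparing the default repr with a fixed-precision format shows that the values displayed as 0.1 and 0.3 are really just the shortest names for the nearest doubles:

>>> repr(1/10)
'0.1'
>>> format(1/10, '.20f')
'0.10000000000000000555'
>>> format(3/10, '.20f')
'0.29999999999999998890'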
Why do the two list comprehensions evaluate differently?
0.1 is already inaccurate, as you've stated, so multiplying by it is not exactly equivalent to dividing by 10 (where at least both inputs are precise integers). Sometimes that inaccuracy means the result is not the same as dividing by 10; after all, you multiplied by "just over one tenth", not "one tenth".
The critical point here is that 10 is represented exactly in binary, whereas 0.1 is not. Dividing by 10 gets you the closest possible representation for each fraction; multiplying by the inexact conversion of 0.1 does not guarantee precision. Sometimes you get "close enough" to round off the result to a single decimal place, sometimes not.
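To make that concrete, converting both results to Decimal exposes the exact doubles involved: 3/10 lands on the closest double to 0.3, while 3*0.1 lands one step above it, because the stored 0.1 is slightly too large and tripling it overshoots:

>>> from decimal import Decimal
>>> Decimal(3/10)
Decimal('0.299999999999999988897769753748434595763683319091796875')
>>> Decimal(3*0.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')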
Is that enough rationale?
Related
I tried to get the integer ratio of a variable and got an unexpected result. Can somebody explain this?
>>> value = 3.2
>>> ratios = value.as_integer_ratio()
>>> ratios
(3602879701896397, 1125899906842624)
>>> ratios[0] / ratios[1]
3.2
I am using Python 3.3.
But I think that (16, 5) would be a much better result.
And why is it correct for 2.5?
>>> value = 2.5
>>> value.as_integer_ratio()
(5, 2)
Use the fractions module to simplify fractions:
>>> from fractions import Fraction
>>> Fraction(3.2)
Fraction(3602879701896397, 1125899906842624)
>>> Fraction(3.2).limit_denominator()
Fraction(16, 5)
From the Fraction.limit_denominator() function:
Finds and returns the closest Fraction to self that has denominator at most max_denominator. This method is useful for finding rational approximations to a given floating-point number
Floating point numbers are limited in precision and cannot represent many numbers exactly; what you see is a rounded representation, but the real number is:
>>> format(3.2, '.50f')
'3.20000000000000017763568394002504646778106689453125'
because a floating point number is represented as a sum of binary fractions; 1/5 can only be represented by adding up 1/8 + 1/16 + 1/128 + more binary fractions for increasing exponents of two.
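As a rough sketch of that expansion (illustrative only, not part of the original answer), a greedy sum of binary fractions approaching 1/5 picks exactly the terms mentioned above and never quite gets there:

from fractions import Fraction

target = Fraction(1, 5)
total = Fraction(0)
terms = []
for k in range(1, 11):              # first ten binary places
    term = Fraction(1, 2 ** k)
    if total + term <= target:      # greedy: keep the term if the sum still fits under 1/5
        total += term
        terms.append('1/{}'.format(2 ** k))

print(terms)         # ['1/8', '1/16', '1/128', '1/256']
print(float(total))  # 0.19921875 -- still short of 0.2; the bit pattern repeats forever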
It's not 16/5 because 3.2 isn't exactly 3.2... it's a rough floating point approximation of it, e.g. 3.20000000000000017764.
While using the fractions module, it is better to provide a string instead of a float to avoid floating point representation issues.
For example, if you pass '3.2' instead of 3.2 you get your desired result:
In : fractions.Fraction('3.2')
Out: Fraction(16, 5)
If you already have the value stored in a variable, you can use string formatting as well.
In : value = 3.2
In : fractions.Fraction(f'{value:.2f}')
Out: Fraction(16, 5)
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 6 years ago.
Basically I thought I created a loop to just increase by .1 every iteration. What I got instead are numbers like 0.30000000000000004, 0.7999999999999999, and 3.0000000000000013. Here is my code and the results. Why is it not .1, .2, .3, etc., and why does it produce 0.30000000000000004, 0.4, 0.5, 0.6, 0.7, 0.7999999999999999, etc.? Basically, why are there these unexpected (for me) decimals?
>>> tph_bin = []
>>> bin_num = 0
>>> while bin_num <= 3.5:
...     tph_bin.append(bin_num)
...     bin_num = bin_num + .1
>>> tph_bin
[0, 0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6, 0.7, 0.7999999999999999, 0.8999999999999999, 0.9999999999999999, 1.0999999999999999, 1.2, 1.3, 1.4000000000000001, 1.5000000000000002, 1.6000000000000003, 1.7000000000000004, 1.8000000000000005, 1.9000000000000006, 2.0000000000000004, 2.1000000000000005, 2.2000000000000006, 2.3000000000000007, 2.400000000000001, 2.500000000000001, 2.600000000000001, 2.700000000000001, 2.800000000000001, 2.9000000000000012, 3.0000000000000013, 3.1000000000000014, 3.2000000000000015, 3.3000000000000016, 3.4000000000000017]
Bonus Question: Is there a better way to create a list of numbers increasing by .1?
This is a floating point precision limitation. Please refer to:
https://docs.python.org/2/tutorial/floatingpoint.html
0.1 is actually stored as the binary fraction:
0.00011001100110011001100110011001100110011001100110011010
As you can see, that can lead to binary rounding errors as numbers are added.
Try using Decimal as an alternative if all you care about is one decimal place of precision:
from decimal import Decimal

value = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
print(value)
# 0.3
if Decimal("0.3") == value:
    print("This works!")
Answer to Bonus Question: Is there a better way to create a list of numbers increasing by .1?
With numpy:
numpy.arange(0, 3.5, 0.1)
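One caveat worth adding (not part of the original answer): numpy.arange with a non-integer step still produces ordinary binary floats, so the same rounding issues apply to the stored values, and NumPy's documentation suggests numpy.linspace when you want a fixed number of evenly spaced points:

import numpy as np

bins_arange = np.arange(0, 3.5, 0.1)     # values in [0, 3.5); each is still a binary float
bins_linspace = np.linspace(0, 3.5, 36)  # 36 points from 0.0 to 3.5 inclusive, step 0.1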
This is basically one of the drawbacks of using float numbers. The exact sum of 0.2 and 0.1 cannot be represented in the fixed number of bits reserved for a float, so the result is rounded to the nearest value that does fit. If you want to work with numbers that way, you should operate on integers and then divide the result by 10, or round the values, but never compare floats for exact equality, because the outcome can be surprising.
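A minimal sketch of that integer-based approach: count in whole tenths and divide once at the end, so no error accumulates from one iteration to the next:

# Each element is the single closest float to i/10, with no accumulated error.
tph_bin = [i / 10.0 for i in range(36)]   # 0.0, 0.1, 0.2, ..., 3.5
print(tph_bin[3])                         # 0.3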
>>> sum([0.3, 0.1, 0.2])
0.6000000000000001
>>> sum([0.3, 0.1, 0.2]) == 0.6
False
What can I do to make the result be exactly 0.6?
I don't want to round the result to a certain number of decimal digits because then I could lose precision for other list instances.
A float is inherently imprecise in pretty much every language, because most decimal fractions cannot be represented exactly in binary.
If you need exact precision use the Decimal class:
from decimal import Decimal
num1 = Decimal("0.3")
num2 = Decimal("0.2")
num3 = Decimal("0.1")
print(sum([num1, num2, num3]))
Which will return the very pleasing result of Decimal('0.6'), printed as 0.6. (Call float() on it if you need a plain float.)
This is conveniently still a Decimal object that you can keep working with.
Use math.fsum() instead of sum().
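math.fsum() tracks the intermediate partial sums exactly and rounds only once at the end, which for this particular list lands exactly on 0.6:

>>> import math
>>> math.fsum([0.3, 0.1, 0.2])
0.6
>>> math.fsum([0.3, 0.1, 0.2]) == 0.6
True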
This question already has answers here:
Python rounding error with float numbers [duplicate]
(2 answers)
Floating point representation error in Python [duplicate]
(1 answer)
Closed 10 years ago.
There is the following Python code:
import sys
import fileinput, string

K = 3
f = raw_input("please input the initial " + str(K) + " lamba: ").split()
Z = []
sumoflamba = 0.0
for m in f:
    j = m.find("/")
    if j != -1:
        e = float(m[:j]) / float(m[j+1:])
    else:
        e = float(m)
    sumoflamba += e
    if e == 0:
        print "the initial lamba cannot be zero!"
        sys.exit()
    Z.append(e)
print sumoflamba
if sumoflamba != 1:
    print "initial lamba must be summed to 1!"
    sys.exit()
When I run it with 0.7, 0.2, 0.1, it prints the warning and exits! However, when I run it with 0.1, 0.2, 0.7, it works fine. 0.3, 0.3, 0.4 works fine too. I do not have a clue... Can someone explain this, please?
The "print sumoflamda" will give 1.0 for all these cases.
Pretty much what the link Lattyware provided explains, but in a nutshell: you can't expect equality comparisons to work in floating point without being explicit about the precision. If you were to either round off the value or cast it to an integer, you would get predictable results:
>>> f1 = 0.7 + 0.2 + 0.1
>>> f2 = 0.1 + 0.2 + 0.7
>>> f1 == f2
False
>>> round(f1,2) == round(f2,2)
True
Floats are imprecise. The more you operate with them, the more imprecision they accumulate. Some numbers can be represented exactly, but most cannot. Comparing them for equality will almost always be a mistake.
It is bad practice to check floating point numbers for equality. The best you can do here is to check that your number is within a desired tolerance of the target. For details of how floating point works, see http://en.wikipedia.org/wiki/Floating_point#Internal_representation
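As a sketch of that tolerance check (math.isclose requires Python 3.5+; with the Python 2 code above, an explicit tolerance works the same way):

>>> import math
>>> sumoflamba = 0.7 + 0.2 + 0.1
>>> sumoflamba == 1
False
>>> abs(sumoflamba - 1.0) < 1e-9      # explicit tolerance, works in any Python version
True
>>> math.isclose(sumoflamba, 1.0)     # Python 3.5+
True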