This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 3 years ago.
I was using a simple for loop to add numbers, but I found a strange result when adding floats.
Can you explain why I get the following output?
1.1
1.2000000000000002
1.3000000000000003
1.4000000000000004
1.5000000000000004
1.6000000000000005
1.7000000000000006
1.8000000000000007
1.9000000000000008
2.000000000000001
2.100000000000001
2.200000000000001
2.300000000000001
2.4000000000000012
2.5000000000000013
2.6000000000000014
2.7000000000000015
2.8000000000000016
2.9000000000000017
3.0000000000000018
3.100000000000002
3.200000000000002
3.300000000000002
3.400000000000002
3.500000000000002
3.6000000000000023
3.7000000000000024
3.8000000000000025
3.9000000000000026
This was run in Anaconda Spyder:
a = 1
for i in range(1, 30):
    a = a + 0.1
    print(a)
It's a known limitation of floating-point arithmetic: computers cannot store most decimal fractions exactly in binary, so a tiny representation error creeps in and accumulates with each addition. See the Python docs on floating-point arithmetic.
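If you only need the printed values to look clean, one common workaround is to round for display; a minimal sketch (the underlying error is still there, just hidden):

a = 1
for i in range(1, 30):
    a = a + 0.1
    print(round(a, 1))  # round only for display; a still carries the tiny error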
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 2 years ago.
I came across the following strange result in Python (I use the Spyder environment). Any idea what is going on, and how can I fix it? I really don't want to write out 20 zeros after my number, and using NumPy for such a simple job doesn't make sense!
int(121212000000000000000000000000000000000000000000000000)
Out[27]: 121212000000000000000000000000000000000000000000000000
int(121212*1e20)
Out[28]: 12121199999999999802867712
int(121212*10e20)
Out[29]: 121211999999999993733709824
It has to do with floating-point precision: 1e20 and 10e20 are floats, so the product is computed with only about 16 significant decimal digits. You can use the decimal module, which performs exact decimal arithmetic, like so:
>>> from decimal import Decimal
>>> int(Decimal(121212) * Decimal('10e20'))
121212000000000000000000000
For more info, see the floating-point arithmetic section of the Python tutorial.
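Alternatively, since every value involved here is an integer, plain integer arithmetic stays exact with no imports at all; a minimal sketch:

>>> 121212 * 10**20
12121200000000000000000000

Python ints have arbitrary precision, so the problem only appears once a float such as 1e20 enters the expression.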
This question already has answers here:
Is floating point math broken?
(31 answers)
Floating Point Numbers [duplicate]
(7 answers)
Closed 4 years ago.
I am trying to understand decimal rounding behavior in Python. What I find strange is the following:
1) Multiplying 70 by 11.46950 gives me 802.865
2) Multiplying 70 by 11.46750 gives me 802.7249999999999
Why is there extra precision in the second case but not the first? I understand that internally these decimals cannot be represented exactly, but shouldn't that reason apply to the first case as well?
I am using Python 3.6.
Thanks
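For reference, both products are in fact inexact in binary; the difference lies only in how Python's repr picks the shortest decimal string that converts back to the same float. A minimal sketch using the standard decimal module to expose the exact stored values:

from decimal import Decimal

print(70 * 11.46950)           # 802.865 (a short string happens to round-trip)
print(70 * 11.46750)           # 802.7249999999999 (the shortest round-trip is long)
print(Decimal(70 * 11.46950))  # the exact binary value; not exactly 802.865
print(Decimal(70 * 11.46750))  # the exact binary value; also inexact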
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 5 years ago.
When I subtract 4.7 from 2.3, I get a number with 16 decimal places instead of an exact number with one decimal place. How come it doesn't give an exact answer?
This is because of the binary representation of the two decimal numbers (4.7 and 2.3):
4.7 is represented in binary as 100.10110011001100110011...
2.3 is represented in binary as 10.010011001100110011...
As you can see, both are repeating (non-terminating) fractions in binary, so neither can be stored exactly in a finite number of bits. This is why you do not obtain an exact result.
I hope this helps.
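As a quick check, you can make Python show the exact binary values it stores; a minimal sketch using the standard decimal module:

from decimal import Decimal

print(Decimal(4.7))  # the exact value stored for the literal 4.7
print(Decimal(2.3))  # the exact value stored for the literal 2.3
print(2.3 - 4.7)     # -2.4000000000000004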
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 5 years ago.
Can someone explain this?
Input:
58/100*100
Result:
57.99999999999999
Yet...
Input:
26/100*100
Result:
26.0
Also, how can I consistently get results like in the second case?
This is all due to floating-point arithmetic, plus a subtle change in the way Python evaluates expressions containing numeric literals.
Since Python 3, / performs true (floating-point) division even between integers; in Python 2 your expressions would have used integer arithmetic.
In IEEE 754 floating point, the stored value of 58/100 is further from the true value 0.58 than the stored value of 26/100 is from 0.26. That difference is enough to throw off the heuristics your output formatter uses to pick a short display string.
Performing the multiplication before the division can help in some circumstances, and it does here, because the intermediate product can be represented exactly.
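A minimal sketch of that reordering:

print(58 / 100 * 100)  # 57.99999999999999 -- 58/100 is already inexact
print(58 * 100 / 100)  # 58.0 -- 5800 is exact and divides evenly by 100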
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Why does '0.2 + 0.1' show as '0.30000000000000004'? [duplicate]
(1 answer)
Closed 7 years ago.
Why do I get this answer when I do this simple subtraction?
In[10]: 1-0.9
Out[10]: 0.09999999999999998
Does anyone know how to fix this?
See https://docs.python.org/2/tutorial/floatingpoint.html
Use round(1 - 0.9, n), which rounds the result to n decimal places.
This is a common limitation of floating-point precision. People usually round floats when displaying them, so the precision limitation is not shown.
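A minimal sketch of rounding the value versus formatting it only for display:

result = 1 - 0.9
print(result)            # 0.09999999999999998
print(round(result, 1))  # 0.1
print(f"{result:.2f}")   # 0.10 -- formatting changes the display, not the value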