This question already has answers here:
Python rounding error with float numbers [duplicate]
(2 answers)
Python floating-point math is wrong [duplicate]
(2 answers)
Closed 9 years ago.
In my Python interpreter, you can see:
>>> 0.6+0.8
1.4
>>> 1.6+0.8
2.4000000000000004
Why is the result so strange?
I believe this is a problem with how floats are represented in binary rather than with Python itself.
http://docs.python.org/2/library/decimal.html explains it better than I could. In short:
import decimal

num1 = decimal.Decimal("1.6")
num2 = decimal.Decimal("0.8")
print(num1 + num2)  # 2.4 -- exact, because Decimal stores base-10 digits
Writing a helper function that converts your values to Decimal for you will be easy enough.
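For instance, a minimal sketch of such a helper (the name decimal_sum is hypothetical):

import decimal

def decimal_sum(*values):
    # Hypothetical helper: take the numbers as strings so their decimal
    # digits are preserved exactly, then add them as Decimals.
    return sum(decimal.Decimal(v) for v in values)

print(decimal_sum("1.6", "0.8"))  # 2.4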
This is because of floating-point rounding error.
Read the basics here: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
And here is the solution in Python for your case:
http://floating-point-gui.de/languages/python/
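For example, a minimal sketch of the usual fix (round the value itself, or format it for display):

>>> 1.6 + 0.8
2.4000000000000004
>>> round(1.6 + 0.8, 1)   # round the value itself
2.4
>>> "{:.1f}".format(1.6 + 0.8)   # or just format it for display
'2.4'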
This question already has answers here:
Why does integer division yield a float instead of another integer?
(4 answers)
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 1 year ago.
I've learnt that Python supports arbitrarily large numbers with int itself.
But in this case:
print(int(12630717197566440579/10))
My answer is
1263071719756644096
and not
1263071719756644057
as it's supposed to be.
Can someone tell me why?
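The short version, hinted at by the duplicates above: in Python 3, / always produces a float, and a float carries only about 53 bits (roughly 15-17 significant decimal digits) of precision, which this 20-digit quotient exceeds. Floor division with // stays in exact integer arithmetic. A minimal sketch:

# / is true division: the result is a float, so precision is lost
# before int() ever sees it
print(int(12630717197566440579 / 10))   # 1263071719756644096

# // is floor division: both operands stay ints, so the result is exact
print(12630717197566440579 // 10)       # 1263071719756644057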
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 3 years ago.
I was using a simple for loop to add numbers, but I found a strange result when adding floats.
Can you explain why I get the following output?
1.1
1.2000000000000002
1.3000000000000003
1.4000000000000004
1.5000000000000004
1.6000000000000005
1.7000000000000006
1.8000000000000007
1.9000000000000008
2.000000000000001
2.100000000000001
2.200000000000001
2.300000000000001
2.4000000000000012
2.5000000000000013
2.6000000000000014
2.7000000000000015
2.8000000000000016
2.9000000000000017
3.0000000000000018
3.100000000000002
3.200000000000002
3.300000000000002
3.400000000000002
3.500000000000002
3.6000000000000023
3.7000000000000024
3.8000000000000025
3.9000000000000026
This is running in Anaconda Spyder:
a = 1
for i in range(1, 30):
    a = a + 0.1
    print(a)
It's a known limitation of floating-point arithmetic: computers cannot store most decimal fractions (like 0.1) exactly in binary, so each addition contributes a tiny rounding error. See the Python docs.
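Two common workarounds, sketched below: round the accumulated value before printing, or format it for display instead of relying on repr.

# Option 1: round away the tiny accumulated error before printing
a = 1
for i in range(1, 30):
    a = a + 0.1
    print(round(a, 10))

# Option 2: format for display, leaving the stored value untouched
a = 1
for i in range(1, 30):
    a = a + 0.1
    print(f"{a:.1f}")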
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 4 years ago.
What is the explanation for these unexpected results in Python?
>>> from math import *
>>> log(1000, 10)  # expecting 3.0
2.9999999999999996
>>> 1000 ** (1/3)  # expecting 10.0
9.999999999999998
Basically, the values of these functions are calculated using numerical approximations (such as series expansions), and since Python uses floating-point values by default, the results carry a tiny rounding error.
You can see this:
Efficient implementation of natural logarithm (ln) and exponentiation
That's why it gives these kinds of results.
You can use the Fraction class from the fractions module if you need exact rational arithmetic:
https://docs.python.org/3.1/library/fractions.html
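A minimal sketch of what Fraction gives you, assuming you construct it from exact strings (building a Fraction from an already-rounded float just preserves the float's error):

from fractions import Fraction

a = Fraction("1.6") + Fraction("0.8")
print(a)         # 12/5 -- an exact rational result
print(float(a))  # 2.4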
This question already has answers here:
Is floating point math broken?
(31 answers)
Floating Point Numbers [duplicate]
(7 answers)
Closed 4 years ago.
I am trying to understand the decimal rounding behavior in Python. What I find strange is the following:
1) Multiplying 70 by 11.46950 gives me 802.865
2) Multiplying 70 by 11.46750 gives me 802.7249999999999
Why is there extra precision in the second case and not the first? I understand that, internally, the decimal value cannot be represented exactly. But shouldn't that reason apply to the first case as well?
I am using Python 3.6.
Thanks
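One way to see what is going on, sketched with the standard decimal module: Decimal(float) displays the exact binary value a literal is actually stored as, and repr then prints the shortest decimal string that round-trips to the result, which happens to be short for the first product and long for the second.

from decimal import Decimal

# Neither literal is stored exactly; inspect the exact stored values
print(Decimal(11.46950))
print(Decimal(11.46750))

# Both products are inexact too; their reprs just differ in how short
# a round-tripping decimal string happens to be
print(70 * 11.46950)
print(70 * 11.46750)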
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Why does '0.2 + 0.1' show as '0.30000000000000004'? [duplicate]
(1 answer)
Closed 7 years ago.
Why do I get this answer when I do this simple subtraction?
In[10]: 1-0.9
Out[10]: 0.09999999999999998
Does someone know how to fix this?
Refer to https://docs.python.org/2/tutorial/floatingpoint.html
Use round(1 - 0.9, n), which rounds the result to n decimal places.
This is a common problem with floating-point precision. Usually floats are rounded when displayed, so the precision limitation is not visible.
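For example, a minimal sketch of both fixes:

>>> 1 - 0.9
0.09999999999999998
>>> round(1 - 0.9, 2)   # round the value itself
0.1
>>> "{:.2f}".format(1 - 0.9)   # or format it for display only
'0.10'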