This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 4 years ago.
I am very new to Python. I am learning basic stuff and stumbled at this.
Why is 5 in the decimal not rounding to the next higher decimal digit in this example below?
>>> round(2.67576,4)
2.6758
>>> round(2.67575,4)
2.6757
I was expecting the answer to both expressions to be the same, but they aren't.
The short answer is "floats are weird".
This isn't so much a Python issue as an issue with any system that uses binary floating point to represent a quantity. Floats are inherently inexact, which makes them fine for tracking continuous values where high precision matters but exactness is not essential. They are not suitable for the kind of exact decimal arithmetic you're trying to perform here.
I would suggest you look at Python's standard decimal module for exact numeric work.
https://docs.python.org/2/library/decimal.html
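A quick way to see what is happening, sketched with the stdlib decimal module (Decimal of a float exposes the exact binary value actually stored):

```python
from decimal import Decimal, ROUND_HALF_UP

# The nearest double to 2.67575 is slightly *below* 2.67575,
# so round() is correctly rounding the stored value down.
print(Decimal(2.67575) < Decimal('2.67575'))  # True

# For true decimal half-up rounding, parse from a string and quantize:
rounded = Decimal('2.67575').quantize(Decimal('0.0001'),
                                      rounding=ROUND_HALF_UP)
print(rounded)  # 2.6758
```

Constructing the Decimal from the string '2.67575' (rather than the float) is what preserves the exact decimal value.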
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 1 year ago.
This is a pretty easy question.
I am doing the division 1.2/0.2 on Python 3.7.0 and I get 5.9999999999 instead of a clean 6. It is obvious that 1.2/0.2 is 6, so I don't understand why I am getting this result.
Thanks
You can round the number. For example,
round(1.2 / 0.2)
would give an answer of 6. The reason you're getting this result is floating-point precision error.
The problem is caused by the internal representation of floating-point numbers, which uses a fixed number of binary digits to represent each decimal number. Some decimal numbers cannot be represented exactly in binary, so in many cases this leads to small round-off errors.
You can read more here.
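A minimal sketch of both fixes, rounding the result and comparing with a tolerance instead of `==`:

```python
import math

x = 1.2 / 0.2   # neither 1.2 nor 0.2 is exactly representable in binary
print(x)        # 5.999999999999999

print(round(x))            # 6
print(math.isclose(x, 6))  # True: compare with a tolerance instead of ==
```

`math.isclose` is generally the safer habit, since it keeps working when the result is not near an integer.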
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 2 years ago.
Good day,
I'm getting a strange rounding error and I'm unsure as to why.
print(-0.0075+0.005)
is coming out in the terminal as
-0.0024999999999999996
Which appears to be throwing off the math in the rest of my program. Given that the numbers in this function could be anywhere between 1 and 0.0001, the number of decimal places can vary.
Any ideas why I'm not getting the expected answer of -0.0025?
Joe,
The 'rounding error' you refer to is a consequence of the arithmetic system Python is using to perform the requested operation (though by no means limited to Python!).
All real numbers must be represented by a finite number of bits in a computer program, so they are 'rounded' to a suitable representation. This is necessary because with a finite number of bits it is not possible to represent all real numbers exactly (or even all the numbers in a finite interval of the real line). So while you may write a = 0.005, the computer will store something very close to, but not exactly, that value. Typically this rounding is done through a floating-point representation, which is the case in standard Python. In the binary version of this system, real numbers are approximated by integers multiplied by powers of 2.
Consequently, operations such as the sum you are performing act on these 'rounded' versions of the numbers and return another rounded result. This means that arithmetic in the computer is always approximate, although it is usually precise enough that we do not care. If these rounding errors are too large for your application, you can switch to a more precise representation (more bits). You can find a good explainer with examples in Python's docs.
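If you do need the exact decimal answer, a sketch with the stdlib decimal module (constructing from strings so the written values are kept exactly):

```python
from decimal import Decimal

# Binary floats: both operands are rounded before the sum even happens
print(-0.0075 + 0.005)  # -0.0024999999999999996

# Decimal stores the values exactly as written:
print(Decimal('-0.0075') + Decimal('0.005'))  # -0.0025
```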
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 4 years ago.
So I'm just doing a basic math problem and noticed that it returns an abnormally long float rather than the clean answer I was looking for. I am trying to take the 5th root of 100,000. In normal mathematical notation we would write 100,000^(1/5) = 10; however, my code in Python returns 10.000000000000002. I've tried the following bits of code:
100000**(1/5)
And
100000**.2
I understand why this might not work perfectly for, say, 1,000^(1/3), because 1/3 is a non-terminating decimal; however, I would have thought it should work fine for 100,000^(1/5). Not sure what I'm doing incorrectly. Any help or insight is appreciated.
This is because of floating-point precision. Floats in Python are encoded in 64 bits (on a 64-bit system), so there is a limit to the precision with which a number can be represented. Note also that the exponent 1/5 is itself stored only approximately, since 0.2 has no finite binary expansion. If you want exact decimal arithmetic, use the Decimal module.
For further explanation, see this [article](https://docs.python.org/3/tutorial/floatingpoint.html).
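A short sketch showing the point above: the exponent 1/5 is already inexact before the power is even computed, so rounding the result is the usual fix when an integer answer is expected.

```python
from decimal import Decimal

# 1/5 looks "clean" in decimal, but 0.2 repeats forever in binary,
# so the exponent itself is only approximate:
print(Decimal(1 / 5))  # exact stored value, 0.20000000000000001110...

x = 100000 ** (1 / 5)
print(x)         # 10.000000000000002
print(round(x))  # 10: round when an integer result is expected
```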
This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Python float - str - float weirdness
Python float division does not appear to have accurate results. Can someone explain why?
>>>3.0/5
0.59999999999999998
Short answer: Floats use finite-precision binary encoding to represent numbers, so various operations lose some precision.
The Wikipedia page has a lot of information (maybe too much).
See also: How do I use accurate float arithmetic in Python?
Floating point arithmetic is not exact; there are rounding errors that are worsened by the fact that computers use binary floating point and not decimal floating point. See Wikipedia.
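For exact results with ratios like 3/5, one option from the standard library is the fractions module; a minimal sketch (note the difference between constructing from a float and from a string):

```python
from fractions import Fraction

# Exact rational arithmetic sidesteps binary rounding entirely:
print(Fraction(3, 5) + Fraction(1, 5))  # 4/5, exactly

# Constructing from a float captures the binary approximation...
print(Fraction(0.6) == Fraction(3, 5))    # False
# ...while a decimal string is parsed exactly:
print(Fraction('0.6') == Fraction(3, 5))  # True
```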
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 2 years ago.
My code:
a = '2.3'
I wanted to display a as a floating point value.
Since a is a string, I tried:
float(a)
The result I got was :
2.2999999999999998
I want a solution for this problem. Please, kindly help me.
I was following this tutorial.
I think it reflects more on your understanding of floating point types than on Python. See my article about floating point numbers (.NET-based, but still relevant) for the reasons behind this "inaccuracy". If you need to keep the exact decimal representation, you should use the decimal module.
This is not a drawback of Python; rather, it is a drawback of the way floating-point numbers are stored on a computer. Regardless of implementation language, you will find similar problems.
You say that you want to 'display' a as a floating-point value; why not just display the string? Visually it will be identical to what you expect.
As Jon mentioned, if your needs are more than just 'displaying' the floating point number, you should use the decimal module to store the exact representation.
Excellent answers explaining reasons. I just wish to add a possible practical solution from the standard library:
>>> from decimal import Decimal
>>> a = Decimal('2.3')
>>> print a
2.3
This is actually a (very) F.A.Q. for Python and you can read the answer here.
Edit: I just noticed that Jon Skeet already mentioned this. Oh well...