This question already has answers here:
Python3 rounding to nearest even
(3 answers)
Closed 3 years ago.
I'm using this function to round values ending in 5 upward (round half up) in Python:
import math

def round_half_up_xx(n, decimals=2):
    multiplier = 10 ** decimals
    return math.floor(n * multiplier + 0.5) / multiplier
I'm getting weird results:
round_half_up_xx(81.225) => 81.22
round_half_up_xx(81.235) => 81.24
How do I revise the code so that round_half_up_xx(81.225) yields 81.23?
You can't, because 81.225 isn't a real value in IEEE 754 binary floating point. It's shorthand for 81.2249999999999943..., which, as it doesn't end with a 5 in the thousandths place, rounds to 81.22 without concerning itself with special rounding rules.
If you want true accuracy of this sort, you'll need to use the decimal module, initializing the decimal.Decimal values from ints or str (if you initialize from a float, the Decimal will faithfully reproduce the float's imprecision, so it won't be 81.225 either). Because Decimal works in base 10, it can apply whatever rounding strategy you like, without you reimplementing one from scratch as you've done here.
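For example, a minimal sketch using decimal.Decimal with ROUND_HALF_UP (the helper name is just illustrative):

from decimal import Decimal, ROUND_HALF_UP

def round_half_up_dec(value, decimals=2):
    # Build the Decimal from a string so it is exactly 81.225,
    # not the nearest binary float.
    exponent = Decimal(10) ** -decimals          # e.g. Decimal('0.01') for 2 places
    return Decimal(value).quantize(exponent, rounding=ROUND_HALF_UP)

print(round_half_up_dec('81.225'))  # 81.23
print(round_half_up_dec('81.235'))  # 81.24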
This question already has answers here:
Is floating point math broken?
(31 answers)
How to avoid floating point errors? [duplicate]
(2 answers)
Closed 3 days ago.
I am trying to make a program that subtracts decimal values from a number, but when I input certain values it returns a long string of decimals instead of the correct value.
I am trying to subtract 2.9 from 3, and instead of getting 0.1 I am getting 0.099999999999. I have tried playing with the values of both the starting number and the number being subtracted, but every time there is some value whose subtraction gives a result like this and breaks the code. Is there a way to stop this from happening?
Computer hardware has limitations when it comes to floating point arithmetic. Check the Python tutorial's section "Floating Point Arithmetic: Issues and Limitations" for more information; it opens as follows:
Floating-point numbers are represented in computer hardware as base 2 (binary) fractions. For example, the decimal fraction 0.125 has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction 0.001 has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only real difference being that the first is written in base 10 fractional notation, and the second in base 2.
Unfortunately, most decimal fractions cannot be represented exactly as binary fractions. A consequence is that, in general, the decimal floating-point numbers you enter are only approximated by the binary floating-point numbers actually stored in the machine.
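If the stray digits matter, the usual workarounds are rounding for display or doing the arithmetic with decimal. A minimal sketch (the exact stray digits you see may differ depending on the values involved):

from decimal import Decimal

print(3 - 2.9)                        # not exactly 0.1 (binary float artifact)
print(round(3 - 2.9, 2))              # 0.1, rounded for display
print(Decimal('3') - Decimal('2.9'))  # Decimal('0.1'), exact decimal arithmetic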
This question already has answers here:
Is floating point math broken?
(31 answers)
Is floating point arbitrary precision available?
(5 answers)
Closed 7 years ago.
I don't know if this is an obvious bug, but while running a Python script for varying the parameters of a simulation, I realized the results with delta = 0.29 and delta = 0.58 were missing. On investigation, I noticed that the following Python code:
for i_delta in range(0, 101, 1):
    delta = float(i_delta) / 100
    (...)
    filename = 'foo' + str(int(delta * 100)) + '.dat'
generated identical files for delta = 0.28 and 0.29, and likewise for 0.57 and 0.58, the reason being that Python returns float(29)/100 as 0.28999999999999998. But it isn't a systematic error, in the sense that it doesn't happen to every integer. So I created the following Python script:
import sys

n = int(sys.argv[1])
for i in range(0, n + 1):
    a = int(100 * (float(i) / 100))
    if i != a: print i, a
And I can't see any pattern in the numbers for which this rounding error happens. Why does this happen with those particular numbers?
Any number that can't be built from a finite sum of powers of two can't be represented exactly as a floating point number; it has to be approximated. Sometimes the closest approximation is slightly less than the actual number.
Read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
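One way to sidestep the truncation in your loop, as a sketch, is to derive the filename from the integer loop index instead of the float:

for i_delta in range(0, 101):
    delta = float(i_delta) / 100                # still an approximation, fine for the simulation itself
    filename = 'foo' + str(i_delta) + '.dat'    # exact: 'foo29.dat' when i_delta is 29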
It's very well known, due to the nature of floating point numbers.
If you want to do decimal arithmetic rather than floating point arithmetic, there are libraries to do this.
E.g.,
>>> from decimal import Decimal
>>> Decimal(29)/Decimal(100)
Decimal('0.29')
>>> Decimal('0.29')*100
Decimal('29')
>>> int(Decimal('29'))
29
In general, decimal is probably going overboard, and it will still have rounding errors in the rare cases where the number does not have a finite decimal representation (for example, any fraction whose denominator has a prime factor other than 2 or 5, the prime factors of the decimal base 10). For example:
>>> s = Decimal(7)
>>> Decimal(1)/s/s/s/s/s/s/s*s*s*s*s*s*s*s
Decimal('0.9999999999999999999999999996')
>>> int(Decimal('0.9999999999999999999999999996'))
0
So it's best to always round before casting floats to ints, unless you want a floor function.
>>> int(1.9999)
1
>>> int(round(1.999))
2
Another alternative is to use the Fraction class from the fractions module, which doesn't approximate. (It just keeps adding/subtracting and multiplying the integer numerators and denominators as necessary.)
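A minimal sketch of the Fraction approach:

>>> from fractions import Fraction
>>> Fraction(29, 100)
Fraction(29, 100)
>>> Fraction(29, 100) * 100
Fraction(29, 1)
>>> int(Fraction(29, 100) * 100)
29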
This question already has answers here:
Python 3.x rounding behavior
(13 answers)
Why is rounding 0.5 (decimal) not exact? [duplicate]
(1 answer)
Why round(4.5) == 4 and round(5.5) == 6 in Python 3.5? [duplicate]
(2 answers)
Closed 3 years ago.
I am playing around with Python's print function and string formatting, and came across this problem.
print('%.2f' % (0.665))   # 0.67
print('%.3f' % (0.0625))  # 0.062
Since keeping two decimal places of 0.665 gives 0.67, I expected that keeping three decimal places of 0.0625 would give 0.063, but the result is 0.062.
The general rule is that when the rounded-off part of a number is exactly 5, you choose the direction that makes the final resulting digit even - this avoids any systematic bias in these halfway cases. This applies in the case of 0.0625, which is one of the uncommon decimal numbers that have an exact representation in floating-point binary - its final digit truly is a 5. (For an example of a number that rounds up in this case, try 0.375 to two places.) The number 0.665, on the other hand, does not actually exist - the closest floating-point value is actually 0.6650000000000000355271. The rounded-off part is definitely (although only slightly) greater than 5, so it necessarily rounds up.
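Both effects can be seen directly; a small sketch (digits abbreviated in the comments):

from decimal import Decimal

# 0.0625 is a power-of-two fraction, so the stored value really does end in 5
print(Decimal(0.0625))   # 0.0625
print('%.3f' % 0.0625)   # 0.062 (a true tie: the 2 stays because it is already even)
print('%.2f' % 0.375)    # 0.38  (a true tie: the 7 rounds up to the even digit 8)

# 0.665 has no exact binary representation; the stored value is slightly above it
print(Decimal(0.665))    # 0.66500000000000003552713678800500929...
print('%.2f' % 0.665)    # 0.67  (not a tie at all, so it simply rounds up)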
It is, as is often the case, the annoying problem of floating point rounding.
.0625 can be represented in an exact way, so it is rounded down. (2 is even, so the most usual rounding algorithm decides to round down in these cases.)
.665 cannot be represented in an exact way (its internal representation is either slightly smaller or slightly bigger than the given number). In this case it is slightly bigger, so despite the 6 before the 5 being even, it rounds up.
This question already has answers here:
Why is math.sqrt() incorrect for large numbers?
(4 answers)
Is floating point math broken?
(31 answers)
Closed 5 years ago.
If you take a number, take its square root, drop the decimal, and then raise it to the second power, the result should always be less than or equal to the original number.
This seems to hold true in Python until you try it on 99999999999999975425, for some reason.
import math

def check(n):
    assert math.pow(math.floor(math.sqrt(n)), 2) <= n

check(99999999999999975424)  # No exception.
check(99999999999999975425)  # Throws AssertionError.
It looks like math.pow(math.floor(math.sqrt(99999999999999975425)), 2) returns 1e+20.
I assume this has something to do with the way values are stored in Python... something related to floating point arithmetic, but I can't reason about specifically how that affects this case.
The problem is not really about sqrt or pow; the problem is that you're using numbers larger than floating point can represent precisely. Standard IEEE 754 64-bit floating point can't represent every integer beyond 2**53, because the significand has only 53 bits of precision (52 stored bits plus one implicit leading bit).
Try just converting your inputs to float and back again:
>>> int(float(99999999999999975424))
99999999999999967232
>>> int(float(99999999999999975425))
99999999999999983616
As you can see, the representable value skipped by 16384. The first step in math.sqrt is converting to float (C double), and at that moment, your value increased by enough to ruin the end result.
Short version: float can't represent large integers precisely. Use decimal if you need greater precision. Or if you don't care about the fractional component, as of 3.8, you can use math.isqrt, which works entirely in integer space (so you never experience precision loss, only the round down loss you expect), giving you the guarantee you're looking for, that the result is "the greatest integer a such that a² ≤ n".
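A minimal sketch of the check rewritten with math.isqrt (Python 3.8+):

import math

def check_int(n):
    # isqrt never leaves integer arithmetic, so there is no float precision loss
    assert math.isqrt(n) ** 2 <= n

check_int(99999999999999975424)  # OK
check_int(99999999999999975425)  # OK: isqrt(n) is 9999999999, whose square is <= n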
Contrary to what Evan Rose's (now-deleted) answer claims, this is not due to an epsilon value in the sqrt algorithm.
Most math module functions cast their inputs to float, and math.sqrt is one of them.
99999999999999975425 cannot be represented as a float. For this input, the cast produces a float with exact numeric value 99999999999999983616, which repr shows as 9.999999999999998e+19:
>>> float(99999999999999975425)
9.999999999999998e+19
>>> int(_)
99999999999999983616L
The closest float to the square root of this number is 10000000000.0, and that's what math.sqrt returns.