This question already has answers here:
Limiting floats to two decimal points
(35 answers)
Closed 2 years ago.
I want to round the number below to two decimal places. The result should be 33.39, but Python gives 33.38, apparently because the trailing 5 is rounded to even and therefore down to 8.
round(33.385, 2)
This actually has nothing to do with round-to-nearest/even. 33.385 is a decimal number, but is represented in your hardware as an approximation in binary floating point. The decimal module can show you the exact decimal value of that binary approximation:
>>> import decimal
>>> decimal.Decimal(33.385)
Decimal('33.38499999999999801048033987171947956085205078125')
That's why it rounds to 33.38: the exact value stored is slightly closer to 33.38 than to 33.39.
If you need exact decimal results, don't use your hardware's binary floating point. For example, you could use the decimal module for this with the ROUND_HALF_UP rounding mode.
For example,
>>> import decimal
>>> from decimal import Decimal as D
>>> x = D("33.385")
>>> x
Decimal('33.385')
>>> twodigits = D("0.01")
>>> x.quantize(twodigits) # nearest/even is the default
Decimal('33.38')
>>> x.quantize(twodigits, rounding=decimal.ROUND_HALF_EVEN) # same thing
Decimal('33.38')
>>> x.quantize(twodigits, rounding=decimal.ROUND_HALF_UP) # what you want
Decimal('33.39')
>>> float(_) # back to binary float
33.39
>>> D(_) # but what the binary 33.39 _really_ is
Decimal('33.3900000000000005684341886080801486968994140625')
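If you need this in more than one place, you can wrap the pattern in a small helper. This is only a sketch; the function name and the str() conversion are my own choices, not part of the answer above. Converting through str() means you quantize the decimal literal you typed rather than its binary approximation:

import decimal

def round_half_up(value, ndigits=2):
    # Quantize the decimal text of the number, not the binary float behind it.
    exp = decimal.Decimal(10) ** -ndigits
    return float(decimal.Decimal(str(value)).quantize(exp, rounding=decimal.ROUND_HALF_UP))

print(round_half_up(33.385))  # 33.39
print(round_half_up(2.675))   # 2.68

Note that the final float() converts back to a binary float, so the usual caveats apply to anything you do with the result afterwards.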
round() does round to the number of decimal places you ask for, but it operates on the value the computer actually stores. Floating-point numbers cannot represent every decimal exactly in the available bits, so the interpreter shows you a convenient short form rather than the real stored number. You can see the real number with the decimal library, and that shows why numbers ending in 5 sometimes round up and sometimes round down.
If you want to round down then you can use the math module to do it easily.
import math
math.floor(33.385 * 100) / 100
And if you want to round up then you can do the same with math.ceil
import math
math.ceil(33.385 * 100) / 100
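If you need this at other precisions, the same idea generalizes. The helper names below are my own, not a standard API, and the scaled value is still a binary float, so edge cases are still possible:

import math

def round_down(x, ndigits=2):
    # Scale up, drop toward negative infinity, scale back down.
    factor = 10 ** ndigits
    return math.floor(x * factor) / factor

def round_up(x, ndigits=2):
    # Scale up, push toward positive infinity, scale back down.
    factor = 10 ** ndigits
    return math.ceil(x * factor) / factor

print(round_down(33.385))  # 33.38
print(round_up(33.385))    # 33.39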
Or, if you prefer, you can still use the round() function with a small offset:
Round up:
decimal_point = 2
change = 0.3 / 10**decimal_point
round(33.385 + change, decimal_point)
Round down:
decimal_point = 2
change = 0.3 / 10**decimal_point
round(33.385 - change, decimal_point)
This is just a result of floating-point approximation. Computers work in binary, and not every decimal value has an exact binary representation.
A famous example:
print(0.1 + 0.2 == 0.3)
False
Wonder why?
The sum is actually 0.30000000000000004, because 0.2 in binary is the repeating fraction 0.001100110011...
You can use #Rfroes87's suggestion: scale your numbers up to integers and round to the nearest integer.
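To see for yourself what is actually stored behind those literals, and to compare floats safely, something like this works (both modules are in the standard library; the printed digits are truncated here):

import decimal
import math

print(decimal.Decimal(0.1))          # 0.1000000000000000055511151231257827...
print(decimal.Decimal(0.1 + 0.2))    # 0.3000000000000000444089209850062616...
print(math.isclose(0.1 + 0.2, 0.3))  # True: compare with a tolerance instead of ==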
You can do the following:
import math
math.ceil(33.385 * 100.0) / 100.0
Reference: Round up to Second Decimal Place in Python
Related
This question already has answers here:
Is floating point math broken?
(31 answers)
Is floating point arbitrary precision available?
(5 answers)
Closed 7 years ago.
I don't know if this is an obvious bug, but while running a Python script for varying the parameters of a simulation, I realized the results with delta = 0.29 and delta = 0.58 were missing. On investigation, I noticed that the following Python code:
for i_delta in range(0, 101, 1):
    delta = float(i_delta) / 100
    (...)
    filename = 'foo' + str(int(delta * 100)) + '.dat'
generated identical files for delta = 0.28 and 0.29, and likewise for .57 and .58, the reason being that Python returns float(29)/100 as 0.28999999999999998. But it isn't a systematic error, in the sense that it doesn't happen for every integer. So I created the following Python script:
import sys

n = int(sys.argv[1])
for i in range(0, n + 1):
    a = int(100 * (float(i) / 100))
    if i != a:
        print(i, a)
And I can't see any pattern in the numbers for which this rounding error happens. Why does this happen with those particular numbers?
Any number that can't be written as a finite sum of powers of two (within the precision available) can't be represented exactly as a floating-point number; it has to be approximated. Sometimes the closest approximation is less than the actual number.
Read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
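For the specific numbers in the question, a short check (a sketch, not part of the original script) shows where the off-by-one comes from and why rounding instead of truncating fixes it:

import decimal

delta = float(29) / 100
print(decimal.Decimal(delta))  # exact stored value, slightly below 0.29
print(int(delta * 100))        # 28, because int() truncates toward zero
print(round(delta * 100))      # 29, round to the nearest integer instead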
It's a very well-known consequence of how floating-point numbers work. If you want decimal arithmetic rather than floating-point arithmetic, there are libraries for that.
E.g.,
>>> from decimal import Decimal
>>> Decimal(29)/Decimal(100)
Decimal('0.29')
>>> Decimal('0.29')*100
Decimal('29')
>>> int(Decimal('29'))
29
In general, Decimal is probably overkill, and it will still have rounding errors in the rare cases where a number has no finite decimal representation (for example, any fraction whose denominator, in lowest terms, has a prime factor other than 2 or 5, the prime factors of base 10). For example:
>>> s = Decimal(7)
>>> Decimal(1)/s/s/s/s/s/s/s*s*s*s*s*s*s*s
Decimal('0.9999999999999999999999999996')
>>> int(Decimal('0.9999999999999999999999999996'))
0
So it's best to round before casting a float to an int, unless you actually want floor behavior.
>>> int(1.9999)
1
>>> int(round(1.999))
2
Another alternative is the Fraction class from the fractions module, which doesn't approximate at all. (It just keeps adding, subtracting, and multiplying the integer numerators and denominators as necessary.)
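For instance, redoing the delta computation with Fraction keeps everything exact (a small illustration, not from the original answer):

from fractions import Fraction

delta = Fraction(29, 100)   # exactly 29/100, no binary approximation
print(int(delta * 100))     # 29
print(Fraction(1, 3) * 3)   # 1, exact rational arithmetic, unlike the Decimal case above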
This question already has answers here:
Why 0.2 is not equal to 0.2 when using the decimal method?
(2 answers)
Closed 1 year ago.
I'm reading up on the Python Decimal module. I have a need to make a large number of precise calculations, often with lots of decimal places, where being off by a small amount adds up over time. Enter the Decimal library.
Step 1: Read the intro to the decimal module:
Decimal numbers can be represented exactly. In contrast, numbers like 1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as 3.3000000000000003 as it does with binary floating point.
Step 2: Plug a decimal into Python. The result seems to be imprecise, off by a margin very similar to the float calculation.
>>> from decimal import *
>>> 1.1 + 2.2
3.3000000000000003
>>> Decimal(3.3)
Decimal('3.29999999999999982236431605997495353221893310546875')
What's going on?
Per the documentation:
Construction from an integer or a float performs an exact conversion of the value of that integer or float.
The exact value of the float literal 3.3 is not 3.3 = 33/10 but the binary approximation 3715469692580659 / 2**50, whose exact decimal expansion is what you see above. If this is not what you want, pass a str instead of a float to the constructor.
>>> from decimal import *
>>> Decimal(3.3)
Decimal('3.29999999999999982236431605997495353221893310546875')
>>> Decimal('3.3')
Decimal('3.3')
Also remember that while Decimal represents base-ten fractions like 1/10, 1/100, or 1/1000 exactly, other fractions are still approximated (albeit to more precision than float).
>>> Decimal(1) / Decimal(3)
Decimal('0.3333333333333333333333333333')
>>> _ * 3
Decimal('0.9999999999999999999999999999')
If this is an issue for you, then use the Fraction class instead of Decimal.
>>> from fractions import *
>>> Fraction(1) / Fraction(3)
Fraction(1, 3)
>>> _ * 3
Fraction(1, 1)
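As an aside, Decimal's working precision is configurable if the default 28 significant digits are not enough. A brief sketch using the standard localcontext API:

from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 50                   # 50 significant digits for this block only
    print(Decimal(1) / Decimal(3))  # fifty 3s, still an approximation of 1/3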