Determine how many decimal digits of a float are precise - python

The error below occurs on the 14th decimal:
>>> 1001*.2
200.20000000000002
Here* the error occurs on the 18th decimal digit:
>>> from decimal import Decimal
>>> Decimal.from_float(.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
# ^
# |_ here
*Note: I used Decimal here since >>> 0.1 is displayed as 0.1 in the console, but I think this is related to how it's printed, not how it's stored.
Questions:
Is there a way to determine on which exactly decimal digit the error will occur?
Is there a difference between Python 2 and Python 3?

If we assume that the size of the widget is stored exactly, then there are 2 sources of error: the conversion of size_hint from decimal -> binary, and the multiplication. In Python, these should both be correctly rounded to nearest, so each should have a relative error of at most half an ulp (unit in the last place). Since the second operation is a multiplication, we can just add the bounds to get a total relative error bounded by 1 ulp, or 2^-52.
Converting to decimal:
>>> import math
>>> math.trunc(math.log10(2.0**-52))
-15
This means you should be accurate to 15 significant figures.
There shouldn't be any difference between Python 2 and 3: Python has long been fairly strict about floating-point behaviour; the only change I'm aware of is the behaviour of the round function, which isn't used here.
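A quick way to confirm that figure without the logarithm (a minimal sketch of mine, not part of the original answer): sys.float_info describes the C double behind Python floats, and for IEEE 754 doubles its dig field is exactly the 15 derived above.
import sys

# dig: decimal digits that can always be represented faithfully in a float
# mant_dig: bits in the significand (53 for IEEE 754 doubles)
print(sys.float_info.dig)       # 15
print(sys.float_info.mant_dig)  # 53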

To answer the decimal to double-precision floating-point conversion part of your question...
The conversion of decimal fractions between 0.0 and 1.0 will be good to 15-16 decimal digits. (Note: you start counting at the first non-zero digit after the point.)
0.1 = 0.1000000000000000055511151231257827021181583404541015625 is good to 16 digits (rounded to 17 it is 0.10000000000000001; rounded to 16 it is 0.1).
0.2 = 0.200000000000000011102230246251565404236316680908203125 is also good to 16 digits.
(An example only good to 15 digits:
0.81 = 0.810000000000000053290705182007513940334320068359375)
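If you want to count those digits programmatically, here is a small sketch (good_digits is an illustrative helper of mine, not from the answer above): it rounds both the literal and the exact stored value to ever more significant digits until they disagree.
from decimal import Decimal, Context

def good_digits(text):
    """Significant digits of float(text) that still match the decimal literal."""
    exact = Decimal(text)
    stored = Decimal(float(text))  # exact decimal expansion of the double
    n = 0
    while Context(prec=n + 1).plus(stored) == Context(prec=n + 1).plus(exact):
        n += 1
    return n

for t in ["0.1", "0.2", "0.81"]:
    print(t, good_digits(t))   # 0.1 -> 16, 0.2 -> 16, 0.81 -> 15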

I'd recommend you give PEP 485 a read.
Using the == operator to compare floating-point values is not the right way to go; instead, consider using math.isclose or cmath.isclose. Here's a little example using your values:
try:
    from math import isclose  # Python 3.5+
    v1 = 101 * 1 / 5
    v2 = 101 * (1 / 5)
except ImportError:
    # Python 2: math.isclose doesn't exist, and / on ints truncates,
    # hence the explicit float casts and the fallback from the docs.
    v1 = float(101) * float(1) / float(5)
    v2 = float(101) * (float(1) / float(5))

    def isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
        return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

print("v1 == v2: {0}".format(v1 == v2))
print("isclose(v1, v2): {0}".format(isclose(v1, v2)))
As you can see, in Python 2.x I explicitly cast to float and define the fallback function from the documentation, while in Python 3.x I just use your values directly with the isclose provided by the math module.
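One caveat worth knowing (my addition, with arbitrary example values): rel_tol scales with the magnitudes of the operands, so comparisons against exactly 0.0 need abs_tol.
from math import isclose

print(0.1 + 0.2 == 0.3)                    # False
print(isclose(0.1 + 0.2, 0.3))             # True (default rel_tol=1e-09)
print(isclose(1e-12, 0.0))                 # False: the relative test vanishes at 0.0
print(isclose(1e-12, 0.0, abs_tol=1e-9))   # True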

Related

Rounding decimal place with 5 in the last digit [duplicate]

I want to round the number below to two decimal places. The result should be 33.39, but Python gives 33.38, which I assumed was because the 5 rounds to even and hence leaves the 8:
round(33.385, 2)
This actually has nothing to do with round-to-nearest/even. 33.385 is a decimal number, but is represented in your hardware as an approximation in binary floating point. The decimal module can show you the exact decimal value of that binary approximation:
>>> import decimal
>>> decimal.Decimal(33.385)
Decimal('33.38499999999999801048033987171947956085205078125')
That's why it rounds to 33.38: the exact value stored is slightly closer to 33.38 than to 33.39.
If you need exact decimal results, don't use your hardware's binary floating point. For example, you could use the decimal module for this with the ROUND_HALF_UP rounding mode.
For example,
>>> import decimal
>>> from decimal import Decimal as D
>>> x = D("33.385")
>>> x
Decimal('33.385')
>>> twodigits = D("0.01")
>>> x.quantize(twodigits) # nearest/even is the default
Decimal('33.38')
>>> x.quantize(twodigits, rounding=decimal.ROUND_HALF_EVEN) # same thing
Decimal('33.38')
>>> x.quantize(twodigits, rounding=decimal.ROUND_HALF_UP) # what you want
Decimal('33.39')
>>> float(_) # back to binary float
33.39
>>> D(_) # but what the binary 33.39 _really_ is
Decimal('33.3900000000000005684341886080801486968994140625')
Actually, round() does round to the nearest value at the decimal place you specify. The apparent inconsistencies arise because computers can't represent most decimal fractions exactly in the available bits, so the language shows you a convenient approximation rather than the number really stored. You can see the real number by using the decimal library; that also shows you why numbers that end in 5 sometimes round up and sometimes down.
If you want to round down then you can use the math module to do it easily.
import math
math.floor(33.385 * 100) / 100
And if you want to round up then you can do the same with math.ceil
import math
math.ceil(33.385 * 100) / 100
Or, if you want, you can still use the round() function, but nudge the input a bit:
Round up:
decimal_point = 2
change = 0.3 / 10**decimal_point
round(33.385 + change, decimal_point)
Round down:
decimal_point = 2
change = 0.3 / 10**decimal_point
round(33.385 - change, decimal_point)
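The floor/ceil recipes above generalize to any number of places; here is my own packaging of them (with the usual caveat that the scaled product x * factor is itself a float, so a value that looks like an exact tie may already sit slightly below it):
import math

def round_down(x, digits=2):
    # Scale, floor, unscale.
    factor = 10 ** digits
    return math.floor(x * factor) / factor

def round_up(x, digits=2):
    factor = 10 ** digits
    return math.ceil(x * factor) / factor

print(round_down(33.385))  # 33.38
print(round_up(33.385))    # 33.39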
This is just a result of floating-point approximation by computers.
Computers work in binary, and not all floating-point values have an exact binary representation.
A famous example:
print(0.1 + 0.2 == 0.3)
False
Wonder why?
The result is 0.30000000000000004, because 0.2 in binary is the repeating fraction 0.001100110011...
Use @Rfroes87's suggestion to scale your numbers up to integer precision and then round off to the nearest integer.
You can do the following:
import math
math.ceil(33.385 * 100.0) / 100.0
Reference: Round up to Second Decimal Place in Python

Round python decimal to nearest 0.05

I'm trying to round money numbers in Decimal to the nearest 0.05. Right now, I'm doing this:
def round_down(amount):
    amount *= 100
    amount = (amount - amount % 5) / Decimal(100)
    return Decimal(amount)

def round_up(amount):
    amount = int(math.ceil(float(100 * amount) / 5)) * 5 / Decimal(100)
    return Decimal(amount)
Is there any way I can do this more elegantly without dealing with floats using python Decimals (using quantize perhaps)?
With floats, simply use round(x * 2, 1) / 2. This doesn't give control over the rounding direction, though.
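For example (my quick demo, reusing the values from the Decimal example below; note that round() goes to nearest, while the ROUND_UP mode below rounds upward):
# Nearest 0.05 with plain floats: scale so steps of 0.05 become steps of 0.1.
for x in [3.426, 3.456]:
    print(x, round(x * 2, 1) / 2)   # 3.426 -> 3.45, 3.456 -> 3.45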
Using Decimal.quantize you also get complete control over the type and direction of rounding (Python 3.5.1):
>>> from decimal import Decimal, ROUND_UP
>>> x = Decimal("3.426")
>>> (x * 2).quantize(Decimal('.1'), rounding=ROUND_UP) / 2
Decimal('3.45')
>>> x = Decimal("3.456")
>>> (x * 2).quantize(Decimal('.1'), rounding=ROUND_UP) / 2
Decimal('3.5')
A more generic solution for any rounding base:
from decimal import ROUND_DOWN

def round_decimal(decimal_number, base=1, rounding=ROUND_DOWN):
    """
    Round decimal number to the nearest base.

    :param decimal_number: decimal number to round to the nearest base
    :type decimal_number: Decimal
    :param base: rounding base, e.g. 5, Decimal('0.05')
    :type base: int or Decimal
    :param rounding: Decimal rounding type
    :rtype: Decimal
    """
    return base * (decimal_number / base).quantize(1, rounding=rounding)
Examples:
>>> from decimal import Decimal, ROUND_UP
>>> round_decimal(Decimal('123.34'), base=5)
Decimal('120')
>>> round_decimal(Decimal('123.34'), base=6, rounding=ROUND_UP)
Decimal('126')
>>> round_decimal(Decimal('123.34'), base=Decimal('0.05'))
Decimal('123.30')
>>> round_decimal(Decimal('123.34'), base=Decimal('0.5'), rounding=ROUND_UP)
Decimal('123.5')
First note this problem (unexpected rounding down) only sometimes occurs when the digit immediately inferior to (i.e. to the right of) the digit you're rounding to is a 5. For example:
>>> round(1.0005,3)
1.0
>>> round(2.0005,3)
2.001
>>> round(3.0005,3)
3.001
>>> round(4.0005,3)
4.0
>>> round(1.005,2)
1.0
>>> round(5.005,2)
5.0
>>> round(6.005,2)
6.0
>>> round(7.005,2)
7.0
>>> round(3.005,2)
3.0
>>> round(8.005,2)
8.01
But there's an easy solution I've found that seems to always work, and which doesn't rely on importing additional libraries: add 1e-X, where X is one more than the length of the number string you're calling round on.
>>> round(0.075,2)
0.07
>>> round(0.075+10**(-2*6),2)
0.08
Aha! So based on this we can make a handy wrapper function, which is standalone and needs no additional import calls...
def roundTraditional(val, digits):
    return round(val + 10**(-len(str(val)) - 1), digits)
Basically this adds a value guaranteed to be smaller than the least significant digit of the string you're calling round on. By adding that small quantity, round's behavior is preserved in most cases, while ensuring that if the digit below the one being rounded to is a 5 it rounds up, and if it is a 4 it rounds down.
The choice of 10**(-len(val)-1) was deliberate: it is the largest small number you can add to force the shift, while also ensuring that the value you add never changes the rounding even if the decimal . is missing. I could use just 10**(-len(val)) with a conditional if (val>1) to subtract 1 more, but it's simpler to always subtract the 1, as that barely changes the applicable range of decimal numbers this workaround can handle. This approach will fail if your values reach the limits of the float type, but for nearly the entire range of valid decimal values it should work.
You can also use the decimal library to accomplish this, but the wrapper I propose is simpler and may be preferred in some cases.
Edit: Thanks Blckknght for pointing out that the 5 fringe case occurs only for certain values here.
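For comparison, here is a sketch of the decimal-library route mentioned above (round_half_up is my own wrapper, not a library function):
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(val, digits):
    # str(val) yields the shortest repr ('0.075', not the exact binary
    # expansion), and HALF_UP then breaks decimal ties away from zero.
    exp = Decimal(1).scaleb(-digits)   # Decimal('0.01') for digits=2
    return float(Decimal(str(val)).quantize(exp, rounding=ROUND_HALF_UP))

print(round_half_up(0.075, 2))   # 0.08
print(round_half_up(33.385, 2))  # 33.39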

why 1 // 0.05 results in 19.0 in python?

I'm new to Python, and I found a confusing result when using Python 3.5.1 on my Mac. I simply ran this command in my terminal:
1 // 0.05
However, it printed 19.0 on my screen. From my point of view, it should be 20. Can someone explain what's happening here? I already know that // is similar to the math.floor() function, but I still can't get my head around this.
Because the Python floating-point literal 0.05 represents a number very slightly larger than the mathematical value 0.05.
>>> '%.60f' % 0.05
'0.050000000000000002775557561562891351059079170227050781250000'
// is floor division, meaning that the result is the largest integer n such that n times the divisor is less than or equal to the dividend. Since 20 times 0.05000000000000000277555756156289135105907917022705078125 is larger than 1, this means the correct result is 19.
As for why the Python literal 0.05 doesn't represent the number 0.05, as well as many other things about floating point, see What Every Computer Scientist Should Know About Floating-Point Arithmetic
0.05 is not exactly representable in floating point. "%0.20f" % 0.05 shows that 0.05 is stored as a value very slightly greater than the exact value:
>>> print "%0.20f" % 0.05
0.05000000000000000278
On the other hand 1/0.05 does appear to be exactly 20:
>>> print "%0.20f" % (1/0.05)
20.00000000000000000000
However, all floating-point values are rounded to double precision when stored, while calculations may be carried out to a higher precision. In this case it seems the floor operation performed by 1//0.05 is done at full internal precision, hence it is rounded down.
As the previous answerers have correctly pointed out, the fraction 0.05 = 1/20 cannot be exactly represented with a finite number of base-two digits. It works out to the repeating fraction 0.0000 1100 1100 1100... (much like 1/3 = 0.333... in familiar base-ten).
But this is not quite a complete answer to your question, because there's another bit of weirdness going on here:
>>> 1 / 0.05
20.0
>>> 1 // 0.05
19.0
Using the “true division” operator / happens to give the expected answer 20.0. You got lucky here: The rounding error in the division exactly cancels out the error in representing the value 0.05 itself.
But how come 1 // 0.05 returns 19? Isn't a // b supposed to be the same as math.floor(a / b)? Why the inconsistency between / and //?
Note that the divmod function is consistent with the // operator:
>>> divmod(1, 0.05)
(19.0, 0.04999999999999995)
This behavior can be explained by redoing the floating-point division with exact rational arithmetic. When you write the literal 0.05 in Python (on an IEEE 754-compliant platform), the actual value represented is 3602879701896397 / 72057594037927936 = 0.05000000000000000277555756156289135105907917022705078125. This value happens to be slightly more than the intended 0.05, which means that its reciprocal will be slightly less.
To be precise, 72057594037927936 / 3602879701896397 = 19.999999999999998889776975374843521206126552300723564152465244707437044687...
So, // and divmod see an integer quotient of 19. The remainder works out to 0.04999999999999994726440633030506432987749576568603515625, which is rounded for display as 0.04999999999999995. So, the divmod answer above is in fact good to 53-bit accuracy, given the original incorrect value of 0.05.
But what about /? Well, the true quotient 72057594037927936 / 3602879701896397 isn't representable as a float, so it must be rounded, either down to 20-2**-48 (an error of about 2.44e-15) or up to 20.0 (an error of about 1.11e-15). And Python correctly picks the more accurate choice, 20.0.
So, it seems that Python's floating-point division is internally done with high enough precision to know that 1 / 0.05 (that's the float literal 0.05, not the exact decimal fraction 0.05), is actually less than 20, but the float type in itself is incapable of representing the difference.
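You can replay this rational analysis yourself with the fractions module, which converts a float to its exact ratio (a quick check of mine, not part of the original answer):
from fractions import Fraction

f = Fraction(0.05)    # the exact rational value of the binary double
print(f)              # 3602879701896397/72057594037927936
print(Fraction(1) / f < 20)    # True: the true quotient falls just short of 20
print(float(Fraction(1) / f))  # 20.0 once rounded back to a double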
At this point you may be thinking “So what? I don't care that Python is giving a correct reciprocal to an incorrect value. I want to know how to get the correct value in the first place.” And the answer to that is either:
decimal.Decimal('0.05') (and don't forget the quotes!)
fractions.Fraction('0.05') (Of course, you may also use the numerator-denominator arguments as Fraction(1, 20), which is useful if you need to deal with non-decimal fractions like 1/3.)
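Both remedies give the intuitive quotient (my quick demonstration):
from decimal import Decimal
from fractions import Fraction

print(1 // 0.05)              # 19.0: binary 0.05 is slightly more than 1/20
print(1 // Decimal('0.05'))   # Decimal('20')
print(1 // Fraction('0.05'))  # 20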

Why does multiplying a float by an int give a result unlike what I expect in Python? [duplicate]

I don't know if this is an obvious bug, but while running a Python script to vary the parameters of a simulation, I realized the results with delta = 0.29 and delta = 0.58 were missing. On investigation, I noticed that the following Python code:
for i_delta in range(0, 101, 1):
    delta = float(i_delta) / 100
    (...)
    filename = 'foo' + str(int(delta * 100)) + '.dat'
generated identical files for delta = 0.28 and 0.29, and the same for .57 and .58, the reason being that Python returns float(29)/100 as 0.28999999999999998. But it isn't a systematic error, in the sense that it doesn't happen to every integer. So I created the following Python script:
import sys

n = int(sys.argv[1])
for i in range(0, n + 1):
    a = int(100 * (float(i) / 100))
    if i != a:
        print i, a
And I can't see any pattern in the numbers for which this rounding error happens. Why does this happen with those particular numbers?
Any number that can't be built from exact powers of two can't be represented exactly as a floating point number; it needs to be approximated. Sometimes the closest approximation will be less than the actual number.
Read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
It's very well known, due to the nature of floating-point numbers.
If you want to do decimal arithmetic rather than floating-point arithmetic, there are libraries to do this.
E.g.,
>>> from decimal import Decimal
>>> Decimal(29)/Decimal(100)
Decimal('0.29')
>>> Decimal('0.29')*100
Decimal('29')
>>> int(Decimal('29'))
29
In general, decimal is probably overkill, and it will still have rounding errors in rare cases, namely whenever the number does not have a finite decimal representation (for example, any fraction whose lowest-terms denominator has a prime factor other than 2 or 5, the prime factors of the decimal base 10). For example:
>>> s = Decimal(7)
>>> Decimal(1)/s/s/s/s/s/s/s*s*s*s*s*s*s*s
Decimal('0.9999999999999999999999999996')
>>> int(Decimal('0.9999999999999999999999999996'))
0
So it's best to always round before casting floating-point values to ints, unless you want a floor function.
>>> int(1.9999)
1
>>> int(round(1.999))
2
Another alternative is to use the Fraction class from the fractions library, which doesn't approximate. (It just keeps adding/subtracting and multiplying the integer numerators and denominators as necessary.)
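For instance, redoing the question's loop with Fraction keeps every delta exact (a minimal sketch, my adaptation of the code above):
from fractions import Fraction

for i_delta in (28, 29, 57, 58):
    delta = Fraction(i_delta, 100)   # exactly i_delta/100, no binary rounding
    print(int(delta * 100))          # 28, 29, 57, 58: no collisions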

Python float to Decimal conversion

Python Decimal doesn't support being constructed from float; it expects that you have to convert float to a string first.
This is very inconvenient since standard string formatters for float require that you specify the number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places, you need to format as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before the decimal point (Decimal("%.15f" % 100000.3) == Decimal('100000.300000000002910')).
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered, perhaps limiting number of significant digits that can be supported?
Python <2.7
"%.15g" % f
Or in Python 3.0:
format(f, ".15g")
Python 2.7+, 3.2+
Just pass the float to Decimal constructor directly, like this:
from decimal import Decimal
Decimal(f)
I suggest this
>>> a = 2.111111
>>> a
2.1111110000000002
>>> str(a)
'2.111111'
>>> decimal.Decimal(str(a))
Decimal('2.111111')
You said in your question:
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered
But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.
You can convert and then quantize to keep 5 digits after the decimal point via:
Decimal(f).quantize(Decimal("1.00000"))
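For example, with the value used earlier in this thread:
from decimal import Decimal

f = 2.111111
print(Decimal(f).quantize(Decimal("1.00000")))  # Decimal('2.11111')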
Python does support Decimal creation from a float; you just cast it to a string first. And the precision loss doesn't come from the string conversion: the float you are converting doesn't have that kind of precision in the first place (otherwise you wouldn't need Decimal).
I think the confusion here is that we can create float literals in decimal format, but as soon as the interpreter consumes that literal, the inner representation becomes a binary floating-point number.
The "official" string representation of a float is given by the repr() built-in:
>>> repr(1.5)
'1.5'
>>> repr(12345.678901234567890123456789)
'12345.678901234567'
You can use repr() instead of a formatted string, the result won't contain any unnecessary garbage.
When you say "preserving value as the user has entered", why not just store the user-entered value as a string, and pass that to the Decimal constructor?
The main answer is slightly misleading. The g format ignores any leading zeroes after the decimal point, so format(0.012345, ".2g") returns 0.012 - three decimal places. If you need a hard limit on the number of decimal places, use the f formatter: format(0.012345, ".2f") == 0.01
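Concretely:
f = 0.012345
print(format(f, ".2g"))  # 0.012: two significant digits
print(format(f, ".2f"))  # 0.01: two digits after the decimal point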
I've come across the same problem / question today, and I'm not completely satisfied with any of the answers given so far. The core of the question seems to be:
Can someone suggest a good way to convert from float to Decimal [...] perhaps limiting number of significant digits that can be supported?
Short answer / solution: Yes.
def ftod(val, prec=15):
    return Decimal(val).quantize(Decimal(10)**-prec)
Long Answer:
As nosklo pointed out it is not possible to preserve the input of the user after it has been converted to float.
It is possible though to round that value with a reasonable precision and convert it into Decimal.
In my case I only need 2 to 4 digits after the separator, but they need to be accurate. Let's consider the classic 0.1 + 0.2 == 0.3 check.
>>> 0.1 + 0.2 == 0.3
False
Now let's do this with conversion to decimal (complete example):
>>> from decimal import Decimal
>>> def ftod(val, prec=15):  # float to Decimal
...     return Decimal(val).quantize(Decimal(10)**-prec)
...
>>> ftod(0.1) + ftod(0.2) == ftod(0.3)
True
The answer by Ryabchenko Alexander was really helpful for me. It only lacks a way to dynamically set the precision, a feature I want (and maybe also need). The Decimal documentation FAQ gives an example of how to construct the required argument for quantize():
>>> Decimal(10)**-4
Decimal('0.0001')
Here's what the numbers look like printed with 18 digits after the decimal point (coming from C programming, I like the fancy Python format expressions):
>>> for x in [0.1, 0.2, 0.3, ftod(0.1), ftod(0.2), ftod(0.3)]:
...     print("{:8} {:.18f}".format(type(x).__name__+":", x))
...
float:   0.100000000000000006
float:   0.200000000000000011
float:   0.299999999999999989
Decimal: 0.100000000000000000
Decimal: 0.200000000000000000
Decimal: 0.300000000000000000
And lastly, I want to know for which precision the comparison still works:
>>> for p in [15, 16, 17]:
...     print("Rounding precision: {}. Check 0.1 + 0.2 == 0.3 is {}".format(p,
...           ftod(0.1, p) + ftod(0.2, p) == ftod(0.3, p)))
...
Rounding precision: 15. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 16. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 17. Check 0.1 + 0.2 == 0.3 is False
15 seems to be a good default for maximum precision. That should work on most systems. If you need more info, try:
>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
With float having a 53-bit mantissa on my system, I calculated the number of decimal digits:
>>> import math
>>> math.log10(2**53)
15.954589770191003
Which tells me that with 53 bits we get almost 16 digits. So 15 is fine for the precision value and should always work; 16 is error-prone and 17 definitely causes trouble (as seen above).
Anyway ... in my specific case I only need 2 to 4 digits of precision, but as a perfectionist I enjoyed investigating this :-)
Any suggestions / improvements / complaints are welcome.
The "right" way to do this was documented in 1990 by Steele and White's and
Clinger's PLDI 1990 papers.
You might also look at this SO discussion about Python Decimal, including my suggestion to try using something like frap to rationalize a float.
You can use JSON to accomplish it: json.dumps renders the float with its shortest round-tripping repr, and parse_float hands that string directly to Decimal.
import json
from decimal import Decimal
float_value = 123456.2365
decimal_value = json.loads(json.dumps(float_value), parse_float=Decimal)
Inspired by this answer, I found a workaround that allows one to shorten the construction of a Decimal from a float, bypassing (only apparently) the string step:
import decimal

class DecimalBuilder(float):
    def __or__(self, a):
        return decimal.Decimal(str(a))
>>> d = DecimalBuilder()
>>> x = d|0.1
>>> y = d|0.2
>>> x + y # works as desired
Decimal('0.3')
>>> d|0.1 + d|0.2 # does not work as desired, needs parentheses
TypeError: unsupported operand type(s) for |: 'decimal.Decimal' and 'float'
>>> (d|0.1) + (d|0.2) # works as desired
Decimal('0.3')
It's a workaround, but it certainly saves typing and it's very readable.
The question is based on the wrong assertion that 'Python Decimal doesn't support being constructed from float'.
In Python 3, the Decimal class can do it:
from decimal import *
getcontext().prec = 128  # high precision set
print(Decimal(100000.3))
# Output: 100000.300000000002910383045673370361328125 (beyond 64-bit precision)
That's the right value with all decimals included, and so there is no garbage after the 15th decimal place.
You can verify it online with an IEEE 754 converter like https://www.binaryconvert.com/convert_double.html, which gives the most accurate representation as 1.00000300000000002910383045673E5 (64-bit precision),
or directly in Python 3:
print(f'{100000.3:.128f}'.strip('0'))
# Output: 100000.300000000002910383045673370361328125
Preserving the value as the user entered it is done with string conversion:
Decimal(str(100000.3))
# Decimal('100000.3')
