I'm trying to round money numbers in Decimal to the nearest 0.05. Right now, I'm doing this:
def round_down(amount):
    amount *= 100
    amount = (amount - amount % 5) / Decimal(100)
    return Decimal(amount)

def round_up(amount):
    amount = int(math.ceil(float(100 * amount) / 5)) * 5 / Decimal(100)
    return Decimal(amount)
Is there any way I can do this more elegantly, without dealing with floats, using Python Decimals (perhaps using quantize)?
With floats, simply use round(x * 2, 1) / 2. This doesn't give control over the rounding direction, though.
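For example:
>>> round(3.426 * 2, 1) / 2
3.45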
Using Decimal.quantize you also get complete control over the type and direction of rounding (Python 3.5.1):
>>> from decimal import Decimal, ROUND_UP
>>> x = Decimal("3.426")
>>> (x * 2).quantize(Decimal('.1'), rounding=ROUND_UP) / 2
Decimal('3.45')
>>> x = Decimal("3.456")
>>> (x * 2).quantize(Decimal('.1'), rounding=ROUND_UP) / 2
Decimal('3.5')
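Applied to the original question, a minimal sketch of both helpers in one function (round_to_nickel is a hypothetical name, assuming amounts arrive as Decimal):
from decimal import Decimal, ROUND_UP, ROUND_DOWN

def round_to_nickel(amount, rounding=ROUND_DOWN):
    # Scale so the 0.05 step becomes 0.1, quantize, then scale back.
    return (amount * 2).quantize(Decimal('0.1'), rounding=rounding) / 2

>>> round_to_nickel(Decimal('3.426'))
Decimal('3.4')
>>> round_to_nickel(Decimal('3.426'), rounding=ROUND_UP)
Decimal('3.45')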
A more generic solution for any rounding base.
from decimal import ROUND_DOWN

def round_decimal(decimal_number, base=1, rounding=ROUND_DOWN):
    """
    Round decimal number to the nearest base

    :param decimal_number: decimal number to round to the nearest base
    :type decimal_number: Decimal
    :param base: rounding base, e.g. 5, Decimal('0.05')
    :type base: int or Decimal
    :param rounding: Decimal rounding type
    :rtype: Decimal
    """
    return base * (decimal_number / base).quantize(1, rounding=rounding)
Examples:
>>> from decimal import Decimal, ROUND_UP
>>> round_decimal(Decimal('123.34'), base=5)
Decimal('120')
>>> round_decimal(Decimal('123.34'), base=6, rounding=ROUND_UP)
Decimal('126')
>>> round_decimal(Decimal('123.34'), base=Decimal('0.05'))
Decimal('123.30')
>>> round_decimal(Decimal('123.34'), base=Decimal('0.5'), rounding=ROUND_UP)
Decimal('123.5')
First note this problem (unexpected rounding down) only sometimes occurs, when the digit immediately after (to the right of) the digit you're rounding to is a 5.
i.e.
>>> round(1.0005,3)
1.0
>>> round(2.0005,3)
2.001
>>> round(3.0005,3)
3.001
>>> round(4.0005,3)
4.0
>>> round(1.005,2)
1.0
>>> round(5.005,2)
5.0
>>> round(6.005,2)
6.0
>>> round(7.005,2)
7.0
>>> round(3.005,2)
3.0
>>> round(8.005,2)
8.01
But there's an easy solution I've found that seems to always work, and it doesn't rely on importing additional libraries. The solution is to add 1e-X, where X is the length of the number string you're trying to round plus 1.
>>> round(0.075,2)
0.07
>>> round(0.075 + 1e-6, 2)
0.08
Aha! So based on this we can make a handy wrapper function, which is standalone and does not need additional import calls...
def roundTraditional(val, digits):
    return round(val + 10**(-len(str(val)) - 1), digits)
Basically this adds a value guaranteed to be smaller than the least significant digit of the string you're trying to round. By adding that small quantity it preserves round's behavior in most cases, while ensuring that if the digit below the one being rounded to is a 5 it rounds up, and if it is a 4 it rounds down.
The approach of using 10**(-len(str(val)) - 1) is deliberate: it is the largest small number you can add to force the shift, while also ensuring that the value you add never changes the rounding even if the decimal point is missing. I could use just 10**(-len(str(val))) with a conditional if val > 1 to subtract 1 more, but it's simpler to always subtract the 1, as that barely changes the applicable range of decimal numbers this workaround can handle properly. This approach will fail if your value reaches the limits of the float type, but for nearly the entire range of valid decimal values it should work.
You can also use the decimal library to accomplish this, but the wrapper I propose is simpler and may be preferred in some cases.
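For comparison, a minimal sketch of the decimal-based route (round_half_up is a hypothetical helper; it goes through str() so the Decimal sees the literal as written):
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(val, digits):
    # Quantize to the requested number of places, always rounding ties up.
    exp = Decimal(10) ** -digits
    return float(Decimal(str(val)).quantize(exp, rounding=ROUND_HALF_UP))

>>> round_half_up(0.075, 2)
0.08
>>> round_half_up(1.005, 2)
1.01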
Edit: Thanks Blckknght for pointing out that the 5 fringe case occurs only for certain values here.
Related
I want a to be rounded to 13.95. I tried using round, but I get:
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
For the analogous issue with the standard library Decimal class, see How can I format a decimal to always show 2 decimal places?.
You are running into the old problem with floating point numbers that not all numbers can be represented exactly. The command line is just showing you the full floating point form from memory.
With floating point representation, your rounded version is the same number. Since computers are binary, they store floating point numbers as an integer and then divide it by a power of two, so 13.95 will be represented in a similar fashion to 125650429603636838/(2**53).
Double precision numbers have 53 bits (about 15-16 decimal digits) of precision, and regular single-precision floats have 24 bits (about 7-8 decimal digits) of precision. The floating point type in Python uses double precision to store the values.
For example,
>>> 125650429603636838/(2**53)
13.949999999999999
>>> 234042163/(2**24)
13.949999988079071
>>> a = 13.946
>>> print(a)
13.946
>>> print("%.2f" % a)
13.95
>>> round(a,2)
13.949999999999999
>>> print("%.2f" % round(a, 2))
13.95
>>> print("{:.2f}".format(a))
13.95
>>> print("{:.2f}".format(round(a, 2)))
13.95
>>> print("{:.15f}".format(round(a, 2)))
13.949999999999999
If you are after only two decimal places (to display a currency value, for example), then you have a couple of better choices:
Use integers and store values in cents, not dollars, and then divide by 100 to convert to dollars.
Or use a fixed point number like decimal.
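For illustration, a minimal sketch of the integer-cents option (variable names are made up):
price_cents = 1395                             # 13.95 dollars stored as integer cents
total_cents = 3 * price_cents                  # exact integer arithmetic, no drift
print("$%d.%02d" % divmod(total_cents, 100))   # prints $41.85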
There are newer format specifications, described in the String Format Specification Mini-Language.
You can do the same with:
"{:.2f}".format(13.949999999999999)
Note 1: the above returns a string. In order to get as float, simply wrap with float(...):
float("{:.2f}".format(13.949999999999999))
Note 2: wrapping with float() doesn't change anything:
>>> x = 13.949999999999999999
>>> x
13.95
>>> g = float("{:.2f}".format(x))
>>> g
13.95
>>> x == g
True
>>> h = round(x, 2)
>>> h
13.95
>>> x == h
True
The built-in round() works just fine in Python 2.7 or later.
Example:
>>> round(14.22222223, 2)
14.22
Check out the documentation.
Let me give an example in Python 3.6's f-string/template-string format, which I think is beautifully neat:
>>> f'{a:.2f}'
It works well with longer examples too, with operators and not needing parentheses:
>>> print(f'Completed in {time.time() - start:.2f}s')
I feel that the simplest approach is to use the format() function.
For example:
a = 13.949999999999999
format(a, '.2f')
'13.95'
This produces the number as a string rounded to two decimal places.
Most numbers cannot be exactly represented in floats. If you want to round the number because that's what your mathematical formula or algorithm requires, then you want to use round. If you just want to restrict the display to a certain precision, then don't even use round and just format it as that string. (If you want to display it with some alternate rounding method, and there are tons, then you need to mix the two approaches.)
>>> "%.2f" % 3.14159
'3.14'
>>> "%.2f" % 13.9499999
'13.95'
And lastly, though perhaps most importantly, if you want exact math then you don't want floats at all. The usual example is dealing with money and to store 'cents' as an integer.
Use
print "{:.2f}".format(a)
instead of
print "{0:.2f}".format(a)
Because the latter may lead to output errors when trying to output multiple variables (see comments).
Try the code below:
>>> a = 0.99334
>>> a = int((a * 100) + 0.5) / 100.0 # Adding 0.5 rounds it up
>>> print a
0.99
TLDR ;)
The rounding problem of input and output was solved definitively in Python 3.1, and the fix was backported to Python 2.7.0.
Rounded numbers can be reversibly converted between float and string back and forth:
str -> float() -> repr() -> float() ... or Decimal -> float -> str -> Decimal
>>> 0.3
0.3
>>> float(repr(0.3)) == 0.3
True
A Decimal type is not necessary for storage anymore.
Results of arithmetic operations must be rounded again because rounding errors could accumulate more inaccuracy than is possible after parsing one number. That is not fixed by the improved repr() algorithm (Python >= 3.1, >= 2.7.0):
>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1, 0.2, 0.3
(0.1, 0.2, 0.3)
The output string function str(float(...)) was rounded to 12 valid digits in Python < 2.7 and < 3.1 to prevent excessive invalid digits, similar to the unfixed repr() output. That was still insufficient after subtraction of very similar numbers, and it was over-rounded after other operations. Python 2.7 and 3.1 use the same length of str(), although the repr() is fixed. Some old versions of NumPy also produced excessive invalid digits, even with fixed Python; current NumPy is fixed. Python versions >= 3.2 produce the same results from str() and repr(), and the output of similar functions in NumPy matches as well.
Test
import random
from decimal import Decimal

for _ in range(1000000):
    x = random.random()
    assert x == float(repr(x)) == float(Decimal(repr(x)))  # reversible repr()
    assert str(x) == repr(x)
    assert len(repr(round(x, 12))) <= 14  # no excessive decimal places
Documentation
See the Python 2.7 release notes, Other Language Changes, fourth paragraph:
Conversions between floating-point numbers and strings are now correctly rounded on most platforms. These conversions occur in many different places: str() on floats and complex numbers; the float and complex constructors; numeric formatting; serializing and de-serializing floats and complex numbers using the marshal, pickle and json modules; parsing of float and imaginary literals in Python code; and Decimal-to-float conversion.
Related to this, the repr() of a floating-point number x now returns a result based on the shortest decimal string that’s guaranteed to round back to x under correct rounding (with round-half-to-even rounding mode). Previously it gave a string based on rounding x to 17 decimal digits.
The related issue
More information: The formatting of float before Python 2.7 was similar to the current numpy.float64. Both types use the same 64-bit IEEE 754 double precision with a 52-bit mantissa. A big difference is that np.float64.__repr__ was frequently formatted with excessive decimal digits so that no bit could be lost, but no valid IEEE 754 number exists between 13.949999999999999 and 13.950000000000001. The result is not nice, and the conversion repr(float(number_as_string)) is not reversible with numpy. On the other hand: float.__repr__ is formatted so that every digit is important; the sequence is without gaps and the conversion is reversible. Simply: if you have a numpy.float64 number, convert it to a normal float so it is formatted for humans, not for numeric processors; otherwise nothing more is necessary with Python 2.7+.
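A quick sketch of that conversion (this assumes numpy is installed; the exact repr that np.float64 prints depends on the numpy version):
>>> import numpy as np
>>> x = np.float64(13.95)
>>> float(x)  # a plain float formats with the shortest round-tripping repr
13.95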
Use:
float_number = 12.234325335563
round(float_number, 2)
This will return:
12.23
Explanation:
The round function takes two arguments: the number to be rounded and the number of decimal places to be returned. Here I returned two decimal places.
You can modify the output format:
>>> a = 13.95
>>> a
13.949999999999999
>>> print "%.2f" % a
13.95
With Python < 3 (e.g. 2.6 or 2.7), there are two ways to do so.
# Option one
older_method_string = "%.9f" % numvar
# Option two (note ':' before the '.9f')
newer_method_string = "{:.9f}".format(numvar)
But note that for Python 3 (e.g. 3.2 or 3.3), option two is preferred.
For more information on option two, I suggest this link on string formatting from the Python documentation.
And for more information on option one, this link will suffice and has information on the various flags.
Reference: Convert floating point number to a certain precision, and then copy to string
You can use format operator for rounding the value up to two decimal places in Python:
print(format(14.4499923, '.2f'))  # The output is 14.45
As Matt pointed out, Python 3.6 provides f-strings, and they can also use nested parameters:
value = 2.34558
precision = 2
width = 4
print(f'result: {value:{width}.{precision}f}')
which will display result: 2.35
In Python 2.7:
a = 13.949999999999999
output = float("%0.2f"%a)
print output
We have multiple options to do that:
Option 1:
x = 1.090675765757
g = float("{:.2f}".format(x))
print(g)
Option 2:
The built-in round() is available in Python 2.7 or later.
x = 1.090675765757
g = round(x, 2)
print(g)
The Python tutorial has an appendix called Floating Point Arithmetic: Issues and Limitations. Read it. It explains what is happening and why Python is doing its best. It even has an example that matches yours. Let me quote a bit:
>>> 0.1
0.10000000000000001
you may be tempted to use the round() function to chop it back to the single digit you expect. But that makes no difference:
>>> round(0.1, 1)
0.10000000000000001
The problem is that the binary floating-point value stored for "0.1" was already the best possible binary approximation to 1/10, so trying to round it again can't make it better: it was already as good as it gets.
Another consequence is that since 0.1 is not exactly 1/10, summing ten values of 0.1 may not yield exactly 1.0, either:
>>> sum = 0.0
>>> for i in range(10):
... sum += 0.1
...
>>> sum
0.99999999999999989
One alternative solution to your problems is the decimal module.
Use a combination of a Decimal object and the round() function.
Python 3.7.3
>>> from decimal import Decimal
>>> d1 = Decimal(13.949999999999999)  # define a Decimal
>>> d1
Decimal('13.949999999999999289457264239899814128875732421875')
>>> d2 = round(d1, 2) # round to 2 decimals
>>> d2
Decimal('13.95')
It's doing exactly what you told it to do and is working correctly. Read more about floating point confusion and maybe try decimal objects instead.
from decimal import Decimal

def round_float(v, ndigits=2, rt_str=False):
    d = Decimal(v)
    v_str = ("{0:.%sf}" % ndigits).format(round(d, ndigits))
    if rt_str:
        return v_str
    return Decimal(v_str)
Results:
Python 3.6.1 (default, Dec 11 2018, 17:41:10)
>>> round_float(3.1415926)
Decimal('3.14')
>>> round_float(3.1445926)
Decimal('3.14')
>>> round_float(3.1455926)
Decimal('3.15')
>>> round_float(3.1455926, rt_str=True)
'3.15'
>>> str(round_float(3.1455926))
'3.15'
The simple solution is here
value = 5.34343
rounded_value = round(value, 2) # 5.34
Use a lambda function like this (note that //1 truncates, so this chops digits off rather than rounding them):
arred = lambda x, n: x * (10**n) // 1 / (10**n)
This way you could just do:
arred(3.141591657, 2)
and get
3.14
orig_float = 232569 / 16000.0
# 14.5355625
short_float = float("{:.2f}".format(orig_float))
# 14.54
For fixing the floating point in dynamically typed languages such as Python and JavaScript, I use this technique:
# For example:
a = 70000
b = 0.14
c = a * b
print c  # Prints 9800.000000000002
# Try to fix
c = int(c * 10000) / 10000.0
print c  # Prints 9800.0
You can also use Decimal as follows:
from decimal import *
getcontext().prec = 6
Decimal(1) / Decimal(7)
# Results in 6 precision -> Decimal('0.142857')
getcontext().prec = 28
Decimal(1) / Decimal(7)
# Results in 28 precision -> Decimal('0.1428571428571428571428571429')
It's simple: use the decimal module for fast, correctly rounded decimal floating-point arithmetic:
from decimal import Decimal

d = Decimal(10000000.0000009)
To achieve rounding:
d.quantize(Decimal('0.01'))
will result in Decimal('10000000.00').
Make the above DRY:
def round_decimal(number, exponent='0.01'):
    decimal_value = Decimal(number)
    return decimal_value.quantize(Decimal(exponent))
or
def round_decimal(number, decimal_places=2):
    decimal_value = Decimal(number)
    return decimal_value.quantize(Decimal(10) ** -decimal_places)
PS: critique of others: formatting is not rounding.
Here is the simple solution using the format function.
float(format(num, '.2f'))
Note: We are converting the number to float, because the format method returns a string.
If you want to handle money, use the Python decimal module:
from decimal import Decimal, ROUND_HALF_UP

# 'amount' can be an integer, string, tuple, float, or another Decimal object
def to_money(amount) -> Decimal:
    money = Decimal(amount).quantize(Decimal('.00'), rounding=ROUND_HALF_UP)
    return money
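A usage sketch; 2.675 is the classic example from the Python docs, where the stored float is actually slightly below 2.675:
>>> to_money('2.675')
Decimal('2.68')
>>> to_money(2.675)  # the float is really 2.67499999999999982..., so HALF_UP sees a 4
Decimal('2.67')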
lambda x, n: int(x * 10**n + 0.5) / 10**n
has worked for me for many years in many languages (note that in Python the exponentiation operator is **; ^ is bitwise XOR).
To round a number to a resolution, the best way is the following one, which can work with any resolution (0.01 for two decimals or even other steps):
>>> import numpy as np
>>> value = 13.949999999999999
>>> resolution = 0.01
>>> newValue = int(np.round(value/resolution))*resolution
>>> print newValue
13.95
>>> resolution = 0.5
>>> newValue = int(np.round(value/resolution))*resolution
>>> print newValue
14.0
The answers I saw didn't work with the float(52.15) case. After some tests, here is the solution I'm using:
import decimal

def value_to_decimal(value, decimal_places):
    decimal.getcontext().rounding = decimal.ROUND_HALF_UP  # define rounding method
    return decimal.Decimal(str(float(value))).quantize(decimal.Decimal('1e-{}'.format(decimal_places)))
(Converting 'value' to float and then to string is very important; that way 'value' can be a float, Decimal, integer, or string!)
Hope this helps anyone.
The error below occurs on the 14th decimal:
>>> 1001*.2
200.20000000000002
Here* the error occurs on the 18th decimal digit:
>>> from decimal import Decimal
>>> Decimal.from_float(.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
# ^
# |_ here
*Note: I used Decimal.from_float since >>> 0.1 is displayed as 0.1 in the console, but I think this is related to how it's printed, not how it's stored.
Questions:
Is there a way to determine on which exactly decimal digit the error will occur?
Is there a difference between Python 2 and Python 3?
If we assume that the size of the widget is stored exactly, then there are 2 sources of error: the conversion of size_hint from decimal -> binary, and the multiplication. In Python, these should both be correctly rounded to nearest, so each should have a relative error of half an ulp (unit in the last place). Since the second operation is a multiplication, we can just add the bounds to get a total relative error bounded by 1 ulp, or 2**-53.
Converting to decimal:
>>> math.trunc(math.log10(2.0**-53))
-15
this means you should be accurate to 15 significant figures.
There shouldn't be any difference between Python 2 and 3: Python has long been fairly strict about floating-point behaviour; the only change I'm aware of is the behaviour of the round function, which isn't used here.
To answer the decimal to double-precision floating-point conversion part of your question...
The conversion of decimal fractions between 0.0 and 0.1 will be good to 15-16 decimal digits (Note: you start counting at the first non-zero digit after the point.)
0.1 = 0.1000000000000000055511151231257827021181583404541015625 is good to 16 digits (rounded to 17 it is 0.10000000000000001; rounded to 16 it is 0.1).
0.2 = 0.200000000000000011102230246251565404236316680908203125 is also good to 16 digits.
(An example only good to 15 digits:
0.81 = 0.810000000000000053290705182007513940334320068359375)
I'd recommend you take a look at PEP 485.
Using the == operator to compare floating-point values is not the right way to go. Instead, consider using math.isclose or cmath.isclose. Here's a little example using your values:
try:
    # Python 3.5+ provides math.isclose
    from math import isclose
    v1 = 101 * 1 / 5
    v2 = 101 * (1 / 5)
except ImportError:
    # Python 2.x: cast explicitly to float and use the reference
    # implementation given in the math.isclose documentation
    v1 = float(101) * float(1) / float(5)
    v2 = float(101) * (float(1) / float(5))

    def isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
        return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

print("v1==v2 {0}".format(v1 == v2))
print("isclose(v1,v2) {0}".format(isclose(v1, v2)))
As you can see, in Python 2.x I'm explicitly casting to float and using the function provided in the documentation; with Python 3.x I just use your values directly and the provided function from the math module.
I was doing some calculations and I got something like this:
newInteger = 200
newFloat = 200.0
if newInteger >= newFloat:
    print "Something"
When I run my application it doesn't print it out, but when I test it in the Python shell, it prints Something!!
So I tested this:
>>> number = 200.0000000000001
>>> number
200.0000000000001
but when the decimals go past 13, like so:
>>> number = 200.00000000000001
>>> number
200.0
Does Python hide the extra decimal digits and show the value as rounded? Knowing the real result is quite important when debugging.
Is there any way I can get the full decimals? (I did look at the Python documentation; it didn't say anything about printing the actual float value.)
It's called floating point round-off error. It has to do with how Python stores floats (in binary), which makes it impossible for floats to have 100% precision.
Here's more info in the docs.
See the decimal module if you need more precision.
If you just want to quickly compare two numbers, there are a couple of tricks for floating point comparison. One of the most popular is comparing the relative error to the machine precision (epsilon):
import sys

def float_equality(x, y, epsilon=sys.float_info.epsilon):
    return abs(x - y) <= epsilon * max(abs(x), abs(y))
But this too, is not perfect. For a discussion of the imperfections of this method and some more accurate alternatives, see this article about comparing floats.
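For instance, applied to the classic 0.1 + 0.2 case:
>>> float_equality(0.1 + 0.2, 0.3)
True
>>> 0.1 + 0.2 == 0.3
False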
Python tends to round numbers:
>>> math.pi
3.141592653589793
>>> "{0:.50f}".format(math.pi)
'3.14159265358979311599796346854418516159057617187500'
>>> "{0:.2f}".format(math.pi)
'3.14'
However, floating point numbers have a specific precision and you can't go beyond it. You can't store arbitrary numbers in floating point:
>>> number = 200.00000000000001
>>> "{:.25f}".format(number)
'200.0000000000000000000000000'
For integers, the floating point limit is 2**53:
>>> 2.0**53
9007199254740992.0
>>> 2.0**53 + 1
9007199254740992.0
>>> 2.0**53 + 2
9007199254740994.0
If you want to store arbitrary decimal numbers, you should use the decimal module:
>>> from decimal import Decimal
>>> number = Decimal("200.0000000000000000000000000000000000000000001")
>>> number
Decimal('200.0000000000000000000000000000000000000000001')
I'm working on a math program and I have a quite big problem with round. After my program does some math, it rounds the result.
Everything works fine, but if the result is 2.49999999999999992, round() returns 3.0 instead of 2.0.
How can I fix that?
Thanks.
As Pavel Anossov says in his comment, there's no such thing as 2.49999999999999992 in IEEE 754; 2.49999999999999992 == 2.5. Floats are always a concern for exact calculations because, whatever the width (32/64/128-bit float), you have a precision limit. This obviously also applies to Python floats.
There are different options to deal with that, you could e.g. use the decimal library:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
It's possible to set the precision yourself in that case. decimal is in the standard library.
There are also third-party libraries like bigfloat that you could use (I have no experience with it):
>>> from bigfloat import *
>>> sqrt(2, precision(100)) # compute sqrt(2) with 100 bits of precision
But as you can see, you always have to choose a precision. If you really don't want to lose any kind of precision, use fractions (also in the standard library):
>>> from fractions import Fraction
>>> a = Fraction(16, -10)
>>> a
Fraction(-8, 5)
>>> a / 23
Fraction(-8, 115)
>>> float(a/23)
-0.06956521739130435
The reason is that Python's float type (typically IEEE 754 double precision floating point numbers) does not have such a value as 2.49999999999999992. Floating point numbers are generally of the form mantissa*base**exponent, and in Python you can find the limits for float within sys.float_info. For starters, let's calculate how many digits the mantissa itself can hold:
>>> from sys import float_info
>>> print float_info.radix**float_info.mant_dig # How big can the mantissa get?
9007199254740992
>>> print "2.49999999999999992"
2.49999999999999992
>>> 2.49999999999999992
2.5
Clearly the number we've entered is wider. Just how close can we go before things go wrong?
>>> print 2.5*float_info.epsilon
5.55111512313e-16
e-16 here means *10**-16, so let's reformat that for comparison:
>>> print "%.17f"%(2.5*float_info.epsilon); print "2.49999999999999992"
0.00000000000000056
2.49999999999999992
This indicates that at a magnitude around 2.5, differences smaller than about 5.6e-16 (including this one of 8e-17) are lost to the storage itself. Therefore this value is 2.5, which rounds up.
We can also calculate an estimate of how many significant digits we can use:
>>> import math, sys
>>> print math.log10(sys.float_info.radix**sys.float_info.mant_dig)
15.9545897702
Very nearly, but not quite, 16. In binary the first digit will always be 1, so we have a known number of significant digits (mant_dig), but in decimal the first digit consumes between one and four bits. This means the last digit may be off by more than one. Usually we hide this by printing only with a limited precision, but it actually happens for lots of numbers:
>>> print '%f = %.17f'%(1.1, 1.1)
1.100000 = 1.10000000000000009
>>> print '%f = %.17f'%(10.1, 10.1)
10.100000 = 10.09999999999999964
Such is the inherent imprecision of floating point numbers. Types like bigfloat, decimal and fractions (thanks to David Halter for these examples) can push the limits around, but if you start looking at many digits you need to be aware of them. Also note that this is not unique to computers; an irrational number, such as pi or sqrt(2), cannot be exactly written in any integer base.
Python Decimal doesn't support being constructed from float; it expects that you have to convert float to a string first.
This is very inconvenient since standard string formatters for float require that you specify the number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places you need to format as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before the decimal point (Decimal("%.15f" % 100000.3) == Decimal('100000.300000000002910')).
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered, perhaps limiting number of significant digits that can be supported?
Python <2.7
"%.15g" % f
Or in Python 3.0:
format(f, ".15g")
Python 2.7+, 3.2+
Just pass the float to Decimal constructor directly, like this:
from decimal import Decimal
Decimal(f)
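A quick REPL check of the difference between constructing from the float directly and from its string form (the exact digits below also appear earlier in this thread):
>>> from decimal import Decimal
>>> Decimal(0.1)       # the exact binary value of the float
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal(str(0.1))  # what the user most likely typed
Decimal('0.1')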
I suggest this:
>>> a = 2.111111
>>> a
2.1111110000000002
>>> str(a)
'2.111111'
>>> decimal.Decimal(str(a))
Decimal('2.111111')
You said in your question:
Can someone suggest a good way to
convert from float to Decimal
preserving value as the user has
entered
But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.
You can convert and then quantize to keep 5 digits after the decimal point via
Decimal(x).quantize(Decimal("1.00000"))
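For example:
>>> from decimal import Decimal
>>> Decimal(0.1).quantize(Decimal('1.00000'))
Decimal('0.10000')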
Python does support creating a Decimal from a float; you just convert the float to a string first. And the precision loss doesn't occur in the string conversion: the float you are converting doesn't have that kind of precision in the first place (otherwise you wouldn't need Decimal).
I think the confusion here is that we can create float literals in decimal format, but as soon as the interpreter consumes that literal the inner representation becomes a floating point number.
The "official" string representation of a float is given by the repr() built-in:
>>> repr(1.5)
'1.5'
>>> repr(12345.678901234567890123456789)
'12345.678901234567'
You can use repr() instead of a formatted string, the result won't contain any unnecessary garbage.
When you say "preserving value as the user has entered", why not just store the user-entered value as a string, and pass that to the Decimal constructor?
The main answer is slightly misleading. The g format ignores any leading zeroes after the decimal point, so format(0.012345, ".2g") returns '0.012' - three decimal places. If you need a hard limit on the number of decimal places, use the f formatter: format(0.012345, ".2f") == '0.01'.
I've come across the same problem / question today and I'm not completely satisfied with any of the answers given so far. The core of the question seems to be:
Can someone suggest a good way to convert from float to Decimal [...] perhaps limiting number of significant digits that can be supported?
Short answer / solution: Yes.
def ftod(val, prec=15):
    return Decimal(val).quantize(Decimal(10)**-prec)
Long Answer:
As nosklo pointed out it is not possible to preserve the input of the user after it has been converted to float.
It is possible though to round that value with a reasonable precision and convert it into Decimal.
In my case I only need 2 to 4 digits after the separator, but they need to be accurate. Let's consider the classic 0.1 + 0.2 == 0.3 check.
>>> 0.1 + 0.2 == 0.3
False
Now let's do this with conversion to decimal (complete example):
>>> from decimal import Decimal
>>> def ftod(val, prec = 15):  # float to Decimal
...     return Decimal(val).quantize(Decimal(10)**-prec)
...
>>> ftod(0.1) + ftod(0.2) == ftod(0.3)
True
The answer by Ryabchenko Alexander was really helpful for me. It only lacks a way to dynamically set the precision – a feature I want (and maybe also need). The Decimal documentation FAQ gives an example on how to construct the required argument string for quantize():
>>> Decimal(10)**-4
Decimal('0.0001')
Here's how the numbers look when printed with 18 digits after the separator (coming from C programming, I like the fancy Python expressions):
>>> for x in [0.1, 0.2, 0.3, ftod(0.1), ftod(0.2), ftod(0.3)]:
...     print("{:8} {:.18f}".format(type(x).__name__+":", x))
...
float: 0.100000000000000006
float: 0.200000000000000011
float: 0.299999999999999989
Decimal: 0.100000000000000000
Decimal: 0.200000000000000000
Decimal: 0.300000000000000000
And last, I want to know for which precision the comparison still works:
>>> for p in [15, 16, 17]:
...     print("Rounding precision: {}. Check 0.1 + 0.2 == 0.3 is {}".format(p,
...         ftod(0.1, p) + ftod(0.2, p) == ftod(0.3, p)))
...
Rounding precision: 15. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 16. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 17. Check 0.1 + 0.2 == 0.3 is False
15 seems to be a good default for maximum precision. That should work on most systems. If you need more info, try:
>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
With float having 53 bits mantissa on my system, I calculated the number of decimal digits:
>>> import math
>>> math.log10(2**53)
15.954589770191003
Which tells me that with 53 bits we get almost 16 digits. So 15 is fine for the precision value and should always work. 16 is error-prone and 17 definitely causes trouble (as seen above).
Anyway ... in my specific case I only need 2 to 4 digits of precision, but as a perfectionist I enjoyed investigating this :-)
Any suggestions / improvements / complaints are welcome.
The "right" way to do this was documented in 1990 by Steele and White's and
Clinger's PLDI 1990 papers.
You might also look at this SO discussion about Python Decimal, including my suggestion to try using something like frap to rationalize a float.
You can use JSON to accomplish it
import json
from decimal import Decimal
float_value = 123456.2365
decimal_value = json.loads(json.dumps(float_value), parse_float=Decimal)
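This works because json.dumps serializes the float with its shortest round-tripping repr, which parse_float then hands to Decimal:
>>> decimal_value
Decimal('123456.2365')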
Inspired by this answer, I found a workaround that shortens the construction of a Decimal from a float by bypassing (only apparently) the string step:
import decimal
class DecimalBuilder(float):
    def __or__(self, a):
        return decimal.Decimal(str(a))
>>> d = DecimalBuilder()
>>> x = d|0.1
>>> y = d|0.2
>>> x + y # works as desired
Decimal('0.3')
>>> d|0.1 + d|0.2 # does not work as desired; needs parentheses
TypeError: unsupported operand type(s) for |: 'decimal.Decimal' and 'float'
>>> (d|0.1) + (d|0.2) # works as desired
Decimal('0.3')
It's a workaround, but it surely saves some typing and it's very readable.
The question is based on the wrong assertion that:
'Python Decimal doesn't support being constructed from float'.
In Python 3, the Decimal class can do it:
from decimal import *
getcontext().prec = 128  # high precision set
print(Decimal(100000.3))
A: 100000.300000000002910383045673370361328125 #SUCCESS (over 64 bit precision)
That's the right value with all decimals included, and so:
'there is no garbage after the 15th decimal place ...'
You can verify on line with a IEEE754 converter like:
https://www.binaryconvert.com/convert_double.html
A: Most accurate representation = 1.00000300000000002910383045673E5 (64 bit precision)
or directly in python3 :
print(f'{100000.3:.128f}'.strip('0'))
A: 100000.300000000002910383045673370361328125
Preserving the value as the user entered it is done with string conversion, as in:
Decimal(str(100000.3))
A: 100000.3