Python: Limit decimal places in a variable when displaying its value [duplicate]

I want a to be rounded to 13.95. I tried using round, but I get:
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
For the analogous issue with the standard library Decimal class, see How can I format a decimal to always show 2 decimal places?.

You are running into the old problem with floating point numbers: not all numbers can be represented exactly. The command line is just showing you the full floating point form from memory.
In floating point representation, your rounded version is the same number. Since computers are binary, they store floating point numbers as an integer divided by a power of two, so 13.95 is represented in a similar fashion to 125650429603636838/(2**53).
Double precision numbers have 53 bits (about 16 decimal digits) of precision and regular single-precision floats have 24 bits (about 7 decimal digits). The floating point type in Python uses double precision to store the values.
For example,
>>> 125650429603636838/(2**53)
13.949999999999999
>>> 234042163/(2**24)
13.949999988079071
>>> a = 13.946
>>> print(a)
13.946
>>> print("%.2f" % a)
13.95
>>> round(a,2)
13.949999999999999
>>> print("%.2f" % round(a, 2))
13.95
>>> print("{:.2f}".format(a))
13.95
>>> print("{:.2f}".format(round(a, 2)))
13.95
>>> print("{:.15f}".format(round(a, 2)))
13.949999999999999
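If you want to inspect the exact value your float actually stores, the standard library can show it. A quick sketch (the long digit string is the exact binary value behind the literal 13.95):
from decimal import Decimal

print(Decimal(13.95))   # the exact stored value:
# 13.949999999999999289457264239899814128875732421875
num, den = (13.95).as_integer_ratio()
print(num, den)         # the exact integer / power-of-two pair behind 13.95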
If you are after only two decimal places (to display a currency value, for example), then you have a couple of better choices:
Use integers and store values in cents, not dollars, and only convert to dollars for display.
Or use a fixed point type such as decimal (both options are sketched below).
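A minimal sketch of both options, using a hypothetical price of $13.95:
# Option 1: keep money as an integer number of cents; arithmetic stays exact.
price_cents = 1395
total_cents = price_cents * 3
dollars, cents = divmod(total_cents, 100)
print("${}.{:02d}".format(dollars, cents))   # $41.85, no floats involved

# Option 2: use decimal.Decimal, constructed from a string.
from decimal import Decimal
price = Decimal("13.95")
print(price * 3)                             # 41.85, with no binary rounding error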

There are newer format specifications; see the Format Specification Mini-Language:
You can do the same as:
"{:.2f}".format(13.949999999999999)
Note 1: the above returns a string. To get it as a float, simply wrap it with float(...):
float("{:.2f}".format(13.949999999999999))
Note 2: wrapping with float() doesn't change anything:
>>> x = 13.949999999999999999
>>> x
13.95
>>> g = float("{:.2f}".format(x))
>>> g
13.95
>>> x == g
True
>>> h = round(x, 2)
>>> h
13.95
>>> x == h
True

The built-in round() works just fine in Python 2.7 or later.
Example:
>>> round(14.22222223, 2)
14.22
Check out the documentation.

Let me give an example in Python 3.6's f-string/template-string format, which I think is beautifully neat:
>>> f'{a:.2f}'
It works well with longer examples too, with operators and not needing parentheses:
>>> print(f'Completed in {time.time() - start:.2f}s')

I feel that the simplest approach is to use the format() function.
For example:
a = 13.949999999999999
format(a, '.2f')
'13.95'
This produces the number as a string, rounded to two decimal places.

Most numbers cannot be exactly represented in floats. If you want to round the number because that's what your mathematical formula or algorithm requires, then you want to use round. If you just want to restrict the display to a certain precision, then don't even use round and just format it as that string. (If you want to display it with some alternate rounding method, and there are tons, then you need to mix the two approaches.)
>>> "%.2f" % 3.14159
'3.14'
>>> "%.2f" % 13.9499999
'13.95'
And lastly, though perhaps most importantly, if you want exact math then you don't want floats at all. The usual example is dealing with money: store the amount in 'cents' as an integer.
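And if you do need a specific rounding rule before formatting (the "mix the two approaches" case above), a small sketch with the decimal module, using half-up instead of the decimal module's default half-even:
>>> from decimal import Decimal, ROUND_HALF_UP
>>> x = Decimal("13.945")
>>> x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
Decimal('13.95')
>>> x.quantize(Decimal("0.01"))   # default rounding is half-even
Decimal('13.94')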

Use
print("{:.2f}".format(a))
instead of
print("{0:.2f}".format(a))
because the latter may lead to output errors when trying to output multiple variables (see comments).
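For example, with two hypothetical values, auto-numbered fields pair up with the arguments in order, while explicit indices have to be kept in sync by hand:
>>> a, b = 13.946, 2.5
>>> print("{:.2f} {:.2f}".format(a, b))
13.95 2.50
>>> print("{0:.2f} {1:.2f}".format(a, b))
13.95 2.50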

Try the code below:
>>> a = 0.99334
>>> a = int((a * 100) + 0.5) / 100.0 # Adding 0.5 before truncating rounds half up
>>> print a
0.99

TLDR ;)
The rounding problem of input and output has been solved definitively by Python 3.1, and the fix was also backported to Python 2.7.0.
Rounded numbers can be reversibly converted between float and string back and forth:
str -> float() -> repr() -> float() ... or Decimal -> float -> str -> Decimal
>>> 0.3
0.3
>>> float(repr(0.3)) == 0.3
True
A Decimal type is not necessary for storage anymore.
Results of arithmetic operations must still be rounded, because rounding errors can accumulate more inaccuracy than is possible after parsing a single number. That is not fixed by the improved repr() algorithm (Python >= 3.1, >= 2.7.0):
>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1, 0.2, 0.3
(0.1, 0.2, 0.3)
The output of str(float(...)) was rounded to 12 valid digits in Python < 2.7 and < 3.1, to prevent excessive invalid digits similar to the unfixed repr() output. That was still insufficient after subtraction of very similar numbers, and it was rounded too much after other operations. Python 2.7 and 3.1 use the same length for str() even though repr() is fixed. Some old versions of Numpy also had excessive invalid digits, even with fixed Python; current Numpy is fixed. Python versions >= 3.2 have the same results for str() and repr(), and also for the output of similar functions in Numpy.
Test
import random
from decimal import Decimal

for _ in range(1000000):
    x = random.random()
    assert x == float(repr(x)) == float(Decimal(repr(x)))  # Reversible repr()
    assert str(x) == repr(x)
    assert len(repr(round(x, 12))) <= 14                   # no excessive decimal places
Documentation
See the Release notes Python 2.7 - Other Language Changes the fourth paragraph:
Conversions between floating-point numbers and strings are now correctly rounded on most platforms. These conversions occur in many different places: str() on floats and complex numbers; the float and complex constructors; numeric formatting; serializing and de-serializing floats and complex numbers using the marshal, pickle and json modules; parsing of float and imaginary literals in Python code; and Decimal-to-float conversion.
Related to this, the repr() of a floating-point number x now returns a result based on the shortest decimal string that’s guaranteed to round back to x under correct rounding (with round-half-to-even rounding mode). Previously it gave a string based on rounding x to 17 decimal digits.
The related issue
More information: The formatting of float before Python 2.7 was similar to the current numpy.float64. Both types use the same 64-bit IEEE 754 double precision with a 52-bit mantissa. A big difference is that np.float64.__repr__ is frequently formatted with an excessive number of decimal digits so that no bit can be lost, but no valid IEEE 754 number exists between 13.949999999999999 and 13.950000000000001. The result is not nice, and the conversion repr(float(number_as_string)) is not reversible with numpy. On the other hand, float.__repr__ is formatted so that every digit is important; the sequence is without gaps and the conversion is reversible. Simply: if you happen to have a numpy.float64 number, convert it to a normal float so that it is formatted for humans, not for numeric processors; otherwise nothing more is necessary with Python 2.7+.

Use:
float_number = 12.234325335563
round(float_number, 2)
This will return:
12.23
Explanation:
The round function takes two arguments: the number to be rounded and the number of decimal places to return. Here I returned two decimal places.

You can modify the output format:
>>> a = 13.95
>>> a
13.949999999999999
>>> print "%.2f" % a
13.95

With Python < 3 (e.g. 2.6 or 2.7), there are two ways to do so.
# Option one
older_method_string = "%.9f" % numvar
# Option two (note ':' before the '.9f')
newer_method_string = "{:.9f}".format(numvar)
But note that for Python 3 (e.g. 3.2 or 3.3), option two is preferred.
For more information on option two, I suggest this link on string formatting from the Python documentation.
And for more information on option one, this link will suffice and has information on the various flags.
Reference: Convert floating point number to a certain precision, and then copy to string

You can use the format() function to round the value to two decimal places in Python:
print(format(14.4499923, '.2f'))  # The output is 14.45

As Matt pointed out, Python 3.6 provides f-strings, and they can also use nested parameters:
value = 2.34558
precision = 2
width = 4
print(f'result: {value:{width}.{precision}f}')
which will display result: 2.35

In Python 2.7:
a = 13.949999999999999
output = float("%0.2f"%a)
print output

We have multiple options to do that:
Option 1:
x = 1.090675765757
g = float("{:.2f}".format(x))
print(g)
Option 2:
The built-in round() works in Python 2.7 or later.
x = 1.090675765757
g = round(x, 2)
print(g)

The Python tutorial has an appendix called Floating Point Arithmetic: Issues and Limitations. Read it. It explains what is happening and why Python is doing its best. It even has an example that matches yours. Let me quote a bit:
>>> 0.1
0.10000000000000001
you may be tempted to use the round() function to chop it back to the single digit you expect. But that makes no difference:
>>> round(0.1, 1)
0.10000000000000001
The problem is that the binary floating-point value stored for “0.1” was already the best possible binary approximation to 1/10, so trying to round it again can’t make it better: it was already as good as it gets.
Another consequence is that since 0.1 is not exactly 1/10, summing ten values of 0.1 may not yield exactly 1.0, either:
>>> sum = 0.0
>>> for i in range(10):
...     sum += 0.1
...
>>> sum
0.99999999999999989
One alternative solution to your problem is to use the decimal module.
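For instance, the decimal module makes the sum from the quote come out exact:
>>> from decimal import Decimal
>>> total = Decimal("0.0")
>>> for i in range(10):
...     total += Decimal("0.1")
...
>>> total
Decimal('1.0')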

Use a combination of a Decimal object and the round() function.
Python 3.7.3
>>> from decimal import Decimal
>>> d1 = Decimal(13.949999999999999)  # define a Decimal
>>> d1
Decimal('13.949999999999999289457264239899814128875732421875')
>>> d2 = round(d1, 2) # round to 2 decimals
>>> d2
Decimal('13.95')

It's doing exactly what you told it to do and is working correctly. Read more about floating point confusion and maybe try decimal objects instead.

from decimal import Decimal

def round_float(v, ndigits=2, rt_str=False):
    d = Decimal(v)
    v_str = ("{0:.%sf}" % ndigits).format(round(d, ndigits))
    if rt_str:
        return v_str
    return Decimal(v_str)
Results:
Python 3.6.1 (default, Dec 11 2018, 17:41:10)
>>> round_float(3.1415926)
Decimal('3.14')
>>> round_float(3.1445926)
Decimal('3.14')
>>> round_float(3.1455926)
Decimal('3.15')
>>> round_float(3.1455926, rt_str=True)
'3.15'
>>> str(round_float(3.1455926))
'3.15'

Here is a simple solution:
value = 5.34343
rounded_value = round(value, 2) # 5.34

Use a lambda function like this:
arred = lambda x,n : x*(10**n)//1/(10**n)
This way you could just do:
arred(3.141591657, 2)
and get
3.14

orig_float = 232569 / 16000.0
14.5355625
short_float = float("{:.2f}".format(orig_float))
14.54

For fixing floating point issues in dynamically typed languages such as Python and JavaScript, I use this technique:
# For example:
a = 70000
b = 0.14
c = a * b
print c  # Prints something like 9800.000000000002
# Try to fix
c = int(c * 10000) / 10000.0
print c  # Prints 9800.0
You can also use Decimal as follows:
from decimal import *
getcontext().prec = 6
Decimal(1) / Decimal(7)
# Results in 6 precision -> Decimal('0.142857')
getcontext().prec = 28
Decimal(1) / Decimal(7)
# Results in 28 precision -> Decimal('0.1428571428571428571428571429')

It's as simple as this: use the decimal module for fast, correctly rounded decimal floating point arithmetic:
from decimal import Decimal

d = Decimal(10000000.0000009)
To achieve rounding:
d.quantize(Decimal('0.01'))
will result in Decimal('10000000.00').
make the above DRY:
def round_decimal(number, exponent='0.01'):
    decimal_value = Decimal(number)
    return decimal_value.quantize(Decimal(exponent))
or
def round_decimal(number, decimal_places=2):
    decimal_value = Decimal(number)
    return decimal_value.quantize(Decimal(10) ** -decimal_places)
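For example, with the helpers above (and Decimal imported), you would expect:
>>> round_decimal(10000000.0000009)
Decimal('10000000.00')
>>> round_decimal(13.946, decimal_places=1)   # second variant
Decimal('13.9')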
PS: critique of others: formatting is not rounding.

Here is a simple solution using the format() function:
float(format(num, '.2f'))
Note: we are converting the result back to float, because format() returns a string.

If you want to handle money, use the Python decimal module:
from decimal import Decimal, ROUND_HALF_UP

# 'amount' can be integer, string, tuple, float, or another Decimal object
def to_money(amount) -> Decimal:
    money = Decimal(amount).quantize(Decimal('.00'), rounding=ROUND_HALF_UP)
    return money
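For example, with the helper above (hypothetical amounts):
>>> to_money("33.505")
Decimal('33.51')
>>> to_money(13.949999999999999)
Decimal('13.95')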

lambda x, n: int(x*10**n + 0.5)/10**n
has worked for me for many years in many languages. (Note that in Python exponentiation is **, not ^, which is bitwise XOR.)

To round a number to a given resolution, the best way is the following, which works with any resolution (0.01 for two decimals, or even other steps):
>>> import numpy as np
>>> value = 13.949999999999999
>>> resolution = 0.01
>>> newValue = int(np.round(value/resolution))*resolution
>>> print newValue
13.95
>>> resolution = 0.5
>>> newValue = int(np.round(value/resolution))*resolution
>>> print newValue
14.0

The answers I saw didn't work with the float(52.15) case. After some tests, here is the solution I'm using:
import decimal

def value_to_decimal(value, decimal_places):
    decimal.getcontext().rounding = decimal.ROUND_HALF_UP  # define rounding method
    return decimal.Decimal(str(float(value))).quantize(decimal.Decimal('1e-{}'.format(decimal_places)))
(The conversion of 'value' to float and then to string is very important; that way, 'value' can be a float, Decimal, integer, or string!)
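With the 52.15 case mentioned above, this should give:
>>> value_to_decimal(52.15, 2)
Decimal('52.15')
>>> value_to_decimal(52.15, 1)
Decimal('52.2')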
Hope this helps anyone.

Related

How to force the number of digits in python [duplicate]

I need to print or convert a float number to a 15-decimal-place string, even if the result has many trailing 0s, e.g.:
1.6 becomes 1.6000000000000000
I tried round(6.2, 15) but it returns 6.2000000000000002, adding a rounding error.
I also saw various people online who put the float into a string and then added trailing 0's manually, but that seems bad...
What is the best way to do this?
For Python versions 2.6+ and 3.x
You can use the str.format method. Examples:
>>> print('{0:.16f}'.format(1.6))
1.6000000000000001
>>> print('{0:.15f}'.format(1.6))
1.600000000000000
Note the 1 at the end of the first example is rounding error; it happens because exact representation of the decimal number 1.6 requires an infinite number of binary digits. Since floating-point numbers have a finite number of bits, the number is rounded to a nearby, but not equal, value.
For Python versions prior to 2.6 (at least back to 2.0)
You can use the "modulo-formatting" syntax (this works for Python 2.6 and 2.7 too):
>>> print '%.16f' % 1.6
1.6000000000000001
>>> print '%.15f' % 1.6
1.600000000000000
The cleanest way in modern Python >=3.6, is to use an f-string with string formatting:
>>> var = 1.6
>>> f"{var:.15f}"
'1.600000000000000'
Floating point numbers lack precision to accurately represent "1.6" out to that many decimal places. The rounding errors are real. Your number is not actually 1.6.
Check out: http://docs.python.org/library/decimal.html
I guess this is essentially putting it in a string, but this avoids the rounding error:
import decimal

def display(x):
    digits = 15
    temp = str(decimal.Decimal(str(x) + '0' * digits))
    return temp[:temp.find('.') + digits + 1]
We can use format() to print a given number of digits after the decimal point.
Taken from http://docs.python.org/tutorial/floatingpoint.html
>>> format(math.pi, '.12g') # give 12 significant digits
'3.14159265359'
>>> format(math.pi, '.2f') # give 2 digits after the point
'3.14'

How can I do calibration for the number of digits after the decimal point in scipy basinhopping? [duplicate]

Using Python 2.7 how do I round my numbers to two decimal places rather than the 10 or so it gives?
print "financial return of outcome 1 =","$"+str(out1)
Use the built-in function round():
>>> round(1.2345,2)
1.23
>>> round(1.5145,2)
1.51
>>> round(1.679,2)
1.68
Or built-in function format():
>>> format(1.2345, '.2f')
'1.23'
>>> format(1.679, '.2f')
'1.68'
Or new style string formatting:
>>> "{:.2f}".format(1.2345)
'1.23'
>>> "{:.2f}".format(1.679)
'1.68'
Or old style string formatting:
>>> "%.2f" % (1.679)
'1.68'
help on round:
>>> print round.__doc__
round(number[, ndigits]) -> floating point number
Round a number to a given precision in decimal digits (default 0 digits).
This always returns a floating point number. Precision may be negative.
Since you're talking about financial figures, you DO NOT WANT to use floating-point arithmetic. You're better off using Decimal.
>>> from decimal import Decimal
>>> Decimal("33.505")
Decimal('33.505')
Text output formatting with new-style format() (defaults to half-even rounding):
>>> print("financial return of outcome 1 = {:.2f}".format(Decimal("33.505")))
financial return of outcome 1 = 33.50
>>> print("financial return of outcome 1 = {:.2f}".format(Decimal("33.515")))
financial return of outcome 1 = 33.52
See the differences in rounding due to floating-point imprecision:
>>> round(33.505, 2)
33.51
>>> round(Decimal("33.505"), 2) # This converts back to float (wrong)
33.51
>>> Decimal(33.505) # Don't init Decimal from floating-point
Decimal('33.50500000000000255795384873636066913604736328125')
Proper way to round financial values:
>>> Decimal("33.505").quantize(Decimal("0.01")) # Half-even rounding by default
Decimal('33.50')
It is also common to have other types of rounding in different transactions:
>>> import decimal
>>> Decimal("33.505").quantize(Decimal("0.01"), decimal.ROUND_HALF_DOWN)
Decimal('33.50')
>>> Decimal("33.505").quantize(Decimal("0.01"), decimal.ROUND_HALF_UP)
Decimal('33.51')
Remember that if you're simulating return outcome, you possibly will have to round at each interest period, since you can't pay/receive cent fractions, nor receive interest over cent fractions. For simulations it's pretty common to just use floating-point due to inherent uncertainties, but if doing so, always remember that the error is there. As such, even fixed-interest investments might differ a bit in returns because of this.
You can use str.format(), too:
>>> print "financial return of outcome 1 = {:.2f}".format(1.23456)
financial return of outcome 1 = 1.23
When working with pennies/integers, you will run into a problem with 115 (as in $1.15) and other numbers.
I had a function that would convert an Integer to a Float.
...
return float(115 * 0.01)
That worked most of the time but sometimes it would return something like 1.1500000000000001.
So I changed my function to return like this...
...
return float(format(115 * 0.01, '.2f'))
and that will return 1.15. Not '1.15' or 1.1500000000000001 (returns a float, not a string)
I'm mostly posting this so I can remember what I did in this scenario since this is the first result in google.
The best, I think, is to use the format() function:
>>> print("financial return of outcome 1 = $ " + format(str(out1), '.2f'))
// Should print: financial return of outcome 1 = $ 752.60
But I have to say: don't use round or format when working with financial values.
When we use the round() function, it will not always give correct values.
You can check it using:
round(2.735, 2) and round(2.725, 2)
Please use:
import math
num = float(input('Enter a number'))
print(math.ceil(num*100)/100)
print "financial return of outcome 1 = $%.2f" % (out1)
A rather simple workaround is to convert the float into a string first, then select the substring of the first four characters, and finally convert the substring back to float.
For example:
>>> out1 = 1.2345
>>> out1 = float(str(out1)[0:4])
>>> out1
1.23
May not be super efficient but simple and works :)
Rounding up to the next 0.05, I would do it this way:
import math

def roundup(x):
    return round(int(math.ceil(x / 0.05)) * 0.05, 2)
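For example, a quick check of the helper above:
>>> roundup(1.12)
1.15
>>> roundup(2.03)
2.05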

Python hides float decimals if there are more than 13?

I was doing some calculation and I got something like this:
newInteger = 200
newFloat = 200.0
if newInteger >= newFloat:
    print "Something"
When I ran my application it didn't print it out, but when I tested it in the Python shell, it printed Something!!
So I tested this:
>>> number = 200.0000000000001
>>> number
200.0000000000001
but when the decimals go over 13, like so:
>>> number = 200.00000000000001
>>> number
200.0
Does Python hide the decimal digits but show the value as rounded? Knowing the result is quite important when debugging.
Is there any way that I can get the full decimals? (I did look in the Python documentation; it didn't say anything about printing the actual float number.)
It's called floating point round-off error. It has to do with how Python stores floats (in binary), which makes it impossible for floats to have 100% precision.
Here's more info in the docs.
See the decimal module if you need more precision.
If you just want to quickly compare two numbers, there are a couple of tricks for floating point comparison. One of the most popular is comparing the relative error to the machine precision (epsilon):
import sys

def float_equality(x, y, epsilon=sys.float_info.epsilon):
    return abs(x - y) <= epsilon * max(abs(x), abs(y))
But this too, is not perfect. For a discussion of the imperfections of this method and some more accurate alternatives, see this article about comparing floats.
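For example:
>>> float_equality(0.1 + 0.2, 0.3)
True
>>> 0.1 + 0.2 == 0.3
False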
Python tends to round numbers:
>>> math.pi
3.141592653589793
>>> "{0:.50f}".format(math.pi)
'3.14159265358979311599796346854418516159057617187500'
>>> "{0:.2f}".format(math.pi)
'3.14'
However, floating point numbers have a specific precision and you can't go beyond it. You can't store arbitrary numbers in floating point:
>>> number = 200.00000000000001
>>> "{:.25f}".format(number)
'200.0000000000000000000000000'
For integers the floating point limit is 2**53:
>>> 2.0**53
9007199254740992.0
>>> 2.0**53 + 1
9007199254740992.0
>>> 2.0**53 + 2
9007199254740994.0
If you want to store arbitrary decimal numbers, you should use the Decimal module:
>>> from decimal import Decimal
>>> number = Decimal("200.0000000000000000000000000000000000000000001")
>>> number
Decimal('200.0000000000000000000000000000000000000000001')

How to fix floating point decimal to two places even if number is 2.00000

This is what I have:
x = 2.00001
This is what I need:
x = 2.00
I am using:
float("%.2f" % x)
But all I get is:
2
How can I limit the decimal places to two AND make sure there are always two decimal places even if they are zero?
Note: I do not want the final output to be a string.
This works:
'%.2f" % float(x)
Previously I answered with this:
How about this?
def fmt2decimals(x):
    s = str(int(x*100))
    return s[0:-2] + '.' + s[-2:]
AFAIK you can't get trailing zeros with a format specification like %.2f.
If you can use decimal (https://docs.python.org/2/library/decimal.html) instead of float:
from decimal import Decimal
Decimal('7').quantize(Decimal('.01'))
quantize() specifies where to round to.
https://docs.python.org/2/library/decimal.html#decimal.Decimal.quantize
Have you taken a look at the decimal module? It allows you to do arithmetic while maintaining the proper precision:
>>> from decimal import Decimal
>>> a = Decimal("2.00")
>>> a * 5
Decimal('10.00')
>>> b = Decimal("0.05")
>>> a * b
Decimal('0.1000')
Python also has a builtin "round" function: x = round(2.00001, 2) I believe is the command you would use.
Well, in Python, you can't really round to two trailing zeros without the result being a string. Python will usually round to the first zero because of how floating point numbers are stored. You can round to two digits if the second digit is not zero, though.
For example, this:
round(2.00001, 2)
#Output: 2.0
vs this:
round(2.00601, 2)
#Output: 2.01

Python float to Decimal conversion

Python Decimal doesn't support being constructed from float; it expects that you have to convert float to a string first.
This is very inconvenient since standard string formatters for float require that you specify the number of decimal places rather than significant digits. So if you have a number that could have as many as 15 decimal places, you need to format as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before the decimal point (Decimal("%.15f" % 100000.3) == Decimal('100000.300000000002910')).
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered, perhaps limiting number of significant digits that can be supported?
Python <2.7
"%.15g" % f
Or in Python 3.0:
format(f, ".15g")
Python 2.7+, 3.2+
Just pass the float to Decimal constructor directly, like this:
from decimal import Decimal
Decimal(f)
I suggest this
>>> a = 2.111111
>>> a
2.1111110000000002
>>> str(a)
'2.111111'
>>> decimal.Decimal(str(a))
Decimal('2.111111')
You said in your question:
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered
But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.
You can convert and then quantize to keep 5 digits after the decimal point via
Decimal(f).quantize(Decimal("1.00000"))
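For example, with the 2.111111 value from above:
>>> from decimal import Decimal
>>> Decimal(2.111111).quantize(Decimal("1.00000"))
Decimal('2.11111')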
Python does support Decimal creation from a float. You just cast it as a string first. But the precision loss doesn't occur with string conversion. The float you are converting doesn't have that kind of precision in the first place. (Otherwise you wouldn't need Decimal)
I think the confusion here is that we can create float literals in decimal format, but as soon as the interpreter consumes that literal the inner representation becomes a floating point number.
The "official" string representation of a float is given by the repr() built-in:
>>> repr(1.5)
'1.5'
>>> repr(12345.678901234567890123456789)
'12345.678901234567'
You can use repr() instead of a formatted string, the result won't contain any unnecessary garbage.
When you say "preserving value as the user has entered", why not just store the user-entered value as a string, and pass that to the Decimal constructor?
The main answer is slightly misleading. The g format ignores any leading zeroes after the decimal point, so format(0.012345, ".2g") returns '0.012' (three decimal places). If you need a hard limit on the number of decimal places, use the f formatter: format(0.012345, ".2f") == '0.01'
I've come across the same problem / question today and I'm not completely satisfied with any of the answers given so far. The core of the question seems to be:
Can someone suggest a good way to convert from float to Decimal [...] perhaps limiting number of significant digits that can be supported?
Short answer / solution: Yes.
def ftod(val, prec = 15):
    return Decimal(val).quantize(Decimal(10)**-prec)
Long Answer:
As nosklo pointed out, it is not possible to preserve the user's input after it has been converted to float.
It is possible, though, to round that value with reasonable precision and convert it into Decimal.
In my case I only need 2 to 4 digits after the separator, but they need to be accurate. Let's consider the classic 0.1 + 0.2 == 0.3 check.
>>> 0.1 + 0.2 == 0.3
False
Now let's do this with conversion to decimal (complete example):
>>> from decimal import Decimal
>>> def ftod(val, prec = 15): # float to Decimal
...     return Decimal(val).quantize(Decimal(10)**-prec)
...
>>> ftod(0.1) + ftod(0.2) == ftod(0.3)
True
The answer by Ryabchenko Alexander was really helpful for me. It only lacks a way to dynamically set the precision – a feature I want (and maybe also need). The Decimal documentation FAQ gives an example on how to construct the required argument string for quantize():
>>> Decimal(10)**-4
Decimal('0.0001')
Here's how the numbers look printed with 18 digits after the separator (coming from C programming, I like the fancy Python expressions):
>>> for x in [0.1, 0.2, 0.3, ftod(0.1), ftod(0.2), ftod(0.3)]:
... print("{:8} {:.18f}".format(type(x).__name__+":", x))
...
float: 0.100000000000000006
float: 0.200000000000000011
float: 0.299999999999999989
Decimal: 0.100000000000000000
Decimal: 0.200000000000000000
Decimal: 0.300000000000000000
And lastly I want to know for which precision the comparison still works:
>>> for p in [15, 16, 17]:
... print("Rounding precision: {}. Check 0.1 + 0.2 == 0.3 is {}".format(p,
... ftod(0.1, p) + ftod(0.2, p) == ftod(0.3, p)))
...
Rounding precision: 15. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 16. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 17. Check 0.1 + 0.2 == 0.3 is False
15 seems to be a good default for maximum precision. That should work on most systems. If you need more info, try:
>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
With float having a 53-bit mantissa on my system, I calculated the number of decimal digits:
>>> import math
>>> math.log10(2**53)
15.954589770191003
This tells me that with 53 bits we get almost 16 digits. So 15 is fine for the precision value and should always work; 16 is error-prone and 17 definitely causes trouble (as seen above).
Anyway ... in my specific case I only need 2 to 4 digits of precision, but as a perfectionist I enjoyed investigating this :-)
Any suggestions / improvements / complaints are welcome.
The "right" way to do this was documented in 1990 by Steele and White's and
Clinger's PLDI 1990 papers.
You might also look at this SO discussion about Python Decimal, including my suggestion to try using something like frap to rationalize a float.
You can use JSON to accomplish it
import json
from decimal import Decimal
float_value = 123456.2365
decimal_value = json.loads(json.dumps(float_value), parse_float=Decimal)
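For this input, decimal_value should come out as:
>>> decimal_value
Decimal('123456.2365')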
Inspired by this answer, I found a workaround that allows shortening the construction of a Decimal from a float, bypassing (only apparently) the string step:
import decimal

class DecimalBuilder(float):
    def __or__(self, a):
        return decimal.Decimal(str(a))
>>> d = DecimalBuilder()
>>> x = d|0.1
>>> y = d|0.2
>>> x + y # works as desired
Decimal('0.3')
>>> d|0.1 + d|0.2 # does not work as desired, needs parentheses
TypeError: unsupported operand type(s) for |: 'decimal.Decimal' and 'float'
>>> (d|0.1) + (d|0.2) # works as desired
Decimal('0.3')
It's a workaround, but it surely saves some typing and it's very readable.
The question is based on the wrong assertion that:
'Python Decimal doesn't support being constructed from float'.
In Python 3, the Decimal class can do it:
from decimal import *
getcontext().prec = 128 #high precision set
print(Decimal(100000.3))
A: 100000.300000000002910383045673370361328125 #SUCCESS (over 64 bit precision)
That's the right value with all decimals included, and so:
'there is no garbage after the 15th decimal place ...'
You can verify on line with a IEEE754 converter like:
https://www.binaryconvert.com/convert_double.html
A: Most accurate representation = 1.00000300000000002910383045673E5 (64 bit precision)
or directly in Python 3:
print(f'{100000.3:.128f}'.strip('0'))
A: 100000.300000000002910383045673370361328125
Preserving the value as the user has entered it is done with string conversion:
Decimal(str(100000.3))
A: 100000.3
