How to force the number of digits in python [duplicate] - python

I need to print or convert a float number to a 15-decimal-place string, even if the result has many trailing 0s, e.g.:
1.6 becomes 1.6000000000000000
I tried round(6.2, 15), but it returns 6.2000000000000002, adding a rounding error.
I also saw various people online who put the float into a string and then added trailing 0s manually, but that seems bad...
What is the best way to do this?

For Python versions 2.6+ and 3.x
You can use the str.format method. Examples:
>>> print('{0:.16f}'.format(1.6))
1.6000000000000001
>>> print('{0:.15f}'.format(1.6))
1.600000000000000
Note the 1 at the end of the first example is rounding error; it happens because exact representation of the decimal number 1.6 requires an infinite number of binary digits. Since floating-point numbers have a finite number of bits, the number is rounded to a nearby, but not equal, value.
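To see the nearby value that is actually stored, the decimal module can display the float's exact expansion (a quick illustration):
>>> from decimal import Decimal
>>> Decimal(1.6)
Decimal('1.600000000000000088817841970012523233890533447265625')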
For Python versions prior to 2.6 (at least back to 2.0)
You can use the "modulo-formatting" syntax (this works for Python 2.6 and 2.7 too):
>>> print '%.16f' % 1.6
1.6000000000000001
>>> print '%.15f' % 1.6
1.600000000000000

The cleanest way in modern Python (3.6+) is to use an f-string with a format specifier:
>>> var = 1.6
>>> f"{var:.15f}"
'1.600000000000000'
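If the number of places is itself a variable, the precision can be nested inside the f-string (a small extension of the example above):
>>> digits = 15
>>> f"{var:.{digits}f}"
'1.600000000000000'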

Floating-point numbers lack the precision to represent "1.6" exactly out to that many decimal places. The rounding errors are real. Your number is not actually 1.6.
Check out: http://docs.python.org/library/decimal.html

I guess this is essentially putting it in a string, but this avoids the rounding error:
import decimal

def display(x):
    digits = 15
    # Append zeros to the string form, then cut 15 places after the '.'
    temp = str(decimal.Decimal(str(x) + '0' * digits))
    return temp[:temp.find('.') + digits + 1]
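For example (assuming the input has a decimal point, since the function slices at the '.'):
>>> display(1.6)
'1.600000000000000'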

We can use format() to control the number of digits printed after the decimal point.
Taken from http://docs.python.org/tutorial/floatingpoint.html
>>> import math
>>> format(math.pi, '.12g') # give 12 significant digits
'3.14159265359'
>>> format(math.pi, '.2f') # give 2 digits after the point
'3.14'

Related

Python. Limit decimal points in a variable when displaying value [duplicate]

I want a to be rounded to 13.95. I tried using round, but I get:
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
For the analogous issue with the standard library Decimal class, see How can I format a decimal to always show 2 decimal places?.
You are running into the old problem with floating-point numbers: not all numbers can be represented exactly. The command line is just showing you the full floating-point form from memory.
With floating-point representation, your rounded version is the same number. Since computers are binary, they store floating-point numbers as an integer and then divide it by a power of two, so 13.95 will be represented in a similar fashion to 125650429603636838/(2**53).
Double-precision numbers have 53 bits (roughly 16 decimal digits) of precision and single-precision floats have 24 bits (roughly 8 decimal digits). The floating-point type in Python uses double precision to store the values.
For example,
>>> 125650429603636838/(2**53)
13.949999999999999
>>> 234042163/(2**24)
13.949999988079071
>>> a = 13.946
>>> print(a)
13.946
>>> print("%.2f" % a)
13.95
>>> round(a,2)
13.949999999999999
>>> print("%.2f" % round(a, 2))
13.95
>>> print("{:.2f}".format(a))
13.95
>>> print("{:.2f}".format(round(a, 2)))
13.95
>>> print("{:.15f}".format(round(a, 2)))
13.949999999999999
If you are after only two decimal places (to display a currency value, for example), then you have a couple of better choices:
Use integers and store values in cents, not dollars, and then divide by 100 to convert to dollars.
Or use a fixed-point type such as decimal.Decimal, as shown below.
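A minimal sketch of the integer-cents idea (the variable names are illustrative):
price_cents = 1395               # 13.95 dollars, stored exactly as an int
total_cents = 3 * price_cents    # exact integer arithmetic: 4185
print('$%d.%02d' % divmod(total_cents, 100))  # $41.85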
There are new format specifications, String Format Specification Mini-Language:
You can do the same as:
"{:.2f}".format(13.949999999999999)
Note 1: the above returns a string. In order to get it as a float, simply wrap it with float(...):
float("{:.2f}".format(13.949999999999999))
Note 2: wrapping with float() doesn't change anything:
>>> x = 13.949999999999999999
>>> x
13.95
>>> g = float("{:.2f}".format(x))
>>> g
13.95
>>> x == g
True
>>> h = round(x, 2)
>>> h
13.95
>>> x == h
True
The built-in round() works just fine in Python 2.7 or later.
Example:
>>> round(14.22222223, 2)
14.22
Check out the documentation.
Let me give an example in Python 3.6's f-string/template-string format, which I think is beautifully neat:
>>> f'{a:.2f}'
It works well in longer expressions too, with operators and without needing parentheses:
>>> print(f'Completed in {time.time() - start:.2f}s')
I feel that the simplest approach is to use the format() function.
For example:
>>> a = 13.949999999999999
>>> format(a, '.2f')
'13.95'
This produces the number as a string, rounded to two decimal places.
Most numbers cannot be exactly represented in floats. If you want to round the number because that's what your mathematical formula or algorithm requires, then you want to use round. If you just want to restrict the display to a certain precision, then don't even use round and just format it as that string. (If you want to display it with some alternate rounding method, and there are tons, then you need to mix the two approaches.)
>>> "%.2f" % 3.14159
'3.14'
>>> "%.2f" % 13.9499999
'13.95'
And lastly, though perhaps most importantly, if you want exact math then you don't want floats at all. The usual example is dealing with money and to store 'cents' as an integer.
Use
print("{:.2f}".format(a))
instead of
print("{0:.2f}".format(a))
because the latter may lead to output errors when trying to output multiple variables (see comments).
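A minimal illustration of the pitfall, with made-up values: the explicit index {0} refers to the first argument every time, while the auto-numbered form consumes the arguments in order.
>>> "{0:.2f} {0:.2f}".format(1.234, 5.678)
'1.23 1.23'
>>> "{:.2f} {:.2f}".format(1.234, 5.678)
'1.23 5.68'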
Try the code below:
>>> a = 0.99334
>>> a = int((a * 100) + 0.5) / 100.0 # Adding 0.5 rounds it up
>>> print a
0.99
TLDR ;)
The rounding problem of input and output has been solved definitively by Python 3.1, and the fix was backported to Python 2.7.0.
Rounded numbers can be reversibly converted between float and string back and forth:
str -> float() -> repr() -> float() ... or Decimal -> float -> str -> Decimal
>>> 0.3
0.3
>>> float(repr(0.3)) == 0.3
True
A Decimal type is not necessary for storage anymore.
Results of arithmetic operations must be rounded again, because rounding errors can accumulate more inaccuracy than is possible after parsing one number. That is not fixed by the improved repr() algorithm (Python >= 3.1, >= 2.7.0):
>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1, 0.2, 0.3
(0.1, 0.2, 0.3)
The output string function str(float(...)) was rounded to 12 valid digits in Python < 2.7 and < 3.1 to prevent excessive invalid digits, similar to unfixed repr() output. That was still insufficient after subtraction of very similar numbers, and it rounded too much after other operations. Python 2.7 and 3.1 use the same length of str() although the repr() is fixed. Some old versions of NumPy also produced excessive invalid digits, even with fixed Python; current NumPy is fixed. Python versions >= 3.2 give the same results for str() and repr(), as do the output functions in NumPy.
Test
import random
from decimal import Decimal
for _ in range(1000000):
    x = random.random()
    assert x == float(repr(x)) == float(Decimal(repr(x)))  # Reversible repr()
    assert str(x) == repr(x)
    assert len(repr(round(x, 12))) <= 14  # no excessive decimal places.
Documentation
See the Release notes Python 2.7 - Other Language Changes the fourth paragraph:
Conversions between floating-point numbers and strings are now correctly rounded on most platforms. These conversions occur in many different places: str() on floats and complex numbers; the float and complex constructors; numeric formatting; serializing and de-serializing floats and complex numbers using the marshal, pickle and json modules; parsing of float and imaginary literals in Python code; and Decimal-to-float conversion.
Related to this, the repr() of a floating-point number x now returns a result based on the shortest decimal string that’s guaranteed to round back to x under correct rounding (with round-half-to-even rounding mode). Previously it gave a string based on rounding x to 17 decimal digits.
The related issue
More information: The formatting of float before Python 2.7 was similar to the current numpy.float64. Both types use the same 64-bit IEEE 754 double precision with a 52-bit mantissa. A big difference is that np.float64.__repr__ is frequently formatted with an excessive number of decimal digits so that no bit can be lost, but no valid IEEE 754 number exists between 13.949999999999999 and 13.950000000000001. The result is not nice, and the conversion repr(float(number_as_string)) is not reversible with numpy. On the other hand, float.__repr__ is formatted so that every digit is important: the sequence is without gaps and the conversion is reversible. Simply put: if you have a numpy.float64 number, convert it to a normal float so that it is formatted for humans, not for numeric processors; otherwise nothing more is necessary with Python 2.7+.
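A minimal sketch of that conversion (assuming NumPy is installed; the value is illustrative):
import numpy as np

x = np.float64(13.95)
print(repr(float(x)))  # the built-in float gives the shortest round-trippable repr: 13.95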
Use:
float_number = 12.234325335563
round(float_number, 2)
This returns:
12.23
Explanation:
The round function takes two arguments: the number to be rounded and the number of decimal places to be returned. Here it returns two decimal places.
You can modify the output format:
>>> a = 13.95
>>> a
13.949999999999999
>>> print "%.2f" % a
13.95
With Python < 3 (e.g. 2.6 or 2.7), there are two ways to do so.
# Option one
older_method_string = "%.9f" % numvar
# Option two (note ':' before the '.9f')
newer_method_string = "{:.9f}".format(numvar)
But note that for Python 3.x versions (e.g. 3.2 or 3.3), option two is preferred.
For more information on option two, I suggest this link on string formatting from the Python documentation.
And for more information on option one, this link will suffice and has information on the various flags.
Reference: Convert floating point number to a certain precision, and then copy to string
You can use the format() function to round the value to two decimal places in Python:
print(format(14.4499923, '.2f'))  # The output is 14.45
As Matt pointed out, Python 3.6 provides f-strings, and they can also use nested parameters:
value = 2.34558
precision = 2
width = 4
print(f'result: {value:{width}.{precision}f}')
which will display result: 2.35
In Python 2.7:
a = 13.949999999999999
output = float("%0.2f" % a)
print output
We have multiple options to do that:
Option 1:
x = 1.090675765757
g = float("{:.2f}".format(x))
print(g)
Option 2:
The built-in round() works in Python 2.7 or later.
x = 1.090675765757
g = round(x, 2)
print(g)
The Python tutorial has an appendix called Floating Point Arithmetic: Issues and Limitations. Read it. It explains what is happening and why Python is doing its best. It even has an example that matches yours. Let me quote a bit:
>>> 0.1
0.10000000000000001
you may be tempted to use the round() function to chop it back to the single digit you expect. But that makes no difference:
>>> round(0.1, 1)
0.10000000000000001
The problem is that the binary floating-point value stored for “0.1” was already the best possible binary approximation to 1/10, so trying to round it again can’t make it better: it was already as good as it gets.
Another consequence is that since 0.1 is not exactly 1/10, summing ten values of 0.1 may not yield exactly 1.0, either:
>>> sum = 0.0
>>> for i in range(10):
...     sum += 0.1
...
>>> sum
0.99999999999999989
One alternative solution to your problem would be to use the decimal module.
Use a combination of a Decimal object and the round() function.
Python 3.7.3
>>> from decimal import Decimal
>>> d1 = Decimal(13.949999999999999) # define a Decimal
>>> d1
Decimal('13.949999999999999289457264239899814128875732421875')
>>> d2 = round(d1, 2) # round to 2 decimals
>>> d2
Decimal('13.95')
It's doing exactly what you told it to do and is working correctly. Read more about floating-point confusion and maybe try Decimal objects instead.
from decimal import Decimal

def round_float(v, ndigits=2, rt_str=False):
    d = Decimal(v)
    v_str = ("{0:.%sf}" % ndigits).format(round(d, ndigits))
    if rt_str:
        return v_str
    return Decimal(v_str)
Results:
Python 3.6.1 (default, Dec 11 2018, 17:41:10)
>>> round_float(3.1415926)
Decimal('3.14')
>>> round_float(3.1445926)
Decimal('3.14')
>>> round_float(3.1455926)
Decimal('3.15')
>>> round_float(3.1455926, rt_str=True)
'3.15'
>>> str(round_float(3.1455926))
'3.15'
The simple solution is here
value = 5.34343
rounded_value = round(value, 2) # 5.34
Use a lambda function like this:
arred = lambda x, n: x * (10**n) // 1 / (10**n)
This way you could just do:
arred(3.141591657, 2)
and get
3.14
(Note that this floors the scaled value rather than rounding it to the nearest.)
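If true rounding is wanted instead, a half-up variant for non-negative numbers might look like this (a sketch, with the usual binary-representation caveats):
arred_round = lambda x, n: int(x * 10**n + 0.5) / 10**n
arred_round(3.141591657, 2)  # 3.14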
orig_float = 232569 / 16000.0                     # 14.5355625
short_float = float("{:.2f}".format(orig_float))  # 14.54
For fixing the floating point in dynamically typed languages such as Python and JavaScript, I use this technique:
# For example:
a = 70000
b = 0.14
c = a * b
print(c)  # Prints 9800.000000000002
# Try to fix
c = int(c * 100) / 100.0
print(c)  # Prints 9800.0
You can also use Decimal as follows:
from decimal import *
getcontext().prec = 6
Decimal(1) / Decimal(7)
# Results in 6 precision -> Decimal('0.142857')
getcontext().prec = 28
Decimal(1) / Decimal(7)
# Results in 28 precision -> Decimal('0.1428571428571428571428571429')
It's simple, like:
Use the decimal module for fast, correctly rounded decimal floating-point arithmetic:
from decimal import Decimal

d = Decimal(10000000.0000009)
To achieve rounding:
d.quantize(Decimal('0.01'))
will result in Decimal('10000000.00').
Make the above DRY:
def round_decimal(number, exponent='0.01'):
    decimal_value = Decimal(number)
    return decimal_value.quantize(Decimal(exponent))
or
def round_decimal(number, decimal_places=2):
    decimal_value = Decimal(number)
    return decimal_value.quantize(Decimal(10) ** -decimal_places)
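Usage might look like this (the second call assumes the exponent-string variant):
>>> round_decimal(10000000.0000009)
Decimal('10000000.00')
>>> round_decimal(3.14159, exponent='0.0001')
Decimal('3.1416')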
PS: critique of others: formatting is not rounding.
Here is a simple solution using the format function:
float(format(num, '.2f'))
Note: We convert the result back to float because the format method returns a string.
If you want to handle money, use the Python decimal module:
from decimal import Decimal, ROUND_HALF_UP
# 'amount' can be an integer, string, tuple, float, or another Decimal object
def to_money(amount) -> Decimal:
    money = Decimal(amount).quantize(Decimal('.00'), rounding=ROUND_HALF_UP)
    return money
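For example:
>>> to_money('19.995')
Decimal('20.00')
>>> to_money(13.949999999999999)
Decimal('13.95')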
lambda x, n: int(x * 10**n + 0.5) / 10**n
has worked for me for many years in many languages.
To round a number to a resolution, the best way is the following one, which works with any resolution (0.01 for two decimals or even other steps):
>>> import numpy as np
>>> value = 13.949999999999999
>>> resolution = 0.01
>>> newValue = int(np.round(value/resolution))*resolution
>>> print newValue
13.95
>>> resolution = 0.5
>>> newValue = int(np.round(value/resolution))*resolution
>>> print newValue
14.0
The answers I saw didn't work with the float(52.15) case. After some tests, this is the solution I'm using:
import decimal

def value_to_decimal(value, decimal_places):
    decimal.getcontext().rounding = decimal.ROUND_HALF_UP  # define rounding method
    return decimal.Decimal(str(float(value))).quantize(decimal.Decimal('1e-{}'.format(decimal_places)))
(The conversion of 'value' to float and then to string is very important; that way, 'value' can be a float, Decimal, integer or string!)
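For example:
>>> value_to_decimal(52.15, 2)
Decimal('52.15')
>>> value_to_decimal('52.15', 1)
Decimal('52.2')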
Hope this helps anyone.

In Python, are there hidden rules that control how the precision of a decimal number is displayed?

For Python, do read this link: https://docs.python.org/3/tutorial/floatingpoint.html, "Floating Point Arithmetic: Issues and Limitations"
I do understand that there is a mismatch (tiny difference) between a binary-represented float and an exact-decimal represented float, e.g.:
exact-decimal represented float: 1.005
Python binary-represented float: 1.00499999999999989341858963598497211933135986328125
Here is what I typed in Python:
>>> 1.005
1.005
>>> from decimal import Decimal
>>> Decimal(1.005)
Decimal('1.00499999999999989341858963598497211933135986328125')
Here are my questions:
Why did Python show 1.005 when I typed in 1.005? Why is it not 1.00499999999999989341858963598497211933135986328125?
If Python rounds the result to some number of digits after the decimal point, what is the rounding rule for my situation? It looks like there is a default rounding rule when Python starts; if this default rounding rule exists, how can I change it?
Thanks
When asked to convert the float value 1.0049999999999999 to string, Python displays it with rounding:
>>> x = 1.0049999999999999; print(x)
1.005
According to the post that juanpa linked, Python uses the David Gay algorithm to decide how many digits to show when printing a float. Usually around 16 digits are shown, which makes sense, since 64-bit floats can represent 15 to 17 digits of significance.
If you want to print a float with some other number of digits shown, use an f-string or string interpolation with a precision specifier (see e.g. Input and Output - The Python Tutorial). For instance to print x with 20 digits:
>>> print(f'{x:.20}')
1.0049999999999998934
>>> print('%.20g' % x)
1.0049999999999998934

Why doesn't Python include the number of digits I specify after the decimal point when rounding?

For example,
a = 5 * 6.2
print(round(a, 2))
The output is 31.0. I would have expected 31.00.
b = 2.3 * 3.2
print(round(b, 3))
The output is 7.36. I would have expected 7.360.
You are confusing rounding with formatting. Rounding produces a new float object with the rounded value, which is still going to print the same way as any other float:
>>> print(31.00)
31.0
Use the format() function if you need to produce a string with a specific number of decimals:
>>> print(format(31.0, '.2f'))
31.00
See the Format Specification Mini-Language section for what options you have available.
If the value is part of a larger string, you can use the str.format() method to embed values into a string template, using the same formatting specifications:
>>> a = 5 * 6.2
>>> print('The value of "a" is {:.2f}'.format(a))
Python always prints at least one digit after the decimal point so you can tell the difference between integers and floats.
The round() function merely rounds the number to the specified number of decimal places. It does not control how it is printed. 7.36 and 7.360 are the same number, so the shorter is printed.
To control the printing, you can use formatting. For example:
print(".3f" % b)
Python does round to 3 decimal places; it is the printing that cuts the additional zeros. Try something like print("%.3f" % number)

How to check if a float value is within a certain range and has a given number of decimal digits?

How to check if a float value is within a range (0.50,150.00) and has 2 decimal digits?
For example, 15.22366 should be false (too many decimal digits). But 15.22 should be true.
I tried something like:
data = input()
if data in range(0.50, 150.00):
    return True
Is this what you are looking for?
def check(value):
    if 0.50 <= value <= 150 and round(value, 2) == value:
        return True
    return False
Given your comment:
I input 15.22366 and it is going to return true; that is why I specified the range; it should accept 15.22
Simply said, floating-point values are imprecise. Many values don't have a precise representation. Take for example 1.40. It might be displayed "as is":
>>> f = 1.40
>>> print f
1.4
But this is an illusion. Python has rounded that value in order to display it nicely. The real value referenced by the variable f is quite different:
>>> from decimal import Decimal
>>> Decimal(f)
Decimal('1.399999999999999911182158029987476766109466552734375')
According to your rule of having only 2 decimals, should f reference a valid value or not?
The easiest way to fix that issue is probably to use round(..., 2), as I suggested in the code above. But this is only a heuristic, able to reject only "largely wrong" values. See my point here:
>>> for v in [ 1.40,
... 1.405,
... 1.399999999999999911182158029987476766109466552734375,
... 1.39999999999999991118,
... 1.3999999999999991118]:
...     print check(v), v
...
True 1.4
False 1.405
True 1.4
True 1.4
False 1.4
Notice how the last few results might seem surprising at first. I hope my explanations above shed some light on this.
As a final piece of advice, for your needs as I guess them from your question, you should definitely consider using "decimal arithmetic". Python provides the decimal module for that purpose.
float is the wrong data type to use for your case. Use Decimal instead.
Check the Python docs for issues and limitations. To quote from there (I've generalised the text in italics):
Floating-point numbers are represented in computer hardware as base 2 (binary) fractions.
no matter how many base 2 digits you’re willing to use, some decimal value (like 0.1) cannot be represented exactly as a base 2 fraction.
Stop at any finite number of bits, and you get an approximation
On a typical machine running Python, there are 53 bits of precision available for a Python float, so the value stored internally when you enter a decimal number is the binary fraction which is close to, but not exactly equal to it.
The documentation for the built-in round() function says that it rounds to the nearest value, rounding ties away from zero.
And finally, it recommends
If you’re in a situation where you care which way your decimal halfway-cases are rounded, you should consider using the decimal module.
And this will hold for your case as well, as you are looking for a precision of 2 digits after the decimal point, which float just can't guarantee.
EDIT note: The answer below corresponds to the original question, which related to random float generation.
Seeing that you need 2 digits of guaranteed precision, I would suggest generating random integers in the range [50, 15000] and dividing them by 100 to convert them to float yourself.
import random
random.randint(50, 15000)/100.0
Why don't you just use round?
round(random.uniform(0.5, 150.0), 2)
Probably what you want is not to change the value itself. As said by Cyber in the comment, even if you round a floating-point number, the result is still stored with full float precision. If you need to change the way it is printed:
n = random.uniform(0.5, 150)
print '%.2f' % n # 58.03
The easiest way is to first convert the float to a string, split it on '.', and check the length of the fractional part; if it has more than 2 digits, reject the value, otherwise check that it is in the given range (a more defensive variant follows after this snippet):
a = 15.22366
if len(str(a).split('.')[1]) <= 2 and 0.50 <= a <= 150:
    # do your stuff
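The defensive variant of the same idea (a sketch; it also handles values without a fractional part):
def at_most_two_decimals(value):
    s = str(value)
    return '.' not in s or len(s.split('.')[1]) <= 2

at_most_two_decimals(15.22)     # True
at_most_two_decimals(15.22366)  # False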

Python float to Decimal conversion

Python Decimal doesn't support being constructed from float; it expects that you have to convert float to a string first.
This is very inconvenient since standard string formatters for float require that you specify number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places you need to format as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before decimal (Decimal("%.15f" % 100000.3) == Decimal('100000.300000000002910')).
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered, perhaps limiting number of significant digits that can be supported?
Python <2.7
"%.15g" % f
Or in Python 3.0:
format(f, ".15g")
Python 2.7+, 3.2+
Just pass the float to the Decimal constructor directly, like this:
from decimal import Decimal
Decimal(f)
I suggest this
>>> a = 2.111111
>>> a
2.1111110000000002
>>> str(a)
'2.111111'
>>> decimal.Decimal(str(a))
Decimal('2.111111')
You said in your question:
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered
But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.
You can convert and then quantize to keep 5 digits after the decimal point via:
Decimal(f).quantize(Decimal("1.00000"))
Python does support Decimal creation from a float; you just cast it as a string first. But the precision loss doesn't occur with string conversion: the float you are converting doesn't have that kind of precision in the first place. (Otherwise you wouldn't need Decimal.)
I think the confusion here is that we can create float literals in decimal format, but as soon as the interpreter consumes that literal the inner representation becomes a floating point number.
The "official" string representation of a float is given by the repr() built-in:
>>> repr(1.5)
'1.5'
>>> repr(12345.678901234567890123456789)
'12345.678901234567'
You can use repr() instead of a formatted string, the result won't contain any unnecessary garbage.
When you say "preserving value as the user has entered", why not just store the user-entered value as a string, and pass that to the Decimal constructor?
The main answer is slightly misleading. The g format ignores any leading zeroes after the decimal point, so format(0.012345, ".2g") returns 0.012 - three decimal places. If you need a hard limit on the number of decimal places, use the f formatter: format(0.012345, ".2f") == 0.01
I've come across the same problem / question today, and I'm not completely satisfied with any of the answers given so far. The core of the question seems to be:
Can someone suggest a good way to convert from float to Decimal [...] perhaps limiting number of significant digits that can be supported?
Short answer / solution: Yes.
from decimal import Decimal

def ftod(val, prec=15):
    return Decimal(val).quantize(Decimal(10)**-prec)
Long Answer:
As nosklo pointed out, it is not possible to preserve the input of the user after it has been converted to float.
It is possible, though, to round that value with a reasonable precision and convert it into Decimal.
In my case I only need 2 to 4 digits after the separator, but they need to be accurate. Let's consider the classic 0.1 + 0.2 == 0.3 check.
>>> 0.1 + 0.2 == 0.3
False
Now let's do this with conversion to decimal (complete example):
>>> from decimal import Decimal
>>> def ftod(val, prec=15):  # float to Decimal
...     return Decimal(val).quantize(Decimal(10)**-prec)
...
>>> ftod(0.1) + ftod(0.2) == ftod(0.3)
True
The answer by Ryabchenko Alexander was really helpful for me. It only lacks a way to dynamically set the precision – a feature I want (and maybe also need). The Decimal documentation FAQ gives an example of how to construct the required argument for quantize():
>>> Decimal(10)**-4
Decimal('0.0001')
Here's how the numbers look when printed with 18 digits after the separator (coming from C programming, I like the fancy Python expressions):
>>> for x in [0.1, 0.2, 0.3, ftod(0.1), ftod(0.2), ftod(0.3)]:
... print("{:8} {:.18f}".format(type(x).__name__+":", x))
...
float: 0.100000000000000006
float: 0.200000000000000011
float: 0.299999999999999989
Decimal: 0.100000000000000000
Decimal: 0.200000000000000000
Decimal: 0.300000000000000000
And lastly, I want to know for which precision the comparison still works:
>>> for p in [15, 16, 17]:
... print("Rounding precision: {}. Check 0.1 + 0.2 == 0.3 is {}".format(p,
... ftod(0.1, p) + ftod(0.2, p) == ftod(0.3, p)))
...
Rounding precision: 15. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 16. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 17. Check 0.1 + 0.2 == 0.3 is False
15 seems to be a good default for maximum precision. That should work on most systems. If you need more info, try:
>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
With float having a 53-bit mantissa on my system, I calculated the number of decimal digits:
>>> import math
>>> math.log10(2**53)
15.954589770191003
This tells me that with 53 bits we get almost 16 digits. So 15 is fine for the precision value and should always work; 16 is error-prone and 17 definitely causes trouble (as seen above).
Anyway ... in my specific case I only need 2 to 4 digits of precision, but as a perfectionist I enjoyed investigating this :-)
Any suggestions / improvements / complaints are welcome.
The "right" way to do this was documented in 1990 by Steele and White's and
Clinger's PLDI 1990 papers.
You might also look at this SO discussion about Python Decimal, including my suggestion to try using something like frap to rationalize a float.
You can use JSON to accomplish it
import json
from decimal import Decimal
float_value = 123456.2365
decimal_value = json.loads(json.dumps(float_value), parse_float=Decimal)
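As a quick check, the round trip preserves the float's shortest repr:
print(decimal_value)  # 123456.2365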
Inspired by this answer, I found a workaround that allows one to shorten the construction of a Decimal from a float, bypassing (only apparently) the string step:
import decimal

class DecimalBuilder(float):
    def __or__(self, a):
        return decimal.Decimal(str(a))
>>> d = DecimalBuilder()
>>> x = d|0.1
>>> y = d|0.2
>>> x + y # works as desired
Decimal('0.3')
>>> d|0.1 + d|0.2 # does not work as desired, needs parentheses
TypeError: unsupported operand type(s) for |: 'decimal.Decimal' and 'float'
>>> (d|0.1) + (d|0.2) # works as desired
Decimal('0.3')
It's a workaround, but it certainly saves typing and it's very readable.
The question is based on the wrong assertion that:
'Python Decimal doesn't support being constructed from float'.
In Python 3, the Decimal class can do it:
from decimal import *
getcontext().prec = 128 #high precision set
print(Decimal(100000.3))
A: 100000.300000000002910383045673370361328125 #SUCCESS (over 64 bit precision)
That's the right value with all decimals included, and so:
'there is no garbage after the 15th decimal place ...'
You can verify it online with an IEEE 754 converter like:
https://www.binaryconvert.com/convert_double.html
A: Most accurate representation = 1.00000300000000002910383045673E5 (64 bit precision)
or directly in Python 3:
print(f'{100000.3:.128f}'.strip('0'))
A: 100000.300000000002910383045673370361328125
Preserving the value as the user has entered it is done with string conversion, as in:
Decimal(str(100000.3))
A: 100000.3
