I want a to be rounded to 13.95. I tried using round, but I get:
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
For the analogous issue with the standard library Decimal class, see How can I format a decimal to always show 2 decimal places?.
You are running into the old problem with floating-point numbers: not all numbers can be represented exactly. The command line is just showing you the full floating-point form from memory.
In floating-point representation, your rounded version is the same number. Since computers are binary, they store floating-point numbers as an integer which is then divided by a power of two, so 13.95 will be represented in a similar fashion to 125650429603636838/(2**53).
Double-precision numbers have 53 bits (almost 16 decimal digits) of precision, and single-precision floats have 24 bits (about 7 decimal digits). The floating-point type in Python uses double precision to store the values.
For example,
>>> 125650429603636838/(2**53)
13.949999999999999
>>> 234042163/(2**24)
13.949999988079071
>>> a = 13.946
>>> print(a)
13.946
>>> print("%.2f" % a)
13.95
>>> round(a,2)
13.949999999999999
>>> print("%.2f" % round(a, 2))
13.95
>>> print("{:.2f}".format(a))
13.95
>>> print("{:.2f}".format(round(a, 2)))
13.95
>>> print("{:.15f}".format(round(a, 2)))
13.949999999999999
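You can inspect the stored integer-over-power-of-two directly with float.as_integer_ratio(), which returns the fraction in lowest terms (the denominator here is 2**48, on an IEEE 754 system):
>>> (13.95).as_integer_ratio()
(3926575925113651, 281474976710656)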
If you are after only two decimal places (to display a currency value, for example), then you have a couple of better choices:
Use integers and store values in cents, not dollars, and then divide by 100 to convert to dollars.
Or use a fixed-point type like decimal (a minimal sketch follows below).
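For example, a minimal sketch of the decimal option:
>>> from decimal import Decimal
>>> price = Decimal('13.95')  # constructed from a string, so exact
>>> price * 3
Decimal('41.85')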
There are newer format specifications (see the String Format Specification Mini-Language in the documentation). You can do the same as:
"{:.2f}".format(13.949999999999999)
Note 1: the above returns a string. In order to get it as a float, simply wrap it with float(...):
float("{:.2f}".format(13.949999999999999))
Note 2: wrapping with float() doesn't change anything:
>>> x = 13.949999999999999999
>>> x
13.95
>>> g = float("{:.2f}".format(x))
>>> g
13.95
>>> x == g
True
>>> h = round(x, 2)
>>> h
13.95
>>> x == h
True
The built-in round() works just fine in Python 2.7 or later.
Example:
>>> round(14.22222223, 2)
14.22
Check out the documentation.
Let me give an example in Python 3.6's f-string/template-string format, which I think is beautifully neat:
>>> f'{a:.2f}'
It works well with longer examples too, with operators and not needing parentheses:
>>> print(f'Completed in {time.time() - start:.2f}s')
I feel that the simplest approach is to use the format() function.
For example:
a = 13.949999999999999
format(a, '.2f')
'13.95'
This produces the number as a string, rounded to two decimal places.
Most numbers cannot be exactly represented in floats. If you want to round the number because that's what your mathematical formula or algorithm requires, then you want to use round. If you just want to restrict the display to a certain precision, then don't even use round and just format it as that string. (If you want to display it with some alternate rounding method, and there are tons, then you need to mix the two approaches.)
>>> "%.2f" % 3.14159
'3.14'
>>> "%.2f" % 13.9499999
'13.95'
And lastly, though perhaps most importantly, if you want exact math then you don't want floats at all. The usual example is dealing with money and to store 'cents' as an integer.
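An illustrative sketch of the cents-as-integer idea (names are my own):
price_cents = 1395                             # $13.95 stored exactly as an integer
total_cents = price_cents * 3                  # exact integer arithmetic
print("$%d.%02d" % divmod(total_cents, 100))   # $41.85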
Use
print("{:.2f}".format(a))
instead of
print("{0:.2f}".format(a))
because the latter may lead to output errors when trying to output multiple variables (see comments; a sketch follows below).
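A sketch of the kind of mix-up presumably meant here (values are illustrative): with an explicit index, a repeated {0} formats the first argument twice, while auto-numbered fields take each argument in turn:
>>> a, b = 13.949999999999999, 3.14159
>>> print("{0:.2f} {0:.2f}".format(a, b))
13.95 13.95
>>> print("{:.2f} {:.2f}".format(a, b))
13.95 3.14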
Try the code below:
>>> a = 0.99334
>>> a = int((a * 100) + 0.5) / 100.0  # Adding 0.5 before truncating rounds to the nearest value
>>> print(a)
0.99
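One caveat to this trick: int() truncates toward zero, so it mis-rounds negative values; for those you would subtract 0.5 instead:
>>> a = -0.99334
>>> int((a * 100) + 0.5) / 100.0  # truncation goes the wrong way for negatives
-0.98
>>> int((a * 100) - 0.5) / 100.0
-0.99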
TLDR ;)
The rounding problem of input and output has been solved definitively by Python 3.1, and the fix was backported also to Python 2.7.0.
Rounded numbers can be reversibly converted between float and string back and forth:
str -> float() -> repr() -> float() ... or Decimal -> float -> str -> Decimal
>>> 0.3
0.3
>>> float(repr(0.3)) == 0.3
True
A Decimal type is not necessary for storage anymore.
Results of arithmetic operations must be rounded again, because rounding errors can accumulate more inaccuracy than is possible after parsing one number. That is not fixed by the improved repr() algorithm (Python >= 3.1, >= 2.7.0):
>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1, 0.2, 0.3
(0.1, 0.2, 0.3)
The output string function str(float(...)) was rounded to 12 valid digits in Python < 2.7x and < 3.1 to prevent excessive invalid digits, similar to the unfixed repr() output. That was still insufficient after subtraction of very similar numbers, and it was rounded too much after other operations. Python 2.7 and 3.1 use the same length of str(), although the repr() is fixed. Some old versions of Numpy also had excessive invalid digits, even with fixed Python. The current Numpy is fixed. Python versions >= 3.2 have the same results for str() and repr(), and the output of similar functions in Numpy matches as well.
Test
import random
from decimal import Decimal
for _ in range(1000000):
    x = random.random()
    assert x == float(repr(x)) == float(Decimal(repr(x)))  # Reversible repr()
    assert str(x) == repr(x)
    assert len(repr(round(x, 12))) <= 14  # no excessive decimal places
Documentation
See the release notes, Python 2.7 - Other Language Changes, the fourth paragraph:
Conversions between floating-point numbers and strings are now correctly rounded on most platforms. These conversions occur in many different places: str() on floats and complex numbers; the float and complex constructors; numeric formatting; serializing and de-serializing floats and complex numbers using the marshal, pickle and json modules; parsing of float and imaginary literals in Python code; and Decimal-to-float conversion.
Related to this, the repr() of a floating-point number x now returns a result based on the shortest decimal string that’s guaranteed to round back to x under correct rounding (with round-half-to-even rounding mode). Previously it gave a string based on rounding x to 17 decimal digits.
The related issue
More information: The formatting of float before Python 2.7 was similar to the current numpy.float64. Both types use the same 64-bit IEEE 754 double precision with a 52-bit mantissa. A big difference is that np.float64.__repr__ frequently prints an excessive number of decimal digits so that no bit can be lost, but no valid IEEE 754 number exists between 13.949999999999999 and 13.950000000000001. The result is not nice, and the conversion repr(float(number_as_string)) is not reversible with numpy. On the other hand, float.__repr__ is formatted so that every digit is important; the sequence is without gaps and the conversion is reversible. Simply: if you have a numpy.float64 number, convert it to a normal float in order to be formatted for humans, not for numeric processors; otherwise nothing more is necessary with Python 2.7+.
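A minimal sketch of that advice (assuming numpy is installed; exact repr output varies across numpy versions):
import numpy as np

x = np.float64(13.95)
print(float(x))  # 13.95, Python's shortest round-trip repr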
Use:
float_number = 12.234325335563
round(float_number, 2)
This will return:
12.23
Explanation:
The round function takes two arguments: the number to be rounded and the number of decimal places to return. Here I returned two decimal places.
You can modify the output format:
>>> a = 13.95
>>> a
13.949999999999999
>>> print "%.2f" % a
13.95
With Python < 3 (e.g. 2.6 or 2.7), there are two ways to do so.
# Option one
older_method_string = "%.9f" % numvar
# Option two (note ':' before the '.9f')
newer_method_string = "{:.9f}".format(numvar)
But note that for Python versions 3 and above (e.g., 3.2 or 3.3), option two is preferred.
For more information on option two, I suggest this link on string formatting from the Python documentation.
And for more information on option one, this link will suffice and has information on the various flags.
Reference: Convert floating point number to a certain precision, and then copy to string
You can use the format() function for rounding the value to two decimal places in Python:
print(format(14.4499923, '.2f'))  # The output is 14.45
As Matt pointed out, Python 3.6 provides f-strings, and they can also use nested parameters:
value = 2.34558
precision = 2
width = 4
print(f'result: {value:{width}.{precision}f}')
which will display result: 2.35
In Python 2.7:
a = 13.949999999999999
output = float("%0.2f" % a)
print output
We have multiple options to do that:
Option 1:
x = 1.090675765757
g = float("{:.2f}".format(x))
print(g)
Option 2:
The built-in round() is available in Python 2.7 or later.
x = 1.090675765757
g = round(x, 2)
print(g)
The Python tutorial has an appendix called Floating Point Arithmetic: Issues and Limitations. Read it. It explains what is happening and why Python is doing its best. It even has an example that matches yours. Let me quote a bit:
>>> 0.1
0.10000000000000001
you may be tempted to use the round() function to chop it back to the single digit you expect. But that makes no difference:
>>> round(0.1, 1)
0.10000000000000001
The problem is that the binary floating-point value stored for "0.1" was already the best possible binary approximation to 1/10, so trying to round it again can't make it better: it was already as good as it gets.
Another consequence is that since 0.1 is not exactly 1/10, summing ten values of 0.1 may not yield exactly 1.0, either:
>>> sum = 0.0
>>> for i in range(10):
... sum += 0.1
...
>>> sum
0.99999999999999989
One alternative and solution to your problems would be using the decimal module.
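For instance, the loop above becomes exact when the values are Decimals constructed from strings:
>>> from decimal import Decimal
>>> sum(Decimal('0.1') for _ in range(10))
Decimal('1.0')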
Use a combination of the Decimal object and the round() method.
Python 3.7.3
>>> from decimal import Decimal
>>> d1 = Decimal(13.949999999999999) # define a Decimal
>>> d1
Decimal('13.949999999999999289457264239899814128875732421875')
>>> d2 = round(d1, 2) # round to 2 decimals
>>> d2
Decimal('13.95')
It's doing exactly what you told it to do and is working correctly. Read more about floating point confusion and maybe try decimal objects instead.
from decimal import Decimal
def round_float(v, ndigits=2, rt_str=False):
    d = Decimal(v)
    v_str = ("{0:.%sf}" % ndigits).format(round(d, ndigits))
    if rt_str:
        return v_str
    return Decimal(v_str)
Results:
Python 3.6.1 (default, Dec 11 2018, 17:41:10)
>>> round_float(3.1415926)
Decimal('3.14')
>>> round_float(3.1445926)
Decimal('3.14')
>>> round_float(3.1455926)
Decimal('3.15')
>>> round_float(3.1455926, rt_str=True)
'3.15'
>>> str(round_float(3.1455926))
'3.15'
Here is the simple solution:
value = 5.34343
rounded_value = round(value, 2) # 5.34
Use a lambda function like this:
arred = lambda x, n: x * (10**n) // 1 / (10**n)
This way you could just do:
arred(3.141591657, 2)
and get
3.14
Note that the // 1 floors the scaled value, so this truncates rather than rounds to nearest: arred(3.149, 2) is also 3.14.
orig_float = 232569 / 16000.0  # 14.5355625
short_float = float("{:.2f}".format(orig_float))  # 14.54
For fixing floating point in dynamically typed languages such as Python and JavaScript, I use this technique:
# For example:
a = 70000
b = 0.14
c = a * b
print(c)  # Prints 9800.000000000002
# Try to fix
c = int(c * 10000) / 10000
print(c)  # Prints 9800.0
You can also use Decimal as follows:
from decimal import *
getcontext().prec = 6
Decimal(1) / Decimal(7)
# Results in 6 precision -> Decimal('0.142857')
getcontext().prec = 28
Decimal(1) / Decimal(7)
# Results in 28 precision -> Decimal('0.1428571428571428571428571429')
It's simple:
Use the decimal module for fast, correctly rounded decimal floating-point arithmetic:
from decimal import Decimal
d = Decimal(10000000.0000009)
To achieve rounding:
d.quantize(Decimal('0.01'))
will result in Decimal('10000000.00').
To make the above DRY:
def round_decimal(number, exponent='0.01'):
    decimal_value = Decimal(number)
    return decimal_value.quantize(Decimal(exponent))
or
def round_decimal(number, decimal_places=2):
    decimal_value = Decimal(number)
    return decimal_value.quantize(Decimal(10) ** -decimal_places)
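A quick usage check (outputs assume the default ROUND_HALF_EVEN context):
>>> round_decimal(10000000.0000009)
Decimal('10000000.00')
>>> round_decimal(3.14159, decimal_places=3)
Decimal('3.142')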
PS: critique of others: formatting is not rounding.
Here is the simple solution using the format function.
float(format(num, '.2f'))
Note: We are converting the number to float, because the format function returns a string.
If you want to handle money, use the Python decimal module:
from decimal import Decimal, ROUND_HALF_UP
# 'amount' can be integer, string, tuple, float, or another Decimal object
def to_money(amount) -> Decimal:
    money = Decimal(amount).quantize(Decimal('.00'), rounding=ROUND_HALF_UP)
    return money
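For example, ROUND_HALF_UP is what makes the first case below round up; the default ROUND_HALF_EVEN would give Decimal('13.94') instead:
>>> to_money('13.945')
Decimal('13.95')
>>> to_money(13.99)
Decimal('13.99')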
lambda x, n: int(x * 10**n + 0.5) / 10**n
has worked for me for many years in many languages. (Note: in Python the power operator is **, not ^, which is bitwise XOR.)
To round a number to a resolution, the best way is the following, which can work with any resolution (0.01 for two decimals or even other steps):
>>> import numpy as np
>>> value = 13.949999999999999
>>> resolution = 0.01
>>> newValue = int(np.round(value/resolution))*resolution
>>> print newValue
13.95
>>> resolution = 0.5
>>> newValue = int(np.round(value/resolution))*resolution
>>> print newValue
14.0
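If you'd rather not depend on numpy, the built-in round() gives the same result for the coarser step (a sketch; note that multiplying back by a step like 0.01 can reintroduce binary representation error):
>>> value = 13.949999999999999
>>> resolution = 0.5
>>> round(value / resolution) * resolution
14.0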
The answers I saw didn't work with the float(52.15) case. After some tests, there is the solution that I'm using:
import decimal
def value_to_decimal(value, decimal_places):
    decimal.getcontext().rounding = decimal.ROUND_HALF_UP  # define rounding method
    return decimal.Decimal(str(float(value))).quantize(decimal.Decimal('1e-{}'.format(decimal_places)))
(The conversion of 'value' to float and then to string is very important; that way, 'value' can be a float, Decimal, integer, or string!)
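A quick check with the float(52.15) case mentioned above:
>>> value_to_decimal(52.15, 2)
Decimal('52.15')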
Hope this helps anyone.
Python's Decimal doesn't support being constructed from a float; it expects that you have to convert the float to a string first.
This is very inconvenient since standard string formatters for float require that you specify the number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places, you need to format as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before the decimal point (Decimal("%.15f" % 100000.3) == Decimal('100000.300000000002910')).
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered, perhaps limiting number of significant digits that can be supported?
Python <2.7
"%.15g" % f
Or in Python 3.0:
format(f, ".15g")
Python 2.7+, 3.2+
Just pass the float to Decimal constructor directly, like this:
from decimal import Decimal
Decimal(f)
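Note that constructing a Decimal directly from a float captures the exact binary value rather than the decimal string you typed:
>>> from decimal import Decimal
>>> Decimal(13.95)
Decimal('13.949999999999999289457264239899814128875732421875')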
I suggest this
>>> a = 2.111111
>>> a
2.1111110000000002
>>> str(a)
'2.111111'
>>> import decimal
>>> decimal.Decimal(str(a))
Decimal('2.111111')
You said in your question:
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered
But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.
You can convert and then quantize to keep 5 digits after the decimal point via:
Decimal(my_float).quantize(Decimal("1.00000"))
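For example:
>>> from decimal import Decimal
>>> Decimal(0.1).quantize(Decimal("1.00000"))
Decimal('0.10000')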
Python does support Decimal creation from a float; you just cast it as a string first. But the precision loss doesn't occur with the string conversion: the float you are converting doesn't have that kind of precision in the first place. (Otherwise you wouldn't need Decimal.)
I think the confusion here is that we can create float literals in decimal format, but as soon as the interpreter consumes that literal the inner representation becomes a floating point number.
The "official" string representation of a float is given by the repr() built-in:
>>> repr(1.5)
'1.5'
>>> repr(12345.678901234567890123456789)
'12345.678901234567'
You can use repr() instead of a formatted string, the result won't contain any unnecessary garbage.
When you say "preserving value as the user has entered", why not just store the user-entered value as a string, and pass that to the Decimal constructor?
The main answer is slightly misleading. The g format ignores any leading zeroes after the decimal point, so format(0.012345, ".2g") returns '0.012' (three decimal places). If you need a hard limit on the number of decimal places, use the f formatter: format(0.012345, ".2f") == '0.01'
I've come across the same problem / question today, and I'm not completely satisfied with any of the answers given so far. The core of the question seems to be:
Can someone suggest a good way to convert from float to Decimal [...] perhaps limiting number of significant digits that can be supported?
Short answer / solution: Yes.
from decimal import Decimal

def ftod(val, prec=15):
    return Decimal(val).quantize(Decimal(10)**-prec)
Long Answer:
As nosklo pointed out, it is not possible to preserve the user's input after it has been converted to a float.
It is possible, though, to round that value with a reasonable precision and convert it into Decimal.
In my case I only need 2 to 4 digits after the separator, but they need to be accurate. Let's consider the classic 0.1 + 0.2 == 0.3 check.
>>> 0.1 + 0.2 == 0.3
False
Now let's do this with conversion to decimal (complete example):
>>> from decimal import Decimal
>>> def ftod(val, prec=15):  # float to Decimal
...     return Decimal(val).quantize(Decimal(10)**-prec)
...
>>> ftod(0.1) + ftod(0.2) == ftod(0.3)
True
The answer by Ryabchenko Alexander was really helpful for me. It only lacks a way to dynamically set the precision, a feature I want (and maybe also need). The Decimal documentation FAQ gives an example of how to construct the required argument for quantize():
>>> Decimal(10)**-4
Decimal('0.0001')
Here's what the numbers look like when printed with 18 digits after the separator (coming from C programming, I like the fancy Python expressions):
>>> for x in [0.1, 0.2, 0.3, ftod(0.1), ftod(0.2), ftod(0.3)]:
...     print("{:8} {:.18f}".format(type(x).__name__+":", x))
...
float: 0.100000000000000006
float: 0.200000000000000011
float: 0.299999999999999989
Decimal: 0.100000000000000000
Decimal: 0.200000000000000000
Decimal: 0.300000000000000000
And lastly I want to know for which precision the comparison still works:
>>> for p in [15, 16, 17]:
...     print("Rounding precision: {}. Check 0.1 + 0.2 == 0.3 is {}".format(p,
...         ftod(0.1, p) + ftod(0.2, p) == ftod(0.3, p)))
...
Rounding precision: 15. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 16. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 17. Check 0.1 + 0.2 == 0.3 is False
15 seems to be a good default for maximum precision. That should work on most systems. If you need more info, try:
>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
With float having a 53-bit mantissa on my system, I calculated the number of decimal digits:
>>> import math
>>> math.log10(2**53)
15.954589770191003
Which tells me that with 53 bits we get almost 16 digits. So 15 is fine for the precision value and should always work. 16 is error-prone, and 17 definitely causes trouble (as seen above).
Anyway ... in my specific case I only need 2 to 4 digits of precision, but as a perfectionist I enjoyed investigating this :-)
Any suggestions / improvements / complaints are welcome.
The "right" way to do this was documented in Steele & White's and Clinger's PLDI 1990 papers.
You might also look at this SO discussion about Python Decimal, including my suggestion to try using something like frap to rationalize a float.
You can use JSON to accomplish it:
import json
from decimal import Decimal
float_value = 123456.2365
decimal_value = json.loads(json.dumps(float_value), parse_float=Decimal)
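This works because json.dumps() serializes the float using its shortest round-trip representation, which parse_float then hands to Decimal as a string:
print(decimal_value)  # 123456.2365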
Inspired by this answer, I found a workaround that allows shortening the construction of a Decimal from a float, bypassing (only apparently) the string step:
import decimal
class DecimalBuilder(float):
    def __or__(self, a):
        return decimal.Decimal(str(a))
>>> d = DecimalBuilder()
>>> x = d|0.1
>>> y = d|0.2
>>> x + y # works as desired
Decimal('0.3')
>>> d|0.1 + d|0.2 # does not work as desired, needs parenthesis
TypeError: unsupported operand type(s) for |: 'decimal.Decimal' and 'float'
>>> (d|0.1) + (d|0.2) # works as desired
Decimal('0.3')
It's a workaround, but it surely saves code typing and it's very readable.
The question is based on the wrong assertion that:
'Python Decimal doesn't support being constructed from float'.
In Python 3, the Decimal class can do it:
from decimal import *
getcontext().prec = 128  # high precision set
print(Decimal(100000.3))
A: 100000.300000000002910383045673370361328125  # SUCCESS (over 64-bit precision)
That's the right value with all decimals included, and so:
'there is no garbage after the 15th decimal place ...'
You can verify it online with an IEEE 754 converter like:
https://www.binaryconvert.com/convert_double.html
A: Most accurate representation = 1.00000300000000002910383045673E5 (64-bit precision)
or directly in Python 3:
print(f'{100000.3:.128f}'.strip('0'))
A: 100000.300000000002910383045673370361328125
Preserving the value as the user entered it is done with string conversion:
Decimal(str(100000.3))
A: 100000.3