In the following example:
import math
x = math.log(2)
print("{:.500f}".format(x))
I tried to get 500 digits of output, but I only get the 53 decimals of ln(2) as follows:
0.69314718055994528622676398299518041312694549560546875000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
How can I fix this problem?
You can't with the Python float type. It's dependent on the underlying machine architecture, and in most cases you're limited to a double-precision float.
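For reference, you can query the limits of the platform float yourself with the standard sys module (a quick sketch):

>>> import sys
>>> sys.float_info.mant_dig   # significand bits of a binary64 double
53
>>> sys.float_info.dig        # decimal digits guaranteed to round-trip
15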
However, you can get higher precision with the decimal module:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 500
>>> d = Decimal(2)
>>> d.ln()
Decimal('0.69314718055994530941723212145817656807550013436025525412068000949339362196969471560586332699641868754200148102057068573368552023575813055703267075163507596193072757082837143519030703862389167347112335011536449795523912047517268157493206515552473413952588295045300709532636664265410423915781495204374043038550080194417064167151864471283996817178454695702627163106454615025720740248163777338963855069526066834113727387372292895649354702576265209885969320196505855476470330679365443254763274495125040607')
>>> print(d.ln())
0.69314718055994530941723212145817656807550013436025525412068000949339362196969471560586332699641868754200148102057068573368552023575813055703267075163507596193072757082837143519030703862389167347112335011536449795523912047517268157493206515552473413952588295045300709532636664265410423915781495204374043038550080194417064167151864471283996817178454695702627163106454615025720740248163777338963855069526066834113727387372292895649354702576265209885969320196505855476470330679365443254763274495125040607
"I tried to get 500 digits of output, but I only get 53 decimals of ln(2)"
The problem is not in the printing. The 500-digit output is the exact decimal value of what math.log(2) returned.
The return value of math.log(2) is encoded in binary64, which can represent only about 2**64 different finite values, each of them a dyadic rational. Mathematically, log(2) is an irrational number, so it is impossible for x to encode the mathematical result exactly.
Instead math.log(2) returns the nearest encodable value.
That value is exactly 0.6931471805599452862267639829951804131269454956054687500...
Printing binary64 with more than 17 significant digits typically does not add meaningful information about the value.
Within the realm of real numbers, an infinite set of numbers with arbitrary precision, the floating-point numbers are a small subset with finite precision: the numbers representable as an integer significand scaled by a power of two (see the double-precision floating-point format).
As ln(2) is not representable as a floating-point number, a computer finds the nearest number by numerical approximation. In the case of ln(2), this number is:
6243314768165359 * 2^-53 = 0.69314718055994528622676398299518041312694549560546875
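You can verify this decomposition in Python itself: float.as_integer_ratio() returns the exact significand and power-of-two denominator of the stored value:

>>> import math
>>> math.log(2).as_integer_ratio()
(6243314768165359, 9007199254740992)
>>> 2**53
9007199254740992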
If you need to do arbitrary-precision arithmetic, you need different computational methods. Various software packages exist that allow this; for Python, mpmath is fairly standard:
>>> from mpmath import *
>>> mp.dps = 500
>>> mp.pretty=True
>>> ln(2)
0.69314718055994530941723212145817656807550013436025525412068000949339362196969471560586332699641868754200148102057068573368552023575813055703267075163507596193072757082837143519030703862389167347112335011536449795523912047517268157493206515552473413952588295045300709532636664265410423915781495204374043038550080194417064167151864471283996817178454695702627163106454615025720740248163777338963855069526066834113727387372292895649354702576265209885969320196505855476470330679365443254763274495125040607
The decimal module provides support for fast correctly-rounded decimal floating point arithmetic.
I wrote this to learn about the module:
import math
from decimal import *

getcontext().prec = 19
print(Decimal(math.pow(2,60)-1))
print(Decimal(math.pow(2,60))-Decimal(1))
The weird thing is, I got two different results:
1152921504606846976
1152921504606846975
Why is that?
Note that the number is a long integer rather than a float/double.
That is not weird at all. math.pow(2,60) returns a float (1.152921504606847e+18) with all the float limitations: subtracting 1 from this large number does not change the outcome, and that arithmetic happens before Decimal is applied.
Indeed, using Decimal throughout overcomes this, as does using ** instead of math.pow:
>>> 2**60
1152921504606846976
>>> 2**60 - 1
1152921504606846975
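To see where the float arithmetic goes wrong, note that doubles near 2**60 are spaced 256 apart, so subtracting 1 is absorbed by rounding (a minimal demonstration; math.ulp requires Python 3.9+):

>>> import math
>>> math.pow(2, 60) - 1 == math.pow(2, 60)
True
>>> math.ulp(2.0**60)   # spacing of doubles at this magnitude
256.0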
I have some numbers like:
1.8816764231589208e-06 <type 'float'>
How can I convert it to
0.0000018816764231589208
preserving all precision?
This is not an easy problem, because binary floating point numbers cannot always represent decimal fractions exactly, and the number you have chosen is one such.
Therefore, you need to know how many digits of precision you want. In your exact case, see what happens when I try to print it with various formats.
>>> x = 1.8816764231589208e-06
>>> for i in range(10, 30):
...     fmt = "{:.%df}" % i
...     print(fmt, fmt.format(x))
...
{:.10f} 0.0000018817
{:.11f} 0.00000188168
{:.12f} 0.000001881676
{:.13f} 0.0000018816764
{:.14f} 0.00000188167642
{:.15f} 0.000001881676423
{:.16f} 0.0000018816764232
{:.17f} 0.00000188167642316
{:.18f} 0.000001881676423159
{:.19f} 0.0000018816764231589
{:.20f} 0.00000188167642315892
{:.21f} 0.000001881676423158921
{:.22f} 0.0000018816764231589208
{:.23f} 0.00000188167642315892079
{:.24f} 0.000001881676423158920791
{:.25f} 0.0000018816764231589207915
{:.26f} 0.00000188167642315892079146
{:.27f} 0.000001881676423158920791458
{:.28f} 0.0000018816764231589207914582
{:.29f} 0.00000188167642315892079145823
>>>
As you will observe, Python is happy to provide many digits of precision, but the later ones are spurious: a standard Python float is stored in 64 bits, of which only 52 (plus one implicit leading bit, 53 in total) represent the significand, which corresponds to roughly 15 to 17 significant decimal digits.
The real lesson here is that Python has no way to exactly store 1.8816764231589208e-06 as a floating point number. This is not so much a language limitation as a representational limitation of the floating-point implementation.
The formatting shown above may, however, allow you to solve your problem.
The value you presented is not stored exactly, as Rory Daulton suggests in his comment. So your float 1.8816764231589208e-06 <type 'float'> can be explained by this example:
>>> from decimal import Decimal
>>> a = 1.8816764231589208e-06
>>> g = 0.0000018816764231589208 # g == a
>>> Decimal(a) # Creates a Decimal object with the passed float
Decimal('0.000001881676423158920791458225373060653140555587015114724636077880859375')
>>> Decimal('0.0000018816764231589208') # Exact value stored using str
Decimal('0.0000018816764231589208')
>>> Decimal(a) == Decimal('0.0000018816764231589208')
False # See the problem here? Your float did not
# represent the number you "saw"
>>> Decimal(a).__float__()
1.8816764231589208e-06
>>> Decimal(a).__float__() == a
True
If you want precise decimals, use Decimal or some other class to represent numbers, rather than binary representations such as float. Your 0.0000018816764231589208 of type float is actually the number shown by Decimal(a) above.
I was doing some calculation and I got something like this:
newInteger = 200
newFloat = 200.0
if newInteger >= newFloat:
    print "Something"
When I run my application it didn't print anything, but when I test it in the Python shell, it prints Something!
So I tested this:
>>> number = 200.0000000000001
>>> number
200.0000000000001
but when the decimals go past 13 places, like so:
>>> number = 200.00000000000001
>>> number
200.0
Does Python hide the decimal digits and show a rounded value? Knowing the actual result is quite important when debugging.
Is there any way I can get the full decimals? (I looked at the Python documentation; it didn't say anything about printing the actual float value.)
It's called floating-point round-off error. It has to do with how Python stores floats (in binary), which makes it impossible for floats to represent most decimal values with 100% precision.
Here's more info in the docs.
See the decimal module if you need more precision.
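For instance, applied to the numbers from the question, Decimal values built from strings keep every digit and compare as expected:

>>> from decimal import Decimal
>>> Decimal("200.00000000000001")
Decimal('200.00000000000001')
>>> Decimal("200.00000000000001") > Decimal("200.0")
True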
If you just want to quickly compare two numbers, there are a couple of tricks for floating point comparison. One of the most popular is comparing the relative error to the machine precision (epsilon):
import sys

def float_equality(x, y, epsilon=sys.float_info.epsilon):
    return abs(x - y) <= epsilon * max(abs(x), abs(y))
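For example, this accepts the classic 0.1 + 0.2 case that plain == rejects:

>>> float_equality(0.1 + 0.2, 0.3)
True
>>> 0.1 + 0.2 == 0.3
False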
But this too, is not perfect. For a discussion of the imperfections of this method and some more accurate alternatives, see this article about comparing floats.
Python rounds numbers when displaying them:
>>> import math
>>> math.pi
3.141592653589793
>>> "{0:.50f}".format(math.pi)
'3.14159265358979311599796346854418516159057617187500'
>>> "{0:.2f}".format(math.pi)
'3.14'
However, floating-point numbers have a specific precision and you can't go beyond it. You can't store arbitrary numbers in floating point:
>>> number = 200.00000000000001
>>> "{:.25f}".format(number)
'200.0000000000000000000000000'
For integers, the floating-point limit is 2**53:
>>> 2.0**53
9007199254740992.0
>>> 2.0**53 + 1
9007199254740992.0
>>> 2.0**53 + 2
9007199254740994.0
If you want to store arbitrary decimal numbers, you should use the decimal module:
>>> from decimal import Decimal
>>> number = Decimal("200.0000000000000000000000000000000000000000001")
>>> number
Decimal('200.0000000000000000000000000000000000000000001')
Python Decimal doesn't support being constructed from a float; it expects that you have to convert the float to a string first.
This is very inconvenient since standard string formatters for floats require that you specify the number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places, you need to format as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before the decimal point (Decimal("%.15f" % 100000.3) == Decimal('100000.300000000002910')).
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered, perhaps limiting number of significant digits that can be supported?
Python <2.7
"%.15g" % f
Or in Python 3.0:
format(f, ".15g")
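Applied to the 100000.3 example from the question, the .15g format keeps 15 significant digits rather than 15 decimal places, so the garbage digits never appear:

>>> from decimal import Decimal
>>> Decimal(format(100000.3, ".15g"))
Decimal('100000.3')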
Python 2.7+, 3.2+
Just pass the float to Decimal constructor directly, like this:
from decimal import Decimal
Decimal(f)
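Be aware that constructing directly from the float preserves the exact binary value, not the decimal string you typed; for example:

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')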
I suggest this:
>>> import decimal
>>> a = 2.111111
>>> a
2.1111110000000002
>>> str(a)
'2.111111'
>>> decimal.Decimal(str(a))
Decimal('2.111111')
You said in your question:
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered
But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.
You can convert and then quantize to keep 5 digits after the decimal point via:
Decimal(float).quantize(Decimal("1.00000"))
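For example (quantize applies Decimal's default ROUND_HALF_EVEN rounding):

>>> from decimal import Decimal
>>> Decimal(2.675).quantize(Decimal("1.00000"))
Decimal('2.67500')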
Python does support Decimal creation from a float; you just cast it as a string first. But the precision loss doesn't occur with string conversion: the float you are converting doesn't have that kind of precision in the first place. (Otherwise you wouldn't need Decimal.)
I think the confusion here is that we can create float literals in decimal format, but as soon as the interpreter consumes that literal the inner representation becomes a floating point number.
The "official" string representation of a float is given by the repr() built-in:
>>> repr(1.5)
'1.5'
>>> repr(12345.678901234567890123456789)
'12345.678901234567'
You can use repr() instead of a formatted string, the result won't contain any unnecessary garbage.
When you say "preserving value as the user has entered", why not just store the user-entered value as a string, and pass that to the Decimal constructor?
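A minimal sketch of that approach, where user_text stands in for whatever your application actually read from the user (a hypothetical name):

>>> from decimal import Decimal
>>> user_text = "0.1"   # hypothetical: exactly what the user typed
>>> Decimal(user_text)
Decimal('0.1')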
The main answer is slightly misleading. The g format ignores any leading zeroes after the decimal point, so format(0.012345, ".2g") returns '0.012', which has three decimal places. If you need a hard limit on the number of decimal places, use the f formatter: format(0.012345, ".2f") == '0.01'.
I've come across the same problem / question today and I'm not completely satisfied with any of the answers given so far. The core of the question seems to be:
Can someone suggest a good way to convert from float to Decimal [...] perhaps limiting number of significant digits that can be supported?
Short answer / solution: Yes.
from decimal import Decimal

def ftod(val, prec=15):
    return Decimal(val).quantize(Decimal(10)**-prec)
Long Answer:
As nosklo pointed out it is not possible to preserve the input of the user after it has been converted to float.
It is possible though to round that value with a reasonable precision and convert it into Decimal.
In my case I only need 2 to 4 digits after the separator, but they need to be accurate. Let's consider the classic 0.1 + 0.2 == 0.3 check.
>>> 0.1 + 0.2 == 0.3
False
Now let's do this with conversion to decimal (complete example):
>>> from decimal import Decimal
>>> def ftod(val, prec = 15): # float to Decimal
...     return Decimal(val).quantize(Decimal(10)**-prec)
...
>>> ftod(0.1) + ftod(0.2) == ftod(0.3)
True
The answer by Ryabchenko Alexander was really helpful for me. It only lacks a way to dynamically set the precision, a feature I want (and maybe also need). The Decimal documentation FAQ gives an example of how to construct the required argument for quantize():
>>> Decimal(10)**-4
Decimal('0.0001')
Here's how the numbers look when printed with 18 digits after the separator (coming from C programming, I like the fancy Python expressions):
>>> for x in [0.1, 0.2, 0.3, ftod(0.1), ftod(0.2), ftod(0.3)]:
... print("{:8} {:.18f}".format(type(x).__name__+":", x))
...
float: 0.100000000000000006
float: 0.200000000000000011
float: 0.299999999999999989
Decimal: 0.100000000000000000
Decimal: 0.200000000000000000
Decimal: 0.300000000000000000
And lastly, I want to know for which precision the comparison still works:
>>> for p in [15, 16, 17]:
... print("Rounding precision: {}. Check 0.1 + 0.2 == 0.3 is {}".format(p,
... ftod(0.1, p) + ftod(0.2, p) == ftod(0.3, p)))
...
Rounding precision: 15. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 16. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 17. Check 0.1 + 0.2 == 0.3 is False
15 seems to be a good default for maximum precision. That should work on most systems. If you need more info, try:
>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
With float having a 53-bit mantissa on my system, I calculated the number of decimal digits:
>>> import math
>>> math.log10(2**53)
15.954589770191003
This tells me that with 53 bits we get almost 16 digits. So 15 is fine for the precision value and should always work; 16 is error-prone and 17 definitely causes trouble (as seen above).
Anyway ... in my specific case I only need 2 to 4 digits of precision, but as a perfectionist I enjoyed investigating this :-)
Any suggestions / improvements / complaints are welcome.
The "right" way to do this was documented in 1990 by Steele and White's and
Clinger's PLDI 1990 papers.
You might also look at this SO discussion about Python Decimal, including my suggestion to try using something like frap to rationalize a float.
You can use JSON to accomplish this:
import json
from decimal import Decimal
float_value = 123456.2365
decimal_value = json.loads(json.dumps(float_value), parse_float=Decimal)
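This works because json.dumps serializes the float using its shortest repr, and parse_float passes that exact string to the Decimal constructor:

print(repr(decimal_value))   # Decimal('123456.2365')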
Inspired by this answer, I found a workaround that allows one to shorten the construction of a Decimal from a float, bypassing (only apparently) the string step:
import decimal

class DecimalBuilder(float):
    def __or__(self, a):
        return decimal.Decimal(str(a))
>>> d = DecimalBuilder()
>>> x = d|0.1
>>> y = d|0.2
>>> x + y # works as desired
Decimal('0.3')
>>> d|0.1 + d|0.2 # does not work as desired, needs parentheses
TypeError: unsupported operand type(s) for |: 'decimal.Decimal' and 'float'
>>> (d|0.1) + (d|0.2) # works as desired
Decimal('0.3')
It's a workaround, but it surely saves some typing and it's very readable.
The question is based on the wrong assertion that:
'Python Decimal doesn't support being constructed from float'.
In Python 3, the Decimal class can do this directly:
from decimal import *
getcontext().prec = 128 #high precision set
print(Decimal(100000.3))
A: 100000.300000000002910383045673370361328125 #SUCCESS (over 64 bit precision)
That's the right value with all decimals included, and so there is no 'garbage after the 15th decimal place'.
You can verify online with an IEEE 754 converter like:
https://www.binaryconvert.com/convert_double.html
A: Most accurate representation = 1.00000300000000002910383045673E5 (64 bit precision)
or directly in Python 3:
print(f'{100000.3:.128f}'.strip('0'))
A: 100000.300000000002910383045673370361328125
Preserving the value as the user entered it is done with string conversion:
Decimal(str(100000.3))
A: 100000.3