how to convert a scientific notation number preserving precision - python

I have some numbers like:
1.8816764231589208e-06 <type 'float'>
How can I convert this to
0.0000018816764231589208
preserving all precision?

This is not an easy problem, because binary floating point numbers cannot always represent decimal fractions exactly, and the number you have chosen is one such.
Therefore, you need to know how many digits of precision you want. In your exact case, see what happens when I try to print it with various formats.
>>> x = 1.8816764231589208e-06
>>> for i in range(10, 30):
...     fmt = "{:.%df}" % i
...     print fmt, fmt.format(x)
...
{:.10f} 0.0000018817
{:.11f} 0.00000188168
{:.12f} 0.000001881676
{:.13f} 0.0000018816764
{:.14f} 0.00000188167642
{:.15f} 0.000001881676423
{:.16f} 0.0000018816764232
{:.17f} 0.00000188167642316
{:.18f} 0.000001881676423159
{:.19f} 0.0000018816764231589
{:.20f} 0.00000188167642315892
{:.21f} 0.000001881676423158921
{:.22f} 0.0000018816764231589208
{:.23f} 0.00000188167642315892079
{:.24f} 0.000001881676423158920791
{:.25f} 0.0000018816764231589207915
{:.26f} 0.00000188167642315892079146
{:.27f} 0.000001881676423158920791458
{:.28f} 0.0000018816764231589207914582
{:.29f} 0.00000188167642315892079145823
>>>
As you will observe, Python is happy to provide many digits of precision, but the later ones are spurious: a standard Python float is stored in 64 bits, of which 52 (plus an implicit leading bit, so 53 in total) hold the significant figures, which works out to roughly 15 to 17 significant decimal digits.
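You can confirm these limits with sys.float_info (a quick check, not part of the original answer; the values shown are the standard ones for IEEE 754 doubles):
>>> import sys
>>> sys.float_info.mant_dig   # bits of mantissa precision, including the implicit leading bit
53
>>> sys.float_info.dig        # decimal digits guaranteed to survive a round-trip through a float
15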
The real lesson here is that Python has no way to exactly store 1.8816764231589208e-06 as a floating point number. This is not so much a language limitation as a representational limitation of the floating-point implementation.
The formatting shown above may, however, allow you to solve your problem.
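If what you actually want is simply the fixed-point string for the value the float displays as, without the spurious trailing digits, one option (a sketch, not part of the original answer) is to go through Decimal built from the float's repr:
>>> from decimal import Decimal
>>> x = 1.8816764231589208e-06
>>> format(Decimal(repr(x)), 'f')   # repr() gives the shortest string that round-trips to the same float
'0.0000018816764231589208'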

As Rory Daulton suggests in his comment, the value you presented cannot be stored exactly. What your float 1.8816764231589208e-06 <type 'float'> actually holds can be seen in this example:
>>> from decimal import Decimal
>>> a = 1.8816764231589208e-06
>>> g = 0.0000018816764231589208 # g == a
>>> Decimal(a) # Creates a Decimal object with the passed float
Decimal('0.000001881676423158920791458225373060653140555587015114724636077880859375')
>>> Decimal('0.0000018816764231589208') # Exact value stored using str
Decimal('0.0000018816764231589208')
>>> Decimal(a) == Decimal('0.0000018816764231589208')
False # See the problem here? Your float did not
# represent the number you "saw"
>>> Decimal(a).__float__()
1.8816764231589208e-06
>>> Decimal(a).__float__() == a
True
If you want precise decimals, use Decimal (or some other exact representation) rather than a binary representation such as float. The 0.0000018816764231589208 you typed, once stored as a float, is actually the long number shown by Decimal(a) above.

Related

How to Convert Fixed Point Number to Decimal Number in Python?

How do I convert a signed 64.64-bit fixed point integer to a decimal number in python?
For example the integer 1844674407370955161600 which is a signed 64.64-bit fixed point represents the decimal number +100.00, my understanding is that a python float has insufficient bits(only 18) to represent the fractional part, hence my choice of the decimal type.
Perhaps a more general function for converting Qm.n to a decimal can be provided.
You can use decimal.Decimal and divide by the fixed point like so:
>>> import decimal
>>> decimal.Decimal(1844674407370955161600) / (1 << 64)
Decimal('100')
Keep in mind you'll need at least 39 digits of precision for full accuracy here (2**128 is about 3.4e38, i.e. 39 decimal digits). Make sure you set it before you start converting:
>>> decimal.getcontext().prec = 39
A different option is to use fractions, which will offer full precision as well:
>>> import fractions
>>> fractions.Fraction(1844674407370955161600, 1<<64)
Fraction(100, 1)
More generally, you can use the fxpmath module to convert any Qm.n fixed-point type:
from fxpmath import Fxp
x_fxp = Fxp(1844674407370955161600, dtype='Q64.64', raw=True) # a fixed-point object
x_float = x_fxp.get_val() # or just x_fxp(); x_float is now 100.0
If you want shorter code:
x = Fxp(1844674407370955161600, dtype='Q64.64', raw=True)()
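If you prefer not to pull in a third-party module, a general Qm.n-to-Decimal helper can also be written with the standard decimal module. This is only a sketch; the name qmn_to_decimal, its signature, and the two's-complement handling of signed values are my assumptions, not something given in the answers above:
from decimal import Decimal, getcontext

def qmn_to_decimal(raw, m, n, signed=True):
    # m + n decimal digits are always enough to hold the exact result
    # (assumes a Qm.n layout of m integer bits and n fractional bits)
    getcontext().prec = m + n
    if signed and raw >= 1 << (m + n - 1):
        raw -= 1 << (m + n)  # undo two's complement for negative raw values
    return Decimal(raw) / (1 << n)
For the Q64.64 example from the question:
>>> qmn_to_decimal(1844674407370955161600, 64, 64)
Decimal('100')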

In Python 3.5, when dividing an even number, why do divide and floor divide give different answers?

I am trying to divide this very large even number by 2: 13144131834269512219260941993714669605006625743172006030529504645527800951523697620149903055663251854220067020503783524785523675819158836547734770656069476
I used both division and floor division, but they give me two different answers, which I think should be the same.
So I got
int(x/2) = 6572065917134756165333387211683112531415896759844144557192219233347999705289073358407747856661759761476763448808302430806962124152349175018830474952835072
int(x//2) = 6572065917134756109630470996857334802503312871586003015264752322763900475761848810074951527831625927110033510251891762392761837909579418273867385328034738
Could anyone tell me what causes the difference?
Thanks
/ true division always produces a floating point result, and you can't accurately model your number with floats:
>>> huge = 13144131834269512219260941993714669605006625743172006030529504645527800951523697620149903055663251854220067020503783524785523675819158836547734770656069476
>>> huge / 2
6.572065917134756e+153
>>> type(huge / 2)
<class 'float'>
That's roughly 6.57 times 10^153, but a float can only carry 53 binary digits of precision in its mantissa:
>>> import sys
>>> sys.float_info.mant_dig
53
Floating point uses binary fractions to model the decimal portion, which means that for the majority of possible decimal values, this is only an approximation anyway.
Converting that value to int() is not going to bring back the precision that was lost.
// floor division, on the other hand, produces an integer for integer inputs, and Python integers have arbitrary precision, so nothing is lost:
>>> type(huge // 2)
<class 'int'>
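A quick way to see the loss, and to avoid it, is to keep everything in exact types; this is only an illustrative check, not part of the original answer:
>>> exact = huge // 2            # stays an int, exact
>>> approx = int(huge / 2)       # goes through float, rounded
>>> exact * 2 == huge
True
>>> approx * 2 == huge
False
>>> from fractions import Fraction
>>> Fraction(huge, 2) * 2 == huge    # Fraction keeps the division exact too
True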

How to convert large float values to int?

I have a variable containing a large floating point number, say a = 999999999999999.99
When I type int(a) in the interpreter, it returns 1000000000000000.
How do I get the output as 999999999999999 for long numbers like these?
999999999999999.99 is a number that can't be precisely represented in the floating-point format, so Python compromises and picks the closest value that can be represented. In this case, that happens to be 1000000000000000. That's why converting that to an integer gives you 1000000000000000.
If you need more precision than floats can provide, consider using decimal.Decimal.
>>> import decimal
>>> a = decimal.Decimal("999999999999999.99")
>>> a
Decimal('999999999999999.99')
>>> int(a)
999999999999999
The problem is not int(); the problem is the floating point value itself. Your value would need 17 digits of precision to be represented correctly, while double precision floating point values have between 15 and 16 digits of precision. So when you input it, it is rounded to the nearest representable float value, which is 1000000000000000.0. By the time int() is called there is nothing it can do: the precision is already lost.
If you need to represent this kind of value exactly, you can use the decimal data type, keeping in mind that performance does suffer compared to regular floats.
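A quick way to see that the rounding already happened when the literal was parsed, not inside int(), is to ask Decimal for the exact value the float holds (just an illustrative check):
>>> import decimal
>>> decimal.Decimal(999999999999999.99)   # the exact value stored in the float
Decimal('1000000000000000')
>>> 999999999999999.99 == 1000000000000000.0
True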

Python hide float decimals if it's over 13?

I was doing some calculations and I got something like this:
newInteger = 200
newFloat = 200.0
if newInteger >= newFloat:
    print "Something"
When I run my application it doesn't print it out, but when I test it in the Python shell, it prints Something!
So I tested this:
>>> number = 200.0000000000001
>>> number
200.0000000000001
but when the decimals go past 13 places, like so:
>>> number = 200.00000000000001
>>> number
200.0
Does Python hide the decimals and show a rounded value? Knowing the real value is quite important when debugging.
Is there any way I can get the full decimals? (I did look in the Python documentation, but it didn't say anything about printing the actual float value.)
It's called floating point round-off error. It has to do with how Python (like most languages) stores floats in binary, which means most decimal fractions cannot be represented exactly.
Here's more info in the docs.
See the decimal module if you need more precision.
If you just want to quickly compare two numbers, there are a couple of tricks for floating point comparison. One of the most popular is comparing the relative error to the machine precision (epsilon):
import sys

def float_equality(x, y, epsilon=sys.float_info.epsilon):
    return abs(x - y) <= epsilon * max(abs(x), abs(y))
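For example, the classic 0.1 + 0.2 case, which fails a plain == check, compares equal under this relative-error test (a quick illustration using the helper above):
>>> 0.1 + 0.2 == 0.3
False
>>> float_equality(0.1 + 0.2, 0.3)
True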
But this too, is not perfect. For a discussion of the imperfections of this method and some more accurate alternatives, see this article about comparing floats.
Python tends to round numbers:
>>> math.pi
3.141592653589793
>>> "{0:.50f}".format(math.pi)
'3.14159265358979311599796346854418516159057617187500'
>>> "{0:.2f}".format(math.pi)
'3.14'
However, floating point numbers have a specific precision and you can't go beyond it. You can't store arbitrary numbers in floating point:
>>> number = 200.00000000000001
>>> "{:.25f}".format(number)
'200.0000000000000000000000000'
For integers, the floating point limit is 2**53 (beyond that, not every integer can be represented):
>>> 2.0**53
9007199254740992.0
>>> 2.0**53 + 1
9007199254740992.0
>>> 2.0**53 + 2
9007199254740994.0
If you want to store arbitrary decimal numbers, you should use the decimal module:
>>> from decimal import Decimal
>>> number = Decimal("200.0000000000000000000000000000000000000000001")
>>> number
Decimal('200.0000000000000000000000000000000000000000001')

Python float to Decimal conversion

Python Decimal doesn't support being constructed from float; it expects that you have to convert float to a string first.
This is very inconvenient since standard string formatters for float require that you specify number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places you need to format as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before decimal (Decimal("%.15f" % 100000.3) == Decimal('100000.300000000002910')).
Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered, perhaps limiting number of significant digits that can be supported?
Python < 2.7:
"%.15g" % f
Or in Python 3.0:
format(f, ".15g")
Python 2.7+, 3.2+:
Just pass the float to the Decimal constructor directly:
from decimal import Decimal
Decimal(f)
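To see the difference between the two routes on the 100000.3 example from the question (just an illustrative check):
>>> from decimal import Decimal
>>> Decimal("%.15g" % 100000.3)   # rounded to 15 significant digits first
Decimal('100000.3')
>>> Decimal(100000.3)             # the exact binary value held by the float
Decimal('100000.300000000002910383045673370361328125')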
I suggest this:
>>> a = 2.111111
>>> a
2.1111110000000002
>>> str(a)
'2.111111'
>>> decimal.Decimal(str(a))
Decimal('2.111111')
You said in your question:
"Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered"
But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.
You can convert and then quantize to keep 5 digits after the decimal point:
Decimal(f).quantize(Decimal("1.00000"))
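For example, with the float 0.1 (illustrative only; the default rounding mode is ROUND_HALF_EVEN):
>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal(0.1).quantize(Decimal("1.00000"))
Decimal('0.10000')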
Python does support Decimal creation from a float; you just convert it to a string first. But the precision loss doesn't occur with the string conversion; the float you are converting doesn't have that kind of precision in the first place. (Otherwise you wouldn't need Decimal.)
I think the confusion here is that we can create float literals in decimal format, but as soon as the interpreter consumes that literal the inner representation becomes a floating point number.
The "official" string representation of a float is given by the repr() built-in:
>>> repr(1.5)
'1.5'
>>> repr(12345.678901234567890123456789)
'12345.678901234567'
You can use repr() instead of a formatted string, the result won't contain any unnecessary garbage.
When you say "preserving value as the user has entered", why not just store the user-entered value as a string, and pass that to the Decimal constructor?
The main answer is slightly misleading. The g format counts significant digits and ignores any leading zeroes after the decimal point, so format(0.012345, ".2g") returns '0.012', which has three decimal places. If you need a hard limit on the number of decimal places, use the f formatter: format(0.012345, ".2f") == '0.01'.
I've come across the same problem / question today and I'm not completely satisfied with any of the answers given so far. The core of the question seems to be:
Can someone suggest a good way to convert from float to Decimal [...] perhaps limiting number of significant digits that can be supported?
Short answer / solution: Yes.
def ftod(val, prec = 15):
    return Decimal(val).quantize(Decimal(10)**-prec)
Long Answer:
As nosklo pointed out it is not possible to preserve the input of the user after it has been converted to float.
It is possible though to round that value with a reasonable precision and convert it into Decimal.
In my case I only need 2 to 4 digits after the separator, but they need to be accurate. Let's consider the classic 0.1 + 0.2 == 0.3 check.
>>> 0.1 + 0.2 == 0.3
False
Now let's do this with conversion to decimal (complete example):
>>> from decimal import Decimal
>>> def ftod(val, prec = 15): # float to Decimal
...     return Decimal(val).quantize(Decimal(10)**-prec)
...
>>> ftod(0.1) + ftod(0.2) == ftod(0.3)
True
The answer by Ryabchenko Alexander was really helpful for me. It only lacks a way to dynamically set the precision, a feature I want (and maybe also need). The Decimal documentation FAQ gives an example of how to construct the required quantize() argument:
>>> Decimal(10)**-4
Decimal('0.0001')
Here's how the numbers look when printed with 18 digits after the decimal point (coming from C, I like the fancy Python format expressions):
>>> for x in [0.1, 0.2, 0.3, ftod(0.1), ftod(0.2), ftod(0.3)]:
...     print("{:8} {:.18f}".format(type(x).__name__+":", x))
...
float: 0.100000000000000006
float: 0.200000000000000011
float: 0.299999999999999989
Decimal: 0.100000000000000000
Decimal: 0.200000000000000000
Decimal: 0.300000000000000000
And lastly I want to know for which precision the comparison still works:
>>> for p in [15, 16, 17]:
...     print("Rounding precision: {}. Check 0.1 + 0.2 == 0.3 is {}".format(p,
...           ftod(0.1, p) + ftod(0.2, p) == ftod(0.3, p)))
...
Rounding precision: 15. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 16. Check 0.1 + 0.2 == 0.3 is True
Rounding precision: 17. Check 0.1 + 0.2 == 0.3 is False
15 seems to be a good default for maximum precision. That should work on most systems. If you need more info, try:
>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
With floats having a 53-bit mantissa on my system, I calculated the number of decimal digits:
>>> import math
>>> math.log10(2**53)
15.954589770191003
That tells me that with 53 bits we get almost 16 digits. So 15 is fine for the precision value and should always work; 16 is error-prone and 17 definitely causes trouble (as seen above).
Anyway ... in my specific case I only need 2 to 4 digits of precision, but as a perfectionist I enjoyed investigating this :-)
Any suggestions / improvements / complaints are welcome.
The "right" way to do this was documented in 1990 by Steele and White's and
Clinger's PLDI 1990 papers.
You might also look at this SO discussion about Python Decimal, including my suggestion to try using something like frap to rationalize a float.
You can use JSON to accomplish this:
import json
from decimal import Decimal
float_value = 123456.2365
decimal_value = json.loads(json.dumps(float_value), parse_float=Decimal)
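This works because json.dumps() writes the float using its repr() (the shortest string that round-trips to the same float), and parse_float=Decimal then builds the Decimal from that string, so here decimal_value ends up as (illustrative check):
>>> decimal_value
Decimal('123456.2365')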
Inspired by this answer, I found a workaround that shortens the construction of a Decimal from a float, bypassing (only apparently) the string step:
import decimal

class DecimalBuilder(float):
    def __or__(self, a):
        return decimal.Decimal(str(a))
>>> d = DecimalBuilder()
>>> x = d|0.1
>>> y = d|0.2
>>> x + y # works as desired
Decimal('0.3')
>>> d|0.1 + d|0.2 # does not work as desired, needs parentheses
TypeError: unsupported operand type(s) for |: 'decimal.Decimal' and 'float'
>>> (d|0.1) + (d|0.2) # works as desired
Decimal('0.3')
It's a workaround, but it saves typing and is quite readable.
The question is based on the incorrect assertion that "Python Decimal doesn't support being constructed from float". In Python 3, the Decimal class can do it:
from decimal import *
getcontext().prec = 128  # high precision for arithmetic (construction from a float is always exact)
print(Decimal(100000.3))
# 100000.300000000002910383045673370361328125
That's the exact stored value with all decimals included, and so there is no "garbage after the 15th decimal place".
You can verify it online with an IEEE 754 converter such as https://www.binaryconvert.com/convert_double.html, which reports the most accurate representation as 1.00000300000000002910383045673E5 (64-bit precision).
Or directly in Python 3:
print(f'{100000.3:.128f}'.strip('0'))
# 100000.300000000002910383045673370361328125
Preserving the value as the user entered it is done with string conversion:
Decimal(str(100000.3))
# Decimal('100000.3')
