Precision of repr(f), str(f), print(f) when f is float - python

If I run:
>>> import math
>>> print(math.pi)
3.141592653589793
Then pi is printed with 16 digits.
However, according to:
>>> import sys
>>> sys.float_info.dig
15
My precision is 15 digits.
So, can I rely on the last digit of that value (i.e. is the value of π indeed 3.141592653589793nnnnnn...)?

TL;DR
The last digit of str(float) or repr(float) can seem "wrong", in that the printed decimal representation is not the correctly rounded one.
>>> 0.100000000000000040123456
0.10000000000000003
But that value is still closer to the original than 0.1000000000000000 (with 1 digit less) is.
In the case of math.pi, the decimal approximation of π is 3.141592653589793238463..., so here the last digit is right.
sys.float_info.dig tells you how many decimal digits are guaranteed to always be preserved.
The default output of both str(float) and repr(float) in Python 3.1+ (and of repr in 2.7) is the shortest string that, when converted back to float, returns the original value; in case of ambiguity, the last digit is rounded to the closest value. A float provides ~15.9 decimal digits of precision, but up to 17 decimal digits are required to represent a 53-bit binary significand unambiguously.
For example, 0.10000000000000004 lies between 0x1.999999999999dp-4 and 0x1.999999999999cp-4; these two have the decimal expansions
0.10000000000000004718447854656915296800434589385986328125
and
0.100000000000000033306690738754696212708950042724609375
respectively. Clearly the latter is closer, so that binary representation is chosen.
Now when these are converted back to a string with str() or repr(), the shortest string that yields exactly the same value is chosen; for these two values they are 0.10000000000000005 and 0.10000000000000003, respectively.
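You can verify these values yourself with float.fromhex and decimal.Decimal (a small REPL sketch; the exact strings assume IEEE-754 doubles, which CPython uses on virtually all platforms):
>>> import decimal
>>> c = float.fromhex('0x1.999999999999cp-4')
>>> d = float.fromhex('0x1.999999999999dp-4')
>>> 0.10000000000000004 == c   # the literal rounds to the closer neighbour
True
>>> decimal.Decimal(d)
Decimal('0.10000000000000004718447854656915296800434589385986328125')
>>> repr(c), repr(d)
('0.10000000000000003', '0.10000000000000005')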
The precision of a double in IEEE-754 is 53 binary digits; in decimal, you can calculate the precision by taking the base-10 logarithm of 2^53:
>>> math.log(2 ** 53, 10)
15.954589770191001
meaning almost 16 digits of precision. sys.float_info.dig tells how many digits you can always count on being preserved, and this number is 15, because there are some 16-digit decimal numbers that are indistinguishable as floats.
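Here is one such 16-digit collision, near 2**53 (a quick check; this assumes IEEE-754 doubles):
>>> float('9007199254740993') == float('9007199254740992')
True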
However, this is not the whole story. Internally, what happens in Python 3.2+ is that float.__str__ and float.__repr__ end up calling the same C function, float_repr:
static PyObject *
float_repr(PyFloatObject *v)
{
    PyObject *result;
    char *buf;

    buf = PyOS_double_to_string(PyFloat_AS_DOUBLE(v),
                                'r', 0,
                                Py_DTSF_ADD_DOT_0,
                                NULL);
    if (!buf)
        return PyErr_NoMemory();
    result = _PyUnicode_FromASCII(buf, strlen(buf));
    PyMem_Free(buf);
    return result;
}
PyOS_double_to_string then, for the 'r' mode (standing for repr), calls either _Py_dg_dtoa with mode 0 (an internal routine that converts the double to a string) or snprintf with %17g on those platforms where _Py_dg_dtoa doesn't work.
The behaviour of snprintf is entirely platform-dependent, but if _Py_dg_dtoa is used (as far as I understand, it is on most machines), the result is predictable.
The _Py_dg_dtoa mode 0 is specified as follows:
0 ==> shortest string that yields d when read in and rounded to nearest.
So, that is what happens: the yielded string must reproduce the double value exactly when read back in, it must be the shortest such representation, and among the multiple decimal representations that would read back in as that value, it is the one closest to the binary value. Note that this also means the last digit of the decimal expansion may not match the original value correctly rounded at that length; it is only guaranteed that the decimal string is as close to the original binary representation as possible. Thus YMMV.
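You can observe the round-trip property from Python itself (a quick sanity check; the exact strings assume CPython 3 with IEEE-754 doubles):
>>> x = 0.1
>>> float(repr(x)) == x    # the shortest string still round-trips exactly
True
>>> '%.17g' % x            # 17 digits always suffice...
'0.10000000000000001'
>>> repr(x)                # ...but repr picks the shortest string
'0.1'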

Related

Reason for residuum in python float multiplication

Why do some float multiplications in Python produce these weird residues?
e.g.
>>> 50*1.1
55.00000000000001
but
>>> 30*1.1
33.0
The reason should be somewhere in the binary representation of floats, but where exactly does the difference between the two examples come from?
(This answer assumes your Python implementation uses IEEE-754 binary64, which is common.)
When 1.1 is converted to floating-point, the result is exactly 1.100000000000000088817841970012523233890533447265625, because this is the nearest representable value. (This number is 4953959590107546 · 2^−52, an integer with at most 53 bits multiplied by a power of two.)
When that is multiplied by 50, the exact mathematical result is 55.00000000000000444089209850062616169452667236328125. That cannot be exactly represented in binary64. To fit it into the binary64 format, it is rounded to the nearest representable value, which is 55.00000000000000710542735760100185871124267578125 (which is 7740561859543041 · 2^−47).
When it is multiplied by 30, the exact result is 33.00000000000000266453525910037569701671600341796875. It also cannot be represented exactly in binary64, so it is rounded to the nearest representable value, which is 33. (The next higher representable value is 33.00000000000000710542735760100185871124267578125, and we can see that …026 is closer to …000 than to …071.)
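You can inspect these exact intermediate values from Python with the decimal module (a quick check; assumes IEEE-754 binary64):
>>> import decimal
>>> decimal.Decimal(1.1)
Decimal('1.100000000000000088817841970012523233890533447265625')
>>> decimal.Decimal(50 * 1.1)
Decimal('55.00000000000000710542735760100185871124267578125')
>>> decimal.Decimal(30 * 1.1)
Decimal('33')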
That explains what the internal results are. Next there is an issue of how your Python implementation formats the output. I do not believe the Python implementation is strict about this, but it is likely one of two methods is used:
In effect, the number is converted to a certain number of significant decimal digits, and then trailing insignificant zeros are removed. Converting 55.00000000000000710542735760100185871124267578125 to a numeral with 16 significant digits yields 55.00000000000001, which has no trailing zeros to remove. Converting 33 to a numeral with 16 significant digits yields 33.00000000000000, which has 14 trailing zeros to remove. (Presumably your Python implementation always leaves at least one digit after the decimal point, to clearly distinguish a floating-point number from an integer.)
Just enough decimal digits are used to uniquely distinguish the number from adjacent representable values. This method is required in Java and JavaScript but is not yet common in other programming languages. In the case of 55.00000000000000710542735760100185871124267578125, printing “55.00000000000001” distinguishes it from the neighboring values 55 (which would be formatted as “55.0”) and 55.0000000000000142108547152020037174224853515625 (which would be “55.000000000000014”).
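Both behaviours are easy to reproduce (the exact strings below assume CPython 3 on IEEE-754 hardware):
>>> '%.16g' % (50 * 1.1)   # fixed significant digits, trailing zeros stripped
'55.00000000000001'
>>> '%.16g' % (30 * 1.1)
'33'
>>> repr(50 * 1.1)         # shortest string that round-trips
'55.00000000000001'
>>> repr(30 * 1.1)
'33.0'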

seeing the true value of a float in a python program

The Python command line shows the true value of a float readily:
>>> 1.5-1.4
0.10000000000000009
The obvious way to see it from within a python program is to print it
>>> print 1.5-1.4
0.1
which seems to automatically round it? Is there a way to see the true value of a float from within a program?
Given that IEEE 754 double precision can require up to 767 significand digits to print the true value in base 10 (not counting leading zeros), yet holds only 53 bits, maybe the true value in base 10 is not what you want.
repr is good enough: it is the shortest base-10 number that rounds back to the same float.
Thus, any two different floats have different reprs, and repr identifies your float uniquely.
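For example (the repr string is exact enough to reconstruct the float):
>>> x = 1.5 - 1.4
>>> repr(x)
'0.10000000000000009'
>>> float(repr(x)) == x
True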
If you want a good view of the internal representation, you can print in base 16 with float.hex: you'll get a leading 1 (or 0 for subnormals) and 13 hexadecimal "digits" encoding 4 bits each, plus a base-2 exponent (written in base 10).
Here is an example:
import decimal

f = 1 << 1022              # 2**1022
u = 1 << (1022 + 53 - 1)   # 2**1074
y = 2 / f - 1 / u          # = 2**-1021 - 2**-1074

print(repr(y))
print(decimal.Decimal(y))
print(len(str(decimal.Decimal(y))))
print(float.hex(y))
Output is
4.4501477170144023e-308
4.450147717014402272114819593418263951869639092703291296046852219449644444042153891033059047816270175828298317826079242213740172877389189291055314414815641243486759976282126534658507104573762744298025962244902903779698114444614570510266311510031828794952795966823603998647925096578034214163701381261333311989876551545144031526125381326665295130600018491776632866075559583739224098994780755659409810102161219881460525874257917900007167599934414508608720568157791543592301891033496486942061405218289243144579760516365090360651414037721744226256159024466852576737244643007551333245007965068671949137768847800530996396770975896584413789443379662199396731693628045708486661320679701772891608002069867940855134372886767540972075723245543477091246131749358028173446655273437500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000E-308
773
0x1.fffffffffffffp-1022
You can hardly decipher the second form with its 773 characters (767 significand digits + 1 character for the dot + 5 characters for the exponent).
Note: in Python 2.7, set y with this line instead:
y = float(2 / decimal.Decimal(f) - 1 / decimal.Decimal(u))
In some Python implementations, you can use print("%.9999g" % (1.5-1.4)). This should print the number with 9999 significant digits, but with trailing zeros suppressed; in effect, all the significant digits of the number.
Python implementations may rely on underlying hardware and software for floating-point services, possibly including the formatting provided by %.9999g. Some implementations might not provide all digits needed to see the exact value. They may show the value rounded to about 16 digits, for example, in spite of the fact 9999 were requested.
In Python 2.7.10 on macOS 10.14.2, the above prints “0.100000000000000088817841970012523233890533447265625”, which is the exact value.
(In comparison, print("%.9999g" % .1) prints “0.1000000000000000055511151231257827021181583404541015625”.)
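If you would rather not rely on %g behaviour, the decimal module shows the exact value portably (this works the same way in Python 2.7 and 3.x):
>>> import decimal
>>> print(decimal.Decimal(1.5 - 1.4))
0.100000000000000088817841970012523233890533447265625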

How to convert large float values to int?

I have a variable containing a large floating point number, say a = 999999999999999.99
When I type int(a) in the interpreter, it returns 1000000000000000.
How do I get the output as 999999999999999 for long numbers like these?
999999999999999.99 is a number that can't be precisely represented in the floating-point format, so Python compromises and picks the closest value that can be represented. In this case, that happens to be 1000000000000000. That's why converting that to an integer gives you 1000000000000000.
If you need more precision than floats can provide, consider using decimal.Decimal.
>>> import decimal
>>> a = decimal.Decimal("999999999999999.99")
>>> a
Decimal('999999999999999.99')
>>> int(a)
999999999999999
The problem is not int, the problem is the floating-point value itself. Your value would need 17 digits of precision to be represented correctly, while double-precision floating-point values have between 15 and 16 digits of precision. So when you input it, it is rounded to the nearest representable float value, which is 1000000000000000.0. By the time int is called there is nothing it can do: the precision is already lost.
If you need to represent this kind of value exactly, you can use the decimal data type, keeping in mind that performance suffers compared to regular floats.
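You can confirm that the rounding happens at parse time, before int is ever involved (a quick check with decimal):
>>> import decimal
>>> decimal.Decimal(999999999999999.99)    # the float is already rounded
Decimal('1000000000000000')
>>> decimal.Decimal('999999999999999.99')  # a string keeps every digit
Decimal('999999999999999.99')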

How to find an original text representation for lower precision float values in Python?

I've run into an issue displaying float values in Python, loaded from an external data source (they're 32-bit floats, but this would apply to lower-precision floats too).
(In case it's important: these values were typed in by humans in C/C++, so unlike arbitrary calculated values, deviations from round numbers are likely not intended, though they can't be ignored, since the values may be constants such as M_PI or multiplied by constants.)
Since CPython uses higher precision (typically 64-bit), a value entered as a lower-precision float may repr() with the precision loss of a 32-bit float, where the 64-bit float would show a round value.
eg:
# Examples of 32-bit floats displayed as 64-bit floats in CPython.
0.0005 -> 0.0005000000237487257
0.025 -> 0.02500000037252903
0.04 -> 0.03999999910593033
0.05 -> 0.05000000074505806
0.3 -> 0.30000001192092896
0.98 -> 0.9800000190734863
1.2 -> 1.2000000476837158
4096.3 -> 4096.2998046875
Simply rounding the values to some arbitrary precision works in most cases, but may be incorrect, since it could lose significant digits of values such as 0.00000001.
An example of this can be shown by printing a float converted to a 32bit float.
def as_float_32(f):
    from struct import pack, unpack
    return unpack("f", pack("f", f))[0]
print(0.025) # --> 0.025
print(as_float_32(0.025)) # --> 0.02500000037252903
So my question is:
What's the most efficient and straightforward way to get the original representation of a 32-bit float, without making assumptions or losing precision?
Put differently: if I have a data source containing 32-bit floats that were originally entered by a human as round values (examples above), having them represented as higher-precision values exposes that the 32-bit float is an approximation of the original value.
I would like to reverse this process and get the round number back from the 32-bit float data, but without losing the precision a 32-bit float gives us (which is why simply rounding isn't a good option).
Examples of why you might want to do this:
Generating API documentation where Python extracts values from a C-API that uses single precision floats internally.
When people need to read/review values of data generated which happens to be provided as single precision floats.
In both cases it's important not to lose significant precision, or to show values which can't be easily read by humans at a glance.
Update: I've made a solution which I'll include as an answer (for reference and to show it's possible), but I highly doubt it's an efficient or elegant solution.
Of course you can't know which notation was used (0.1f, 0.1F or 1e-1f); that's not the purpose of this question.
You're looking to solve essentially the same problem that Python's repr solves, namely, finding the shortest decimal string that rounds to a given float. Except that in your case, the float isn't an IEEE 754 binary64 ("double precision") float, but an IEEE 754 binary32 ("single precision") float.
Just for the record, I should of course point out that retrieving the original string representation is impossible, since for example the strings '0.10', '0.1', '1e-1' and '10e-2' all get converted to the same float (or in this case float32). But under suitable conditions we can still hope to produce a string that has the same decimal value as the original string, and that's what I'll do below.
The approach you outline in your answer more-or-less works, but it can be streamlined a bit.
First, some bounds: when it comes to decimal representations of single-precision floats, there are two magic numbers: 6 and 9. The significance of 6 is that any (not-too-large, not-too-small) decimal numeric string with 6 or fewer significant decimal digits will round-trip correctly through a single-precision IEEE 754 float: that is, converting that string to the nearest float32, and then converting that value back to the nearest 6-digit decimal string, will produce a string with the same value as the original. For example:
>>> import numpy as np
>>> x = "634278e13"
>>> y = float(np.float32(x))
>>> y
6.342780214942106e+18
>>> "{:.6g}".format(y)
'6.34278e+18'
(Here, by "not-too-large, not-too-small" I just mean that the underflow and overflow ranges of float32 should be avoided. The property above applies for all normal values.)
This means that for your problem, if the original string had 6 or fewer digits, we can recover it by simply formatting the value to 6 significant digits. So if you only care about recovering strings that had 6 or fewer significant decimal digits in the first place, you can stop reading here: a simple '{:.6g}'.format(x) is enough. If you want to solve the problem more generally, read on.
For roundtripping in the other direction, we have the opposite property: given any single-precision float x, converting that float to a 9-digit decimal string (rounding to nearest, as always), and then converting that string back to a single-precision float, will always exactly recover the value of that float.
>>> x = np.float32(3.14159265358979)
>>> x
3.1415927
>>> np.float32('{:.9g}'.format(x)) == x
True
The relevance to your problem is there's always at least one 9-digit string that rounds to x, so we never have to look beyond 9 digits.
Now we can follow the same approach that you used in your answer: first try for a 6-digit string, then a 7-digit, then an 8-digit. If none of those work, the 9-digit string surely will, by the above. Here's some code.
def original_string(x):
    for places in range(6, 10):  # try 6, 7, 8, 9
        s = '{:.{}g}'.format(x, places)
        y = np.float32(s)
        if x == y:
            return s
    # If x was genuinely a float32, we should never get here.
    raise RuntimeError("We should never get here")
Example outputs:
>>> original_string(0.02500000037252903)
'0.025'
>>> original_string(0.03999999910593033)
'0.04'
>>> original_string(0.05000000074505806)
'0.05'
>>> original_string(0.30000001192092896)
'0.3'
>>> original_string(0.9800000190734863)
'0.98'
However, the above comes with several caveats.
First, for the key properties we're using to be true, we have to assume that np.float32 always does correct rounding. That may or may not be the case, depending on the operating system. (Even in cases where the relevant operating system calls claim to be correctly rounded, there may still be corner cases where that claim fails to be true.) In practice, it's likely that np.float32 is close enough to correctly rounded not to cause issues, but for complete confidence you'd want to know that it was correctly rounded.
Second, the above won't work for values in the subnormal range (so for float32, anything smaller than 2**-126). In the subnormal range, it's no longer true that a 6-digit decimal numeric string will roundtrip correctly through a single-precision float. If you care about subnormals, you'd need to do something more sophisticated there.
Third, there's a really subtle (and interesting!) error in the above that almost doesn't matter at all. The string formatting we're using always produces the places-digit decimal string closest to the true value of x. However, what we actually want to know is whether there's any places-digit decimal string that rounds back to x. We're implicitly assuming the (seemingly obvious) fact that if there's any places-digit decimal string that rounds to x, then the closest places-digit decimal string rounds to x. And that's almost true: it follows from the property that the interval of all real numbers that round to x is symmetric around x. But that symmetry property fails in one particular case, namely when x is a power of 2.
So when x is an exact power of 2, it's possible (but fairly unlikely) that (for example) the closest 8-digit decimal string to x doesn't round to x, but nevertheless there is an 8-digit decimal string that does round to x. You can do an exhaustive search for cases where this happens within the range of a float32, and it turns out that there are exactly three values of x for which this occurs, namely x = 2**-96, x = 2**87 and x = 2**90. For 7 digits, there are no such values. (And for 6 and 9 digits, this can never happen.) Let's take a closer look at the case x = 2**87:
>>> x = 2.0**87
>>> x
1.5474250491067253e+26
Let's take the closest 8-digit decimal value to x:
>>> s = '{:.8g}'.format(x)
>>> s
'1.547425e+26'
It turns out that this value doesn't round back to x:
>>> np.float32(s) == x
False
But the next 8-digit decimal string up from it does:
>>> np.float32('1.5474251e+26') == x
True
Similarly, here's the case x = 2**-96:
>>> x = 2**-96.
>>> x
1.262177448353619e-29
>>> s = '{:.8g}'.format(x)
>>> s
'1.2621774e-29'
>>> np.float32(s) == x
False
>>> np.float32('1.2621775e-29') == x
True
So, ignoring subnormals and overflows, out of all 2 billion or so positive normal single-precision values, there are precisely three values x for which the above code doesn't work. (Note: I originally thought there was just one; thanks to @RickRegan for pointing out the error in the comments.) So here's our (slightly tongue-in-cheek) fixed code:
def original_string(x):
    """
    Given a single-precision positive normal value x,
    return the shortest decimal numeric string which produces x.
    """
    # Deal with the three awkward cases.
    if x == 2**-96.:
        return '1.2621775e-29'
    elif x == 2**87:
        return '1.5474251e+26'
    elif x == 2**90:
        return '1.2379401e+27'
    for places in range(6, 10):  # try 6, 7, 8, 9
        s = '{:.{}g}'.format(x, places)
        y = np.float32(s)
        if x == y:
            return s
    # If x was genuinely a float32, we should never get here.
    raise RuntimeError("We should never get here")
I think Decimal.quantize() (to round to a given number of decimal digits) and .normalize() (to strip trailing zeros) are what you need.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

from decimal import Decimal

data = (
    0.02500000037252903,
    0.03999999910593033,
    0.05000000074505806,
    0.30000001192092896,
    0.9800000190734863,
)

for f in data:
    dec = Decimal(f).quantize(Decimal('1.0000000')).normalize()
    print("Original %s -> %s" % (f, dec))
Result:
Original 0.0250000003725 -> 0.025
Original 0.0399999991059 -> 0.04
Original 0.0500000007451 -> 0.05
Original 0.300000011921 -> 0.3
Original 0.980000019073 -> 0.98
Here's a solution I've come up with which works (perfectly, as far as I can tell) but isn't efficient.
It works by rounding at an increasing number of decimal places, and returning the string once the rounded and non-rounded inputs match (when compared as values converted to lower precision).
Code:
def round_float_32(f):
    from struct import pack, unpack
    return unpack("f", pack("f", f))[0]


def as_float_low_precision_repr(f, round_fn):
    f_round = round_fn(f)
    f_str = repr(f)
    f_str_frac = f_str.partition(".")[2]
    if not f_str_frac:
        return f_str
    for i in range(1, len(f_str_frac)):
        f_test = round(f, i)
        f_test_round = round_fn(f_test)
        if f_test_round == f_round:
            return "%.*f" % (i, f_test)
    return f_str

# ----

data = (
    0.02500000037252903,
    0.03999999910593033,
    0.05000000074505806,
    0.30000001192092896,
    0.9800000190734863,
    1.2000000476837158,
    4096.2998046875,
)

for f in data:
    f_as_float_32 = as_float_low_precision_repr(f, round_float_32)
    print("%s -> %s" % (f, f_as_float_32))
Outputs:
0.02500000037252903 -> 0.025
0.03999999910593033 -> 0.04
0.05000000074505806 -> 0.05
0.30000001192092896 -> 0.3
0.9800000190734863 -> 0.98
1.2000000476837158 -> 1.2
4096.2998046875 -> 4096.3
If you have at least NumPy 1.14.0, you can just use repr(numpy.float32(your_value)). Quoting the release notes:
Float printing now uses “dragon4” algorithm for shortest decimal representation
The str and repr of floating-point values (16, 32, 64 and 128 bit) are now printed to give the shortest decimal representation which uniquely identifies the value from others of the same type. Previously this was only true for float64 values. The remaining float types will now often be shorter than in numpy 1.13.
Here's a demo running against a few of your example values:
>>> repr(numpy.float32(0.0005000000237487257))
'0.0005'
>>> repr(numpy.float32(0.02500000037252903))
'0.025'
>>> repr(numpy.float32(0.03999999910593033))
'0.04'
Probably what you are looking for is decimal:
Decimal “is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.”
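For instance, constructing Decimals from strings (so no binary rounding sneaks in) gives the schoolbook result:
>>> from decimal import Decimal
>>> Decimal('1.5') - Decimal('1.4')
Decimal('0.1')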
At least in Python 3 you can use float.as_integer_ratio. The result isn't exactly a string, but the floating-point format as such is not really well suited to an exact representation in "finite" strings.
>>> a = 0.1
>>> a.as_integer_ratio()
(3602879701896397, 36028797018963968)
So by saving these two numbers you'll never lose precision, because together they exactly represent the saved floating-point number. (Just divide the first by the second to get the value.)
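For example, fractions.Fraction can carry the exact ratio and reproduce the float losslessly:
>>> from fractions import Fraction
>>> a = 0.1
>>> frac = Fraction(*a.as_integer_ratio())
>>> float(frac) == a
True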
As an example using numpy dtypes (very similar to C types):
# A value in python floating point precision
a = 0.1
# The value as ratio of integers
b = a.as_integer_ratio()
import numpy as np
# Force the result to have some precision:
res = np.array([0], dtype=np.float16)
np.true_divide(b[0], b[1], res)
print(res)
# Compare that to the wanted result when inputting 0.1
np.true_divide(1, 10, res)
print(res)
# Other precisions:
res = np.array([0], dtype=np.float32)
np.true_divide(b[0], b[1], res)
print(res)
res = np.array([0], dtype=np.float64)
np.true_divide(b[0], b[1], res)
print(res)
The result of all these calculations is:
[ 0.09997559] # Float16 with integer-ratio
[ 0.09997559] # Float16 reference
[ 0.1] # Float32
[ 0.1] # Float64

What is difference between {:.4e} and {:2.4} in Python scientific notation

I can't quite understand the difference between the two print statements below for the number I am trying to express in scientific notation. I thought the bottom one was supposed to allow 2 spaces for the printed result and move the decimal place 4 times, but the result I get does not corroborate that understanding. As for the first one, what does .4e mean?
>>> print('{:.4e}'.format(3454356.7))
3.4544e+06
>>> print('{:2.4}'.format(3454356.7))
3.454e+06
All help greatly appreciated.
In the first example, .4e means 4 decimal places in scientific notation. You can see that by doing:
>>> print('{:.4e}'.format(3454356.7))
3.4544e+06
>>> print('{:.5e}'.format(3454356.7))
3.45436e+06
>>> print('{:.6e}'.format(3454356.7))
3.454357e+06
In the second example, .4 means 4 significant figures, and 2 is the minimum field width: the whole output must occupy at least 2 characters.
>>> print('{:2.4}'.format(3454356.7))
3.454e+06
>>> print('{:2.5}'.format(3454356.7))
3.4544e+06
>>> print('{:2.6}'.format(3454356.7))
3.45436e+06
Testing with a different value in place of the 2:
>>> print('-{:20.6}'.format(3454356.7))
-         3.45436e+06
You can learn more from the Python documentation on format.
If you want to produce a float, you will have to specify the float type:
>>> '{:2.4f}'.format(3454356.7)
'3454356.7000'
Otherwise, if you don't specify a type, Python will choose g as the type, for which precision means the number of significant figures: the digits before and after the decimal point. And since you have a precision of 4, it will only display 4 significant digits, falling back to scientific notation so it doesn't add false precision.
The precision is a decimal number indicating how many digits should be displayed after the decimal point for a floating point value formatted with 'f' and 'F', or before and after the decimal point for a floating point value formatted with 'g' or 'G'. For non-number types the field indicates the maximum field size - in other words, how many characters will be used from the field content. The precision is not allowed for integer values.
(source, emphasis mine)
Finally, note that the width (the 2 in the above format string) is the full width, including digits before the decimal point, digits after it, the decimal point itself, and the components of the scientific notation. The result above would have a width of 12, so in this case the width in the format string is simply ignored.
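You can see the width take effect once it exceeds the content (a quick check):
>>> '{:20.4e}'.format(3454356.7)   # width 20 pads with leading spaces
'          3.4544e+06'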
