The Python command line readily shows the true value of a float:
>>> 1.5-1.4
0.10000000000000009
The obvious way to see it from within a Python program is to print it:
>>> print 1.5-1.4
0.1
which seems to round it automatically. Is there a way to see the true value of a float from within a program?
Given that an IEEE 754 double can require up to 767 significant digits to print its true value in base 10 (not counting leading zeros), while holding only 53 bits of significand, the true value in base 10 may not be what you want.
repr is good enough: it is the shortest base-10 numeral that rounds back to the same float. Thus any two different floats have different reprs, and repr identifies your float uniquely.
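A quick check of that round-trip property with the value from the question (assuming CPython 3, where str and repr of a float agree):
>>> x = 1.5 - 1.4
>>> repr(x)
'0.10000000000000009'
>>> float(repr(x)) == x   # the shortest repr parses back to the same float
True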
If what you want is a good view of the internal representation, you can print in base 16 with float.hex: you'll get a leading 1 (or 0 for subnormals) followed by 13 hexadecimal "digits" encoding 4 bits each, plus a base-2 exponent (written in base 10).
Here is an example:
import decimal

f = 1 << 1022               # 2**1022
u = 1 << (1022 + 53 - 1)    # 2**1074
y = 2/f - 1/u               # 2**-1021 minus one ulp, i.e. 0x1.fffffffffffffp-1022
print(repr(y))
print(decimal.Decimal(y))
print(len(str(decimal.Decimal(y))))
print(float.hex(y))
Output is
4.4501477170144023e-308
4.4501477170144022721148195934182639518696390927032912960468522194496444440421538910330590478162701758282983178260792422
137401728773891892910553144148156412434867599762821265346585071045737627442980259622449029037796981144446145705102663115
100318287949527959668236039986479250965780342141637013812613333119898765515451440315261253813266652951306000184917766328
660755595837392240989947807556594098101021612198814605258742579179000071675999344145086087205681577915435923018910334964
869420614052182892431445797605163650903606514140377217442262561590244668525767372446430075513332450079650686719491377688
478005309963967709758965844137894433796621993967316936280457084866613206797017728916080020698679408551343728867675409720
757232455434770912461317493580281734466552734375E-308
773
0x1.fffffffffffffp-1022
You can hardly decipher the second form with its 773 characters (767 significand digits + 1 character for the dot + 5 characters for the exponent).
Note: in Python 2.7, where / on integers is integer division, set y with this line instead:
y=float(2/decimal.Decimal(f)-1/decimal.Decimal(u))
In some Python implementations, you can use print("%.9999g" % (1.5-1.4)). This should print the number with 9999 significant digits but with trailing zeroes suppressed—in effect all the significant digits of the number.
Python implementations may rely on underlying hardware and software for floating-point services, possibly including the formatting provided by %.9999g. Some implementations might not provide all digits needed to see the exact value. They may show the value rounded to about 16 digits, for example, in spite of the fact 9999 were requested.
In Python 2.7.10 on macOS 10.14.2, the above prints “0.100000000000000088817841970012523233890533447265625”, which is the exact value.
(In comparison, print("%.9999g" % .1) prints “0.1000000000000000055511151231257827021181583404541015625”.)
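If your platform's %g stops short of the exact digits, decimal.Decimal shows the exact stored value without relying on the C library's formatting; it reproduces the two values quoted above:
>>> import decimal
>>> decimal.Decimal(1.5 - 1.4)
Decimal('0.100000000000000088817841970012523233890533447265625')
>>> decimal.Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')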
For Python, do read this link: https://docs.python.org/3/tutorial/floatingpoint.html, "Floating Point Arithmetic: Issues and Limitations".
I do understand that there is a mismatch (a tiny difference) between a float as represented in binary and the exact decimal I typed, e.g.
exact decimal I typed: 1.005
Python's binary-represented float: 1.00499999999999989341858963598497211933135986328125
Here is what I typed in Python:
>>> 1.005
1.005
>>> from decimal import Decimal
>>> Decimal(1.005)
Decimal('1.00499999999999989341858963598497211933135986328125')
Here is my question:
Why does Python show 1.005 when I type in 1.005? Why is it not 1.00499999999999989341858963598497211933135986328125?
If you tell me that Python rounds the result to some number of digits after the decimal point, then what is the rounding rule in my situation? It looks like there is a default rounding rule when Python starts; if this default rule exists, how do I change it?
Thanks
When asked to convert the float value 1.0049999999999999 to string, Python displays it with rounding:
>>> x = 1.0049999999999999; print(x)
1.005
According to the post that juanpa linked, Python uses the David Gay algorithm to decide how many digits to show when printing a float. Usually around 16 digits are shown, which makes sense, since 64-bit floats carry 15 to 17 significant decimal digits.
If you want to print a float with some other number of digits shown, use an f-string or %-style formatting with a precision specifier (see e.g. Input and Output - The Python Tutorial). For instance, to print x with 20 significant digits:
>>> print(f'{x:.20}')
1.0049999999999998934
>>> print('%.20g' % x)
1.0049999999999998934
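If you want every digit rather than a fixed count, decimal.Decimal shows the exact stored value of x, which is the same number your Decimal(1.005) produced, since 1.0049999999999999 and 1.005 round to the same float:
>>> from decimal import Decimal
>>> Decimal(x)
Decimal('1.00499999999999989341858963598497211933135986328125')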
Why do some float multiplications in Python produce these weird residues?
e.g.
>>> 50*1.1
55.00000000000001
but
>>> 30*1.1
33.0
The reason must lie somewhere in the binary representation of floats, but where exactly is the difference between the two examples?
(This answer assumes your Python implementation uses IEEE-754 binary64, which is common.)
When 1.1 is converted to floating-point, the result is exactly 1.100000000000000088817841970012523233890533447265625, because this is the nearest representable value. (This number is 4953959590107546 • 2−52 — an integer with at most 53 bits multiplied by a power of two.)
When that is multiplied by 50, the exact mathematical result is 55.00000000000000444089209850062616169452667236328125. That cannot be exactly represented in binary64. To fit it into the binary64 format, it is rounded to the nearest representable value, which is 55.00000000000000710542735760100185871124267578125 (which is 7740561859543041 • 2−47).
When it is multiplied by 30, the exact result is 33.00000000000000266453525910037569701671600341796875. It also cannot be represented exactly in binary64. It is rounded to the nearest representable value, which is 33. (The next higher representable value is 33.00000000000000710542735760100185871124267578125, and we can see …026 is closer to …000 than to …071.)
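These exact internal values can be checked from Python itself, since decimal.Decimal converts a float exactly (a quick check under the same binary64 assumption):
>>> from decimal import Decimal
>>> Decimal(1.1)
Decimal('1.100000000000000088817841970012523233890533447265625')
>>> Decimal(50 * 1.1)
Decimal('55.00000000000000710542735760100185871124267578125')
>>> Decimal(30 * 1.1)
Decimal('33')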
That explains what the internal results are. Next there is the issue of how your Python implementation formats the output. I do not believe the Python specification is strict about this, but it is likely one of two methods is used:
In effect, the number is converted to a certain number of significant decimal digits, and then trailing insignificant zeros are removed. Converting 55.00000000000000710542735760100185871124267578125 to a numeral with 16 significant digits yields 55.00000000000001, which has no trailing zeros to remove. Converting 33 to a numeral with 16 significant digits yields 33.00000000000000, which has 14 trailing zeros to remove. (Presumably your Python implementation always leaves at least one digit after the decimal point, to clearly distinguish a floating-point number from an integer.)
Just enough decimal digits are used to uniquely distinguish the number from adjacent representable values. This method is required in Java and JavaScript but is not yet common in other programming languages. In the case of 55.00000000000000710542735760100185871124267578125, printing “55.00000000000001” distinguishes it from the neighboring values 55 (which would be formatted as “55.0”) and 55.0000000000000142108547152020037174224853515625 (which would be “55.000000000000014”).
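A quick way to see both methods side by side (a sketch, assuming CPython 3 on an IEEE-754 binary64 platform; which method your implementation actually uses is an implementation detail):
>>> x = 50 * 1.1
>>> '%.16g' % x           # method 1: 16 significant digits, %g drops trailing zeros
'55.00000000000001'
>>> '%.16g' % (30 * 1.1)  # %g also drops the now-bare decimal point
'33'
>>> repr(x)               # method 2: shortest string that round-trips
'55.00000000000001'
>>> repr(30 * 1.1)
'33.0'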
If I run:
>>> import math
>>> print(math.pi)
3.141592653589793
Then pi is printed with 16 digits.
However, according to:
>>> import sys
>>> sys.float_info.dig
15
My precision is 15 digits.
So, should I rely on the last digit of that value (i.e., is the value of π indeed 3.141592653589793nnnnnn...)?
TL;DR
The last digit of str(float) or repr(float) can be "wrong", in that the decimal representation shown does not appear to be correctly rounded.
>>> 0.100000000000000040123456
0.10000000000000003
But that value is still closer to the original than 0.1000000000000000 (with 1 digit less) is.
In the case of math.pi, the decimal approximation of π is 3.141592653589793238463..., and in this case the last digit is right.
sys.float_info.dig tells how many decimal digits are guaranteed to always be preserved.
The default output of repr(float) in Python 2.7 and 3.1+, and of str(float) in Python 3.2+, is the shortest string that, when converted back to float, returns the original value; in case of ambiguity, the last digit is rounded to the closest value. A float provides ~15.9 decimal digits of precision, but up to 17 decimal digits are required to represent 53 binary digits unambiguously.
For example 0.10000000000000004 is between 0x1.999999999999dp-4 and 0x1.999999999999cp-4, but the latter is closer; these 2 have the decimal expansions
0.10000000000000004718447854656915296800434589385986328125
and
0.100000000000000033306690738754696212708950042724609375
respectively. Clearly the latter is closer, so that binary representation is chosen.
Now when these are converted back to a string with str() or repr(), the shortest string that yields exactly the same value is chosen; for these 2 values they are 0.10000000000000005 and 0.10000000000000003 respectively.
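You can verify this from the interpreter; the outputs below assume an IEEE-754 binary64 float, which CPython uses on common platforms:
>>> (0.10000000000000004).hex()
'0x1.999999999999cp-4'
>>> 0.10000000000000004                          # repr picks the shortest round-tripping string
0.10000000000000003
>>> float('0.10000000000000003') == float('0.10000000000000004')
True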
The precision of a double in IEEE-754 is 53 binary digits; in decimal you can estimate the precision by taking the base-10 logarithm of 2^53:
>>> math.log(2 ** 53, 10)
15.954589770191001
meaning almost 16 digits of precision. sys.float_info.dig tells how many decimal digits you can always count on being preserved, and that number is 15, for there are some numbers with 16 decimal digits that are indistinguishable.
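For example, 2**53 and 2**53 + 1 are both 16-digit decimal integers, yet they convert to the same float (a quick check, assuming binary64):
>>> float(9007199254740992) == float(9007199254740993)   # 2**53 vs 2**53 + 1
True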
However this is not the whole story. Internally what happens in Python 3.2+ is that the float.__str__ and float.__repr__ end up calling the same C method float_repr:
static PyObject *
float_repr(PyFloatObject *v)
{
    PyObject *result;
    char *buf;

    buf = PyOS_double_to_string(PyFloat_AS_DOUBLE(v),
                                'r', 0,
                                Py_DTSF_ADD_DOT_0,
                                NULL);
    if (!buf)
        return PyErr_NoMemory();
    result = _PyUnicode_FromASCII(buf, strlen(buf));
    PyMem_Free(buf);
    return result;
}
PyOS_double_to_string then, for the 'r' mode (standing for repr), calls either _Py_dg_dtoa with mode 0, an internal routine that converts the double to a string, or snprintf with %.17g on those platforms where _Py_dg_dtoa would not work.
The behaviour of snprintf is entirely platform dependent, but if _Py_dg_dtoa is used (as far as I understand, it should be used on most machines), the result should be predictable.
The _Py_dg_dtoa mode 0 is specified as follows:
0 ==> shortest string that yields d when read in and rounded to nearest.
So, that is what happens: the yielded string must reproduce the double value exactly when read back in, it must be the shortest such representation, and among the multiple decimal representations that would read back in as the same double, it is the one closest to the binary value. Note that this also means that the last digit of the decimal expansion need not match the original value rounded at that length; it only means the decimal representation is as close to the original binary representation as possible. Thus YMMV.
How to check if a float value is within a range (0.50,150.00) and has 2 decimal digits?
For example, 15.22366 should be false (too many decimal digits). But 15.22 should be true.
I tried something like:
data = input()
if data in range(0.50, 150.00):
    return True
Is this what you are looking for?
def check(value):
    if 0.50 <= value <= 150 and round(value, 2) == value:
        return True
    return False
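With the values from your question this gives:
>>> check(15.22)
True
>>> check(15.22366)
False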
Given your comment:
if I input 15.22366 it is going to return true; that is why I specified the range; it should accept 15.22
Simply said, floating-point values are imprecise. Many values don't have a precise representation. Take for example 1.40. It might be displayed "as is":
>>> f = 1.40
>>> print f
1.4
But this is an illusion. Python has rounded that value in order to nicely display it. The real value as referenced by the variable f is quite different:
>>> from decimal import Decimal
>>> Decimal(f)
Decimal('1.399999999999999911182158029987476766109466552734375')
According to your rule of having only 2 decimals, should f reference a valid value or not?
The easiest way to fix that issue is probably to use round(..., 2) as I suggested in the code above. But this is only a heuristic, only able to reject "largely wrong" values. See my point here:
>>> for v in [1.40,
...           1.405,
...           1.399999999999999911182158029987476766109466552734375,
...           1.39999999999999991118,
...           1.3999999999999991118]:
...     print check(v), v
...
True 1.4
False 1.405
True 1.4
True 1.4
False 1.4
Notice how the last few results might seem surprising at first. I hope my explanations above shed some light on this.
As a final piece of advice, for your needs as I guess them from your question, you should definitely consider using "decimal arithmetic". Python provides the decimal module for that purpose.
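As a sketch only (check_decimal is a name made up here, not part of your code): if you parse the user's text with Decimal instead of float, the number of decimal places actually typed is preserved and can be tested directly.
from decimal import Decimal, InvalidOperation

def check_decimal(text):
    """Accept a string such as '15.22' if it is within [0.50, 150.00]
    and was written with at most 2 decimal places."""
    try:
        d = Decimal(text)                       # parse the text, not a float
    except InvalidOperation:
        return False
    if not d.is_finite():                       # reject 'NaN', 'Infinity', ...
        return False
    return (Decimal('0.50') <= d <= Decimal('150.00')
            and -d.as_tuple().exponent <= 2)    # exponent -2 means 2 decimal places
With this, check_decimal('15.22') is True while check_decimal('15.22366') is False, because the extra digits remain visible in the Decimal's exponent.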
float is the wrong data type to use for your case; use Decimal instead.
Check the Python docs for issues and limitations. To quote from there (I've generalised the text in italics):
Floating-point numbers are represented in computer hardware as base 2 (binary) fractions.
no matter how many base 2 digits you’re willing to use, some decimal value (like 0.1) cannot be represented exactly as a base 2 fraction.
Stop at any finite number of bits, and you get an approximation
On a typical machine running Python, there are 53 bits of precision available for a Python float, so the value stored internally when you enter a decimal number is the binary fraction which is close to, but not exactly equal to it.
The documentation for the built-in round() function says that it rounds to the nearest value, rounding ties away from zero.
And finally, it recommends
If you’re in a situation where you care which way your decimal halfway-cases are rounded, you should consider using the decimal module.
And this will hold for your case as well, as you are looking for a precision of 2 digits after decimal points, which float just can't guarantee.
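A minimal illustration of that recommendation; the Decimal operands below are built from strings, so they carry exactly the digits written:
>>> 0.1 + 0.2
0.30000000000000004
>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2')
Decimal('0.3')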
EDIT Note: the answer below corresponds to the original question, which was about random float generation.
Seeing that you need 2 digits of guaranteed precision, I would suggest generating random integers in the range [50, 15000] and dividing them by 100 to convert them to float yourself.
import random
random.randint(50, 15000)/100.0
Why don't you just use round?
round(random.uniform(0.5, 150.0), 2)
Probably what you want to do is not to change the value itself. As said by Cyber in the comment, even if you round a floating-point number, it will always be stored with the same precision. If you only need to change the way it is printed:
n = random.uniform(0.5, 150)
print '%.2f' % n # 58.03
The easiest way is to first convert the number to a string, split it on '.', and check the length of the fractional part. If it is > 2 then reject it; otherwise check that the number is in the given range.
a = 15.22366
if len(str(a).split('.')[1]) > 2:
    pass                      # too many decimal digits: reject
elif 0.50 <= a <= 150:
    pass                      # do your stuff
Inspired by this question, I was trying to find out what exactly happens there (my answer was more intuitive, but I cannot exactly understand the why of it).
I believe it comes down to this (running 64 bit Python):
>>> sys.maxint
9223372036854775807
>>> float(sys.maxint)
9.2233720368547758e+18
Python uses the IEEE 754 floating-point representation, which effectively has 53 bits for the significand. However, as far as I understand it, the significand in the above example would require 57 bits (56 if you drop the implied leading 1) to be represented. Can someone explain this discrepancy?
Perhaps the following will help clear things up:
>>> hex(int(float(sys.maxint)))
'0x8000000000000000L'
This shows that float(sys.maxint) is in fact a power of 2. Therefore, in binary its mantissa is exactly 1. In IEEE 754 the leading 1. is implied, so in the machine representation this number's mantissa consists of all zero bits.
In fact, the IEEE bit pattern representing this number is as follows:
0x43E0000000000000
Observe that only the first three nibbles (the sign and the exponent) are non-zero. The significand consists entirely of zeroes. As such it doesn't require 56 (nor indeed 53) bits to be represented.
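If you want to reproduce that bit pattern from Python, the struct module can expose the double's bytes (a quick check; sys.maxint is gone in Python 3, so 2**63 - 1 stands in for it here):
>>> import struct
>>> struct.pack('>d', float(2**63 - 1)).hex()   # big-endian IEEE-754 bits
'43e0000000000000'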
You're wrong. It requires 1 bit.
>>> (9.2233720368547758e+18).hex()
'0x1.0000000000000p+63'
When you convert sys.maxint to a float or double, the result is exactly 0x1p63, because the significand contains only 24 or 53 bits (including the implicit bit), so the trailing bits cause a round up. (sys.maxint is 2^63 - 1, and rounding it up produces 2^63.)
Then, when you print this float, some subroutine formats it as a decimal numeral. To do this, it calculates digits to represent 2^63. The fact that it is able to print 9.2233720368547758e+18 does not imply that the original number contains bits that would distinguish it from 9.2233720368547759e+18. It simply means that the bits in it do represent 9.2233720368547758e+18 (approximately). In fact, the next representable floating-point number in double precision is 9223372036854777856 (approximately 9.2233720368547778e+18), which is 2^63 + 2048. So the low 11 bits of these integers are not present in the double. The formatter merely displays the number as if those bits are zero.
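Both points can be checked directly; the sketch below uses 2**63 - 1 in place of sys.maxint (Python 3) and math.nextafter, which needs Python 3.9+:
>>> import math
>>> x = float(2**63 - 1)                  # rounds up to exactly 2**63
>>> math.nextafter(x, math.inf)           # next representable double
9.223372036854778e+18
>>> int(math.nextafter(x, math.inf)) - 2**63
2048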