This question is more out of curiosity.
I'm creating the following array:
A = zeros((2,2))
for i in range(2):
    A[i,i] = 0.6
    A[(i+1)%2,i] = 0.4
print A
>>>
[[ 0.6  0.4]
 [ 0.4  0.6]]
Then, printing it:
for i,c in enumerate(A):
    for j,d in enumerate(c):
        print j, d
I get:
>>>
0 0.6
1 0.4
0 0.4
1 0.6
But if I remove the j from the for (so each enumerate item is printed as a whole tuple), I get:
(0, 0.59999999999999998)
(1, 0.40000000000000002)
(0, 0.40000000000000002)
(1, 0.59999999999999998)
Is it because of the way I'm creating the matrix, using 0.6? How are real values represented internally?
There are a few different things going on here.
First, Python has two mechanisms for turning an object into a string, called repr and str. repr is supposed to give 'faithful' output that would (ideally) make it easy to recreate exactly that object, while str aims for more human-readable output.

For floats in Python versions up to and including Python 3.1, repr gives enough digits to determine the value of the float completely (so that evaluating the returned string gives back exactly that float), while str rounds to 12 significant digits; this has the effect of hiding inaccuracies, but means that two distinct floats that are very close together can end up with the same str value - something that can't happen with repr.

When you print an object, you get the str of that object. In contrast, when you just evaluate an expression at the interpreter prompt, you get the repr.
For example (here using Python 2.7):
>>> x = 1.0 / 7.0
>>> str(x)
'0.142857142857'
>>> repr(x)
'0.14285714285714285'
>>> print x # print uses 'str'
0.142857142857
>>> x # the interpreter read-eval-print loop uses 'repr'
0.14285714285714285
But also, a little bit confusingly from your point of view, we get:
>>> x = 0.4
>>> str(x)
'0.4'
>>> repr(x)
'0.4'
That doesn't seem to tie in too well with what you were seeing above, but we'll come back to this below.
The second thing to bear in mind is that in your first example, you're printing two separate items, while in your second example (with the j removed), you're printing a single item: a tuple of length 2. Somewhat surprisingly, when converting a tuple for printing with str, Python nevertheless uses repr to compute the string representation of the elements of that tuple:
>>> x = 1.0 / 7.0
>>> print x, x # print x twice; uses str(x)
0.142857142857 0.142857142857
>>> print(x, x) # print a single tuple; uses repr(x)
(0.14285714285714285, 0.14285714285714285)
That explains why you're seeing different results in the two cases, even though the underlying floats are the same.
But there's one last piece to the puzzle. In Python >= 2.7, we saw above that for the particular float 0.4, the str and repr of that float were the same. So where does the 0.40000000000000002 come from? Well, you don't have Python floats here: because you're getting these values from a NumPy array, they're actually of type numpy.float64:
>>> from numpy import zeros
>>> A = zeros((2, 2))
>>> A[:] = [[0.6, 0.4], [0.4, 0.6]]
>>> A
array([[ 0.6,  0.4],
       [ 0.4,  0.6]])
>>> type(A[0, 0])
<type 'numpy.float64'>
That type still stores a double-precision float, just like Python's float, but it's got some extra goodies that make it interact nicely with the rest of NumPy. And it turns out that NumPy uses a slightly different algorithm for computing the repr of a numpy.float64 than Python uses for computing the repr of a float. Python (in versions >= 2.7) aims to give the shortest string that still gives an accurate representation of the float, while NumPy simply outputs a string based on rounding the underlying value to 17 significant digits. Going back to that 0.4 example above, here's what NumPy does:
>>> from numpy import float64
>>> x = float64(1.0 / 7.0)
>>> str(x)
'0.142857142857'
>>> repr(x)
'0.14285714285714285'
>>> x = float64(0.4)
>>> str(x)
'0.4'
>>> repr(x)
'0.40000000000000002'
So these three things together should explain the results you're seeing. Rest assured that this is all completely cosmetic: the underlying floating-point value is not being changed in any way; it's just being displayed differently by the four different possible combinations of str and repr for the two types: float and numpy.float64.
The Python tutorial gives more details of how Python floats are stored and displayed, together with some of the potential pitfalls. The answers to this SO question have more information on the difference between str and repr.
Edit:
Don't mind me, I failed to realise that the question was about NumPy.
The strange 0.59999999999999998 and friends are Python's best attempt to show exactly how all computers store floating-point values: as a bunch of bits, according to the IEEE 754 standard. Notably, 0.1 has a non-terminating binary expansion, and so cannot be stored exactly. (So, presumably, do 0.6 and 0.4.)
The reason you normally see 0.6 is that most floating-point printing functions round off these imprecisely-stored floats to make them more understandable to us humans. That's what your first printing example is doing.
Under some circumstances (that is, when the printing functions aren't trying for human-readable), the full, slightly-off number 0.59999999999999998 will be printed. That's what your second printing example is doing.
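If you want to see what actually got stored, here is a small sketch of my own (not part of the original answer) that simply asks Python for more digits than the default printing shows:
>>> format(0.6, '.20f')   # the stored double, shown to 20 decimal places
'0.59999999999999997780'
>>> format(0.4, '.20f')
'0.40000000000000002220'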
tl;dr
This is not Python's fault; it is just how floats are stored.
Related
I want a to be rounded to 13.95. I tried using round, but I get:
>>> a
13.949999999999999
>>> round(a, 2)
13.949999999999999
For the analogous issue with the standard library Decimal class, see How can I format a decimal to always show 2 decimal places?.
You are running into the old problem with floating point numbers that not all numbers can be represented exactly. The command line is just showing you the full floating point form from memory.
With floating point representation, your rounded version is the same number. Since computers are binary, they store floating point numbers as an integer and then divide it by a power of two so 13.95 will be represented in a similar fashion to 125650429603636838/(2**53).
Double-precision numbers have 53 bits (roughly 16 decimal digits) of precision and single-precision floats have 24 bits (roughly 7 decimal digits) of precision. The floating point type in Python uses double precision to store the values.
For example,
>>> 125650429603636838/(2**53)
13.949999999999999
>>> 234042163/(2**24)
13.949999988079071
>>> a = 13.946
>>> print(a)
13.946
>>> print("%.2f" % a)
13.95
>>> round(a,2)
13.949999999999999
>>> print("%.2f" % round(a, 2))
13.95
>>> print("{:.2f}".format(a))
13.95
>>> print("{:.2f}".format(round(a, 2)))
13.95
>>> print("{:.15f}".format(round(a, 2)))
13.949999999999999
If you are after only two decimal places (to display a currency value, for example), then you have a couple of better choices:
Use integers and store values in cents, not dollars, and then divide by 100 to convert to dollars (see the sketch after this list).
Or use a fixed point number like decimal.
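As a minimal sketch of the first option (my own illustration, not from the original answer): keep money as an integer number of cents and only convert when displaying.
price_cents = 1395                # $13.95 stored exactly as an integer
tax_cents = 112                   # $1.12
total_cents = price_cents + tax_cents
print("Total: $%d.%02d" % divmod(total_cents, 100))   # Total: $15.07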
There are new format specifications, String Format Specification Mini-Language:
You can do the same as:
"{:.2f}".format(13.949999999999999)
Note 1: the above returns a string. In order to get as float, simply wrap with float(...):
float("{:.2f}".format(13.949999999999999))
Note 2: wrapping with float() doesn't change anything:
>>> x = 13.949999999999999999
>>> x
13.95
>>> g = float("{:.2f}".format(x))
>>> g
13.95
>>> x == g
True
>>> h = round(x, 2)
>>> h
13.95
>>> x == h
True
The built-in round() works just fine in Python 2.7 or later.
Example:
>>> round(14.22222223, 2)
14.22
Check out the documentation.
Let me give an example in Python 3.6's f-string/template-string format, which I think is beautifully neat:
>>> f'{a:.2f}'
It works well with longer examples too, with operators and not needing parentheses:
>>> print(f'Completed in {time.time() - start:.2f}s')
I feel that the simplest approach is to use the format() function.
For example:
a = 13.949999999999999
format(a, '.2f')
'13.95'
This produces the number as a string rounded to two decimal places.
Most numbers cannot be exactly represented in floats. If you want to round the number because that's what your mathematical formula or algorithm requires, then you want to use round. If you just want to restrict the display to a certain precision, then don't even use round and just format it as that string. (If you want to display it with some alternate rounding method, and there are tons, then you need to mix the two approaches.)
>>> "%.2f" % 3.14159
'3.14'
>>> "%.2f" % 13.9499999
'13.95'
And lastly, though perhaps most importantly, if you want exact math then you don't want floats at all. The usual example is dealing with money and to store 'cents' as an integer.
Use
print"{:.2f}".format(a)
instead of
print"{0:.2f}".format(a)
Because the latter may lead to output errors when trying to output multiple variables (see comments).
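A small illustration of that point (my own example, in Python 2 syntax to match the answer): auto-numbered fields consume the arguments in order, while a copy-pasted explicit index silently repeats one of them.
>>> a, b = 13.949999999999999, 2.675
>>> print "{:.2f} {:.2f}".format(a, b)     # fields consumed in order
13.95 2.67
>>> print "{0:.2f} {0:.2f}".format(a, b)   # repeated index 0 prints 'a' twice
13.95 13.95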
Try the code below:
>>> a = 0.99334
>>> a = int((a * 100) + 0.5) / 100.0  # Adding 0.5 rounds to the nearest hundredth (for non-negative a)
>>> print a
0.99
TLDR ;)
The rounding problem of input and output has been solved definitively by Python 3.1 and the fix is backported also to Python 2.7.0.
Rounded numbers can be reversibly converted between float and string back and forth:
str -> float() -> repr() -> float() ... or Decimal -> float -> str -> Decimal
>>> 0.3
0.3
>>> float(repr(0.3)) == 0.3
True
A Decimal type is not necessary for storage anymore.
Results of arithmetic operations must be rounded again because rounding errors could accumulate more inaccuracy than that is possible after parsing one number. That is not fixed by the improved repr() algorithm (Python >= 3.1, >= 2.7.0):
>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1, 0.2, 0.3
(0.1, 0.2, 0.3)
The output string function str(float(...)) was rounded to 12 valid digits in Python < 2.7 and < 3.1, to prevent excessive invalid digits similar to the unfixed repr() output. That was still insufficient after subtraction of very similar numbers, and it was rounded too much after other operations. Python 2.7 and 3.1 use the same length of str() although the repr() is fixed. Some old versions of NumPy also had excessive invalid digits, even with fixed Python. The current NumPy is fixed. Python versions >= 3.2 have the same results from str() and repr(), and the output of similar functions in NumPy matches as well.
Test
import random
from decimal import Decimal

for _ in range(1000000):
    x = random.random()
    assert x == float(repr(x)) == float(Decimal(repr(x)))  # Reversible repr()
    assert str(x) == repr(x)
    assert len(repr(round(x, 12))) <= 14  # no excessive decimal places
Documentation
See the Release notes Python 2.7 - Other Language Changes the fourth paragraph:
Conversions between floating-point numbers and strings are now correctly rounded on most platforms. These conversions occur in many different places: str() on floats and complex numbers; the float and complex constructors; numeric formatting; serializing and de-serializing floats and complex numbers using the marshal, pickle and json modules; parsing of float and imaginary literals in Python code; and Decimal-to-float conversion.
Related to this, the repr() of a floating-point number x now returns a result based on the shortest decimal string that’s guaranteed to round back to x under correct rounding (with round-half-to-even rounding mode). Previously it gave a string based on rounding x to 17 decimal digits.
The related issue
More information: The formatting of float before Python 2.7 was similar to that of numpy.float64 before NumPy 1.14. Both types use the same 64-bit IEEE 754 double precision with a 52-bit mantissa. A big difference is that the old np.float64.__repr__ was frequently formatted with excess decimal digits so that no bit could be lost, but no valid IEEE 754 number exists between 13.949999999999999 and 13.950000000000001. The result is not nice, and the conversion repr(float(number_as_string)) is not reversible with that numpy formatting. On the other hand: float.__repr__ is formatted so that every digit is important; the sequence is without gaps and the conversion is reversible. Simply: if you have a numpy.float64 number, convert it to a normal float in order to be formatted for humans, not for numeric processors; otherwise nothing more is necessary with Python 2.7+.
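A tiny sketch of that last tip (mine, assuming a Python 2.7+ session with an older, pre-1.14 NumPy whose float64 repr still pads to 17 significant digits):
>>> import numpy as np
>>> x = np.float64(13.95)
>>> x                 # old NumPy repr: 17 significant digits
13.949999999999999
>>> float(x)          # plain Python float: shortest round-trip repr
13.95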
Use:
float_number = 12.234325335563
round(float_number, 2)
This will return:
12.23
Explanation:
The round function takes two arguments: the number to be rounded and the number of decimal places to be returned. Here I returned two decimal places.
You can modify the output format:
>>> a = 13.95
>>> a
13.949999999999999
>>> print "%.2f" % a
13.95
With Python < 3 (e.g. 2.6 or 2.7), there are two ways to do so.
# Option one
older_method_string = "%.9f" % numvar
# Option two (note ':' before the '.9f')
newer_method_string = "{:.9f}".format(numvar)
But note that for Python 3 (e.g. 3.2 or 3.3), option two is preferred.
For more information on option two, I suggest this link on string formatting from the Python documentation.
And for more information on option one, this link will suffice and has information on the various flags.
Reference: Convert floating point number to a certain precision, and then copy to string
You can use the format operator to round a value to two decimal places in Python:
print(format(14.4499923, '.2f'))  # The output is 14.45
As Matt pointed out, Python 3.6 provides f-strings, and they can also use nested parameters:
value = 2.34558
precision = 2
width = 4
print(f'result: {value:{width}.{precision}f}')
which will display result: 2.35
In Python 2.7:
a = 13.949999999999999
output = float("%0.2f"%a)
print output
We have multiple options to do that:
Option 1:
x = 1.090675765757
g = float("{:.2f}".format(x))
print(g)
Option 2:
The built-in round() works well in Python 2.7 or later.
x = 1.090675765757
g = round(x, 2)
print(g)
The Python tutorial has an appendix called Floating Point Arithmetic: Issues and Limitations. Read it. It explains what is happening and why Python is doing its best. It even has an example that matches yours. Let me quote a bit:
>>> 0.1
0.10000000000000001
you may be tempted to use the round() function to chop it back to the single digit you expect. But that makes no difference:
>>> round(0.1, 1)
0.10000000000000001
The problem is that the binary floating-point value stored for "0.1" was already the best possible binary approximation to 1/10, so trying to round it again can't make it better: it was already as good as it gets.
Another consequence is that since 0.1 is not exactly 1/10, summing ten values of 0.1 may not yield exactly 1.0, either:
>>> sum = 0.0
>>> for i in range(10):
...     sum += 0.1
...
>>> sum
0.99999999999999989
One alternative, and a solution to your problems, would be using the decimal module.
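For instance, here is a minimal sketch of that alternative (mine, not from the tutorial): summing ten Decimal('0.1') values gives exactly 1, unlike the float sum above.
>>> from decimal import Decimal
>>> sum(Decimal('0.1') for _ in range(10))
Decimal('1.0')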
Use a combination of the Decimal object and the round() method.
Python 3.7.3
>>> from decimal import Decimal
>>> d1 = Decimal(13.949999999999999)  # define a Decimal
>>> d1
Decimal('13.949999999999999289457264239899814128875732421875')
>>> d2 = round(d1, 2) # round to 2 decimals
>>> d2
Decimal('13.95')
It's doing exactly what you told it to do and is working correctly. Read more about floating point confusion and maybe try decimal objects instead.
from decimal import Decimal

def round_float(v, ndigits=2, rt_str=False):
    d = Decimal(v)
    v_str = ("{0:.%sf}" % ndigits).format(round(d, ndigits))
    if rt_str:
        return v_str
    return Decimal(v_str)
Results:
Python 3.6.1 (default, Dec 11 2018, 17:41:10)
>>> round_float(3.1415926)
Decimal('3.14')
>>> round_float(3.1445926)
Decimal('3.14')
>>> round_float(3.1455926)
Decimal('3.15')
>>> round_float(3.1455926, rt_str=True)
'3.15'
>>> str(round_float(3.1455926))
'3.15'
The simple solution is here
value = 5.34343
rounded_value = round(value, 2) # 5.34
Use a lambda function like this (note that the floor division means it truncates rather than rounds):
arred = lambda x,n : x*(10**n)//1/(10**n)
This way you could just do:
arred(3.141591657, 2)
and get
3.14
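One caveat worth checking (my own test, using the arred defined above): the floor division truncates rather than rounds, which shows up for values that should round up.
>>> arred(3.149, 2)   # truncated
3.14
>>> round(3.149, 2)   # rounded to nearest
3.15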
orig_float = 232569 / 16000.0                      # 14.5355625
short_float = float("{:.2f}".format(orig_float))   # 14.54
For fixing the floating point in type-dynamic languages such as Python and JavaScript, I use this technique
# For example:
a = 70000
b = 0.14
c = a * b
print(c)  # Prints 9800.000000000002

# Try to fix
c = int(c * 10000) / 10000.0
print(c)  # Prints 9800.0
You can also use Decimal as following:
from decimal import *
getcontext().prec = 6
Decimal(1) / Decimal(7)
# Results in 6 precision -> Decimal('0.142857')
getcontext().prec = 28
Decimal(1) / Decimal(7)
# Results in 28 precision -> Decimal('0.1428571428571428571428571429')
It's simple:
Use the decimal module for fast, correctly-rounded decimal floating-point arithmetic:

from decimal import Decimal

d = Decimal(10000000.0000009)

To achieve rounding:

d.quantize(Decimal('0.01'))

will result in Decimal('10000000.00').

Make the above DRY:

def round_decimal(number, exponent='0.01'):
    decimal_value = Decimal(number)
    return decimal_value.quantize(Decimal(exponent))

or

def round_decimal(number, decimal_places=2):
    decimal_value = Decimal(number)
    return decimal_value.quantize(Decimal(10) ** -decimal_places)
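Either variant can then be used like this (usage sketch, mine):

>>> round_decimal(10000000.0000009)
Decimal('10000000.00')
>>> round_decimal(13.949999999999999)
Decimal('13.95')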
PS: critique of others: formatting is not rounding.
Here is a simple solution using the format function:
float(format(num, '.2f'))
Note: We convert the result back to float because the format method returns a string.
If you want to handle money, use the Python decimal module:
from decimal import Decimal, ROUND_HALF_UP

# 'amount' can be an integer, string, tuple, float, or another Decimal object
def to_money(amount) -> Decimal:
    money = Decimal(amount).quantize(Decimal('.00'), rounding=ROUND_HALF_UP)
    return money
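Usage might look like this (a sketch of mine, not from the original answer); note how the half-up rounding and the exact string input behave:
>>> to_money(13.949999999999999)
Decimal('13.95')
>>> to_money('0.105')   # exact halfway case rounds up, as you'd want for money
Decimal('0.11')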
lambda x, n: int(x * 10**n + 0.5) / 10**n
has worked for me for many years in many languages.
To round a number to a resolution, the best way is the following one, which can work with any resolution (0.01 for two decimals or even other steps):
>>> import numpy as np
>>> value = 13.949999999999999
>>> resolution = 0.01
>>> newValue = int(np.round(value/resolution))*resolution
>>> print newValue
13.95
>>> resolution = 0.5
>>> newValue = int(np.round(value/resolution))*resolution
>>> print newValue
14.0
The answers I saw didn't work with the float(52.15) case. After some tests, there is the solution that I'm using:
import decimal

def value_to_decimal(value, decimal_places):
    decimal.getcontext().rounding = decimal.ROUND_HALF_UP  # define rounding method
    return decimal.Decimal(str(float(value))).quantize(decimal.Decimal('1e-{}'.format(decimal_places)))
(The conversion of 'value' to float and then to string is very important; that way, 'value' can be a float, Decimal, integer or string!)
Hope this helps anyone.
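For example, with the float(52.15) case mentioned above (usage sketch, mine):
>>> value_to_decimal(52.15, 1)
Decimal('52.2')
>>> value_to_decimal('52.15', 1)
Decimal('52.2')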
Suppose I have a float number x = 3.1234. I want to print this number in the middle of a string, with spaces on the left and right sides of x. The string length will be variable, and the precision of x will be variable. If the string length is 10 and the precision is 2, the output would be " 3.14 ". Is there any function in Python that can return this?
This is really nicely documented at https://docs.python.org/3.6/library/string.html#format-specification-mini-language
But since you clearly didn't have time to google for it:
>>> x = 3.1234
>>> length=10
>>> precision=2
>>> f"{x:^{length}.{precision}}"
' 3.1 '
I'm afraid your notion of precision doesn't agree with Python's in the default case. You can fix it by specifying fixed point formatting instead of the default general formatting:
>>> f"{x:^{length}.{precision}f}"
' 3.12 '
This notation is more perspicuous than calling the method str.format(). But in Python 3.5 and earlier you need to do this instead:
>>> "{x:^{length}.{precision}f}".format(x=x, length=length, precision=precision)
But no amount of fiddling with the format is going to make 3.1234 come out as 3.14. I suspect that that was an error in the question, but if you really meant it, then there is no alternative but adjust the value of x before formatting it. Here is one way to do that:
>>> from decimal import *
>>> (Decimal(x) / Decimal ('0.02')).quantize(Decimal('1'), rounding=ROUND_UP) * Decimal('0.02')
Decimal('3.14')
This divides your number into a whole number of chunks of size 0.02, rounding up where necessary, then multiplies by 0.02 again to get the value you want.
I've run into an issue displaying float values in Python, loaded from an external data source (they're 32-bit floats, but this would apply to lower-precision floats too).
(In case it's important: these values were typed in by humans in C/C++, so unlike arbitrary calculated values, deviations from round numbers are likely not intended, though they can't be ignored since the values may be constants such as M_PI or multiplied by constants.)
Since CPython uses higher precision (64-bit typically), a value entered as a lower-precision float may repr() with visible precision loss from being a 32-bit float, where the 64-bit float would show the round value.
eg:
# Examples of 32bit float's displayed as 64bit floats in CPython.
0.0005 -> 0.0005000000237487257
0.025 -> 0.02500000037252903
0.04 -> 0.03999999910593033
0.05 -> 0.05000000074505806
0.3 -> 0.30000001192092896
0.98 -> 0.9800000190734863
1.2 -> 1.2000000476837158
4096.3 -> 4096.2998046875
Simply rounding the values to some arbitrary precision works in most cases, but may be incorrect since it could lose significant digits in values such as 0.00000001.
An example of this can be shown by printing a float converted to a 32bit float.
def as_float_32(f):
    from struct import pack, unpack
    return unpack("f", pack("f", f))[0]
print(0.025) # --> 0.025
print(as_float_32(0.025)) # --> 0.02500000037252903
So my question is:
What's the most efficient and straightforward way to get the original representation for a 32-bit float, without making assumptions or losing precision?
Put differently, if I have a data source containing 32-bit floats that were originally entered by a human as round values (examples above), having them represented as higher-precision values exposes that the value as a 32-bit float is an approximation of the original value.
I would like to reverse this process and get the round number back from the 32-bit float data, but without losing the precision which a 32-bit float gives us (which is why simply rounding isn't a good option).
Examples of why you might want to do this:
Generating API documentation where Python extracts values from a C-API that uses single precision floats internally.
When people need to read/review values of data generated which happens to be provided as single precision floats.
In both cases it's important not to lose significant precision, or to show values which can't be easily read by humans at a glance.
Update: I've made a solution which I'll include as an answer (for reference and to show it's possible), but I highly doubt it's an efficient or elegant solution.
Of course you can't know whether the notation used was 0.1f, 0.1F or 1e-1f; that's not the purpose of this question.
You're looking to solve essentially the same problem that Python's repr solves, namely, finding the shortest decimal string that rounds to a given float. Except that in your case, the float isn't an IEEE 754 binary64 ("double precision") float, but an IEEE 754 binary32 ("single precision") float.
Just for the record, I should of course point out that retrieving the original string representation is impossible, since for example the strings '0.10', '0.1', '1e-1' and '10e-2' all get converted to the same float (or in this case float32). But under suitable conditions we can still hope to produce a string that has the same decimal value as the original string, and that's what I'll do below.
The approach you outline in your answer more-or-less works, but it can be streamlined a bit.
First, some bounds: when it comes to decimal representations of single-precision floats, there are two magic numbers: 6 and 9. The significance of 6 is that any (not-too-large, not-too-small) decimal numeric string with 6 or fewer significant decimal digits will round-trip correctly through a single-precision IEEE 754 float: that is, converting that string to the nearest float32, and then converting that value back to the nearest 6-digit decimal string, will produce a string with the same value as the original. For example:
>>> x = "634278e13"
>>> y = float(np.float32(x))
>>> y
6.342780214942106e+18
>>> "{:.6g}".format(y)
'6.34278e+18'
(Here, by "not-too-large, not-too-small" I just mean that the underflow and overflow ranges of float32 should be avoided. The property above applies for all normal values.)
This means that for your problem, if the original string had 6 or fewer digits, we can recover it by simply formatting the value to 6 significant digits. So if you only care about recovering strings that had 6 or fewer significant decimal digits in the first place, you can stop reading here: a simple '{:.6g}'.format(x) is enough. If you want to solve the problem more generally, read on.
For roundtripping in the other direction, we have the opposite property: given any single-precision float x, converting that float to a 9-digit decimal string (rounding to nearest, as always), and then converting that string back to a single-precision float, will always exactly recover the value of that float.
>>> x = np.float32(3.14159265358979)
>>> x
3.1415927
>>> np.float32('{:.9g}'.format(x)) == x
True
The relevance to your problem is there's always at least one 9-digit string that rounds to x, so we never have to look beyond 9 digits.
Now we can follow the same approach that you used in your answer: first try for a 6-digit string, then a 7-digit, then an 8-digit. If none of those work, the 9-digit string surely will, by the above. Here's some code.
import numpy as np

def original_string(x):
    for places in range(6, 10):  # try 6, 7, 8, 9
        s = '{:.{}g}'.format(x, places)
        y = np.float32(s)
        if x == y:
            return s
    # If x was genuinely a float32, we should never get here.
    raise RuntimeError("We should never get here")
Example outputs:
>>> original_string(0.02500000037252903)
'0.025'
>>> original_string(0.03999999910593033)
'0.04'
>>> original_string(0.05000000074505806)
'0.05'
>>> original_string(0.30000001192092896)
'0.3'
>>> original_string(0.9800000190734863)
'0.98'
However, the above comes with several caveats.
First, for the key properties we're using to be true, we have to assume that np.float32 always does correct rounding. That may or may not be the case, depending on the operating system. (Even in cases where the relevant operating system calls claim to be correctly rounded, there may still be corner cases where that claim fails to be true.) In practice, it's likely that np.float32 is close enough to correctly rounded not to cause issues, but for complete confidence you'd want to know that it was correctly rounded.
Second, the above won't work for values in the subnormal range (so for float32, anything smaller than 2**-126). In the subnormal range, it's no longer true that a 6-digit decimal numeric string will roundtrip correctly through a single-precision float. If you care about subnormals, you'd need to do something more sophisticated there.
Third, there's a really subtle (and interesting!) error in the above that almost doesn't matter at all. The string formatting we're using always rounds x to the nearest places-digit decimal string to the true value of x. However, we want to know simply whether there's any places-digit decimal string that rounds back to x. We're implicitly assuming the (seemingly obvious) fact that if there's any places-digit decimal string that rounds to x, then the closest places-digit decimal string rounds to x. And that's almost true: it follows from the property that the interval of all real numbers that rounds to x is symmetric around x. But that symmetry property fails in one particular case, namely when x is a power of 2.
So when x is an exact power of 2, it's possible (but fairly unlikely) that (for example) the closest 8-digit decimal string to x doesn't round to x, but nevertheless there is an 8-digit decimal string that does round to x. You can do an exhaustive search for cases where this happens within the range of a float32, and it turns out that there are exactly three values of x for which this occurs, namely x = 2**-96, x = 2**87 and x = 2**90. For 7 digits, there are no such values. (And for 6 and 9 digits, this can never happen.) Let's take a closer look at the case x = 2**87:
>>> x = 2.0**87
>>> x
1.5474250491067253e+26
Let's take the closest 8-digit decimal value to x:
>>> s = '{:.8g}'.format(x)
>>> s
'1.547425e+26'
It turns out that this value doesn't round back to x:
>>> np.float32(s) == x
False
But the next 8-digit decimal string up from it does:
>>> np.float32('1.5474251e+26') == x
True
Similarly, here's the case x = 2**-96:
>>> x = 2**-96.
>>> x
1.262177448353619e-29
>>> s = '{:.8g}'.format(x)
>>> s
'1.2621774e-29'
>>> np.float32(s) == x
False
>>> np.float32('1.2621775e-29') == x
True
So ignoring subnormals and overflows, out of all 2 billion or so positive normal single-precision values, there are precisely three values x for which the above code doesn't work. (Note: I originally thought there was just one; thanks to @RickRegan for pointing out the error in comments.) So here's our (slightly tongue-in-cheek) fixed code:
def original_string(x):
    """
    Given a single-precision positive normal value x,
    return the shortest decimal numeric string which produces x.
    """
    # Deal with the three awkward cases.
    if x == 2**-96.:
        return '1.2621775e-29'
    elif x == 2**87:
        return '1.5474251e+26'
    elif x == 2**90:
        return '1.2379401e+27'

    for places in range(6, 10):  # try 6, 7, 8, 9
        s = '{:.{}g}'.format(x, places)
        y = np.float32(s)
        if x == y:
            return s
    # If x was genuinely a float32, we should never get here.
    raise RuntimeError("We should never get here")
I think Decimal.quantize() (to round to a given number of decimal digits) and .normalize() (to strip trailing 0's) is what you need.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

from decimal import Decimal

data = (
    0.02500000037252903,
    0.03999999910593033,
    0.05000000074505806,
    0.30000001192092896,
    0.9800000190734863,
)

for f in data:
    dec = Decimal(f).quantize(Decimal('1.0000000')).normalize()
    print("Original %s -> %s" % (f, dec))
Result:
Original 0.0250000003725 -> 0.025
Original 0.0399999991059 -> 0.04
Original 0.0500000007451 -> 0.05
Original 0.300000011921 -> 0.3
Original 0.980000019073 -> 0.98
Here's a solution I've come up with which works (perfectly as far as I can tell) but isn't efficient.
It works by rounding at increasing decimal places, and returning the string when the rounded and non-rounded inputs match (when compared as values converted to lower precision).
Code:
def round_float_32(f):
    from struct import pack, unpack
    return unpack("f", pack("f", f))[0]


def as_float_low_precision_repr(f, round_fn):
    f_round = round_fn(f)
    f_str = repr(f)
    f_str_frac = f_str.partition(".")[2]
    if not f_str_frac:
        return f_str
    for i in range(1, len(f_str_frac)):
        f_test = round(f, i)
        f_test_round = round_fn(f_test)
        if f_test_round == f_round:
            return "%.*f" % (i, f_test)
    return f_str


# ----

data = (
    0.02500000037252903,
    0.03999999910593033,
    0.05000000074505806,
    0.30000001192092896,
    0.9800000190734863,
    1.2000000476837158,
    4096.2998046875,
)

for f in data:
    f_as_float_32 = as_float_low_precision_repr(f, round_float_32)
    print("%s -> %s" % (f, f_as_float_32))
Outputs:
0.02500000037252903 -> 0.025
0.03999999910593033 -> 0.04
0.05000000074505806 -> 0.05
0.30000001192092896 -> 0.3
0.9800000190734863 -> 0.98
1.2000000476837158 -> 1.2
4096.2998046875 -> 4096.3
If you have at least NumPy 1.14.0, you can just use repr(numpy.float32(your_value)). Quoting the release notes:
Float printing now uses “dragon4” algorithm for shortest decimal representation
The str and repr of floating-point values (16, 32, 64 and 128 bit) are now printed to give the shortest decimal representation which uniquely identifies the value from others of the same type. Previously this was only true for float64 values. The remaining float types will now often be shorter than in numpy 1.13.
Here's a demo running against a few of your example values:
>>> repr(numpy.float32(0.0005000000237487257))
'0.0005'
>>> repr(numpy.float32(0.02500000037252903))
'0.025'
>>> repr(numpy.float32(0.03999999910593033))
'0.04'
Probably what you are looking for is decimal:
Decimal “is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.”
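A two-line taste of what that buys you (my own sketch, not from the quoted docs): decimal arithmetic behaves like the school arithmetic the quote describes.
>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2')
Decimal('0.3')
>>> 0.1 + 0.2
0.30000000000000004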
At least in Python 3 you can use .as_integer_ratio(). That's not exactly a string, but the floating-point definition as such is not really well suited for giving an exact representation in "finite" strings.
a = 0.1
a.as_integer_ratio()
(3602879701896397, 36028797018963968)
So by saving these two numbers you'll never lose precision because these two exactly represent the saved floating point number. (Just divide the first by the second to get the value).
As an example using numpy dtypes (very similar to c dtypes):
# A value in python floating point precision
a = 0.1
# The value as ratio of integers
b = a.as_integer_ratio()
import numpy as np
# Force the result to have some precision:
res = np.array([0], dtype=np.float16)
np.true_divide(b[0], b[1], res)
print(res)
# Compare that to the wanted result when inputting 0.1
np.true_divide(1, 10, res)
print(res)
# Other precisions:
res = np.array([0], dtype=np.float32)
np.true_divide(b[0], b[1], res)
print(res)
res = np.array([0], dtype=np.float64)
np.true_divide(b[0], b[1], res)
print(res)
The result of all these calculations is:
[ 0.09997559] # Float16 with integer-ratio
[ 0.09997559] # Float16 reference
[ 0.1] # Float32
[ 0.1] # Float64
I have created the following snippet of code, and I am trying to convert my 5 dp DNumber to a 2 dp one and insert it into a string. However, whichever method I try to use always seems to revert DNumber back to the original number of decimal places (5).
Code snippet below:
if key == (1, 1):
    DNumber = '{r[csvnum]}'.format(r=row)
    # returns 7.65321
    DNumber = """%.2f""" % (float(DNumber))
    # returns 7.65
    Check2 = False
    if DNumber:
        if DNumber <= float(8):
            Check2 = True
    if Check2:
        print DNumber
        # returns 7.65
        string = 'test {r[csvhello]} TESTHERE test'.format(r=row).replace("TESTHERE", str("""%.2f""" % (float(gtpe))))
        # returns: test Hello 7.65321 test
        string = 'test {r[csvhello]} TESTHERE test'.format(r=row).replace("TESTHERE", str(DNumber))
        # returns: test Hello 7.65321 test
What I hoped it would return: test Hello 7.65 test
Any Ideas or suggestion on alternative methods to try?
It seems like you were hoping that converting the float to a 2-decimal-place string and then back to a float would give you a 2-decimal-place float.
The first problem is that your code doesn't actually do that anywhere. If you'd done that, you would get something very close to 7.65, not 7.65321.
But the bigger problem is that what you're trying to do doesn't make any sense. A float always has 53 binary digits, no matter what. If you round it to two decimal digits (no matter how you do it, including by converting to string and back), what you actually get is a float rounded to two decimal digits and then rounded to 53 binary digits. The closest float to 7.65 is not exactly 7.65, but 7.650000000000000355271368.* So, that's what you'd end up with. And there's no way around that; it's inherent to the way float is stored.
However, there is a different type you can use for this: decimal.Decimal. For example:
>>> f = 7.65321
>>> s = '%.2f' % f
>>> d = decimal.Decimal(s)
>>> f, s, d
(7.65321, '7.65', Decimal('7.65'))
Or, of course, you could just pass around a string instead of a float (as you're accidentally doing in your code already), or you could remember to use the .2f format every time you want to output it.
As a side note, since your DNumber ends up as a string, this line is not doing anything useful:
if DNumber <= 8:
In Python 2.x, comparing two values of different types gives you a consistent but arbitrary and meaningless answer. With CPython 2.x, it will always be False.** In a different Python 2.x implementation, it might be different. In Python 3.x, it raises a TypeError.
And changing it to this doesn't help in any way:
if DNumber <= float(8):
Now, instead of comparing a str to an int, you're comparing a str to a float. This is exactly as meaningless, and follows the exact same rules. (Also, float(8) means the same thing as 8.0, but less readable and potentially slower.)
For that matter, this:
if DNumber:
… is always going to be true. For a number, if foo checks whether it's non-zero. That's a bad idea for float values (you should check whether it's within some absolute or relative error range of 0). But again, you don't have a float value; you have a str. And for strings, if foo checks whether the string is non-empty. So, even if you started off with 0, your string "0.00" is going to be true.
* I'm assuming here that you're using CPython, on a platform that uses IEEE-754 double for its C double type, and that all those extra conversions back and forth between string and float aren't introducing any additional errors.
** The rule is, slightly simplified: If you compare two numbers, they're converted to a type that can hold them both; otherwise, if either value is None it's smaller; otherwise, if either value is a number, it's smaller; otherwise, whichever one's type has an alphabetically earlier name is smaller.
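To make that footnote concrete, here is what those comparisons look like in a CPython 2.x session (my own demo):
>>> '7.65' <= 8         # str vs int: consistent but meaningless, always False in CPython 2
False
>>> '7.65' <= float(8)  # str vs float: exactly the same situation
False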
I think you're trying to do the following - combine the formatting with the getter:
>>> a = 123.456789
>>> row = {'csvnum': a}
>>> print 'test {r[csvnum]:.2f} hello'.format(r=row)
test 123.46 hello
If your number is a 7 followed by five digits, you might want to try:
print "%r" % float(str(x)[:4])
where x is the float in question.
Example:
>>> x = 1.11111
>>> print "%r" % float(str(x)[:4])
1.11
I'm making a program that, for reasons not needed to be explained, requires a float to be converted into a string to be counted with len(). However, str(float(x)) results in x being rounded when converted to a string, which throws the entire thing off. Does anyone know of a fix for it?
Here's the code being used if you want to know:
len(str(float(x)/3))
Some form of rounding is often unavoidable when dealing with floating point numbers. This is because numbers that you can express exactly in base 10 cannot always be expressed exactly in base 2 (which your computer uses).
For example:
>>> .1
0.10000000000000001
In this case, you're seeing .1 converted to a string using repr:
>>> repr(.1)
'0.10000000000000001'
I believe python chops off the last few digits when you use str() in order to work around this problem, but it's a partial workaround that doesn't substitute for understanding what's going on.
>>> str(.1)
'0.1'
I'm not sure exactly what problems "rounding" is causing you. Perhaps you would do better with string formatting as a way to more precisely control your output?
e.g.
>>> '%.5f' % .1
'0.10000'
>>> '%.5f' % .12345678
'0.12346'
Documentation here.
len(repr(float(x)/3))
However I must say that this isn't as reliable as you think.
Floats are entered/displayed as decimal numbers, but your computer (in fact, your standard C library) stores them as binary. You get some side effects from this transition:
>>> print len(repr(0.1))
19
>>> print repr(0.1)
0.10000000000000001
The explanation on why this happens is in this chapter of the python tutorial.
A solution would be to use a type that specifically tracks decimal numbers, like python's decimal.Decimal:
>>> print len(str(decimal.Decimal('0.1')))
3
Other answers already pointed out that the representation of floating numbers is a thorny issue, to say the least.
Since you don't give enough context in your question, I cannot know if the decimal module can be useful for your needs:
http://docs.python.org/library/decimal.html
Among other things you can explicitly specify the precision that you wish to obtain (from the docs):
>>> getcontext().prec = 6
>>> Decimal('3.0')
Decimal('3.0')
>>> Decimal('3.1415926535')
Decimal('3.1415926535')
>>> Decimal('3.1415926535') + Decimal('2.7182818285')
Decimal('5.85987')
>>> getcontext().rounding = ROUND_UP
>>> Decimal('3.1415926535') + Decimal('2.7182818285')
Decimal('5.85988')
A simple example from my prompt (python 2.6):
>>> import decimal
>>> a = decimal.Decimal('10.000000001')
>>> a
Decimal('10.000000001')
>>> print a
10.000000001
>>> b = decimal.Decimal('10.00000000000000000000000000900000002')
>>> print b
10.00000000000000000000000000900000002
>>> print str(b)
10.00000000000000000000000000900000002
>>> len(str(b/decimal.Decimal('3.0')))
29
Maybe this can help?
decimal is in python stdlib since 2.4, with additions in python 2.6.
Hope this helps,
Francesco
I know this is too late but for those who are coming here for the first time, I'd like to post a solution. I have a float value index and a string imgfile and I had the same problem as you. This is how I fixed the issue
index = 1.0
imgfile = 'data/2.jpg'
out = '%.1f,%s' % (index,imgfile)
print out
The output is
1.0,data/2.jpg
You may modify this formatting example as per your convenience.