Write floating point numbers without any decimal places - python

I'm trying to have Python replicate some FORTRAN output of real values. My FORTRAN prints the real value as "31380." (with a trailing decimal point). I'm trying to replicate the same in Python; note that although I have no decimal places, I actually want the decimal point (period) to be printed. My current code is
htgm=31380.
print '{:6.0f}'.format(htgm)
which yields "31380". What am I doing wrong?

Python's Format Specification Mini-Language includes an 'alternate' form for floats, which forces the decimal point when you put a '#' in the format string:
>>> htgm=31380.
>>> format(htgm, '#.0f')
'31380.'
Which is what I think you are looking for.
I thought #g would be what you wanted, but for some reason Python adds the 0 back on:
>>> htgm=31380.
>>> format(htgm, 'g')
'31380'
>>> format(htgm, '#g')
'31380.0'
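If you are on an older Python where str.format rejects the '#' option for float types (Python 2 raises a ValueError for it), old-style %-formatting supports the same alternate form:
>>> htgm = 31380.
>>> '%#.0f' % htgm
'31380.'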

It is not possible to do this in Python while keeping the type of htgm as float. However, if you are OK with making it a str, you may do:
htgm=31380.
'{0:.0f}.'.format(htgm)
# returns: '31380.'
# OR, even simply
'{}.'.format(int(htgm))

When you need to display the number, use:
print(str(htgm)[:-1])
This slices off the trailing '0': str(31380.0) is '31380.0', so dropping the last character leaves '31380.'.

Related

Fixed-point notation is not behaving according to the documentation

I want to format some values with a fixed precision of 3 unless it's an integer. In that case I don't want any decimal point or trailing 0s.
According to the docs, the 'f' type in string formatting should remove the decimal point if no digits follow it:
If no digits follow the decimal point, the decimal point is also removed unless the # option is used.
But testing it with python3.8 I get the following results:
>>> f'{123:.3f}'
'123.000'
>>> f'{123.0:.3f}'
'123.000'
Am I misunderstanding something? How could I achieve the desired result without using if/else checks?
To force both of your desired outputs out of the same f-string expression, you could apply some kung-fu like
i = 123
f"{i:.{3*isinstance(i, float)}f}"
# '123'
i = 123.0
f"{i:.{3*isinstance(i, float)}f}"
# '123.000'
But this won't improve your code in terms of readability. There's no harm in being more explicit.
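For comparison, a more readable explicit version might look like this (fmt is a hypothetical helper name, applying the same rule of three decimals for floats and none for ints):
def fmt(x):
    # three decimal places for floats, no decimal point for ints
    if isinstance(x, float):
        return f"{x:.3f}"
    return str(x)

print(fmt(123))    # 123
print(fmt(123.0))  # 123.000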

Python - round a float to 2 digits

I would need to have a float variable rounded to 2 significant digits and store the result into a new variable (or the same of before, it doesn't matter) but this is what happens:
>>> a
981.32000000000005
>>> b= round(a,2)
>>> b
981.32000000000005
I would need this result, but into a variable that cannot be a string since I need to insert it as a float...
>>> print b
981.32
Actually, truncating would also work; I don't need extreme precision in this case.
What you are trying to do is in fact impossible. That's because 981.32 is not exactly representable as a binary floating point value. The closest double precision binary floating point value is:
981.3200000000000500222085975110530853271484375
I suspect that this may come as something of a shock to you. If so, then I suggest that you read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
You might choose to tackle your problem in one of the following ways:
Accept that binary floating point numbers cannot represent such values exactly, and continue to use them. Don't do any rounding at all, and keep the full value. When you wish to display the value as text, format it so that only two decimal places are emitted.
Use a data type that can represent your number exactly. That means a decimal rather than binary type. In Python you would use decimal.
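For example, a minimal sketch of each option:
>>> '{:.2f}'.format(981.32000000000005)   # option 1: round only for display
'981.32'
>>> from decimal import Decimal
>>> Decimal('981.32')                     # option 2: built from a string, stored exactly
Decimal('981.32')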
Try this:
Round = lambda x, n: '%.*f' % (n, x)  # the '*' takes the precision n from the argument tuple
print Round(0.1, 2)
0.10
print Round(0.1, 4)
0.1000
print Round(981.32000000000005, 2)
981.32
Just pass the number of digits you want as the second argument.
I wrote a solution to this problem. Please try:
from decimal import *
from autorounddecimal.core import adround,decimal_round_digit
decimal_round_digit(Decimal("981.32000000000005")) #=> Decimal("981.32")
adround(981.32000000000005) # just wrap decimal_round_digit
More detail can be found at https://github.com/niitsuma/autorounddecimal
There is a difference between the way Python prints floats and the way it stores floats. For example:
>>> a = 1.0/5.0
>>> a
0.20000000000000001
>>> print a
0.2
It's not actually possible to store an exact representation of many floats, as David Heffernan points out. It can be done if, looking at the float as a fraction, the denominator is a power of 2 (such as 1/4, 3/8, 5/64). Otherwise, due to the inherent limitations of binary, it has to make do with an approximation.
Python recognizes this, and when you use the print function, it will use the nicer representation seen above. This may make you think that Python is storing the float exactly, when in fact it is not, because it's not possible with the IEEE standard float representation. The difference in calculation is pretty insignificant, though, so for most practical purposes it isn't a problem. If you really really need those significant digits, though, use the decimal package.
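As an illustration of the power-of-2 point, float.as_integer_ratio() shows the fraction that is actually stored:
>>> (0.375).as_integer_ratio()   # 3/8: the denominator is a power of 2, so it is exact
(3, 8)
>>> (0.2).as_integer_ratio()     # 1/5 is not, so an approximation is stored
(3602879701896397, 18014398509481984)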

Python default behavior of str(x)

I am depending on some code that uses the Decimal class because it needs precision to a certain number of decimal places. Some of the functions allow inputs to be floats because of the way that it interfaces with other parts of the codebase. To convert them to decimal objects, it uses things like
mydec = decimal.Decimal(str(x))
where x is the float taken as input. My question is, does anyone know what the standard is for the 'str' method as applied to floats?
For example, take the number 2.1234512. It is stored internally as 2.12345119999999999 because of how floats are represented.
>>> x = 2.12345119999999999
>>> x
2.1234511999999999
>>> str(x)
'2.1234512'
Ok, str(x) in this case is doing something like '%.12g' % x. This is a problem with the way my code converts to decimals. Take the following:
>>> d = decimal.Decimal('2.12345119999999999')
>>> ds = decimal.Decimal(str(2.12345119999999999))
>>> d - ds
Decimal('-1E-17')
So if I have the float, 2.12345119999999999, and I want to pass it to Decimal, converting it to a string using str() gets me the wrong answer. I need to know what are the rules for str(x) that determine what the formatting will be, because I need to determine whether this code needs to be re-written to avoid this error (note that it might be OK, because, for example, the code might round to the 10th decimal place once we have a decimal object)
There must be some set of rules in python's docs that hopefully someone here can point me to. Thanks!
In the Python source, look in "Include/floatobject.h". The precision for the string conversion is set a few lines from the top, after a comment with some explanation of the choice:
/* The str() precision PyFloat_STR_PRECISION is chosen so that in most cases,
the rounding noise created by various operations is suppressed, while
giving plenty of precision for practical use. */
#define PyFloat_STR_PRECISION 12
You have the option of rebuilding, if you need something different. Any changes will change formatting of floats and complex numbers. See ./Objects/complexobject.c and ./Objects/floatobject.c. Also, you can compare the difference between how repr and str convert doubles in these two files.
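A quick way to see the effect of that 12-digit setting without rebuilding (on the Python versions that use PyFloat_STR_PRECISION) is to compare against '%.12g' formatting:
>>> x = 2.1234511999999999
>>> '%.12g' % x   # 12 significant digits, matching str() on those versions
'2.1234512'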
There are a couple of issues worth discussing here, but the summary is: you cannot extract information that is not stored on your system already.
If you've taken a decimal number and stored it as a floating point, you'll have lost information, since most decimal (base 10) numbers with a finite number of digits cannot be stored using a finite number of digits in base 2 (binary).
As was mentioned, str(a_float) will really call a_float.__str__(). As the documentation states, the purpose of that method is to
return a string containing a nicely printable representation of an object
There's no particular definition for the float case. My opinion is that, for your purposes, you should consider __str__'s behavior to be undefined, since there's no official documentation on it - the current implementation can change anytime.
If you don't have the original strings, there's no way to extract the missing digits of the decimal representation from the float objects. All you can do is round predictably, using string formatting (which you mention):
Decimal( "{0:.5f}".format(a_float) )
You can also remove 0s on the right with resulting_string.rstrip("0").
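For example (picking 5 decimal places arbitrarily):
>>> from decimal import Decimal
>>> Decimal("{0:.5f}".format(2.5).rstrip("0"))
Decimal('2.5')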
Again, this method does not recover the information that has been lost.

How do I force Python to keep an integer out of scientific notation

I am trying to write a method in Python 3.2 that encrypts a phrase and then decrypts it. The problem is that the numbers are so big that when Python does math with them it immediately converts them into scientific notation. Since my code needs the exact numbers to function, scientific notation is not useful.
What I have is:
coded = ((eval(input(':'))+1213633288469888484)/2)+1042
Basically, I just get a number from the user and do some math to it.
I have tried format() and a couple other things but I can't get them to work.
EDIT: I use only even integers.
In Python 3, '/' does true division (i.e. floating point). To get integer division, you need to use //. In other words, 100/2 yields 50.0 (a float) whereas 100//2 yields 50 (an integer).
Your code probably needs to be changed as:
coded = ((eval(input(':'))+1213633288469888484)//2)+1042
As a cautionary tale, however, you may want to consider using int instead of eval:
coded = ((int(input(':'))+1213633288469888484)//2)+1042
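To see the difference between / and // with numbers of this size:
>>> 10**17 / 2    # true division returns a float, printed in scientific notation
5e+16
>>> 10**17 // 2   # floor division returns an exact int
50000000000000000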
If you know that the floating point value is really an integer, or you don't care about dropping the fractional part, you can just convert it to an int before you print it.
>>> print(1.2e16)
1.2e+16
>>> print(int(1.2e16))
12000000000000000

Converting a Python Float to a String without losing precision

I am maintaining a Python script that uses xlrd to retrieve values from Excel spreadsheets, and then do various things with them. Some of the cells in the spreadsheet are high-precision numbers, and they must remain as such. When retrieving the values of one of these cells, xlrd gives me a float such as 0.38288746115497402.
However, I need to get this value into a string later on in the code. Doing either str(value) or unicode(value) will return something like "0.382887461155". The requirements say that this is not acceptable; the precision needs to be preserved.
I've tried a couple things so far to no success. The first was using a string formatting thingy:
data = "%.40s" % (value)
data2 = "%.40r" % (value)
But both produce the same rounded number, "0.382887461155".
Upon searching around for people with similar problems on SO and elsewhere on the internet, a common suggestion was to use the Decimal class. But I can't change the way the data is given to me (unless somebody knows of a secret way to make xlrd return Decimals). And when I try to do this:
data = Decimal(value)
I get a TypeError: Cannot convert float to Decimal. First convert the float to a string. But obviously I can't convert it to a string, or else I will lose the precision.
So yeah, I'm open to any suggestions -- even really gross/hacky ones if necessary. I'm not terribly experienced with Python (more of a Java/C# guy myself) so feel free to correct me if I've got some kind of fundamental misunderstanding here.
EDIT: Just thought I would add that I am using Python 2.6.4. I don't think there are any formal requirements stopping me from changing versions; it just has to not mess up any of the other code.
I'm the author of xlrd. There is so much confusion in other answers and comments to rebut in comments so I'm doing it in an answer.
#katriealex: """precision being lost in the guts of xlrd""" --- entirely unfounded and untrue. xlrd reproduces exactly the 64-bit float that's stored in the XLS file.
#katriealex: """It may be possible to modify your local xlrd installation to change the float cast""" --- I don't know why you would want to do this; you don't lose any precision by floating a 16-bit integer!!! In any case that code is used only when reading Excel 2.X files (which had an INTEGER-type cell record). The OP gives no indication that he is reading such ancient files.
#jloubert: You must be mistaken. "%.40r" % a_float is just a baroque way of getting the same answer as repr(a_float).
#EVERYBODY: You don't need to convert a float to a decimal to preserve the precision. The whole point of the repr() function is that the following is guaranteed:
float(repr(a_float)) == a_float
Python 2.X (X <= 6) repr gives a constant 17 decimal digits of precision, as that is guaranteed to reproduce the original value. Later Pythons (2.7, 3.1) give the minimal number of decimal digits that will reproduce the original value.
Python 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)] on win32
>>> f = 0.38288746115497402
>>> repr(f)
'0.38288746115497402'
>>> float(repr(f)) == f
True
Python 2.7 (r27:82525, Jul 4 2010, 09:01:59) [MSC v.1500 32 bit (Intel)] on win32
>>> f = 0.38288746115497402
>>> repr(f)
'0.382887461154974'
>>> float(repr(f)) == f
True
So the bottom line is that if you want a string that preserves all the precision of a float object, use preserved = repr(the_float_object) ... recover the value later by float(preserved). It's that simple. No need for the decimal module.
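On the asker's Python 2.6 it is specifically str() that truncates (to 12 significant digits), while repr() round-trips:
>>> f = 0.38288746115497402
>>> str(f)
'0.382887461155'
>>> repr(f)
'0.38288746115497402'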
You can use repr() to convert to a string without losing precision, then convert to a Decimal:
>>> from decimal import Decimal
>>> f = 0.38288746115497402
>>> d = Decimal(repr(f))
>>> print d
0.38288746115497402
EDIT: I am wrong. I shall leave this answer here so the rest of the thread makes sense, but it's not true. Please see John Machin's answer above. Thanks guys =).
If the above answers work that's great -- it will save you a lot of nasty hacking. However, at least on my system, they won't. You can check this with e.g.
import sys
print( "%.30f" % sys.float_info.epsilon )
That number is the smallest float that your system can distinguish from zero. Anything smaller than that may be randomly added or subtracted from any float when you perform an operation. This means that, at least on my Python setup, the precision is lost inside the guts of xlrd, and there seems to be nothing you can do without modifying it. Which is odd; I'd have expected this case to have occurred before, but apparently not!
It may be possible to modify your local xlrd installation to change the float cast. Open up site-packages\xlrd\sheet.py and go down to line 1099:
...
elif rc == XL_INTEGER:
rowx, colx, cell_attr, d = local_unpack('<HH3sH', data)
self_put_number_cell(rowx, colx, float(d), self.fixed_BIFF2_xfindex(cell_attr, rowx, colx))
...
Notice the float cast -- you could try changing that to a decimal.Decimal and see what happens.
EDIT: Cleared my previous answer b/c it didn't work properly.
I'm on Python 2.6.5 and this works for me:
a = 0.38288746115497402
print repr(a)
type(repr(a)) #Says it's a string
Note: This just converts to a string. You'll need to convert to Decimal yourself later if needed.
As has already been said, a float isn't precise at all - so preserving precision can be somewhat misleading.
Here's a way to get every last bit of information out of a float object:
>>> from decimal import Decimal
>>> str(Decimal.from_float(0.1))
'0.1000000000000000055511151231257827021181583404541015625'
Another way would be like so.
>>> 0.1.hex()
'0x1.999999999999ap-4'
Both strings represent the exact contents of the float. Almost anything else interprets the float as Python thinks it was probably intended (which most of the time is correct).
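And both forms round-trip to the original float exactly:
>>> float.fromhex('0x1.999999999999ap-4') == 0.1
True
>>> float(Decimal.from_float(0.1)) == 0.1
True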
