Remainders with fractional divisions not working in Python

Remainders with fractional divisors don't seem to work in Python. For example:
>>> 59.28%3.12
3.119999999999999
>>> 59.28/3.12
19.0
Is there any way to get 0.0 as the output of 59.28 % 3.12?

I don't know the details of how modulo is implemented for floats, but this works fine:
from decimal import Decimal
Decimal("59.28") % Decimal("3.12")
EDIT: Note that you have to use quotes (i.e. strings) in the constructors. Otherwise both numbers will be interpreted as floats, which is the source of the problem (an inexact binary approximation).
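A minimal sketch of the difference (standard CPython session; the numbers are taken from the question): string-built Decimals keep the values exact, so the division comes out even.
>>> from decimal import Decimal
>>> Decimal("59.28") % Decimal("3.12")   # 3.12 * 19 == 59.28 exactly
Decimal('0.00')
Building the Decimals from float literals instead, Decimal(59.28) % Decimal(3.12), reproduces the original near-3.12 result, because the rounding error is already baked into the floats before Decimal ever sees them.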

What is the meaning of the following piece of code ({:.3f}) used in a Decision Tree tutorial? [duplicate]

I don't understand why, when formatting a float into a string, its precision is not respected. E.g.:
'%f' % 38.2551994324
returns:
'38.255199'
(4 digits lost!)
At the moment I've solved it by specifying:
'%.10f' % 38.2551994324
which returns '38.2551994324' as expected… but should I really have to specify manually how many decimal places I want? Is there a way to simply tell Python to keep all of them? (What should I do, for example, if I don't know how many decimals my number has?)
"...but should I really have to specify manually how many decimal places I want?" Yes.
And even when specifying 10 decimal digits, you are still not printing all of them. Floating point numbers don't have that kind of precision anyway; they are really approximations of decimal numbers (binary fractions added up). Try this:
>>> format(38.2551994324, '.32f')
'38.25519943239999776096738060005009'
There are many more decimals there than you even specified.
When formatting a floating point number (be it with '%f' % number, '{:f}'.format(number) or format(number, 'f')), a default number of decimal places is displayed. This is no different from when using str() (or '%s' % number, '{}'.format(number) or format(number), which essentially use str() under the hood); only the number of digits included by default differs: Python versions prior to 3.2 use 12 significant digits for the whole number when using str().
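A quick illustration of those defaults (CPython 3):
>>> format(38.2551994324, 'f')   # fixed-point: 6 decimal places by default
'38.255199'
>>> format(38.2551994324)        # empty format spec: behaves like str()
'38.2551994324'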
If you expect your rational number calculations to work with a specific, precise number of digits, then don't use floating point numbers. Use the decimal.Decimal type instead:
Decimal “is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.” – excerpt from the decimal arithmetic specification.
Decimal numbers can be represented exactly. In contrast, numbers like 1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as 3.3000000000000003 as it does with binary floating point.
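For instance, using the very example the decimal documentation mentions:
>>> from decimal import Decimal
>>> 1.1 + 2.2
3.3000000000000003
>>> Decimal('1.1') + Decimal('2.2')
Decimal('3.3')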
I would use the modern str.format() method:
>>> '{}'.format(38.2551994324)
'38.2551994324'
The modulo operator for string formatting was superseded by str.format() as per PEP 3101 (though %-formatting has never actually been removed from the language).

How do I check the default decimal precision when converting float to str?

When converting a float to a str, I can specify the number of decimal places I want to display:
'%.6f' % 0.1
> '0.100000'
'%.6f' % .12345678901234567890
> '0.123457'
But when simply calling str on a float in Python 2.7, it seems to default to at most 12 significant digits:
str(0.1)
> '0.1'
str(.12345678901234567890)
> '0.123456789012'
Where is this max # of decimal points defined/documented? Can I programmatically get this number?
The number of decimals displayed is going to vary greatly, and there won't be a way to predict how many will be displayed in pure Python. Some libraries like numpy allow you to set the precision of the output.
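For example, with numpy (output shown for a recent numpy version):
>>> import numpy as np
>>> np.set_printoptions(precision=4)
>>> print(np.array([0.12345678901234568]))
[0.1235]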
This is simply because of the limitations of float representation. The relevant parts of the Python tutorial's section on floating-point arithmetic describe how Python chooses to display floats:
Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine
Python keeps the number of digits manageable by displaying a rounded value instead
Now, there is the possibility of overlap here:
Interestingly, there are many different decimal numbers that share the same nearest approximate binary fraction
The method for choosing which decimal value to display was changed in Python 3.1 (but the last sentence of the quote below implies this might be an implementation detail):
For example, the numbers 0.1 and 0.10000000000000001 are both
approximated by 3602879701896397 / 2 ** 55. Since all of these decimal
values share the same approximation, any one of them could be
displayed while still preserving the invariant eval(repr(x)) == x
Historically, the Python prompt and built-in repr() function would
choose the one with 17 significant digits, 0.10000000000000001.
Starting with Python 3.1, Python (on most systems) is now able to
choose the shortest of these and simply display 0.1.
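To see that invariant in action on any Python 3.1+:
>>> 0.1 == 0.10000000000000001   # same underlying binary fraction
True
>>> repr(0.1)                    # the shortest round-tripping string is chosen
'0.1'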
I do not believe this is defined in the Python language spec; however, the CPython implementation does specify it. The float_repr() function, which turns a float into a string, eventually calls a helper function with the 'r' formatter, which eventually calls a utility function that hardcodes the format to what comes down to format(float, '.16g'). That code can be seen in the CPython source (note that this is for Python 3.6).
This comes down to a maximum of 16 significant digits (counting both sides of the decimal point). It appears that in the Python 2.7 implementation the value was hardcoded to .12g instead, which is what produces output like:
>>> import math
>>> str(math.pi*4)
'12.5663706144'
The reasoning behind the change is somewhat lacking in documentation.
So if you are trying to find out how long a number will be when formatted for printing (under Python 2.7), simply take the length of its .12g formatting:
def len_when_displayed(n):
    return len(format(n, '.12g'))
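A quick check of the helper (Python 2.7 semantics, where str() uses 12 significant digits; math was imported above):
>>> len_when_displayed(math.pi * 4)   # len('12.5663706144')
13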
Well, if you're looking for a pure Python way of accomplishing this, you could always use something like:
>>> len(str(.12345678901234567890).split('.')[1])
12
I couldn't find it in the documentation (and will add it here if I do), but this is a workaround that can at least always return the length of the precision if you want to know it beforehand.
As you said, it always seems to be 12, even when feeding in longer floating-point numbers.
From what I was able to find, this number can be highly variable and in these cases, finding it empirically seems to be the most reliable way of doing it. So, what I would do is define a simple method like this,
def max_floating_point():
    counter = 0
    current_length = 0
    str_rep = '.1'
    while counter <= current_length:
        str_rep += '1'
        current_length = len(str(float(str_rep)).split('.')[1])
        counter += 1
    return current_length
This will return the maximum-length representation on your current system:
>>> print max_floating_point()
12
By looking at the output of converted random numbers, I have been unable to work out how the length of str() is determined, e.g. under Python 3.6.6:
>>> str(.123456789123456789123456789)
'0.12345678912345678'
>>> str(.111111111111111111111111111)
'0.1111111111111111'
You may opt for this code that actually simulates your real situation:
import random
maxdec = max(map(lambda x: len(str(x)), filter(lambda x: x > .1, [random.random() for i in range(99)]))) - 2
Here we are testing the length of ~90 random numbers in the (.1, 1) open interval after conversion (and deducting the leading 0., hence the -2).
Python 2.7.5 on a 64-bit Linux gives me 12, and Python 3.4.8 and 3.6.6 give me 17.

Python - round a float to 2 digits

I need a float variable rounded to 2 decimal places, with the result stored in a new variable (or the same one as before, it doesn't matter), but this is what happens:
>>> a
981.32000000000005
>>> b= round(a,2)
>>> b
981.32000000000005
I would need this result, but in a variable that cannot be a string, since I need to use it as a float...
>>> print b
981.32
Actually, truncating would also work; I don't need extreme precision in this case.
What you are trying to do is in fact impossible. That's because 981.32 is not exactly representable as a binary floating point value. The closest double precision binary floating point value is:
981.3200000000000500222085975110530853271484375
I suspect that this may come as something of a shock to you. If so, then I suggest that you read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
You might choose to tackle your problem in one of the following ways (both are sketched after this list):
Accept that binary floating point numbers cannot represent such values exactly, and continue to use them. Don't do any rounding at all, and keep the full value. When you wish to display the value as text, format it so that only two decimal places are emitted.
Use a data type that can represent your number exactly. That means a decimal rather than binary type. In Python you would use decimal.
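A minimal sketch of both approaches (the variable name and value are taken from the question):
>>> a = 981.32000000000005
>>> '{:.2f}'.format(a)        # option 1: keep the float, round only for display
'981.32'
>>> from decimal import Decimal
>>> Decimal('981.32').quantize(Decimal('0.01'))   # option 2: an exact decimal type
Decimal('981.32')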
Try this:
Round = lambda x, n: eval('"%.' + str(int(n)) + 'f" % ' + repr(x))
print Round(0.1, 2)
0.10
print Round(0.1, 4)
0.1000
print Round(981.32000000000005, 2)
981.32
Just indicate the number of digits you want as the second argument.
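Note that Round returns a string, and routing numbers through eval is risky. An eval-free equivalent (my sketch, not the answerer's code) would be:
def round_str(x, n):
    # build a '%.Nf' format at runtime; returns a string, like Round above
    return '%.*f' % (n, x)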
I wrote a solution to this problem. Please try:
from decimal import Decimal
from autorounddecimal.core import adround, decimal_round_digit
decimal_round_digit(Decimal("981.32000000000005")) #=> Decimal("981.32")
adround(981.32000000000005) # just wraps decimal_round_digit
More detail can be found at https://github.com/niitsuma/autorounddecimal
There is a difference between the way Python prints floats and the way it stores floats. For example:
>>> a = 1.0/5.0
>>> a
0.20000000000000001
>>> print a
0.2
It's not actually possible to store an exact representation of most decimal values as floats, as David Heffernan points out. It can be done only if, looking at the float as a fraction, the denominator is a power of 2 (such as 1/4, 3/8, 5/64). Otherwise, due to the inherent limitations of binary, the machine has to make do with an approximation.
Python recognizes this, and when you print the value, it uses the nicer representation seen above. This may make you think that Python is storing the float exactly, when in fact it is not, because that's not possible with the IEEE 754 float representation. The difference in calculation is pretty insignificant, though, so for most practical purposes it isn't a problem. If you really need those significant digits, use the decimal package.
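For the 1.0/5.0 example above, the decimal version is exact:
>>> from decimal import Decimal
>>> Decimal(1) / Decimal(5)
Decimal('0.2')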

Subtracting integers in Python: weird results

Basically, what I'm doing is downloading some data from a website using urllib. The number arrives in what I believe is byte form, so I change it to an integer by doing the following, and this seems to work fine:
real_value = (int(real_value) / 100)
Then I create another variable which should equal the difference between two values.
add_to_value = real_value - last_real_value
print(add_to_value)
The weird thing is, this sometimes works, and other times I get results with a lot of extra digits on the end, or it will say "9.999999999999996e-05".
So I'm really confused. Any ideas?
Floating-point numbers can't represent most numbers exactly. Even with a very simple example:
>>> 0.1 + 0.1
0.20000000000000001
You can see it's not exact. If you use floating-point numbers, this is just something you'll have to deal with. Alternatively, you can use Python's decimal module:
>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.1')
Decimal('0.2')
Even decimal can't represent every number exactly, but it should give you much more reasonable results when dealing with lots of base-10 operations.
Read up on the issues with floating-point numbers in Python.
Assuming you are using Python 3: you may want to use a double slash (//) for classic Python 2 'integer division' behaviour, where the result is floored:
real_value = (int(real_value) // 100)
The weird values are normal and should be correct. This is because you are using floating-point arithmetic. You can always limit the precision of the results by, e.g., setting the number of digits that are used for the representation.
Refer to: http://en.wikipedia.org/wiki/Floating_point

How do I force Python to keep an integer out of scientific notation

I am trying to write a method in Python 3.2 that encrypts a phrase and then decrypts it. The problem is that the numbers are so big that when Python does math with them it immediately converts them into scientific notation. Since my code requires all the numbers to function, scientific notation is not useful.
What I have is:
coded = ((eval(input(':'))+1213633288469888484)/2)+1042
Basically, I just get a number from the user and do some math to it.
I have tried format() and a couple other things but I can't get them to work.
EDIT: I use only even integers.
In Python 3, / does real division (i.e. floating point). To get integer division, you need to use //. In other words, 100/2 yields 50.0 (a float) whereas 100//2 yields 50 (an integer).
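A quick interactive check (any Python 3):
>>> 100 / 2
50.0
>>> 100 // 2
50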
Your code probably needs to be changed as:
coded = ((eval(input(':'))+1213633288469888484)//2)+1042
As a cautionary note, however, you may want to consider using int instead of eval:
coded = ((int(input(':'))+1213633288469888484)//2)+1042
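With int() and //, everything stays an integer, and Python's arbitrary-precision integers never fall into scientific notation, no matter how large they get. A sketch with a made-up input value:
>>> (12345678901234567890 + 1213633288469888484) // 2 + 1042
6779656094852229229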
If you know that the floating point value is really an integer, or you don't care about dropping the fractional part, you can just convert it to an int before you print it.
>>> print(1.2e16)
1.2e+16
>>> print(int(1.2e16))
12000000000000000
