Something extremely strange happens when I do some ordinary calculations in Python. If I do a multiplication without brackets, it gives the right result, but if I put some things into brackets, the total product becomes zero.
For those who don't believe (I know that it sounds strange):
>>> print( 1.1*1.15*0.8*171*15625*24*(60/368*0.75)/1000000 )
0.0
>>> print( 1.1*1.15*0.8*171*15625*24*60/368*0.75/1000000 )
7.93546875
The only difference between both multiplications is that in the first there are brackets around 60/368*0.75.
How is this possible, and what can I do about it?
If you divide two integers a and b in Python 2, the result is the floor of the true quotient, so if 0 <= a < b we get a/b == 0.
With brackets you have the operation 60/368 which gives 0.
But without brackets, 60 is first multiplied by everything before it, which produces a float, so dividing that value by 368 does not yield 0.
Parentheses change the order of evaluation, and the expression inside them is evaluated first. Here, since 60 and 368 are both integer literals, in Python 2 they are divided using integer division, meaning only the "whole" part is kept. Since 60 is smaller than 368, their integer division is 0. From there on, the result is obvious: you have a series of multiplications and divisions in which one of the factors is 0, so the end result is also 0.
To prevent this, you could express the numbers as floating point literals, 60.0 and 368.0, as sketched below. (Technically, just using 60.0 would be sufficient here, but for consistency's sake I recommend representing all the numbers as floating point literals.)
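A minimal sketch of both fixes (this assumes Python 2, where / between two ints floors; the last printed digits may differ slightly between forms, since float multiplication is not associative):
>>> print( 1.1*1.15*0.8*171*15625*24*(60.0/368*0.75)/1000000 )  # ~7.9354..., no longer 0.0
>>> from __future__ import division   # makes / true division from here on
>>> print( 1.1*1.15*0.8*171*15625*24*(60/368*0.75)/1000000 )    # ~7.9354...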
I'd expect bin(~0b111000) to return the value 0b000111, because to my understanding the NOT operation flips each bit.
I keep reading that "~x: Returns the complement of x - the number you get by switching each 1 for a 0 and each 0 for a 1" so I don't exactly know where my logic breaks down.
Why does it show -(x + 1) instead of just literally flipping all bits?
It is flipping all the bits!
You seem to think of 0b111000 as a 6-bit value. That is not the case. All integers in Python 3 have (at least conceptually) infinitely many bits. So imagine 0b111000 as shorthand for 0b[...]00000111000, where [...] stands for infinitely many zeros.
Now, flipping all the bits results in 0b[...]11111000111. Notice how in this case the [...] stands for infinitely many ones, so mathematically this gets interesting. And we simply cannot print infinitely many ones, so there is no way to directly print this number.
However, since this is two's complement, it simply denotes the number which, if we add 0b111001 to it, gives 0. And that is why you see -0b111001.
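If what you actually want is the six-bit flip, mask the result down to the low six bits; a small sketch:
>>> x = 0b111000
>>> ~x == -(x + 1)        # flipping infinitely many bits negates and subtracts one
True
>>> bin(~x & 0b111111)    # keep only the low six bits
'0b111'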
This question already has answers here: Is floating point math broken? (31 answers)
When I execute these two lines, I get two different results. Why?
The item variable is of type numpy.float32:
print(item)
print(item * 1)
output:
0.0006
0.0006000000284984708
I suspect this is somehow related to the numpy.float32 type?
If I try to convert the numpy.float32 to float, I get this:
item = float(item)
print(item)
output:
0.0006000000284984708
What you observe is unfortunately not avoidable. It has to do with the internal representation of a float number; in this case it doesn't even stem from calculation issues, as suggested in the comments here.
(Binary) float numbers, as used by most languages, are represented as (+/- mantissa) * 2^exponent.
The important part here is the mantissa, which cannot represent all numbers exactly. The value ranges of the mantissa and the exponent depend on the bit length of the float you use. The exponent determines the largest and smallest representable numbers, while the mantissa determines the precision of the representable numbers (loosely speaking, their "granularity").
So for your question, the mantissa is the more important part. As said, it is like a bit array. In a byte, a bit has a value of 1, 2, 4, ... depending on its position.
In the mantissa it is similar, but instead of 1, 2, 4, ..., the bits have the values 1/2, 1/4, 1/8, ...
So if you want to represent 0.75, the bits with the values 1/2 and 1/4 would be set in your mantissa, and the exponent would be 0. That's it, in short.
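As a quick sanity check of that arithmetic in plain Python (3.x, where / is true division):
>>> 1/2 + 1/4      # the bits for 1/2 and 1/4, exponent 0
0.75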
Now, if you try to represent a value like 0.11 as a float, you will notice that it is not possible exactly. No matter whether you use float32 or float64:
import numpy as np
item = np.float64('0.11')
print('{0:2.60f}'.format(item))
output: 0.110000000000000000555111512312578270211815834045410156250000
item = np.float32('0.11')
print('{0:2.60f}'.format(item))
output: 0.109999999403953552246093750000000000000000000000000000000000
By the way, if you want to represent the value 0.25 (1/4), it is not the bit for 1/4 that gets set; instead, the bit for 1/2 is set and the exponent is set to -1, so 1/2 * 2^(-1) is again 0.25. This is done in a normalization process.
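You can inspect this normalized form with the standard library's math.frexp, which splits a float into a mantissa in [0.5, 1) and a power of two:
>>> import math
>>> math.frexp(0.25)   # 0.5 * 2**-1 == 0.25
(0.5, -1)
>>> math.frexp(0.75)   # 0.75 * 2**0 == 0.75
(0.75, 0)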
But if you want to increase the precision, you could use float64, as I did in my example. It will reduce this phenomenon a bit.
It seems that some systems also support decimal-based floats. I haven't worked with them, but they would presumably avoid this kind of problem (though not the calculation issues mentioned in the duplicate linked above).
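Python itself ships such a type in the standard library's decimal module; a minimal sketch (note you construct it from a string, since a float literal would already be inexact):
>>> from decimal import Decimal
>>> Decimal('0.11')                      # stored exactly in base 10
Decimal('0.11')
>>> Decimal('0.11') + Decimal('0.11')
Decimal('0.22')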
The reason you see two different results is that your variable item is a numpy.float32, as you said. Python itself uses 64-bit floating point numbers, so
print(item)
shows the (lower-precision) 32-bit result, while
print(item * 1)
first multiplies by 1, which is a Python integer. An integer cannot be multiplied with a float directly, so Python converts both to a common type: 64-bit floats, since you do not specify anything else. The result is then a 64-bit float.
If you specify the 1 with another type,
print(item * numpy.float32(1))
returns the same result as print(item), because there is no type conversion and everything can stay in 32 bit.
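You can verify the types involved directly; a sketch (the exact promotion rules for mixed scalar operations have changed across NumPy versions, so item * 1 may or may not stay 32-bit on your install):
>>> import numpy as np
>>> item = np.float32(0.0006)
>>> type(item)
<class 'numpy.float32'>
>>> type(item * np.float32(1))   # same dtype on both sides: no promotion
<class 'numpy.float32'>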
You haven't specified exactly what the problem is, beyond "the numbers don't match". How you handle floating point depends a little on your application, but in general you can't rely on comparing floating point numbers exactly. There are a few obvious exceptions: 0 times anything should be 0, and 1 times anything should be unchanged (there's more, but let's stop there). So why is 1*item different from item?
>>> item = np.float32(0.0006)
>>> item
0.0006
>>> item*1
0.0006000000284984708
Right, this seems to contradict common sense. But comparing printed representations is the wrong way to check; do an actual comparison and everything is still alright with the world.
>>> item == item*1
True
The numbers are the same. This should make sense: increasing the precision of a floating point number shouldn't change its value, and multiplying by 1 should not change a number.
So, what's going on? NumPy converts an np.float32 value to a Python float, which prints with nice rounding. However, item*1 is an np.float64, which by default shows more significant figures. If you print both of these with the same number of significant figures, you can see there's no real difference.
>>> "{:0.015f}".format(item*1)
'0.000600000028498'
>>> "{:0.015f}".format(item)
'0.000600000028498'
So that's it. What python prints isn't meant to be a completely accurate representation of numbers. The other answers get into why 0.0006 can't be represented exactly.
Edit
Rounding doesn't change this; it just converts item to a Python float, which prints with rounding.
>>> "{:0.015f}".format(round(item, 4))
'0.000600000028498'
I cannot seem to find the logic in this, but I have made a workaround: simply converting the numpy.float32 to float and rounding the numbers to a specific number of decimals.
This question already has answers here: Negative integer division surprising result (5 answers)
I just randomly tried this out:
>>> int(-1/2)
-1
>>> int(-0.5)
0
Why are the results different?
Try this:
>>> -1/2
-1
>>> -0.5
-0.5
The difference is that integer division (the former) results in an integer in Python 2, instead of a float like the second number. You're using int on two different numbers, so you'll get different results. If you use floats from the start, you'll see the difference disappear.
>>> -1.0/2.0
-0.5
>>> int(-1.0/2.0)
0
>>> int(-0.5)
0
The difference you see is due to how rounding works in Python. The int() function truncates towards zero, as noted in the docs:
If x is floating point, the conversion truncates towards zero.
On the other hand, when both operands are integers, the / acts as though a mathematical floor operation was applied:
Plain or long integer division yields an integer of the same type; the result is that of mathematical division with the ‘floor’ function applied to the result.
So, in your first case, -1 / 2 results, theoretically, in -0.5, but because both operands are ints, Python essentially floors the result, which makes it -1. int(-1) is -1, of course. In your second example, int is applied directly to -0.5, a float, and so int truncates towards 0, resulting in 0.
(This is true of Python 2.x, which I suspect you are using.)
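A quick Python 2 session showing both paths (the int() in int(-1/2) receives an already-floored -1):
>>> -1 / 2        # Python 2: integer operands, floor division
-1
>>> int(-1 / 2)
-1
>>> int(-0.5)     # float operand, truncation towards zero
0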
This is a result of two things:
Python 2.x does integer division when you divide two integers;
Python uses "Floored" division for negative numbers.
Negative integer division surprising result
http://python-history.blogspot.com/2010/08/why-pythons-integer-division-floors.html
Force at least one number to float, and the results will no longer surprise you.
assert int(-1.0/2) == 0
As others have noted, in Python 3.x the / operator performs true division even on integers, always returning a float; floor division is available with //.
As TheSoundDefense mentioned, it depends upon the version. In Python 3.3.2:
>>> int(-1/2)
0
>>> int(-0.5)
0
The int() function truncates towards 0, unlike floor(), which rounds down to the next lower integer.
So int(-0.5) is clearly 0.
As for -1/2: in Python 2, -1/2 is actually equal to -1! Rounding down to the next lower integer gives -1. In Python 2, -a/b != -(a/b); in fact, -1/2 equals floor(-1.0 / 2.0), which is -1.
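The contrast between truncating and flooring is easy to see directly (shown here in Python 3, where math.floor returns an int):
>>> import math
>>> int(-0.5)          # truncates towards zero
0
>>> math.floor(-0.5)   # rounds down
-1
>>> -1 // 2            # floor division behaves the same in Python 2 and 3
-1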
'0424242' * -5
I understand how multiplying strings works fundamentally, but I just stumbled on the strange fact that multiplying by a negative number yields an empty string, and thought it was interesting. I wanted to know the deeper why beneath the surface.
Anyone have a good explanation for this?
The docs on s * n say:
Values of n less than 0 are treated as 0 (which yields an empty sequence of the same type as s).
What would you expect from multiplying a string by a negative integer?
On the other hand
# Display results in nice table
print(keyword1, " "*(60-len(keyword1)), value1)
print(keyword2, " "*(60-len(keyword2)), value2)
without having to worry whether keyword1 or keyword2 is longer than 60 characters is very handy.
This behavior is probably defined to be consistent with list(range(-5)) being []. In fact, the latter may be exactly what underlies the behavior you observe.
That's literally part of the definition of the operation:
The * (multiplication) operator yields the product of its arguments. The arguments must either both be numbers, or one argument must be an integer and the other must be a sequence. In the former case, the numbers are converted to a common type and then multiplied together. In the latter case, sequence repetition is performed; a negative repetition factor yields an empty sequence.
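The same rule applies to every sequence type, not just strings:
>>> '0424242' * -5
''
>>> [1, 2, 3] * -2
[]
>>> (0,) * -1
()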
My code:
import math
import cmath
print "E^ln(-1)", cmath.exp(cmath.log(-1))
What it prints:
E^ln(-1) (-1+1.2246467991473532E-16j)
What it should print:
-1
(For Reference, Google checking my calculation)
According to the documentation at python.org, cmath.exp(x) returns e^x, and cmath.log(x) returns ln(x), so unless I'm missing a semicolon or something, this is a pretty straightforward three-line program.
When I test cmath.log(-1), it returns πi (technically 3.141592653589793j), which is right. Euler's identity says e^(πi) = -1, yet Python says that when I raise e to the power πi, I get some kind of crazy talk (specifically -1+1.2246467991473532E-16j).
Why does Python hate me, and how do I appease it?
Is there a library to include to make it do math right, or a sacrifice I have to offer to van Rossum? Is this some kind of floating point precision issue perhaps?
The big problem I'm having is that the precision is off enough that some nonzero values appear closer to 0 than the actual zeros in the final function (not shown), so boolean tests like if x == 0 are worthless, and so are local minima, etc.
For example, in an iteration below:
X = 2 Y= (-2-1.4708141202500006E-15j)
X = 3 Y= -2.449293598294706E-15j
X = 4 Y= -2.204364238465236E-15j
X = 5 Y= -2.204364238465236E-15j
X = 6 Y= (-2-6.123233995736765E-16j)
X = 7 Y= -2.449293598294706E-15j
X = 3 and X = 7 are both actually equal to zero, yet they appear to have the largest imaginary parts of the bunch, and X = 4 and X = 5 have lost their real parts entirely.
Sorry for the tone. Very frustrated.
As you've already demonstrated, cmath.log(-1) doesn't return exactly i*pi. Of course, returning pi exactly is impossible as pi is an irrational number...
Now you raise e to the power of something that isn't exactly i*pi and you expect to get exactly -1. However, if cmath returned that, it would be giving you an incorrect result. (After all, exp(i*pi + epsilon) shouldn't equal -1 -- Euler makes no such claim!)
For what it's worth, the result is very close to what you expect -- the real part is -1 with an imaginary part close to floating point precision.
It appears to be a rounding issue. While -1+1.22460635382e-16j is not an exact -1, 1.22460635382e-16j is pretty close to zero. I don't know how you could fix this in general, but a quick and dirty way would be rounding the number to a certain number of digits after the decimal point (14, maybe?).
Anything smaller in magnitude than about 10^-15 can normally be treated as zero. Computer calculations carry an error that is often in that range. Floating point representations are approximations, not exact values.
The problem is inherent to representing irrational numbers (like π) in finite space as floating points.
The best you can do is filter your result and set it to zero if its value is within a given range.
>>> tolerance = 1e-15
>>> def clean_complex(c):
... real,imag = c.real, c.imag
... if -tolerance < real < tolerance:
... real = 0
... if -tolerance < imag < tolerance:
... imag = 0
... return complex(real,imag)
...
>>> clean_complex( cmath.exp(cmath.log(-1)) )
(-1+0j)
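On Python 3.5+ you could also skip the hand-rolled cleanup when all you need is a comparison, and use cmath.isclose with an absolute tolerance:
>>> import cmath
>>> cmath.isclose(cmath.exp(cmath.log(-1)), -1, abs_tol=1e-12)
True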