What does b != a & 1 do? - python

a += b != a & 1
I came across this statement in a code, but I'm not sure what the final part (!= a & 1) of the code does. What does that do?

First you have to consult the language reference to work out the order of operations here. With the parentheses made explicit, this is:
a += (b != (a & 1))
The a & 1 is a bitwise AND (yielding 1 if a is odd and 0 otherwise); the result of that is compared to b, and the resulting boolean is added to a. Of course, for that last step to be meaningful, a needs to be of a type that supports it (integer types do, for example, by treating True as having the value 1 and False the value 0).
To sum it up: if b==0 it will increase a when a is odd, and if b==1 it will increase a when a is even. If b is neither 0 nor 1, the comparison is always true, so it will always increase a.
I noticed that some of the comments missed the precedence order, and even in Python it can sometimes be confusing (especially if you've already internalized the C rules). As a rule of thumb I'd recommend that you explicitly place parentheses around sub-expressions if you're in the slightest doubt, or even break the expression into separate statements. Normally the interpreter will make the best of it anyway.
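A quick trace makes the behavior concrete. This sketch just evaluates the original statement for a few (a, b) pairs:

```python
# Demonstrate a += (b != (a & 1)) for a few (a, b) pairs.
# The comparison yields True (1) or False (0), which is added to a.
for a, b in [(4, 0), (5, 0), (4, 1), (5, 1), (4, 7)]:
    old = a
    a += b != (a & 1)
    print(f"a={old}, b={b} -> a={a}")
```

Running this shows a incremented exactly when b differs from a's lowest bit: (5, 0) and (4, 1) increment, (4, 0) and (5, 1) do not, and (4, 7) increments because 7 matches neither 0 nor 1.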

Related

Difference of two same values is not zero

When I compare two numbers in Python, even though they are exactly the same, the difference between them is not zero, but something really small (around 10^(-16)).
e.g.
if A == B:
    print('We are the same')
Nothing happens.
But:
if A - B < 10^(-16):
    print(A - B)
It prints the difference.
Where is the problem?
In Python, the ^ operator performs a bitwise exclusive or, so 10^(-16) means 10 XOR (-16), which evaluates to -6 -- not the tiny threshold you intended.
If you wanted exponentiation, you have to write 10**(-16), and then your check works as expected.
This means that your code should be:
if A - B < 10**(-16):
    print(A - B)
    # OUTPUT: 0
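To see the two operators side by side, and a more robust way to compare floats using the standard library's math.isclose:

```python
import math

# ^ is bitwise XOR, ** is exponentiation -- unrelated operators.
print(10 ^ -16)    # XOR of the two's-complement bit patterns: -6
print(10 ** -16)   # exponentiation: 1e-16

# For float comparisons, math.isclose avoids hand-picked thresholds:
A = 0.1 + 0.2
B = 0.3
print(A == B)               # False -- classic floating-point rounding
print(math.isclose(A, B))   # True
```

math.isclose uses a relative tolerance by default, which scales better than a fixed absolute cutoff like 10**(-16).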

Pipe character in object creation

I have run into the following code:
f = wx.Frame(None, -1, 'Window Title', style = wx.MAXIMIZE_BOX | wx.SYSTEM_MENU)
I have read other answers about this on Stack Overflow that said the '|' had something to do with bitwise operations but I don't think that is the use here and if it is I don't understand it. Can someone explain what this character is used for in this situation?
Bitwise or really is the use here.
wx.MAXIMIZE_BOX, wx.SYSTEM_MENU, etc. are all integer constants with only a single bit set. (A different bit for each constant.) So, you can bitwise-or them together to get a collection of bits.
This is almost exactly like doing a union operation on a set. In fact, set union is also spelled | in Python. The difference is that when you're using single-bit integers, the whole set fits into a single fixed-sized integer, instead of being stored as a collection of a bunch of separate values. This is usually not so important for Python, but is—or at least used to be—for the low-level windowing APIs (mostly written in C) that wx deals with.
Let's take a simpler example:
>>> a = 0b00000001
>>> b = 0b00000010
>>> c = 0b00000100
>>> d = 0b00001000
>>> acd = a | c | d
>>> bin(acd) # notice that the a, c, and d bits are all set, but no others
'0b1101'
>>> bool(acd & c) # is c an element of acd?
True
>>> bool(acd & b) # is b an element of acd?
False
So I can pass around a set of 8 separate boolean values in a single byte. Well, this being Python, that "single byte" is still an 8-byte pointer to a 28-byte int object whose underlying value has a minimum size of 4 bytes, so I've really just made things slower and more complicated for minimal space benefit. But still, if you need to store zillions of these…
Anyway, just as we're using |, bitwise or, to mean union, we're using &, bitwise and, to mean intersection.
That bool(… & …) may be a bit confusing, until you realize that the intersection of a set with a single element is either that single element (if it's a member of the set), or 0 (if it's not). In Python, 0 is always falsey, all other numbers are always truthy.
As tripleee points out in the comments, when your values are all single-bit values, and there are no repeats, | and + actually do the same thing:
>>> bin(a | c | d)
'0b1101'
>>> bin(a + c + d)
'0b1101'
Just think about how you add things up on paper and carry the 1. Bitwise or is like adding up the columns and ignoring the carry. So, when there is no carry (because we don't have any bits showing up more than once), they do the same thing. Of course once that's no longer true, carrying the 1 and ignoring the 1 are no longer the same:
>>> bin(acd | c)
'0b1101'
>>> bin(acd + c)
'0b10001'
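For new code, Python's standard library also offers a more self-documenting way to work with bit flags. Here is a minimal sketch using enum.Flag -- the Style class and its members are made-up stand-ins, not real wx constants:

```python
from enum import Flag, auto

class Style(Flag):
    MAXIMIZE_BOX = auto()   # 0b001
    SYSTEM_MENU = auto()    # 0b010
    RESIZABLE = auto()      # 0b100

style = Style.MAXIMIZE_BOX | Style.SYSTEM_MENU

print(Style.SYSTEM_MENU in style)   # True  -- membership reads naturally
print(Style.RESIZABLE in style)     # False
print(style.value)                  # 3 -- still just an int underneath
```

Under the hood this is exactly the single-bit-constants trick: auto() assigns 1, 2, 4, ..., and | and & are still bitwise operations on the underlying int.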

Infinite loop while adding two integers using bitwise operations?

I am trying to solve a problem, using python code, which requires me to add two integers without the use of '+' or '-' operators. I have the following code which works perfectly for two positive numbers:
def getSum(self, a, b):
    while a & b:
        x = a & b
        y = a ^ b
        a = x << 1
        b = y
    return a ^ b
This piece of code works perfectly if the input is two positive integers or two negative integers but it fails when one number is positive and other is negative. It goes into an infinite loop. Any idea as to why this might be happening?
EDIT: Here is the link discussing the code fix for this.
Python 3 has arbitrary-precision integers ("bignums"). This means that anytime x is negative, x << 1 will make x a negative number with twice the magnitude. Zeros shifting in from the right will just push the number larger and larger.
In two's complement, positive numbers have a 0 in the highest bit and negative numbers have a 1 in the highest bit. That means that, when only one of a and b is negative, the top bits of a and b will differ. Therefore, x will be positive (1 & 0 = 0) and y will be negative (1 ^ 0 = 1). Thus the new a will be positive (x<<1) and the new b will be negative (y).
Now: arbitrary-precision negative integers conceptually have an infinite number of leading 1 bits. So a becomes a larger and larger positive number, shifted left each iteration, while b keeps its endless run of leading 1 bits available for the bitwise & and ^ with a. Whichever bit of a is set lines up with one of those leading 1 bits of b, so a & b is always nonzero, and the loop runs forever.
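You can watch the divergence happen by running the buggy loop with an iteration cap (the cap is only a safety net added for this demonstration):

```python
def trace_get_sum(a, b, max_iters=5):
    """Run the buggy add loop for a few iterations, printing a and b."""
    for _ in range(max_iters):
        if not (a & b):
            break
        x = a & b
        y = a ^ b
        a, b = x << 1, y
        print(f"a={a}, b={b}")
    return a, b

trace_get_sum(1, -1)
```

For (1, -1) this prints a=2, b=-2, then a=4, b=-4, and so on: a doubles forever while b stays negative, so the real loop never terminates.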
I faced the same problem.
More precisely: you get the infinite loop only when one number is positive, the other is negative, and positive >= abs(negative).
As @cvx said, it happens because of the extra carry bit: other languages ignore the overflow, but Python keeps that additional 1, so the number grows and grows and b never becomes zero.
So the solution is to use a mask to ignore those extra bits:
def getSum(a: int, b: int) -> int:
    mask = 0xffffffff
    while (b & mask) > 0:
        carry = a & b
        cur_sum = a ^ b
        a = cur_sum
        b = carry << 1
    return a & mask if b > 0 else a
The last line is also important! Python keeps those extra high bits on a as well, and with them set the value would read as a huge (or negative-looking) number. We should drop those bits and keep only the low 32 bits of a as the positive result.
More information here: https://leetcode.com/problems/sum-of-two-integers/discuss/489210/Read-this-if-you-want-to-learn-about-masks
I'm guessing that this is a homework question, so I don't want to just give you a function that works --- you'll learn more by struggling with it.
The issue stems from the way that negative integers are stored. For illustration purposes, let's pretend that you're dealing with 4-bit signed integers (instead of 32-bit signed integers, or whatever). The number +1 is 0001. The number -1 is 1111. You should be able to sit down with a pen and paper and manually run your function using these two numbers. The answer, of course, should be 0000, but I think you'll discover what's going wrong with your code by working this simplified case with a pen and paper.
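If pen and paper feels tedious, you can simulate the 4-bit case in code by masking to 4 bits after every step. This is a sketch of the fixed-width behavior the answer describes, not a drop-in fix for the original function:

```python
MASK4 = 0b1111  # pretend we only have 4-bit registers

def add4(a, b):
    """Add two 4-bit two's-complement values the way hardware would."""
    a &= MASK4
    b &= MASK4
    while b:
        carry = (a & b) << 1
        a = (a ^ b) & MASK4   # masking discards the overflow carry
        b = carry & MASK4
    # reinterpret the 4-bit pattern as a signed value
    return a - 16 if a & 0b1000 else a

print(add4(1, -1))   # 0001 + 1111 -> 0000, i.e. 0
print(add4(3, -5))   # 0011 + 1011 -> 1110, i.e. -2
```

Because the mask throws away the carry out of the top bit, the loop always terminates, which is exactly what a fixed-width language gives you for free.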

Pythonic way of checking for 0 in if statement?

In coding a primality tester, I came across an interesting thought. When you want to do something if the result of an operation turns out to be 0, which is the better ('pythonic') way of tackling it?
# option A - comparison
if a % b == 0:
    print('a is divisible by b')

# option B - is operator
if a % b is 0:
    print('a is divisible by b')

# option C - boolean not
if not a % b:
    print('a is divisible by b')
PEP 8 says that comparisons to singletons like None should be done with the is operator. It also says that checking for empty sequences should use not, and not to compare boolean values with == or is. However, it doesn't mention anything about checking for a 0 as a result.
So which option should I use?
Testing against 0 is (imo) best done by testing against 0. This also indicates that there might be other values than just 0 and 1.
If the called function really only returns 0 on success and 1 on fail to say Yes/No, Success/Failure, True/False, etc., then I think the function is the problem and should (if applicable) be fixed to return True and False instead.
Just a personal take: I used to prefer the not a % b way because it is highly readable. But to lower the confusion level in the code, I now use == 0, as it expresses exactly what you intend to test. It's the "care for debugging" approach.
0 isn't guaranteed to be a singleton, so don't use is to test against it. Currently CPython reuses small integers, so there is probably only one int with the value 0 (plus a long if you're still on Python 2.x), and any number of float zeros, not to mention False, all of which compare equal to 0. Some earlier versions of Python, before a separate bool type existed, used a different int zero for the result of comparisons. (Recent CPython versions even emit a SyntaxWarning for is comparisons against literals.)
Use either == (which would be my preference) or just the not, whichever you prefer.
A and C are both valid and very pythonic.
B is not, because
0 semantically is not a singleton (it is in CPython, but that is an implementation detail).
It will not work with float a or b.
It is actually possible that this will not work in some other implementation of Python.
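A quick sketch of why the three spellings are not interchangeable, especially once floats enter the picture:

```python
a, b = 10, 5

print(a % b == 0)   # True -- explicit, works for any numeric type
print(not a % b)    # True -- relies on 0 being falsey

# `is` compares identity, not value, so it breaks across types
# (and newer CPython emits a SyntaxWarning for `is` with a literal):
print((10.0 % 5) == 0)   # True  -- 0.0 compares equal to 0
print((10.0 % 5) is 0)   # False -- 0.0 is a different object than the int 0
```

Option B silently gives the wrong answer as soon as the remainder is a float, which is reason enough to avoid it.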

Python ( or general programming ). Why use <> instead of != and are there risks?

I think if I understand correctly, a <> b is the exact same thing functionally as a != b, and in Python not a == b, but is there reason to use <> over the other versions? I know a common mistake for Python newcomers is to think that not a is b is the same as a != b or not a == b.
Do similar misconceptions occur with <>, or is it exactly the same functionally?
Does it cost more in memory, processor time, etc.?
<> in Python 2 is an exact synonym for != -- no reason to use it, no disadvantages either except the gratuitous heterogeneity (a style issue). It's been long discouraged, and has now been removed in Python 3.
Just a pedantic note: the <> operator is in some sense misnamed (misdenoted?). a <> b might naturally be interpreted as meaning a < b or a > b (evaluating a and b only once, of course), but since not all orderings are total orderings, this doesn't match the actual semantics. For example, 2.0 != float('nan') is true, but 2.0 < float('nan') or 2.0 > float('nan') is false.
The != operator isn't subject to such possible misinterpretation.
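The NaN example, written out in Python 3 syntax (using != since <> is gone):

```python
nan = float('nan')

print(2.0 != nan)               # True  -- NaN is unequal to everything
print(2.0 < nan or 2.0 > nan)   # False -- NaN is unordered
print(nan != nan)               # True  -- NaN is even unequal to itself
```

So "less than or greater than" and "not equal" genuinely disagree for unordered values, which is the misinterpretation the note warns about.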
For an interesting take (with poetry!) on the decision to drop <> for Python 3.x, see Requiem for an operator.
You shouldn't use <> in Python: it exists only in Python 2 and was removed entirely in Python 3.
