Sum of Two Integers without using "+" operator in Python

I need some help understanding Python solutions to LeetCode 371, "Sum of Two Integers". I found https://discuss.leetcode.com/topic/49900/python-solution/2 to be the most upvoted Python solution, but I am having trouble understanding it.
How to understand the usage of "% MASK" and why "MASK = 0x100000000"?
How to understand "~((a % MIN_INT) ^ MAX_INT)"?
When the sum goes beyond MAX_INT, the function yields a negative value (for example getSum(2147483647, 2) = -2147483647); isn't that incorrect?
class Solution(object):
    def getSum(self, a, b):
        """
        :type a: int
        :type b: int
        :rtype: int
        """
        MAX_INT = 0x7FFFFFFF
        MIN_INT = 0x80000000
        MASK = 0x100000000
        while b:
            a, b = (a ^ b) % MASK, ((a & b) << 1) % MASK
        return a if a <= MAX_INT else ~((a % MIN_INT) ^ MAX_INT)

Let's disregard the MASK, MAX_INT and MIN_INT for a second.
Why does this black magic bitwise stuff work?
The reason why the calculation works is because (a ^ b) is "summing" the bits of a and b. Recall that bitwise xor is 1 when the bits differ, and 0 when the bits are the same. For example (where D is decimal and B is binary), 20D == 10100B, and 9D = 1001B:
10100
01001
-----
11101
and 11101B == 29D.
But, if you have a case with a carry, it doesn't work so well. For example, consider adding (bitwise xor) 20D and 20D.
10100
10100
-----
00000
Oops. 20 + 20 certainly doesn't equal 0. Enter the (a & b) << 1 term. This term represents the "carry" for each position. On the next iteration of the while loop, we add in the carry from the previous loop. So, if we go with the example we had before, we get:
# First iteration (a is 20, b is 20)
10100 ^ 10100 == 00000 # makes a 0
(10100 & 10100) << 1 == 101000 # makes b 40
# Second iteration:
000000 ^ 101000 == 101000 # Makes a 40
(000000 & 101000) << 1 == 0000000 # Makes b 0
Now b is 0, we are done, so return a. This algorithm works in general, not just for the specific cases I've outlined. Proof of correctness is left to the reader as an exercise ;)
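If it helps to see the carry propagation concretely, here is a minimal sketch of my own (Python 3, non-negative inputs only, since plain Python ints would loop forever on negatives) that prints each iteration:
def get_sum_trace(a, b):
    # Repeatedly replace (a, b) with (sum-without-carry, carry) and show each step.
    while b:
        print("a = {:>6b}, b = {:>6b}".format(a, b))
        a, b = a ^ b, (a & b) << 1
    return a

print(get_sum_trace(20, 20))  # prints the two iterations above, then 40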
What do the masks do?
All the masks are doing is ensuring that the value stays a 32-bit integer, because the code's comments state that a, b, and the return value are of type int. The maximum possible 32-bit signed int is 2147483647, so if you add 2 to this value, like you did in your example, the int overflows and you get a negative value. You have to force this behaviour in Python, because Python ints are arbitrary precision and don't respect the 32-bit boundary that strongly typed languages like Java and C++ have defined. Consider the following:
def get_sum(a, b):
    while b:
        a, b = (a ^ b), (a & b) << 1
    return a
This is the version of getSum without the masks.
print get_sum(2147483647, 2)
outputs
2147483649
while
print Solution().getSum(2147483647, 2)
outputs
-2147483647
due to the overflow.
The moral of the story is the implementation is correct if you define the int type to only represent 32 bits.
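As a minimal sketch of what the mask and the final ~((a % MIN_INT) ^ MAX_INT) conversion accomplish (my own helper, not part of the original solution), here is the same 32-bit reinterpretation written out directly:
def to_int32(x):
    # Keep only the low 32 bits, then reinterpret bit 31 as the sign bit
    # (two's complement), which is what the masked solution does implicitly.
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x > 0x7FFFFFFF else x

print(to_int32(2147483649))  # -2147483647, the "overflowed" 32-bit result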

Here is a solution that works in every case, i.e. for every sign combination of the two inputs: (-, -), (-, +), (+, -), (+, +).
Python's default int is not 32-bit; it is arbitrary precision. So, to prevent overflow and to avoid running into an infinite loop, we use a 32-bit mask (0xffffffff) to limit the int size to 32 bits.
a, b = -1, -1
mask = 0xffffffff
while (b & mask):
    carry = a & b
    a = a ^ b
    b = carry << 1
print((a & mask) if b > 0 else a)
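Wrapped in a function (my own wrapping of the snippet above, not part of the original answer), it can be checked on a couple of sign combinations:
def get_sum32(a, b):
    mask = 0xffffffff
    while b & mask:
        carry = a & b
        a = a ^ b
        b = carry << 1
    return (a & mask) if b > 0 else a

print(get_sum32(-1, -1))  # -2
print(get_sum32(-1, 1))   # 0
print(get_sum32(2, 3))    # 5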

For me, Matt's solution got stuck in an infinite loop with the inputs Solution().getSum(-1, 1).
So here is another (much slower) approach based on math:
import math

def getSum(a: int, b: int) -> int:
    return int(math.log2(2**a * 2**b))

Related

Does a bitwise "and" operation prepend zeros to the binary representation?

When I use the bitwise and operator (&) with the number 1 to find out whether a number x is odd or even (x & 1), does the interpreter change the binary representation of 1 according to the binary representation of x? For example:
2 & 1 -> 10 & 01 -> then perform comparison bitwise
5 & 1 -> 101 & 001 -> then perform comparison bitwise
100 & 1 -> 1100100 & 0000001 -> then perform comparison bitwise
Does it prepend zeros to the binary representation of 1 to perform the bitwise and operation?
Looking at the CPython implementation, it looks like it compares the digits according to the size of the right argument. So in this case the example above actually works like this:
2 & 1 -> 10 & 1 -> 0 & 1 -> then perform comparison bitwise
5 & 1 -> 101 & 1 -> 1 & 1 -> then perform comparison bitwise
100 & 1 -> 1100100 & 1 -> 0 & 1 -> then perform comparison bitwise
Is my understanding right? I'm confused because of this image from Geeks for Geeks.
Conceptually, adding zeros to the shorter number gives the same result as ignoring excess digits in the longer number. They both do the same thing. Padding, however, is inefficient, so in practice you wouldn't want to do it.
The reason is that anything ANDed with 0 is 0. If you pad the shorter number to match the longer one and then AND the extra bits, they will all come out as 0. It works, but since you know the padded bits only produce extra zeros, it's more efficient to ignore them and iterate only over the length of the shorter number.
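A small illustration of that equivalence (my own, using explicit bit lists rather than Python ints, since Python hides the padding entirely):
# 100 & 1, written out bit by bit (most significant bit first)
x_bits = [1, 1, 0, 0, 1, 0, 0]   # 100
y_bits = [1]                     # 1

# Option 1: pad the shorter operand with leading zeros and AND every position.
padded = [0] * (len(x_bits) - len(y_bits)) + y_bits
full = [a & b for a, b in zip(x_bits, padded)]

# Option 2: only AND the overlapping low bits (what CPython effectively does).
overlap = [a & b for a, b in zip(x_bits[-len(y_bits):], y_bits)]

print(full)     # [0, 0, 0, 0, 0, 0, 0] -> value 0
print(overlap)  # [0]                   -> value 0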
Python only processes the overlapping digits. First, it conditionally swaps a and b to ensure b is the smaller number:
/* Swap a and b if necessary to ensure size_a >= size_b. */
if (size_a < size_b) {
    z = a; a = b; b = z;
    size_z = size_a; size_a = size_b; size_b = size_z;
    negz = nega; nega = negb; negb = negz;
}
Then it iterates over the smaller size_b:
/* Compute digits for overlap of a and b. */
switch(op) {
case '&':
    for (i = 0; i < size_b; ++i)
        z->ob_digit[i] = a->ob_digit[i] & b->ob_digit[i];
    break;
So my understanding is right, the image is just for intuition?
Yep, correct. The image is for conceptual understanding. It doesn't reflect how it's actually implemented in code.

Write a program to find greatest common divisor (GCD) or highest common factor (HCF) of given two numbers [duplicate]

I just found this algorithm to compute the greatest common divisor in my lecture notes:
public static int gcd( int a, int b ) {
    while (b != 0) {
        final int r = a % b;
        a = b;
        b = r;
    }
    return a;
}
So r is the remainder when dividing b into a (getting the mod). Then b is assigned to a, and the remainder is assigned to b, and a is returned. I can't for the life of me see how this works!
And then, apparently this algorithm doesn't work for all cases, and this one must then be used:
public static int gcd( int a, int b ) {
    final int gcd;
    if (b != 0) {
        final int q = a / b;
        final int r = a % b; // a == r + q * b AND r == a - q * b.
        gcd = gcd( b, r );
    } else {
        gcd = a;
    }
    return gcd;
}
I don't understand the reasoning behind this. I generally get recursion and am good at Java but this is eluding me. Help please?
The Wikipedia article contains an explanation, but it's not easy to find it immediately (also, procedure + proof don't always answer the question "why it works").
Basically it comes down to the fact that for two integers a, b (assuming a >= b), it is always possible to write a = bq + r where r < b.
If d=gcd(a,b) then we can write a=ds and b=dt. So we have ds = qdt + r. Since the left hand side is divisible by d, the right hand side must also be divisible by d. And since qdt is divisible by d, the conclusion is that r must also be divisible by d.
To summarise: we have a = bq + r where r < b and a, b and r are all divisible by gcd(a,b).
Since a >= b > r, we have two cases:
If r = 0 then a = bq, and so b divides both b and a. Hence gcd(a,b)=b.
Otherwise (r > 0), we can reduce the problem of finding gcd(a,b) to the problem of finding gcd(b,r) which is exactly the same number (as a, b and r are all divisible by d).
Why is this a reduction? Because r < b. So we are dealing with numbers that are definitely smaller. This means that we only have to apply this reduction a finite number of times before we reach r = 0.
Now, r = a % b which hopefully explains the code you have.
They're equivalent. First thing to notice is that q in the second program is not used at all. The other difference is just iteration vs. recursion.
As to why it works, the Wikipedia page linked above is good. The first illustration in particular is effective to convey intuitively the "why", and the animation below then illustrates the "how".
Given that 'q' is never used, I don't see a difference between your plain iterative function and the recursive function... both do
gcd(first number, second number)
    as long as (second number > 0) {
        int remainder = first % second;
        gcd = try(second as first, remainder as second);
    }
Barring trying to apply this to non-integers, under which circumstances does this algorithm fail?
(also see http://en.wikipedia.org/wiki/Euclidean_algorithm for lots of detailed info)
Here is an interesting blog post: Tominology, where a lot of the intuition behind the Euclidean Algorithm is discussed. It is implemented in JavaScript, but I believe that if one wants, it is not difficult to convert the code to Java.
Here is a very useful explanation that I found.
For those too lazy to open it, this is what it says :
Consider the example where you have to find the GCD of (3084, 1424). Let's assume that d is the GCD, which means d | 3084 and d | 1424 (using the symbol '|' to say 'divides').
It follows that d | (3084 - 1424). Now we'll try to reduce these numbers which are divisible by d (in this case 3084 and 1424) as much as possible, so that we reach 0 as one of the numbers. Remember that GCD (a, 0) is a.
Since d | (3084 - 1424), it follows that d | ( 3084 - 2(1424) )
which means d | 236.
Hint : (3084 - 2*1424 = 236)
Now forget about the initial numbers, we just need to solve for d, and we know that d is the greatest number that divides 236, 1424 and 3084. So we use the smaller two numbers to proceed because it'll converge the problem towards 0.
d | 1424 and d | 236 implies that d | (1424 - 236).
So, d | ( 1424 - 6(236) ) => d | 8.
Now we know that d is the greatest number that divides 8, 236, 1424 and 3084. Taking the smaller two again, we have
d | 236 and d | 8, which implies d | (236 - 8).
So, d | ( 236 - 29(8) ) => d | 4.
Again the list of numbers divisible by d increases and converges (the numbers are getting smaller, closer to 0). As it stands now, d is the greatest number that divides 4, 8, 236, 1424, 3084.
Taking same steps,
d | 8 and d | 4 implies d | (8-4).
So, d | ( 8 - 2(4) ) => d | 0.
The list of numbers divisible by d is now 0, 4, 8, 236, 1424, 3084.
GCD of (a, 0) is always a. So, as soon as you have 0 as one of the two numbers, the other number is the gcd of original two and all those which came in between.
This is exactly what your code is doing. You can recognize the terminal condition as GCD (a, 0) = a.
The other step is to find the remainder of the two numbers, and choose that and the smaller of the previous two as the new numbers.
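The same chain of reductions can be reproduced with a short Python trace (a sketch of my own; the repeated subtractions above are exactly what the % operator computes in one step):
a, b = 3084, 1424
while b:
    print("{} = {}*{} + {}".format(a, a // b, b, a % b))
    a, b = b, a % b
print("gcd =", a)  # gcd = 4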

Minimum number of steps to convert one integer to another

Recently I came across a problem: given 2 integers A and B, we need to convert A to B in the minimum number of steps.
We can perform following operations on A:
If A is odd, decrease by 1
If A is even, increase by 1
Multiply A (even or odd) by 2
If A is even, divide by 2
Again, we have to find the minimum number of steps needed to convert A to B.
The constraints are 0 < A, B < 10^18
My approach:
I tried to solve the problem using Breadth First Search, adding all the numbers reachable from a state into a queue, but it fails on the higher constraints, i.e. it times out.
Can anyone suggest a faster alternative?
EDIT: A is not necessarily less than B
Basically you have the following operations:
flip the lowest bit
shift bits to the left or to the right
Assume you have A == 0; how would you construct B? Right, you flip the lower bit and shift the number to the left, bit by bit; for example, if B == 5, which is 0b0101, you will need 2 flips and 2 shifts.
Now, we have to deal with the case when A != 0 -- in this case you have to turn the lower bit to 0 and shift right to clean up the mess. For example, if you have A == 32, which is 0b100000, and you want to get 5 (0b0101), you have to do three shifts to the right, then flip the lower bit and you're done.
So, all you have to do is to:
count how many flips/r-shifts you have to do until the highest bit of A is equal to the highest bit of B.
then count how many flips/r-shifts you need to clean up the rest
count how many flips/left-shifts you need to rebuild the lower part of B.
OK, a few hours passed; here's the solution. First, a couple of helper functions that say how many ops we need to create a number:
def bit_count(num):
    # the number of non-zero bits in a number
    return bin(num).count('1')

def num_ops(num):
    # number of shifts + number of flips
    return num.bit_length() + bit_count(num)
Now, well, assume A > B, because otherwise we can swap them while keeping the number of the operations the same. Here's how far we have to shift A to make it start from the same bit as B:
needed_shifts = A.bit_length() - B.bit_length()
while doing that we need to flip a few bits:
mask = (1 << (needed_shifts+1)) - 1
needed_flips = bit_count(A & mask)
Now we count how many ops are required to clean A and rebuild B:
A >>= needed_shifts
clean_shifts = (A & ~B).bit_length()
clean_flips = bit_count(A & ~B)
rebuild_shifts = (B & ~A).bit_length()
rebuild_flips = bit_count(B & ~A)
Finally we sum up all together:
result_ops = needed_shifts + needed_flips + max(clean_shifts, rebuild_shifts) * 2 + clean_flips + rebuild_flips
That's all, folks! =)
The list of available operations is symmetric: 2 sets of operations, each the opposite of the other:
the last bit can be flipped
the number can be shifted left one position or right one position if the low bit is 0.
Hence it takes the same number of operations to go from A to B or from B to A.
Going from A to B takes at most the number of operations to go from A to 0 plus the number of operations to go from B to 0. These operations strictly decrease the values of A and B. If along the way an intermediary value can be reached from both A and B, there is no need to go all the way to 0.
Here is a simple function that performs the individual steps on A and B and stops as soon as this common number is found:
def num_ops(a, b):
    # compute the number of ops to transform a into b
    # by symmetry, the same number of ops is needed to transform b into a
    count = 0
    while a != b:
        if a > b:
            if (a & 1) != 0:
                a -= 1
            else:
                a >>= 1
        else:
            if (b & 1) != 0:
                b -= 1
            else:
                b >>= 1
        count += 1
    return count
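For example (my own check, compared against a path found by hand):
# 10 -> 11 -> 22 -> 23 -> 46 -> 47 -> 94 -> 95 : seven operations
print(num_ops(10, 95))  # 7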
This problem can be optimised using dynamic programming.
I wrote the following code taking a few things into consideration:
Infinite recursion must be avoided carefully by setting up base conditions. For example, if A = 0 and B < 0, then no answer exists.
If the function convert(A, B) is called recursively more than a limited number of times and the answer for the state (A, B) has not previously been calculated, the recursion is terminated, as no answer exists for this case. For example: (80, 100) -> (160, 100) -> (80, 100) -> (160, 100) -> ...
This is done by maintaining the count of each state in a map and defining a maximum recursive-call limit (3 in the following program) for the same DP state.
The map dp maintains the answer for each state (A, B), and the map iterationsCount maintains the number of times the same state (A, B) has been visited.
Have a look at the following implementation:
#include <utility>
#include <iterator>
#include <map>
#include <set>
#include <iostream>
#include <climits>
#include <algorithm>

typedef long long int LL;

std::map<std::pair<LL, LL>, LL> dp;
std::map<std::pair<LL, LL>, int> iterationsCount;

LL IMPOSSIBLE = (LL)1e9;
LL MAX_RECURSION_LIMIT = 3;

LL convert(LL a, LL b)
{
    //std::cout<<a<<" "<<b<<std::endl;

    // To avoid infinite recursion:
    if(iterationsCount.find(std::make_pair(a, b)) != iterationsCount.end() &&
       iterationsCount[std::make_pair(a, b)] > MAX_RECURSION_LIMIT &&
       dp.find(std::make_pair(a, b)) == dp.end()){
        return IMPOSSIBLE;
    }

    // Maintaining count of each state (A, B)
    iterationsCount[std::make_pair(a, b)]++;

    LL value1, value2, value3, value4, value5;
    value1 = value2 = value3 = value4 = value5 = IMPOSSIBLE;

    if(dp.find(std::make_pair(a, b)) != dp.end()){
        return dp[std::make_pair(a, b)];
    }

    // Base Case
    if(a == 0 && b < 0){
        return IMPOSSIBLE;
    }

    // Base Case
    if(a == b)
        return 0;

    // Conditions
    if(a % 2 == 1){
        if(a < b){
            value1 = 1 + convert(2*a, b);
        }
        else if(a > b){
            value2 = 1 + convert(a-1, b);
        }
    }
    else{
        if(a < b){
            value3 = 1 + convert(a*2, b);
            value4 = 1 + convert(a+1, b);
        }
        else if(a > b){
            value5 = 1 + convert(a/2, b);
        }
    }

    LL ans = std::min(value1, std::min(value2, std::min(value3, std::min(value4, value5))));
    dp[std::make_pair(a, b)] = ans;
    return ans;
}

int main(){
    LL ans = convert(10, 95);
    if(ans == IMPOSSIBLE){
        std::cout << "Impossible";
    }else{
        std::cout << ans;
    }
    return 0;
}

Is it possible to solve equations of bit wise operators?

We can easily find:
a=7
b=8
c=a|b
Then c comes out to be: 15
Now can we find a if c is given?
For example:
b=8
c=15
c=a|b
Find a?
And also, if x = 2<<1 is given, then we get x = 4. But if 4 = y<<1 is given, can we find y?
To begin with, these are just my observations and I have no sources to back them up. There are better ways, but the Wikipedia pages were really long and confusing so I hacked together this method.
Yes, you can, but you need more context (other equations to solve in reference to) and a lot more parsing. This is the method I came up with for doing this, but there are better ways to approach this problem. This was just conceptually easier for me.
Numbers
You can't just put an integer into an equation and have it work. Bitwise operators only refer to booleans; we just treat them as if they were meant for integers. In order to simplify an equation, we have to look at it as an array of booleans.
Taking for example an unsigned 8 bit integer:
a = 0b10111001
Now becomes:
a = {1, 0, 1, 1, 1, 0, 0, 1}
Parsing
Once you get your equations down to just booleans, you can apply the actual bitwise operators to simple 1s and 0s. But you can take it one step further: at this point, all bitwise equations can be written in terms of just AND, OR, and NOT. Addition, subtraction and multiplication can also be represented this way, but you need to manually write out the steps taken.
A ^ B = ~( ( A & B ) | ( (~A) & (~B) ) )
This includes bitshifts, but instead of expanding to other bitwise operators, they act as an assignment.
A = 0b10111001
B = 0b10100110
C = (A >> 2) ^ B
This then expands to 8 equations, one for each bit.
C[0] = A[2] ^ B[0]
C[1] = A[3] ^ B[1]
C[2] = A[4] ^ B[2]
C[3] = A[5] ^ B[3]
C[4] = A[6] ^ B[4]
C[5] = A[7] ^ B[5]
C[6] = 0 ^ B[6]
C[7] = 0 ^ B[7]
C[6] and C[7] can then be reduced to just B[6] and B[7] respectively.
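As a quick sanity check of that expansion (my own, indexing bits from the least significant end, which is what the C[i] equations above assume):
A = 0b10111001
B = 0b10100110

def bits(n, width=8):
    return [(n >> i) & 1 for i in range(width)]  # index 0 = least significant bit

a, b = bits(A), bits(B)
c = bits((A >> 2) ^ B)

expanded = [a[2] ^ b[0], a[3] ^ b[1], a[4] ^ b[2], a[5] ^ b[3],
            a[6] ^ b[4], a[7] ^ b[5], 0 ^ b[6], 0 ^ b[7]]
print(c == expanded)  # True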
Algebra
Now that you have an equation consisting of only AND, OR, and NOT, you can represent them using traditional algebra. In this step, they are no longer treated as bits, but instead as real numbers which just happen to be 0 or 1.
A | B => A + B - AB
A & B => AB
~A => 1 - A
Note that when plugging in 1 and 0, all of these remain true.
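That claim is easy to verify by brute force (a small check of my own):
for A in (0, 1):
    for B in (0, 1):
        assert A | B == A + B - A * B   # OR
        assert A & B == A * B           # AND
        assert (~A & 1) == 1 - A        # NOT, restricted to a single bit
print("all three encodings hold for 0 and 1")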
For this example, I will use the Majority function. Its job is to take in three bits and return 1 if there are more 1s than 0s.
It is defined as:
f(a, b, c) = ((a & b) | (a & c) | (b & c))
which becomes
f(a, b, c) = (ab + ac - (ab * ac)) + bc - ((ab + ac - (ab * ac)) * bc)
f(a, b, c) = ab + ac + bc - a²bc - ab²c - abc² + a²b²c²
And now that you have this information, you can easily combine it with your other equations using standard algebra in order to get a solution. Any non 1 or 0 solution is extraneous.
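As a check that the expansion above really is the Majority function (my own verification, not part of the original derivation):
def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)

def poly(a, b, c):
    # ab + ac + bc - a²bc - ab²c - abc² + a²b²c²
    return (a*b + a*c + b*c
            - a*a*b*c - a*b*b*c - a*b*c*c
            + a*a*b*b*c*c)

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert majority(a, b, c) == poly(a, b, c)
print("polynomial matches the majority function on all 8 inputs")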
A solution (if it exists) of such an equation can be considered "unique" provided that you allow three states for each bit:
bit is 0
bit is 1
does not matter X
E.g. 7 | 00001XXX(binary) = 15
Of course, such result cannot be converted to decimal.
For some operations it may be necessary to specify the bit width.
For your particular cases, the answer is no, you cannot solve or 'undo' the OR-operation (|) and shifting left or right (<<, >>) since in both cases information is lost by applying the operation. For example, 8|7=15 and 12|7=15, thus given the 7 and 15 it is not possible to obtain a unique solution.
An exception is the XOR operation, for which does hold that when a^b=c, then b^c=a and a^c=b.
You can find an a that solves the equation, but it will not be unique. Assume b = c = 1; then a = 0 and a = 1 are both solutions. For c = 1, b = 0 there will be no solution. This holds for each of the bits in the numbers you consider. If the equation is solvable, a = c will be (one of the) solution(s).
And left-shifting an integer will always result in an even integer (the least significant bit is zero), so this only works for even integers. In that case you can invert the operation by applying a right shift (>>).

Complement of XOR

What is the most efficient algorithm for finding ~A XOR B? (Note that ~ is the complement function, obtained by flipping each 1 bit to 0 and each 0 bit to 1, and XOR is the exclusive-or function.)
For example, ~4 XOR 6 = ~010 = 101 = 5 and ~6 XOR 9 = ~1111 = 0
Here's an answer that takes into account the number of bits needed to store your integers:
def xnor(a, b):
    length = max(a.bit_length(), b.bit_length())
    return (~a ^ b) & ((1 << length) - 1)
I can't think of a situation where this is better than just ~a ^ b however. And it almost certainly makes no sense for negative numbers.
The only problem here is that ~ returns a negative number for a positive input, and you want a positive result limited to the significant bits represented in the inputs.
Here's a function that can generate a mask of bits that are needed in the result:
def mask(n):
    n = abs(n)
    shift = 1
    while n & (n + 1) != 0:
        n |= n >> shift
        shift *= 2
    return n
And here's how to use it:
print (~a ^ b) & mask(a | b)
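Tying this back to the examples at the top of the question (my own check):
a, b = 4, 6
print((~a ^ b) & mask(a | b))  # 5, i.e. ~(100 ^ 110) within 3 bits

a, b = 6, 9
print((~a ^ b) & mask(a | b))  # 0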
You can simply use ==.
A XNOR B is the same as the == operator for booleans because:
A  B  XNOR
F  F  T
F  T  F
T  F  F
T  T  T
