We can easily find:
a=7
b=8
c=a|b
Then c comes out to be: 15
Now can we find a if c is given?
For example:
b=8
c=15
c=a|b
Find a?
And also, if x = 2 << 1 is given, then we can get x = 4. But if 4 = y << 1 is given, can we get y?
To begin with, these are just my observations and I have no sources to back them up. There are better ways, but the Wikipedia pages were really long and confusing so I hacked together this method.
Yes, you can, but you need more context (other equations to solve in reference to) and a lot more parsing. This is the method I came up with for doing this, but there are better ways to approach this problem. This was just conceptually easier for me.
Numbers
You can't just put an integer into an equation and have it work. Bitwise operators only operate on booleans; we just treat them as if they are meant for integers. In order to simplify an equation, we have to look at it as an array of booleans.
Taking for example an unsigned 8 bit integer:
a = 0b10111001
Now becomes:
a = {1, 0, 1, 1, 1, 0, 0, 1}
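One way to do that in Python (a rough sketch; the to_bits/from_bits names are mine, not part of any library). Note that here index 0 is the least significant bit, which matches the per-bit equations used in the next section:

def to_bits(n, width=8):
    # Unpack an unsigned integer into a list of bits, index 0 = least significant bit.
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    # Pack a list of bits (index 0 = least significant bit) back into an integer.
    return sum(bit << i for i, bit in enumerate(bits))

a = 0b10111001
print(to_bits(a))                  # [1, 0, 0, 1, 1, 1, 0, 1]
print(from_bits(to_bits(a)) == a)  # True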
Parsing
Once you get your equations down to just booleans, you can apply the actual bitwise operators to simple 1s and 0s. But you can take it one step further: at this point, all bitwise equations can be written in terms of just AND, OR, and NOT. Addition, subtraction, and multiplication can also be represented this way, but you need to manually write out the steps taken. For example, XOR expands to:
A ^ B = ~( ( A & B ) | ( (~A) & (~B) ) )
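As a quick sanity check (a brute-force sketch of my own, not part of the method itself), the identity can be verified over all four bit pairs; single-bit NOT is written as x ^ 1 here because Python's ~x would give -x-1:

def NOT(x):
    # single-bit NOT
    return x ^ 1

for A in (0, 1):
    for B in (0, 1):
        assert (A ^ B) == NOT((A & B) | (NOT(A) & NOT(B)))
print("identity holds for all four bit pairs")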
This includes bitshifts, but instead of expanding to other bitwise operators, they act as an assignment.
A = 0b10111001
B = 0b10100110
C = (A >> 2) ^ B
This then expands to 8 equations, one for each bit.
C[0] = A[2] ^ B[0]
C[1] = A[3] ^ B[1]
C[2] = A[4] ^ B[2]
C[3] = A[5] ^ B[3]
C[4] = A[6] ^ B[4]
C[5] = A[7] ^ B[5]
C[6] = 0 ^ B[6]
C[7] = 0 ^ B[7]
C[6] and C[7] can then be reduced to just B[6] and B[7] respectively.
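Here is a small sketch that checks this expansion numerically, indexing bits from the least significant bit as in the equations above:

A = 0b10111001
B = 0b10100110
C = (A >> 2) ^ B

A_bits = [(A >> i) & 1 for i in range(8)]
B_bits = [(B >> i) & 1 for i in range(8)]
# Bits 0-5 come from A shifted down by two places; bits 6 and 7 of (A >> 2) are zero.
C_bits = [(A_bits[i + 2] if i < 6 else 0) ^ B_bits[i] for i in range(8)]

assert sum(bit << i for i, bit in enumerate(C_bits)) == C
print("per-bit expansion matches:", bin(C))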
Algebra
Now that you have an equation consisting of only AND, OR, and NOT, you can represent them using traditional algebra. In this step, they are no longer treated as bits, but instead as real numbers which just happen to be 0 or 1.
A | B => A + B - AB
A & B => AB
~A => 1 - A
Note that when plugging in 1 and 0, all of these remain true.
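These three substitutions can be brute-force checked on single bits (a small sketch of my own; note again that single-bit NOT is written as A ^ 1, since Python's ~A is two's-complement):

for A in (0, 1):
    for B in (0, 1):
        assert (A | B) == A + B - A * B   # OR
        assert (A & B) == A * B           # AND
    assert (A ^ 1) == 1 - A               # NOT on a single bit
print("substitutions match the bitwise operators on 0/1 inputs")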
As an example, I will use the majority function. Its job is to take in three bits and return 1 if there are more 1s than 0s.
It is defined as:
f(a, b, c) = ((a & b) | (a & c) | (b & c))
which becomes
f(a, b, c) = (ab + ac - (ab * ac)) + bc - ((ab + ac - (ab * ac)) * bc)
f(a, b, c) = ab + ac + bc - a²bc - ab²c - abc² + a²b²c²
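As a sanity check (a brute-force sketch; the maj_bitwise/maj_poly names are just for illustration), the expanded polynomial can be compared against the bitwise definition on all eight inputs:

from itertools import product

def maj_bitwise(a, b, c):
    return (a & b) | (a & c) | (b & c)

def maj_poly(a, b, c):
    return a*b + a*c + b*c - a**2*b*c - a*b**2*c - a*b*c**2 + a**2*b**2*c**2

for a, b, c in product((0, 1), repeat=3):
    assert maj_bitwise(a, b, c) == maj_poly(a, b, c)
print("polynomial matches the majority function on all 8 inputs")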
And now that you have this information, you can easily combine it with your other equations using standard algebra in order to get a solution. Any non 1 or 0 solution is extraneous.
A solution (if it exists) of such an equation can be considered "unique" provided that you allow three states for each bit:
bit is 0
bit is 1
does not matter X
E.g. 7 | 00001XXX(binary) = 15
Of course, such a result cannot be converted to decimal.
For some operations it may be necessary to specify the bit width.
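To make that concrete, here is a small sketch (the solve_or helper is mine, not from the answer) that, given b and c, reports each bit of a as forced to 0 or 1, 'X' for does-not-matter, or reports that no solution exists:

def solve_or(b, c, width=8):
    # Solve a | b == c bit by bit (index 0 = least significant bit).
    bits = []
    for i in range(width):
        b_i, c_i = (b >> i) & 1, (c >> i) & 1
        if b_i == 1 and c_i == 0:
            return None          # 1 | a_i can never be 0: unsolvable
        if b_i == 1 and c_i == 1:
            bits.append('X')     # a_i can be either value
        else:
            bits.append(c_i)     # b_i == 0, so a_i must equal c_i
    return bits

print(solve_or(7, 15))   # ['X', 'X', 'X', 1, 0, 0, 0, 0] -> 00001XXX (written MSB first), as above
print(solve_or(8, 15))   # [1, 1, 1, 'X', 0, 0, 0, 0]     -> 0000X111 for the question's b = 8
print(solve_or(7, 16))   # None: no a satisfies a | 7 == 16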
For your particular cases, the answer is no: you cannot solve or 'undo' the OR operation (|) or shifting left or right (<<, >>), since in both cases information is lost by applying the operation. For example, 8|7=15 and 12|7=15, so given 7 and 15 it is not possible to obtain a unique solution.
An exception is the XOR operation, for which it does hold that when a ^ b = c, then b ^ c = a and a ^ c = b.
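For example, reusing the numbers from the question:

a, b = 7, 8
c = a ^ b           # 15
print(c ^ b == a)   # True: XOR-ing with b again recovers a
print(c ^ a == b)   # True: XOR-ing with a recovers b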
You can find an a that solves the equation, but it will not be unique. Working bit by bit: if b = 1 and c = 1, then both a = 0 and a = 1 are solutions; if b = 1 and c = 0, there is no solution at all. This holds for each of the bits in the numbers you consider. If the equation is solvable, a = c will be (one of) the solution(s).
Also, left-shifting an integer always results in an even integer (the least-significant bit is zero), so this only works for even integers. In that case you can invert the operation by applying a right-shift (>>).
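In code that looks like this:

x = 2 << 1            # 4
y = x >> 1            # 2: recoverable because x is even
print(y, y << 1)      # 2 4
print((5 >> 1) << 1)  # 4, not 5: for an odd number the lost low bit cannot be recovered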
Related
On my computer, I can check that
(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
evaluates to False.
More generally, I can estimate that the formula (a + b) + c == a + (b + c) fails roughly 17% of the time when a,b,c are chosen uniformly and independently on [0,1], using the following simulation:
import numpy as np
import numexpr
np.random.seed(0)
formula = '(a + b) + c == a + (b + c)'
def failure_expectation(formula=formula, N=10**6):
    a, b, c = np.random.rand(3, N)
    return 1.0 - numexpr.evaluate(formula).mean()

failure_expectation()  # e.g. 0.171744
I wonder if it is possible to arrive at this probability by hand, e.g. using the definitions in the floating point standard and some assumption on the uniform distribution.
Given the answer below, I assume that the following part of the original question is out of reach, at least for now.
Is there is a tool that computes the failure probability for a given
formula without running a simulation.
Formulas can be assumed to be simple, e.g. involving the use of
parentheses, addition, subtraction, and possibly multiplication and
division.
(What follows may be an artifact of numpy random number generation, but still seems fun to explore.)
Bonus question based on an observation by NPE. We can use the following code to generate failure probabilities for uniform distributions on a sequence of ranges [[-n,n] for n in range(100)]:
import pandas as pd
def failures_in_symmetric_interval(n):
    a, b, c = (np.random.rand(3, 10**4) - 0.5) * n
    return 1.0 - numexpr.evaluate(formula).mean()

s = pd.Series({
    n: failures_in_symmetric_interval(n)
    for n in range(100)
})
The plot looks something like this:
In particular, failure probability dips down to 0 when n is a power of 2 and seems to have a fractal pattern. It also looks like every "dip" has a failure probability equal to that of some previous "peak". Any elucidation of why this happens would be great!
It's definitely possible to evaluate these things by hand, but the only methods I know are tedious and involve a lot of case-by-case enumeration.
For example, for your specific example of determining the probability that (a + b) + c == a + (b + c), that probability is 53/64, to within a few multiples of the machine epsilon. So the probability of a mismatch is 11/64, or around 17.19%, which agrees with what you were observing from your simulation.
To start with, note that there's a major simplifying factor in this particular case, and that's that Python and NumPy's "uniform-on-[0, 1]" random numbers are always of the form n/2**53 for some integer n in range(2**53), and within the constraints of the underlying Mersenne Twister PRNG, each such number is equally likely to occur. Since there are around 2**62 IEEE 754 binary64 representable values in the range [0.0, 1.0], that means that the vast majority of those IEEE 754 values aren't generated by random.random() (or np.random.rand()). This fact greatly simplifies the analysis, but also means that it's a bit of a cheat.
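This property is easy to check empirically; a quick sketch, assuming NumPy's legacy Mersenne Twister generator as used by np.random.rand in the question:

import numpy as np

np.random.seed(0)
samples = np.random.rand(10**6)
# Every value should be of the form n / 2**53 with n an integer,
# so scaling by 2**53 must give exact integers.
scaled = samples * 2.0**53
print(bool(np.all(scaled == np.floor(scaled))))  # True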
Here's an incomplete sketch, just to give an idea of what's involved. To compute the value of 53/64, I had to divide into five separate cases:
The case where both a + b < 1 and b + c < 1. In this case, both a + b and b + c are computed without error, and (a + b) + c and a + (b + c) therefore both give the closest float to the exact result, rounding ties to even as usual. So in this case, the probability of agreement is 1.
The case where a + b < 1 and b + c >= 1. Here (a + b) + c will be the correctly rounded value of the true sum, but a + (b + c) may not be. We can divide further into subcases, depending on the parity of the least significant bits of a, b and c. Let's abuse terminology and call a "odd" if it's of the form n/2**53 with n odd, and "even" if it's of the form n/2**53 with n even, and similarly for b and c. If b and c have the same parity (which will happen half the time), then (b + c) is computed exactly and again a + (b + c) must match (a + b) + c. For the other cases, the probability of agreement is 1/2 in each case; the details are all very similar, but for example in the case where a is odd, b is odd and c is even, (a + b) + c is computed exactly, while in computing a + (b + c) we incur two rounding errors, each of magnitude exactly 2**-53. If those two errors are in opposite directions, they cancel and we get agreement. If not, we don't. Overall, there's a 3/4 probability of agreement in this case.
The case where a + b >= 1 and b + c < 1. This is identical to the previous case after swapping the roles of a and c; the probability of agreement is again 3/4.
a + b >= 1 and b + c >= 1, but a + b + c < 2. Again, one can split on the parities of a, b and c and look at each of the resulting 8 cases in turn. For the cases even-even-even and odd-odd-odd we always get agreement. For the case odd-even-odd, the probability of agreement turns out to be 3/4 (by yet further subanalysis). For all the other cases, it's 1/2. Putting those together gets an aggregate probability of 21/32 for this case.
Case a + b + c >= 2. In this case, since we're rounding the final result to a multiple of four times 2**-53, it's necessary to look not just at the parities of a, b, and c, but to look at the last two significant bits. I'll spare you the gory details, but the probability of agreement turns out to be 13/16.
Finally, we can put all these cases together. To do that, we also need to know the probability that our triple (a, b, c) lands in each case. The probability that a + b < 1 and b + c < 1 is the volume of the square-based pyramid described by 0 <= a, b, c <= 1, a + b < 1, b + c < 1, which is 1/3. The probabilities of the other four cases can be seen (either by a bit of solid geometry, or by setting up suitable integrals) to be 1/6 each.
So our grand total is 1/3 * 1 + 1/6 * 3/4 + 1/6 * 3/4 + 1/6 * 21/32 + 1/6 * 13/16, which comes out to be 53/64, as claimed.
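The final arithmetic is easy to double-check with exact fractions (this only verifies the bookkeeping, not the case analysis itself):

from fractions import Fraction as F

agree = F(1, 3) * 1 + F(1, 6) * F(3, 4) + F(1, 6) * F(3, 4) + F(1, 6) * F(21, 32) + F(1, 6) * F(13, 16)
print(agree)       # 53/64
print(1 - agree)   # 11/64, roughly the 17% mismatch rate observed in the simulation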
A final note: 53/64 almost certainly isn't quite the right answer - to get a perfectly accurate answer we'd need to be careful about all the corner cases where a + b, b + c, or a + b + c hit a binade boundary (1.0 or 2.0). It would certainly be possible to refine the above approach to compute exactly how many of the 2**109 possible triples (a, b, c) satisfy (a + b) + c == a + (b + c), but not before it's time for me to go to bed. But the corner cases should constitute on the order of 1/2**53 of the total number of cases, so our estimate of 53/64 should be accurate to at least 15 decimal places.
Of course, there are lots of details missing above, but I hope it gives some idea of how it might be possible to do this.
Say a and b are disjoint 1D complex NumPy arrays, and I do numpy.multiply(a, b, b).
Is b guaranteed to contain the same values as I would get via b[:] = numpy.multiply(a, b)?
I haven't actually been able to produce incorrect results, but I don't know if I'm just being lucky with my particular compilation or platform or if I can actually rely on that, hence the question.
Notice that with float (i.e. real numbers) the answer would obviously be yes, since a reasonable implementation can't make it fail, but with complex numbers it's easy for the cross-multiply operation to give incorrect results by writing the real part and then reading an imaginary part:
# say the real part is at [0] and the imaginary part is at [1] and c is the product of a & b
c[0] = a[0] * b[0] - a[1] * b[1]
c[1] = a[0] * b[1] + a[1] * b[0] # if c[0] overlaps a[0] or b[0] then this is wrong
Yes. Complex values ought to be treated atomically. If it doesn't work like that, then it's a bug that we'll fix.
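Here's a quick spot check (not a guarantee, just exercising the in-place path on random complex data):

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(10**5) + 1j * rng.standard_normal(10**5)
b = rng.standard_normal(10**5) + 1j * rng.standard_normal(10**5)

expected = np.multiply(a, b)   # out-of-place reference result
np.multiply(a, b, out=b)       # in-place: b is both an input and the output
print(bool(np.array_equal(expected, b)))  # expect True if complex values are handled atomically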
I need some help understanding Python solutions to LeetCode 371, "Sum of Two Integers". I found that https://discuss.leetcode.com/topic/49900/python-solution/2 is the most voted Python solution, but I am having trouble understanding it.
How to understand the usage of "% MASK" and why "MASK = 0x100000000"?
How to understand "~((a % MIN_INT) ^ MAX_INT)"?
When the sum goes beyond MAX_INT, the function yields a negative value (for example getSum(2147483647, 2) = -2147483647); isn't that incorrect?
class Solution(object):
    def getSum(self, a, b):
        """
        :type a: int
        :type b: int
        :rtype: int
        """
        MAX_INT = 0x7FFFFFFF
        MIN_INT = 0x80000000
        MASK = 0x100000000
        while b:
            a, b = (a ^ b) % MASK, ((a & b) << 1) % MASK
        return a if a <= MAX_INT else ~((a % MIN_INT) ^ MAX_INT)
Let's disregard the MASK, MAX_INT and MIN_INT for a second.
Why does this black magic bitwise stuff work?
The reason why the calculation works is because (a ^ b) is "summing" the bits of a and b. Recall that bitwise xor is 1 when the bits differ, and 0 when the bits are the same. For example (where D is decimal and B is binary), 20D == 10100B, and 9D = 1001B:
10100
01001
-----
11101
and 11101B == 29D.
But, if you have a case with a carry, it doesn't work so well. For example, consider adding (bitwise xor) 20D and 20D.
10100
10100
-----
00000
Oops. 20 + 20 certainly doesn't equal 0. Enter the (a & b) << 1 term. This term represents the "carry" for each position. On the next iteration of the while loop, we add in the carry from the previous loop. So, if we go with the example we had before, we get:
# First iteration (a is 20, b is 20)
10100 ^ 10100 == 00000 # makes a 0
(10100 & 10100) << 1 == 101000 # makes b 40
# Second iteration:
000000 ^ 101000 == 101000 # Makes a 40
(000000 & 101000) << 1 == 0000000 # Makes b 0
Now b is 0, we are done, so return a. This algorithm works in general, not just for the specific cases I've outlined. Proof of correctness is left to the reader as an exercise ;)
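If you want some empirical reassurance instead of a proof, the XOR/carry loop (without masks) can be brute-force checked against + for small non-negative integers, where Python's arbitrary-precision ints cause no trouble (add_via_bits is just a throwaway name here):

def add_via_bits(a, b):
    # Fold the carry back in until there is none left.
    while b:
        a, b = a ^ b, (a & b) << 1
    return a

assert all(add_via_bits(a, b) == a + b for a in range(200) for b in range(200))
print("XOR/carry addition matches + for all 0 <= a, b < 200")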
What do the masks do?
All the masks are doing is ensuring that the value stays within a 32-bit integer, because your code even has comments stating that a, b, and the return type are of type int. The maximum possible 32-bit signed int is 2147483647, so if you add 2 to this value, like you did in your example, the int overflows and you get a negative value. You have to force this behaviour in Python, because Python doesn't respect the int boundary that other strongly typed languages like Java and C++ have defined. Consider the following:
def get_sum(a, b):
    while b:
        a, b = (a ^ b), (a & b) << 1
    return a
This is the version of getSum without the masks.
print get_sum(2147483647, 2)
outputs
2147483649
while
print Solution().getSum(2147483647, 2)
outputs
-2147483647
due to the overflow.
The moral of the story is the implementation is correct if you define the int type to only represent 32 bits.
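The same wrap-around can be reproduced directly by reinterpreting the low 32 bits as a signed two's-complement value (the to_int32 helper below is mine, just for illustration):

def to_int32(x):
    # Keep the low 32 bits, then map values above 0x7FFFFFFF to their negative counterparts.
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x > 0x7FFFFFFF else x

print(to_int32(2147483647 + 2))  # -2147483647, matching Solution().getSum(2147483647, 2)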
Here is a solution that works in every case.
cases (sign combinations of a and b):
- -
- +
+ -
+ +
solution:
Python's default int is not 32 bits; it is arbitrary precision. So, to prevent overflow and to avoid running into an infinite loop, we use a 32-bit mask (0xffffffff) to limit the int size to 32 bits:
a, b = -1, -1
mask = 0xffffffff
while (b & mask):
    carry = a & b
    a = a ^ b
    b = carry << 1
print((a & mask) if b > 0 else a)
For me, Matt's solution got stuck in an infinite loop with the input Solution().getSum(-1, 1).
So here is another (much slower) approach based on math:
import math
def getSum(a: int, b: int) -> int:
    return int(math.log2(2**a * 2**b))
I am using the sympy library for python3, and I am handling equations, such as the following one:
from sympy import symbols, Eq

a, b = symbols('a b', positive=True)
my_equation = Eq((2 * a + b) * (a - b) / 2, 0)
my_equation gets printed exactly as I have defined it ((2 * a + b) * (a - b) / 2 == 0, that is), and I am unable to reduce it even using simplify or similar functions.
What I am trying to achieve is simplifying the nonzero factors from the equation (2 * a + b and 1 / 2); ideally, I'd want to be able to simplify a - b as well, if I am sure that a != b.
Is there any way I can reach this goal?
The point is that simplify() is not capable (yet) of complex reasoning about assumptions. I tested it on Wolfram Mathematica's simplify, and it works. It looks like it's a missing feature in SymPy.
Anyway, I propose a function to do what you're looking for.
Define this function:
from sympy import Eq, Mul

def simplify_eq_with_assumptions(eq):
    assert eq.rhs == 0  # assert that the right-hand side is zero
    assert type(eq.lhs) == Mul  # assert that the left-hand side is a multiplication
    newargs = []  # list of new multiplication factors
    for arg in eq.lhs.args:
        if arg.is_positive:
            continue  # arg is positive, let's skip it
        newargs.append(arg)
    # rebuild the equality with the new arguments:
    return Eq(eq.lhs.func(*newargs), 0)
Now you can call:
In [5]: simplify_eq_with_assumptions(my_equation)
Out[5]: a - b = 0
You can easily adapt this function to your needs. Hopefully, in some future version of SymPy it will be sufficient to call simplify.
The Euclidean definition says:
Given two integers a and b, with b ≠ 0, there exist unique integers q and r such that a = bq + r and 0 ≤ r < |b|, where |b| denotes the absolute value of b.
Based on the observations below,
>>> -3 % -2 # Ideally it should be (-2 * 2) + 1
-1
>>> -3 % 2 # this looks fine, (-2 * 2) + 1
1
>>> 2 % -3 # Ideally it should be (-3 * 0) + 2
-1
it looks like the % operator is following different rules.
link1 was not helpful,
link2 gives a recursive answer: because I do not understand how % works, it is difficult to understand how (a // b) * b + (a % b) == a works.
My question:
How do I understand the behavior of the modulo operator in Python? I am not familiar with how the % operator works in any other language.
The behaviour of the integer division and modulo operations is explained in an article of The History of Python, namely Why Python's Integer Division Floors. I'll quote the relevant parts:
if one of the operands is negative, the result is floored, i.e.,
rounded away from zero (towards negative infinity):
>>> -5//2
-3
>>> 5//-2
-3
This disturbs some people, but there is a good mathematical reason.
The integer division operation (//) and its sibling, the modulo
operation (%), go together and satisfy a nice mathematical
relationship (all variables are integers):
a/b = q with remainder r
such that
b*q + r = a and 0 <= r < b
(assuming a and b are >= 0).
If you want the relationship to extend for negative a (keeping b
positive), you have two choices: if you truncate q towards zero, r
will become negative, so that the invariant changes to 0 <= abs(r) < b;
otherwise, you can floor q towards negative infinity, and the
invariant remains 0 <= r < b.
In mathematical number theory, mathematicians always prefer the latter
choice (see e.g. Wikipedia). For Python, I made the same choice
because there are some interesting applications of the modulo
operation where the sign of a is uninteresting.
[...]
For negative b, by the way, everything just flips, and the invariant
becomes:
0 >= r > b.
In other words, Python decided to break the Euclidean definition in certain circumstances in order to obtain better behaviour in the interesting cases. In particular, a negative a was considered interesting, while a negative b was not. This is a completely arbitrary choice, which is not shared between languages.
Note that many common programming languages (C, C++, Java, ...) do not satisfy the Euclidean invariant, often in more cases than Python (e.g. even when b is positive).
Some of them don't even provide any guarantee about the sign of the remainder, leaving that detail as implementation defined.
As a side note: Haskell provides both kinds of modulus and division. The truncating (C-style) division and modulus are called quot and rem, while the floored division and "Python style" modulus are called div and mod.
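To see the two conventions side by side, here is a small sketch comparing Python's floored % with a C-style truncated remainder (approximated with math.fmod), and checking Python's (a // b) * b + (a % b) == a invariant:

import math

for a, b in [(-3, -2), (-3, 2), (2, -3), (7, 3), (-7, 3), (7, -3)]:
    floored = a % b                     # Python: remainder takes the sign of b
    truncated = int(math.fmod(a, b))    # C-style: remainder takes the sign of a
    assert (a // b) * b + (a % b) == a  # Python's invariant always holds
    print(a, b, floored, truncated)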