~ Binary Ones Complement in Python 3 [duplicate] - python

This question already has answers here:
bit-wise operation unary ~ (invert)
(5 answers)
Closed 12 months ago.
Just had a doubt about how binary one's complement works.
For example (in Python):
a = 60
print(~a)
Gives an output:-
-61
Isn't the binary one's complement of 60:
a = 0011 1100
~a = 1100 0011
Shouldn't it be -60?
I know I'm wrong, but why does it move one further to -61?

~ is the bitwise inversion operator and it acts exactly as defined:
The bitwise inversion of x is defined as -(x+1).
This is simply how the bitwise inversion of the two's complement representation of an integer works.
The two's complement wheel visualizes this pretty well:
As you can see, the bitwise inversion of 1 is -2, the bitwise inversion of 2 is -3, ..., and the bitwise inversion of 60 will be -61.
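The rule cited above is easy to check directly in the interpreter; a quick sketch:

```python
# The bitwise inversion of x is defined as -(x+1); verify for a few values.
for x in (0, 1, 2, 60, -61, 12345):
    assert ~x == -(x + 1)

print(~60)   # -61
print(~-61)  # 60
```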

In all modern computers, two's complement binary is used for representing integers (not the classical binary representation).
As confirmed in Python docs:
A two's complement binary is the same as the classical binary
representation for positive integers but is slightly different for
negative numbers. Negative numbers are represented by performing the
two's complement operation on their absolute value.
The 2's complement of a negative number, -x, is written using the bit pattern for (x-1) with all of the bits complemented (switched from 1 to 0 or 0 to 1).
Example:
2's complement of -15:
-15 => complement(x-1) => complement(15-1) => complement(14) => complement(1110) => 0001
Python's ~ (bitwise NOT) operator returns the 1's complement of the number.
Example:
print(~14) # Outputs -15
14 is (1110) in its 2's complement binary form.
Here, ~14 would invert (i.e. 1's complement) all the bits in this form to 0001.
However, 0001 is actually the 2's complement of -15.
A simple rule to remember the bitwise NOT operation on integers is -(x+1).
print(~60) # Outputs -61
print(~-60) # Outputs 59
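To actually see the bit pattern behind these results, you can mask the inverted value to a fixed width (8 bits here, an arbitrary choice, since Python ints have no fixed width):

```python
# View the low 8 bits of ~60: the 1's complement of 60's bit pattern.
a = 60
print(format(a, '08b'))          # 00111100
print(format(~a & 0xFF, '08b'))  # 11000011
print(~a)                        # -61
```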

You are almost there. 1100 0011 is actually -61.
Here's how a negative binary is converted to decimal:
Invert the bits
Add 1
Convert to decimal
Add negative sign
So:
1100 0011
0011 1100 <-- bits inverted
0011 1101 <-- one added
61 <-- converted to decimal
-61 <-- added negative sign
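The four steps above can be sketched as a small helper (`neg_bits_to_decimal` and the 8-bit default width are made up for illustration):

```python
def neg_bits_to_decimal(bits: str, width: int = 8) -> int:
    """Interpret a two's-complement bit string with MSB 1 as a negative int,
    following the steps above: invert, add one, convert, add negative sign."""
    n = int(bits, 2)
    inverted = n ^ ((1 << width) - 1)  # invert the bits
    magnitude = inverted + 1           # add 1
    return -magnitude                  # add negative sign

print(neg_bits_to_decimal('11000011'))  # -61
```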
From wikipedia's Two's complement page:
The two's complement of an N-bit number is defined as its complement with respect to 2^N. For instance, for the three-bit number 010, the two's complement is 110, because 010 + 110 = 1000.
Here 1100 0011's complement is 0011 1101 because
1100 0011
+ 0011 1101
-------------
1 0000 0000

Related

Bitwise: why does 14 & -14 equal 2, while 16 & -16 equals 16?

It may be a dumb question, but I need to understand it more deeply.
Python integers use two's complement to store signed values. That means that positive numbers are stored simply as their bit sequence (so 14 is 00001110 since it's equal to 8 + 4 + 2). On the other hand, negative numbers are stored by taking their positive quantity, inverting it, and adding one. So -14 is 11110010. We took the bitwise representation of 14 (00001110), inverted it (11110001), and added one (11110010).
But there's an added wrinkle. Python integer values are bignums; they can be arbitrarily large. So our usual notion of "this number is stored in N bits" breaks down. Instead, we may end up with two numbers of differing lengths. If we end up in that situation, we may have to sign extend the shorter one. This is just a fancy way of saying "take the most significant bit and repeat it until the number is long enough for our liking".
So in the case of 14 and -14, we have
00001110
11110010
We & them together. Only the second bit (counting from the right, or least significant bit) is true in both, so we get 00000010, or 2. On the other hand, with 16 and -16, we get
010000
110000
For -16, we took positive sixteen (010000), flipped all of the bits (101111), and then added one, which got carried all the way over to the second most significant bit (110000). When we & these, we get 16.
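The pattern above generalizes: in two's complement, `x & -x` isolates the lowest set bit of `x`, which a quick check confirms:

```python
# x & -x keeps only the lowest set bit of x (two's complement identity).
print(14 & -14)  # 2   (lowest set bit of 0b1110)
print(16 & -16)  # 16  (lowest set bit of 0b10000)

# Same thing expressed via ~: -x == ~(x - 1) in two's complement.
assert all(x & -x == x & ~(x - 1) for x in range(1, 1000))
```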
See also BitwiseOperators - Python Wiki
A bitwise operation is a binary operation.
In some representations of integers, one bit is used to represent the sign of the number, and which bit that is changes the result of a bitwise &. In two's complement, negative numbers are represented by inverting all the bits and then adding 1. There are many ways of representing signed numbers in binary, so a bitwise operation between a positive and a negative number gives representation-dependent results. That is probably why most calculators will only allow positive integers in bitwise operations (that is, calculators advanced enough to have such a feature).
EDIT: The specific numbers you chose are significant. The number 14 is represented in binary using fewer bits than 16. It just so happens that -16 and 16 have exactly the same low five bits (the higher bits are not significant when you come to & them together).
Now a bitwise & only sets a bit if that bit is set in both the numbers you are and-ing together.
 00001110 = 14
&11110010 = -14
 00000010 = 2!
 010000 = 16
&110000 = -16
 010000 = 16!
There's your answer.

The bitwise NOT operator [duplicate]

I was trying to understand bitwise NOT in python.
I tried following:
print('{:b}'.format(~ 0b0101))
print(~ 0b0101)
The output is
-110
-6
I tried to understand the output as follows:
Bitwise negating 0101 gives 1010. With a 1 in the most significant bit, Python would interpret it as a negative number in two's complement form, and to get back the corresponding decimal it further takes the two's complement of 1010 as follows:
1010
0101 (negating)
0110 (adding 1 to get final value)
So it prints it as -110 which is equivalent to -6.
Am I right with this interpretation?
You're half right.
The value is indeed given by ~x == -(x+1) (add one and negate), but the explanation of why is a little misleading.
Two's complement numbers require setting the MSB of the integer, which is a little awkward when the number can be an arbitrary number of bits long (as is the case with Python). Internally, Python keeps a separate count of how many bits the number occupies (with optimizations for short numbers). When you print a negative int using the binary format, f'{-6:b}', it just slaps a negative sign in front of the binary representation of the positive value, i.e. sign-and-magnitude. Otherwise, how would Python determine how many leading ones there should be? Should positive values always have leading zeros to indicate they're positive? The arithmetic and bitwise operators do behave as if two's complement were used internally, though.
If we consider signed 8-bit numbers (and display all the digits) in two's complement, your example becomes:
~ 0000 0101: 5
= 1111 1010: -6
So in short, python is performing correct bitwise negation, however the display of negative binary formatted numbers is misleading.
Python integers are arbitrarily long, so if you invert 0b0101, it would be 1111...11111010. How many ones do you write? Well, a 4-bit two's complement -6 is 1010, and a 32-bit two's complement -6 is 11111111111111111111111111111010. So an arbitrarily long -6 can really only be written as -6.
Check what happens when ~5 is masked to look at the bits it represents:
>>> ~5
-6
>>> format(~5 & 0xF,'b')
'1010'
>>> format(~5 & 0xFFFF,'b')
'1111111111111010'
>>> format(~5 & 0xFFFFFFFF,'b')
'11111111111111111111111111111010'
>>> format(~5 & 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFF,'b')
'11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111010'
A negative decimal representation makes sense for an arbitrarily long integer, and you must mask if you want to limit the representation to a specific number of bits.
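The masking shown above can be wrapped in a tiny helper (`to_bits` is a made-up name; the caller must choose a width):

```python
def to_bits(x: int, width: int) -> str:
    """Two's-complement bit string of x at a chosen width (a sketch;
    width must be large enough to hold x)."""
    return format(x & ((1 << width) - 1), f'0{width}b')

print(to_bits(~5, 4))   # 1010
print(to_bits(~5, 16))  # 1111111111111010
print(to_bits(5, 8))    # 00000101
```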

Why does 2**-1025 != 0.0 in Python

From the specs of IEEE 754, a float coded on 64 bits has 11 bits for the exponent and 52 bits for the mantissa. Hence, the smallest number that could be coded as a float should be around 2**(-2**10). The Wikipedia page, which I believe is correct, gives a more exact value of 2**-1022, whose decimal value is approximately 2.2250738585072014e-308.
But, with Python, I can use floats such as 2**-1052, etc. The actual limit on my computer is 2**-1074. From this page of the official documentation, Python usually conforms to IEEE 754.
At the same time, the maximal value is 2**1023, which is the given value by the IEEE754 standard.
Why is it so?
Does anyone have an explanation?
What is the actual encoding of a float in Python?
Why is the range of the exponent, which covers 1074 + 1023 + 1 = 2098 values, not a power of 2?
First, the Python documentation does not specify that IEEE-754 is used. The choice of floating-point implementation is up to each Python implementation. IEEE-754 is very popular but not universal.
In the IEEE-754 basic 64-bit binary floating-point format (binary64), there is a one-bit sign field s, an eleven-bit exponent field e, and a 52-bit primary significand field f (for “fraction”).
The sign bit is 0 for positive and 1 for negative.
If e is all one bits (1111111111, or 2047 in decimal), the object represents an infinity (if f is zero) or a NaN (if f is not zero). (“NaN” stands for “Not a Number”).
If e is neither all zero bits nor all one bits, then it represents an exponent E = e − 1023, and the f field is used to form a number F that is 1.f (that is, the binary numeral “1.” followed by the 52 bits of f). (Equivalently, if we regard f as a binary numeral, we can say F = 1 + f · 2^−52.) The number represented is (−1)^s · 2^E · F. These are called normal numbers.
If e is all zero bits, then it represents an exponent E = 1 − 1023, and the f field is used to form a number F that is 0.f (that is, the binary numeral “0.” followed by the 52 bits of f). (Equivalently, we can say F = 0 + f · 2^−52.) Again, the number represented is (−1)^s · 2^E · F. These are called subnormal numbers.
Note that e = 1 and e = 0 represent the same exponent E, 1 − 1023 = −1022, but change the first bit of F from 1 to 0. F is the significand (the fraction part) of the floating-point number. (People sometimes refer to f as the significand¹, but that is incorrect. It is only the field that provides most of the encoding of the mathematical significand. As we see above, the exponent field also contributes to forming the actual significand, F.)
The smallest positive number occurs when s is zero, e is zero, and f is 1 (0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001). Then E = −1022 and F = 0 + 1 · 2^−52, so the number represented is (−1)^0 · 2^−1022 · 2^−52 = 2^−1074.
The largest finite number occurs when s is zero, e is 2046 (11111111110), and f is all ones (1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111). E = 2046 − 1023 = 1023. Note that f, interpreted as an integer, is 2^52 − 1, so F = 1 + (2^52 − 1) · 2^−52 = 1 + 1 − 2^−52 = 2 − 2^−52. So the value represented is (−1)^0 · 2^1023 · (2 − 2^−52) = 2^1024 − 2^971.
Footnote
¹ The significand is sometimes referred to as the “mantissa,” but that is an old term for the fraction portion of a logarithm. It is not entirely appropriate for floating-point numbers, as mantissas are logarithmic while significands are linear.
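The field layout described above can be inspected from Python itself with the standard struct module; a sketch:

```python
import struct

def decode_binary64(x: float):
    """Split a float into its IEEE-754 binary64 fields:
    (sign, biased exponent, fraction)."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF
    fraction = bits & ((1 << 52) - 1)
    return sign, exponent, fraction

print(decode_binary64(1.0))       # (0, 1023, 0): normal, E = 1023 - 1023 = 0
print(decode_binary64(2**-1074))  # (0, 0, 1): the smallest subnormal
```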
binary64 encodes most values with a biased exponent and a 53-bit significand.
With a biased exponent of 1 or more**, the value is: 1.the_52_bit_encoded_"mantissa" × 2^(biased_exponent − bias).
With a biased exponent of 0, the value is: 0.the_52_bit_encoded_"mantissa" × 2^(1 − bias).
OP has the right idea for normal values. But there also exist subnormal values, which have various numbers of leading zeros in 0.the_52_bit_encoded_"mantissa". The smallest non-zero value is then 0.(51 zeros)1 × 2^(1 − bias), or 2^−1074.
** When the biased exponent has its maximal value, the number is special: either an infinity or a not-a-number (NaN).

Negative integer zero

Why can Python represent negative zero in float, but cannot represent negative zero in int?
More specifically:
a = 0.0
print(a)
# 0.0
b = -a
print(b)
# -0.0
BUT:
a = 0
print(a)
# 0
b = -a
print(b)
# 0
(I am aware of the discussion here negative zero in python on negative float zero, but the ints are not really discussed there).
Historically, there were integer formats that could represent both −0 and +0. Both sign-and-magnitude and one’s complement can represent −0 and +0. These proved to be less useful than two’s complement, which won favor and is ubiquitous today.
Two’s complement has some numerical properties that make it a little nicer to implement in hardware, and having two zeros caused some nuisance for programmers. (I heard of bugs such as an account balance being −0 instead of +0 resulting in a person being sent a bill when they should not have been.)
Floating-point uses sign-and-magnitude, so it can represent both −0 and +0. Due to the nature of floating-point, the arithmetic properties of two’s complement would not aid a floating-point implementation as much, and having two zeros allows a programmer to use a little extra information in some circumstances.
So the choices for integer and floating-point formats are motivated by utility, not mathematical necessity.
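The asymmetry described above is easy to observe; math.copysign exposes the float sign bit that == ignores:

```python
import math

# float zero carries a sign bit; int zero does not.
print(-0.0 == 0.0)               # True: the two float zeros compare equal
print(math.copysign(1.0, -0.0))  # -1.0: but the sign bit is still there
print(math.copysign(1.0, 0.0))   # 1.0
print(-0)                        # 0: negating the int 0 just gives 0 back
```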
A Look At Integer Arithmetic
Let’s consider implementing some integer arithmetic in computer hardware using four bits for study. Essentially the first thing we would do is implement unsigned binary arithmetic, so we design some logic gates to make adders and other arithmetic units. So the inputs 0101 and 0011 to the adder produce output 1000.
Next, we want to handle negative numbers. In writing, we handle negative numbers by putting a sign in front, so our first thought might be to do the same thing with bits: use a bit in front to indicate negative. Now we have a sign-and-magnitude representation. 0001 represents +1, and 1001 represents −1. 0010 represents +2, and 1010 represents −2. 0111 represents +7, and 1111 represents −7. And, of course, 0000 represents +0, and 1000 represents −0.
That is an idea, and then we must implement it. We already have an adder, and, if we feed it 0010 (2) and 0011 (3), it correctly outputs 0101 (5). But, if we feed it 0011 (3) and 1001 (−1), it outputs 1100 (−4). So we have to modify it. Well, that is not too bad; we have a subtraction unit for unsigned binary, so we can look at the first bit, and, if we are adding a negative number, subtract instead of adding. That works for some operations; for 0011 and 1001, observing the leading 1 on the second operand and feeding 011 and 001 to the subtraction unit would produce 010 (2), which is correct.
But, if we have 0010 and 1011, feeding 010 and 011 to the subtraction unit might produce some error indication (it was originally designed for unsigned binary) or it might “wrap” and produce 111 (because such wrapping, along with a “borrow out” bit in the output, makes the subtraction unit work as part of a design for subtracting wider numbers). Either way, that is wrong for our signed numbers; we want the output of 0010 (2) plus 1011 (−3) to be 1001 (−1). So we have to design new arithmetic units that handle this. Perhaps, when adding numbers of mixed signs, they figure out which one is larger in magnitude, subtract the smaller from the larger, and then apply the sign bit of the larger. In any case, we have a fair amount of work to do just to design the addition and subtraction units.
Another suggestion is, to make a number negative, invert every bit. This is called one’s complement. It is easy to understand and fits the notion of negation—just negate everything. Let’s consider how it affects our arithmetic units. For the combinations of +3 or −3 with +2 or −2, we would want these results: 0011 (3) + 0010 (2) = 0101 (5), 0011 (3) + 1101 (−2) = 0001 (1), 1100 (−3) + 0010 (2) = 1110 (−1), and 1100 (−3) + 1101 (−2) = 1010 (−5). Upon examination, there is a simple way to adapt our binary adder to make this work: Do the addition on all four bits as if they were unsigned binary, and, if there is a carry out of the leading bit, add it back to the low bit. In unsigned binary 0011 + 0010 = 0101 with no carry, so the final output is 0101. 0011 + 1101 = 0000 with a carry, so the final result is 0001. 1100 + 0010 = 1110 with no carry, so the final result is 1110. 1100 + 1101 = 1001 with a carry, so the final result is 1010.
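The end-around-carry rule above can be sketched in a few lines (`ones_complement_add` is an illustrative helper simulating a fixed-width adder, not a real Python operation):

```python
def ones_complement_add(a: int, b: int, width: int = 4) -> int:
    """One's-complement addition with end-around carry, on width-bit patterns."""
    mask = (1 << width) - 1
    total = a + b
    if total > mask:                  # carry out of the leading bit...
        total = (total & mask) + 1    # ...is added back to the low bit
    return total & mask

# 0011 (3) + 1101 (-2 in one's complement) = 0001 (1)
print(format(ones_complement_add(0b0011, 0b1101), '04b'))  # 0001
# 1100 (-3) + 1101 (-2) = 1010 (-5)
print(format(ones_complement_add(0b1100, 0b1101), '04b'))  # 1010
```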
This is nice; our one’s complement adder is simpler than the sign-and-magnitude adder. It does not need to compare magnitudes and does not need to do a subtraction to handle negative numbers. We can make it cheaper and make more profit.
Then somebody comes up with the idea of two’s complement. Instead of inverting every bit, we will conceptually subtract the number from 2n, where n is the number of bits. So 10000 − 0001 = 1111 represents −1, and 1110 is −2, 1101 is −3, and so on. What does this do to our adder?
In unsigned binary, 0010 (2) + 1101 (13) = 1111 (15). In two’s complement, 0010 (2) + 1101 (−3) = 1111 (−1). The bits are the same! This actually works for all two’s complement numbers; adding the bit patterns for unsigned numbers produces the same results we want for adding two’s complement numbers. We can use the exact same logic gates for unsigned binary and two’s complement. That is brilliant, give that employee a raise. That is what modern hardware does; the same arithmetic units are used for adding or subtracting two’s complement numbers as are used for adding or subtracting unsigned numbers.
This is a large part of why two’s complement won out for representing negative integers. It results in simpler, easier, cheaper, faster, and more efficient computers.
(There is a difference between unsigned addition and two’s complement addition: How overflow is detected. In unsigned addition, an overflow occurs if there is a carry out of the high bit. In two’s complement addition, an overflow occurs if there is a carry out of the highest of the magnitude bits, hence a carry into the sign. Adder units commonly handle this by reporting both indications, in one form or another. That information, if desired, is tested in later instructions; it does not affect the addition itself.)
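The shared adder and the two distinct overflow indications can be sketched on 4-bit values (`add4` is a made-up illustrative helper):

```python
def add4(a: int, b: int):
    """4-bit adder sketch: returns (result bits, unsigned carry-out,
    signed two's-complement overflow)."""
    total = a + b
    result = total & 0xF
    carry_out = total > 0xF  # carry out of the high bit: unsigned overflow
    # Signed overflow: both operands agree in sign but the result differs.
    overflow = bool(((a ^ result) & (b ^ result)) >> 3 & 1)
    return result, carry_out, overflow

# 0010 (2) + 1101 (-3 as two's complement) -> 1111, i.e. -1; no overflow.
print(add4(0b0010, 0b1101))  # (15, False, False); 15 == 0b1111
# 0111 (7) + 0001 (1) -> 1000, which is -8: signed overflow.
print(add4(0b0111, 0b0001))  # (8, False, True)
```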

~1 and ~0 giving strange results in python 3

&, |, ^, and ~ are all bitwise operators in Python. &, ^, and | are all working fine for me: when I take, say, 1|0, I get 1. But ~ is giving me strange results. ~1 gives me -2, and ~0 gives me -1. Is this because I'm using integers or something? I'm running Python 3.
I'm hoping to get 1 from ~0, and 0 from ~1 (the integers). Is this possible?
From here
~x
Returns the complement of x - the number you get by switching each 1 for a 0 and each 0 for a 1. This is the same as -x - 1.
Following the last part of that statement:
-1 - 1 does indeed equal -2
and
-0 - 1 does indeed equal -1
That's because of the two's complement implementation of integers.
If you switch all bits from 0000 0000 (assuming 8 bit integers here, but it's still valid for larger ones), you get 1111 1111. In two's complement interpretation, that's -1, because to represent -1, you take 1, invert all bits and add one:
0000 0001 (= 1)
-> 1111 1110 (inverted)
-> 1111 1111 (added one, now this is '-1')
The same works for your second example.
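To get what the question asked for (1 from 0 and 0 from 1), don't use arithmetic inversion; XOR with 1, or mask the inversion down to one bit:

```python
# ~ is arithmetic inversion on arbitrary-precision ints: ~x == -(x+1).
print(~0, ~1)          # -1 -2

# To flip a single bit, XOR with 1...
print(0 ^ 1, 1 ^ 1)    # 1 0

# ...or mask the inversion to one bit.
print(~0 & 1, ~1 & 1)  # 1 0
```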
