Negative integer zero - Python

Why can Python represent negative zero in float, but cannot represent negative zero in int?
More specifically:
a = 0.0
print(a)
# 0.0
b = -a
print(b)
# -0.0
BUT:
a = 0
print(a)
# 0
b = -a
print(b)
# 0
(I am aware of the discussion in negative zero in python about negative float zero, but ints are not really discussed there.)

Historically, there were integer formats that could represent both −0 and +0. Both sign-and-magnitude and one’s complement can represent −0 and +0. These proved to be less useful than two’s complement, which won favor and is ubiquitous today.
Two’s complement has some numerical properties that make it a little nicer to implement in hardware, and having two zeros caused some nuisance for programmers. (I heard of bugs such as an account balance being −0 instead of +0 resulting in a person being sent a bill when they should not have been.)
Floating-point uses sign-and-magnitude, so it can represent both −0 and +0. Due to the nature of floating-point, the arithmetic properties of two’s complement would not aid a floating-point implementation as much, and having two zeros allows a programmer to use a little extra information in some circumstances.
So the choices for integer and floating-point formats are motivated by utility, not mathematical necessity.
A Look At Integer Arithmetic
Let’s consider implementing some integer arithmetic in computer hardware using four bits for study. Essentially the first thing we would do is implement unsigned binary arithmetic, designing logic gates to make adders and other arithmetic units. The inputs 0101 and 0011 to the adder then produce the output 1000.
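A quick sketch of that adder in Python, with a mask standing in for the 4-bit hardware width (the helper name is mine, purely for illustration):
import struct  # not needed here; the mask alone models the width

def add_unsigned4(a, b):
    """Add two 4-bit unsigned values; the mask models the 4-bit hardware width."""
    return (a + b) & 0b1111

print(format(add_unsigned4(0b0101, 0b0011), '04b'))  # 1000 (5 + 3 = 8)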
Next, we want to handle negative numbers. In writing, we handle negative numbers by putting a sign in front, so our first thought might be to do the same thing with bits: use a bit in front to indicate negative. Now we have a sign-and-magnitude representation. 0001 represents +1, and 1001 represents −1. 0010 represents +2, and 1010 represents −2. 0111 represents +7, and 1111 represents −7. And, of course, 0000 represents +0, and 1000 represents −0.
That is an idea, and then we must implement it. We already have an adder, and, if we feed it 0010 (2) and 0011 (3), it correctly outputs 0101 (5). But, if we feed it 0011 (3) and 1001 (−1), it outputs 1100 (−4). So we have to modify it. Well, that is not too bad: we have a subtraction unit for unsigned binary, so we can look at the first bit, and, if we are adding a negative number, subtract instead of adding. That works for some operations; for 0011 and 1001, observing the leading 1 on the second operand and feeding 011 and 001 to the subtraction unit would produce 010 (2), which is correct.
But, if we have 0010 and 1011, feeding 010 and 011 to the subtraction unit might produce some error indication (it was originally designed for unsigned binary), or it might “wrap” and produce 111 (because such wrapping, along with a “borrow out” bit in the output, makes the subtraction unit work as part of a design for subtracting wider numbers). Either way, that is wrong for our signed numbers; we want the output of 0010 (2) plus 1011 (−3) to be 1001 (−1). So we have to design new arithmetic units that handle this. Perhaps, when adding numbers of mixed signs, they figure out which one is larger in magnitude, subtract the smaller from the larger, and then apply the sign bit of the larger. In any case, we have a fair amount of work to do just to design the addition and subtraction units.
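Here is a rough sketch of that mixed-sign logic in Python (function name and structure are my own, purely illustrative; magnitude overflow simply wraps in this toy model):

def add_sign_magnitude4(a, b):
    """Add two 4-bit sign-and-magnitude values (top bit is the sign)."""
    sign_a, mag_a = a >> 3, a & 0b111
    sign_b, mag_b = b >> 3, b & 0b111
    if sign_a == sign_b:                   # same sign: add the magnitudes
        return (sign_a << 3) | ((mag_a + mag_b) & 0b111)
    if mag_a >= mag_b:                     # mixed signs: subtract the smaller
        return (sign_a << 3) | (mag_a - mag_b)  # magnitude, keep the larger's sign
    return (sign_b << 3) | (mag_b - mag_a)

print(format(add_sign_magnitude4(0b0010, 0b1011), '04b'))  # 1001 (2 + -3 = -1)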
Another suggestion is, to make a number negative, invert every bit. This is called one’s complement. It is easy to understand and fits the notion of negation: just negate everything. Let’s consider how it affects our arithmetic units. For the combinations of +3 or −3 with +2 or −2, we would want these results: 0011 (3) + 0010 (2) = 0101 (5), 0011 (3) + 1101 (−2) = 0001 (1), 1100 (−3) + 0010 (2) = 1110 (−1), and 1100 (−3) + 1101 (−2) = 1010 (−5).
Upon examination, there is a simple way to adapt our binary adder to make this work: Do the addition on all four bits as if they were unsigned binary, and, if there is a carry out of the leading bit, add it back to the low bit. In unsigned binary 0011 + 0010 = 0101 with no carry, so the final output is 0101. 0011 + 1101 = 0000 with a carry, so the final result is 0001. 1100 + 0010 = 1110 with no carry, so the final result is 1110. 1100 + 1101 = 1001 with a carry, so the final result is 1010.
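The end-around-carry rule is small enough to check directly (a sketch; the function name is mine):

def add_ones_complement4(a, b):
    """Add two 4-bit one's complement values with end-around carry."""
    total = a + b
    if total > 0b1111:                   # carry out of the leading bit...
        total = (total & 0b1111) + 1     # ...is added back to the low bit
    return total & 0b1111

for x, y in [(0b0011, 0b0010), (0b0011, 0b1101), (0b1100, 0b0010), (0b1100, 0b1101)]:
    print(format(add_ones_complement4(x, y), '04b'))
# 0101, 0001, 1110, 1010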
This is nice; our one’s complement adder is simpler than the sign-and-magnitude adder. It does not need to compare magnitudes and does not need to do a subtraction to handle negative numbers. We can make it cheaper and make more profit.
Then somebody comes up with the idea of two’s complement. Instead of inverting every bit, we will conceptually subtract the number from 2^n, where n is the number of bits. So 10000 − 0001 = 1111 represents −1, and 1110 is −2, 1101 is −3, and so on. What does this do to our adder?
In unsigned binary, 0010 (2) + 1101 (13) = 1111 (15). In two’s complement, 0010 (2) + 1101 (−3) = 1111 (−1). The bits are the same! This actually works for all two’s complement numbers; adding the bit patterns for unsigned numbers produces the same results we want for adding two’s complement numbers. We can use the exact same logic gates for unsigned binary and two’s complement. That is brilliant, give that employee a raise. That is what modern hardware does; the same arithmetic units are used for adding or subtracting two’s complement numbers as are used for adding or subtracting unsigned numbers.
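That reuse is easy to demonstrate in Python, with & 0b1111 standing in for the 4-bit register width (illustrative only):

a, b = 0b0010, 0b1101        # 2 and 13 as unsigned, or 2 and -3 as two's complement
bits = (a + b) & 0b1111      # the same adder serves both interpretations
print(format(bits, '04b'))   # 1111: 15 unsigned, -1 two's complement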
This is a large part of why two’s complement won out for representing negative integers. It results in simpler, easier, cheaper, faster, and more efficient computers.
(There is a difference between unsigned addition and two’s complement addition: how overflow is detected. In unsigned addition, an overflow occurs if there is a carry out of the high bit. In two’s complement addition, an overflow occurs if the carry into the sign bit differs from the carry out of it. Adder units commonly handle this by reporting both indications, in one form or another. That information, if desired, is tested in later instructions; it does not affect the addition itself.)
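A sketch of both flags for the 4-bit case (the function name is made up; the overflow rule is the standard carry-into-sign versus carry-out-of-sign comparison):

def add4_with_flags(a, b):
    """4-bit add returning (result, unsigned carry, signed overflow)."""
    result = (a + b) & 0b1111
    carry = (a + b) > 0b1111                             # carry out of the high bit
    carry_into_sign = ((a & 0b0111) + (b & 0b0111)) > 0b0111
    overflow = carry_into_sign != carry                  # signed overflow
    return result, carry, overflow

print(add4_with_flags(0b0111, 0b0001))  # (8, False, True): 7 + 1 overflows signed
print(add4_with_flags(0b1111, 0b0001))  # (0, True, False): -1 + 1 is fine signed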

Related

Bitwise: Why is 14 & -14 equal to 2 and 16 & -16 equal to 16?

It may be a dumb question, but I need to understand it more deeply.
Python integers behave as if they were stored in two's complement. That means that positive numbers are stored simply as their bit sequence (so 14 is 00001110, since it's equal to 8 + 4 + 2). On the other hand, negative numbers are stored by taking their positive quantity, inverting it, and adding one. So -14 is 11110010. We took the bitwise representation of 14 (00001110), inverted it (11110001), and added one (11110010).
But there's an added wrinkle. Python integer values are bignums; they can be arbitrarily large. So our usual notion of "this number is stored in N bits" breaks down. Instead, we may end up with two numbers of differing lengths. If we end up in that situation, we may have to sign extend the shorter one. This is just a fancy way of saying "take the most significant bit and repeat it until the number is long enough for our liking".
So in the case of 14 and -14, looking at the low four bits, we have
1110   (14)
0010   (-14, the low four bits of 11110010)
We & them together. Only the second bit (counting from the right, starting at the least significant bit) is set in both, so we get 0010, or 2. On the other hand, with 16 and -16, we get
010000   (16)
110000   (-16)
For -16, we took positive sixteen (010000), flipped all of the bits (101111), and then added one, which got carried all the way over to the second most significant bit (110000). When we & these, we get 16.
010000 (= 16)
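You can confirm all of this in the interpreter; masking with 0xFF exposes the sign-extended low eight bits:
print(14 & -14)                     # 2
print(16 & -16)                     # 16
print(format(-14 & 0xFF, '08b'))    # 11110010
print(format(-16 & 0xFF, '08b'))    # 11110000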
See also BitwiseOperators - Python Wiki
A bitwise operation is a binary operation.
In some representations of integers, one bit is used to represent the sign of the number. Which bit that is will change the result of the bitwise &. In 2's complement, negative numbers are represented by inverting all the bits and then adding 1. There are many ways of representing signed numbers in binary. Regardless, a bitwise operation between a positive and a negative number depends on the representation, so without knowing the representation the result is effectively undefined. That is probably why most calculators (those advanced enough to have such a feature) only allow positive integers in bitwise operations.
EDIT: The specific numbers you chose are significant. The number 14 is represented in binary using fewer bits than 16. It just so happens that -16 and 16 have exactly the same binary representation (looking only at the first five bits, since the rest are not significant when you come to & them together).
Now a bitwise & only sets a bit if that bit is set in both of the numbers you are and-ing together.
  1110 = 14
& 0010 = -14
------
  0010 = 2!

  10000 = 16
& 10000 = -16
-------
  10000 = 16!
There's your answer.

The bitwise NOT operator [duplicate]

I was trying to understand bitwise NOT in Python.
I tried following:
print('{:b}'.format(~ 0b0101))
print(~ 0b0101)
The output is
-110
-6
I tried to understand the output as follows:
Bitwise negating 0101 gives 1010. With a 1 in the most significant bit, Python interprets it as a negative number in 2's complement form, and to get back the corresponding decimal it takes the 2's complement of 1010 as follows:
1010
0101 (negating)
0110 (adding 1 to get final value)
So it prints it as -110, which is equivalent to -6.
Am I right with this interpretation?
You're half right.
The value is indeed represented by ~x == -(x+1) (add one and invert), but the explanation of why is a little misleading.
Two's complement numbers require setting the MSB of the integer, which is a little difficult if the number can be an arbitrary number of bits long (as is the case with Python). Internally Python keeps a separate count (there are optimizations for short numbers, however) that tracks how long the number is. When you print a negative int using the binary format, f'{-6:b}', it just slaps a negative sign in front of the binary representation of the positive value (effectively sign and magnitude). Otherwise, how would Python determine how many leading ones there should be? Should positive values always have leading zeros to indicate they're positive? Internally the math does behave as two's complement, though.
If we consider signed 8-bit numbers (and display all the digits) in 2's complement, your example becomes:
~ 0000 0101: 5
= 1111 1010: -6
So in short, Python is performing correct bitwise negation; however, the display of negative binary-formatted numbers is misleading.
Python integers are arbitrarily long, so if you invert 0b0101, it would be 1111...11111010. How many ones do you write? Well, a 4-bit two's complement -6 is 1010, and a 32-bit two's complement -6 is 11111111111111111111111111111010. So an arbitrarily long -6 is most simply written as just -6.
Check what happens when ~5 is masked to look at the bits it represents:
>>> ~5
-6
>>> format(~5 & 0xF,'b')
'1010'
>>> format(~5 & 0xFFFF,'b')
'1111111111111010'
>>> format(~5 & 0xFFFFFFFF,'b')
'11111111111111111111111111111010'
>>> format(~5 & 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFF,'b')
'11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111010'
A negative decimal representation makes sense, and you must mask to limit the representation to a specific number of bits.

Python XOR behavior with a mix of positive/negative number

Here are two results I get when I XOR two integers: the same bits, but a different sign for the second operand of the XOR.
>>> bin(0b0001 ^ -0b0010)
'-0b1'
>>> bin(0b0001 ^ 0b0010)
'0b11'
I don't really understand the logic. Isn't XOR just supposed to XOR every bit one by one, even with signed numbers? I would expect to get the same result (with a different sign).
If python's integers were fixed-width (eg: 32-bit, or 64-bit), a negative number would be represented in 2's complement form. That is, if you want -a, then take the bits of a, invert them all, and then add 1. Then a ^ b is just the number that's represented by the bitwise xor of the bits of a and b in two's complement. The result is re-interpreted in two's complement (ie: negative if the top bit is set).
Python's int type isn't fixed-width, but the result of a ^ b follows the same pattern: imagine that the values are represented in a wide-enough fixed-width int type, and then take the xor of the two values.
Although this now seems a bit arbitrary, it makes sense historically: Python adopted many operations from C, so xor was defined to work like in C. Python had a fixed-width integer type like C, and having a ^ b give the same result for the fixed-width and arbitrary-width integer types essentially forces the current definition.
Back to a worked example: 1 ^ -2. 8 bits is more than enough to represent these two values. In 2's complement:
1 = 00000001
-2 = 11111110
Then the bitwise xor is:
= 11111111
This is the 8-bit 2's complement representation of -1. Although we've used 8 bits here, the result is the same no matter the width chosen as long as it's enough to represent the two values.
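Python agrees with the worked example, whatever width you imagine (masking just exposes a fixed-width view):
print(1 ^ -2)                          # -1
print(format((1 ^ -2) & 0xFF, '08b'))  # 11111111, the 8-bit view of -1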

~ Binary Ones Complement in Python 3 [duplicate]

Just had a doubt about how binary one's complement works.
For example (in Python):
a = 60
print(~a)
Gives an output:
-61
Isn't the binary one's complement of 60:
a = 0011 1100
~a = 1100 0011
Should it not be -60?
I know I'm wrong, but why does it end up at -61?
~ is the bitwise inversion operator, and it acts exactly as defined:
The bitwise inversion of x is defined as -(x+1).
This is simply how the bitwise inversion of the two's complement representation of an integer works.
The two's complement wheel visualizes this pretty well: the bitwise inversion of 1 is -2, the bitwise inversion of 2 is -3, ..., and the bitwise inversion of 60 is -61.
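The identity behind the wheel is easy to spot-check:
assert all(~x == -(x + 1) for x in range(-1000, 1000))
print(~1, ~2, ~60)   # -2 -3 -61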
In all modern computers, 2's complement binary is used for representing integers (rather than the classical binary representation).
As confirmed in Python docs:
A two's complement binary is the same as the classical binary representation for positive integers but is slightly different for negative numbers. Negative numbers are represented by performing the two's complement operation on their absolute value.
The 2's complement of a negative number, -x, is written using the bit pattern for (x-1) with all of the bits complemented (switched from 1 to 0 or 0 to 1).
Example:
2's complement of -15:
-15 => complement(x-1) => complement(15-1) => complement(14) => complement(1110) => 0001
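Masking to four bits in Python shows the same pattern:
print(format(-15 & 0b1111, '04b'))   # 0001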
Python's ~ (bitwise NOT) operator returns the 1's complement of the number.
Example:
print(~14) # Outputs -15
14 is (1110) in its 2's complement binary form.
Here, ~14 would invert (i.e. 1's complement) all the bits in this form to 0001.
However, 0001 is actually the 2's complement of -15.
A simple rule to remember the bitwise NOT operation on integers is -(x+1).
print(~60) # Outputs -61
print(~-60) # Outputs 59
You are almost there. 1100 0011 is actually -61.
Here's how a negative binary is converted to decimal:
1. Invert the bits
2. Add 1
3. Convert to decimal
4. Add a negative sign
So:
1100 0011
0011 1100 <-- bits inverted
0011 1101 <-- one added
61 <-- converted to decimal
-61 <-- added negative sign
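Those steps can be rolled into a small helper (my own, just to check the arithmetic):

def from_twos_complement(bits):
    """Interpret a bit string as an N-bit two's complement integer."""
    value = int(bits, 2)
    return value - (1 << len(bits)) if bits[0] == '1' else value

print(from_twos_complement('11000011'))  # -61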
From Wikipedia's Two's complement page:
The two's complement of an N-bit number is defined as its complement with respect to 2^N. For instance, for the three-bit number 010, the two's complement is 110, because 010 + 110 = 1000.
Here 1100 0011's complement is 0011 1101 because
1100 0011
+ 0011 1101
-------------
1 0000 0000

Performing right shift and bit masking on binary fraction in python

I am looking for a way in Python to perform a right shift and bit masking on a binary number which has a fraction part as well. For example, if there is 1 integer bit and 2 fraction bits, then the number 0b101 corresponds to 1.25 in decimal. First, I want to know the Pythonic way to represent such a number in Python.
Second, I want to perform 1 right shift on this number (0b101 >> 1) so that the resultant number will be 0b010, which is 0.5 in decimal. Is there an intrinsic, Pythonic way to perform this operation? Similarly, how do I mask and get a specific bit of the binary number?
Presently, for the shift I am multiplying the number by 2**-x, where x is the number of right shifts. I cannot think of a similar operation I can perform for the bit mask.
If you really must get directly at the internal representation of a float you can use struct, like this:
>>> import struct
>>> a = 1.25
>>> b = struct.pack('>d',a)
>>> b
b'?\xf4\x00\x00\x00\x00\x00\x00' # the ? means \x3f, leftmost 7 bits of exponent
>>> a.hex()
'0x1.4000000000000p+0'
You can mask the bit you want out of the bytestring that struct.pack() returns.
[edit] The question mark representing \x3f is because the default output representation of a bytestring is a string and Python will where possible show an ascii character, not two hex digits.
[edit] This representation is in principle platform-dependent, but in practice it isn't, because virtually every computer (even IBM mainframes nowadays) has a floating-point processor that uses this format.
Finding out which bit you want may be something of a challenge.
>>> c = struct.pack('>d',a/2)
>>> c
b'?\xe4\x00\x00\x00\x00\x00\x00'
>>> (a/2).hex()
'0x1.4000000000000p-1'
As you can see, division by 2 is not quite the simple one-bit shift to the right that your question seems to suggest you are expecting. In this case, the division by 2 has decremented the exponent by 1 (from 0x3ff to 0x3fe; 1023 to 1022) and left the bit pattern of the fraction (0x4000) unchanged. The exponent appears large because it is biased by 1023.
The main difficulties are
Sign, exponent and fraction don't align to byte boundaries, but to nybble boundaries (sign plus exponent: 12 bits; fraction: 52 bits)
The number is normalized so that it has no leading zeroes (much as scientific notation in decimal is normalized so that it has no leading zeroes) and, since everyone knows it's there, the leading 1 is not stored.
I can recommend the Wikipedia article on this subject: it has lots of useful examples.
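If you want the individual fields rather than the packed bytes, you can shift and mask them out yourself (a sketch of the standard IEEE 754 double layout):

import struct

bits = int.from_bytes(struct.pack('>d', 1.25), 'big')
sign     = bits >> 63
exponent = (bits >> 52) & 0x7FF              # biased by 1023
fraction = bits & ((1 << 52) - 1)            # the implicit leading 1 is not stored
print(sign, exponent - 1023, hex(fraction))  # 0 0 0x4000000000000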
But I suspect that you don't really want to get at the internal representation of a float. Instead, you want a fixed-point binary class, without pesky binary exponents, that works much the same as you would do it on paper, and where division by a power of 2 really does reflect as a shift of so many bits to the right.
Depending on how much work you want to put into it, you could do this by defining a FixedBinary class as a subclass of numbers.Real, with the integer portion internally represented by one int and the fractional component by another int, and the sign by a third int, so that 1.25 would be represented as (1, int(0.25 * 65536), +1) (or some other power of 2).
This also shows you the simplest way to get a bit representation of your fraction.
[edit] I recommend storing the sign separately. You could store it in the integer portion, or the fraction, or both, but all have disadvantages.
If you store it in the sign of the fraction, the two's-complement representation of negative integers will give you difficulty when you want to mask your bits.
If you don't store it in the sign of the fraction, there will be no way to represent -0.5.
If you don't store it in the sign of the integer portion, there will be no way to represent -1.0.
A multiplicand of 65536 will give you 4 decimal digits of accuracy. You can increase it if you want more. I also recommend that you store your fraction in the rightmost bits and simply ignore the leftmost bits. In other words, be content with the binary point being in the middle of the int, don't insist on it being on the left. That is because you will need headroom to the left of the binary point when you do multiplication.
Implementing your own numeric class is a considerable amount of work, though.
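Still, a bare-bones sketch shows the flavor. Here I use two fraction bits to mirror the question's 0b101 example, and skip the separate sign int recommended above (class name and layout are my own):

class FixedBinary:
    FRAC_BITS = 2                        # binary point two bits from the right

    def __init__(self, value):
        self.raw = round(value * (1 << self.FRAC_BITS))

    def __rshift__(self, n):
        out = FixedBinary(0)
        out.raw = self.raw >> n          # a genuine bit shift, as on paper
        return out

    def bit(self, i):
        """Return bit i of the raw representation, counting from the right."""
        return (self.raw >> i) & 1

    def __float__(self):
        return self.raw / (1 << self.FRAC_BITS)

x = FixedBinary(1.25)                 # raw bits 0b101
print(float(x >> 1))                  # 0.5, i.e. 0b010 as in the question
print(x.bit(2), x.bit(1), x.bit(0))   # 1 0 1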
You can do this using fxpmath.
Info about this package is at:
https://github.com/francof2a/fxpmath
For your example:
from fxpmath import Fxp
x = Fxp('0b0101', signed=True, n_word=4, n_frac=2)
print(x)
y = x >> 1
print(y)
# example of AND mask
z = x & Fxp('0b0110', signed=True, n_word=4, n_frac=2)
print(z.bin())
outputs:
1.25
0.5
0100
