How to convert a signed to an unsigned integer in Python

Let's say I have this number i = -6884376.
How do I treat it as an unsigned variable?
Something like (unsigned long)i in C.

Assuming:
You have 2's-complement representations in mind; and,
By (unsigned long) you mean unsigned 32-bit integer,
then you just need to add 2**32 (or 1 << 32) to the negative value.
For example, apply this to -1:
>>> -1
-1
>>> _ + 2**32
4294967295L
>>> bin(_)
'0b11111111111111111111111111111111'
Assumption #1 means you want -1 to be viewed as a solid string of 1 bits, and assumption #2 means you want 32 of them.
Nobody but you can say what your hidden assumptions are, though. If, for example, you have 1's-complement representations in mind, then you need to apply the ~ prefix operator instead. Python integers work hard to give the illusion of using an infinitely wide 2's complement representation (like regular 2's complement, but with an infinite number of "sign bits").
And to duplicate what the platform C compiler does, you can use the ctypes module:
>>> import ctypes
>>> ctypes.c_ulong(-1) # stuff Python's -1 into a C unsigned long
c_ulong(4294967295L)
>>> _.value
4294967295L
C's unsigned long happens to be 4 bytes on the box that ran this sample.
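If you want a result that is 32-bit on every platform, note that the size of C's unsigned long varies; ctypes also provides explicitly sized types. A small sketch:
import ctypes
print(ctypes.c_uint32(-1).value)      # 4294967295, c_uint32 is 4 bytes everywhere
print(ctypes.sizeof(ctypes.c_ulong))  # 4 or 8, depending on the platform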

To get the value equivalent to your C cast, just bitwise-AND with the appropriate mask, e.g. if unsigned long is 32-bit:
>>> i = -6884376
>>> i & 0xffffffff
4288082920
or if it is 64 bit:
>>> i & 0xffffffffffffffff
18446744073702667240
Do be aware though that although that gives you the value you would have in C, it is still a signed value, so any subsequent calculations may give a negative result and you'll have to continue to apply the mask to simulate a 32 or 64 bit calculation.
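For example, here is a minimal sketch of simulating a wrapping 32-bit unsigned addition by re-applying the mask (add32 is a helper name introduced here for illustration):
MASK32 = 0xffffffff

def add32(a, b):
    # re-apply the mask after each operation to stay within 32 bits
    return (a + b) & MASK32

print(add32(0xffffffff, 1))  # 0, wrapping around like a C unsigned 32-bit addition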
This works because although Python looks like it stores all numbers as sign and magnitude, the bitwise operations are defined as working on two's complement values. C stores integers in twos complement but with a fixed number of bits. Python bitwise operators act on twos complement values but as though they had an infinite number of bits: for positive numbers they extend leftwards to infinity with zeros, but negative numbers extend left with ones. The & operator will change that leftward string of ones into zeros and leave you with just the bits that would have fit into the C value.
Displaying the values in hex may make this clearer (and I rewrote the string of f's as an expression to show we are interested in either 32 or 64 bits):
>>> hex(i)
'-0x690c18'
>>> hex (i & ((1 << 32) - 1))
'0xff96f3e8'
>>> hex (i & ((1 << 64) - 1)
'0xffffffffff96f3e8L'
For a 32 bit value in C, positive numbers go up to 2147483647 (0x7fffffff), and negative numbers have the top bit set going from -1 (0xffffffff) down to -2147483648 (0x80000000). For values that fit entirely in the mask, we can reverse the process in Python by using a smaller mask to remove the sign bit and then subtracting the sign bit:
>>> u = i & ((1 << 32) - 1)
>>> (u & ((1 << 31) - 1)) - (u & (1 << 31))
-6884376
Or for the 64 bit version:
>>> u = 18446744073702667240
>>> (u & ((1 << 63) - 1)) - (u & (1 << 63))
-6884376
This inverse process will leave the value unchanged if the sign bit is 0, but obviously it isn't a true inverse because if you started with a value that wouldn't fit within the mask size then those bits are gone.
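The same inverse can be wrapped up as a small helper; to_signed is a name introduced here for illustration:
def to_signed(u, bits=32):
    # reinterpret a masked (unsigned) value as a signed two's complement number
    sign_bit = 1 << (bits - 1)
    return (u & (sign_bit - 1)) - (u & sign_bit)

print(to_signed(4288082920))                # -6884376
print(to_signed(18446744073702667240, 64))  # -6884376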

Python doesn't have builtin unsigned types. You can use mathematical operations to compute a new int representing the value you would get in C, but there is no "unsigned value" of a Python int. The Python int is an abstraction of an integer value, not a direct access to a fixed-byte-size integer.
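One such mathematical operation is reduction modulo 2**32, which gives the same result as masking with 0xffffffff; a quick sketch:
i = -6884376
print(i % (1 << 32))   # 4288082920, identical to i & 0xffffffff
print(i % (1 << 64))   # 18446744073702667240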

Since version 3.2:
def unsignedToSigned(n, byte_count):
    return int.from_bytes(n.to_bytes(byte_count, 'little', signed=False), 'little', signed=True)

def signedToUnsigned(n, byte_count):
    return int.from_bytes(n.to_bytes(byte_count, 'little', signed=True), 'little', signed=False)
Output:
In [3]: unsignedToSigned(5, 1)
Out[3]: 5
In [4]: signedToUnsigned(5, 1)
Out[4]: 5
In [5]: unsignedToSigned(0xFF, 1)
Out[5]: -1
In [6]: signedToUnsigned(0xFF, 1)
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 signedToUnsigned(0xFF, 1)
Input In [1], in signedToUnsigned(n, byte_count)
4 def signedToUnsigned(n, byte_count):
----> 5 return int.from_bytes(n.to_bytes(byte_count, 'little', signed=True), 'little', signed=False)
OverflowError: int too big to convert
In [7]: signedToUnsigned(-1, 1)
Out[7]: 255
Explanation: to_bytes/from_bytes convert to/from a two's complement byte representation, treating the number as having byte_count * 8 bits. Coming from C/C++, you would typically pass 4 or 8 as byte_count for a 32- or 64-bit number respectively.
I first pack the input number in the format it is supposed to come from (using the signed argument to control signed/unsigned), then unpack it as the format we would like it to have been in, which gives the result.
Note the exception when trying to use fewer bytes than required to represent the number (In [6]): 0xFF is 255, which can't be represented in a signed single byte (-128 ≤ n ≤ 127). Failing loudly here is preferable to silently producing a wrong value.
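As a quick check, with byte_count=4 these functions reproduce the 32-bit masking results shown earlier:
print(signedToUnsigned(-6884376, 4))    # 4288082920, same as -6884376 & 0xffffffff
print(unsignedToSigned(4288082920, 4))  # -6884376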

You could use Python's built-in struct module:
Encode:
import struct
i = -6884376
print('{0:b}'.format(i))
packed = struct.pack('>l', i) # Packing a long number.
unpacked = struct.unpack('>L', packed)[0] # Unpacking a packed long number to unsigned long
print(unpacked)
print('{0:b}'.format(unpacked))
Out:
-11010010000110000011000
4288082920
11111111100101101111001111101000
Decode:
dec_pack = struct.pack('>L', unpacked) # Packing an unsigned long number.
dec_unpack = struct.unpack('>l', dec_pack)[0] # Unpacking a packed unsigned long number to long (revert action).
print(dec_unpack)
Out:
-6884376
[NOTE]:
> means big-endian byte order.
l is long.
L is unsigned long.
With a byte-order prefix such as >, struct uses standard sizes, so l and L are always 4 bytes; i and I (int / unsigned int) are likewise 4 bytes and would work equally well here.
[UPDATE]
As hl037_ pointed out in the comments, this approach works for int32, not int64 or int128, since the examples above use the long codes in struct.pack(). Nevertheless, for int64 the code changes very simply: use the long long codes (q / Q) in struct instead, as follows:
Encode:
i = 9223372036854775807 # the largest int64 number
packed = struct.pack('>q', i) # Packing an int64 number
unpacked = struct.unpack('>Q', packed)[0] # Unpacking signed to unsigned
print(unpacked)
print('{0:b}'.format(unpacked))
Out:
9223372036854775807
111111111111111111111111111111111111111111111111111111111111111
Next, follow the same approach for the decoding stage, as sketched below. As well as this, keep in mind that q is a signed long long integer (8 bytes) and Q is an unsigned long long.
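A minimal sketch of that decode step, reusing unpacked from above:
packed = struct.pack('>Q', unpacked)       # pack the unsigned value back into bytes
decoded = struct.unpack('>q', packed)[0]   # reinterpret those bytes as a signed int64
print(decoded)                             # 9223372036854775807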
But in the case of int128, the situation is slightly different, as there is no 16-byte format code for struct.pack(). Therefore, you should split your number into two int64 halves.
Here's how it should be:
i = 10000000000000000000000000000000000000 # an int128 number
print(len('{0:b}'.format(i)))
max_int64 = 0xFFFFFFFFFFFFFFFF
packed = struct.pack('>qq', (i >> 64) & max_int64, i & max_int64)
a, b = struct.unpack('>QQ', packed)
unpacked = (a << 64) | b
print(unpacked)
print('{0:b}'.format(unpacked))
Out:
123
10000000000000000000000000000000000000
111100001011110111000010000110101011101101001000110110110010000000011110100001101101010000000000000000000000000000000000000
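For completeness, the int128 decode mirrors this encode; here is a minimal sketch under the same assumptions, reusing max_int64 and the unpacked value from above:
packed = struct.pack('>QQ', (unpacked >> 64) & max_int64, unpacked & max_int64)
hi, lo = struct.unpack('>qq', packed)    # reinterpret the two halves as signed
decoded = (hi << 64) | (lo & max_int64)  # the sign comes from the high half
print(decoded)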

You can use abs() to get the magnitude of a negative number:
a = -12
b = abs(a)
print(b)
Output:
12
Note, though, that abs() only strips the sign; it does not reinterpret bits, so it is not a two's complement conversion like the approaches above.

Related

How to convert a char to signed integer in python

I don't have much experience with Python, so I need your help!
In the following example I can convert a char to an unsigned integer, but I need a signed integer. How can I convert a char to a signed integer in Python?
d="bd"
d=int(d,16)
print (d)
The result is: 189
but I expect: -67
First a nitpick: It's not a char, it's a string.
The main problem is that int() cannot know how long the input is supposed to be; or in other words: it cannot know which bit is the MSB (most significant bit) designating the sign. In python, int just means "an integer, i.e. any whole number". There is no defined bit size of numbers, unlike in C.
For int(), the inputs 000000bd and bd therefore are the same; and the sign is determined by the presence or absence of a - prefix.
For arbitrary bit counts of your input numbers (not only the standard 8, 16, 32, ...), you will need to do the two's-complement conversion step manually and tell it the supposed input size. (In C, you would do that implicitly by assigning the conversion result to an integer variable of the target bit size.)
def hex_to_signed_number(s, width_in_bits):
    n = int(s, 16) & (pow(2, width_in_bits) - 1)
    if n >= pow(2, width_in_bits - 1):
        n -= pow(2, width_in_bits)
    return n
Some testcases for that function:
In [6]: hex_to_signed_number("bd", 8)
Out[6]: -67
In [7]: hex_to_signed_number("bd", 16)
Out[7]: 189
In [8]: hex_to_signed_number("80bd", 16)
Out[8]: -32579
In [9]: hex_to_signed_number("7fff", 16)
Out[9]: 32767
In [10]: hex_to_signed_number("8000", 16)
Out[10]: -32768
print(int.from_bytes(bytes.fromhex("bd"), byteorder="big", signed=True))
You can convert the hex string into bytes and then convert the bytes to an int, passing signed=True, which gives you the negative integer value.
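Note that the byte width decides where the sign bit is; padding the hex string changes the interpretation, mirroring the width_in_bits parameter above:
print(int.from_bytes(bytes.fromhex("bd"), byteorder="big", signed=True))    # -67
print(int.from_bytes(bytes.fromhex("00bd"), byteorder="big", signed=True))  # 189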

What is good way to negate an integer in binary operation in python?

Based on what I've read about the binary representation of integers, the first bit is for sign (positive or negative).
Let's say we have an integer x = 5 and sys.getsizeof(x) returns 28 (that is binary representation in 28 bit).
For now I am trying to flip the first bit to 1 by using x |= (1 << 27), but it returns 134217733.
I was just wondering whether it needs to be some negative number? (not -5)
Is there anything wrong with what I am doing?
You can't switch a Python int from positive to negative the way you're trying to, by just flipping a bit in its representation. You're assuming it's stored in a fixed-length two's complement representation. But integers in Python 3 are not fixed-length bit strings, and they are not stored in a two's complement representation. Instead, they are stored as variable-length strings of 30- or 15-bit "digits", with the sign stored separately (like a signed-magnitude representation). So the "lowest-level" way to negate a Python int is not with bit operations, but with the unary - operator, which will switch its sign. (See the end of this answer for details from the Python 3 source.)
(I should also mention that sys.getsizeof() does not tell you the number of bits in your int. It gives you the number of bytes of memory that the integer object is using. This is also not the number of bytes of the actual stored number; most of those bytes are for other things.)
You can still play around with two's complement representations in Python, by emulating a fixed-length bit string using a positive int. First, choose the length you want, for example 6 bits. (You could just as easily choose larger numbers like 28 or 594.) We can define some helpful constants and functions:
BIT_LEN = 6
NUM_INTS = 1 << BIT_LEN        # 0b1000000
BIT_MASK = NUM_INTS - 1        # 0b111111
HIGH_BIT = 1 << (BIT_LEN - 1)  # 0b100000

def to2c(num):
    """Returns the two's complement representation for a signed integer."""
    return num & BIT_MASK

def from2c(bits):
    """Returns the signed integer for a two's complement representation."""
    bits &= BIT_MASK
    if bits & HIGH_BIT:
        return bits - NUM_INTS
    return bits
Now we can do something like you were trying to:
>>> x = to2c(2)
>>> x |= 1 << 5
>>> bin(x)
'0b100010'
>>> from2c(x)
-30
Which shows that turning on the high bit for the number 2 in a 6-bit two's complement representation turns the number into -30. This makes sense, because 2**(6-1) = 32, so the lowest integer in this representation is -32. And -32 + 2 = -30.
If you're interested in the details of how Python 3 stores integers, you can look through Objects/longobject.c in the source. In particular, looking at the function _PyLong_Negate():
/* If a freshly-allocated int is already shared, it must
   be a small integer, so negating it must go to PyLong_FromLong */
Py_LOCAL_INLINE(void)
_PyLong_Negate(PyLongObject **x_p)
{
    PyLongObject *x;

    x = (PyLongObject *)*x_p;
    if (Py_REFCNT(x) == 1) {
        Py_SIZE(x) = -Py_SIZE(x);
        return;
    }

    *x_p = (PyLongObject *)PyLong_FromLong(-MEDIUM_VALUE(x));
    Py_DECREF(x);
}
you can see that all it does in the normal case is negate the Py_SIZE() value of the integer object. Py_SIZE() is simply a reference to the ob_size field of the integer object. When this value is 0, the integer is 0. Otherwise, its sign is the sign of the integer, and its absolute value is the number of 30- or 15-bit digits in the array that holds the integer's absolute value.
Negative number representations in Python:
Depending on how many binary digits you want, subtract from 2**n:
>>> bin((1 << 8) - 1)
'0b11111111'
>>> bin((1 << 16) - 1)
'0b1111111111111111'
>>> bin((1 << 32) - 1)
'0b11111111111111111111111111111111'
Function to generate a two's complement (negative number) string:
def to_twoscomplement(bits, value):
    if value < 0:
        value = (1 << bits) + value
    formatstring = '{:0%ib}' % bits
    return formatstring.format(value)
Output:
>>> to_twoscomplement(16, 3)
'0000000000000011'
>>> to_twoscomplement(16, -3)
'1111111111111101'
Refer: two's complement of numbers in python

Two's complement of Hex number in Python

Below, a and b are hex values representing two's complement signed binary numbers.
For example:
a = 0x17c7cc6e
b = 0xc158a854
Now I want to know the signed representation of a & b in base 10. Sorry, I'm a low-level programmer and new to Python; I feel very stupid for asking this. I don't care about additional libraries, but the answer should be simple and straightforward. Background: a & b are extracted data from a UDP packet. I have no control over the format, so please don't give me an answer that assumes I can change the format of those variables beforehand.
I have converted a & b into the following with this:
aBinary = bin(int(a, 16))[2:].zfill(32) => 00010111110001111100110001101110 => 398969966
bBinary = bin(int(b, 16))[2:].zfill(32) => 11000001010110001010100001010100 => -1051154348
I was trying to do something like this (doesn't work):
if aBinary[1:2] == 1:
    aBinary = ~aBinary + int(1, 2)
What is the proper way to do this in python?
Why not use ctypes?
>>> import ctypes
>>> a = 0x17c7cc6e
>>> ctypes.c_int32(a).value
398969966
>>> b = 0xc158a854
>>> ctypes.c_int32(b).value
-1051154348
A nice way to do this in Python is using bitwise operations. For example, for 32-bit values:
def s32(value):
    return -(value & 0x80000000) | (value & 0x7fffffff)
Applying this to your values:
>>> s32(a)
398969966
>>> s32(b)
-1051154348
What this function does is sign-extend the value so it's correctly interpreted with the right sign and value.
Python is a bit tricky in that it uses arbitrary precision integers, so negative numbers are treated as if there were an infinite series of leading 1 bits. For example:
>>> bin(-42 & 0xff)
'0b11010110'
>>> bin(-42 & 0xffff)
'0b1111111111010110'
>>> bin(-42 & 0xffffffff)
'0b11111111111111111111111111010110'
>>> import numpy
>>> numpy.int32(0xc158a854)
-1051154348
You'll have to know at least the width of your data. For instance, 0xc158a854 has 8 hexadecimal digits so it must be at least 32 bits wide; it appears to be an unsigned 32 bit value. We can process it using some bitwise operations:
In [232]: b = 0xc158a854
In [233]: if b >= 1<<31: b -= 1<<32
In [234]: b
Out[234]: -1051154348L
The L here marks that Python 2 has switched to processing the value as a long; it's usually not important, but in this case indicates that I've been working with values outside the common int range for this installation. The tool to extract data from binary structures such as UDP packets is struct.unpack; if you just tell it that your value is signed in the first place, it will produce the correct value:
In [240]: s = '\xc1\x58\xa8\x54'
In [241]: import struct
In [242]: struct.unpack('>i', s)
Out[242]: (-1051154348,)
That assumes two's complement representation; one's complement (such as the checksum used in UDP), sign and magnitude, or IEEE 754 floating point are some less common encodings for numbers.
Another modern solution:
>>> b = 0xc158a854
>>> int.from_bytes(bytes.fromhex(hex(b)[2:]), byteorder='big', signed=True)
-1051154348
2^31 = 0x80000000 (the sign bit, which in two's complement represents -2^31)
2^31 - 1 = 0x7fffffff (all the positive bits)
and hence (n & 0x7fffffff) - (n & 0x80000000) will apply the sign correctly.
You could even do n - ((n & 0x80000000) << 1) to subtract the MSB value twice,
or finally there's (n & 0x7fffffff) | -(n & 0x80000000) to just merge in the negative bit rather than subtract:
def signedHex(n): return (n & 0x7fffffff) | -(n & 0x80000000)
signedHex = lambda n: (n & 0x7fffffff) | -(n & 0x80000000)
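Applied to the values from the question, as a quick check:
>>> signedHex(0xc158a854)
-1051154348
>>> signedHex(0x17c7cc6e)
398969966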
value=input("enter hexa decimal value for getting compliment values:=")
highest_value="F"*len(value)
resulting_decimal=int(highest_value,16)-int(value,16)
ones_compliment=hex(resulting_decimal)
twos_compliment=hex(r+1)
print(f'ones compliment={ones_compliment}\n twos complimet={twos_compliment}')

two's complement of numbers in python

I am writing code that will have negative and positive numbers, all 16 bits long with the MSB being the sign bit, i.e. two's complement. This means the smallest number I can have is -32768, which is 1000 0000 0000 0000 in two's complement form. The largest number I can have is 32767, which is 0111 1111 1111 1111.
The issue I am having is that Python represents negative numbers with the same binary notation as positive numbers, just putting a minus sign out the front: i.e. -16384 is displayed as -0100 0000 0000 0000. What I want displayed for a number like -16384 is 1100 0000 0000 0000.
I am not quite sure how this can be coded. This is the code I have. Essentially, if the number is between 180 and 359 it's going to be negative, and I need to display it as a two's complement value. I don't have any code for displaying it because I really have no idea how to do it.
def calculatebearingActive(i):
    numTracks = trackQty_Active
    bearing = (i * 360.0) / numTracks
    if 0 < bearing <= 179:
        FC = bearing / 360.0
        FC_scaled = FC / (2**(-16))
        return int(FC_scaled)
    elif 180 <= bearing <= 359:
        FC = -1 * (360 - bearing) / 360.0
        FC_scaled = FC / (2**(-16))
        return int(FC_scaled)
    elif bearing == 360:
        FC = 0
        return FC
If you're doing something like
format(num, '016b')
to convert your numbers to a two's complement string representation, you'll want to actually take the two's complement of a negative number before stringifying it:
format(num if num >= 0 else (1 << 16) + num, '016b')
or take it mod 65536:
format(num % (1 << 16), '016b')
The two's complement of a value is the one's complement plus one.
You can write your own conversion function based on that:
def to_binary(value):
    result = ''
    if value < 0:
        result = '-'
        value = ~value + 1
    result += bin(value)
    return result
The result looks like this:
>>> to_binary(10)
'0b1010'
>>> to_binary(-10)
'-0b1010'
Edit: To display the bits without the minus in front you can use this function:
def to_twoscomplement(bits, value):
    if value < 0:
        value = (1 << bits) + value
    formatstring = '{:0%ib}' % bits
    return formatstring.format(value)
>>> to_twoscomplement(16, 3)
'0000000000000011'
>>> to_twoscomplement(16, -3)
'1111111111111101'
If you actually want to store the numbers using 16 bits, you can use struct.
import struct
>>> struct.pack('h', 32767)
'\xff\x7f'
>>> struct.pack('h', -32767)
'\x01\x80'
You can unpack using unpack
>>> a = struct.pack('h', 32767)
>>> struct.unpack('H', a)
(32767,)
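The same pack/unpack pairing also shows the signed-to-unsigned reinterpretation directly; a small sketch:
>>> struct.unpack('H', struct.pack('h', -32767))
(32769,)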
Python can hold unlimited integer values; the bit representation will adapt to hold any number you put in. So technical details such as two's complement do not make sense in this context. In C, 0b1111111111111111 means -1 for int16 and 65535 for uint16 or int32. In Python it is always 65535, as the int will adapt to hold such values.
I think this is why they opted to add - in front of negative numbers regardless of the string representation (binary, octal, hex, decimal, ...).
If you wish to replicate the C behaviour and get the negative number represented in two's complement form, you have the following options:
1. int > uint > bin using numpy
The most straightforward way is to dump the value as a width-limited signed int and read it back as unsigned.
If you have access to numpy, the code is pretty easy to read:
>>> bin(np.int16(-30).astype(np.uint16))
'0b1111111111100010'
>>> bin(np.int16(-1).astype(np.uint16))
'0b1111111111111111'
>>> bin(np.int16(-2).astype(np.uint16))
'0b1111111111111110'
>>> bin(np.int16(-16).astype(np.uint16))
'0b1111111111110000'
2 int > uint > bin using struct
You can do the similar think with struct but it is slightly harder to understand
>>> bin(struct.unpack('>H', struct.pack('>h', 30))[0])
'0b1111111111100010'
>>> bin(struct.unpack('>H', struct.pack('>h', -1))[0])
'0b1111111111111111'
>>> bin(struct.unpack('>H', struct.pack('>h', -2))[0])
'0b1111111111111110'
>>> bin(struct.unpack('>H', struct.pack('>h', -16))[0])
'0b1111111111110000'
Note: h is a signed and H an unsigned 16-bit int, and > stands for big-endian; it comes in handy if you want to read the bytes directly without converting them back to an int.
3. int using struct, then read byte by byte and convert to bin
>>> ''.join(f'{byte:08b}' for byte in struct.pack('>h', -1<<15))
'1000000000000000'
>>> ''.join(f'{byte:08b}' for byte in struct.pack('>h', -1))
'1111111111111111'
>>> ''.join(f'{byte:08b}' for byte in struct.pack('>h', -2))
'1111111111111110'
>>> ''.join(f'{byte:08b}' for byte in struct.pack('>h', -16))
'1111111111110000'
Note this has some quirks, as you need to remember to force each byte's binary representation to be 8 digits long with '08'.
4. Directly from math
Lastly, you can check what Wikipedia says about two's complement representation and implement it directly from the math formula: two's complement > subtraction from 2**N.
>>> bin(2**16 -16)
'0b1111111111110000'
>>> bin(2**16 -3)
'0b1111111111111101'
This looks super simple, but it is hard to understand if you are not versed in the way two's complement representation works.
Since you haven't given any code examples, I can't be sure what's going on. Based on the numbers in your example, I don't think you're using bin(yourint), because your output doesn't contain 0b. Maybe you're already slicing that off in your examples.
If you are storing your binary data as strings, you could do something like:
def handle_negatives(binary_string):
    # assume a leading '-' marks a negative value
    if binary_string.startswith('-'):
        binary_string = '1' + binary_string[1:]
    return binary_string
Here are some caveats about printing complements (1s and 2s) in binary form in Python:
UNSIGNED RANGE: 0 to (2^k)-1 for a k-bit number
ex: 0 to (2^32)-1
ex: 0 to 7 for 3-bit unsigned numbers (count = 8)
SIGNED RANGE: -2^(k-1) to +2^(k-1)-1 for a k-bit number (one of the k bits splits the range into two equal halves)
ex: -2^31 to +(2^31)-1
ex: -8 to +7 for 4-bit signed numbers (count = 16)
bin(int) -> str converts an integer to a binary string
CAVEAT 1: Since in Python there is no limit to the length of an integer,
for ~x (1s complement / NOT / negate) we can't determine how many bits beyond the MSB need to be 1,
so Python just prints out the unsigned value of the target negative number in binary format with a
'-' sign at the beginning.
ex: for x = 0b01010 (10) we get ~x = -0b1011 (-11),
but in a 5-bit register we would expect -16+5 = -11 to appear as
(-0b10000 + 0b00101) = -0b10101, i.e. bits 10101 (-11 signed, or 21 unsigned)
To get the real binary value after negation (or 1s complement), one can simulate it:
NOTE: 2^0 is always 1, so (2**0 == 1) in Python
NOTE: (1 << k) is always 2^k (left shift of 1 by k is 2 raised to the power k)
ex: bin((1 << k) - 1 - x), which is ((2^k)-1) - x (1s complement)
ex: bin((1 << k) - x), which is (2^k) - x (2s complement, i.e. the 1s complement plus 1)
CAVEAT 2: The same goes for reverse parsing of a signed binary string to int:
ex: int("-0b0101", 2) gives -5, but it would actually be -11 if -0b meant that all bits
from the MSB down to the current width were 1, like 1111......0101, which is -16+5 = -11.
BUT due to Python's way of representing signed binary values, we need to adhere to the
current way of parsing: unsigned binary strings with a sign in front for negative numbers.
# NOTE: significant prefix zeros don't matter, in both +ve and -ve cases
# Byte type inputs
x = b'+1010' # valid +ve number byte string
x = b'1010' # valid +ve number byte string
x = b'-1010' # valid -ve number byte string
x = b'+01010' # valid +ve number byte string
x = b'01010' # valid +ve number byte string
x = b'-01010' # valid -ve number byte string
int(b'101') # interprets as base 10 for each digit
int(b'101', 2) # interprets as base 2 for each digit
int(b'101', 8) # interprets as base 8 for each digit
int(b'101', 10) # interprets as base 10 for each digit
int(b'101', 16) # interprets as base 16 for each digit
# String type inputs
x = '+1010' # valid +ve number string
x = '1010' # valid +ve number string
x = '-1010' # valid -ve number string
x = '+01010' # valid +ve number string
x = '01010' # valid +ve number string
x = '-01010' # valid -ve number string
int('101') # interprets as base 10 for each digit
int('101', 2) # interprets as base 2 for each digit
int('101', 8) # interprets as base 8 for each digit
int('101', 10) # interprets as base 10 for each digit
int('101', 16) # interprets as base 16 for each digit
# print(bin(int(x, 2)), int(x,2), ~int(x, 2), bin(~int(x,2)), "-"+bin((1<<k)-1 - int(x,2)))
k = 5 # no of bits
assert 2**0 == 1 # (2^0 is always 1)
_2k = (1 << k) # (2^k == left shift (2^0 or 1) by k times == multiply 2 by k times)
x = '01010' # valid +ve number string
x = int(x,2)
print("input:", x) # supposed to be 1s complement of binStr but due to python's limitation,
# we consider it -(unsigned binStr)
_1s = '-'+bin((_2k-1)-x)
print("1s complement(negate/not): ", _1s, ~x)
_2s = '-'+bin(_2k-x)
print("2s complement(1s +1): ", _2s, ~x+1)
output:
k = 5 (5 bit representation)
input: 10
1s complement(negate/not): -0b10101 -11
2s complement(1s +1): -0b10110 -10
k=32 (32 bit representation)
input: 10
1s complement(negate/not): -0b11111111111111111111111111110101 -11
2s complement(1s +1): -0b11111111111111111111111111110110 -10

Python equivalent of C code from Bit Twiddling Hacks?

I have a bit counting method that I am trying to make as fast as possible. I want to try the algorithm below from Bit Twiddling Hacks, but I don't know C. What is 'type T' and what is the python equivalent of (T)~(T)0/3?
A generalization of the best bit counting method to integers of bit-widths up to 128 (parameterized by type T) is this:
v = v - ((v >> 1) & (T)~(T)0/3); // temp
v = (v & (T)~(T)0/15*3) + ((v >> 2) & (T)~(T)0/15*3); // temp
v = (v + (v >> 4)) & (T)~(T)0/255*15; // temp
c = (T)(v * ((T)~(T)0/255)) >> (sizeof(v) - 1) * CHAR_BIT; // count
T is an integer type, which I'm assuming is unsigned. Since this is C, it'll be fixed-width, probably (but not necessarily) one of 8, 16, 32, 64 or 128. The fragment (T)~(T)0 that appears repeatedly in that code sample just gives the value 2**N - 1, where N is the width of the type T. I suspect that the code may require that N be a multiple of 8 for correct operation.
Here's a direct translation of the given code into Python, parameterized in terms of N, the width of T in bits.
def count_set_bits(v, N=128):
    mask = (1 << N) - 1
    v = v - ((v >> 1) & mask//3)
    v = (v & mask//15*3) + ((v >> 2) & mask//15*3)
    v = (v + (v >> 4)) & mask//255*15
    return (mask & v * (mask//255)) >> (N//8 - 1) * 8
Caveats:
(1) the above will only work for numbers up to 2**128. You might be able to generalize it for larger numbers, though.
(2) There are obvious inefficiencies: for example, 'mask//15' is computed twice. This doesn't matter for C, of course, because the compiler will almost certainly do the division at compile time rather than run time, but Python's peephole optimizer may not be so clever.
(3) The fastest C method may well not translate to the fastest Python method. For Python speed, you should probably be looking for an algorithm that minimizes the number of Python bitwise operations. As Alexander Gessler said: profile!
What you copied is a template for generating code. It's not a good idea to transliterate that template into another language and expect it to run fast. Let's expand the template.
(T)~(T)0 means "as many 1-bits as fit in type T". The algorithm needs 4 masks which we will compute for the various T-sizes we might be interested in.
>>> for N in (8, 16, 32, 64, 128):
... all_ones = (1 << N) - 1
... constants = ' '.join([hex(x) for x in [
... all_ones // 3,
... all_ones // 15 * 3,
... all_ones // 255 * 15,
... all_ones // 255,
... ]])
... print N, constants
...
8 0x55 0x33 0xf 0x1
16 0x5555 0x3333 0xf0f 0x101
32 0x55555555L 0x33333333L 0xf0f0f0fL 0x1010101L
64 0x5555555555555555L 0x3333333333333333L 0xf0f0f0f0f0f0f0fL 0x101010101010101L
128 0x55555555555555555555555555555555L 0x33333333333333333333333333333333L 0xf0f0f0f0f0f0f0f0f0f0f0f0f0f0f0fL 0x1010101010101010101010101010101L
>>>
You'll notice that the masks generated for the 32-bit case match those in the hardcoded 32-bit C code. Implementation detail: lose the L suffix from the 32-bit masks (Python 2.x) and lose all L suffixes for Python 3.x.
As you can see the whole template and (T)~(T)0 caper is merely obfuscatory sophistry. Put quite simply, for a k-byte type, you need 4 masks:
k bytes each 0x55
k bytes each 0x33
k bytes each 0x0f
k bytes each 0x01
and the final shift is merely N-8 (i.e. 8*(k-1)) bits. Aside: I doubt if the template code would actually work on a machine whose CHAR_BIT was not 8, but there aren't very many of those around these days.
Update: There is another point that affects the correctness and the speed when transliterating such algorithms from C to Python. The C algorithms often assume unsigned integers. In C, operations on unsigned integers work silently modulo 2**N. In other words, only the least significant N bits are retained. No overflow exceptions. Many bit twiddling algorithms rely on this. However (a) Python's int and long are signed (b) old Python 2.X will raise an exception, recent Python 2.Xs will silently promote int to long and Python 3.x int == Python 2.x long.
The correctness problem usually requires register &= all_ones at least once in the Python code. Careful analysis is often required to determine the minimal correct masking.
Working in long instead of int doesn't do much for efficiency. You'll notice that the algorithm for 32 bits will return a long answer even for an input of 0, because the 32-bit all_ones is a long.
