Integers in Python are stored in two's complement, correct?
Although:
>>> x = 5
>>> bin(x)
0b101
And:
>>> x = -5
>>> bin(x)
-0b101
That's pretty lame. How do I get Python to give me the numbers in REAL binary bits, and without the 0b in front of it? So:
>>> x = 5
>>> bin(x)
0101
>>> y = -5
>>> bin(y)
1011
It works best if you provide a mask. That way you specify how far to sign extend.
>>> bin(-27 & 0b1111111111111111)
'0b1111111111100101'
Or perhaps more generally:
def bindigits(n, bits):
    s = bin(n & int("1"*bits, 2))[2:]
    return ("{0:0>%s}" % (bits)).format(s)
>>> print bindigits(-31337, 24)
111111111000010110010111
In basic theory, the actual width of the number is a function of the size of the storage. If it's a 32-bit number, then a negative number has a 1 in the MSB of a set of 32. If it's a 64-bit value, then there are 64 bits to display.
But in Python, integer precision is limited only to the constraints of your hardware. On my computer, this actually works, but it consumes 9GB of RAM just to store the value of x. Anything higher and I get a MemoryError. If I had more RAM, I could store larger numbers.
>>> x = 1 << (1 << 36)
So with that in mind, what binary number represents -1? Python is well-capable of interpreting literally millions (and even billions) of bits of precision, as the previous example shows. In 2's complement, the sign bit extends all the way to the left, but in Python there is no pre-defined number of bits; there are as many as you need.
But then you run into ambiguity: does binary 1 represent 1, or -1? Well, it could be either. Does 111 represent 7 or -1? Again, it could be either. So does 111111111 represent 511, or -1... well, both, depending on your precision.
Python needs a way to represent these numbers in binary so that there's no ambiguity of their meaning. The 0b prefix just says "this number is in binary". Just like 0x means "this number is in hex". So if I say 0b1111, how do I know if the user wants -1 or 15? There are two options:
Option A: The sign bit
You could declare that all numbers are signed, and the left-most bit is the sign bit. That means 0b1 is -1, while 0b01 is 1. That also means that 0b111 is also -1, while 0b0111 is 7. In the end, this is probably more confusing than helpful, particularly because most binary arithmetic is going to be unsigned anyway, and people are more likely to make mistakes by accidentally marking a number as negative because they didn't include an explicit sign bit.
Option B: The sign indication
With this option, binary numbers are represented unsigned, and negative numbers have a "-" prefix, just like they do in decimal. This is (a) more consistent with decimal, (b) more compatible with the way binary values are most likely going to be used. You lose the ability to specify a negative number using its two's complement representation, but remember that two's complement is a storage implementation detail, not a proper indication of the underlying value itself. It shouldn't have to be something that the user has to understand.
In the end, Option B makes the most sense. There's less confusion and the user isn't required to understand the storage details.
To properly interpret a binary sequence as two's complement, there needs to be a length associated with the sequence. When you are working with low-level types that correspond directly to CPU registers, there is an implicit length. Since Python integers can have an arbitrary length, there really isn't an internal two's complement format. Since there isn't a length associated with a number, there is no way to distinguish between positive and negative numbers. To remove the ambiguity, bin() includes a minus sign when formatting a negative number.
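Going the other way therefore also requires an explicit width. A minimal sketch (the function name and width parameter here are just illustrative) that interprets a fixed-width bit string as a signed two's complement value:

def from_twos_complement(bit_string, bits):
    # Parse the raw bits, then subtract 2**bits if the sign bit is set.
    value = int(bit_string, 2)
    if value & (1 << (bits - 1)):
        value -= 1 << bits
    return value

print(from_twos_complement('1111111111100101', 16))  # -27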
Python's arbitrary length integer type actually uses a sign-magnitude internal format. The logical operations (bit shifting, and, or, etc.) are designed to mimic two's complement format. This is typical of multiple precision libraries.
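You can see that mimicry at the prompt: for any int, ~x equals -x - 1, and masking a negative number behaves as if the sign bit extended as far as the mask reaches:

>>> ~5
-6
>>> -1 & 0xFF
255
>>> -1 & 0xFFFFFFFF
4294967295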
Here is a slightly more readable version of tylerl's answer. For example, let's say you want -2 in its 8-bit two's complement representation:
bin(-2 & (2**8-1))
2**8 is the ninth bit (256); subtract 1 from it and you have all the preceding bits set to one (255).
For 8- and 16-bit masks, you can replace (2**8-1) with 0xff or 0xffff. The hexadecimal version becomes less readable after that point.
If this is unclear, here it is as a regular function:
def twosComplement(value, bitLength):
    return bin(value & (2**bitLength - 1))
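As a quick check of the function above, the 8-bit two's complement of -2 comes out as expected:

>>> twosComplement(-2, 8)
'0b11111110'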
The two's complement of a negative number is the modulus value minus the positive value. So I think the brief way to get the complement of -27 is:
bin((1<<32) - 27)  # 32-bit length: '0b11111111111111111111111111100101'
bin((1<<16) - 27)  # 16-bit length
bin((1<<8) - 27)   # 8-bit length: '0b11100101'
Not sure how to get what you want using the standard lib. There are a handful of scripts and packages out there that will do the conversion for you.
I just wanted to note the "why", and why it's not lame.
bin() doesn't return binary bits. It converts the number to a binary string. The leading '0b' tells the interpreter that you're dealing with a binary number, as per the Python language definition. This way you can directly work with binary numbers, like this:
>>> 0b01
1
>>> 0b10
2
>>> 0b11
3
>>> 0b01 + 0b10
3
That's not lame. That's great.
http://docs.python.org/library/functions.html#bin
bin(x)
Convert an integer number to a binary string.
http://docs.python.org/reference/lexical_analysis.html#integers
Integer and long integer literals are described by the following lexical definitions:
bininteger ::= "0" ("b" | "B") bindigit+
bindigit ::= "0" | "1"
Use slices to get rid of unwanted '0b'.
bin(5)[2:]
'101'
or if you want digits,
tuple(bin(5)[2:])
('1', '0', '1')
or even
map(int, tuple(bin(5)[2:]))
[1, 0, 1]
tobin = lambda x, count=8: "".join(map(lambda y:str((x>>y)&1), range(count-1, -1, -1)))
e.g.
tobin(5) # => '00000101'
tobin(5, 4) # => '0101'
tobin(-5, 4) # => '1011'
Or as clear functions:
# Returns bit y of x (base 10). i.e.
# bit 2 of 5 is 1
# bit 1 of 5 is 0
# bit 0 of 5 is 1
def getBit(y, x):
    return str((x>>y)&1)

# Returns the first `count` bits of base 10 integer `x`
def tobin(x, count=8):
    shift = range(count-1, -1, -1)
    bits = map(lambda y: getBit(y, x), shift)
    return "".join(bits)
(Adapted from W.J. Van de Laan's comment)
I'm not entirely certain what you ultimately want to do, but you might want to look at the bitarray package.
def tobin(data, width):
    data_str = bin(data & (2**width-1))[2:].zfill(width)
    return data_str
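A quick check of this version (the 8-bit width here is just an example):

>>> tobin(-5, 8)
'11111011'
>>> tobin(5, 8)
'00000101'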
You can use the Binary Fractions package. This package implements TwosComplement for binary integers and binary fractions. You can convert binary-fraction strings into their two's complement and vice versa.
Example:
>>> from binary_fractions import TwosComplement
>>> TwosComplement.to_float("11111111111") # TwosComplement --> float
-1.0
>>> TwosComplement.to_float("11111111100") # TwosComplement --> float
-4.0
>>> TwosComplement(-1.5) # float --> TwosComplement
'10.1'
>>> TwosComplement(1.5) # float --> TwosComplement
'01.1'
>>> TwosComplement(5) # int --> TwosComplement
'0101'
To use this with Binary objects instead of floats, you can use the Binary class inside the same package.
PS: Shameless plug, I'm the author of this package.
For positive numbers, just use:
bin(x)[2:].zfill(4)
For negative numbers, it's a little different:
bin((eval("0b"+str(int(bin(x)[3:].zfill(4).replace("0","2").replace("1","0").replace("2","1"))))+eval("0b1")))[2:].zfill(4)
As a whole script, this is how it should look:
def binary(number):
    if number < 0:
        return bin((eval("0b"+str(int(bin(number)[3:].zfill(4).replace("0","2").replace("1","0").replace("2","1"))))+eval("0b1")))[2:].zfill(4)
    return bin(number)[2:].zfill(4)

x = input()
print binary(x)
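For comparison, here is a sketch of the same 4-bit result using the masking approach from the earlier answers (the helper name binary4 is just illustrative):

def binary4(number):
    # Keep the low 4 bits of the two's complement value, then format;
    # binary4(-5) -> '1011', binary4(5) -> '0101'
    return bin(number & 0b1111)[2:].zfill(4)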
A modification on tylerl's very helpful answer that provides sign extension for positive numbers as well as negative (no error checking).
def to2sCompStr(num, bitWidth):
    num &= (2 << bitWidth-1) - 1  # mask
    formatStr = '{:0' + str(bitWidth) + 'b}'
    ret = formatStr.format(int(num))
    return ret
Example:
In [11]: to2sCompStr(-24, 18)
Out[11]: '111111111111101000'
In [12]: to2sCompStr(24, 18)
Out[12]: '000000000000011000'
No need, it already is. It is just Python choosing to represent it differently. If you start printing each nibble separately, it will show its true colours.
checkNIB = '{0:04b}'.format
checkBYT = lambda x: '-'.join( map( checkNIB, [ (x>>4)&0xf, x&0xf] ) )
checkBTS = lambda x: '-'.join( [ checkBYT( ( x>>(shift*8) )&0xff ) for shift in reversed( range(4) ) if ( x>>(shift*8) )&0xff ] )
print( checkBTS(-0x0002) )
Output is simple:
1111-1111-1111-1111-1111-1111-1111-1110
It reverts to the original representation when you want to display the two's complement of a single nibble, but it is still possible if you split the value into halves of a nibble, and so on. Just keep in mind that this works best with negative hex and binary integer interpretations; with simple numbers not so much. Also, with hex you can set up the byte size.
We can leverage the properties of bitwise XOR. Use bitwise XOR with an all-ones mask to flip the bits, and then add 1. Then you can use Python's built-in bin() function to get the binary representation of the 2's complement. Here's an example function:
def twos_complement(input_number):
    print(bin(input_number))  # prints binary value of input
    mask = 2**(1 + len(bin(input_number)[2:])) - 1  # mask for the bitwise XOR operation
    twos_comp = (input_number ^ mask) + 1  # 2's complement, i.e. the negative of input_number
    print(bin(twos_comp))  # print 2's complement representation of -input_number
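For instance, calling it on 5 prints the input bits followed by the 4-bit two's complement pattern for -5:

>>> twos_complement(5)
0b101
0b1011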
I hope this solves your problem.
num = input("Enter number : ")
bin_num = bin(num)
binary = '0' + bin_num[2:]
print binary
I need to compute the hamming distance between two integers by counting the number of differing bits between their binary representations.
This is the function that I am using for that purpose:
def hamming(a, b):
    # compute and return the Hamming distance between the integers
    return bin(int(a) ^ int(b)).count("1")
I started to conduct some simple tests on this function to make sure it works properly, but almost immediately I saw that it does not, and I am trying to understand why.
I tested the function with these two numbers:
a = -1704441252336819740
b = -1704441252336819741
The binary representations of these numbers given by python are:
bin(a): -0b10111 10100111 01100100 01001001 11011010 00001110 11011110 00011100
bin(b): -0b10111 10100111 01100100 01001001 11011010 00001110 11011110 00011101
As you can see, their binary representations are the same aside from the last digit, so the Hamming distance should be 1.
However, the returned hamming distance from the function is 3 and I can't seem to understand why.
The issue arises when I compute the XOR between these two numbers, as a ^ b returns 7 (and thus counts 3 '1' bits) when I would expect it to return 1 (and count 1 '1' bit).
I believe this has to do with the fact that the XOR value seems to be stored as an unsigned integer with the minimal number of possible bits, whereas I need it to be treated as a fixed-width signed value.
How am I misunderstanding the XOR operator and how can I change my function to work the way I want it to?
Actually, it is the bin function that is misleading:
Instead of displaying the actual binary value stored, it displays |x| (the absolute value) and prints a minus sign in front of it for negative numbers.
But, that is not how the values are actually stored.
XOR operates on the actual binary values, which are stored in two's complement, and that is why you are getting a bigger bit difference than you expected.
As a simple example, let's take two 4-bit numbers:
-10 = 0b0110
-11 = 0b0101
^ = 0b0011
As you can see, in this representation there are two bits of difference between these two numbers, while if they were positive, there would be only one bit difference.
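You can reproduce that 4-bit example at the prompt by masking explicitly; the masked values differ in two bits, which is exactly what the function counts:

>>> bin(-10 & 0xF), bin(-11 & 0xF)
('0b110', '0b101')
>>> bin((-10 & 0xF) ^ (-11 & 0xF))
'0b11'
>>> bin(10 ^ 11)
'0b1'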
I had some issues with a piece of code and ended up doing the following command line snippet. This was just an experiment and I didn't store such large values in any variable in the real code (modulo 10**9 + 7).
>>> a=1
>>> for i in range(1,101):
... a=a*i
...
>>> b=1
>>> for i in range(1,51):
... b=b*i
...
>>> c=pow(2,50)
>>> a//(b*c)
2725392139750729502980713245400918633290796330545803413734328823443106201171875
>>> a/(b*c)
2.7253921397507295e+78
>>> (a//(b*c))%(10**9 +7)
196932377
>>> (a/(b*c))%(10**9 +7)
45708938.0
>>>
I don't understand why integer division gives the correct output while floating point division fails.
Basically I calculated: ( (100!) / ((50!)*(2^50)) ) % (10**9 +7)
Because of precision.
Integers and floats are coded differently. In particular, in python 3, integers can be arbitrarily large - the one you gave, for example, is more than 250 bits large when you convert it to binary. They're stored in a way that can accommodate however large they are.
However, floating-point numbers are constrained to a certain size - usually 64 bits. These 64 bits are divided into a sign (1 bit), mantissa, and exponent - the number of bits in the mantissa limits how precise the number can be. Python's documentation contains a section on this limitation.
So, when you do
(a//(b*c))%(10**9 +7)
you're performing that calculation with integers, which, again, are arbitrarily large. However, when you do this:
(a/(b*c))%(10**9 +7)
you're performing that calculation with a number that only has about 16 significant digits - it's already imprecise, and doing more calculations with it only further corrupts the answer.
What you can do to avoid this, if you need to use very large floating-point numbers, is use python's decimal module (which is part of the standard library), which will not have these problems.
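A minimal sketch of that, redoing the question's computation with decimal (the precision of 100 digits is an assumption, chosen to be comfortably larger than the roughly 79-digit quotient):

from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 100                   # enough digits to hold the quotient exactly
a = Decimal(factorial(100))
bc = Decimal(factorial(50)) * Decimal(2**50)
print(int(a / bc) % (10**9 + 7))          # 196932377, matching the integer-division result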
The reason is that integers are precise, but floats are limited by the floating point precision: Python2.7 default float precision
I have 32 bit numbers A=0x0000000A and B=0X00000005.
I get A xor B by A^B and it gives 0b1111.
I rotated this and got D=0b111100000, but I want this to be a 32-bit number. It's not just for printing; I need the MSB bits, even though they are 0 in this case, for further manipulation.
Most high-level languages don't have ROR/ROL operators. There are two ways to deal with this: one is to add an external library like ctypes or https://github.com/scott-griffiths/bitstring, which have native rotate or bit-slice support for integers (which is pretty easy to add).
One thing to keep in mind is that Python integers have 'infinite' precision: the MSBs are conceptually always 0 for positive numbers and 1 for negative numbers, and Python stores only as many digits as it needs. This is one reason you see notation like ~(0x3) shown as -0x4, which is its two's complement equivalent, rather than the equivalent positive value; -0x4 behaves correctly at any width, so even if you AND it against a 5000-bit number, it will just mask off the bottom two bits.
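To see that at the prompt (ANDing ~0x3 with an 8-bit mask just clears the bottom two bits):

>>> hex(~0x3)
'-0x4'
>>> bin(~0x3 & 0xff)
'0b11111100'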
Or, you can just do it yourself, the way we all used to, and how the hardware actually does it:
def rotate_left(number, rotatebits, numbits=32):
    mask = (1 << numbits) - 1                      # fixed word width
    number &= mask
    newnumber = (number << rotatebits) & mask      # bits shifted left, truncated to the width
    newnumber |= number >> (numbits - rotatebits)  # bits that fell off the top wrap to the bottom
    return newnumber
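With the question's values, rotating the XOR result left by 5 within the default 32-bit width and formatting at full width (using the function above) looks like this:

>>> '{:032b}'.format(rotate_left(0b1111, 5))
'00000000000000000000000111100000'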
To get the binary of an integer you could use bin().
Just a short example:
>>> i = 333333
>>> print (i)
333333
>>> print (bin(i))
0b1010001011000010101
>>>
bin(i)[2:].zfill(32)
I guess does what you want.
I think your bigger problem here is that you are misunderstanding the difference between a number and its representation.
12 ^ 18 #would xor the values
56 & 11 # and the values
If you need actual 32-bit signed integers, you can use numpy:
import numpy as np
a = np.array(range(100), dtype=np.int32)
After a bit of googling, nothing came up. I am manipulating sequence numbers for network packets and need the numbers to be of a fixed length. For example:
>>> 0000 + 1
1
Instead, I'd like the integer that is returned to be 0001. Are there any built-in commands for setting an integer of fixed length?
Edit: I do not need to print these integers, I need to actually manipulate them. I will need them to iterate, but they must be fixed length so that they can be easily found in a networking protocol header.
What you're asking doesn't make any sense. The integer 0011 and the integer 11 are exactly the same number.*
If you want to format them as strings to print them out or to search a text file, you can do that with, e.g., format(n, '04'). It doesn't matter whether you're formatting 11 or 0011, they're both the same number, and that number will format to the string '0011'.
If you want to convert them to big-endian 32-bit C-style unsigned integers, again, they're both the same number, and struct.pack('>I', n) will pack that number to the byte string b'\x00\x00\x00\x0b'.
If you want to add them modulo 10000, again, they're both the same number, and (n + 9990) % 10000 will give you 1.
No matter what operation you dream up, there will be no difference.
* Actually, in Python 2.x, number literals starting with 0 are treated as octal, not decimal, so 0011 is actually 9, not 11. And in 3.x numbers starting with 0 are a SyntaxError, to avoid the confusion caused by accidentally writing octal numbers. But forget all that. We're not talking about the Python number literals, we're talking about something even simpler here: the numbers themselves.
Numbers don't have a "length", they're just numbers. The representation of a number as text, in a string, has a length. To convert numbers to strings in Python, use the format() function:
x = 1
s = "{:04d}".format(x)
print(s)