Bitplane decomposition of 16-bit two's complement signed integer signal data in Python

I am trying to do bit-plane decomposition of 16-bit two's complement signed integer signal data (an electrocardiogram) in Python, so that I get 16 bit-plane signals. I know how to decompose an 8-bit unsigned integer image, and I reimplemented that code for this problem. I expected the bit-plane data to contain negative values, because the original data is 16-bit signed, but the result reconstructs as a 16-bit unsigned integer signal, not a signed one.
Here's my code:
import numpy as np

def intToTcbin16(value):
    return format(value % (1 << 16), '016b')

def Tcbin16ToInt(bin):
    while len(bin) < 16:
        bin = '0' + bin
    if bin[0] == '0':
        return int(bin, 2)
    else:
        return -1 * (int(''.join('1' if x == '0' else '0' for x in bin), 2) + 1)

def bitplanedecomposesignal(ecgdat):
    lst = []
    for j in range(len(ecgdat)):
        lst.append(intToTcbin16(ecgdat[j]))
    sixteen  = np.array([Tcbin16ToInt(i[0])  for i in lst], dtype=np.int16) * 32768
    fiveteen = np.array([Tcbin16ToInt(i[1])  for i in lst], dtype=np.int16) * 16384
    fourteen = np.array([Tcbin16ToInt(i[2])  for i in lst], dtype=np.int16) * 8192
    thirteen = np.array([Tcbin16ToInt(i[3])  for i in lst], dtype=np.int16) * 4096
    twelve   = np.array([Tcbin16ToInt(i[4])  for i in lst], dtype=np.int16) * 2048
    eleven   = np.array([Tcbin16ToInt(i[5])  for i in lst], dtype=np.int16) * 1024
    ten      = np.array([Tcbin16ToInt(i[6])  for i in lst], dtype=np.int16) * 512
    nine     = np.array([Tcbin16ToInt(i[7])  for i in lst], dtype=np.int16) * 256
    eight    = np.array([Tcbin16ToInt(i[8])  for i in lst], dtype=np.int16) * 128
    seven    = np.array([Tcbin16ToInt(i[9])  for i in lst], dtype=np.int16) * 64
    six      = np.array([Tcbin16ToInt(i[10]) for i in lst], dtype=np.int16) * 32
    five     = np.array([Tcbin16ToInt(i[11]) for i in lst], dtype=np.int16) * 16
    four     = np.array([Tcbin16ToInt(i[12]) for i in lst], dtype=np.int16) * 8
    three    = np.array([Tcbin16ToInt(i[13]) for i in lst], dtype=np.int16) * 4
    two      = np.array([Tcbin16ToInt(i[14]) for i in lst], dtype=np.int16) * 2
    one      = np.array([Tcbin16ToInt(i[15]) for i in lst], dtype=np.int16) * 1
    return sixteen, fiveteen, fourteen, thirteen, twelve, eleven, ten, nine, eight, seven, six, five, four, three, two, one
Here is the signal plot before decomposition (plot omitted), and here, for example, is the 16th bit-plane signal plot after decomposition (plot omitted).
What did I do wrong? How do I do it right? And how do I recompose the signal afterwards?

In the sixteen line, change the 32768 to -32768. Everything else looks right.
Like you said, the planes of the existing bitplanedecomposesignal() code reconstruct the value as if it were unsigned 16-bit data rather than signed. However, if the most significant bit is on, then the represented value is negative, and we should subtract 2^16 = 65536 from the unsigned value. So the most significant bit should contribute 32768 - 65536 = -32768 rather than +32768.
Example:
value = -32700 decimal
      =  1000000001000100 binary (int16)
         ^        ^   ^
      -2^15      2^6 2^2
-2^15 + 2^6 + 2^2 = -32700 decimal = value
Side comment: NumPy has good, efficient bitwise functions that you might find useful. I'd consider using np.bitwise_and to extract the bit planes.
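Putting the sign fix together with the np.bitwise_and suggestion, here is a minimal vectorized sketch (my own code, not the original poster's), including the recomposition the question asks about: each plane holds bit k times its two's complement weight (-32768 for bit 15, +2^k otherwise), so summing all 16 planes restores the signal exactly.
import numpy as np

def bitplane_decompose(ecg):
    bits16 = np.asarray(ecg, dtype=np.int16).view(np.uint16)  # reinterpret the raw bits
    planes = []
    for k in range(15, -1, -1):  # bit 15 (MSB) down to bit 0
        bit = (np.bitwise_and(bits16, 1 << k) >> k).astype(np.int32)
        weight = -(1 << 15) if k == 15 else (1 << k)  # two's complement MSB weight
        planes.append(bit * weight)
    return planes  # planes[0] corresponds to "sixteen"

def bitplane_recompose(planes):
    return np.sum(planes, axis=0).astype(np.int16)  # summing the planes restores the signal

x = np.array([-32700, -1, 0, 123], dtype=np.int16)
assert np.array_equal(bitplane_recompose(bitplane_decompose(x)), x)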

Reading the longitude [28:0] from a 4 byte hexadecimal number

I am receiving a longitude and accuracy as a 4 byte hexadecimal string: 99054840
I'm trying to extract a longitude from this value.
The specs tell me the following:
Bits [28:0]: signed value λ, little-endian format, longitude in ° = λ ÷ 1,000,000
Bits [31:29]: unsigned value α, range 0-7, a measure for accuracy
My device is physically located at a longitude of 4.7199, so I know what the result of the conversion should be.
To read the value of the longitude I currently do (with incorrect result):
def get_longitude(reading):
    # split in different bytes
    n = 2
    all_bytes = [reading[i:i+n] for i in range(0, len(reading), n)]
    # convert to binary
    long_bytes_binary = list(map(hex_to_binary, all_bytes))
    # drop the accuracy bits
    long_bytes_binary[3] = long_bytes_binary[3][0:5]
    # little endian & concatenate bytes
    longitude_binary = ''.join(list(reversed(long_bytes_binary)))
    # get longitude
    lon = binary_to_decimal(int(longitude_binary)) / 1_000_000
This comes to 138.93, totally different from the expected 4.7199.
Here are the helper methods:
def hex_to_binary(payload):
    scale = 16
    num_of_bits = 8
    binary_payload = bin(int(payload, scale))[2:].zfill(num_of_bits)
    return binary_payload

def binary_to_decimal(binary):
    decimal, i = 0, 0
    while binary != 0:
        dec = binary % 10
        decimal = decimal + dec * pow(2, i)
        binary = binary // 10
        i += 1
    return decimal
What am I doing wrong? How can I correctly read the value?
Or is my device broken :)
I'm cheating a little bit here by using struct to do the endian swap, but you get the idea.
import struct
val = 0x99054840
val = struct.unpack('<I',struct.pack('>I',val))[0]
print(hex(val))
accuracy = (val >> 29) & 7
longitude = (val & 0x1fffffff) / 1000000   # mask the low 29 bits
print(accuracy,longitude)
Output:
0x40480599
2 4.720025
The OP code dropped the last 3 bits instead of the first 3 bits for accuracy. This change fixes it:
# drop the accuracy bits
long_bytes_binary[3] = long_bytes_binary[3][3:]
But the calculation can be much simpler:
def hex_to_longitude(x):
    b = bytes.fromhex(x)              # convert hex string to bytes
    i = int.from_bytes(b, 'little')   # treat bytes as a little-endian integer
    return (i & 0x1FFFFFFF) / 1e6     # mask the low 29 bits, divide by one million
x = '99054840'
print(hex_to_longitude(x))
4.720025
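One more note (mine, not part of the answers above): the spec calls λ a signed value, and neither snippet sign-extends the 29-bit field. That happens to be fine for this eastern longitude, but a sketch for the general case would be:
def hex_to_longitude_signed(x):
    i = int.from_bytes(bytes.fromhex(x), 'little')
    lam = i & 0x1FFFFFFF          # low 29 bits
    if lam & (1 << 28):           # top bit of the 29-bit field set -> negative
        lam -= 1 << 29            # two's complement sign extension
    return lam / 1e6

print(hex_to_longitude_signed('99054840'))  # 4.720025, unchanged for this sample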

Convert Bytes to Floating Point Numbers WITHOUT using STRUCT

I'm trying to write my "personal" (without using any modules or functions: struct, float..., int..., ...) Python version of an STL binary file reader. According to WIKIPEDIA, a binary STL file contains:
an 80-character (byte) header, which is generally ignored.
a 4-byte unsigned integer indicating the number of triangular facets in the file.
Each triangle is described by twelve 32-bit floating-point numbers: three for the normal and then three for the X/Y/Z coordinate of each vertex – just as with the ASCII version of STL. After these follows a 2-byte ("short") unsigned integer that is the "attribute byte count" – in the standard format, this should be zero because most software does not understand anything else. (((3+3+3)+3)*4+2 = 50 bytes for each triangle)
--Floating-point numbers are represented as IEEE floating-point numbers and are assumed to be little-endian--
With the help of two saviors I discovered how unsigned integers are stored, and I can figure out the number of triangular facets in the file with three methods (computed by hand):
def int_from_bytes(inbytes):  # ShadowRanger's
    res = 0
    for i, b in enumerate(inbytes):
        res |= b << (i * 8)
    return res
or
def int_from_bytes(inbytes):  # ShadowRanger's
    res = 0
    for b in inbytes:
        res <<= 8  # Adjust bytes seen so far to make room for new byte
        res |= b   # Mask in new byte
    return res
(Note: this second variant treats the first byte as the most significant, i.e. big-endian, so feed it the bytes in reversed order for STL's little-endian fields.)
or
def unsigned_int(s):  # Robᵩ's
    result = 0
    for ch in s[::-1]:
        result *= 256
        result += ch
    return result
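For instance (my example, not from the original post), a hypothetical facet-count field of 100 decodes with the little-endian variants as:
>>> unsigned_int(b'\x64\x00\x00\x00')
100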
Now I have to convert the rest of the file (the 3rd item in the list): floating-point numbers. For the first triangle, the 50 bytes are:
b'\x9a\xa3\x14\xbe' b'\x05$\x85\xbe' b'Nbt?'             (normal vector)
b'\xcd\xa6\x04\xc4' b'\xfb;\xd4\xc1' b'\x84w\x81A'       (vertex 1)
b'\xcd\xa6\x04\xc4' b'\xa5\x15\xd3\xc1' b'\xb2\xc7\x81A' (vertex 2)
b'\xef\xa6\x04\xc4' b'\x81\x14\xd3\xc1' b'Y\xc7\x81A'    (vertex 3)
b'\x00\x00'                                              (attribute byte count)
How can I convert these by hand? What is the principle of the representation, and what rules should I know to do the conversion by hand (some bytes don't start with \x)?
Thank you for your time.
Like this:
def int_from_bytes(inbytes):
    res = 0
    shft = 0
    for b in inbytes:
        res |= ord(b) << shft   # little-endian accumulation (Python 2: bytes iterate as chars)
        shft += 8
    return res

def float_from_bytes(inbytes):
    bits = int_from_bytes(inbytes)
    mantissa = (bits & 8388607) / 8388608.0   # low 23 bits, as a fraction
    exponent = (bits >> 23) & 255             # next 8 bits, biased by 127
    sign = 1.0 if bits >> 31 == 0 else -1.0   # top bit
    if exponent != 0:
        mantissa += 1.0       # normal numbers have an implicit leading 1
    elif mantissa == 0.0:
        return sign * 0.0     # signed zero
    else:
        exponent = 1          # denormals: no implicit 1, effective exponent 1-127
    return sign * pow(2.0, exponent - 127) * mantissa
print float_from_bytes('\x9a\xa3\x14\xbe')
print float_from_bytes('\x00\x00\x00\x40')
print float_from_bytes('\x00\x00\xC0\xbf')
output:
-0.145155340433
2.0
-1.5
The format is IEEE-754 floating point. Try this out to see what each bit means: https://www.h-schmidt.net/FloatConverter/IEEE754.html
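As for the by-hand rules: each float is 4 bytes read little-endian into a 32-bit pattern of sign (1 bit), biased exponent (8 bits) and fraction (23 bits); bytes whose value falls in printable ASCII are simply displayed as characters, so b'$' is \x24 and b'A' is \x41. A worked sketch (mine) for the first four bytes, mirroring the arithmetic float_from_bytes performs:
bits = 0xbe14a39a                                 # b'\x9a\xa3\x14\xbe' read little-endian
sign = -1.0 if bits >> 31 else 1.0                # top bit is 1, so negative
exponent = (bits >> 23) & 0xFF                    # 0b01111100 = 124
mantissa = 1.0 + (bits & 0x7FFFFF) / 8388608.0    # implicit leading 1
print sign * mantissa * 2.0 ** (exponent - 127)   # -0.145155340433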

Randrange within static method

I wrote a method (in Python), get_16bit_error, to generate a random 16-bit error value. I'm trying to understand why this code always outputs "similar" numbers. For instance, when I ran it twice I obtained 0x10000, 0x100000 or 0x800000, 0x8000000.
from random import randrange

class Util(object):
    @staticmethod
    def get_16bit_error():
        i = randrange(0, 16)
        e = bin(2 ** i)[2:]
        len_e = len(e)
        e = "0"*(16 - len_e) + e
        return int(e + "0"*(16), 2)

for i in range(2):
    print hex(Util.get_16bit_error())
Bit is a binary digit, so 16 bits is 16 binary digits.
import random

class Util(object):
    @staticmethod
    def get_16bit_error():
        string = ''
        for i in range(16):
            string += random.choice(['1', '0'])
        return '0b' + string

the_binary = Util.get_16bit_error()
in_decimal = int(the_binary, 2)
print the_binary  # or print in_decimal, or hex(in_decimal)
You're generating numbers of the form 2**(16 + randrange(0,16)). I'm guessing that's not what you want. Let's break it down:
i = randrange(0, 16)
i is a random number between 0 and 16. Good so far. Let's proceed assuming i = 3.
e = bin(2 ** i)[2:]
So now you've exponentiated 2 by i (in our example, 2**i = 8). Then you convert it to binary representation and take the back end of the string, so e = '1000'.
len_e = len(e)
The length of e, which is 4 in our example.
e = "0"*(16 - len_e) + e
This essentially pads e with zeros so that it has length 16. In our example, e = '0000000000001000'.
return int(e + "0"*(16), 2)
First you add sixteen zeros to the end of the binary representation. This has the effect of taking what was 2**i and making it 2**(16 + i). You then convert it to an integer. That's why all your hex representations look the same.
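To make that concrete, here is every value the original function can ever return, one set bit somewhere in the upper half of a 32-bit word (my quick check, not from the original post):
>>> [hex(2 ** (16 + i)) for i in range(16)]
['0x10000', '0x20000', '0x40000', '0x80000', '0x100000', '0x200000',
 '0x400000', '0x800000', '0x1000000', '0x2000000', '0x4000000',
 '0x8000000', '0x10000000', '0x20000000', '0x40000000', '0x80000000']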
If you want to generate a random 16 bit number, try this:
import random

def rand_16_bit_int():
    return random.randrange(2**16)
Lastly, I'm not sure what the purpose of the Util class and the staticmethod is here... perhaps you're coming from Java, where you have to put all your methods in classes. Not so in Python: it's sufficient to define rand_16_bit_int as a free function in your module.

Binary representation of float in Python (bits not hex)

How to get the string as binary IEEE 754 representation of a 32 bit float?
Example
1.00 -> '00111111100000000000000000000000'
You can do that with the struct package:
import struct
def binary(num):
    return ''.join('{:0>8b}'.format(c) for c in struct.pack('!f', num))
That packs the number as a network byte-ordered (big-endian) float, converts each of the resulting bytes into an 8-bit binary representation, and concatenates them:
>>> binary(1)
'00111111100000000000000000000000'
Edit:
There was a request to expand the explanation. I'll expand this using intermediate variables to comment each step.
def binary(num):
    # Struct can provide us with the float packed into bytes. The '!' ensures that
    # it's in network byte order (big-endian) and the 'f' says that it should be
    # packed as a float. Alternatively, for double-precision, you could use 'd'.
    packed = struct.pack('!f', num)
    print 'Packed: %s' % repr(packed)

    # For each character in the returned string, we'll turn it into its corresponding
    # integer code point
    #
    # [62, 163, 215, 10] = [ord(c) for c in '>\xa3\xd7\n']
    integers = [ord(c) for c in packed]
    print 'Integers: %s' % integers

    # For each integer, we'll convert it to its binary representation.
    binaries = [bin(i) for i in integers]
    print 'Binaries: %s' % binaries

    # Now strip off the '0b' from each of these
    stripped_binaries = [s.replace('0b', '') for s in binaries]
    print 'Stripped: %s' % stripped_binaries

    # Pad each byte's binary representation with 0's to make sure it has all 8 bits:
    #
    # ['00111110', '10100011', '11010111', '00001010']
    padded = [s.rjust(8, '0') for s in stripped_binaries]
    print 'Padded: %s' % padded

    # At this point, we have each of the bytes for the network byte ordered float
    # in an array as binary strings. Now we just concatenate them to get the total
    # representation of the float:
    return ''.join(padded)
And the result for a few examples:
>>> binary(1)
Packed: '?\x80\x00\x00'
Integers: [63, 128, 0, 0]
Binaries: ['0b111111', '0b10000000', '0b0', '0b0']
Stripped: ['111111', '10000000', '0', '0']
Padded: ['00111111', '10000000', '00000000', '00000000']
'00111111100000000000000000000000'
>>> binary(0.32)
Packed: '>\xa3\xd7\n'
Integers: [62, 163, 215, 10]
Binaries: ['0b111110', '0b10100011', '0b11010111', '0b1010']
Stripped: ['111110', '10100011', '11010111', '1010']
Padded: ['00111110', '10100011', '11010111', '00001010']
'00111110101000111101011100001010'
Here's an ugly one ...
>>> import struct
>>> bin(struct.unpack('!i',struct.pack('!f',1.0))[0])
'0b111111100000000000000000000000'
Basically, I just used the struct module to convert the float to an int ...
Here's a slightly better one using ctypes:
>>> import ctypes
>>> bin(ctypes.c_uint32.from_buffer(ctypes.c_float(1.0)).value)
'0b111111100000000000000000000000'
Basically, I construct a float and use the same memory location, but I tag it as a c_uint32. The c_uint32's value is a python integer which you can use the builtin bin function on.
Note: by switching types we can do reverse operation as well
>>> ctypes.c_float.from_buffer(ctypes.c_uint32(int('0b111111100000000000000000000000', 2))).value
1.0
also for double-precision 64-bit float we can use the same trick using ctypes.c_double & ctypes.c_uint64 instead.
Found another solution using the bitstring module.
import bitstring
f1 = bitstring.BitArray(float=1.0, length=32)
print(f1.bin)
Output:
00111111100000000000000000000000
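bitstring can read the bits back the other way too; a minimal sketch (my addition), relying on the same interpretation properties the answer above uses:
import bitstring
b = bitstring.BitArray(bin='00111111100000000000000000000000')
print(b.float)  # 1.0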
For the sake of completeness, you can achieve this with numpy using:
import numpy as np
f = 1.00
int32bits = np.asarray(f, dtype=np.float32).view(np.int32).item()  # item() optional
You can then print this, with padding, using the b format specifier
print('{:032b}'.format(int32bits))
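The same view trick runs in reverse; a minimal sketch (my addition, using a 1-element array so the view works on any NumPy version):
import numpy as np
bits = int('00111111100000000000000000000000', 2)
print(np.array([bits], dtype=np.int32).view(np.float32)[0])  # 1.0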
With these two simple functions (Python >=3.6) you can easily convert a float number to binary and vice versa, for IEEE 754 binary64.
import struct

def bin2float(b):
    ''' Convert binary string to a float.

    Attributes:
        :b: Binary string to transform.
    '''
    h = int(b, 2).to_bytes(8, byteorder="big")
    return struct.unpack('>d', h)[0]

def float2bin(f):
    ''' Convert float to 64-bit binary string.

    Attributes:
        :f: Float number to transform.
    '''
    [d] = struct.unpack(">Q", struct.pack(">d", f))
    return f'{d:064b}'
For example:
print(float2bin(1.618033988749894))
print(float2bin(3.14159265359))
print(float2bin(5.125))
print(float2bin(13.80))
print(bin2float('0011111111111001111000110111011110011011100101111111010010100100'))
print(bin2float('0100000000001001001000011111101101010100010001000010111011101010'))
print(bin2float('0100000000010100100000000000000000000000000000000000000000000000'))
print(bin2float('0100000000101011100110011001100110011001100110011001100110011010'))
The output is:
0011111111111001111000110111011110011011100101111111010010100100
0100000000001001001000011111101101010100010001000010111011101010
0100000000010100100000000000000000000000000000000000000000000000
0100000000101011100110011001100110011001100110011001100110011010
1.618033988749894
3.14159265359
5.125
13.8
I hope you like it, it works perfectly for me.
This problem is more cleanly handled by breaking it into two parts.
The first is to convert the float into an int with the equivalent bit pattern:
import struct
def float32_bit_pattern(value):
    return sum(ord(b) << 8*i for i, b in enumerate(struct.pack('f', value)))
Python 3 doesn't require ord to convert the bytes to integers, so you can simplify the above a little bit:
def float32_bit_pattern(value):
    return sum(b << 8*i for i, b in enumerate(struct.pack('f', value)))
Next convert the int to a string:
def int_to_binary(value, bits):
    return bin(value).replace('0b', '').rjust(bits, '0')
Now combine them:
>>> int_to_binary(float32_bit_pattern(1.0), 32)
'00111111100000000000000000000000'
Piggy-backing on Dan's answer with a colored version for Python 3:
import struct
BLUE = "\033[1;34m"
CYAN = "\033[1;36m"
GREEN = "\033[0;32m"
RESET = "\033[0;0m"
def binary(num):
    return [bin(c).replace('0b', '').rjust(8, '0') for c in struct.pack('!f', num)]

def binary_str(num):
    bits = ''.join(binary(num))
    return ''.join([BLUE, bits[:1], GREEN, bits[1:10], CYAN, bits[10:], RESET])

def binary_str_fp16(num):
    bits = ''.join(binary(num))
    return ''.join([BLUE, bits[:1], GREEN, bits[1:10][-5:], CYAN, bits[10:][:11], RESET])
x = 0.7
print(x, "as fp32:", binary_str(0.7), "as fp16 is sort of:", binary_str_fp16(0.7))
After browsing through lots of similar questions I've written something which hopefully does what I wanted.
import struct

f = 1.00
negative = False
if f < 0:
    f = f*-1
    negative = True
s = struct.pack('>f', f)
p = struct.unpack('>l', s)[0]
hex_data = hex(p)
scale = 16
num_of_bits = 32
binrep = bin(int(hex_data, scale))[2:].zfill(num_of_bits)
if negative:
    binrep = '1' + binrep[1:]
binrep is the result.
Each part will be explained.
f = 1.00
negative = False
if f < 0:
    f = f*-1
    negative = True
This converts the number to positive if it is negative, and records that in the variable negative. The reason is that the difference between the positive and negative binary representations is just the first bit, and this was simpler than figuring out what goes wrong when doing the whole process with negative numbers.
s = struct.pack('>f', f) #'?\x80\x00\x00'
p = struct.unpack('>l', s)[0] #1065353216
hex_data = hex(p) #'0x3f800000'
s is the float f packed into 4 bytes; it is however not in the pretty form I need. That's where p comes in: it is the int representation of those bytes. Then another conversion gives a pretty hex.
scale = 16
num_of_bits = 32
binrep = bin(int(hex_data, scale))[2:].zfill(num_of_bits)
if negative:
    binrep = '1' + binrep[1:]
scale is the base 16 for the hex. num_of_bits is 32, as a float is 32 bits; it is used later to pad the representation with 0's to get to 32. I got the code for binrep from this question. If the number was negative, just change the first bit.
I know this is ugly, but i didn't find a nice way and I needed it fast. Comments are welcome.
This is a little more than was asked, but it was what I needed when I found this entry. This code will give the mantissa, base and sign of the IEEE 754 32 bit float.
import ctypes

def binRep(num):
    binNum = bin(ctypes.c_uint.from_buffer(ctypes.c_float(num)).value)[2:]
    print("bits: " + binNum.rjust(32, "0"))
    mantissa = "1" + binNum[-23:]
    print("sig (bin): " + mantissa.rjust(24))
    mantInt = int(mantissa, 2) / 2**23
    print("sig (float): " + str(mantInt))
    base = int(binNum[-31:-23], 2) - 127
    print("base:" + str(base))
    sign = 1 - 2 * ("1" == binNum[-32:-31].rjust(1, "0"))
    print("sign:" + str(sign))
    print("recreate:" + str(sign * mantInt * (2**base)))

binRep(-0.75)
output:
bits: 10111111010000000000000000000000
sig (bin): 110000000000000000000000
sig (float): 1.5
base:-1
sign:-1
recreate:-0.75
Converts a float between 0 and 1:
def float_bin(n, places=3):
    if n < 0 or n > 1:
        return "ERROR, n must be in 0..1"
    answer = "0."
    while n > 0:
        if len(answer) - 2 == places:
            return answer
        b = n * 2
        if b >= 1:
            answer += '1'
            n = b - 1
        else:
            answer += '0'
            n = b
    return answer
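For example (my quick check of the function above):
>>> float_bin(0.625)
'0.101'
>>> float_bin(0.1, places=8)
'0.00011001'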
Several of these answers did not work as written with Python 3, or did not give the correct representation for negative floating point numbers. I found the following to work for me (though this gives 64-bit representation which is what I needed)
import struct

def float_to_binary_string(f):
    def int_to_8bit_binary_string(n):
        stg = bin(n).replace('0b', '')
        fillstg = '0' * (8 - len(stg))
        return fillstg + stg
    return ''.join(int_to_8bit_binary_string(int(b)) for b in struct.pack('>d', f))
I made a very simple one. Please check it, and if you think there is any mistake please let me know. It works fine for me.
sds = float(input("Enter the number : "))
sf = float("0." + (str(sds).split(".")[-1]))
aa = []
while len(aa) < 15:
    dd = round(sf*2, 5)
    if dd-1 > 0:
        aa.append(1)
        sf = dd-1
    else:
        sf = round(dd, 5)
        aa.append(0)
des = aa[:-1]
print("\n")
AA = [str(i) for i in des]
print("So the Binary Of : %s>>>" % sds, bin(int(str(sds).split(".")[0])).replace("0b", '') + "." + "".join(AA))
Or, in the case of an integer number, just use bin(integer).replace("0b", '').
Let's use numpy!
import numpy as np

def binary(num, string=True):
    bits = np.unpackbits(np.array([num]).view('u1'))
    if string:
        return np.array2string(bits, separator='')[1:-1]
    else:
        return bits
e.g.,
binary(np.pi)
# '0001100000101101010001000101010011111011001000010000100101000000'
binary(np.pi, string=False)
# array([0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1,
# 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0,
# 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0],
# dtype=uint8)
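One caveat (my note, not the original answer's): view('u1') exposes the bytes in the machine's native order, typically little-endian, so the bit string above is byte-reversed relative to the big-endian strings elsewhere on this page. On a little-endian machine you can reverse the bytes first:
import numpy as np
bits = np.unpackbits(np.array([np.pi]).view('u1')[::-1])  # reverse the bytes first
print(''.join(bits.astype(str)))
# '0100000000001001001000011111101101010100010001000010110100011000'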
You can use .format for the easiest representation of bits, in my opinion; my code would look something like:
import struct

def fto32b(flt):
    # is given a 32 bit float value and converts it to a binary string
    if isinstance(flt, float):
        # THE FOLLOWING IS AN EXPANDED REPRESENTATION OF THE ONE LINE RETURN
        # packed = struct.pack('!f',flt) <- get the bytes in (!)Big Endian format of a (f) Float
        # integers = []
        # for c in packed:
        #     integers.append(ord(c)) <- change each entry into an int
        # binaries = []
        # for i in integers:
        #     binaries.append("{0:08b}".format(i)) <- get the 8bit binary representation of each int (00100101)
        # binarystring = ''.join(binaries) <- join all the bytes together
        # return binarystring
        return ''.join(["{0:08b}".format(i) for i in [ord(c) for c in struct.pack('!f', flt)]])
    return None
Output:
>>> fto32b(5.0)
'01000000101000000000000000000000'
>>> fto32b(1.0)
'00111111100000000000000000000000'
