I have the following strange problem while trying to read and unpack an int32 + int64 in Python 2.7.9:
import struct

file = open('my_file.bin', 'rb')
s = file.read(4 + 8)
struct.unpack('IQ', s)
I get the following error:
unpack requires a string argument of length 16
Why is that? I = 4, Q = 8, so IQ should be 12.
By the way, the following works:
s = file.read(4)
struct.unpack('I',s)
s = file.read(8)
struct.unpack('Q',s)
Haven't used it myself, but according to the documentation, unpack() uses the native padding of structs, as a C compiler on your machine would; apparently, you are running on a 64-bit machine. Prefix the format string with an equals sign (=IQ) if you know the struct to be packed and to follow native byte ordering.
Background: CPUs can fetch data aligned on word boundaries more efficiently than packed data, which requires two fetch cycles (and DRAM access is slow compared to CPU speeds). Now that 64 bits is common (with 8-byte words), this helps explain why we need much more memory these days…
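You can see the padding directly with struct.calcsize; the native size is platform dependent, while the standard-size variant is always 12:

```python
import struct

native = struct.calcsize('IQ')     # 16 on a typical 64-bit build: 4 pad
                                   # bytes after 'I' so 'Q' is 8-aligned
standard = struct.calcsize('=IQ')  # always 12: standard sizes, no padding

# unpack('=IQ', s) therefore accepts the 12 bytes read from the file.
```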
It is an alignment-related issue. You can check in the docs.
I just spent ~30 minutes debugging and double-checking Python and C# code, only to find out that my struct.pack was writing the wrong data. When I separated it into individual calls, it worked fine.
This is what I had before
file.write(struct.pack("fffHf", kf_time / frame_divisor, kf_in_tangent, kf_out_tangent, kf_interpolation_type, kf_value))
This is what I have now
file.write(struct.pack("f", kf_time / frame_divisor))
file.write(struct.pack("f", kf_in_tangent))
file.write(struct.pack("f", kf_out_tangent))
file.write(struct.pack("H", kf_interpolation_type))
file.write(struct.pack("f", kf_value))
Why does the first variation not write the data I expected? What is so different from writing these fields separately?
(File is opened in binary mode, platform is 64 bit Windows, Python 3.5)
Presumably because, as the struct documentation clearly states:
Note: By default, the result of packing a given C struct includes pad bytes in order to maintain proper alignment for the C types involved; similarly, alignment is taken into account when unpacking. This behavior is chosen so that the bytes of a packed struct correspond exactly to the layout in memory of the corresponding C struct. To handle platform-independent data formats or omit implicit pad bytes, use standard size and alignment instead of native size and alignment: see Byte Order, Size, and Alignment for details.
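The effect is easy to confirm: natively the H is followed by two pad bytes so the final f lands on a 4-byte boundary, while an explicit byte-order prefix disables padding:

```python
import struct

native_size = struct.calcsize('fffHf')     # 20 natively: 2 pad bytes after H
standard_size = struct.calcsize('<fffHf')  # always 18 with an explicit prefix

# Five separate struct.pack calls never trigger inter-field padding,
# which is why the field-by-field writes matched the external reader.
combined = struct.pack('<fffHf', 1.0, 2.0, 3.0, 4, 5.0)
```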
I'm attempting to use Python's struct module to decode some binary headers from a GPS system. I have two types of header, long and short, and I have an example of reading each one below:
import struct
import binascii
packed_data_short = binascii.unhexlify('aa44132845013b078575e40c')
packed_data_long = binascii.unhexlify('aa44121ca20200603400000079783b07bea9bd0c00000000cc5dfa33')
print packed_data_short
print len(packed_data_short)
sS = struct.Struct('c c c B H H L')
unpacked_data_short = sS.unpack(packed_data_short)
print 'Unpacked Values:', unpacked_data_short
print ''
print packed_data_long
print len(packed_data_long)
sL = struct.Struct('c c c B H c b H H b c H L L H H')
unpacked_data_long = sL.unpack(packed_data_long)
print 'Unpacked Values:', unpacked_data_long
In both cases I get the length I am expecting - 12 bytes for a short header and 28 bytes for a long one. In addition all the fields appear correctly and (to the best of my knowledge with old data) are sensible values. All good so far.
I moved this across onto another computer running a different version of Python (2.7.6 as opposed to 2.7.11) and I get different struct lengths from calcsize, and errors when I pass data of the length I calculated (and that the other version accepts). Now the short header expects 16 bytes and the long one 36 bytes.
If I pass the larger amount it asks for, most of the fields are fine until the "L" records. In the long example the first one is as expected, but the second, which should just be 0, is not correct, and consequently the two fields after it are also incorrect. Given the number of extra bytes the function wants, I noticed the difference is 4 bytes for each "L", and indeed struct.calcsize('L') returns 8 in 2.7.6 and 4 in 2.7.11. This at least narrows down where the problem is, but I don't understand why it is happening.
At present I'm updating the second computer to Python 2.7.11 (will update once I have it), but I can't find anything in the struct documentation which would suggest there has been a change to this. Is there anything I have clearly missed or is this simply a version problem?
The documentation I have been referring to is here.
EDIT: Further to comment regarding OS - one is a 64 bit version of Windows 7 (the one which works as expected), the second is a 64 bit version of Ubuntu 14.04.
This is not a bug; see struct documentation:
Note: By default, the result of packing a given C struct includes pad bytes in order to maintain proper alignment for the C types involved; similarly, alignment is taken into account when unpacking. This behavior is chosen so that the bytes of a packed struct correspond exactly to the layout in memory of the corresponding C struct. To handle platform-independent data formats or omit implicit pad bytes, use standard size and alignment instead of native size and alignment: see Byte Order, Size, and Alignment for details.
To decode the data from that GPS device, you need to use < or > in your format string as described in 7.3.2.1. Byte Order, Size, and Alignment. Since you got it working on the other machine, I presume the data is in little-endian format, and it would work portably if you used
sS = struct.Struct('<cccBHHL')
sL = struct.Struct('<cccBHcbHHbcHLLHH')
whose sizes are always
>>> sS.size
12
>>> sL.size
28
Why did they differ? The original computer you're using is either a Windows machine or a 32-bit machine, and the remote machine is a 64-bit *nix. In native sizes, L means the C compiler's unsigned long type. On 32-bit Unixes and all Windows versions, this is 32 bits wide.
On 64-bit Unixes the standard ABI on x86 is LP64, which means that long and pointers are 64 bits wide. Windows, however, uses LLP64; only long long is 64-bit there, because lots of code (and even the Windows API itself) has long relied on long being exactly 32 bits.
With the < flag present, L and I are both guaranteed to be 32-bit. There was no problem with the other field specifiers because their sizes are the same on all x86 platforms and operating systems.
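A quick check of the sizes involved (native 'L' varies by platform; the prefixed formats never do):

```python
import struct

native_L = struct.calcsize('L')     # 4 on Windows/32-bit, 8 on 64-bit *nix
standard_L = struct.calcsize('<L')  # always 4

short_hdr = struct.calcsize('<cccBHHL')          # always 12
long_hdr = struct.calcsize('<cccBHcbHHbcHLLHH')  # always 28
```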
In Python, long integers have unlimited precision. I would like to write a 16 byte (128 bit) integer to a file. struct from the standard library supports only up to 8 byte integers. array has the same limitation. Is there a way to do this without masking and shifting each integer?
Some clarification here: I'm writing to a file that's going to be read in from non-Python programs, so pickle is out. All 128 bits are used.
I think for unsigned integers (and ignoring endianness) something like
import binascii
def binify(x):
    h = hex(x)[2:].rstrip('L')
    return binascii.unhexlify('0'*(32-len(h))+h)
>>> for i in 0, 1, 2**128-1:
... print i, repr(binify(i))
...
0 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
1 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01'
340282366920938463463374607431768211455 '\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff'
might technically satisfy the requirements of having non-Python-specific output, not using an explicit mask, and (I assume) not using any non-standard modules. Not particularly elegant, though.
Two possible solutions:
Just pickle your long integer. This will write the integer in a special format which allows it to be read again, if this is all you want.
Use the second code snippet in this answer to convert the long int to a big endian string (which can be easily changed to little endian if you prefer), and write this string to your file.
The problem is that the internal representation of bigints does not directly include the binary data you ask for.
The bitarray module from PyPI, in combination with the built-in bin() function, seems like a good combination for a solution that is simple and flexible.
from bitarray import bitarray
data = bitarray(bin(my_long)[2:]).tobytes()
The endianness can be controlled with a few more lines of code. You'll have to evaluate the efficiency.
Why not use struct with the unsigned long long type twice?
import struct
some_file.write(struct.pack("QQ", var // (2**64), var % (2**64)))
That's documented here (scroll down to get the table with Q): http://docs.python.org/library/struct.html
This may not avoid the "mask and shift each integer" requirement. I'm not sure what avoiding mask and shift means in the context of Python long values.
The bytes are these:
def bytes( long_int ):
    bytes = []
    while long_int != 0:
        b = long_int%256
        bytes.insert( 0, b )
        long_int //= 256
    return bytes
You can then pack this list of bytes using struct.pack('16B', *bytes); note the * to pass the list as separate arguments, and the unsigned B format (you will also need to zero-pad the list to exactly 16 entries).
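Putting the pieces together (a sketch: the function is renamed to avoid shadowing the bytes built-in, and the list is zero-padded so short values still fill 16 bytes; pack wants 16 separate arguments, hence the *):

```python
import struct

def long_to_byte_list(long_int):
    # Repeatedly peel off the low byte; the most significant byte ends first.
    out = []
    while long_int != 0:
        out.insert(0, long_int % 256)
        long_int //= 256
    return out

b = long_to_byte_list(0x0102)
b = [0] * (16 - len(b)) + b       # zero-pad on the left to 16 bytes
packed = struct.pack('16B', *b)   # '*' splats the list; 'B' is unsigned
```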
With Python 3.2 and later, you can use int.to_bytes and int.from_bytes: https://docs.python.org/3/library/stdtypes.html#int.to_bytes
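A sketch of the round-trip on Python 3, assuming an unsigned big-endian layout:

```python
n = 2**128 - 1                         # largest unsigned 128-bit value
raw = n.to_bytes(16, byteorder='big')  # 16 bytes, most significant first
back = int.from_bytes(raw, byteorder='big')
```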
You could pickle the object to binary, use protocol buffers (I don't know if they allow you to serialize unlimited precision integers though) or BSON if you do not want to write code.
But writing a function that dumps 16-byte integers by shifting should not be hard to do if it's not time-critical.
This may be a little late, but I don't see why you can't use struct:
import struct

bigint = 0xFEDCBA9876543210FEDCBA9876543210L
print bigint,hex(bigint).upper()
cbi = struct.pack("!QQ",bigint&0xFFFFFFFFFFFFFFFF,(bigint>>64)&0xFFFFFFFFFFFFFFFF)
print len(cbi)
The bigint by itself is rejected, but if you mask it with &0xFFFFFFFFFFFFFFFF you can reduce it to an 8-byte int instead of 16. Then the upper part is shifted and masked as well. You may have to play with byte ordering a bit; I used the ! mark to tell it to produce network byte order. Also, the most and least significant halves may need to be swapped. I will leave that as an exercise for the reader. I would say saving things in network byte order is safer, so you always know the endianness of your data.
No, don't ask me if network endian is big or little endian...
Based on #DSM's answer, and to support negative integers and varying byte sizes, I've created the following improved snippet:
import binascii

def to_bytes(num, size):
    x = num if num >= 0 else 256**size + num
    h = hex(x)[2:].rstrip("L")
    return binascii.unhexlify("0"*((2*size)-len(h))+h)
This will properly handle negative integers and let the user set the number of bytes
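A quick sanity check of the snippet (Python 3 shown, where unhexlify returns bytes and the "L" strip is a no-op): negatives come out as two's complement of the requested width.

```python
import binascii

def to_bytes(num, size):
    # Negative values are stored as two's complement of the given width.
    x = num if num >= 0 else 256**size + num
    h = hex(x)[2:].rstrip("L")
    return binascii.unhexlify("0" * ((2 * size) - len(h)) + h)

minus_one = to_bytes(-1, 4)   # four 0xff bytes
small = to_bytes(255, 2)      # zero-padded to two bytes
```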
I'm attempting to write a Python C extension that reads packed binary data (it is stored as structs of structs) and then parses it out into Python objects. Everything works as expected on a 32 bit machine (the binary files are always written on 32bit architecture), but not on a 64 bit box. Is there a "preferred" way of doing this?
It would be a lot of code to post but as an example:
struct
{
WORD version;
BOOL upgrade;
time_t time1;
time_t time2;
} apparms;
FILE *fp;
fp = fopen(filePath, "r+b");
fread(&apparms, sizeof(apparms), 1, fp);
return Py_BuildValue("{s:i,s:l,s:l}",
"sysVersion",apparms.version,
"powerFailTime", apparms.time1,
"normKitExpDate", apparms.time2
);
Now on a 32 bit system this works great, but on a 64 bit my time_t sizes are different (32bit vs 64 bit longs).
Damn, you people are fast.
Patrick, I originally started using the struct package but found it just way too slow for my needs. Plus I was looking for an excuse to write a Python extension.
I know this is a stupid question but what types do I need to watch out for?
Thanks.
Explicitly specify that your data types (e.g. integers) are 32-bit. Otherwise if you have two integers next to each other when you read them they will be read as one 64-bit integer.
When you are dealing with cross-platform issues, the two main things to watch out for are:
Bitness. If your packed data is written with 32-bit ints, then all of your code must explicitly specify 32-bit ints when reading and writing.
Byte order. If you move your code from Intel chips to PPC or SPARC, your byte order will be wrong. You will have to import your data and then byte-flip it so that it matches up with the current architecture. Otherwise 12 (0x0000000C) will be read as 201326592 (0x0C000000).
Hopefully this helps.
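The byte-flip in that example can be demonstrated with the struct module by reading the same four bytes with each byte-order prefix:

```python
import struct

raw = struct.pack('>I', 12)            # big-endian: 00 00 00 0C
flipped = struct.unpack('<I', raw)[0]  # read with the wrong byte order
correct = struct.unpack('>I', raw)[0]
```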
The 'struct' module should be able to do this, although alignment of structs in the middle of the data is always an issue. It's not very hard to get it right, however: find out (once) what boundary the structs-in-structs align to, then pad (manually, with the 'x' specifier) to that boundary. You can doublecheck your padding by comparing struct.calcsize() with your actual data. It's certainly easier than writing a C extension for it.
In order to keep using Py_BuildValue() like that, you have two options. You can determine the size of time_t at compile time (in terms of fundamental types, so 'an int' or 'a long' or 'an ssize_t') and then use the right format character for Py_BuildValue: 'i' for an int, 'l' for a long, 'n' for an ssize_t. Or you can use PyInt_FromSsize_t() manually, in which case the compiler does the upcasting for you, and then use the 'O' format character to pass the result to Py_BuildValue.
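To illustrate the struct-module route on the apparms example, here is a sketch assuming the file came from a 32-bit little-endian compiler, so BOOL and time_t are both 4 bytes and there are 2 pad bytes after the WORD; this layout is an assumption, not taken from the original code:

```python
import struct

# Hypothetical 32-bit layout: WORD -> H, 2 pad bytes (2x), BOOL -> i,
# two time_t fields -> l, l. The '<' prefix pins sizes and endianness.
APPARMS = struct.Struct('<H2xill')

# Simulate a record written by the 32-bit program, then read it back.
blob = struct.pack('<H2xill', 3, 1, 1000, 2000)
version, upgrade, time1, time2 = APPARMS.unpack(blob)
```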
You need to make sure you're using architecture independent members for your struct. For instance an int may be 32 bits on one architecture and 64 bits on another. As others have suggested, use the int32_t style types instead. If your struct contains unaligned members, you may need to deal with padding added by the compiler too.
Another common problem with cross architecture data is endianness. Intel i386 architecture is little-endian, but if you're reading on a completely different machine (e.g. an Alpha or Sparc), you'll have to worry about this too.
The Python struct module deals with both these situations, using the prefix passed as part of the format string.
@ - Use native size, endianness and alignment. i = sizeof(int), l = sizeof(long)
= - Use native endianness, but standard sizes and alignment (i = 32 bits, l = 32 bits)
< - Little-endian standard sizes/alignment
> - Big-endian standard sizes/alignment
In general, if the data passes off your machine, you should nail down the endianness and the size/padding format to something specific, i.e. use "<" or ">" as your format prefix. If you want to handle this in your C extension, you may need to add some code for it.
What's your code for reading the binary data? Make sure you're copying the data into properly-sized types like int32_t instead of just int.
Why aren't you using the struct package?
I'm reading some MPEG Transport Stream protocol over UDP and it has some funky bitfields in it (length 13 for example). I'm using the "struct" library to do the broad unpacking, but is there a simple way to say "Grab the next 13 bits" rather than have to hand-tweak the bit manipulation? I'd like something like the way C does bit fields (without having to revert to C).
Suggestions?
The bitstring module is designed to address just this problem. It will let you read, modify and construct data using bits as the basic building blocks. The latest versions are for Python 2.6 or later (including Python 3) but version 1.0 supported Python 2.4 and 2.5 as well.
A relevant example for you might be this, which strips out all the null packets from a transport stream (and quite possibly uses your 13 bit field?):
from bitstring import Bits, BitStream
# Opening from a file means that it won't be all read into memory
s = Bits(filename='test.ts')
outfile = open('test_nonull.ts', 'wb')
# Cut the stream into 188 byte packets
for packet in s.cut(188*8):
    # Take a 13 bit slice and interpret as an unsigned integer
    PID = packet[11:24].uint
    # Write out the packet if the PID doesn't indicate a 'null' packet
    if PID != 8191:
        # The 'bytes' property converts back to a string.
        outfile.write(packet.bytes)
Here's another example including reading from bitstreams:
# You can create from hex, binary, integers, strings, floats, files...
# This has a hex code followed by two 12 bit integers
s = BitStream('0x000001b3, uint:12=352, uint:12=288')
# Append some other bits
s += '0b11001, 0xff, int:5=-3'
# read back as 32 bits of hex, then two 12 bit unsigned integers
start_code, width, height = s.readlist('hex:32, 2*uint:12')
# Skip some bits then peek at next bit value
s.pos += 4
if s.peek(1):
    flags = s.read(9)
You can use standard slice notation to slice, delete, reverse, overwrite, etc. at the bit level, and there are bit level find, replace, split etc. functions. Different endiannesses are also supported.
# Replace every '1' bit by 3 bits
s.replace('0b1', '0b001')
# Find all occurrences of a bit sequence
bitposlist = list(s.findall('0b01000'))
# Reverse bits in place
s.reverse()
The full documentation is here.
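For completeness, the "hand-tweaked bit manipulation" the question hopes to avoid is only a few lines with plain ints (Python 3 sketch; the header bytes below are made up, with 0x1FFF being the null-packet PID):

```python
def read_bits(data, bit_pos, nbits):
    # Treat the whole buffer as one big integer, then shift and mask
    # out an arbitrary bit field, counting bits from the left (MSB first).
    as_int = int.from_bytes(data, 'big')
    shift = len(data) * 8 - bit_pos - nbits
    return (as_int >> shift) & ((1 << nbits) - 1)

# Bits 11..23 of a TS packet header hold the 13-bit PID.
header = bytes([0x47, 0x1F, 0xFF, 0x10])
pid = read_bits(header, 11, 13)
```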
It's an often-asked question. There's an ASPN Cookbook entry on it that has served me in the past.
And there is an extensive page of requirements one person would like to see from a module doing this.