I just spent ~30 minutes debugging and double-checking Python and C# code, only to find out that my struct.pack call was writing the wrong data. When I split it into separate calls, it works fine.
This is what I had before:
file.write(struct.pack("fffHf", kf_time / frame_divisor, kf_in_tangent, kf_out_tangent, kf_interpolation_type, kf_value))
This is what I have now:
file.write(struct.pack("f", kf_time / frame_divisor))
file.write(struct.pack("f", kf_in_tangent))
file.write(struct.pack("f", kf_out_tangent))
file.write(struct.pack("H", kf_interpolation_type))
file.write(struct.pack("f", kf_value))
Why does the first variation not write the data that I expected? What is so different from writing these values separately?
(The file is opened in binary mode; the platform is 64-bit Windows, Python 3.5.)
Presumably because, as the struct documentation clearly states:
Note: By default, the result of packing a given C struct includes pad bytes in order to maintain proper alignment for the C types involved; similarly, alignment is taken into account when unpacking. This behavior is chosen so that the bytes of a packed struct correspond exactly to the layout in memory of the corresponding C struct. To handle platform-independent data formats or omit implicit pad bytes, use standard size and alignment instead of native size and alignment: see Byte Order, Size, and Alignment for details.
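The fix for the question's format is to opt out of native alignment with a '<' (or '=') prefix; in native mode the 'H' field forces two pad bytes before the final float so that it stays 4-byte aligned. A quick sketch, reusing the names from the question:

import struct

# Native mode: 4+4+4+2 = 14 bytes, then 2 pad bytes so the last
# float lands on a 4-byte boundary: 20 bytes written in total.
print(struct.calcsize("fffHf"))    # 20 on a typical x86-64 build
# Standard mode: fields are packed back to back, no padding.
print(struct.calcsize("<fffHf"))   # 18, same as the five separate writes

# Equivalent to the five separate file.write() calls:
file.write(struct.pack("<fffHf", kf_time / frame_divisor, kf_in_tangent,
                       kf_out_tangent, kf_interpolation_type, kf_value))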
Related
My Python app needs to receive an array of 16-bit integer tuples from a C++ application.
The data consists of an array of 32-bit unsigned integers, where each integer represents an IQ complex number. I and Q are each signed 16-bit numbers.
The array size is constant (6000).
The apps run on similar architectures so I don’t need to worry about endianness.
Please suggest a Python code snippet to read the data from a socket into a list of IQ tuples. (I know how to create and connect a socket).
Best regards
David
You can use the struct library in Python if your incoming data is raw binary bytes.
Or, if they are simple hexadecimal numbers, then the conversion is direct.
It would be really helpful if you could tell us the data type you receive from C++ and the required format in Python.
Python has several options to process binary data; in this case, you start out by reading from a socket, producing an immutable bytes buffer (bytes in Python 3, str in Python 2). This can be parsed as 16-bit words using either struct.unpack or array.array:
tuple_of_ints = struct.unpack('=12000h', data)
array_of_s16s = array.array('h', data)
From there, you still have only a one-dimensional structure, where odd and even items are your I and Q values. If using numpy, you could use numpy.frombuffer or numpy.fromstring to create a similar array, then reshape it.
We could also convert items individually, which is a bit slower:
list_of_complex_numbers = [complex(*struct.unpack('hh', data[i:i+4]))
                           for i in range(0, len(data), 4)]
numpy is also capable of reading from a file, so with a file-like socket you might be able to use numpy.fromfile(socket, numpy.int16, 2*6000).
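Putting that together, here is a minimal sketch of the receive side; sock is assumed to be an already-connected socket, recv_exact is a hypothetical helper (recv() can return fewer bytes than requested), and whether I or Q comes first inside each 32-bit word depends on how the C++ side packs it, so swap the slices if needed:

import struct

N_PAIRS = 6000           # constant array size from the question
N_BYTES = 4 * N_PAIRS    # one 32-bit word per IQ pair

def recv_exact(sock, n):
    # Read exactly n bytes from the socket; recv() may return less.
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise EOFError('socket closed mid-message')
        chunks.append(chunk)
        n -= len(chunk)
    return b''.join(chunks)

data = recv_exact(sock, N_BYTES)
# '=': native byte order with standard sizes; 'h': signed 16-bit
words = struct.unpack('=%dh' % (2 * N_PAIRS), data)
iq_tuples = zip(words[0::2], words[1::2])   # [(I0, Q0), (I1, Q1), ...]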
I have the following strange problem while trying to read and unpack an int32 + int64 in Python 2.7.9:
import struct

file = open('my_file.bin', 'rb')
s = file.read(4 + 8)
struct.unpack('IQ', s)
I get the following error:
unpack requires a string argument of length 16
Why is that? I = 4, Q = 8, so IQ = 12.
By the way, the following works:
s = file.read(4)
struct.unpack('I',s)
s = file.read(8)
struct.unpack('Q',s)
Haven't used it myself, but according to the documentation, unpack() uses native padding of structs, as a C compiler on your machine would; apparently, you are running on a 64-bit machine. Prefix the format string IQ with an equals sign, =IQ, if you know the struct is packed and follows native byte ordering.
Background: CPUs can fetch data aligned on word boundaries more efficiently than packed data, which requires two fetch cycles (and DRAM access is slow compared to CPU speeds). Now that 64 bits is common (with 8-byte words), this helps explain why we need much more memory these days…
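You can see the padding directly (sizes from a typical 64-bit build, my_file.bin as in the question):

import struct

print struct.calcsize('IQ')    # 16: four pad bytes inserted so Q is 8-byte aligned
print struct.calcsize('=IQ')   # 12: standard sizes, no implicit padding

f = open('my_file.bin', 'rb')
val32, val64 = struct.unpack('=IQ', f.read(12))   # now succeeds with 12 bytes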
It is an alignment-related issue; you can check in the docs.
I'm attempting to use Python's struct module to decode some binary headers from a GPS system. I have two types of header, long and short, and I have an example of reading each one below:
import struct
import binascii
packed_data_short = binascii.unhexlify('aa44132845013b078575e40c')
packed_data_long = binascii.unhexlify('aa44121ca20200603400000079783b07bea9bd0c00000000cc5dfa33')
print packed_data_short
print len(packed_data_short)
sS = struct.Struct('c c c B H H L')
unpacked_data_short = sS.unpack(packed_data_short)
print 'Unpacked Values:', unpacked_data_short
print ''
print packed_data_long
print len(packed_data_long)
sL = struct.Struct('c c c B H c b H H b c H L L H H')
unpacked_data_long = sL.unpack(packed_data_long)
print 'Unpacked Values:', unpacked_data_long
In both cases I get the length I am expecting - 12 bytes for a short header and 28 bytes for a long one. In addition all the fields appear correctly and (to the best of my knowledge with old data) are sensible values. All good so far.
I moved this across to another computer (running a different version of Python, 2.7.6 as opposed to 2.7.11) and I get different struct lengths from calcsize, and errors when I pass it the length I calculated and that the other version is content with. Now the short header is expecting 16 bytes and the long one 36 bytes.
If I pass the larger amount it is asking for, most of the records are fine until the "L" records. In the long example the first one is as expected, but the second one, which should just be 0, is not correct, and consequently the two fields after it are also incorrect. In light of the number of bytes the function wants, I noticed that it is 4 extra for each of the "L"s, and indeed just running struct.calcsize('L') I get 8 for the length in 2.7.6 and 4 in 2.7.11. This at least narrows down where the problem is, but I don't understand why it is happening.
At present I'm updating the second computer to Python 2.7.11 (I will update this question once I have it), but I can't find anything in the struct documentation which would suggest there has been a change to this. Is there anything I have clearly missed, or is this simply a version problem?
The documentation I have been referring to is here.
EDIT: Further to comment regarding OS - one is a 64 bit version of Windows 7 (the one which works as expected), the second is a 64 bit version of Ubuntu 14.04.
This is not a bug; see the struct documentation:
Note: By default, the result of packing a given C struct includes pad bytes in order to maintain proper alignment for the C types involved; similarly, alignment is taken into account when unpacking. This behavior is chosen so that the bytes of a packed struct correspond exactly to the layout in memory of the corresponding C struct. To handle platform-independent data formats or omit implicit pad bytes, use standard size and alignment instead of native size and alignment: see Byte Order, Size, and Alignment for details.
To decode the data from that GPS device, you need to use < or > in your format string as described in 7.3.2.1. Byte Order, Size, and Alignment. Since you got it working on the other machine, I presume the data is in little-endian format, and it would work portably if you used
sS = struct.Struct('<cccBHHL')
sL = struct.Struct('<cccBHcbHHbcHLLHH')
whose sizes are always
>>> sS.size
12
>>> sL.size
28
Why did they differ? The original computer you're using is either a Windows machine or a 32-bit machine, and the remote machine is a 64-bit *nix. In native sizes, L means the type unsigned long of a C compiler. On 32-bit Unixes and all Windows versions, this is 32 bits wide.
On 64-bit Unixes the standard ABI on x86 is LP64, which means that long and pointers are 64 bits wide. However, Windows uses LLP64; only long long is 64-bit there. The reason is that lots of code, and even the Windows API itself, has long relied on long being exactly 32 bits.
With the < flag present, L and I are both guaranteed to be 32-bit. There was no problem with the other field specifiers because their sizes remain the same on all x86 platforms and operating systems.
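You can confirm this directly on each machine:

import struct

# Native sizes and alignment: depends on the platform's C ABI.
print struct.calcsize('cccBHHL')    # 12 on Windows, 16 on 64-bit Linux, where
                                    # L is 8 bytes and padded to an 8-byte boundary
# Standard sizes, little-endian: identical everywhere.
print struct.calcsize('<cccBHHL')   # always 12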
I'm working on a program where I store some data in an integer and process it bitwise. For example, I might receive the number 48, which I will process bit-by-bit. In general the endianness of integers depends on the machine representation of integers, but does Python do anything to guarantee that the ints will always be little-endian? Or do I need to check endianness like I would in C and then write separate code for the two cases?
I ask because my code runs on a Sun machine and, although the one it's running on now uses Intel processors, I might have to switch to a machine with Sun processors in the future, which I know is big-endian.
Python's int has the same endianness as the processor it runs on. The struct module lets you convert byte blobs to ints (and vice versa, and some other data types too) in either native, little-endian, or big-endian ways, depending on the format string you choose: start the format with '@' or no prefix character to use native endianness (and native sizes; everything else uses standard sizes), '=' for native byte order with standard sizes, '<' for little-endian, '>' or '!' for big-endian.
This is byte-by-byte, not bit-by-bit; I'm not sure exactly what you mean by bit-by-bit processing in this context, but I assume it can be accommodated similarly.
For fast "bulk" processing in simple cases, consider also the array module: the fromstring and tostring methods can operate on large numbers of bytes speedily, and the byteswap method can get you the "other" endianness (native to non-native or vice versa), again rapidly and for a large number of items (the whole array).
If you need to process your data 'bitwise' then the bitstring module might be of help to you. It can also deal with endianness between platforms.
The struct module is the best standard method of dealing with endianness between platforms. For example, this packs and unpacks the integers 1, 2, 3 into two 'shorts' and one 'long' (2 and 4 bytes on most platforms) using native endianness (the output shown below is from a big-endian platform):
>>> from struct import *
>>> pack('hhl', 1, 2, 3)
'\x00\x01\x00\x02\x00\x00\x00\x03'
>>> unpack('hhl', '\x00\x01\x00\x02\x00\x00\x00\x03')
(1, 2, 3)
To check the endianness of the platform programmatically you can use
>>> import sys
>>> sys.byteorder
which will either return "big" or "little".
The following snippet will tell you whether your system default is little-endian (otherwise it is big-endian):
import struct
little_endian = (struct.unpack('<I', struct.pack('=I', 1))[0] == 1)
Note, however, that this will not affect the behavior of bitwise operators: 1<<1 is equal to 2 regardless of the default endianness of your system.
Check when?
When doing bitwise operations, the ints will have the same endianness as the ints you put in. You don't need to check for that. You only need to care about this when converting to/from sequences of bytes, in both languages, AFAIK.
In Python you use the struct module for this, most commonly struct.pack() and struct.unpack().
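A quick illustration with the 48 from the question (Python 2 reprs shown):

>>> import struct
>>> struct.pack('>I', 48)   # big-endian 32-bit representation
'\x00\x00\x000'
>>> struct.pack('<I', 48)   # little-endian 32-bit representation
'0\x00\x00\x00'
>>> 48 >> 4                 # bitwise operations ignore byte order entirely
3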
I'm attempting to write a Python C extension that reads packed binary data (it is stored as structs of structs) and then parses it out into Python objects. Everything works as expected on a 32 bit machine (the binary files are always written on 32bit architecture), but not on a 64 bit box. Is there a "preferred" way of doing this?
It would be a lot of code to post but as an example:
struct
{
    WORD version;
    BOOL upgrade;
    time_t time1;
    time_t time2;
} apparms;

FILE *fp;
fp = fopen(filePath, "r+b");
fread(&apparms, sizeof(apparms), 1, fp);

return Py_BuildValue("{s:i,s:l,s:l}",
                     "sysVersion", apparms.version,
                     "powerFailTime", apparms.time1,
                     "normKitExpDate", apparms.time2);
Now on a 32-bit system this works great, but on a 64-bit one my time_t sizes are different (32-bit vs 64-bit longs).
Damn, you people are fast.
Patrick, I originally started using the struct package but found it just way too slow for my needs. Plus, I was looking for an excuse to write a Python extension.
I know this is a stupid question but what types do I need to watch out for?
Thanks.
Explicitly specify that your data types (e.g. integers) are 32-bit; otherwise, if the reading side uses a 64-bit type where the file holds two adjacent 32-bit integers, they will be read as one 64-bit integer.
When you are dealing with cross-platform issues, the two main things to watch out for are:
Bitness. If your packed data is written with 32-bit ints, then all of your code must explicitly specify 32-bit ints when reading and writing.
Byte order. If you move your code from Intel chips to PPC or SPARC, your byte order will be wrong. You will have to import your data and then byte-flip it so that it matches up with the current architecture. Otherwise 12 (0x0000000C) will be read as 201326592 (0x0C000000).
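For instance, that 12 vs. 201326592 case can be reproduced with struct:

>>> import struct
>>> raw = struct.pack('<I', 12)   # written little-endian: '\x0c\x00\x00\x00'
>>> struct.unpack('>I', raw)[0]   # misread with the wrong byte order
201326592
>>> struct.unpack('<I', raw)[0]   # read back correctly
12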
Hopefully this helps.
The 'struct' module should be able to do this, although alignment of structs in the middle of the data is always an issue. It's not very hard to get it right, however: find out (once) what boundary the structs-in-structs align to, then pad (manually, with the 'x' specifier) to that boundary. You can double-check your padding by comparing struct.calcsize() with your actual data. It's certainly easier than writing a C extension for it.
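For the struct in the question, here is a minimal sketch of that approach, assuming the file was written on 32-bit Windows (WORD is 2 bytes, BOOL is a 4-byte int, time_t was 32 bits, and the compiler inserted two pad bytes after version; filePath as in the question):

import struct

# '<'  little-endian, standard sizes (no implicit native padding)
# 'H'  WORD version
# '2x' the compiler's pad bytes before the 4-byte BOOL
# 'l'  BOOL upgrade, then time_t time1 and time2 as 32-bit signed ints
APPARMS = struct.Struct('<H2xlll')
assert APPARMS.size == 16   # sizeof(apparms) on the writing machine

f = open(filePath, 'rb')
version, upgrade, time1, time2 = APPARMS.unpack(f.read(APPARMS.size))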
In order to keep using Py_BuildValue() like that, you have two options. You can determine the size of time_t at compile time (in terms of fundamental types, so 'an int' or 'a long' or 'an ssize_t') and then use the right format character for Py_BuildValue: 'i' for an int, 'l' for a long, 'n' for an ssize_t. Or you can use PyInt_FromSsize_t() manually, in which case the compiler does the upcasting for you, and then use the 'O' format character to pass the result to Py_BuildValue.
You need to make sure you're using architecture independent members for your struct. For instance an int may be 32 bits on one architecture and 64 bits on another. As others have suggested, use the int32_t style types instead. If your struct contains unaligned members, you may need to deal with padding added by the compiler too.
Another common problem with cross architecture data is endianness. Intel i386 architecture is little-endian, but if you're reading on a completely different machine (e.g. an Alpha or Sparc), you'll have to worry about this too.
The Python struct module deals with both these situations, using the prefix passed as part of the format string.
@ - Use native size, endianness and alignment. i = sizeof(int), l = sizeof(long)
= - Use native endianness, but standard sizes and alignment (i = 32 bits, l = 32 bits)
< - Little-endian, standard sizes/alignment
> - Big-endian, standard sizes/alignment
In general, if the data passes off your machine, you should nail down the endianness and the size/padding format to something specific, i.e. use '<' or '>' as your format prefix. If you want to handle this in your C extension, you may need to add some code to deal with it.
What's your code for reading the binary data? Make sure you're copying the data into properly-sized types like int32_t instead of just int.
Why aren't you using the struct package?