I have created a buffer object in python like so:
import io
f = io.open('some_file', 'rb')
byte_stream = buffer(f.read(4096))
I'm now passing byte_stream as a parameter to a C function, through SWIG. I have a typemap for converting the data which looks like this:
%typemap(in) unsigned char * byte_stream {
    PyObject *buf = $input;
    // some code to read the contents of buf
}
I have tried a few different things but can't get to the actual content/value of my byte_stream. How do I convert or access the content of my byte_stream using the C API? There are many different methods for converting C data to a buffer, but none that I can find for going the other way around.
I have tried looking at this object in gdb, but neither it nor the values it points to contain my data.
(I'm using buffers because I want to avoid the overhead of converting the data to a string when reading it from the file)
I'm using Python 2.6 on Linux.
I'm using buffers because I want to avoid the overhead of converting the data to a string when reading it from the file
You are not avoiding anything. The string is already built by the read() method. Calling buffer() just builds an additional buffer object pointing to that string.
As for getting at the memory pointed to by the buffer object, try PyObject_AsReadBuffer(). See also http://docs.python.org/c-api/objbuffer.html.
As soon as you use the read method on your file object, the data will be converted to a str object; calling the buffer method does not convert it into a stream of any kind. If you want to avoid the overhead of creating the string object, you could simply pass the file object to your C code and then use it via its C API.
I'm using Python ctypes to call a function from a shared library.
The function is called with a char* buffer in which it writes its result. The return value of the function is the number of bytes written to the buffer.
Calling the function works fine; however, I'm struggling to access the individual bytes of the buffer.
I create the buffer and call the function like this:
buf = (c_void_p * RECBUFFERSIZE)()
n = functionInLibrary(buf)
Now how to read the individual bytes stored in buf?
I already tried
cast(buf, c_char_p).value
which yields a bytes object with the contents of buf, BUT it is terminated by the first null byte in buf. And that is exactly what I do not want: I need to read the first n bytes from buf.
Nevermind. I found it out myself:
cast(buf, POINTER(c_char))[0:n]
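A minimal, self-contained sketch of this (RECBUFFERSIZE is made up here, and memmove stands in for functionInLibrary writing into the buffer):
from ctypes import POINTER, c_char, c_void_p, cast, memmove

RECBUFFERSIZE = 16                      # made-up size for this sketch
buf = (c_void_p * RECBUFFERSIZE)()      # the buffer handed to the library

payload = b"ab\x00cd"                   # pretend result, contains a null byte
memmove(buf, payload, len(payload))     # stands in for functionInLibrary(buf)
n = len(payload)                        # the library would return this count

data = cast(buf, POINTER(c_char))[0:n]  # slicing does not stop at the null byte
print(data)                             # b'ab\x00cd'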
I have this Python tool written by someone else to flash a certain microcontroller, but he has written this tool for Python 2.6 and I am using Python 3.3.
So I have ported most of it, but this line is causing problems:
data = map(lambda c: ord(c), file(args[0], 'rb').read())
The file function does not exist in Python 3 and has to be replaced with open. But then a function which takes data as an argument raises an exception:
TypeError: object of type 'map' has no len()
But from what I have seen in the documentation so far, map is supposed to join iterable types into one big iterable; am I missing something?
What do I have to do to port this to Python 3?
In Python 3, map returns an iterator. If your function expects a list, the iterator has to be explicitly converted, like this:
data = list(map(...))
And we can do it more simply, like this:
with open(args[0], "rb") as input_file:
    data = list(input_file.read())
'rb' means the file is read in binary mode, so read() actually returns a bytes object, and we just have to convert it to a list.
Quoting from open's documentation:
Python distinguishes between binary and text I/O. Files opened in binary mode (including 'b' in the mode argument) return contents as bytes objects without any decoding.
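As a quick check of why this works (the bytes literal below is just an illustration): iterating a bytes object in Python 3 yields integers, so list() over the file contents produces the same values that map(ord, ...) did in Python 2.
raw = b"ABC"        # stands in for the bytes returned by read()
data = list(raw)
print(data)         # [65, 66, 67], the same values ord() would have given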
I have a C++ application which writes blocks of unsigned char data, so I would be writing unsigned char data[8].
Now I am using Python (read: the ctypes functionality in Python) to read and buffer it in my tool for further processing.
Problem
When I read the data from the file and break it down into chunks of 8, all the resultant data is in string format. I have the following structure:
class MyData(Union):
    _fields_ = [("data", c_ubyte * 8), ("overlap", SelfStructure)]
Now, I am trying to pass the data as follows:
dataObj = MyData(str[0:8])
It throws an error: expected c_ubyte_Array_8 instance, got str. I think I need to convert the string to an array of 8 c_ubyte. I tried with bytearray but did not succeed. Please let me know how to do this.
Try this:
(ctypes.c_ubyte * 8)(*[ctypes.c_ubyte(ord(c)) for c in str[:8]])
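A minimal runnable version of that expression, with a made-up eight-byte chunk standing in for str[0:8]:
import ctypes

chunk = '\x01\x02\x03\x04\x05\x06\x07\x08'  # stands in for str[0:8] read from the file
arr = (ctypes.c_ubyte * 8)(*[ctypes.c_ubyte(ord(c)) for c in chunk])
print(list(arr))                            # [1, 2, 3, 4, 5, 6, 7, 8]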
I need to generate a tar file, but as a string in memory rather than as an actual file. What I have as input is a single filename and a string containing the associated contents. I'm looking for a Python lib I can use to avoid having to roll my own.
A little more work turned up these functions, but using a memory stream object seems a little... inelegant. And making it accept input from strings looks even more... inelegant. OTOH it works, I assume, as most of this is new to me. Anyone see any bugs in it?
Use tarfile in conjunction with cStringIO:
import tarfile
import cStringIO

c = cStringIO.StringIO()
t = tarfile.open(mode='w', fileobj=c)
# here: do your work on t, then...:
s = c.getvalue()  # extract the bytestring you need
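A minimal sketch of building the whole archive in memory this way, given a single filename and its contents as a string (the name and contents below are made up; TarInfo plus addfile() is how tarfile takes data from a file-like object):
import tarfile
import cStringIO

name = 'hello.txt'                    # made-up filename
contents = 'hello, world\n'           # made-up contents

c = cStringIO.StringIO()
t = tarfile.open(mode='w', fileobj=c)

info = tarfile.TarInfo(name=name)     # metadata for the single member
info.size = len(contents)
t.addfile(info, cStringIO.StringIO(contents))
t.close()

s = c.getvalue()                      # the tar archive as an in-memory string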
I need to write some methods for loading/saving some classes to and from a binary file. However I also want to be able to accept the binary data from other places, such as a binary string.
In C++ I could do this by simply making my class methods use std::istream and std::ostream, which could be a file, a stringstream, the console, whatever.
Does python have a similar input/output class which can be made to represent almost any form of i/o, or at least files and memory?
The Python way to do this is to accept an object that implements read() or write(). If you have a string, you can make this happen with StringIO:
from cStringIO import StringIO
s = "My very long string I want to read like a file"
file_like_string = StringIO(s)
data = file_like_string.read(10)
Remember that Python uses duck-typing: you don't have to involve a common base class. So long as your object implements read(), it can be read like a file.
The pickle and cPickle modules may also be helpful to you.
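To illustrate the duck-typing point, a loader written against read() neither knows nor cares whether it is given a real file or a StringIO (load_header is a made-up name for this sketch):
from cStringIO import StringIO

def load_header(stream):
    # works with anything that implements read(): a real file, a StringIO, a socket, ...
    return stream.read(4)

file_like = StringIO("ABCDEFGH")
print(load_header(file_like))    # ABCD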