Python/SWIG: Output an array

I am trying to output an array of values from a C function wrapped using SWIG for Python. The way I am trying to do this is with the following typemap.
Pseudo code:
float *oldmain() {
    float *output = {0, 1};
    return output;
}
Typemap:
%typemap(out) float* {
    int i;
    $result = PyList_New($1_dim0);
    for (i = 0; i < $1_dim0; i++) {
        PyObject *o = PyFloat_FromDouble((double) $1[i]);
        PyList_SetItem($result, i, o);
    }
}
My code compiles fine, but it hangs when I call this function, and I have no obvious way to debug it.
Any suggestions on where I am going wrong?
Thanks.

The easiest way to allow the length to vary is to add another output parameter that tells you the size of the array too:
%module test

%include <stdint.i>

%typemap(in,numinputs=0,noblock=1) size_t *len {
    size_t templen;
    $1 = &templen;
}

%typemap(out) float* oldmain {
    int i;
    $result = PyList_New(templen);
    for (i = 0; i < templen; i++) {
        PyObject *o = PyFloat_FromDouble((double)$1[i]);
        PyList_SetItem($result, i, o);
    }
}

%inline %{
float *oldmain(size_t *len) {
    static float output[] = {0.f, 1.f, 2, 3, 4};
    *len = sizeof output / sizeof *output;
    return output;
}
%}
This is modified from this answer to add a size_t *len parameter, which is used to return the length of the array at run time. The typemap completely hides that output parameter from the Python wrapper, and the %typemap(out) uses it instead of a fixed size to control the length of the returned list.

This should get you going:
/* example.c */
float *oldmain() {
    static float output[] = {0., 1.};
    return output;
}
You are returning a plain pointer here, and SWIG has no idea about the size of what it points to. $1_dim0 is not defined for a bare pointer, so you have to hard-code the size or do some other magic. Something like this:
/* example.i */
%module example
%{
/* Put header files here or function declarations like below */
extern float *oldmain();
%}

%typemap(out) float* oldmain {
    int i;
    // $1, $1_dim0, $1_dim1
    $result = PyList_New(2);
    for (i = 0; i < 2; i++) {
        PyObject *o = PyFloat_FromDouble((double) $1[i]);
        PyList_SetItem($result, i, o);
    }
}
%include "example.c"
Then in python you should get:
>>> import example
>>> example.oldmain()
[0.0, 1.0]
When adding typemaps you may find -debug-tmsearch very handy, e.g.
swig -python -debug-tmsearch example.i
This should clearly indicate that your typemap is used when looking for a suitable 'out' typemap for float *oldmain. Also, if you just want to access a C global variable array, you can do the same trick with a varout typemap instead of out.
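For instance, a minimal sketch of the varout variant (untested; the global array gdata and its length are made up here purely for illustration) could look like:
%typemap(varout) float [ANY] {
    int i;
    $result = PyList_New($1_dim0);
    for (i = 0; i < $1_dim0; i++) {
        PyObject *o = PyFloat_FromDouble((double) $1[i]);
        PyList_SetItem($result, i, o);
    }
}

%inline %{
float gdata[4] = {0.f, 1.f, 2.f, 3.f};
%}
Reading example.cvar.gdata from Python would then give back a list of floats.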

Related

How can I print the content of PyByteArrayObject*?

I am using PyArg_ParseTuple to parse a bytearray sent from Python with the Y format specifier.
Y (bytearray) [PyByteArrayObject *]
Requires that the Python object is a bytearray object, without attempting any conversion.
Raises TypeError if the object is not a bytearray object.
In C code I am doing:
static PyObject* py_write(PyObject* self, PyObject* args)
{
    PyByteArrayObject* obj;
    PyArg_ParseTuple(args, "Y", &obj);
    ...
The Python script is sending the following data:
arr = bytearray()
arr.append(0x2)
arr.append(0x0)
How do I loop over the PyByteArrayObject* in C? To print 2 and 0?
Rather than poking at implementation details, you should go through the documented API, in particular accessing the data buffer through PyByteArray_AS_STRING or PyByteArray_AsString rather than through direct struct member access:
char *data = PyByteArray_AS_STRING(bytearray);
Py_ssize_t len = PyByteArray_GET_SIZE(bytearray);
for (Py_ssize_t i = 0; i < len; i++) {
    do_whatever_with(data[i]);
}
Note that everything in the public API takes the bytearray as a PyObject *, not a PyByteArrayObject *.
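Putting the parse call and the documented accessors together, a minimal sketch of py_write (untested; everything past the parse call is an assumption, since the original body was elided) might look like:
static PyObject* py_write(PyObject* self, PyObject* args)
{
    PyObject* obj;  /* the public API works with PyObject*, not PyByteArrayObject* */
    if (!PyArg_ParseTuple(args, "Y", &obj))
        return NULL;

    char *data = PyByteArray_AS_STRING(obj);
    Py_ssize_t len = PyByteArray_GET_SIZE(obj);
    for (Py_ssize_t i = 0; i < len; i++)
        printf("%d\n", (unsigned char)data[i]);  /* prints 2 and 0 for the example bytearray */

    Py_RETURN_NONE;
}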
With the help of the comment section, I found the definition of PyByteArrayObject:
/* Object layout */
typedef struct {
    PyObject_VAR_HEAD
    Py_ssize_t ob_alloc;   /* How many bytes allocated in ob_bytes */
    char *ob_bytes;        /* Physical backing buffer */
    char *ob_start;        /* Logical start inside ob_bytes */
    Py_ssize_t ob_exports; /* How many buffer exports */
} PyByteArrayObject;
And the actual code to loop:
PyByteArrayObject* obj;
PyArg_ParseTuple(args, "Y", &obj);

Py_ssize_t i = 0;
for (i = 0; i < PyByteArray_GET_SIZE(obj); i++)
    printf("%u\n", obj->ob_bytes[i]);
And I got the expected output.
Even better, simply use the Direct API
char* s = PyByteArray_AsString(obj);
int i = 0;
for (i = 0; i < PyByteArray_GET_SIZE(obj); i++)
    printf("%u\n", s[i]);

Python C Wrapper Memory Leak

I am moderately experienced in python and C but new to writing python modules as wrappers on C functions. For a project I needed one function named "score" to run much faster than I was able to get in python so I coded it in C and literally just want to be able to call it from python. It takes in a python list of integers and I want the C function to get an array of integers, the length of that array, and then return an integer back to python. Here is my current (working) solution.
static PyObject *module_score(PyObject *self, PyObject *args) {
    int i, size, value, *gene;
    PyObject *seq, *data;

    /* Parse the input tuple */
    if (!PyArg_ParseTuple(args, "O", &data))
        return NULL;

    seq = PySequence_Fast(data, "expected a sequence");
    size = PySequence_Size(seq);
    gene = (int*) PyMem_Malloc(size * sizeof(int));
    for (i = 0; i < size; i++)
        gene[i] = PyInt_AsLong(PySequence_Fast_GET_ITEM(seq, i));

    /* Call the external C function */
    value = score(gene, size);
    PyMem_Free(gene);

    /* Build the output tuple */
    PyObject *ret = Py_BuildValue("i", value);
    return ret;
}
This works but seems to leak memory and at a rate I can't ignore. I made sure that the leak is happening in the shown function by temporarily making the score function just return 0 and still saw the leaking behavior. I had thought that the call to PyMem_Free should take care of the PyMem_Malloc'ed storage but my current guess is that something in this function is getting allocated and retained on each call since the leaking behavior is proportional to the number of calls to this function. Am I not doing the sequence to array conversion correctly or am I possibly returning the ending value inefficiently? Any help is appreciated.
seq is a new reference to a Python object, so you will need to release it with Py_DECREF. You should check whether seq is NULL, too.
Something like (untested):
static PyObject *module_score(PyObject *self, PyObject *args) {
    int i, size, value, *gene;
    long temp;
    PyObject *seq, *data;

    /* Parse the input tuple */
    if (!PyArg_ParseTuple(args, "O", &data))
        return NULL;
    if (!(seq = PySequence_Fast(data, "expected a sequence")))
        return NULL;

    size = PySequence_Size(seq);
    gene = (int*) PyMem_Malloc(size * sizeof(int));
    for (i = 0; i < size; i++) {
        temp = PyInt_AsLong(PySequence_Fast_GET_ITEM(seq, i));
        if (temp == -1 && PyErr_Occurred()) {
            PyMem_Free(gene);
            Py_DECREF(seq);
            PyErr_SetString(PyExc_ValueError, "an integer value is required");
            return NULL;
        }
        /* Do whatever you need to verify temp will fit in an int */
        gene[i] = (int)temp;
    }

    /* Call the external C function */
    value = score(gene, size);
    PyMem_Free(gene);
    Py_DECREF(seq);

    /* Build the output tuple */
    PyObject *ret = Py_BuildValue("i", value);
    return ret;
}

How to convert the numpy.ndarray to a cv::Mat using Python/C API?

I use Python as an interface to operate on images, but when I need to write some custom functions that operate on the matrix, I find that iterating over a numpy.ndarray is too slow. I want to convert the array to a cv::Mat so that I can handle it easily, since I used to write C++ image-processing code based on the cv::Mat structure.
my test.cpp:
#include <Python.h>
#include <iostream>
using namespace std;

static PyObject *func(PyObject *self, PyObject *args) {
    printf("What should I write here?\n");
    // How to parse the args to get an np.ndarray?
    // cv::Mat m = whateverFunction(theParsedArray);
    return Py_BuildValue("s", "Any help?");
}

static PyMethodDef My_methods[] = {
    { "func", (PyCFunction) func, METH_VARARGS, NULL },
    { NULL, NULL, 0, NULL }
};

PyMODINIT_FUNC initydw_cvpy(void) {
    PyObject *m = Py_InitModule("ydw_cvpy", My_methods);
    if (m == NULL)
        return;
}
main.py:
import ydw_cvpy

if __name__ == '__main__':
    print ydw_cvpy.func()
Result:
What should I write here?
Any help?
It's been a while since I've played with raw C Python bindings (I usually use boost::python), but the key is PyArray_FromAny. Some untested sample code would look like:
PyObject* source = /* somehow get this from the argument list */;
PyArrayObject* contig = (PyArrayObject*)PyArray_FromAny(source,
    PyArray_DescrFromType(NPY_UINT8),
    2, 2, NPY_ARRAY_CARRAY, NULL);
if (contig == nullptr) {
    // Throw an exception
    return;
}

cv::Mat mat(PyArray_DIM(contig, 0), PyArray_DIM(contig, 1), CV_8UC1,
            PyArray_DATA(contig));
/* Use mat here */
Py_DECREF(contig); // You can't use mat after this line
Note that this assumes you have a CV_8UC1 array. If you have a CV_8UC3, you need to ask for a 3-dimensional array:
PyArrayObject* contig = (PyArrayObject*)PyArray_FromAny(source,
    PyArray_DescrFromType(NPY_UINT8),
    3, 3, NPY_ARRAY_CARRAY, NULL);
assert(contig && PyArray_DIM(contig, 2) == 3);
If you need to handle any random type of array, you probably also want to look at this answer.
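For completeness, a rough sketch of how the question's func could obtain source from its argument list (untested; the single "O" argument and the returned string are assumptions, and it requires numpy/arrayobject.h to be included and import_array() to have been called in the module init):
static PyObject *func(PyObject *self, PyObject *args) {
    PyObject *source = NULL;
    if (!PyArg_ParseTuple(args, "O", &source))  /* expect one ndarray argument */
        return NULL;

    PyArrayObject *contig = (PyArrayObject*)PyArray_FromAny(source,
        PyArray_DescrFromType(NPY_UINT8),
        2, 2, NPY_ARRAY_CARRAY, NULL);
    if (contig == NULL)
        return NULL;  /* PyArray_FromAny has already set a Python exception */

    cv::Mat mat(PyArray_DIM(contig, 0), PyArray_DIM(contig, 1), CV_8UC1,
                PyArray_DATA(contig));
    /* ... use mat here ... */

    Py_DECREF(contig);  /* mat must not be used after this point */
    return Py_BuildValue("s", "done");
}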

SWIG -- Using typemap inside of extend

I have a C++ class written and I am using SWIG to make a Python version of my class. I would like to overload the constructor so that it can take in Python lists. For example:
>>> import example
>>> a = example.Array([1,2,3,4])
I was attempting to use the typemap feature in SWIG, but the scope of the typemap does not include code in %extend.
Here is a similar example to what I have...
%typemap(in) double[]
{
    if (!PyList_Check($input))
        return NULL;
    int size = PyList_Size($input);
    int i = 0;
    $1 = (double *) malloc((size+1)*sizeof(double));
    for (i = 0; i < size; i++)
    {
        PyObject *o = PyList_GetItem($input, i);
        if (PyNumber_Check(o))
            $1[i] = PyFloat_AsDouble(o);
        else
        {
            PyErr_SetString(PyExc_TypeError, "list must contain numbers");
            free($1);
            return NULL;
        }
    }
    $1[i] = 0;
}
%include "Array.h"
%extend Array
{
Array(double lst[])
{
Array *a = new Array();
...
/* do stuff with lst[] */
...
return a;
}
}
I know the typemap is working correctly (I wrote a small test function that just prints out elements in the double[]).
I attempted putting the typemap inside the extend clause, but that did not solve the problem.
Maybe there is another way to use Python lists inside %extend, but I could not find any examples.
Thanks for the help in advance.
You're really close: instead of a double lst[], extend with std::list<double>:
%include "std_list.i" // or std_vector.i
%include "Array.h"
%extend Array
{
Array(const std::list<double>& numbers) {
Array* arr = new Array;
...put numbers list items in "arr", then
return a; // interpreter will take ownership
}
}
SWIG should automatically convert the Python list to the std::list.
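As a rough sketch of what the constructor body might look like (assuming, purely for illustration, that Array exposes an append(double) method; depending on your setup you may also want a %template instantiation of the list type):
%include "std_list.i"
%template(DoubleList) std::list<double>;

%include "Array.h"

%extend Array
{
    Array(const std::list<double>& numbers) {
        Array* arr = new Array;
        // "append" is a hypothetical Array method, used here only for illustration
        for (std::list<double>::const_iterator it = numbers.begin();
             it != numbers.end(); ++it) {
            arr->append(*it);
        }
        return arr;  // the interpreter takes ownership of the new object
    }
}
With that in place, example.Array([1, 2, 3, 4]) on the Python side should route through this constructor.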

Is there any way to use pythonappend with SWIG's new builtin feature?

I have a little project that works beautifully with SWIG. In particular, some of my functions return std::vectors, which get translated to tuples in Python. Now, I do a lot of numerics, so I just have SWIG convert these to numpy arrays after they're returned from the c++ code. To do this, I use something like the following in SWIG.
%feature("pythonappend") My::Cool::Namespace::Data() const %{ if isinstance(val, tuple) : val = numpy.array(val) %}
(Actually, there are several functions named Data, some of which return floats, which is why I check that val is actually a tuple.) This works just beautifully.
But, I'd also like to use the -builtin flag that's now available. Calls to these Data functions are rare and mostly interactive, so their slowness is not a problem, but there are other slow loops that speed up significantly with the builtin option.
The problem is that when I use that flag, the pythonappend feature is silently ignored. Now, Data just returns a tuple again. Is there any way I could still return numpy arrays? I tried using typemaps, but it turned into a giant mess.
Edit:
Borealid has answered the question very nicely. Just for completeness, I include a couple related but subtly different typemaps that I need because I return by const reference and I use vectors of vectors (don't start!). These are different enough that I wouldn't want anyone else stumbling around trying to figure out the minor differences.
%typemap(out) std::vector<int>& {
    npy_intp result_size = $1->size();
    npy_intp dims[1] = { result_size };
    PyArrayObject* npy_arr = (PyArrayObject*)PyArray_SimpleNew(1, dims, NPY_INT);
    int* dat = (int*) PyArray_DATA(npy_arr);
    for (size_t i = 0; i < result_size; ++i) { dat[i] = (*$1)[i]; }
    $result = PyArray_Return(npy_arr);
}

%typemap(out) std::vector<std::vector<int> >& {
    npy_intp result_size = $1->size();
    npy_intp result_size2 = (result_size > 0 ? (*$1)[0].size() : 0);
    npy_intp dims[2] = { result_size, result_size2 };
    PyArrayObject* npy_arr = (PyArrayObject*)PyArray_SimpleNew(2, dims, NPY_INT);
    int* dat = (int*) PyArray_DATA(npy_arr);
    for (size_t i = 0; i < result_size; ++i) {
        for (size_t j = 0; j < result_size2; ++j) {
            dat[i*result_size2 + j] = (*$1)[i][j];
        }
    }
    $result = PyArray_Return(npy_arr);
}
Edit 2:
Though not quite what I was looking for, similar problems may also be solved using #MONK's approach (explained here).
I agree with you that using typemap gets a little messy, but it is the right way to accomplish this task. You are also right that the SWIG documentation does not directly say that %pythonappend is incompatible with -builtin, but it is strongly implied: %pythonappend adds to the Python proxy class, and the Python proxy class does not exist at all in conjunction with the -builtin flag.
Before, what you were doing was having SWIG convert the C++ std::vector objects into Python tuples, and then passing those tuples back down to numpy - where they were converted again.
What you really want to do is convert them once, at the C level.
Here's some code which will turn all std::vector<int> objects into NumPy integer arrays:
%{
#include "numpy/arrayobject.h"
%}

%init %{
import_array();
%}

%typemap(out) std::vector<int> {
    npy_intp result_size = $1.size();
    npy_intp dims[1] = { result_size };
    PyArrayObject* npy_arr = (PyArrayObject*)PyArray_SimpleNew(1, dims, NPY_INT);
    int* dat = (int*) PyArray_DATA(npy_arr);
    for (size_t i = 0; i < result_size; ++i) {
        dat[i] = $1[i];
    }
    $result = PyArray_Return(npy_arr);
}
This uses the C-level numpy functions to construct and return an array. In order, it:
1. Ensures NumPy's arrayobject.h file is included in the C++ output file
2. Causes import_array to be called when the Python module is loaded (otherwise, all NumPy methods will segfault)
3. Maps any returns of std::vector<int> into NumPy arrays with a typemap
This code should be placed before you %import the headers which contain the functions returning std::vector<int>. Other than that restriction, it's entirely self-contained, so it shouldn't add too much subjective "mess" to your codebase.
If you need other vector types, you can just change the NPY_INT and all the int* and int bits, and otherwise duplicate the typemap above.
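For example, an untested sketch of the same typemap adapted for std::vector<double>, following that recipe, would be:
%typemap(out) std::vector<double> {
    npy_intp result_size = $1.size();
    npy_intp dims[1] = { result_size };
    PyArrayObject* npy_arr = (PyArrayObject*)PyArray_SimpleNew(1, dims, NPY_DOUBLE);
    double* dat = (double*) PyArray_DATA(npy_arr);
    for (size_t i = 0; i < result_size; ++i) {
        dat[i] = $1[i];
    }
    $result = PyArray_Return(npy_arr);
}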
