We can extract a PyObject pointing to a Python method using
PyObject *method = PyDict_GetItemString(methodsDictionary,methodName.c_str());
I want to know how many arguments the method takes. So if the function is
def f(x, y):
    return x + y
how do I find out it needs 2 arguments?
I followed the link provided by Jon. Assuming you don't want to (or can't) use Boost in your application, the following should get you the number (easily adapted from How to find the number of parameters to a Python function from C?):
PyObject *key, *value;
Py_ssize_t pos = 0;  // PyDict_Next expects a Py_ssize_t, not an int

while (PyDict_Next(methodsDictionary, &pos, &key, &value)) {
    if (PyCallable_Check(value)) {
        PyObject *fc = PyObject_GetAttrString(value, "func_code");
        if (fc) {
            PyObject *ac = PyObject_GetAttrString(fc, "co_argcount");
            if (ac) {
                const int count = PyInt_AsLong(ac);
                // we now have the argument count, do something with this function
                Py_DECREF(ac);
            }
            Py_DECREF(fc);
        }
    }
}
If you're in Python 2.x the above definitely works. In Python 3.0+, you seem to need to use "__code__" instead of "func_code" in the snippet above.
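For reference, a minimal sketch of the Python 3 form (untested; it assumes value is the callable pulled out of the dictionary in the loop above, and uses PyLong_AsLong since the PyInt_* functions are gone in Python 3):
PyObject *fc = PyObject_GetAttrString(value, "__code__");
if (fc) {
    PyObject *ac = PyObject_GetAttrString(fc, "co_argcount");
    if (ac) {
        const long count = PyLong_AsLong(ac);   // argument count of the wrapped function
        // ... use count ...
        Py_DECREF(ac);
    }
    Py_DECREF(fc);
}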
I appreciate that you may not be able to use Boost (my company won't allow it for the projects I've been working on lately), but if you can, I would try to knuckle down and use it, as I've found the Python C API can get a bit fiddly when you try to do complicated stuff like this.
Our code base currently supports a single SWIG interface file (for Python) that has grown over the years to include roughly 300 C++ classes (technically interfaces), all of which inherit from a single base class, and all of which exist in a single global namespace. This allows us, with a minimal amount of SWIG code, to implement dynamic casting among the C++ classes that the SWIG classes represent while at the same time simplifying by keeping the C++ inheritance structure out of SWIG.
As long as we compiled our SWIG interface in a single module, this mechanism worked well -- but as the SWIG interface file has grown it has become difficult to manage, and compile/link times have grown. To address this I split the interface file up into separate modules by the names of the derived classes -- one module for class names beginning with "A" to "G", one for names beginning with "H" to "N", etc., resulting in four derived-class modules and a base class module. I was able to get these modules to compile and link, and exhibit expected behavior for the dynamic casting, following the method outlined here: (http://www.swig.org/Doc3.0/SWIGDocumentation.html#Modules_nn1)
However, breaking the single module into four parts (five parts counting the base class) causes problems with the namespace when containers come into play. Consider the following function, from a class in my v-to-z interface file:
void RemoveIsolated(const std::vector<global::IFoo*> spRemoveIsolated) {
…
}
That takes a vector of one of the derived classes that exist in the global namespace. This worked without issue when I had only one module but now class IFoo lives in the a-to-g module -- so if I cast something to an IFoo*, it's an a-to-g.IFoo*. However, the function demands a global::IFoo*.
This seems to be a situation that could be addressed by the SWIG template mechanism. I've seen discussions in which people have had success by means of at one point (possibly in the interface file for the base class??) declaring
%template(FooVector) std::vector<global::Foo*>;
And at another point (possibly in the interface file for the derived class??):
%template () std::vector<global::Foo*>;
But my attempts to implement this have not been successful. The discussions are somewhat ambiguous, so it's possible that I'm doing something wrong. Can anyone provide clarification, ideally with an example?
The piece of information it looks like you're missing is the %import directive, which lets modules cooperate with the definition of types, without repeating them and still ending up with a single wrapped type. The documentation suggests using this to reduce module size even.
Probably all you need to do is have your v-to-z module %import the a-to-g module to get this working for you. (Personally I'd have tried to divide them up by functionality rather than alphabetically, though, so the dependency between them wouldn't be an issue.)
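Something along these lines (a rough sketch only; the file and header names are placeholders for your real interface files):
// v_to_z.i
%module v_to_z

%import "a_to_g.i"   // shares the type information for IFoo etc. without wrapping it again

%{
#include "IFoo.h"    // whatever headers the generated wrapper needs
%}

// ... declarations for the V-Z classes ...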
Thanks for your suggestion Flexo. Importing the a-to-g module did not work; the C++ compiler complained that none of the classes (interfaces) declared there were part of the global namespace when it tried to compile the v-to-z wrapper file. However, going through the exercise led me to question why we had been having success previously when we were compiling a single module. It turned out that we were using a typemapping macro in the interface file for the single module that would take a
const std::vector<global::IFoo*>
and map it thusly:
TYPEMAPMACRO(global::IFoo, SWIGTYPE_p_global__IFoo)
for vector containers. The macro itself, for anyone who's interested, is:
%define TYPEMAPMACRO(type, name)
%typemap(in) const std::vector<type*> {
    /* Check if it is a list */
    std::vector<type*> vec;
    void *pobj = 0;
    if (PyTuple_Check($input))
    {
        size_t size = PyTuple_Size($input);
        for (size_t j = 0; j < size; j++) {
            PyObject *o = PyTuple_GetItem($input, j);
            void *argp1 = 0;
            int res1 = SWIG_ConvertPtr(o, &argp1, name, 0 | 0);
            if (!SWIG_IsOK(res1)) {
                SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "Typemap of std::vector" "', argument " "1" " of type '" "" "'");
            }
            vec.push_back(reinterpret_cast< type * >(argp1));
        }
        $1 = vec;
    }
    else if (SWIG_IsOK(SWIG_ConvertPtr($input, &pobj, name, 0 | 0))) {
        PyObject *o = $input;
        void *argp1 = 0;
        int res1 = SWIG_ConvertPtr(o, &argp1, name, 0 | 0);
        if (!SWIG_IsOK(res1)) {
            SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "Typemap of std::vector" "', argument " "1" " of type '" "" "'");
        }
        vec.push_back(reinterpret_cast< type * >(argp1));
        $1 = vec;
    }
    else {
        PyErr_SetString(PyExc_TypeError, "not a list");
        return NULL;
    }
}
%typecheck(SWIG_TYPECHECK_POINTER) std::vector<type*> {
    void *pobj = 0;
    if (!PyTuple_Check($input) && !SWIG_IsOK(SWIG_ConvertPtr($input, &pobj, name, 0 | 0))) {
        $1 = 0;
        PyErr_Clear();
    } else {
        $1 = 1;
    }
}
%enddef
My sense is that this is standard boilerplate stuff. I don't claim to understand it well, as it's someone else's code, but what I do understand now that I did not before is that I needed to place the typemap macro before the function that uses the typemap (e.g. the RemoveIsolated example above). That ordering had been broken when I divided my big module up into smaller ones.
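In sketch form (names as in the example above), each split interface file now has to repeat the ordering the single big file had:
// v_to_z.i (sketch) -- the typemap macro must be expanded before any
// declaration that uses the mapped vector type
TYPEMAPMACRO(global::IFoo, SWIGTYPE_p_global__IFoo)

void RemoveIsolated(const std::vector<global::IFoo*> spRemoveIsolated);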
In a lot of code, I see classes with functions that just use the pass statement, with a comment or docstring above them.
Like this native built-in function from Python:
def copyright(*args, **kwargs): # real signature unknown
    """
    interactive prompt objects for printing the license text, a list of
    contributors and the copyright notice.
    """
    pass
I know pass does nothing, and it's a kind of empty, do-nothing statement, but why do programmers write such functions?
There are also some functions with return "", like:
def bin(number): # real signature unknown; restored from __doc__
    """
    bin(number) -> string

    Return the binary representation of an integer.

    >>> bin(2796202)
    '0b1010101010101010101010'
    """
    return ""
Why do programmers write such things?
Your IDE is lying to you. Those functions don't actually look like that; your IDE has made up a bunch of fake source code with almost no resemblance to the real thing. That's why it says things like # real signature unknown. I don't know why they thought this was a good idea.
The real code looks completely different. For example, here's the real bin (Python 2.7 version):
static PyObject *
builtin_bin(PyObject *self, PyObject *v)
{
    return PyNumber_ToBase(v, 2);
}

PyDoc_STRVAR(bin_doc,
"bin(number) -> string\n\
\n\
Return the binary representation of an integer or long integer.");
It's written in C, and it's implemented as a simple wrapper around the C function PyNumber_ToBase:
PyObject *
PyNumber_ToBase(PyObject *n, int base)
{
    PyObject *res = NULL;
    PyObject *index = PyNumber_Index(n);

    if (!index)
        return NULL;
    if (PyLong_Check(index))
        res = _PyLong_Format(index, base, 0, 1);
    else if (PyInt_Check(index))
        res = _PyInt_Format((PyIntObject*)index, base, 1);
    else
        /* It should not be possible to get here, as
           PyNumber_Index already has a check for the same
           condition */
        PyErr_SetString(PyExc_ValueError, "PyNumber_ToBase: index not "
                        "int or long");
    Py_DECREF(index);
    return res;
}
It's a TBD (to be done) thing.
You know you will need this function, you know what to give it and what it returns, but you aren't going to write it right now, so you make a "prototype".
Sometimes packages will ship with these functions because they expect you to inherit from them and override them.
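A tiny sketch of that idea (class and method names are made up):
class Motor:
    def speed(self):
        # TBD: the package expects subclasses to override this
        pass

class StepperMotor(Motor):
    def speed(self):
        return 42.0   # the real implementation lives in the subclass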
I have a variable number of numpy arrays, which I'd like to pass to a C function. I managed to pass each individual array (using <ndarray>.ctypes.data_as(c_void_p)), but the number of array may vary a lot.
I thought I could pass all of these "pointers" in a list and use the PyList_GetItem() function in the C code. It works like a charm, except that the values of all elements are not the pointers I usually get when they are passed as function arguments.
For example, if I have:
from numpy import array
from ctypes import py_object, c_long
a1 = array([1., 2., 3.8])
a2 = array([222.3, 33.5])
values = [a1, a2]
my_cfunc(py_object(values), c_long(len(values)))
And my C code looks like:
void my_cfunc(PyObject *values)
{
    int i, n;
    n = PyObject_Length(values);
    for (i = 0; i < n; i++)
    {
        unsigned long long *pointer;
        pointer = (unsigned long long *)PyList_GetItem(values, i);
        printf("value 0 : %f\n", *pointer);
    }
}
The printed values are all 0.0000.
I have tried a lot of different solutions, using ctypes.byref(), ctypes.pointer(), etc., but I can't seem to retrieve the real pointer values. I even have the impression the values converted by c_void_p() are truncated to 32 bits...
While there is plenty of documentation about passing numpy pointers to C, I haven't seen anything about ctypes objects inside a Python list (I admit this may seem strange...).
Any clue?
After a few hours spent reading many pages of documentation and digging in the numpy include files, I've finally managed to understand exactly how it works. Since I've spent a great amount of time searching for these exact explanations, I'm providing the following text to save anyone else from wasting their time.
I repeat the question:
How to transfer a list of numpy arrays, from Python to C
(I also assume you know how to compile, link and import your C module in Python)
Passing a Numpy array from Python to C is rather simple, as long as it's going to be passed as an argument in a C function. You just need to do something like this in Python
from numpy import array
from ctypes import c_void_p, c_long
values = array([1.0, 2.2, 3.3, 4.4, 5.5])
my_c_func(values.ctypes.data_as(c_void_p), c_long(values.size))
And the C code could look like:
void my_c_func(double *values, long size)
{
    int i;
    for (i = 0; i < size; i++)
        printf("%d : %.10f\n", i, values[i]);
}
That's simple... but what if I have a variable number of arrays? Of course, I could use the techniques which parse the function's argument list (there are many examples on Stack Overflow), but I'd like to do something different.
I'd like to store all my arrays in a list and pass this list to the C function, and let the C code handle all the arrays.
In fact, it's extremely simple, easy and coherent... once you understand how it's done! There is simply one very simple fact to remember:
Any member of a list/tuple/dictionary is a Python object... on the C side of the code !
You can't expect to directly pass a pointer, as I initially, and wrongly, thought. Once said, it sounds very simple :-) So, let's write some Python code:
from numpy import array
from ctypes import py_object

my_list = (array([1.0, 2.2, 3.3, 4.4, 5.5]),
           array([2.9, 3.8, 4.7, 5.6]))

my_c_func(py_object(my_list))
Well, you don't need to change anything in the list, but you need to specify that you are passing the list as a PyObject argument.
And here is how all this is accessed in C.
void my_c_func(PyObject *list)
{
    int i, n_arrays;

    // Get the number of elements in the list
    n_arrays = PyObject_Length(list);

    for (i = 0; i < n_arrays; i++)
    {
        PyArrayObject *elem;
        double *pd;

        // PyList_GetItem() returns a PyObject * (borrowed reference)
        elem = (PyArrayObject *)PyList_GetItem(list, i);
        pd = PyArray_DATA(elem);
        printf("Value 0 : %.10f\n", *pd);
    }
}
Explanation:
The list is received as a pointer to a PyObject.
We get the number of arrays in the list by using the PyObject_Length() function.
PyList_GetItem() always returns a PyObject * (a borrowed reference to the element).
We retrieve the pointer to the array of data by using the PyArray_DATA() macro.
Normally, PyList_GetItem() returns a PyObject *, but if you look in Python.h and ndarraytypes.h, you'll find that PyObject and PyArrayObject are both defined as (I've expanded the macros!):
typedef struct _object {
    Py_ssize_t ob_refcnt;
    struct _typeobject *ob_type;
} PyObject;
And the PyArrayObject... is exactly the same. So, at this level, the two are perfectly interchangeable. The content of ob_type is accessible for both objects and contains everything needed to manipulate any generic Python object. I admit that I used one of its members during my investigation: the struct member tp_name is the string containing the name of the object... in clear text; and believe me, it helped! This is how I discovered what each list element was containing.
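As an illustration only (a small sketch; Py_TYPE is simply the standard accessor for ob_type), the kind of probing I did looks roughly like this:
/* Print the type name of every element in the list before casting anything. */
Py_ssize_t i, n = PyObject_Length(list);
for (i = 0; i < n; i++) {
    PyObject *item = PyList_GetItem(list, i);       /* borrowed reference */
    printf("element %ld is a '%s'\n", (long)i, Py_TYPE(item)->tp_name);
}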
Since these structures don't contain anything else, how is it that we can access the data pointer of the ndarray object? Simply by using object macros... which use an extended structure, allowing the compiler to know how to access the additional object members behind the ob_type pointer. The PyArray_DATA() macro is defined as:
#define PyArray_DATA(obj) ((void *)((PyArrayObject_fields *)(obj))->data)
There, it casts the PyArrayObject * to a PyArrayObject_fields *, and this latter structure is simply (simplified, with macros expanded!):
typedef struct tagPyArrayObject_fields {
    Py_ssize_t ob_refcnt;
    struct _typeobject *ob_type;
    char *data;
    int nd;
    npy_intp *dimensions;
    npy_intp *strides;
    PyObject *base;
    PyArray_Descr *descr;
    int flags;
    PyObject *weakreflist;
} PyArrayObject_fields;
As you can see, the first two elements of the structure are the same as in a PyObject and a PyArrayObject, but the additional members can be addressed using this definition. It is tempting to access those members directly, but it's a very bad and dangerous practice which is strongly discouraged. Rather, use the macros and don't bother with the details and elements in all these structures. I just thought you might be interested in some of the internals.
Note that all PyArrayObject macros are documented in http://docs.scipy.org/doc/numpy/reference/c-api.array.html
For instance, the size of a PyArrayObject can be obtained using the macro PyArray_SIZE(PyArrayObject *)
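So the loop shown earlier could just as well dump every value in each array rather than only the first one (a sketch reusing the same list and i variables from the function above):
/* Sketch: dump the whole contents of one list element. */
PyArrayObject *elem = (PyArrayObject *)PyList_GetItem(list, i);
double *pd = (double *)PyArray_DATA(elem);
npy_intp k, n_values = PyArray_SIZE(elem);   /* total number of elements in the array */
for (k = 0; k < n_values; k++)
    printf("value %ld : %.10f\n", (long)k, pd[k]);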
Finally, it's very simple and logical, once you know it :-)
For a non-template class I would write something like that.
But I don't know what I should do if my class is a template class.
I've tried something like this, and it's not working:
extern "C" {
Demodulator<double>* Foo_new_double(){ return new Demodulator<double>(); }
Demodulator<float>* Foo_new_float(){ return new Demodulator<float>(); }
void demodulateDoubleMatrix(Demodulator<double>* demodulator, double * input, int rows, int columns){ demodulator->demodulateMatrixPy(input, rows, columns) }
}
Note: Your question contradicts the code partially, so I'm ignoring the code for now.
C++ templates are an elaborate macro mechanism that gets resolved at compile time. In other words, the binary only contains the code from template instantiations (which is what you get when you apply parameters, typically types, to the template), and those are all that you can export from a binary to other languages. Exporting them is like exporting any regular type; see for example std::string.
Since the templates themselves don't survive compilation, there is no way that you can export them from a binary, not to C, not to Python, not even to C++! For the latter, you can provide the templates themselves though, but that doesn't include them in a binary.
Two assumptions:
Exporting/importing works via binaries. Of course, you could write an import that parses C++.
C++ specifies (or specified?) export templates, but as far as I know, this isn't really implemented in the wild, so I left that option out.
The C++ language started as a superset of C: That is, it contains new keywords, syntax and capabilities that C does not provide. C does not have the concept of a class, has no concept of a member function and does not support the concept of access restrictions. C also does not support inheritance. The really big difference, however, is templates. C has macros, and that's it.
Therefore no, you can't directly expose C++ code to C in any fashion, you will have to use C-style code in your C++ to expose the C++ layer.
template<typename T> T foo(T i) { /* ... */ }
extern "C" int fooInt(int i) { return foo(i); }
However C++ was originally basically a C code generator, and C++ can still interop (one way) with the C ABI: member functions are actually implemented by turning this->function(int arg); into ThisClass0int1(this, arg); or something like that. In theory, you could write something to do this to your code, perhaps leveraging clang.
But that's a non-trivial task, something that's already well-tackled by SWIG, Boost::Python and Cython.
The problem with templates, however, is that the compiler ignores them until you "instantiate" (use) them. std::vector<> is not a concrete thing until you specify std::vector<int> or something. And now the only concrete implementation of that is std::vector<int>. Until you've specified it somewhere, std::vector<string> doesn't exist in your binary.
You probably want to start by looking at something like this http://kos.gd/2013/01/5-ways-to-use-python-with-native-code/, select a tool, e.g. SWIG, and then start building an interface to expose what you want/need to C. This is a lot less work than building the wrappers yourself. Depending which tool you use, it may be as simple as writing a line saying using std::vector<int> or typedef std::vector<int> IntVector or something.
---- EDIT ----
The problem with a template class is that you are creating an entire type that C can't understand. Consider:
template<typename T>
class Foo {
    T a;
    int b;
    T c;
public:
    Foo(T a_) : a(a_) {}
    void DoThing();
    T GetA() { return a; }
    int GetB() { return b; }
    T GetC() { return c; }
};
The C language doesn't support the class keyword, never mind understand that members a, b and c are private, or what a constructor is, and C doesn't understand member functions.
Again, it doesn't understand templates, so you'll need to do what C++ does automatically and generate an instantiation by hand:
struct FooDouble {
    double a;
    int b;
    double c;
};
Except, all of those variables are private. So do you really want to be exposing them? If not, you probably just need to typedef "FooDouble" to something the same size as Foo and make a macro to do that.
Then you need to write replacements for the member functions. C doesn't understand constructors, so you will need to write a
extern "C" FooDouble* FooDouble_construct(double a);
FooDouble* FooDouble_construct(double a) {
Foo* foo = new Foo(a);
return reinterept_cast<FooDouble*>(foo);
}
and a destructor
extern "C" void FooDouble_destruct(FooDouble* foo);
void FooDouble_destruct(FooDouble* foo) {
delete reinterpret_cast<Foo*>(foo);
}
and a similar pass-thru for the accessors.
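For completeness, such a pass-thru might look like this (a sketch; FooDouble_getA is a hypothetical name, and it assumes the handle really wraps a Foo<double>, as in the constructor above):
extern "C" double FooDouble_getA(FooDouble* foo);
double FooDouble_getA(FooDouble* foo) {
    // Cast the opaque handle back to the real instantiation and forward the call.
    return reinterpret_cast<Foo<double>*>(foo)->GetA();
}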
I've written a high level motor controller in Python, and have got to a point where I want to go a little lower level to get some speed, so I'm interested in coding those bits in C.
I don't have much experience with C, but the math I'm working on is pretty straightforward, so I'm sure I can implement it with a minimal amount of banging my head against the wall. What I'm not sure about is how best to invoke this compiled C program in order to pipe its outputs back into my high-level Python controller.
I've used a little bit of ctypes, but only to pull some functions from a manufacturer-supplied DLL... not sure if that is an appropriate path to go down in this case.
Any thoughts?
You can take a look at this tutorial here.
Also, there is a more reliable example on the official Python website, here.
For example,
A function declared in sum.h:
int sum(int a, int b);
A file named module.c:
#include <Python.h>
#include "sum.h"   /* declares int sum(int a, int b); the implementation lives elsewhere (e.g. sum.c) */

static PyObject* mod_sum(PyObject *self, PyObject *args)
{
    int a;
    int b;
    int s;
    if (!PyArg_ParseTuple(args, "ii", &a, &b))
        return NULL;
    s = sum(a, b);
    return Py_BuildValue("i", s);
}

static PyMethodDef ModMethods[] = {
    {"sum", mod_sum, METH_VARARGS, "Description.."},
    {NULL, NULL, 0, NULL}
};

/* The init function must be named init<modulename> for Python 2 to find it */
PyMODINIT_FUNC initmodule(void)
{
    PyObject *m;
    m = Py_InitModule("module", ModMethods);
    if (m == NULL)
        return;
}
Python
import module
s = module.sum(3,5)
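One caveat: Py_InitModule is the Python 2 API. Under Python 3 the module set-up looks a bit different; a minimal sketch (untested, reusing the ModMethods table above):
static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "module",     /* module name */
    NULL,         /* module docstring */
    -1,           /* per-interpreter state size; -1 keeps state in globals */
    ModMethods
};

PyMODINIT_FUNC PyInit_module(void)
{
    return PyModule_Create(&moduledef);
}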
Another option: try numba.
It gives you C-like speed for free: just import numba and @autojit your functions, for a wonderful speed increase.
Won't work if you have complicated data types, but if you're looping and jumping around array indices, it might be just what you're looking for.
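A minimal sketch of that (untested; in recent numba releases the decorator is spelled numba.jit rather than autojit):
from numba import autojit

@autojit
def dot(a, b):
    # plain Python loop over array indices -- the kind of code numba speeds up
    total = 0.0
    for i in range(a.shape[0]):
        total += a[i] * b[i]
    return total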
You can use SWIG; it is very simple to use.
You can use Cython to declare the necessary C types and compile your Python-syntax code.