Exporting C global variables to a Python extension

I have to write a Python extension for a C module that comes from a third-party package. The module contains the declarations of some methods and also of the following module-level variables:
int mcnumipar = 13;

struct my_struct {
    char *name;
    void *par;
    enum instr_formal_types type;
    char *val;
};

struct my_struct mcinput[mcnumipar+1] = {
    {"E0",  &mcipE0,  instr_type_double, "4.94"},
    {"dE",  &mcipdE,  instr_type_double, "0.24"},
    {"dt",  &mcipdt,  instr_type_double, "6.4e-6"},
    {"coh", &mcipcoh, instr_type_string, "Rb_liq_coh.sqw"},
    {"inc", &mcipinc, instr_type_string, "Rb_liq_inc.sqw"}
};
I succeeded in exporting the C methods to my Python extension using the PyMethodDef mechanism, as explained in the Python/C API documentation. Unfortunately, I failed for the global variables.
Is there a way to export those variables (mcnumipar and mcinput) into my Python extension?
thanks a lot
Eric

Certainly. Possibly the easiest way would be to create Python objects for those variables with Py_BuildValue(), and then add them into your module object (the one you created with Py_InitModule()) using PyObject_SetAttrString().
If the contents of those global variables may change over time, and you want your Python code to be able to see the latest values, then you may be better off exposing extra methods which return the current values.
As a third option, you could use ctypes fairly easily to inspect or even change the current values of those variables. It would be a bit strange to build a true Python-C module for part of an API, and expose the rest through ctypes, but it might end up fitting your needs.
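For the ctypes route, a minimal sketch might look like the following. It assumes the third-party code is built into a shared library (the path and name are hypothetical) and that mcnumipar is an exported symbol:

```python
import ctypes

def read_mcnumipar(lib_path):
    """Load the shared library and read the exported C global `mcnumipar`.

    The library path and symbol name are assumptions based on the question;
    adjust them to match your actual build.
    """
    lib = ctypes.CDLL(lib_path)
    # in_dll() maps an exported global *variable*, not a function.
    return ctypes.c_int.in_dll(lib, "mcnumipar").value
```

Reading mcinput the same way would additionally require a ctypes.Structure subclass mirroring struct my_struct field by field.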

Please check these CPython functions:
PyModule_AddIntConstant, PyModule_AddStringConstant, PyModule_AddObject
Example:
/* Adding module globals */
if (PyModule_AddIntConstant(m, NAME_INT, 42)) {
    goto except;
}
if (PyModule_AddStringConstant(m, NAME_STR, "String value")) {
    goto except;
}
if (PyModule_AddObject(m, NAME_TUP, Py_BuildValue("iii", 66, 68, 73))) {
    goto except;
}
if (PyModule_AddObject(m, NAME_LST, Py_BuildValue("[iii]", 66, 68, 73))) {
    goto except;
}
Here you have a complete tutorial with several use cases that worked for me: http://pythonextensionpatterns.readthedocs.io/en/latest/module_globals.html
Regards,
Pablo
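From Python, constants registered this way show up as ordinary module attributes. As a familiar illustration, the standard-library errno module is a C extension whose integer constants are added to the module object at initialization time in much the same way:

```python
# errno is implemented in C; its constants are registered on the module
# object at init time, much like PyModule_AddIntConstant does above.
import errno

print(isinstance(errno.EEXIST, int))  # True
```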

Related

How to reload a Python module that is built from C++ source code using pybind11

I am using the first-steps example from pybind11's documentation:
#include <pybind11/pybind11.h>
int add(int i, int j)
{
return i + j;
}
PYBIND11_MODULE(example, m)
{
m.doc() = "pybind11 example plugin"; // optional module docstring
m.def("add", &add, "A function which adds two numbers");
}
Everything works fine; I can use it in a Python shell:
import example
example.add(2, 3) # returns 5
Now I made a simple change to use float instead of int for the inputs of add(); everything compiles, and I want to reload the module example so I can test the new float-based add(). However, I cannot figure out a way to reload the example module: importlib.reload does not work, and %autoreload 2 in IPython does not work either.
Both approaches are known to work with pure-Python modules, but not with this C++/pybind11-based module.
Did I miss anything here? Or does it have to be like this?
UPDATE: it seems this is a known issue; see How to Reload a Python3 C extension module?:
Python's import mechanism will never dlclose() a shared library. Once
loaded, the library will stay until the process terminates.
pybind11 modules and ctypes modules seem to share the same trait here regarding how the module is loaded/imported.
Also quote from https://github.com/pybind/pybind11/issues/2511:
The way C extensions are loaded by Python does not allow them to be
reloaded (in contrast to Python modules, where the Python code can
just be reloaded and doesn't refer to a dynamically loaded library)
I now just wonder if there is a way to wrap this up more conveniently for reloading the module, e.g. spawn a subprocess with a new Python shell that copies all the variables/modules related to the C extension and substitutes for the original one.
It seems there is no straightforward way. Since it is possible to manage a standard shared library manually with dlopen() and dlclose(), you can change your PYBIND11_MODULE to a pre-defined function like:
void __bind_module(void *bind_) {
typedef void (*binder_t)(const char *, py::cpp_function);
auto bind = (binder_t) bind_;
bind("add", add);
}
and then write a manager module to attach/detach those libraries, something like an importlib of your own.
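Short of writing such a dlopen()/dlclose() manager, the subprocess idea from the question can be sketched in pure Python: run each test of the rebuilt extension in a fresh interpreter, so the shared library is loaded from scratch every time (a workaround, not a true reload):

```python
import subprocess
import sys

def run_in_fresh_interpreter(code):
    """Execute `code` in a brand-new Python process.

    Any C extension the code imports is loaded from scratch, so a
    freshly rebuilt .so is picked up without reloading in-process.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Hypothetical usage after rebuilding the `example` module:
#   run_in_fresh_interpreter("import example; print(example.add(2.5, 3.5))")
```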

Breaking up SWIG Python interface -- containers create namespace conflict

Our code base currently supports a single SWIG interface file (for Python) that has grown over the years to include roughly 300 C++ classes (technically interfaces), all of which inherit from a single base class, and all of which exist in a single global namespace. This allows us, with a minimal amount of SWIG code, to implement dynamic casting among the C++ classes that the SWIG classes represent while at the same time simplifying by keeping the C++ inheritance structure out of SWIG.
As long as we compiled our SWIG interface in a single module, this mechanism worked well -- but as the SWIG interface file has grown it has become difficult to manage, and compile/link times have grown. To address this I split the interface file up into separate modules by the names of the derived classes -- one module for class names beginning with "A" to "G", one for names beginning with "H" to "N", etc., resulting in four derived-class modules and a base class module. I was able to get these modules to compile and link, and exhibit expected behavior for the dynamic casting, following the method outlined here: (http://www.swig.org/Doc3.0/SWIGDocumentation.html#Modules_nn1)
However, breaking the single module into four parts (five parts counting the base class) causes problems with the namespace when containers come into play. Consider the following function, from a class in my v-to-z interface file:
void RemoveIsolated(const std::vector<global::IFoo*> spRemoveIsolated) {
…
}
That takes a vector of one of the derived classes that exist in the global namespace. This worked without issue when I had only one module but now class IFoo lives in the a-to-g module -- so if I cast something to an IFoo*, it's an a-to-g.IFoo*. However, the function demands a global::IFoo*.
This seems to be a situation that could be addressed by the SWIG %template mechanism. I've seen discussions in which people have had success by declaring at one point (possibly in the interface file for the base class?):
%template(FooVector) std::vector<global::Foo*>;
and at another point (possibly in the interface file for the derived class?):
%template() std::vector<global::Foo*>;
But my attempts to implement this have not been successful. The discussions are somewhat ambiguous; it's possible that I'm doing something wrong. Can anyone provide clarification, ideally with an example?
The piece of information it looks like you're missing is the %import directive, which lets modules cooperate with the definition of types, without repeating them and still ending up with a single wrapped type. The documentation suggests using this to reduce module size even.
Probably all you need to do is have your v-to-z module %import the a-to-g module to get this working for you. (Personally I'd have tried to divide them up by functionality rather than alphabetically, though, so the dependency between them wouldn't be an issue.)
Thanks for your suggestion Flexo. Importing the a-to-g module did not work; the C++ compiler complained that all of the classes (interfaces) declared there were not part of the global namespace when it tried to compile the v-to-z wrapper file. However, going through the exercise led me to question why we had been having success previously when compiling a single module. It turned out that we were using a typemapping macro in the interface file for the single module that would take a
const std::vector<global::IFoo*>
and map it thusly:
TYPEMAPMACRO(global::IFoo, SWIGTYPE_p_global__IFoo)
for vector containers. The macro itself, for anyone who's interested, is:
%define TYPEMAPMACRO(type, name)
%typemap(in) const std::vector<type *> {
    /* Check if it is a list */
    std::vector<type *> vec;
    void *pobj = 0;
    if (PyTuple_Check($input))
    {
        size_t size = PyTuple_Size($input);
        for (size_t j = 0; j < size; j++) {
            PyObject *o = PyTuple_GetItem($input, j);
            void *argp1 = 0;
            int res1 = SWIG_ConvertPtr(o, &argp1, name, 0 | 0);
            if (!SWIG_IsOK(res1)) {
                SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "Typemap of std::vector" "', argument " "1" " of type '" "" "'");
            }
            vec.push_back(reinterpret_cast< type * >(argp1));
        }
        $1 = vec;
    }
    else if (SWIG_IsOK(SWIG_ConvertPtr($input, &pobj, name, 0 | 0))) {
        PyObject *o = $input;
        void *argp1 = 0;
        int res1 = SWIG_ConvertPtr(o, &argp1, name, 0 | 0);
        if (!SWIG_IsOK(res1)) {
            SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "Typemap of std::vector" "', argument " "1" " of type '" "" "'");
        }
        vec.push_back(reinterpret_cast< type * >(argp1));
        $1 = vec;
    }
    else {
        PyErr_SetString(PyExc_TypeError, "not a list");
        return NULL;
    }
}
%typecheck(SWIG_TYPECHECK_POINTER) std::vector<type *> {
    void *pobj = 0;
    if (!PyTuple_Check($input) && !SWIG_IsOK(SWIG_ConvertPtr($input, &pobj, name, 0 | 0))) {
        $1 = 0;
        PyErr_Clear();
    } else {
        $1 = 1;
    }
}
%enddef
My sense is that this is standard boilerplate stuff; I don't claim to understand it well, as it's someone else's code. But what I do understand now, that I did not before, is that I needed to place the macro for the typemap before the function that uses the typemap (e.g. the RemoveIsolated example above). That ordering had been broken when I divided my big module up into smaller ones.

Exposing std::vector<struct> with boost.python

I have an existing C++ code library that uses a struct together with std::vector, which should be exposed to Python.
in the header:
struct sFOO
{
unsigned int start = 3 ;
double foo = 20.0 ;
};
in the cpp:
namespace myName
{
    myfoo::myfoo() {
        sFOO singlefoo;
        std::vector<sFOO> foos;
    }

    sFOO singlefoo;

    std::vector<sFOO>* myfoo::get_vector() {
        return &foos;
    }
}
and for boost::python snippet:
using namespace boost::python;

class dummy3 {};

BOOST_PYTHON_MODULE(vStr)
{
    scope myName = class_<dummy3>("myName");

    class_<myName::sFOO>("sFOO")
        .add_property("start", &myName::sFOO::start)
        .add_property("foo", &myName::sFOO::foo)
        ;

    class_<myName::myfoo>("myfoo", no_init)
        .def(init<>())
        .def("checkfoo", &myName::myfoo::checkfoo)
        .add_property("foos", &myName::myfoo::foos)
        .add_property("singlefoo", &myName::myfoo::singlefoo)
        ;
}
(FYI, the fictitious class dummy3 is used to simulate the namespace; using a plain scope is therefore not an option.)
Compilation and importing are OK, and I can access singlefoo, but whenever I try to access the vector foos, I encounter the error message below.
No Python class registered for C++ class std::vector<myName::sFOO,
std::allocator<myName::sFOO> >
To circumvent this issue,
I first tried vector_indexing_suite, but it didn't help with exposing a pre-defined vector of structs.
I also assumed that there should be a solution related to exposing a pointer to Python, so I tried to get a pointer by adding the following:
.def("get_vector",&myName::myfoo::get_vector)
which produces a compile error.
Since I am quite a novice to both C++ and Boost, any comments including solution, tips for search, and a suggestion to suitable reference would be greatly appreciated.
Thanks in advance!
The method .def("get_vector", &myName::myfoo::get_vector) is not working because it returns a pointer to a vector, so it is necessary to specify a call policy that defines how object ownership should be managed:
class_<myName::myfoo>("myfoo", no_init)
// current code
.def("get_vector", &myfoo::get_vector, return_value_policy<reference_existing_object>())
;
In order to use vector_indexing_suite, it is necessary to implement the equality operator for the class that the vector holds:
struct sFOO
{
    unsigned int start = 3;
    double foo = 20.0;

    bool operator==(const sFOO& rhs) const
    {
        return this == &rhs; //< implement your own rules.
    }
};
then you can export the vector:
#include <boost/python/suite/indexing/vector_indexing_suite.hpp>
class_<std::vector<sFOO>>("vector_sFOO_")
.def(vector_indexing_suite<std::vector<sFOO>>())
;

Best method to call a C function from python code

I have some C code which has some basic functions. I want to be able to call these C functions from my Python code. There seem to be a lot of methods to do this when I search online, but they look a bit complicated. Can anyone suggest the simplest and most reliable method to call C functions from Python?
The cffi library is a fairly modern approach to this. It also works across CPython and PyPy.
Provided you have your functions contained in a shared library, you can just import them as python methods. Have a look at examples here: http://cffi.readthedocs.org/en/latest/overview.html
When you want to extend Python with some C functions you can check out the following example. What you need is a module which registers wrapper functions for your own functions.
For more information check out the Python docs.
#include <Python.h>

static PyObject *
yourfunction(PyObject *self, PyObject *args, PyObject *keywds)
{
    int voltage;
    char *state = "a stiff";
    char *action = "voom";
    char *type = "Norwegian Blue";
    static char *kwlist[] = {"voltage", "state", "action", "type", NULL};

    if (!PyArg_ParseTupleAndKeywords(args, keywds, "i|sss", kwlist,
                                     &voltage, &state, &action, &type))
        return NULL;

    printf("-- This parrot wouldn't %s if you put %i Volts through it.\n",
           action, voltage);
    printf("-- Lovely plumage, the %s -- It's %s!\n", type, state);

    Py_INCREF(Py_None);
    return Py_None;
}

static PyMethodDef keywdarg_methods[] = {
    {"yourfunction", (PyCFunction)yourfunction, METH_VARARGS | METH_KEYWORDS,
     "the doc of your function"},
    {NULL, NULL, 0, NULL}   /* sentinel */
};

/* In Python 2.x, the function named init<modulename> is executed on import. */
void initkeywdarg(void)
{
    /* Create the module and add the functions */
    Py_InitModule("keywdarg", keywdarg_methods);
}
To compile the file you can use clang. Keep in mind that you have to adjust the include path if the Python headers are located somewhere else. The resulting binary keywdarg.so can then be loaded with import keywdarg.
clang++ -shared -I/usr/include/python2.7 -fPIC keywdarg.cpp -o keywdarg.so -lpython
You can also build your C code into a shared library with the C compiler and call the functions defined in that library from Python, for instance via the ctypes module or the CPython API.
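As a concrete sketch of the ctypes route, here is a call into the C standard library (chosen only because it is already installed; your own shared library would be loaded the same way):

```python
import ctypes
import ctypes.util

# Locate and load the C standard library; on POSIX, CDLL(None)
# falls back to the symbols already loaded into the current process.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the C function's signature before calling it.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]

print(libc.abs(-7))  # prints 7
```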
Let's say your C code sends email, and it accepts args like this:
./sendemail email title body
Then from Python you can do something like:
from subprocess import call
call(["./sendemail", "email@email.com", "subject", "body"])

Can I export a C++ template class to C, and therefore to Python with ctypes?

For a non-template class I would write something like that.
But I don't know what I should do if my class is a template class.
I've tried something like this, and it's not working:
extern "C" {
    Demodulator<double>* Foo_new_double() { return new Demodulator<double>(); }
    Demodulator<float>* Foo_new_float() { return new Demodulator<float>(); }
    void demodulateDoubleMatrix(Demodulator<double>* demodulator, double *input, int rows, int columns) { demodulator->demodulateMatrixPy(input, rows, columns); }
}
Note: your question partially contradicts the code, so I'm ignoring the code for now.
C++ templates are an elaborate macro mechanism that gets resolved at compile time. In other words, the binary only contains the code from template instantiations (which is what you get when you apply parameters, typically types, to the template), and those instantiations are all that you can export from a binary to other languages. Exporting them is like exporting any regular type, see for example std::string.
Since the templates themselves don't survive compilation, there is no way you can export them from a binary: not to C, not to Python, not even to C++! For the latter, you can ship the template sources themselves, but that doesn't put them in a binary.
Two assumptions:
Exporting/importing works via binaries. Of course, you could write an importer that parses C++.
C++ used to specify export templates, but as far as I know this was never really implemented in the wild, so I left that option out.
The C++ language started as a superset of C: That is, it contains new keywords, syntax and capabilities that C does not provide. C does not have the concept of a class, has no concept of a member function and does not support the concept of access restrictions. C also does not support inheritance. The really big difference, however, is templates. C has macros, and that's it.
Therefore no, you can't directly expose C++ code to C in any fashion; you will have to use C-style code in your C++ to expose the C++ layer:
template<typename T> T foo(T i) { /* ... */ }
extern "C" int fooInt(int i) { return foo(i); }
However, C++ was originally basically a C code generator, and C++ can still interoperate (one way) with the C ABI: member functions are actually implemented by turning this->function(arg); into a mangled free function, something like ThisClass0int1(this, arg);. In theory, you could write something to do this to your code, perhaps leveraging clang.
But that's a non-trivial task, something that's already well-tackled by SWIG, Boost::Python and Cython.
The problem with templates, however, is that the compiler ignores them until you "instantiate" (use) them. std::vector<> is not a concrete thing until you specify std::vector<int> or something. And now the only concrete implementation of that is std::vector<int>. Until you've specified it somewhere, std::vector<string> doesn't exist in your binary.
You probably want to start by looking at something like this http://kos.gd/2013/01/5-ways-to-use-python-with-native-code/, select a tool, e.g. SWIG, and then start building an interface to expose what you want/need to C. This is a lot less work than building the wrappers yourself. Depending which tool you use, it may be as simple as writing a line saying using std::vector<int> or typedef std::vector<int> IntVector or something.
---- EDIT ----
The problem with a template class is that you are creating an entire type that C can't understand. Consider:
template<typename T>
class Foo {
    T a;
    int b;
    T c;
public:
    Foo(T a_) : a(a_) {}
    void DoThing();
    T GetA() { return a; }
    int GetB() { return b; }
    T GetC() { return c; }
};
The C language doesn't support the class keyword, never mind understand that members a, b and c are private, or what a constructor is, and C doesn't understand member functions.
Again, C doesn't understand templates, so you'll need to do what C++ does automatically and generate an instantiation by hand:
struct FooDouble {
    double a;
    int b;
    double c;
};
Except that in the C++ class all of those variables are private. So do you really want to be exposing them? If not, you probably just need to typedef FooDouble to something the same size as Foo<double> and write a macro to do that.
Then you need to write replacements for the member functions. C doesn't understand constructors, so you will need to write a constructor function:
extern "C" FooDouble* FooDouble_construct(double a);

FooDouble* FooDouble_construct(double a) {
    Foo<double>* foo = new Foo<double>(a);
    return reinterpret_cast<FooDouble*>(foo);
}
and a destructor:
extern "C" void FooDouble_destruct(FooDouble* foo);

void FooDouble_destruct(FooDouble* foo) {
    delete reinterpret_cast<Foo<double>*>(foo);
}
and similar pass-throughs for the accessors.
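On the Python side, the hand-written shims above could then be bound with ctypes. A minimal sketch, where the library name and the FooDouble_* symbols are hypothetical and must match whatever you actually compiled:

```python
import ctypes

def bind_foo_double(lib):
    """Declare ctypes signatures for the FooDouble_* shims.

    `lib` is normally a ctypes.CDLL instance for the compiled library;
    the symbol names mirror the extern "C" functions in the example.
    """
    lib.FooDouble_construct.restype = ctypes.c_void_p
    lib.FooDouble_construct.argtypes = [ctypes.c_double]
    lib.FooDouble_destruct.restype = None
    lib.FooDouble_destruct.argtypes = [ctypes.c_void_p]
    return lib

# Hypothetical usage once the shims are compiled into libfoo.so:
#   lib = bind_foo_double(ctypes.CDLL("./libfoo.so"))
#   handle = lib.FooDouble_construct(3.14)
#   lib.FooDouble_destruct(handle)
```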
