boost python expose and import methods time cost - python

I am having difficulty using Boost.Python to extend my C++ code to Python. I've written the Boost.Python wrappers successfully. I can access my C++ objects from Python without any error, and I have also called a Python file (module) method from C++ using Boost's attr("")() function without any problem.
My problem is the execution time of the Python method. According to the timings I print, referencing the wrapped objects takes on the order of microseconds inside the Python code, yet the call to the Python method takes on the order of milliseconds, and it grows with the number of references I make in Python to my wrapped C++ objects (only referencing/assigning them, with no further use). After some searching, my assumptions about this increasing time are:
Some reference policies (the default policies) cause this problem by doing unnecessary operations when returning from the Python code, so I am probably doing something wrong in the wrappers.
The Boost.Python call mechanism has some overhead of its own, and there might be options I'm not aware of.
It is worth mentioning that the Python method is called in each execution cycle of my program, and I measure roughly the same (though not identical) time every cycle.
I hope my description is clear enough. Below is part of my sample code:
One of my Wrappers:
class_<Vertex<> >("Vertex")
.def(init<float, float>())
.def_readwrite("x", &Vertex<>::x)
.def_readwrite("y", &Vertex<>::y)
.def("abs", &Vertex<>::abs)
.def("angle", &Vertex<>::angle)
.def(self - self)
.def(self -= self)
;
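As an aside on the first assumption above: an explicit return value policy is normally only attached when a wrapped function hands out a pointer or reference whose lifetime Python must not manage; plain value members such as x and y use the defaults. The following is only an illustrative sketch - the Point/Polyline types and the module name are made up, not part of the question:

#include <boost/python.hpp>

namespace bp = boost::python;

struct Point { float x, y; };

struct Polyline {
    Point first;
    Point& front() { return first; }   // hands out a reference into *this
};

BOOST_PYTHON_MODULE(geometry_example)   // hypothetical module name
{
    bp::class_<Point>("Point")
        .def_readwrite("x", &Point::x)
        .def_readwrite("y", &Point::y);

    bp::class_<Polyline>("Polyline")
        // tie the lifetime of the returned reference to the owning Polyline
        .def("front", &Polyline::front, bp::return_internal_reference<>());
}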
Calling a Python module method (which is "run"):
pyFile = import(fileName.c_str());
scope scope1(pyFile);
object pyNameSpace = scope1.attr("__dict__");
return extract<int>(pyFile.attr("run")());
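One way to separate the Boost.Python call overhead from the work done inside run is to look the callable up once during initialisation and time only the invocation. This is a sketch under that assumption; the variable and function names are illustrative:

#include <boost/python.hpp>
#include <chrono>
#include <iostream>

namespace bp = boost::python;

// During initialisation (done once):
//   bp::object pyFile = bp::import(fileName.c_str());
//   bp::object runCallable = pyFile.attr("run");   // cached, no per-cycle attr() lookup

int callRunCached(bp::object& runCallable)
{
    const auto t0 = std::chrono::steady_clock::now();
    bp::object result = runCallable();              // the Python call itself
    const auto t1 = std::chrono::steady_clock::now();

    std::cout << "run() took "
              << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
              << " us\n";
    return bp::extract<int>(result);
}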

Related

How to deal with multiple, conflicting PyBoost Bindings?

In a program, I am using multiple libraries that include Boost bindings to the C++ Eigen library. (In my case OMPL and Pinocchio, but I guess the problem is not tied to these specific libraries.)
Both of them provide a binding for the same C++ class (an STL vector of integers). My problem is that the two bindings differ from each other - in my specific case,
pinocchio.pinocchio_pywrap.StdVec_Int has a tolist() method, while ompl.util._util.vectorInt doesn't.
It seems that whichever library is imported first decides which binding is used. This is dangerous because it leads to errors that are hard to trace, especially for external users of the code - all of a sudden, methods that worked before fail, just because of a new import.
My preferred solution to this problem would be forcing both libraries to use their own bindings only - but I don't know if that is possible.
Alternatively - is there a way to globally define which C++ bindings to prefer in case two libraries collide?
Example of the hard-to-trace bug: imagine running several unit tests in one session, and the one covering ompl-related methods runs first. A later test fails when the tolist() method is used on an object expected to be of type pinocchio.pinocchio_pywrap.StdVec_Int. You start to debug and run only the failing unit test - and it runs smoothly, because ompl was not imported before...
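For background: Boost.Python keeps a single, process-wide converter registry, which is why the first module to register converters for std::vector<int> wins and later registrations are ignored (with a warning). A library author can avoid stepping on an existing binding by querying the registry before registering. The sketch below is illustrative only; the exposed class name is chosen arbitrarily:

#include <vector>
#include <boost/python.hpp>
#include <boost/python/suite/indexing/vector_indexing_suite.hpp>

namespace bp = boost::python;

// Register a binding for std::vector<int> only if no other extension module
// has already done so in the shared Boost.Python registry.
void registerVectorIntIfNeeded()
{
    const bp::type_info info = bp::type_id<std::vector<int> >();
    const bp::converter::registration* reg = bp::converter::registry::query(info);

    if (reg == nullptr || reg->m_to_python == nullptr) {
        bp::class_<std::vector<int> >("StdVecInt")   // arbitrary exposed name
            .def(bp::vector_indexing_suite<std::vector<int> >());
    }
}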

Return a python function from Python C-API

I am currently writing a small Python module using the Python C API that will speed up some of the slower Python code that is repeatedly run in a simulation of sorts. My issue is that this code takes a bunch of arguments that in many use cases won't change. For example, the function signature will be like func(x,a,b,c,d,e), but after initialisation only x will change. I therefore have the Python code littered with lambda x : func(x,a,b,c,d,e), wrapping these before use, and I have observed that this actually introduces quite a bit of calling overhead.
My idea to fix this was to create a PyObject* that is essentially a C++ lambda instead of the Python one. The main issue is that I have not found a way to create PyObjects from C++ lambdas, or even from lower-level functions. Since functions/lambdas in Python can be passed as arguments, I assume it is possible - but is there a clean way I'm missing?
I would seriously consider using SWIG instead of, for example, pybind11. It's just peace of mind. If you don't want to use SWIG directly, you can at least look at what SWIG does to wrap up features like proxy objects.
http://www.swig.org/Doc2.0/SWIGPlus.html#SWIGPlus_nn38
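As a separate illustration of the bind-the-fixed-arguments idea from the question, one possible approach (a sketch only; func, a and b stand for whatever objects you already hold) is to build the callable with functools.partial from the C API rather than with a Python-level lambda:

#include <Python.h>

// Returns a new reference to functools.partial(func, a, b), i.e. a Python
// callable that only expects x, or NULL on error. Names are placeholders.
PyObject* makePartial(PyObject* func, PyObject* a, PyObject* b)
{
    PyObject* functools = PyImport_ImportModule("functools");
    if (!functools) return NULL;

    PyObject* partial = PyObject_GetAttrString(functools, "partial");
    Py_DECREF(functools);
    if (!partial) return NULL;

    // Equivalent to: functools.partial(func, a, b)
    PyObject* bound = PyObject_CallFunctionObjArgs(partial, func, a, b, NULL);
    Py_DECREF(partial);
    return bound;
}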

Calling Python member functions from C++

I need to test the feasibility of calling Python member functions (running in one process) from within C++. This is to test interfacing C++ to an existing Python application. I need to minimize modifications to the Python code, as that is maintained by a separate team, so from the C++ side I have no control over when the Python objects are created. For my test I'd like to try to:
See if I can determine how many instances of a specified Python class have been created
If that number is > 0, then I would like to test calling a member function on one of the instantiated Python objects from C++
I can do a simple call from C++ to a global, non member Python function, but can't figure out how to do the above 2 steps from the C++ side.
I'd also like to try to do this without pulling in the Boost.Python interop library (but will if that's the only way this can be achieved).
Thanks if anyone can advise.
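A possible direction using only the plain Python C API (no Boost.Python): ask the gc module for all tracked objects, count the ones that are instances of the target class, and call a method on the first match. This is only a sketch; "mymodule", "MyClass" and "do_work" are placeholder names and error handling is abbreviated.

#include <Python.h>

// Count live instances of mymodule.MyClass and, if any exist, call
// do_work() on the first one found. Returns the instance count, or -1 on error.
int callOnFirstInstance(void)
{
    PyObject* module = PyImport_ImportModule("mymodule");
    PyObject* cls = module ? PyObject_GetAttrString(module, "MyClass") : NULL;

    PyObject* gc = PyImport_ImportModule("gc");
    PyObject* objects = gc ? PyObject_CallMethod(gc, "get_objects", NULL) : NULL;
    if (!cls || !objects) return -1;

    Py_ssize_t found = 0;
    for (Py_ssize_t i = 0; i < PyList_Size(objects); ++i) {
        PyObject* obj = PyList_GetItem(objects, i);    // borrowed reference
        if (PyObject_IsInstance(obj, cls) == 1) {
            if (found++ == 0) {
                PyObject* result = PyObject_CallMethod(obj, "do_work", NULL);
                Py_XDECREF(result);
            }
        }
    }

    Py_DECREF(objects);
    Py_DECREF(gc);
    Py_DECREF(cls);
    Py_DECREF(module);
    return (int)found;
}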

Tracking memory usage allocated within C++ wrapped by Cython

I have a Python program which calls some Cython code which in turn wraps some raw C++ code. I would like to see how much memory the underlying C++ code is allocating. I've tried the memory_profiler module for Python; however, it can't seem to detect anything allocated by the C++ code. My evidence for this is that I have a Cython object that stores an instance of a C++ object, and that C++ object should definitely hold onto a fair amount of memory. In Python, when I create an instance of the Cython object (which stores an instance of the C++ object), memory_profiler does not detect any extra memory (or at least detects only a negligible amount).
Is there any other way to detect how much memory the underlying C++ objects are allocating? Or is there something similar to memory_profiler, but for Cython?
If you can run your program on Linux, use https://github.com/vmware/chap (for example, start with "summarize used").
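If patching the C++ side is an option, another possibility (a rough illustrative sketch, not a drop-in solution) is to override the global operator new/delete in the extension with simple counters and expose the running total to Python. Anything the wrapped code allocates through malloc directly would still be missed.

#include <atomic>
#include <cstddef>
#include <cstdlib>
#include <new>

static std::atomic<std::size_t> g_live_bytes{0};

void* operator new(std::size_t n)
{
    // Reserve a max-aligned header in front of the block to remember its size.
    void* p = std::malloc(n + alignof(std::max_align_t));
    if (!p) throw std::bad_alloc();
    *static_cast<std::size_t*>(p) = n;
    g_live_bytes += n;
    return static_cast<char*>(p) + alignof(std::max_align_t);
}

void operator delete(void* p) noexcept
{
    if (!p) return;
    char* base = static_cast<char*>(p) - alignof(std::max_align_t);
    g_live_bytes -= *reinterpret_cast<std::size_t*>(base);
    std::free(base);
}

// Exposed to Cython (e.g. via a "cdef extern" declaration) so Python code
// can poll the current total of live C++ heap bytes.
extern "C" std::size_t cpp_live_bytes() { return g_live_bytes.load(); }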

Query on python execution model

Below is the program that defines a function within another function.
1) When we run python program.py, does every line of Python source get converted directly into a set of machine instructions that are executed on the processor?
2) The diagram above has a GlobalFrame, LocalFrames and Objects. In the above program, where do the frames, objects and code reside at runtime? Is there a separate memory space given to this program within the Python interpreter's virtual address space?
"Does every line of python source directly gets converted to set of machine instructions that get executed on processor?"
No. Python code (not necessarily by line) typically gets converted to an intermediate code which is then interpreted by what some call a "virtual machine" (confusingly, as VM means something completely different in other contexts, but ah well). CPython, the most popular implementation (which everybody thinks of as "python":-), uses its own bytecode and interpreter thereof. Jython uses Java bytecode and a JVM to run it. And so on. PyPy, perhaps the most interesting implementation, can emit almost any sort of resulting code, including machine code -- but it's far from a line by line process!-)
"Where does Frames Objects and code reside in runtime"
On the "heap", as defined by the malloc, or equivalent, in the C programming language in the CPython implementation (or Java for Jython, etc, etc).
That is, whenever a new PyObject is made (in CPython's internals), a malloc or equivalent happens and that object is forevermore referred via a pointer (a PyObject*, in C syntax). Functions, frames, code objects, and so forth, almost everything is an object in Python -- no special treatment, "everything is first-class"!-)
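To make the first point concrete, here is a small illustrative C++ snippet (not part of the answer) using the CPython C API: the source is compiled into a bytecode "code object", which is itself just a heap-allocated PyObject, and it is that object - not machine code - which the interpreter loop later executes.

#include <Python.h>
#include <cstdio>

int main()
{
    Py_Initialize();

    const char* src =
        "def outer():\n"
        "    def inner():\n"
        "        return 42\n"
        "    return inner()\n";

    // Compile to bytecode; the result is an ordinary heap-allocated PyObject.
    PyObject* code = Py_CompileString(src, "<example>", Py_file_input);
    if (code) {
        std::printf("compiled to a '%s' object\n", Py_TYPE(code)->tp_name);  // prints "code"
        Py_DECREF(code);
    }

    Py_Finalize();
    return 0;
}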
