I need help writing a wrapper to call some functions and get a JSON response (a Unicode string) from a third-party shared library. The library's header file is shown below:
#include <string>
#include <ExportLib.h>

// some code ignored here
typedef std::string UString;
using namespace std;

namespace ns1 {
class DLL_PUBLIC S_API {
public:
    static UString function1();
    static UString function2();
    // some code ignored here
};
}
The problem is that I'm not very good at C/C++, so I have no idea what to do with Cython. I would be very grateful if someone could point me in the right direction. I wrote a .pyx file like so:
from libcpp.string cimport string

cdef extern from "libName.h" namespace "ns1":
    cdef cppclass S_API:
        string function1;
        string function2;
This compiles fine and I get a .so file, which I can import in Python. But I am still unable to access function1() or any other function inside the module.
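For what it's worth, a hedged, untested sketch of how the .pyx would likely need to look: the functions must be declared with argument lists (and marked @staticmethod, since they are static members), and then exposed through def-level wrappers. libName.h and the exact names are taken from the question; the wrapper names are made up for illustration:

```cython
from libcpp.string cimport string

cdef extern from "libName.h" namespace "ns1":
    cdef cppclass S_API:
        @staticmethod
        string function1()
        @staticmethod
        string function2()

def py_function1():
    # Decode the returned UTF-8 bytes into a Python unicode string.
    return S_API.function1().decode("utf-8")

def py_function2():
    return S_API.function2().decode("utf-8")
```

Declarations in a cdef extern block only tell Cython what exists on the C++ side; without def wrappers like these, nothing is callable from Python, which would explain the empty module.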
I am trying to create a wrapper for a C++ method that returns a C++ class (vtkPolyData) which comes from an external C++ library (VTK). The same library has Python bindings available, which are already installed in my Python environment. How do you tell pybind11 that the C++ class (vtkPolyData) and its Python variant are the same?
I tried to use this custom type caster macro, but I get TypeError: Unable to convert function return value to a Python type! The signature was: (self: Versa3dLib.skeletonizer, offset distance: float) -> vtkPolyData
which is confusing, since it looks like the conversion maps to the correct type but Python is unable to interpret it. I don't see anything wrong with the macro either, so I am not sure what's wrong. I noticed that in Python vtkPolyData has type vtkCommonDataModelPython.vtkPolyData. Is that why the conversion is not done correctly?
#include "skeletonizer.h"
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include "PybindVTKTypeCaster.h"
#include <vtkSmartPointer.h>

namespace py = pybind11;

PYBIND11_VTK_TYPECASTER(vtkPolyData)
PYBIND11_DECLARE_HOLDER_TYPE(T, vtkSmartPointer<T>);

namespace pybind11 { namespace detail {
    template <typename T>
    struct holder_helper<vtkSmartPointer<T>> { // <-- specialization
        static const T *get(const vtkSmartPointer<T> &p) { return p.GetPointer(); }
    };
}}

PYBIND11_MODULE(Versa3dLib, m)
{
    py::class_<skeletonizer>(m, "skeletonizer")
        .def(py::init<vtkPolyData *>())
        .def("get_offset", &skeletonizer::get_offset,
             "return vtkPolyData offset",
             py::arg("offset distance"));
}
skeletonizer.h
#ifndef SKELETONIZER_H
#define SKELETONIZER_H

#include <vtkPolyData.h>
#include <vector>
#include <vtkSmartPointer.h>

using namespace std;

class skeletonizer
{
public:
    skeletonizer(vtkPolyData* data);
    vtkSmartPointer<vtkPolyData> get_offset(double dist);
};
#endif
skeletonizer.cpp
#include "skeletonizer.h"

skeletonizer::skeletonizer(vtkPolyData* data)
{
}

vtkSmartPointer<vtkPolyData> skeletonizer::get_offset(double dist)
{
    vtkSmartPointer<vtkPolyData> offsets = vtkSmartPointer<vtkPolyData>::New();
    return offsets;
}
I think this should be a more general solution (hopefully easier to use?):
vtk_pybind (README)
vtk_pybind.h
example binding code
example python test using bindings
I believe this should be an improvement on the VTK code by:
Generalizing the type casters using SFINAE (rather than requiring explicit instantiations...).
Permitting direct casting of vtkSmartPointer and vtkNew (assuming the types inside these are VTK types).
Making the code roughly follow Drake's C++/Python binding conventions.
For the solution you had above, I think it was close, leveraging the SMTK code, but the holder type instantiation was incorrect - you'd need type_caster specializations for the smart pointers (which the vtk_pybind code I posted provides).
I'll see if I can post an issue on SMTK to see if they want to improve/simplify their binding code (especially if people refer to it!).
EDIT: Posted issue here: https://gitlab.kitware.com/cmb/smtk/issues/228
This question is about how to pass a C++ object to a Python function that is called in an embedded (C++) Python interpreter.
The following C++ class (MyClass.h) is designed for testing:
#ifndef MyClassH
#define MyClassH
#include <string>
using std::string;

class MyClass
{
public:
    MyClass(const string& lbl): label(lbl) {}
    ~MyClass(){}
    string getLabel() {return label;}
private:
    string label;
};
#endif
A Python module exposing the C++ class can be generated by the following SWIG interface file:
%module passmetopython
%{
#include "MyClass.h"
%}
%include "std_string.i"

//Expose to Python
%include "MyClass.h"
Below is a Python script using the python module
import passmetopython as pmtp

def execute(obj):
    #This function is to be called from C/C++, with a
    #MyClass object as an argument
    print("Entering execute function")
    lbl = obj.getLabel()
    print("Printing from within python execute function. Object label is: " + lbl)
    return True

def main():
    c = pmtp.MyClass("Test 1")
    retValue = execute(c)
    print("Return value: " + str(retValue))

#Test function from within python
if __name__ == '__main__':
    main()
This question is about how to get the Python execute() function working when called from C++, with a C++ object as an argument.
The following C++ program was written to test the functions (with a minimal amount of error checking):
#include "Python.h"
#include <iostream>
#include <sstream>
#include "MyClass.h"
using namespace std;

int main()
{
    MyClass obj("In C++");
    cout << "Object label: \"" << obj.getLabel() << "\"" << endl;

    //Setup the Python interpreter and eventually call the execute function in the
    //demo python script
    Py_Initialize();

    //Load python Demo script, "passmetopythonDemo.py"
    string PyModule("passmetopythonDemo");
    PyObject* pm = PyUnicode_DecodeFSDefault(PyModule.c_str());

    PyRun_SimpleString("import sys");
    stringstream cmd;
    cmd << "sys.path.append(\"" << "." << "\")";
    PyRun_SimpleString(cmd.str().c_str());

    PyObject* PyModuleP = PyImport_Import(pm);
    Py_DECREF(pm);

    //Now create PyObjects for the Python functions that we want to call
    PyObject* pFunc = PyObject_GetAttrString(PyModuleP, "execute");
    if(pFunc)
    {
        //Setup argument
        PyObject* pArgs = PyTuple_New(1);

        //Construct a PyObject* from long
        PyObject* pObj(NULL);

        /* My current attempt to create a valid argument to Python */
        pObj = PyLong_FromLong((long) &obj);
        PyTuple_SetItem(pArgs, 0, pObj);

        /***** Calling python here *****/
        cout << endl << "Calling function with a MyClass argument\n\n";
        PyObject* res = PyObject_CallObject(pFunc, pArgs);
        if(!res)
        {
            cerr << "Failed calling function..";
        }
    }
    return 0;
}
When running the above code, the execute() Python function, with a MyClass object as an argument, fails and returns NULL. The function is entered, though, as I can see the output ("Entering execute function") in the console; the failure indicates that the object passed is not a valid MyClass object.
There are a lot of examples of how to pass simple types, like ints, doubles or strings, to Python from C/C++. But there are very few examples showing how to pass a C/C++ object or pointer, which is kind of puzzling.
The above code, with a CMake file, can be checked out from github:
https://github.com/TotteKarlsson/miniprojects/tree/master/passMeToPython
This code is not to use any Boost Python or other APIs. Cython sounds interesting though, and if it can be used to simplify things on the C++ side, it could be acceptable.
This is a partial answer to my own question. I'm saying partial, because I do believe there is a better way.
Building on this post http://swig.10945.n7.nabble.com/Pass-a-Swig-wrapped-C-class-to-embedded-Python-code-td8812.html
I generated the swig runtime header, as described here, section 15.4: http://www.swig.org/Doc2.0/Modules.html#Modules_external_run_time
Including the generated header in the C++ code above allows the following code to be written:
PyObject* pObj = SWIG_NewPointerObj((void*)&obj, SWIG_TypeQuery("_p_MyClass"), 0 );
This code uses information from the SWIG Python wrapper source files, namely the "swig" name of the type MyClass, i.e. _p_MyClass.
With the above PyObject* as an argument to PyObject_CallObject, the Python execute() function in the code above executes fine, and the Python code, using the generated Python module, has proper access to the MyClass object's internal data. This is great.
Although the above code illustrates how to pass and retrieve data between C++ and Python in quite a simple fashion, it's not ideal, in my opinion.
The usage of the SWIG header file in the C++ code is really not that pretty, and in addition, it requires a user to "manually" look into the SWIG-generated wrapper code in order to find the "_p_MyClass" name.
There must be a better way!? Perhaps something could be added to the SWIG interface file to make this look nicer?
PyObject *pValue;
pValue = PyObject_CallMethod(pInstance, "add", "(i)", x);
if (pValue)
    Py_DECREF(pValue);
else
    PyErr_Print();
I have a project with SWIG set up to generate Python code. I have a typedef of std::string to Message and a say(Message) function. I am able to call say with a string in Python. I want to be able to make a variable of type Message; the Message type is exported to the library, but not to the Python wrapper. Here are my files:
test.h
#include <string>
#include <iostream>

typedef std::string Message;

void say(Message s);
test.cpp
#include "test.h"

void say(Message s)
{
    std::cout << s << std::endl;
}
test.i
%module test
%{
#include "test.h"
%}
typedef std::string Message;
%include "std_string.i"
%include "test.h"
Python example
import test
test.say('a')
# >>> a
# What I want to be able to do
msg = test.Message('a')
# >>> Traceback (most recent call last):
# >>> File "<stdin>", line 1, in <module>
# >>> AttributeError: module 'test' has no attribute 'Message'
My actual use case also involves typedefs to other types (primarily enums), and I'm curious whether those cases need any different handling. I believe I could wrap the objects in a class for the SWIG bindings and then modify the SWIG-generated classes (or maybe use a SWIG typemap), but that feels like a roundabout solution to what I would think is a common situation.
I thought this might be an issue with having access to the code in the string header, but I run into the same issue if I typedef something like an int.
My best approach so far has been a wrapper template:
template<typename T>
class Wrapper
{
public:
    Wrapper(T x) : data(x) {}
    T data;
    T operator()() { return data; }
};
And a corresponding %template directive in test.i:
%template(Message) Wrapper<std::string>;
Unfortunately, this seems to have a few drawbacks so far:
You have to actually call operator(), i.e., test.Message('a')() needs to be called.
You need to either use some conditional compilation or name the wrapper something different from the typedef; otherwise, test.say won't accept either the wrapper or a string, and thus it can't be used at all.
It doesn't seem to work with enums, failing with an error on construction.
I also thought I might be clever and change operator* to just return what was being wrapped, but it looks like SWIG wrapped whatever was returned anyway.
In general, typedefs in SWIG should "just work". (There's one exception where I know they regularly don't behave as expected, and that's when instantiating templates, but that isn't an issue here.)
In your example, I think the problem is simply the visibility of the typedef relative to the definition of std::string. If you change your .i file to:
%module test
%{
#include "test.h"
%}
%include "std_string.i"
typedef std::string Message;
%include "test.h"
Or
%module test
%{
#include "test.h"
%}
%include "std_string.i"
%include "test.h"
Then I'd expect your example code to work.
In terms of std::string vs const char*, there should be very little observable difference for Python users. Python's native string type will be converted correctly and automatically for either, so the rule I'd stick to is: if you're using C++, use the C++ type unless there's an overriding reason not to. POD-ness (or lack thereof) is unlikely to be the expensive part of your interface, and even less likely to be the bottleneck in your performance.
So I'm trying to import some functions contained in a .lib file into Python to build an SDK that will allow me to talk to some special hardware components. I read online that it is not really easy to import a .lib file into Python:
Static library (.lib) to Python project
So I'm trying to build a DLL using the .lib and its corresponding .h file. I don't have access to the .lib source code; all I have is the .h file. I've looked online and found this:
Converting static link library to dynamic dll
Since I'm building the DLL for Python, I can't use the .def file. I tried directly importing the .h and .lib files into a project and creating a DLL, but the functions were not implemented. So I made a separate .h file, called a wrapper, that wraps the functions in the original .h file and calls them, but the functions are still not implemented and working. And honestly, I highly doubt what I did is correct.
Here is my code:
hardware.h - the header file that came with the .lib file (note: only showing one function)
extern "C" int WINAPI GetDigitalOutputInfo(unsigned int deviceNumbers[16],
unsigned int numBits[16],
unsigned int numLines[16]);
_hardware.h - wrapper around the original header file
#pragma once
#include <Windows.h>

#ifdef Hardware_EXPORTS
#define Hardware_API __declspec(dllexport)
#else
#define Hardware_API __declspec(dllimport)
#endif

namespace Hardware
{
    class Functions
    {
    public:
        static Hardware_API int NewGetDigitalOutputInfo(unsigned int deviceNumbers[16],
                                                        unsigned int numBits[16],
                                                        unsigned int numLines[16]);
    };
}
Hardware.cpp - implementing the wrapper
#include "stdafx.h"
#include "hardware.h"
#include "_hardware.h"

#pragma comment(lib, "..\\lib\\PlexDO.lib")
#pragma comment(lib, "legacy_stdio_definitions.lib")

namespace Hardware
{
    int Functions::NewGetDigitalOutputInfo(unsigned int deviceNumbers[16],
                                           unsigned int numBits[16],
                                           unsigned int numLines[16]) {
        return GetDigitalOutputInfo(deviceNumbers, numBits, numLines);
    }
}
Anyway, I feel like making a wrapper is pointless, as I should be able to just use the original .h file and .lib file directly to call the functions; unless making a wrapper is the only way I can make a DLL without the lib file's source code. Are there ways to make a DLL without knowing the lib file's source code? Is there a way to import the lib file directly into Python? Any help is appreciated.
Thank you @qexyn for the help. I took the namespace and class out of my wrapper (_hardware.h) and made the functions global. Then I added extern "C" to those global functions, so my code ended up looking like this:
_hardware.h
#pragma once
#include <Windows.h>

#ifdef Hardware_EXPORTS
#define Hardware_API __declspec(dllexport)
#else
#define Hardware_API __declspec(dllimport)
#endif

extern "C" Hardware_API int NewGetDigitalOutputInfo(unsigned int deviceNumbers[16],
                                                    unsigned int numBits[16],
                                                    unsigned int numLines[16]);
After that, everything worked: I was able to get hardware info in my Python SDK. Make sure to add the extern "C"; otherwise, the names get mangled and show up ugly if you look up your function in Dependency Walker. Thanks for the help.
I am trying to compile a C++ module to use in scipy.weave that is composed of several C++ header and source files. These files contain classes and methods that extensively use the Numpy/C-API interface, but I am failing to figure out how to call import_array() successfully. I have been struggling with this for the past week and I am going nuts. I hope you can help me, because the weave documentation is not very explanatory.
In practice, I have a module called pycapi_utils that contains some routines to interface C objects with Python objects. It consists of a header file pycapi_utils.h and a source file pycapi_utils.cpp, such as:
//pycapi_utils.h
#if ! defined _PYCAPI_UTILS_H
#define _PYCAPI_UTILS_H 1
#include <stdlib.h>
#include <Python.h>
#include <numpy/arrayobject.h>
#include <tuple>
#include <list>
typedef std::tuple<const char*,PyObject*> pykeyval; //Tuple type (string,Pyobj*) as dictionary entry (key,val)
typedef std::list<pykeyval> kvlist;
//Declaration of methods
PyObject* array_double_to_pyobj(double* v_c, long int NUMEL); //Convert from array to Python list (double)
...
...
#endif
and
//pycapi_utils.cpp
#include "pycapi_utils.h"

PyObject* array_double_to_pyobj(double* v_c, long int NUMEL){
    //Convert a double array to a Numpy array
    PyObject* out_array = PyArray_SimpleNew(1, &NUMEL, NPY_DOUBLE);
    double* v_b = (double*) ((PyArrayObject*) out_array)->data;
    for (int i=0;i<NUMEL;i++) v_b[i] = v_c[i];
    free(v_c);
    return out_array;
}
Then I have a further module, model, that contains classes and routines dealing with some mathematical model. Again, it consists of a header and a source file:
//model.h
#if ! defined _MODEL_H
#define _MODEL_H 1

//model class
class my_model{
    int i,j;
public:
    my_model();
    ~my_model();
    double* update(double*);
};

//Simulator
PyObject* simulate(double* input);
#endif
and
//model.cpp
#include "pycapi_utils.h"
#include "model.h"

//Define class and methods
my_model::my_model(){
    ...
    ...
}

...
...

double* my_model::update(double* input){
    double* x = (double*)calloc(N,sizeof(double));
    ...
    ...
    // Do something
    ...
    ...
    return x;
}

PyObject* simulate(double* input){
    //Initialize Python interface
    Py_Initialize();
    import_array();

    my_model random_network;
    double* output;
    output = random_network.update(input);
    return array_double_to_pyobj(output); // from pycapi_utils.h
}
The above code is included in a scipy.weave module in Python with
def model_py(input):
    support_code="""
        #include "model.h"
    """
    code = """
        return_val = simulate(input.data());
    """
    libs = ['gsl','gslcblas','m']
    vars = ['input']
    out = weave.inline(code,
                       vars,
                       support_code=support_code,
                       sources=source_files,
                       libraries=libs,
                       type_converters=converters.blitz,
                       compiler='gcc',
                       extra_compile_args=['-std=c++11'],
                       force=1)
It fails to compile giving:
error: int _import_array() was not declared in this scope
Noteworthy is that if I also lump the source pycapi_utils.cpp into pycapi_utils.h, everything works fine. But I don't want that solution, as in practice my modules need to be included in several other modules that also use PyObjects and need to call import_array().
I was looking at this post on Stack Exchange, but I cannot figure out if and how to properly define the #define directives in my case. Also, the example in that post is not exactly my case: there, import_array() is called within the global scope of main(), whereas in my case import_array() is called within my simulate routine, which is invoked by the main() built by scipy.weave.
I had a similar problem. As the link you posted points out, the root of all evil is that PyArray_API is defined static, which means that each translation unit has its own PyArray_API, initialized to PyArray_API = NULL by default. Thus import_array() must be called once for every *.cpp file. In your case it should be sufficient to call it once in pycapi_utils.cpp and once in model.cpp. You can also test whether import_array() is necessary before actually calling it with:
if(PyArray_API == NULL)
{
    import_array();
}
So apparently if I include in the pycapi_utils module a simple initialization routine such as:
//pycapi_utils.h
...
...
void init_numpy();

//pycapi_utils.cpp
...
...
void init_numpy(){
    Py_Initialize();
    import_array();
}
and then I invoke this routine at the beginning of any function / method that uses Numpy objects in my C code, it works. That is, the above code is edited as:
//pycapi_utils.cpp
...
...
PyObject* array_double_to_pyobj(...){
init_numpy();
...
...
}
//model.cpp
...
...
PyObject* simulate(...){
init_numpy();
...
...
}
My only concern at this point is whether there is a way to minimize the number of calls to init_numpy(), or whether I simply have to call it from every function I define within my CPP modules that uses Numpy objects...