Building a Python interface to a custom CGAL function

I'm trying to build a Python interface to a custom CGAL function. The function works just fine on its own, but I get errors when I try to link it with Python using SWIG.
I have been using cgal_create_cmake_lists to create the CMakeLists.txt for the CGAL part, running cmake and make, and then using the generated .o files for the SWIG link step. However, I think I am not linking the required Boost and GMP libraries properly into the SWIG-generated shared object (.so) file.
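For reference, the SWIG side of such a pipeline typically looks something like the sketch below; the interface file name cgal_module.i and the include path are hypothetical.

# generate the Python wrapper from a SWIG interface file (hypothetical name)
swig -c++ -python cgal_module.i
# compile the generated wrapper against the Python headers
c++ -c cgal_module_wrap.cxx $(python-config --cflags) -I/usr/local/include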
SWIG generates the shared object files for me, but when I then try to import my module, I get an error:
Symbol not found: __ZN4CGAL11NULL_VECTORE
I am using the command ld -bundle -flat_namespace -undefined suppress -o cgal_executable.so *.o to generate the shared object file. Should I be linking the Boost and GMP libraries in this step? If so, how do I do that? I am on macOS. I can share my full code if needed.
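For illustration, a link step that does pull those libraries in might look roughly like the sketch below. The Homebrew prefix /usr/local and the Boost component name are assumptions, and using the compiler driver instead of bare ld also brings in the C++ runtime.

# sketch: let undefined Python symbols resolve at import time, and link the
# support libraries explicitly (paths and component names are assumptions)
clang++ -bundle -undefined dynamic_lookup -o cgal_executable.so *.o \
    -L/usr/local/lib -lgmp -lmpfr -lboost_system \
    -lCGAL  # only if your CGAL version ships a compiled library (pre-5.0)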
I know there is an existing project for Python-SWIG bindings of CGAL, but I really don't need all of it, and it also seems to be missing a lot of useful CGAL features. I would like to work in C++ and just interface my functions with Python as needed.

Related

Packing Python with a C++ app that embeds Python using pybind11

I have a C++ app where I'm trying to use pybind11 in order to support a scripting system in my app that would allow users to write their own scripts. I'm attempting to use Python in much the same way that many people use Lua as a scripting language to extend their app's functionality.
There is one big difference I've found between Lua and Python/pybind11: with Lua I can statically link the scripting language into my executable, or even package a single shared library, but with Python/pybind11 I seem to be reliant on whether or not the end user already has Python installed on their system.
Is that correct?
Is there a way I can tell pybind11 to statically link the Python libraries?
It also seems that pybind11 will search the path for the Python executable and then assume the shared libraries are in the same folder. Is there a way I can distribute the shared libraries and tell my embedded interpreter to use them?
My ultimate goal is that users can install my C++ app on their machine, and my app will have a built-in Python interpreter that can execute the scripts, regardless of whether Python is actually installed on the machine. Ideally I would do this with static linking, but dynamically linking and distributing the required libraries is also acceptable.
pybind11 does not search for a Python executable or anything like that. You link against libpythonX.Y.so.Z, and it works just like any other shared library. So you can link against the Python library that you distribute; you only need to make sure your executable will find that library at run time. Again, this is no different from distributing any other shared library: use rpath, or set LD_LIBRARY_PATH in a wrapper script. There are plenty of questions and answers about this here on SO.
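As a minimal sketch (the install prefix, Python version, and file names are assumptions):

# link against the Python library you ship, and bake in a relative rpath
g++ example.cpp -I/opt/myapp/include/python3.8 \
    -L/opt/myapp/lib -lpython3.8 \
    -Wl,-rpath,'$ORIGIN/../lib' -o myapp
# at run time the loader then searches ../lib relative to the executable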
You can link a static Python library, if you have one that is usable. On my system (Ubuntu 20 x64), trying to link against libpython3.8.a produces a ton of relocation errors. The system library only works when the entire executable is built statically, i.e. with g++ -static ... If you need a dynamic executable, you have to build your own static Python library with -fPIC. For the record, here is the command I used to link against the system library:
g++ -static -pthread -I/usr/include/python3.8 example.cpp -lpython3.8 -ldl -lutil -lexpat -lz
Last but not least, if your users want to write Python scripts, they probably have Python installed.

pybind with boost/dll - dual use DLL?

TL;DR
Adding pybind11 bindings to a working C++ DLL project lets me import and use the resulting DLL in Python, but it breaks my ability to use the DLL from C++ code via the boost/dll machinery.
Summary
I've got a C++ library that I compile to FooLib.dll. I use boost/dll's BOOST_DLL_ALIAS and boost::dll::import_alias() to export and load a class Foo that does some work in other C++ code.
Some details omitted but it all works great, following this recipe.
I'd like to be able to call the same library code from Python to do some complicated functional testing and do comparisons to numpy/scipy prototypes without having to write so much test code in C++.
So I tried to add pybind11 bindings to the FooLib DLL project using PYBIND11_MODULE.
It compiles and I get a FooLib.dll. I can copy and rename that to FooLib.pyd, import it as a Python module, and it all works fine: Foo is exported as a Python class.
However, when I compile the pybind bindings in, the boost/dll import machinery can no longer load the original FooLib.dll. I verified with boost::dll::library_info() that the appropriate CreateFoo symbol is exported from the DLL, but loading with boost::dll::import_alias() fails with:
boost::dll::shared_library::load() failed: The specified module could not be found
Minimal Example
Unfortunately, something that needs a C++ executable, a Python interpreter, and compiled Boost isn't exactly minimal, but I did my best here:
https://github.com/danzimmerman/pybind-boostdll-minimal
Direct links to the source files:
DLL Project Files
HelloSayerLib.h
HelloSayerImp.cpp
C++ Test Code
HelloSayerLibCppTest.cpp
Python Test Code
HelloSayerLibPythonTests.py
Any advice for next steps?
Is it even possible to compile to one binary that works for both C++ and Python like this?
The suggestion in n. 'pronouns' m.'s comment is correct: simply copying the Python DLL from the Anaconda distribution I built against into the C++ program's run directory fixes the problem. It makes sense in retrospect, but it didn't occur to me.
This makes it even more likely that I should keep the builds separate, or at least set up my real project to build the pybind bindings only on my machine.
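For anyone hitting the same error: it usually means one of the DLL's own dependencies (here, the Python DLL) cannot be found, and you can list those dependencies with standard tools. The file name below is from this example.

# list the DLLs that FooLib.dll depends on (run from an MSYS2/MinGW shell;
# 'dumpbin /DEPENDENTS FooLib.dll' is the MSVC equivalent)
objdump -p FooLib.dll | grep "DLL Name"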

Python + setuptools: distributing a pre-compiled shared library with boost.python bindings

I have a C++ library (we'll call it Example in the following) for which I wrote Python bindings using the boost.python library. This Python-wrapped library will be called pyExample. The entire project is built using CMake and the resulting Python-wrapped library is a file named libpyExample.so.
When I use the Python bindings from a Python script located in the same directory as libpyExample.so, I simply have to write:
import libpyExample
libpyExample.hello_world()
and this executes a hello_world() function exposed by the wrapping process.
What I want to do
For convenience, I would like my pyExample library to be available from anywhere simply using the command
import pyExample
I also want pyExample to be easily installable in any virtualenv in just one command. So I thought a convenient process would be to use setuptools to make that happen. That would therefore imply:
Making libpyExample.so visible for any Python script
Changing the name under which the module is accessed
I did find many things about compiling C++ extensions with setuptools, but nothing about packaging a pre-compiled C++ extension. Is what I want to do even possible?
What I do not want to do
I don't want to build the pyExample library with setuptools; I would like to avoid modifying the existing project too much. The CMake build is just fine, and I can retrieve the libpyExample.so file very easily.
If I understand your question correctly, you have the following situation:
you have an existing CMake-based build of a C++ library with Python bindings
you want to package this library with setuptools
The latter then allows you to call python setup.py install --user, which installs the lib in the site-packages directory and makes it available from every path in your system.
What you want is possible if you overload the classes that setuptools uses to build extensions, so that those classes actually call your CMake build system. This is not trivial, but you can find a working example, provided by the pybind11 project, here:
https://github.com/pybind/cmake_example
Have a look at setup.py; you will see how the build_ext and Extension classes are inherited from and modified to execute the CMake build.
This should work for you out of the box, or with little modification if your build requires special -D flags to be set.
I hope this helps!
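If wrapping the CMake build in setup.py is more than you want, a simpler alternative (a sketch only; the layout and names below are assumptions based on the question) is to wrap the pre-built libpyExample.so in a thin pure-Python package and let setuptools ship it as package data:

# build once with CMake, then wrap the binary in a package named pyExample
mkdir pyExample
cp build/libpyExample.so pyExample/
echo "from .libpyExample import *" > pyExample/__init__.py
# in setup.py, set packages=['pyExample'],
# package_data={'pyExample': ['libpyExample.so']} and zip_safe=False

The trade-off is that the resulting package is tied to the platform and Python version the .so was built for.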

embedding python into c++ library

I want to embed Python into a shared library.
I use Elmer to generate C code from a Python script, and I compile this code into a static library. I used python2.7-config --cflags --ldflags to get the compiler and linker flags for my system.
Now, when I test this static library with a test application, I get many 'undefined reference to' errors, one for every Py* and el* function. These errors point into the C file generated by Elmer.
I know there are pitfalls when embedding Python. I got this to work once in the past (without linking my test application against Python or Elmer), but I do not know how anymore (yes, it would have been better to document something like this).
Thanks to n.m. I got the answer:
When building a static library, the linker is not invoked, so linker flags have no effect at that stage.
Moving the cflags and ldflags to the build of the next shared library or application in the chain solved the problem.
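In other words (a sketch with made-up file names), the Python link flags belong on the final link, not on the archive step:

# creating the static library: no linking happens here, so ldflags are ignored
ar rcs libembedded.a embedded.o
# the next shared library (or application) is where the linker actually runs,
# so the Python flags go on this step
g++ -shared -o libwrapper.so wrapper.o -L. -lembedded $(python2.7-config --ldflags)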

Python ctypes error GOMP_critical_end when loading library

I have a library that I compiled with gcc using -fopenmp, linking against libmkl_gnu_thread.a.
When I try to load this library using ctypes I get the error message
undefined symbol: GOMP_critical_end
When I compile without OpenMP and link against libmkl_sequential.a instead of libmkl_gnu_thread.a, the library works fine, but I'd rather not have to build different versions just to support Python.
How do I fix this error? Do I need to build Python from source with OpenMP support? I'd like to avoid that, since users don't want to have to build their own Python to use this software.
I'm using Python 2.7.6.
Passing -fopenmp while compiling enables OpenMP support and introduces, into the resulting object file, references to functions from libgomp, the GNU OpenMP run-time support library. You should therefore link your shared object (a.k.a. shared library) against libgomp, which tells the run-time linker to also load libgomp (if it is not already loaded via some other dependency) whenever your library is used, so that all symbols can be resolved.
Linking against libgomp can be done in two ways (both are sketched below):
If you use GCC to also link the object files and produce the shared object, just give it the -fopenmp flag.
If you use the system linker (usually that's ld), then give it the -lgomp option.
A word of warning for the second case: if you are using a GCC that is not the default system-wide one, e.g. you have multiple GCC versions installed, use a version that comes from a separate package, or have built one yourself, you should provide the correct path to the libgomp.so that matches your version of GCC.
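A sketch of both options, plus the non-default-GCC case (the file names and the GCC install path are placeholders):

# 1) link with the GCC driver: -fopenmp pulls in libgomp automatically
gcc -shared -fopenmp -o mylib.so mylib.o
# 2) link with ld directly: name libgomp explicitly
ld -shared -o mylib.so mylib.o -lgomp
# 3) non-default GCC: point the link (and the run-time search) at its libgomp
gcc -shared -fopenmp -o mylib.so mylib.o \
    -L/opt/gcc-9.3/lib64 -Wl,-rpath,/opt/gcc-9.3/lib64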
