I want to embed Python into a shared library.
I use elmer to generate the C code from a Python script, and I compile that code into a static library. I used python2.7-config --cflags --ldflags to get the compiler and linker flags for my system.
Now when I try to test this static library with a test application, I get many "undefined reference to" errors (for every Py- and el- function). These errors point into the C file generated by elmer.
I know there are known pitfalls with embedding Python. I got this to work once in the past (without linking my test application against Python or elmer), but I no longer know how... (yeah, it is better to document something like this^^)
Thanks to n.m. I got the answer:
I learned that when building a static library, the linker is never invoked: a static library is just an archive of object files, so unresolved references are expected at that stage.
Moving the cflags and ldflags to the build of the next shared library or application in the chain, where actual linking happens, solved the problem.
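A minimal sketch of the fixed build, with hypothetical file names (the key point is that the Python linker flags appear only on the final link line):

# Compile the elmer-generated C file; only the compile flags are needed here.
gcc -c $(python2.7-config --cflags) elmer_generated.c -o elmer_generated.o
# Archive it into a static library; no linker runs, so no -l flags belong here.
ar rcs libembedded.a elmer_generated.o
# Link the final test application; this is where the ldflags must appear.
gcc test_app.c -L. -lembedded $(python2.7-config --ldflags) -o test_app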
Related
I have a C++ app where I'm trying to use pybind11 to support a scripting system that would allow users to write their own scripts. I'm attempting to use Python in much the same way that many people use Lua as a scripting language to extend their app's functionality.
There is one big difference I've found in regards to Lua vs Python/pybind11: with Lua I can statically link the scripting language into my executable or even package a single shared library, but with Python/pybind11 it seems that I am reliant on whether or not the end user has Python already installed on their system.
Is that correct?
Is there a way I can tell pybind11 to statically link the Python libraries?
It also seems that pybind11 will search the PATH for the Python executable, and then assume the shared libraries are in the same folder. Is there a way I can distribute the shared libraries and then tell my embedded interpreter to use those shared libraries?
My ultimate goal is that users can install my C++ app on their machine, and my app will have a built-in Python interpreter that can execute the scripts, regardless if Python is actually installed on the machine. Ideally I would do this with static linking, but dynamically linking and distributing the required libraries is also acceptable.
Pybind11 does not search for a Python executable or anything like that. You link against libpythonX.Y.so.Z, and it works just like any other shared library. So you can link against the Python library that you distribute; you only need to make sure your executable will find that library at run time. Again, this is no different from distributing any other shared library: use rpath, or set LD_LIBRARY_PATH in a wrapper script, or whatever. There are a ton of questions and answers about this right here on SO (first hit).
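As an illustration (the paths and names below are hypothetical), linking against a bundled Python and embedding an rpath relative to the executable might look like:

# Link against the Python shared library shipped in a lib/ directory next to
# the executable, and embed an $ORIGIN-relative rpath so the loader finds it.
g++ -I/path/to/bundled/include/python3.8 app.cpp \
    -L/path/to/bundled/lib -lpython3.8 \
    -Wl,-rpath,'$ORIGIN/lib' -o app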
You can link a static Python library, if you have one that is usable. On my system (Ubuntu 20 x64), trying to link against libpython3.8.a produces a ton of relocation errors, because that library was built without -fPIC. The system library only works when the entire executable is built statically, i.e. with g++ -static. If you need a dynamic executable, you need to build your own static Python library with -fPIC. For the record, here is the command I used to link against the system library:
g++ -static -pthread -I/usr/include/python3.8 example.cpp -lpython3.8 -ldl -lutil -lexpat -lz
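A rough sketch of building such a library yourself, assuming standard CPython/autoconf conventions (the prefix is hypothetical):

# Configure CPython as a static library compiled as position-independent code,
# so the resulting libpython3.8.a can be linked into a dynamic executable.
./configure --disable-shared --prefix=/opt/python-static CFLAGS="-fPIC"
make && make install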
Last but not least, if your users want to write Python scripts, they probably have Python installed.
TL;DR
Adding pybind11 bindings to a working C++ DLL project allows me to import and use the resulting DLL in Python, but breaks my ability to load it from C++ code via the boost/dll machinery.
Summary
I've got a C++ library that I compile to FooLib.dll. I use boost/dll's BOOST_DLL_ALIAS and boost::dll::import_alias() to export and load a class Foo that does some work in other C++ code.
Some details omitted but it all works great, following this recipe.
I'd like to be able to call the same library code from Python to do some complicated functional testing and do comparisons to numpy/scipy prototypes without having to write so much test code in C++.
So I tried to add pybind11 bindings to the FooLib DLL project using PYBIND11_MODULE.
It compiles, and I get a FooLib.dll. I can copy and rename that to FooLib.pyd, import it as a Python module, and it all works fine. I export Foo as a Python class, and it works.
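For reference, the rename-and-import smoke test amounts to (Windows commands; the names are from this question):

REM Rename the DLL so Python treats it as an extension module, then import it.
copy FooLib.dll FooLib.pyd
python -c "import FooLib"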
However, when I compile in the pybind11 bindings, the boost/dll import machinery can no longer load the original FooLib.dll. I verified with boost::dll::library_info() that the appropriate CreateFoo symbol is still exported from the DLL, but loading with boost::dll::import_alias() fails with:
boost::dll::shared_library::load() failed: The specified module could not be found
Minimal Example
Unfortunately, something that needs a C++ and Python executable and compiled boost isn't exactly minimal, but I did my best here:
https://github.com/danzimmerman/pybind-boostdll-minimal
Direct links to the source files:
DLL Project Files
HelloSayerLib.h
HelloSayerImp.cpp
C++ Test Code
HelloSayerLibCppTest.cpp
Python Test Code
HelloSayerLibPythonTests.py
Any advice for next steps?
Is it even possible to compile to one binary that works for both C++ and Python like this?
The suggestion in n. 'pronouns' m.'s comment is correct. Simply copying the Python DLL from the Anaconda distribution I built against into the C++ program's run directory fixes the problem. It makes sense in retrospect: on Windows, "The specified module could not be found" can refer to a missing dependency of the DLL being loaded rather than to the DLL itself, but that didn't occur to me.
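For anyone diagnosing a similar failure, the DLL's direct dependencies can be listed with dumpbin (assuming the Visual Studio command-line tools are available; the DLL name is from this question):

REM Any python DLL listed here must be findable on the DLL search path,
REM e.g. in the C++ program's run directory.
dumpbin /DEPENDENTS FooLib.dll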
This makes it even more likely that I should keep the builds separate, or at least set up my real project to build the pybind11 bindings only on my machine.
I have a Python project containing Python bindings to a C++ library, mylib.lib. Mylib.lib depends on various libs such as boost, ICU, xml++, etc. My setup.py pipeline has a step where I have to compile the Python bindings (pybind11).
Because this entire setup is new to me, I am working on it incrementally. I added all the required headers and the compilation itself went smoothly; the linker reported a lot of unresolved external symbols, which was OK at this stage, so I then linked mylib.lib, at which point LINK.EXE asked for one very specific library:
LINK : fatal error LNK1104: cannot open file 'libboost_filesystem-vc142-mt-x64-1_72.lib'
I have no problem linking boost in setup.py; I just don't understand how the linker can be so specific. I never asked for that library, and in case I had missed something, I made the linker verbose, and there is no mention of boost in the command. Any clues?
Cheers!
Edit: this happens only with Boost; the rest of the libraries are ignored, and I have to link them manually.
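This Boost-only behavior matches Boost's auto-linking on MSVC: the Boost headers embed the exact library file name via #pragma comment(lib, ...), so LINK.EXE requests it even though no linker flag ever mentions Boost. A sketch of opting out so every library is linked explicitly (the paths are hypothetical):

REM BOOST_ALL_NO_LIB disables the embedded #pragma comment(lib, ...) requests,
REM so the Boost libraries must then be named manually on the link line.
cl /DBOOST_ALL_NO_LIB /IC:\boost\include bindings.cpp /link /LIBPATH:C:\boost\lib libboost_filesystem-vc142-mt-x64-1_72.lib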
I'm trying to build a python interface to a custom CGAL function. The function works just fine on its own, but I get errors when I try to do the linking with Python using SWIG.
I have been using cgal_create_cmake_lists to create the CMakeLists.txt for the CGAL part, running cmake and make, and then using the generated .o files for linking with SWIG, but I think I am not linking the required boost and GMP libraries properly into the SWIG-generated shared object (.so) file.
SWIG generates the shared object files for me, but then when I try to import my module, I get an error:
Symbol not found: __ZN4CGAL11NULL_VECTORE
I am using the command ld -bundle -flat_namespace -undefined suppress -o cgal_executable.so *.o to generate the shared object file. Should I be linking the boost and GMP libraries in this step? If so, how do I do that? I am on macOS. I can share my full code if needed.
I know there is a project providing Python-SWIG bindings for CGAL, but I really don't need all of it, and it also seems to be missing a lot of useful CGAL features. I would like to work in C++ and just interface my functions with Python as needed.
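A sketch of where to look in that link step: demangling the missing symbol shows it is CGAL::NULL_VECTOR, which classic (non-header-only) CGAL builds define in libCGAL, so that library, along with GMP and MPFR, likely needs to be added (the library paths are hypothetical):

# Demangle the missing symbol; it comes out as CGAL::NULL_VECTOR.
c++filt __ZN4CGAL11NULL_VECTORE
# Add libCGAL plus GMP/MPFR to the same bundle link step:
ld -bundle -flat_namespace -undefined suppress -o cgal_executable.so *.o \
   -L/usr/local/lib -lCGAL -lgmp -lmpfr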
I'm compiling a SWIG-wrapped C++ library into a Python module that should ideally be distributable, so that individuals can use the library transparently like a normal module. I'm building the library using cmake and SWIG on OSX 10.8.2 (system framework: Apple Python 2.7.2; installed framework: python.org Python 2.7.5).
The trouble I'm running into is that after linking with the framework, the compiled library is very selective about the version of Python being run, even though otool -L shows that it is compiled with "compatibility version 2.7.0". It appears that the different distributions have slightly different linker symbols, and stuff starts to break.
The most common problem is that it crashes the Python kernel with Fatal Python error: PyThreadState_Get: no current thread (according to this question, indicative of a linking incompatibility). I can get my library to work in the Python it was compiled against.
Unfortunately this library is for use in an academic laboratory, with computers of all different ages and operating systems, many of them kept permanently out of date in order to run proprietary software that hasn't been updated in years, and I certainly don't have time to play I.T. and fix all of them. Currently I've just been compiling against the version of Python that comes with the latest Enthought distribution, since most computers can get that one way or another. A lot of the researchers I work with use a Python IDE specific to their field that comes with a built-in interpreter, which is not modifiable and not a framework build (so I can't build against it); for the time being they can run their experiment scripts in Enthought as a stop-gap, but it's not ideal. Even when I link against the python.org distribution that is the same version as the built-in IDE Python (2.7.2 I think; it even has the same damn release number), it still breaks the same way.
In any case, the question is: is there any way to link a SWIG-wrapped Python library so that it will run (at least on OSX) regardless of which interpreter is importing it (given certain minimum conditions, like guaranteed to be >= 2.7.0)?
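One quick check that helps here is inspecting which Python library a given build of the module is actually bound to (the module name is hypothetical):

# A hard-coded libpython or framework path in this output explains why only
# that one interpreter can import the module.
otool -L _mymodule.so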
EDIT
Compiling against the Canopy-installed Python version with the following linker flags in cmake:
set (CMAKE_SHARED_LINKER_FLAGS "-L ~/Library/Enthought/Canopy_32bit/User/lib -ldl -framework CoreFoundation -lpython2.7 -u _PyMac_Error ~/Library/Enthought/Canopy_32bit/User/lib")
This results in an @rpath symbol path when examining the linked library with otool, and it seems to work fine with Enthought/Canopy on other OSX systems. The -lpython seems to be optional; it adds an additional Python symbol reference in the library to the OSX Python (not the system Python).
Compiling against the system Python with the following linker flags:
set (CMAKE_SHARED_LINKER_FLAGS "-L /Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/config -ldl -framework CoreFoundation -u _PyMac_Error /Library/Frameworks/Python.framework/Versions/Current/Python")
This works in Enthought and in the system Python.
Neither of these works with the Python bundled with PsychoPy, which is the target environment; compiling against the bundled Python works with PsychoPy but with no other Python.
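One technique aimed at exactly this interpreter-agnostic goal, offered as a suggestion rather than something tried above: on OSX, leave the Python symbols undefined at link time so they are resolved from whichever interpreter loads the module:

# Do not link libpython or a framework binary together with this flag; Python
# symbols are resolved at import time from the hosting interpreter.
set (CMAKE_SHARED_LINKER_FLAGS "-undefined dynamic_lookup")

This is how distutils itself links extension modules on OS X, which is why stock extension modules are not tied to the distribution they were built against.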
I've been getting the same error/having the same problem. I'd be interested if you've found a solution.
I have found that if I compile against the native Python include directory and run the native OS X Python binary /usr/bin/python, it always works just fine. Even when I compile against some other Python library (like the one I find at /Applications/Canopy.app/appdata/canopy-1.0.3.1262.macosx-x86_64/Canopy.app/Contents/include), I can get the native OS X interpreter to work just fine.
I can't seem to get the Enthought version to work, ever. Which directory are you compiling against for use with Enthought/Canopy?
There also seems to be some question of configuring SWIG at installation time to know about a particular Python library, but this might not be related: http://swig.10945.n7.nabble.com/SWIG-Installation-mac-osx-10-8-3-Message-w-o-attachments-td13183.html