I have a C++ app where I'm trying to use pybind11 to support a scripting system that allows users to write their own scripts. I'm attempting to use Python in much the same way that many people use Lua as a scripting language to extend an app's functionality.
There is one big difference I've found between Lua and Python/pybind11: with Lua I can statically link the scripting language into my executable, or even package a single shared library, but with Python/pybind11 it seems that I am reliant on whether or not the end user already has Python installed on their system.
Is that correct?
Is there a way I can tell pybind11 to statically link the Python libraries?
It also seems that pybind11 will search the PATH for the python executable and then assume the shared libraries are in the same folder. Is there a way I can distribute the shared libraries and then tell my embedded interpreter to use those shared libraries?
My ultimate goal is that users can install my C++ app on their machine, and my app will have a built-in Python interpreter that can execute the scripts, regardless of whether Python is actually installed on the machine. Ideally I would do this with static linking, but dynamically linking and distributing the required libraries is also acceptable.
pybind11 does not search for a Python executable or anything like that. You link against libpythonX.Y.so.Z, and it works just like any other shared library. So you can link against a Python library that you distribute yourself; you only need to make sure your executable will find it at run time. Again, this is no different from distributing any other shared library: use rpath, use LD_LIBRARY_PATH in a wrapper script, or whatever. There are plenty of questions and answers about this right here on SO (first hit).
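To make this concrete, here is a minimal embedding sketch. The build line in the comment is an illustration only; the include paths, Python version, and lib/ directory are assumptions for the example, not anything pybind11 mandates.

// Minimal pybind11 embedding sketch. The build line below is hypothetical;
// adjust the paths and Python version for your system:
//
//   g++ main.cpp -I/usr/include/python3.8 -Iextern/pybind11/include -lpython3.8 -Wl,-rpath,'$ORIGIN/lib'
//
// With that rpath, a libpython3.8.so.1.0 bundled in a lib/ directory next
// to the executable is found at run time.
#include <pybind11/embed.h>
namespace py = pybind11;

int main() {
    py::scoped_interpreter guard{};  // starts the interpreter; stops it when the scope exits
    py::exec("print('hello from embedded Python')");
    return 0;
}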
You can link a static Python library, if you have one that is usable. On my system (Ubuntu 20 x64), trying to link against libpython3.8.a produces a ton of relocation errors. The system library only works when the entire executable is built statically, i.e. with g++ -static ... If you need a dynamic executable, you need to build your own static Python library with -fPIC. For the record, here is the command I used to link against the system library.
g++ -static -pthread -I/usr/include/python3.8 example.cpp -lpython3.8 -ldl -lutil -lexpat -lz
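(For context, a minimal example.cpp that such a command could build might look like the following; the actual file isn't shown above, so this is just a plausible sketch.)

// Hypothetical example.cpp for the static-link command above.
#include <Python.h>

int main() {
    Py_Initialize();                                     // start the embedded interpreter
    PyRun_SimpleString("print('embedded Python works')");
    Py_Finalize();                                       // shut it down
    return 0;
}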
Last but not least, if your users want to write Python scripts, they probably have Python installed.
I'm trying to build a python interface to a custom CGAL function. The function works just fine on its own, but I get errors when I try to do the linking with Python using SWIG.
I have been using 'cgal_create_cmake_lists' to create the CMakeLists.txt for the CGAL part, running cmake and make, and then linking the generated .o files with SWIG. But I think I am not linking the required Boost and GMP libraries properly into the SWIG-generated shared object (.so) file.
SWIG generates the shared object files for me, but then when I try to import my module, I get an error:
Symbol not found: __ZN4CGAL11NULL_VECTORE
I am using the command "ld -bundle -flat_namespace -undefined suppress -o cgal_executable.so *.o" to generate the shared object file. Should I be linking the Boost and GMP libraries in this step? If so, how do I do that? I am on macOS. I can share my full code if needed.
I know there is a project for CGAL-Python bindings via SWIG, but I really don't need all of it, and it also seems to be missing a lot of useful CGAL features. I would like to work in C++ and just interface my functions with Python as needed.
I am working with the 3.6.4 source release of Python. I have no trouble building it with Visual Studio as a dynamic library (/MDd); I can link the Python .dll to my own code and verify its operation.
But when I build it (and my code) with /MTd, it soon runs off the rails when I try to open a file with a Python program. A debug assertion fails in read.cpp ("Expression: _osfile(fh) & FOPEN"). What I believe is happening is that the Python .dll is linking with the improper system libraries. What I can't figure out is how to get it to link with the correct ones (the static libraries).
This is what I needed to do to build and use python statically embedded in another application.
To build the static python library (e.g., python36_d.lib, python36.lib)
Convert ALL projects in the python solution (pcbuild.sln) to static. This is about 40 projects, so it may take a while. This includes setting library products to be built as 'static lib', and changing all /MD and /MDd build options to /MT and /MTd.
For at least the pythoncore project, add Py_NO_ENABLE_SHARED to the preprocessor defines. This tells the project it will be looking for calls from static libraries.
By hook or by crook, find yourself a pyconfig.h file and put it in the Include area of your Python build. It is unclear how this file is built from the Windows tools, but one seems to be able to snag one from other sources and it works OK. One could probably grab the pyconfig.h from the pre-compiled version of the code you are building. [By the way, the Python I built was 3.6.5, built with Visual Studio 2015 Update 3.]
Hopefully, this should enable you to build both python36.lib and python36_d.lib. Now you need to make changes to your application project(s) to enable it to link with the python library. You need to do this:
Add the Python Include directory to the General->Include Directories list.
Add the Python Library directories to the General->Library Directories list.
These will be ..\PCBuild\win32 and ..\PCBuild\amd64.
Add the define Py_NO_ENABLE_SHARED to the C/C++ -> Preprocessor area.
For Linker->Input, add (for release builds) python36.lib;shlwapi.lib;version.lib and (for debug builds) python36_d.lib;shlwapi.lib;version.lib.
And that should be it. It should run and work. But one more thing: in order to function, the executable needs to access the Lib directory of the Python build. So a copy of it needs to be moved to wherever the executable (containing the embedded Python) resides, or you can add the Lib area to the PYTHONPATH environment variable on Windows. That should work as well.
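As a hedged sketch of that last point: instead of copying Lib next to the executable or relying on an environment variable, the embedded interpreter can also be pointed at a bundled Lib copy explicitly with Py_SetPath (the relative directory name here is just an example):

// Sketch: overriding the module search path of an embedded Python 3.6.
// "python-lib" is a hypothetical folder shipped alongside the executable.
#include <Python.h>

int main() {
    Py_SetPath(L"python-lib");   // must be called before Py_Initialize()
    Py_Initialize();
    PyRun_SimpleString("import sys; print(sys.path)");
    Py_Finalize();
    return 0;
}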
That's about all of it.
I have a C++ library (we'll call it Example in the following) for which I wrote Python bindings using the boost.python library. This Python-wrapped library will be called pyExample. The entire project is built using CMake and the resulting Python-wrapped library is a file named libpyExample.so.
When I use the Python bindings from a Python script located in the same directory as libpyExample.so, I simply have to write:
import libpyExample
libpyExample.hello_world()
and this executes a hello_world() function exposed by the wrapping process.
What I want to do
For convenience, I would like my pyExample library to be available from anywhere simply using the command
import pyExample
I also want pyExample to be easily installable in any virtualenv in just one command. So I thought a convenient process would be to use setuptools to make that happen. That would therefore imply:
Making libpyExample.so visible for any Python script
Changing the name under which the module is accessed
I did find many things about compiling C++ extensions with setuptools, but nothing about packaging a pre-compiled C++ extension. Is what I want to do even possible?
What I do not want to do
I don't want to build the pyExample library with setuptools; I would like to avoid modifying the existing project too much. The CMake build is just fine, and I can retrieve the libpyExample.so file very easily.
If I understand your question correctly, you have the following situation:
you have an existing CMake-based build of a C++ library with Python bindings
you want to package this library with setuptools
The latter then allows you to call python setup.py install --user, which installs the lib in the site-packages directory and makes it available from every path in your system.
What you want is possible if you override the classes that setuptools uses to build extensions, so that they actually call your CMake build system. This is not trivial, but you can find a working example here, provided by the pybind11 project:
https://github.com/pybind/cmake_example
Have a look at setup.py; you will see how the build_ext and Extension classes are inherited from and modified to execute the CMake build.
This should work for you out of the box, or with little modification if your build requires special -D flags to be set.
I hope this helps!
I'm compiling a SWIG-wrapped C++ library into a python module that should ideally be distributable, so that individuals can use the library transparently, like any other module. I'm building the library using cmake and SWIG on OSX 10.8.2 (system framework: Apple python 2.7.2; installed framework: python.org python 2.7.5).
The trouble I'm running into is that after linking with the framework, the compiled library is very selective about the version of python that is being run, even though otool -L shows that it is compiled with "compatibility version 2.7.0". It appears that the different distributions have slightly different linker symbols, and things start to break.
The most common problem is that it crashes the python kernel with a Fatal Python error: PyThreadState_Get: no current thread (according to this question, indicative of a linking incompatibility). I can get my library to work in the python it was compiled against.
Unfortunately this library is for use in an academic laboratory, with computers of all different ages and operating systems, many of them kept in permanent deprecation in order to run proprietary software that hasn't been updated in years, and I certainly don't have time to play I.T. and fix all of them. Currently I've just been compiling against the version of python that comes with the latest Enthought distribution, since most computers can get that one way or another. A lot of the researchers I work with use a python IDE specific to their field that comes with a built-in interpreter, but it is not modifiable and not a framework build (so I can't build against it); for the time being they can run their experiment scripts in Enthought as a stop-gap, but it's not ideal. Even when I link against the python.org distribution that is the same version as the IDE's built-in python (2.7.2 I think; it even has the same damn release number), it still breaks the same way.
In any case, the question is: is there any way to link a SWIG-wrapped python library so that it will run (at least on OSX) regardless of which interpreter is importing it (given certain minimum conditions, like guaranteed to be >= 2.7.0)?
EDIT
Compiling against the Canopy-installed python version, with the following linker flags in cmake:
set (CMAKE_SHARED_LINKER_FLAGS "-L ~/Library/Enthought/Canopy_32bit/User/lib -ldl -framework CoreFoundation -lpython2.7 -u _PyMac_Error ~/Library/Enthought/Canopy_32bit/User/lib")
This results in an @rpath symbol path when examining the linked library with otool. It seems to work fine with Enthought/Canopy on other OSX systems. The -lpython seems to be optional; it adds an additional python symbol in the library referencing the OSX python (not the system python).
Compiling against system python with the following linker flags
set (CMAKE_SHARED_LINKER_FLAGS "-L /Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/config -ldl -framework CoreFoundation -u _PyMac_Error /Library/Frameworks/Python.framework/Versions/Current/Python")
Works in enthought and system python
Neither of these works with the python bundled with psychopy, which is the target environment; compiling against the bundled python works with psychopy but with no other python.
I've been getting the same error/having the same problem. I'd be interested if you've found a solution.
I have found that if I compile against the native python include directory and run the native OS X python binary /usr/bin/python, it always works just fine. Even when I compile against some other python library (like the one I find at /Applications/Canopy.app/appdata/canopy-1.0.3.1262.macosx-x86_64/Canopy.app/Contents/include), I can get the native OS X interpreter to work just fine.
I can't seem to get the Enthought version to work, ever. What directory are you compiling against for use with Enthought/Canopy?
There also seems to be some question of configuring SWIG at installation to know about a particular python library, but this might not be related: http://swig.10945.n7.nabble.com/SWIG-Installation-mac-osx-10-8-3-Message-w-o-attachments-td13183.html
I've embedded python on a mobile device successfully, but now how do I include a python library such as urllib?
Additionally, how can I include my own python scripts without a PYTHONPATH?
(please note: python is not installed on this system)
The easiest way is to create a .zip file containing all the python code you need and add it to your process's PYTHONPATH environment variable (via setenv()) before initializing the embedded Python interpreter. .pyd libraries can be handled similarly: place them in the same directory as the .zip and include that directory in PYTHONPATH as well.
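A minimal sketch of that approach on a POSIX system (the archive path is a made-up example):

// Sketch: pointing an embedded interpreter at a bundled .zip of python
// code before the interpreter starts. setenv() is POSIX; see the Windows
// caveat below.
#include <Python.h>
#include <stdlib.h>

int main() {
    setenv("PYTHONPATH", "/opt/myapp/python-lib.zip", 1);  // hypothetical path
    Py_Initialize();
    PyRun_SimpleString("import sys; print(sys.path)");
    Py_Finalize();
    return 0;
}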
Usage of the setenv() call can cause trouble on Windows if you're mixing C-runtime versions. I spent many aggravating hours learning that setenv() only sets the environment variables for the version of the C runtime your compiler ships with. So if, for example, Python was built with VC++ 2005 and your compiler is VC++ 2008, you'll need to use an alternative mechanism. Browsing the sources of py2exe and/or PyInstaller may provide you with a better solution (since you're doing essentially the same thing as these tools), but a simple alternative is to "cheat" by using PyRun_SimpleString() to set the module search path from within Python itself:
char buff[256];
snprintf(buff, sizeof(buff), "import sys\nsys.path.append(\"%s\")\n", py_zip_filename);
PyRun_SimpleString(buff);