Compiling shared library for python to distribute - python

I'm compiling a SWIG-wrapped C++ library into a Python module that should ideally be distributable, so individuals can use the library transparently like any other module. I'm building the library using CMake and SWIG on OS X 10.8.2 (System framework: Apple Python 2.7.2; installed framework: python.org Python 2.7.5).
The trouble I'm running into is that after linking against the framework, the compiled library is very selective about the version of Python being run, even though otool -L shows it was compiled with "compatibility version 2.7.0". It appears that the different distributions have slightly different linker symbols, and things start to break.
The most common problem is that it crashes the Python kernel with a Fatal Python error: PyThreadState_Get: no current thread (according to this question, indicative of a linking incompatibility). I can get my library to work in the Python it was compiled against.
Unfortunately, this library is for use in an academic laboratory with computers of all different ages and operating systems, many of them kept permanently out of date in order to run proprietary software that hasn't been updated in years, and I certainly don't have time to play I.T. and fix all of them. Currently I've just been compiling against the version of Python that comes with the latest Enthought distribution, since most computers can get that one way or another. A lot of the researchers I work with use a Python IDE specific to their field that comes with a built-in interpreter, but it is not modifiable and not a framework build (so I can't build against it). For the time being they can run their experiment scripts in Enthought as a stop-gap, but it's not ideal. Even when I link against the python.org distribution that is the same version as the IDE's built-in Python (2.7.2 I think; it even has the same damn release number), it still breaks the same way.
In any case, the question is: is there any way to link a SWIG-wrapped Python library so that it will run (at least on OS X) regardless of which interpreter is importing it (given certain minimum conditions, like a guaranteed Python >= 2.7.0)?
EDIT
Compiling against the Canopy-installed Python with the following linker flags in CMake:
set (CMAKE_SHARED_LINKER_FLAGS "-L ~/Library/Enthought/Canopy_32bit/User/lib -ldl -framework CoreFoundation -lpython2.7 -u _PyMac_Error ~/Library/Enthought/Canopy_32bit/User/lib")
This results in an @rpath reference when examining the linked library with otool, and it seems to work fine with Enthought/Canopy on other OS X systems. The -lpython2.7 seems to be optional; it adds an additional Python symbol reference in the library, pointing to the OS X Python (not the system Python).
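For reference, checking which Python a built module actually binds to is a one-liner (the module name here is hypothetical):

otool -L _mylib.so

An absolute Python framework path in the output is a hard binding to that distribution; an @rpath entry defers the lookup to run time.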
Compiling against the system Python with the following linker flags:
set (CMAKE_SHARED_LINKER_FLAGS "-L /Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/config -ldl -framework CoreFoundation -u _PyMac_Error /Library/Frameworks/Python.framework/Versions/Current/Python")
This works in Enthought and the system Python.
Neither of these works with the Python bundled with PsychoPy, which is the target environment; compiling against the bundled Python works with PsychoPy but with no other Python.
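For what it's worth, one way to sidestep the version binding entirely on OS X is to link the module against no libpython at all and let the loader resolve the Python symbols at import time from whichever interpreter does the importing; this is how distutils itself links extension modules on OS X. A hypothetical link line for a SWIG module (file names made up):

g++ -bundle -undefined dynamic_lookup mylib_wrap.cxx.o -o _mylib.so

In CMake terms that would mean dropping the -lpython2.7 and framework references from CMAKE_SHARED_LINKER_FLAGS and adding -undefined dynamic_lookup instead. I haven't verified this against the PsychoPy bundle, so treat it as a sketch.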

I've been getting the same error/having the same problem. I'd be interested if you've found a solution.
I have found that if I compile against the native Python include directory and run the native OS X Python binary /usr/bin/python, it works just fine, always. Even when I compile against some other Python library (like the one I find at /Applications/Canopy.app/appdata/canopy-1.0.3.1262.macosx-x86_64/Canopy.app/Contents/include), I can get the native OS X interpreter to work just fine.
I can't seem to get the Enthought version to work, ever. What directory are you compiling against for use with Enthought/Canopy?
There also seems to be some question of configuring SWIG at installation to know about a particular python library, but this might not be related: http://swig.10945.n7.nabble.com/SWIG-Installation-mac-osx-10-8-3-Message-w-o-attachments-td13183.html

Related

Packaging Python with a C++ app that embeds Python using pybind11

I have a C++ app where I'm trying to use pybind11 in order to support a scripting system in my app that would allow users to write their own scripts. I'm attempting to use Python in much the same way that many people use Lua as a scripting language to extend their app's functionality.
There is one big difference I've found in regards to Lua vs Python/pybind11: with Lua I can statically link the scripting language into my executable or even package a single shared library, but with Python/pybind11 it seems that I am reliant on whether or not the end user has Python already installed on their system.
Is that correct?
Is there a way I can tell pybind11 to statically link the Python libraries?
It also seems that pybind11 will search the path for the python executable, and then assume the shared libraries are in the same folder. Is there a way I can distribute the shared libraries and then tell my embedded interpreter to use those shared libraries?
My ultimate goal is that users can install my C++ app on their machine, and my app will have a built-in Python interpreter that can execute the scripts, regardless if Python is actually installed on the machine. Ideally I would do this with static linking, but dynamically linking and distributing the required libraries is also acceptable.
Pybind11 does not search for a Python executable or anything like that. You link against libpythonX.Y.so.Z, and it works just like any other shared library. So you can link against the Python library you distribute; you only need to make sure your executable will find that library at run time. Again, this is no different from distributing any other shared library: use rpath, or use LD_LIBRARY_PATH in a wrapper script, or whatever. There are a ton of questions and answers about this right here on SO.
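A minimal sketch of the rpath approach on Linux, assuming you ship libpython in a lib/ directory next to a hypothetical binary myapp:

# $ORIGIN expands to the executable's own directory at load time
g++ main.cpp -o myapp -I/path/to/python/include -Llib -lpython3.8 -Wl,-rpath,'$ORIGIN/lib'

You can confirm the lookup with ldd ./myapp, which should show libpython3.8.so resolving to the bundled copy in ./lib.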
You can link a static Python library, if you have one that is usable. On my system (Ubuntu 20 x64), trying to link against libpython3.8.a produces a ton of relocation errors; the system library only works when the entire executable is built statically, i.e. with g++ -static. If you need a dynamic executable, you have to build your own static Python library with -fPIC. For the record, here is the command I used to link against the system library:
g++ -static -pthread -I/usr/include/python3.8 example.cpp -lpython3.8 -ldl -lutil -lexpat -lz
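If you do need a -fPIC static library, a rough sketch of building one from CPython source (version, paths, and flags are examples, not a verified recipe):

cd Python-3.8.10
./configure --prefix=$HOME/python-static CFLAGS="-fPIC"
make -j$(nproc) && make install
# $HOME/python-static/lib/libpython3.8.a should now link into a dynamic executable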
Last but not least, if your users want to write Python scripts, they probably have Python installed.

MPI - C++ vs Python, can't compile anymore

I could always use mpi with C++, compiling programs via mpicxx. Some time ago, I also had to use mpi with Python. I installed the Python package mpi4py via Anaconda and my Python mpi codes worked fine.
Now, however, I went back to C++. When trying to compile with mpicxx, I get an error:
"
mpicxx -o TEST.exe TEST.cpp
The Open MPI wrapper compiler was unable to find the specified compiler
x86_64-conda_cos6-linux-gnu-c++ in your PATH.
Note that this compiler was either specified at configure time or in
one of several possible environment variables."
Apparently, installing mpi4py with Anaconda messed something up, as the system now looks for a compiler with "conda" in its name...
Can someone guide me to fix this in a way that I can freely use both, C++ and Python, with mpi?
edit: Using Ubuntu 18.04.
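The error text itself names the mechanism: the Open MPI wrapper invokes whatever compiler it was configured with, and Anaconda's build was configured with its own x86_64-conda_cos6-linux-gnu-c++. Two things worth trying (a sketch, not a confirmed fix):

# override the compiler the Open MPI wrapper invokes
OMPI_CXX=g++ mpicxx -o TEST.exe TEST.cpp
# or take conda's wrapper out of PATH so the system mpicxx is found
conda deactivate
which mpicxx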

Compile python extension for different C++ runtime

I am developing a plugin for Plex Media Server. Plex uses Python 2.7 for its plugin system. I need the python-Levenshtein package, which has to be compiled for different systems.
I found python-Levenshtein wheels (manylinux, macOS, ARM, Windows), but the Windows wheel is compiled with VS 2008 (msvcr90.dll), while Plex's embedded Python is compiled with VS 2013 (msvcr120.dll).
Being on Windows 10, how can I compile a Python package against a different VS runtime?
Note: compiling extensions for versions of Python other than the official releases is not supported by the Python team, and has to be supported by the product - in this case, Plex. So assuming Plex hasn't provided a way to do this, everything below is hacks and workarounds.
Also, if the library developer is not doing the builds then you're trying to cover for them, which means you are basically at their "level" and will need to know their code as well as they do. Welcome to open source software :)
Finally, this only applies to legacy Python 2. If anyone reading this is on Python 3.5 or later, just install the latest Visual Studio and use the latest compiler - the entire 14.x line is compatible and will work fine with recent Python versions.
Assuming you've been careful about how the extension interacts with the C Runtime, it's often safe to compile your extension against the "wrong" C runtime version. Things to watch out for here are:
allocating memory that will be freed by Python
freeing memory that was allocated by Python
passing/getting file descriptors or FILE* pointers to/from Python
setting any global settings like encoding or locale
writing to standard streams
If all of these are isolated, then you'll just end up with two C runtimes in memory and they won't try to interact.
Now, if for some reason this can't be done, you can try compiling the extension using a later toolset. The easiest way to do this is:
get the source code (I'm assuming setup.py based build via wheel)
open VS 2013 Developer Command prompt
run set DISTUTILS_USE_SDK=1 (this will bypass MSVC detection)
run set MSSdk=1 (also needed to bypass detection on legacy Python 2)
run python setup.py bdist_wheel (you may need to install wheel into Python first)
Now you can take the wheel file and install it however you normally would, perhaps by extracting/copying the files or passing the full filename to pip.
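Put together, the build session might look like this (the wheel filename is hypothetical and will vary by package and platform):

REM from a VS 2013 Developer Command Prompt
set DISTUTILS_USE_SDK=1
set MSSdk=1
python setup.py bdist_wheel
pip install dist\python_Levenshtein-0.12.0-cp27-none-win32.whl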
In general, there's a very high chance that simply building the extension will not succeed. In that case, you'll have to modify the source code to work - a lot of names were deprecated and removed in the three MSVC versions between VC 9.0 and VC 12.0, and so you won't see deprecation warnings for any of them.
If the library developer has already made their library work with Python 3, many of the fixes should be there but may not be detected (as Python 3 either used VC 10.0, which didn't need the fixes, or VC 14.x, which may be detected as _MSC_VER >= 1900 and won't detect VC 12.0). They may appreciate it if you contribute any fixes you make so that the next person is helped, but many library developers are dropping support for versions of Python prior to 3.5 now and so they may not be interested in maintaining legacy support.

building PyOpenNI on OSX

I'm researching an appropriate (or at least straightforward) stack for ultimately getting skeleton information from a Kinect, via a Python API, on an OS X platform. Most of the information I'm finding is quite spread out and disconnected.
While it seems perfectly obvious that a windows-based stack would be microsoft's own pykinect on top of their kinect SDK, I can't seem to figure out what works well in an OSX environment.
Here is the info that I have compiled so far:
libfreenect is the obvious source for the low level drivers (this part is working just fine)
OpenNI offers the framework + NITE middleware to provide recognition. (not python)
PyOpenNI - python bindings for OpenNI with support for skeleton and other advanced features.
I have concluded that this is the most recommended stack to date. What I would like to achieve is simple skeleton data similar to what the windows SDK python wrapper gives you out of the box. Ultimately I will be using this in a PyQt-based app to draw the display, and then into Maya to apply the data.
My question is two parts, and I would accept an answer in either direction if it were the most appropriate...
Build issues for PyOpenNI
So far, I have been unable to successfully build PyOpenNI on either OS X Snow Leopard (10.6.8) or Lion (10.7.4). Both systems have up-to-date Xcode. I have noticed that the source files are hardcoded to expect Python 2.7, so on Snow Leopard I had to make sure it was installed and set as the default version (I also tried a virtualenv).
On Snow Leopard, the cmake process was finding mismatched libs, headers, and binary for Python, and ultimately the make produced an .so that crashed with a 'mismatched interpreter' error.
On Lion, I also got mismatched interpreter crashes. But after I installed python2.7 via homebrew, it generated a new error:
ImportError: dlopen(./openni.so, 2): Symbol not found: _environ
Referenced from: /usr/local/lib/libpython2.7.dylib
Expected in: dynamic lookup
Are there any specific steps to building this on OS X that I am missing, such as environment variables to ensure it's pointing at the correct python2.7 libs? Does anyone have a successful build process for this platform?
Alternate question
Is this still the most recommended stack for OSX?
Follow up
I've accepted my own answer as a temporary working solution. If someone can provide a better one, I will gladly accept it!
Update
Part of this process isn't necessary after a patch I submitted (information here). Since then I have also written up a more detailed blog post about installing the entire stack on OSX: Getting Started With Xbox360 Kinect On OSX
After hacking around on this a bit, I have found a working fix (though it doesn't address the issue at the build level). There is an existing issue with cmake where it does not properly detect Python frameworks other than the system framework, which causes the mismatch between the Python binary and the libs.
I first reinstalled python2.7 via Homebrew, adding the --framework flag.
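The reinstall step was roughly the following (Homebrew's python formula had a --framework option at the time; option names may have changed since):

brew uninstall python
brew install python --framework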
After building the module, I noticed via otool that it was still linking against my system Python, and the system Python on Lion is fat (i386 and x86_64). I also noticed that libboost (Boost installed via Homebrew), which openni.so links against, was itself linked against the system Python instead of the Homebrew one. So I used the following to relink them:
install_name_tool -change \
/System/Library/Frameworks/Python.framework/Versions/2.7/Python \
/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Python \
openni.so
install_name_tool -change \
/System/Library/Frameworks/Python.framework/Versions/2.7/Python \
/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Python \
/usr/local/Cellar/boost/1.49.0/lib/libboost_python-mt.dylib
After doing this, I was able to import openni without any errors.
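A quick way to verify the relink took effect (just a check, not part of the fix itself):

otool -L openni.so | grep Python
otool -L /usr/local/Cellar/boost/1.49.0/lib/libboost_python-mt.dylib | grep Python

Both should now print the Homebrew Cellar framework path rather than /System/Library/Frameworks.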
Here is the summary of the workaround:
python2.7 as framework, x86_64 (not fat)
libboost linked to the proper 64bit python
export CPPFLAGS="-arch x86_64"
cmake and make steps like normal
relink openni.so to the 64bit python
Ideally, someone would post a better answer than this one, showing how to fix this during the build phase with environment variables, rather than having to relink at the end.

Can I use VS2005 to build extensions for a Python system built with VS2003

RDFLib needs C extensions to be compiled to install on ActiveState Python 2.5; as far as I can tell, there's no binary installer anywhere obvious on the web. On attempting to install with python setup.py install, it produces the following message:
error: Python was built with Visual Studio 2003;
extensions must be built with a compiler than can generate compatible binaries.
Visual Studio 2003 was not found on this system. If you have Cygwin installed,
you can try compiling with MingW32, by passing "-c mingw32" to setup.py.
There are various resources on the web about configuring a compiler for distutils that discuss using MinGW, although I haven't got this to work yet. As an alternative I have VS2005.
Can anyone categorically tell me whether the C compiler in VS2005 can build Python extension modules for a VS2003-compiled Python (in this case ActiveState Python 2.5)? If so, what configuration is needed?
The main problem is the C run-time library. Python 2.4/2.5 is linked against msvcr71.dll, and therefore all C extensions should be linked against this DLL as well.
Another option is to use gcc (MinGW) instead of VS2005; you can use it to compile Python extensions only. There is a decent installer that allows you to configure gcc as the default compiler for your Python version:
http://www.develer.com/oss/GccWinBinaries
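As the error message in the question suggests, the distutils-level equivalent is to pass -c mingw32 to the build step, something like:

python setup.py build -c mingw32
python setup.py install --skip-build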
I can't tell you categorically, but I don't believe you can. I've only run into this problem in the inverse situation (Python built with VS2005, trying to build with VS2003). Searching the web did not turn up any way to hack around it. My eventual solution was to get VC Express, since VC2005 is when Microsoft started releasing the free editions. But that's obviously not an option for you.
I don't use ActiveState Python, but is there a newer version you could use? The source ships with project files for VS2008, and I'm pretty sure the python.org binary builds stopped using VS2003 a while ago.
As of today, March 2012, I can categorically say it is possible with Python 2.4.4 (the only one I've tested) and Visual Studio 2005 and 2008. I'm just installing VS10 to check that as well. I don't know why it works, and I have problems using distutils, so I compile manually.
