Find boost_python version from setup.py

I have developed a Python module using Boost Python. This extension has a setup.py file to install it. This file requests the extension to be linked against libboost_python by doing something like the following:
my_module = Extension('_mymodule',
                      sources=...,
                      libraries=['boost_python'],
                      ...)
Recently, the Boost developers seem to have changed the naming convention of Boost Python: the library that used to be named libboost_python is now named libboost_python27 (to reflect the fact that the corresponding library for Python 3.x is called libboost_python3).
What is the best way, from the setup.py script, to detect how the Boost Python library should be named (given that it is installed in a non-standard location)?
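One possible approach (a minimal sketch: the helper name find_boost_python, the directory /opt/boost/lib and the source file name are assumptions, and the candidate names simply mirror the conventions mentioned above) is to glob the known library directories and link against whichever name actually exists:

import glob
import os
import sys

from setuptools import Extension


def find_boost_python(lib_dirs):
    """Guess the linker name of the Boost.Python library by globbing
    the given library directories for a matching libboost_python* file."""
    version_tag = '%d%d' % sys.version_info[:2]            # e.g. '27'
    candidates = ['boost_python' + version_tag,            # libboost_python27
                  'boost_python%d' % sys.version_info[0],  # libboost_python3
                  'boost_python']                          # old naming convention
    for name in candidates:
        for lib_dir in lib_dirs:
            if glob.glob(os.path.join(lib_dir, 'lib%s.*' % name)):
                return name
    raise RuntimeError('no Boost.Python library found in %s' % lib_dirs)


boost_lib_dir = '/opt/boost/lib'   # the non-standard install location
my_module = Extension('_mymodule',
                      sources=['mymodule.cpp'],
                      library_dirs=[boost_lib_dir],
                      libraries=[find_boost_python([boost_lib_dir])])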

Related

create RPM with python script and protoc-generated modules

I have a Python script which also imports protoc-generated modules (protobuf and gRPC). I would like to package this in an RPM. Previously I used to write a spec file defining the RPM, its contents, compile/link flags (if building from C/C++ sources), post/postun phases if any, and so on.
What would be the correct way to create an RPM for a Python script plus generated Python modules? I know that there is the distutils Python module, and people normally write their own setup.py script and then do python setup.py bdist_rpm. However, I'm not sure how I can plug in the protobuf generation phase.
I'd appreciate helpful suggestions!
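One way this is commonly done (a sketch only: the package name, proto paths and output directories below are placeholders, and it assumes grpcio-tools is available in the build environment) is to hook the code generation into the build_py command, so that python setup.py bdist_rpm generates the *_pb2.py / *_pb2_grpc.py modules before packaging them:

import subprocess
import sys

from setuptools import setup, find_packages
from setuptools.command.build_py import build_py


class BuildPyWithProtoc(build_py):
    """Run the protobuf/gRPC code generation before the normal build_py step."""

    def run(self):
        subprocess.check_call([
            sys.executable, '-m', 'grpc_tools.protoc',
            '-Iprotos',                      # where the .proto files live
            '--python_out=mypackage',        # generated protobuf modules
            '--grpc_python_out=mypackage',   # generated gRPC stubs
            'protos/service.proto',
        ])
        build_py.run(self)


setup(
    name='mypackage',
    version='0.1',
    packages=find_packages(),
    cmdclass={'build_py': BuildPyWithProtoc},
)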

Programmatically obtain Python install paths in prefix, without distutils

As distutils is being removed from Python in versions after 3.10, and setuptools will not be added to the stdlib, I want to replace an existing setup.py recipe for building/installing a C++ library Cython extension (i.e. not primarily a Python package, not run in a venv, etc.) with some custom code.
The Cython part is working fine, and I just about managed to construct a call to the C++ compiler equivalent to the one previously executed by distutils, by using config-var info from sysconfig... though the latter was very much trial and error, with no documentation or particular consistency to the config-var collection as far as I could tell.
But I am now stuck on identifying which directory to install my built extension .so into, within the target prefix of my build. Depending on the platform, the path scheme in use, and the prefix itself, the subdirectories could be under lib or lib64, in a pythonX.Y subdirectory of some sort, and in a final site-packages, dist-packages or other directory. This decision was previously made by distutils, but I can't find any equivalent code that returns such path decisions in other stdlib packages.
Any suggestions of answers or best-practice approaches? (Other than "use setuptools", please!)
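For what it's worth, sysconfig itself exposes the installation path schemes as well as the config vars. A minimal sketch (the prefix below is hypothetical) that asks for the platform-specific package directory under an arbitrary prefix looks like this:

import sysconfig

prefix = '/opt/mylib'   # hypothetical install prefix

# 'platlib' is the scheme entry for platform-specific (compiled) modules;
# 'purelib' is the one for pure-Python code. Overriding base/platbase
# retargets the current scheme at the chosen prefix, which also resolves
# the lib vs lib64, pythonX.Y and site-packages/dist-packages parts.
ext_install_dir = sysconfig.get_path(
    'platlib',
    vars={'base': prefix, 'platbase': prefix},
)
print(ext_install_dir)   # e.g. /opt/mylib/lib/python3.11/site-packages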

How does setuptools add modules to the runtime?

I'm new to Python and overwhelmed with documentation regarding modules, package modules and imports.
Best I can understand, there are no "packages", only modules and import mechanisms, and the officially recommended high-level API is provided by setuptools, whose Distribution class is the key element that drives what modules are added to the runtime import mechanism (sys.path?) and how. But I'm lost in the details of things.
My understanding is that setup.py is a normal Python module that indirectly uses the runtime API to add more modules while being executed before the actual "user" code. As a normal module it could download more code (like installers do), sort dependencies topologically, call the system linker, etc.
But specifically, I'm having difficulty in understanding:
How setup.py decides whether to shadow system modules (like math) or not
How setup.py configures the import mechanism so that the above doesn't happen by accident
How does the -m interpreter flag work in comparison?
For conciseness, let's assume the most recent CPython and standard library only.
EDIT:
I see that packages are real, and so are the globals and level arguments as seen in the __import__ function. It appears that setuptools has control over them.
setuptools helps with packaging the code into the right formats (example: setup.py build sdist bdist_wheel to build the archives that can be downloaded from PyPI) and then unpacking this code into the right locations (example: setup.py install or, indirectly, pip install) so that later on it can be picked up automatically by Python's own import mechanisms (typically yes, the location is one of the directories pointed at by sys.path).
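For reference, the setup.py behind those commands can be as small as this (a minimal sketch; the package name is a placeholder):

# setup.py
from setuptools import setup, find_packages

setup(
    name='mypackage',
    version='0.1',
    packages=find_packages(),   # the packages that become importable after install
)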
I might be wrong but I wouldn't say that:
setup.py decides whether to shadow system modules (like math) or not
nor that:
setup.py configures the import mechanism so that the above doesn't happen by accident
As far as I know this can very well happen; there is nothing to prevent it, and it is not necessarily a bad thing.
How does the -m interpreter flag work in comparison?
Instead of instructing the Python interpreter to execute the code from a Python file given by its path (example: python ./setup.py), it is possible to instruct the Python interpreter to execute a Python module (example: python -m setup, python -m pip, python -m http.server). In this case Python will look for the named module in the directories pointed at by sys.path. This is why setuptools/setup.py and pip install packages in a location that - if everything is configured right - is in sys.path.
I recommend you have a look at the values stored in sys.path by yourself, with for example the following command: python -c "import sys; print(sys.path)".
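To go one step further (the module picked here is just an illustration), you can also ask any imported module which file it was actually loaded from:

import pprint
import sys

pprint.pprint(sys.path)       # the directories the import system searches, in order

import http.server            # a stdlib module, found via one of those directories
print(http.server.__file__)   # the file the import machinery resolved it to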

Does Cython compile imported modules as part of the binary?

I'm just now reading into Cython and I'm wondering if Cython compiles imported modules as part of the executable, or if you still need to have the modules installed on the target machine to run the Cython binary.
The "interface" of a Cython module remains at the Python level. When you import a module in Cython, the module becomes available only at the Python level of the code and uses the regular Python import mechanism.
So:
Cython does not "compile in" the dependencies.
You need to install the dependencies on the target machine.
For "Cython level" code, including the question of "cimporting" module, Cython uses the equivalent of C headers (the .pxd declaration files) and dynamically loaded libraries to access external code. The .so files (for Linux, DLL for windows and dylib for mac) need to be present on the target machine.

Python + setuptools: distributing a pre-compiled shared library with boost.python bindings

I have a C++ library (we'll call it Example in the following) for which I wrote Python bindings using the boost.python library. This Python-wrapped library will be called pyExample. The entire project is built using CMake and the resulting Python-wrapped library is a file named libpyExample.so.
When I use the Python bindings from a Python script located in the same directory as libpyExample.so, I simply have to write:
import libpyExample
libpyExample.hello_world()
and this executes a hello_world() function exposed by the wrapping process.
What I want to do
For convenience, I would like my pyExample library to be available from anywhere simply using the command
import pyExample
I also want pyExample to be easily installable in any virtualenv in just one command. So I thought a convenient process would be to use setuptools to make that happen. That would therefore imply:
Making libpyExample.so visible for any Python script
Changing the name under which the module is accessed
I did find many things about compiling C++ extensions with setuptools, but nothing about packaging a pre-compiled C++ extension. Is what I want to do even possible?
What I do not want to do
I don't want to build the pyExample library with setuptools, I would like to avoid modifying the existing project too much. The CMake build is just fine, I can retrieve the libpyExample.so file very easily.
If I understand your question correctly, you have the following situation:
you have an existing CMake-based build of a C++ library with Python bindings
you want to package this library with setuptools
The latter then allows you to call python setup.py install --user, which installs the lib in the site-packages directory and makes it importable from anywhere on your system.
What you want is possible, if you overload the classes that setuptools uses to build extensions, so that those classes actually call your CMake build system. This is not trivial, but you can find a working example here, provided by the pybind11 project:
https://github.com/pybind/cmake_example
Have a look at its setup.py; you will see how the build_ext and Extension classes are subclassed and modified to execute the CMake build.
This should work for you out of the box, or with little modification if your build requires special -D flags to be set.
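For orientation, the core of that pattern looks roughly like this (a heavily condensed sketch: the project name, paths and CMake flags are placeholders, and the linked cmake_example handles many platform details omitted here):

# setup.py
import os
import subprocess

from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext


class CMakeExtension(Extension):
    def __init__(self, name, sourcedir=''):
        # no sources listed: the actual compilation is delegated to CMake
        Extension.__init__(self, name, sources=[])
        self.sourcedir = os.path.abspath(sourcedir)


class CMakeBuild(build_ext):
    def build_extension(self, ext):
        # directory where setuptools expects the finished extension to appear
        extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
        os.makedirs(self.build_temp, exist_ok=True)
        subprocess.check_call(
            ['cmake', ext.sourcedir,
             '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=' + extdir],
            cwd=self.build_temp)
        subprocess.check_call(['cmake', '--build', '.'], cwd=self.build_temp)


setup(
    name='pyExample',
    version='0.1',
    ext_modules=[CMakeExtension('pyExample')],
    cmdclass={'build_ext': CMakeBuild},
)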
I hope this helps!
