Python + setuptools: distributing a pre-compiled shared library with boost.python bindings

I have a C++ library (we'll call it Example in the following) for which I wrote Python bindings using the boost.python library. This Python-wrapped library will be called pyExample. The entire project is built using CMake and the resulting Python-wrapped library is a file named libpyExample.so.
When I use the Python bindings from a Python script located in the same directory as libpyExample.so, I simply have to write:
import libpyExample
libpyExample.hello_world()
and this executes a hello_world() function exposed by the wrapping process.
What I want to do
For convenience, I would like my pyExample library to be available from anywhere simply by using the command
import pyExample
I also want pyExample to be easily installable in any virtualenv with just one command. So I thought a convenient process would be to use setuptools to make that happen. That would therefore imply:
Making libpyExample.so visible for any Python script
Changing the name under which the module is accessed
I did find many things about compiling C++ extensions with setuptools, but nothing about packaging a pre-compiled C++ extension. Is what I want to do even possible?
What I do not want to do
I don't want to build the pyExample library with setuptools; I would like to avoid modifying the existing project too much. The CMake build is just fine, and I can retrieve the libpyExample.so file very easily.
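For illustration, the kind of layout I was imagining (the wrapper package and the re-export are only my guess at how it could be done) is a thin pyExample package that ships the pre-built module next to a one-line __init__.py:

# pyExample/__init__.py -- hypothetical wrapper that re-exports the
# pre-built boost.python module placed in the same package directory
from .libpyExample import *  # noqa: F401,F403

With something like that in place, import pyExample followed by pyExample.hello_world() is what I am after.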

If I understand your question correctly, you have the following situation:
you have an existing CMake-based build of a C++ library with Python bindings
you want to package this library with setuptools
The latter then allows you to call python setup.py install --user, which installs the library into the site-packages directory and makes it available from every path on your system.
What you want is possible if you override the classes that setuptools uses to build extensions, so that those classes actually call your CMake build system. This is not trivial, but you can find a working example, provided by the pybind11 project, here:
https://github.com/pybind/cmake_example
Have a look at its setup.py; you will see how the build_ext and Extension classes are subclassed and modified to execute the CMake build.
This should work for you out of the box, or with only small modifications if your build requires special -D flags to be set.
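A heavily condensed sketch of the idea (the real cmake_example linked above handles many more corner cases; the class names, flags, and target name here are simplified placeholders):

import os
import subprocess

from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext


class CMakeExtension(Extension):
    # No sources: the actual compilation is delegated to CMake.
    def __init__(self, name, sourcedir=''):
        Extension.__init__(self, name, sources=[])
        self.sourcedir = os.path.abspath(sourcedir)


class CMakeBuild(build_ext):
    def build_extension(self, ext):
        # Ask CMake to drop the built module where setuptools expects it.
        extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
        cmake_args = ['-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=' + extdir]
        if not os.path.exists(self.build_temp):
            os.makedirs(self.build_temp)
        subprocess.check_call(['cmake', ext.sourcedir] + cmake_args, cwd=self.build_temp)
        subprocess.check_call(['cmake', '--build', '.'], cwd=self.build_temp)


setup(
    name='pyExample',
    version='0.1',
    ext_modules=[CMakeExtension('pyExample')],
    cmdclass={'build_ext': CMakeBuild},
)

Note that for import pyExample to work, the CMake target would also have to produce a module named pyExample rather than libpyExample (or a small wrapper package could re-export it).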
I hope this helps!

Related

create RPM with python script and protoc-generated modules

I have a Python script which also imports protoc-generated modules (protobuf and grpc). I would like to package this as an RPM. Previously I used to write a spec file defining the RPM, its contents, compile/link flags (if building from C/C++ sources), post/postun phases if any, and so on.
What would be the correct way to create an RPM for a Python script plus generated Python modules? I know there is the distutils Python module, and that people normally write their own setup.py script and then run python setup.py bdist_rpm. However, I'm not sure how I can plug in the protobuf generation phase.
I'd appreciate helpful suggestions!
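One direction that seems plausible (a sketch only; the package name, proto paths, and file names below are made up) is to hook the protoc invocation into a custom build_py command, so that python setup.py bdist_rpm generates the modules as part of the normal build step:

import subprocess

from setuptools import setup
from setuptools.command.build_py import build_py as _build_py


class build_py(_build_py):
    # Generate the *_pb2.py modules before the regular build_py step.
    def run(self):
        subprocess.check_call([
            'protoc', '-I', 'proto',
            '--python_out=mypkg',
            'proto/messages.proto',
        ])
        _build_py.run(self)


setup(
    name='mypkg',
    version='0.1',
    packages=['mypkg'],
    cmdclass={'build_py': build_py},
)

The gRPC stubs would be produced the same way, except through something like python -m grpc_tools.protoc with its --grpc_python_out option.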

How does setuptools add modules to the runtime?

I'm new to Python and overwhelmed by the documentation regarding modules, packages and imports.
As best I can understand, there are no "packages", only modules and import mechanisms. The officially recommended high-level API is provided by setuptools, whose Distribution class is the key element driving what modules get added to the runtime import mechanism (sys.path?) and how, but I'm lost in the details.
My understanding is that setup.py is a normal Python module that indirectly uses the runtime API to add more modules while being executed before the actual "user" code. As a normal module it could download more code (like installers do), sort dependencies topologically, call the system linker, and so on.
But specifically, I'm having difficulty in understanding:
How setup.py decides whether to shadow system modules (like math) or not
How setup.py configures the import mechanism so that above doesn't happen by accident
How does the -m interpreter flag work in comparison?
For conciseness, let's assume the most recent CPython and standard library only.
EDIT:
I see that packages are real, as are the globals and level arguments seen in the __import__ function. It appears that setuptools has some control over them.
setuptools helps package the code into the right formats (example: setup.py build sdist bdist_wheel to build the archives that can be downloaded from PyPI) and then unpack this code into the right locations (example: setup.py install, or indirectly pip install), so that later on it can be picked up automatically by Python's own import mechanisms (typically yes, the location is one of the directories pointed at by sys.path).
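For instance, a minimal setup.py (the package name here is made up) is already enough for import mypkg to work from anywhere once the project is installed:

from setuptools import setup, find_packages

setup(
    name='mypkg',              # name of the distribution as seen by pip/PyPI
    version='0.1',
    packages=find_packages(),  # picks up the mypkg/ directory containing __init__.py
)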
I might be wrong but I wouldn't say that:
setup.py decides whether to shadow system modules (like math) or not
nor that:
setup.py configures the import mechanism so that above doesn't happen by accident
As far as I know, this can very well happen; there is nothing to prevent it, and it is not necessarily a bad thing.
How does the -m interpreter flag work in comparison?
Instead of instructing the Python interpreter to execute the code from a Python file given by its path (example: python ./setup.py), it is possible to instruct the Python interpreter to execute a Python module (example: python -m setup, python -m pip, python -m http.server). In that case Python will look for the named module in the directories pointed at by sys.path. This is why setuptools/setup.py and pip install packages into a location that, if everything is configured right, is on sys.path.
I recommend you have a look at the values stored in sys.path yourself, for example with the following command: python -c "import sys; print(sys.path)".
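For example (the module name is arbitrary), the following shows both the search path and which file a given module would be loaded from:

import sys
from importlib.util import find_spec

print(sys.path)                  # the directories searched when importing
print(find_spec('http.server'))  # the spec reveals which file the module resolves to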

How to write nosetests for f2py bindings

So I've put together some Python bindings to some legacy Fortran code. They work, and I can install the package with the usual python setup.py install, but I would like to write some nosetests to check things using e.g. travis-ci.
I'm not sure how one goes about doing this. If one installs the package and the bindings are available, then obviously you can just import the different methods and classes and use them in tests, but what if the package isn't installed? I guess a pointer to some examples would be worthwhile.
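One common approach (sketched below; the package and function names are placeholders) is to build the extension in place on the CI machine with python setup.py build_ext --inplace and keep the tests next to the package, so they can import the freshly built bindings without a full install:

# test_bindings.py -- run with nosetests (or pytest) after
# `python setup.py build_ext --inplace`
import numpy as np

import mypackage  # the package containing the f2py-built extension


def test_addition():
    # hypothetical wrapped Fortran routine that adds two arrays
    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])
    np.testing.assert_allclose(mypackage.add_arrays(a, b), a + b)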

Use setuptools to create an executable distribution of a package with --multi-version?

I'm trying to create simple Windows executable installers for a Python package that will only be delivered internally to other developers on my team. I'd like to let them install multiple versions of the package at the same time and specify which to run by using different commands: <package>-0.1, <package>-0.2, etc.
It looks like setuptools offers a fairly standard way to install multiple versions of a package: just create an egg for each version of the package, then install each egg using the --multi-version option with easy_install.
However, rather than providing the eggs and asking developers to use easy_install via the command-line, I'd like to just offer something they can run that will automatically perform the installation. python setup.py bdist_wininst does almost exactly what I want, but as far as I can tell, there's no way to create a bdist_wininst that uses the --multi-version option (or does anything similar).
Is there a way to accomplish this? I realize I could just manually write an executable program to call easy_install --multi-version, but that seems like a ridiculous workaround for something that I'd hope is achievable just using the built-in capabilities of setuptools.
Tangentially, is there an easy way to have the installer automatically create <package>-<version>.bat files for each installed version of the package?
The setup.py script for the SCons project implements this feature using vanilla distutils, and the method is also usable with setuptools. At the time of writing, the current version of setup.py is here. The trick is the use of the cmdclass argument to replace install_lib with a modified class that inherits from the default install_lib class used by distutils.
A pared-down version of the same trick, using setuptools (note that this is done by simply replacing all instances of distutils with setuptools):
from os.path import join

import setuptools.command.install_lib

_install_lib = setuptools.command.install_lib.install_lib

class install_lib(_install_lib):
    def finalize_options(self):
        _install_lib.finalize_options(self)
        # Install into a version-specific subdirectory; PKGVERSION is
        # assumed to be defined elsewhere in setup.py.
        self.install_dir = join(self.install_dir, 'PKG_NAME-' + PKGVERSION)
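The modified command is then registered through the cmdclass argument of setup(), roughly like this (the name is a placeholder and PKGVERSION is assumed to be defined as above):

from setuptools import setup

setup(
    name='PKG_NAME',
    version=PKGVERSION,
    packages=['PKG_NAME'],
    cmdclass={'install_lib': install_lib},
)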

Python package with compiled code

I'm looking into releasing a python package which includes an existing fortran or C program. The fortran/C program is compiled by running
./configure
make
The python code calls the resulting binary through subprocess calls (i.e. the code is not really wrapped as such). What I would like is that when the user types
python setup.py install
the fortran/C program is first compiled using the ./configure and make commands; then I want the python module to be installed, and the binary to be installed in the python bin/ directory alongside the executables that are usually installed via the scripts= option in distutils.core.setup.
First, are there any problems with doing this? And if not, what is the best way to do it via setup.py? Are there existing functions to automate the ./configure and make, since this is pretty standard? Or should I just use os.system calls? And either way, where should those commands go in setup.py? Then should I have make output the binary to e.g. scripts/ and then have scripts=['scripts/mybinary'] in the setup() function?
Don't make this too complex.
Just provide them as separate items, with a README that says, basically, what you said in the question.
Build the Fortran/C with ./configure; make; make install.
Setup Python with python setup.py install.
It doesn't appear to be rocket science. Trying to over-simplify the installation means that you must account for every OS vagary and oddness.
It's easier to trust the users to do "standard" installations so that the Fortran/C binaries end up on the system PATH; your Python script should then be configured to find them on the system PATH.
People who want to use your software are then free to reconfigure it to their own unique needs. They will anyway. Don't overpackage and force them to fight against you to reconfigure things.
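To make the "find it on the system PATH" part concrete, the Python side could look roughly like this (the binary name is a placeholder):

import shutil
import subprocess

exe = shutil.which('mybinary')  # search the system PATH for the installed binary
if exe is None:
    raise RuntimeError('mybinary not found on PATH; run ./configure && make && make install first')
subprocess.run([exe, '--some-option'], check=True)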
Consider writing a Python C extension as a wrapper for your C code, and an f2py extension as a wrapper for your Fortran code. Then you can call them directly from your Python code as fast calls instead of going through subprocess.
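As a rough illustration of that route (the file and module names are hypothetical), the C wrapper can be declared directly in setup.py; the Fortran side can be handled in a similar way through f2py/numpy.distutils:

from setuptools import setup, Extension

setup(
    name='mypkg',
    version='0.1',
    packages=['mypkg'],
    ext_modules=[
        # hypothetical C wrapper exposing the existing C routines to Python
        Extension('mypkg._cwrapper', sources=['src/cwrapper.c']),
    ],
)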
