Our build pipeline produces a *.so / *.pyd Python extension through pybind11. As a step in the pipeline I need to package the extension as a wheel for easy distribution through pip. I am trying to come up with a setup.py that takes the existing library as-is and does not re-compile the binaries through setup.py (recompiling there would require a major rewrite of the devops scripts, so I really want to avoid it).
When having a folder structure such as:
setup.py
my_module.cpython-39-darwin.so
A very basic setup.py can create a functioning wheel (python setup.py bdist_wheel):
from setuptools import setup

setup(
    name='my_module',
    version='0.9.102',
    packages=['.'],
    package_data={'': ['*.so', '*.pyi']},
)
Unfortunately, the wheel is missing the important Python tag, ABI tag and platform name: my_module-0.9.102-py3-none-any.whl instead of my_module-0.9.102-cp39-cp39-macosx_10_13_x86_64.whl.
Setting --python-tag and --plat-name works; setting --py-limited-api, however, does not.
Through research I found that overriding the distclass adds the correct tags again, but Root-Is-Purelib is then set to false. This, unfortunately, creates a broken wheel when installing through pip, as it puts the binary in a my_module-0.9.102.data/purelib folder...
Overriding is_pure seems to be ignored as well:
from setuptools import setup, find_packages, Distribution

class BinaryDistribution(Distribution):
    def is_pure(self):
        return True

    def has_ext_modules(self):
        return True

setup(
    name='my_module',
    version='0.9.102',
    packages=['.'],
    package_data={'': ['*.so', '*.pyi']},
    distclass=BinaryDistribution,
)
What else can I do to wrap my pre-compiled Python libraries into wheels for distribution without rewriting large parts of the build pipeline?
Not a perfect solution, but a functional wheel is created when using the wheel module instead:

from setuptools import setup

setup(
    name='my_module',
    packages=['my_module'],
    package_data={'': ['*.so', '*.pyi']},
)

Create the wheel with:
pip install wheel
pip wheel .
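For what it's worth, here is a minimal sketch combining two tricks that come up in the answers further down (a distclass that claims extension modules, plus a bdist_wheel override that marks the wheel root as non-pure). It is untested against this exact pipeline, and it assumes my_module is a package directory containing the prebuilt .so:

from setuptools import setup, Distribution

try:
    from wheel.bdist_wheel import bdist_wheel as _bdist_wheel

    class bdist_wheel(_bdist_wheel):
        def finalize_options(self):
            _bdist_wheel.finalize_options(self)
            # emit Root-Is-Purelib: false so the wheel gets platform tags
            self.root_is_pure = False
except ImportError:
    bdist_wheel = None  # the wheel package is not installed

class BinaryDistribution(Distribution):
    def has_ext_modules(self):
        # pretend there are extension modules so platlib/ABI tags are used
        return True

setup(
    name='my_module',
    version='0.9.102',
    packages=['my_module'],
    package_data={'': ['*.so', '*.pyi']},
    distclass=BinaryDistribution,
    cmdclass={'bdist_wheel': bdist_wheel},
)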
Related
We have a placeholder egg that contains no code and only exists for the sake of pulling down a list of dependent packages from our PyPI repository.
Most of these dependent packages are platform-agnostic, however some are only used on Win32 platforms.
Is it possible to somehow make the dependency platform-conditional, so that a given dependency in my install_requires list will only get pulled down when installing on Win32?
Alternatively: Is it possible to specify a list of optional dependencies, that will be installed if available, but will not cause easy_install to fail if they are not?
For sdist, egg and wheel releases, from https://setuptools.readthedocs.io/en/latest/userguide/dependency_management.html#platform-specific-dependencies:
Sometimes a project might require a dependency to run on a specific platform. This could be a package that backports a module so that it can be used in older Python versions, or a package that is required to run on a specific operating system. This allows a project to work on multiple platforms without installing dependencies that are not required for the platform that is installing the project.
setup(
    name="Project",
    ...
    install_requires=[
        'enum34 ; python_version<"3.4"',
        'pywin32 >= 1.0 ; platform_system=="Windows"',
    ]
)
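For reference, these markers survive into the built distribution's metadata roughly as follows (the exact formatting varies between setuptools versions):

Requires-Dist: enum34 ; python_version < "3.4"
Requires-Dist: pywin32 (>=1.0) ; platform_system == "Windows"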
In setup.py:
from setuptools import setup
import sys

setup(
    name="...",
    install_requires=["This", "That"] + (
        ["WinOnly", "AnotherWinOnly"] if sys.platform.startswith("win") else []
    )
)
distutils.util.get_platform has more information than sys.platform if you need it:
>>> sys.platform
'linux2'
>>> distutils.util.get_platform()
'linux-i686'
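The same conditional pattern works with get_platform if you need that extra detail (a sketch, not from the original answer):

from distutils.util import get_platform

requirements = ["This", "That"]
if get_platform().startswith("win"):
    # covers e.g. "win32" and "win-amd64"
    requirements.extend(["WinOnly", "AnotherWinOnly"])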
Use the extras_require distribution option to make 'win32 support' an optional feature:
setup(
    ...
    extras_require={
        'win32': 'pywin32'
    },
    ...
)
Then specify the win32 feature when installing on Windows:
easy_install mypackage[win32]
This will pull down the pywin32 package, which is listed as a dependency for the 'win32' feature of mypackage.
See here for more information about optional features.
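With modern tooling, the same extra is requested through pip rather than easy_install:

pip install mypackage[win32]

And, assuming a reasonably recent setuptools, an environment marker inside the extra makes requesting it on other platforms a harmless no-op (a sketch, not from the original answer):

extras_require={
    'win32': ['pywin32 >= 1.0 ; platform_system=="Windows"'],
},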
When the egg is built (using python setup.py bdist_egg), you can force setuptools/distribute to build a platform-specific egg.
from setuptools import setup
import os

# Monkey-patch Distribution so it always claims to be platform-specific.
from distutils.core import Distribution
Distribution.has_ext_modules = lambda *args, **kwargs: True

requirements = ['generic-foo', 'generic-bar']
if os.getenv('WINDOWS_BUILD'):
    requirements.extend(['a-windows-only-requirement'])

setup(
    name="...",
    install_requires=requirements
)
You can then run:
# Force a windows build
$ WINDOWS_BUILD=y python setup.py bdist_egg -p win32
# Do a linux build -- you may not need to specify -p if you're happy
# with your current linux architecture.
$ python setup.py bdist_egg -p linux-i686
Given is a (set of) Python 3 packages that are to be deployed in different scenarios, either cythonized or as the original scripts; the sources are pure Python 3. Preferably, I would like to use the same setup.py, if possible.
use case                                                            | in-place | include .py modules      | cythonized .so modules
1. development: pip3 install -e .                                   | yes      | yes                      |
2. "unoptimized" install: pip3 install .                            |          | yes                      |
3. cythonized install: pip3 install . --install-option="cythonize"  |          | no (except __init__.py)  | yes
4. build (binary) wheel: python3 setup.py bdist_wheel               |          | no (except __init__.py)  | yes
So far, I have succeeded in building a binary distribution wheel containing only the cythonized .so shared libraries and none of the original .py module files, following Package only binary compiled .so files of a python library compiled with Cython. This covers use case #4 and is handled by the build_py class.
However, I would also like to cover #1, #2 and maybe #3; #3 might be better tackled by separately building the bdist_wheel and then installing it, if not otherwise possible in a single step.
# https://stackoverflow.com/a/56043918
import fnmatch

from setuptools import setup, find_packages, Extension
from setuptools.command.build_py import build_py as build_py_orig
from setuptools.command.install import install as install_orig

try:
    from Cython.Build import cythonize
except ImportError:
    cythonize = None

extensions = [
    Extension('spam.*', ['spam/**/*.py'],
              extra_compile_args=["-O3", "-Wall"]),
]
cython_excludes = ['spam/**/__init__.py']

def not_cythonized(tup):
    (package, module, filepath) = tup
    return any(
        fnmatch.fnmatchcase(filepath, pat=pattern) for pattern in cython_excludes
    ) or not any(
        fnmatch.fnmatchcase(filepath, pat=pattern)
        for ext in extensions
        for pattern in ext.sources
    )

class build_py(build_py_orig):
    def find_modules(self):
        modules = super().find_modules()
        return list(filter(not_cythonized, modules))

    def find_package_modules(self, package, package_dir):
        modules = super().find_package_modules(package, package_dir)
        return list(filter(not_cythonized, modules))

class install(install_orig):
    def finalize_options(self):
        super().finalize_options()
        self.distribution.ext_modules = None

setup(
    name='spam',
    packages=find_packages(),
    ext_modules=cythonize(
        extensions,
        exclude=cython_excludes,
        compiler_directives={
            "language_level": 3,
            "always_allow_keywords": True,
        },
        build_dir="build",  # needs to be explicitly set, otherwise pollutes package sources
    ) if cythonize is not None else [],
    cmdclass={
        'build_py': build_py,
        'install': install,
    },
    include_package_data=True,
    install_requires=[...]
)
The problems I'm facing here:

- For use cases #1 and #2 I don't want to cythonize, so ext_modules= should not be specified/set. What is a sensible way to handle ext_modules= in this situation? I find it hard to detect the requested operation (install, install -e, develop) before calling setup(), so would it be better to inherit from and override the install and develop classes?
- If the latter, is it possible and allowed to clear ext_modules, and how do I avoid prematurely evaluating cythonize(...)?
- In use case #2 with the above code, pip3 decides to build an egg which unfortunately includes the .so files. Might this be because cythonize(...) gets evaluated in any case? Can I avoid building the egg, or how do I prevent the egg build process from including the shared libs?
- This currently includes both the sources (which I don't want included) and the cythonized modules: how can I prevent the install class from installing most of the source modules, yet still install the __init__.pys?
Let's say I have a simple library which uses setuptools for packaging and distributing. The library in this case also requires a minimum version of Python 3.6, meaning my setup.py would be something like as follows:
from setuptools import setup, find_packages

setup(
    name='something',
    version='0.0.1',
    description='description',
    long_description=long_description,
    # More metadata
    packages=find_packages(exclude=['tests', 'docs']),
    python_requires='>=3.6'
)
Now, when I run python setup.py bdist_wheel, I get a file named something-0.0.1-py3-none-any.whl. As evident here, wheel ignores the python_requires option in setuptools when determining the Python tag for my wheel (it should be py36 but is instead the default py3). Obviously, I realize that I can just pass --python-tag py36 on the command line, which will do the job, but the continuous deployment service I am using for deploying my library only takes the name of the distribution command (bdist_wheel). As such, I cannot pass any command line parameters.
After doing a bit of research, I found that I could inherit from the bdist_wheel class and override the python_tag member variable, but according to the wheel README:
It should be noted that wheel is not intended to be used as a library, and as such there is no stable, public API.
Because of this, I want to avoid inheriting from the bdist_wheel class which might force me to rewrite my class every time some breaking change occurs.
Is there any alternative way through setuptools which allows me to pass in the Python tag for a wheel?
Every command line argument for every distutils command can be persisted in the setup configuration file. Create a file named setup.cfg in the same directory your setup.py resides in and store the custom bdist_wheel configuration in there:
# setup.cfg
[bdist_wheel]
python-tag=py36
Now running python setup.py bdist_wheel will be essentially the same as running python setup.py bdist_wheel --python-tag py36.
Relevant article in the distutils docs: Writing the Setup Configuration File.
You could hack in something like:

import sys

if 'bdist_wheel' in sys.argv:
    if not any(arg.startswith('--python-tag') for arg in sys.argv):
        sys.argv.extend(['--python-tag', 'py36'])

but it's arguably just as brittle...
I am working on a Python 2 package in which the setup.py contains some custom install commands. These commands actually build some Rust code and output some .dylib files that are moved into the Python package.
An important point is that the Rust code is outside the Python package.
setuptools is supposed to detect automatically whether the package is pure Python or platform-specific (if it contains some C extensions, for instance).
In my case, when I run python setup.py bdist_wheel, the generated wheel is tagged as a pure Python wheel: <package_name>-<version>-py2-none-any.whl.
This is problematic because I need to run this code on different platforms, and thus I need to generate one wheel per platform.
Is there a way, when building a wheel, to force the build to be platform-specific?
Here's the code that I usually look at from uwsgi
The basic approach is:
setup.py
# ...
try:
    from wheel.bdist_wheel import bdist_wheel as _bdist_wheel

    class bdist_wheel(_bdist_wheel):
        def finalize_options(self):
            _bdist_wheel.finalize_options(self)
            self.root_is_pure = False
except ImportError:
    bdist_wheel = None

setup(
    # ...
    cmdclass={'bdist_wheel': bdist_wheel},
)
The root_is_pure bit tells the wheel machinery to build a non-purelib wheel (i.e. not pyX-none-any). You can also get fancier by saying there are binary platform-specific components but no CPython ABI-specific components.
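That fancier variant could look roughly like this (a sketch building on the snippet above; get_tag is part of wheel's unstable internals, so verify it against your wheel version):

class bdist_wheel(_bdist_wheel):
    def finalize_options(self):
        _bdist_wheel.finalize_options(self)
        self.root_is_pure = False

    def get_tag(self):
        # keep the platform tag, but claim no CPython ABI dependency
        python, abi, plat = _bdist_wheel.get_tag(self)
        return 'py3', 'none', plat

This would produce a tag like py3-none-linux_x86_64: platform-specific, but usable on any Python 3.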
The modules setuptools, distutils and wheel decide whether a Python distribution is pure by checking whether it has ext_modules.
If you build an external module on your own, you can still list it in ext_modules so that the building tools know it exists. The trick is to provide an empty list of sources so that setuptools and distutils will not try to build it. For example,
import setuptools
from setuptools import setup

setup(
    ...,
    ext_modules=[
        setuptools.Extension(
            name='your.external.module',
            sources=[]
        )
    ]
)
This solution worked better for me than patching the bdist_wheel command. The reason is that bdist_wheel calls the install command internally and that command checks again for the existence of ext_modules to decide between purelib or platlib install. If you don't list the external module, you end up with the lib installed in a purelib subfolder inside the wheel. That causes problems when using auditwheel repair, which complains about the extensions being installed in a purelib folder.
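For context, the auditwheel step referred to above is typically invoked like this (the wheel filename is a placeholder):

auditwheel repair dist/<package_name>-<version>-<tags>.whl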
You can also specify/spoof a specific platform name when building wheels by specifying a --plat-name:
python setup.py bdist_wheel --plat-name=manylinux1_x86_64
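If you cannot pass command line options (as with the CI limitation mentioned earlier in this thread), the same option can presumably be persisted in setup.cfg, mirroring the python-tag example above:

# setup.cfg
[bdist_wheel]
plat-name = manylinux1_x86_64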
Neither the root_is_pure trick nor the empty ext_modules trick worked for me, but after MUCH searching, I finally found a working solution in 'pip setup.py bdist_wheel' no longer builds forced non-pure wheels
Basically, you override the has_ext_modules method of the Distribution class, and set distclass to point to the overriding class. At that point, setup.py will believe you have a binary distribution, and will create a wheel with the specific version of Python, the ABI, and the current architecture. As suggested by https://stackoverflow.com/users/5316090/py-j:
from setuptools import setup
from setuptools.dist import Distribution

DISTNAME = "packagename"
DESCRIPTION = ""
MAINTAINER = ""
MAINTAINER_EMAIL = ""
URL = ""
LICENSE = ""
DOWNLOAD_URL = ""
VERSION = '1.2'
PYTHON_VERSION = (2, 7)

# Tested with wheel v0.29.0
class BinaryDistribution(Distribution):
    """Distribution which always forces a binary package with platform name"""
    def has_ext_modules(self):
        return True

setup(name=DISTNAME,
      description=DESCRIPTION,
      maintainer=MAINTAINER,
      maintainer_email=MAINTAINER_EMAIL,
      url=URL,
      license=LICENSE,
      download_url=DOWNLOAD_URL,
      version=VERSION,
      packages=["packagename"],
      # Include pre-compiled extension
      package_data={"packagename": ["_precompiled_extension.pyd"]},
      distclass=BinaryDistribution)
I find Anthony Sottile's answer great, but it didn't work for me.
My case is that I have to force the wheel to be built for x86_64 but for any Python 3, so making the root impure actually caused my wheel to be tagged py36-cp36 :(
A better way, IMO, in general is just to use sys.argv:
from setuptools import setup
import sys

sys.argv.extend(['--plat-name', 'x86_64'])

setup(name='example-wheel')
I am working on a fork of a Python project (tryton) which uses setuptools for packaging. I am trying to extend the server part of the project, and would like to be able to use the existing modules with my fork.
Those modules are distributed with setuptools packaging, and require the base project for installation.
I need a way to make my fork be considered an acceptable requirement for those modules.
EDIT: Here is what I used in my setup.py:
from setuptools import setup

setup(
    ...
    provides=["trytond (2.8.2)"],
    ...
)
The modules I want to be able to install have those requirements :
from setuptools import setup

setup(
    ...
    install_requires=["trytond>=2.8"],
    ...
)
As it is, with my package installed, trying to install a module triggers the installation of the trytond package.
Don't use provides; it comes from a packaging specification (a metadata PEP) that is not implemented by any tool. The requirements in the install_requires argument map to the name in the other package's setup.py. In other words, replace your provides with setup(name='trytond', version='2.8.2').
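Concretely, the fork's setup.py might then look like this (a sketch; all other arguments stay as in your fork):

from setuptools import setup

setup(
    # reuse the original distribution name so that the modules'
    # install_requires=["trytond>=2.8"] resolves to this fork
    name='trytond',
    version='2.8.2',
    ...
)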
If you are building rpms, it is possible to use the setup.cfg as follows:
[bdist_rpm]
provides = your-package = 0.8
obsoletes = your-package
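The RPM is then built as usual, with bdist_rpm picking these values up from setup.cfg:

python setup.py bdist_rpm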