Usage of the "provides" keyword argument in Python's setup.py

I am working on a fork of a Python project (tryton) which uses setuptools for packaging. I am trying to extend the server part of the project, and would like to be able to use the existing modules with my fork.
Those modules are distributed with setuptools packaging, and require the base project for installation.
I need a way to make my fork count as an acceptable requirement for those modules.
EDIT: Here is what I used in my setup.py:
from setuptools import setup

setup(
    ...
    provides=["trytond (2.8.2)"],
    ...
)
The modules I want to be able to install have these requirements:
from setuptools import setup

setup(
    ...
    install_requires=["trytond>=2.8"],
    ...
)
As it is, with my package installed, trying to install a module triggers the installation of the trytond package.

Don't use provides; it comes from a packaging specification (a metadata PEP) that is not implemented by any tool. The requirements in the install_requires argument are matched against the name in your other setup.py. In other words, replace your provides with setup(name='trytond', version='2.8.2').
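A minimal sketch of what that looks like in the fork's setup.py (name and version are taken from the question, other arguments elided):
from setuptools import setup

setup(
    # Publish the fork under the same distribution name the modules require,
    # with a version that satisfies their "trytond>=2.8" specifier.
    name='trytond',
    version='2.8.2',
    ...
)
Dependency resolution matches on the distribution name, so the fork itself then satisfies install_requires=["trytond>=2.8"].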

If you are building RPMs, it is possible to use setup.cfg as follows:
[bdist_rpm]
provides = your-package = 0.8
obsoletes = your-package
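Those options are then picked up when the RPM is built the usual way:
python setup.py bdist_rpm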

Separate `setup.py` instructions for Apple silicon [duplicate]

We have a placeholder egg that contains no code and only exists for the sake of pulling down a list of dependent packages from our PyPI repository.
Most of these dependent packages are platform-agnostic, however some are only used on Win32 platforms.
Is it possible to somehow make the dependency platform-conditional, so that a given dependency in my install_requires list will only get pulled down when installing on Win32?
Alternatively: Is it possible to specify a list of optional dependencies, that will be installed if available, but will not cause easy_install to fail if they are not?
For sdist, egg and wheel releases, from https://setuptools.readthedocs.io/en/latest/userguide/dependency_management.html#platform-specific-dependencies:
Sometimes a project might require a dependency to run on a specific platform. This could be a package that backports a module so that it can be used in older Python versions, or a package that is required to run on a specific operating system. This allows a project to work on multiple different platforms without installing dependencies that are not required for the platform that is installing the project.
setup(
    name="Project",
    ...
    install_requires=[
        'enum34 ; python_version<"3.4"',
        'pywin32 >= 1.0 ; platform_system=="Windows"'
    ]
)
In setup.py:
from setuptools import setup
import sys

setup(
    name="...",
    install_requires=["This", "That"] + (
        ["WinOnly", "AnotherWinOnly"] if sys.platform.startswith("win") else []
    )
)
distutils.util.get_platform has more information than sys.platform if you need it:
>>> sys.platform
'linux2'
>>> distutils.util.get_platform()
'linux-i686'
Use the extras_require distribution option to make 'win32 support' an optional feature:
setup(
    ...
    extras_require={
        'win32': 'pywin32'
    },
    ...
)
Then specify the win32 feature when installing on Windows:
easy_install mypackage[win32]
This will pull down the pywin32 package, which is listed as a dependency for the 'win32' feature of mypackage.
See here for more information about optional features.
When the egg is built (using python setup.py bdist_egg), you can force setuptools/distribute to build a platform-specific egg.
from setuptools import setup
import os

# Monkey-patch Distribution so it always claims to be platform-specific.
from distutils.core import Distribution
Distribution.has_ext_modules = lambda *args, **kwargs: True

requirements = ['generic-foo', 'generic-bar']
if os.getenv('WINDOWS_BUILD'):
    requirements.extend(['a-windows-only-requirement'])

setup(
    name="...",
    install_requires=requirements
)
You can then run:
# Force a windows build
$ WINDOWS_BUILD=y python setup.py bdist_egg -p win32
# Do a linux build -- you may not need to specify -p if you're happy
# with your current linux architecture.
$ python setup.py bdist_egg -p linux-i686

setup.py - building c-extension with Numpy dependency

I have created a simple C function (using the guide here: Create Numpy ufunc) and I'm now trying to distribute my package on PyPI. For it to work, it needs to compile the C file(s) into a .so file, which can then be imported from Python and everything is good. To compile, it needs the header file numpy/ndarraytypes.h from NumPy.
When building and installing locally, it works fine, since I know where the header files are located. However, when distributing it, where can we find the NumPy folder? It is obvious from the logs that numpy gets installed before my package is built and installed, so I just need to include the correct NumPy folder.
from setuptools import setup
from setuptools import Extension

if __name__ == "__main__":
    setup(
        name="myPack",
        install_requires=[
            "numpy>=1.22.3",  # <- where is this one located after it gets installed?
        ],
        ext_modules=[
            Extension(
                'npufunc',
                sources=['npufunc.c'],
                include_dirs=[
                    # What to put here for it to find the "numpy/ndarraytypes.h"
                    # header file when installing?
                    "/usr/local/Cellar/numpy/1.22.3_1/lib/python3.9/site-packages/numpy/core/include/"  # <- this one I have locally; when added, installation works fine
                ]
            ),
        ]
    )
You can query numpy for the include directory from python via
import numpy
numpy.get_include()
This should return a string (/usr/lib/python3.10/site-packages/numpy/core/include on my system) which you can add to the include_dirs. See here for the docs.
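Applied to the setup.py from the question, that could look like the following sketch. The top-level import numpy assumes numpy is already installed when setup.py runs; the pyproject.toml approach described next is what guarantees that at build time:
from setuptools import setup, Extension
import numpy  # assumed to be importable by the time setup.py runs

setup(
    name="myPack",
    install_requires=["numpy>=1.22.3"],
    ext_modules=[
        Extension(
            'npufunc',
            sources=['npufunc.c'],
            # numpy.get_include() resolves to the headers of whichever
            # numpy is installed, instead of a hard-coded local path.
            include_dirs=[numpy.get_include()],
        ),
    ],
)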
As for your question: numpy is a build dependency for your project. The old setup.py method is kind of bad at handling those, which is why it has been superseded by the more modern pyproject.toml approach, which (among other things) makes such specifications possible. A fairly minimal pyproject.toml for your setting (setuptools + numpy) would look like this:
[build-system]
requires = ["setuptools", "wheel", "oldest-supported-numpy"]
build-backend = "setuptools.build_meta"
Given such a pyproject.toml you can build the extension using the python build module by calling
python -m build
which should produce a wheel with the compiled .so inside it.

python: create wheel from existing c extension

Our build pipeline spits out an *.so / *.pyd Python extension through pybind11. As a step in the pipeline I need to package the extension as a wheel for easy distribution through pip. I am trying to come up with a setup.py that takes the existing library and does not rely on re-compiling the binaries through setup.py (so I really don't want this); that would require a major rewrite of the devops scripts.
When having a folder structure such as:
setup.py
my_module.cpython-39-darwin.so
A very basic setup.py can create a functioning wheel (python setup.py bdist_wheel):
setup(
    name='my_module',
    version='0.9.102',
    packages=["."],
    package_data={'': ['*.so', '*.pyi']},
)
Unfortunately, the wheel is missing the important python tag and platform name, etc: my_module-0.9.102-py3-none-any.whl vs. my_module-0.9.102-cp39-cp39-macosx_10_13_x86_64.whl
Setting --python-tag and --plat-name works, setting --py-limited-api does not, however.
Through research I found that overwriting the distclass adds the correct tag again, but the Root-Is-Purelib is set back to false. This, unfortunately, creates a broken wheel when installing through pip as it puts the binary in a my_module-0.9.102.data/purelib folder...
Overwriting the is_pure seems to be ignored also:
from setuptools import setup, find_packages, Distribution

class BinaryDistribution(Distribution):
    def is_pure(self):
        return True

    def has_ext_modules(foo):
        return True

setup(
    name='my_module',
    version='0.9.102',
    packages=["."],
    package_data={'': ['*.so', '*.pyi']},
    distclass=BinaryDistribution
)
What else can I do to wrap my pre-compiled python libraries to wheels for distribution without rewriting lots of the build pipeline?
Not a perfect solution but a functional wheel is created when using the wheel module instead:
from setuptools import setup

setup(
    name='my_module',
    packages=['my_module'],
    package_data={'': ['*.so', '*.pyi']},
)
Create the wheel with:
pip install wheel
pip wheel .

How to force a python wheel to be platform specific when building it?

I am working on a python2 package in which the setup.py contains some custom install commands. These commands actually build some Rust code and output some .dylib files that are moved into the python package.
An important point is that the Rust code is outside the python package.
setuptools is supposed to detect automatically if the python package is pure python or platform specific (if it contains some C extensions for instance).
In my case, when I run python setup.py bdist_wheel, the generated wheel is tagged as a pure python wheel: <package_name>-<version>-py2-none-any.whl.
This is problematic because I need to run this code on different platforms, and thus I need to generate one wheel per platform.
Is there a way, when building a wheel, to force the build to be platform-specific?
Here's the code that I usually look at, from uwsgi:
The basic approach is:
setup.py
# ...
try:
    from wheel.bdist_wheel import bdist_wheel as _bdist_wheel

    class bdist_wheel(_bdist_wheel):
        def finalize_options(self):
            _bdist_wheel.finalize_options(self)
            self.root_is_pure = False
except ImportError:
    bdist_wheel = None

setup(
    # ...
    cmdclass={'bdist_wheel': bdist_wheel},
)
Setting root_is_pure to False tells the wheel machinery to build a non-purelib wheel (instead of a pyX-none-any one). You can also get fancier by saying there are binary platform-specific components but no CPython ABI-specific components, as sketched below.
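One way to do that (a sketch, not from the uwsgi code above) is to also override get_tag so the wheel keeps its platform tag but advertises no ABI dependence:
from wheel.bdist_wheel import bdist_wheel as _bdist_wheel

class bdist_wheel(_bdist_wheel):
    def finalize_options(self):
        _bdist_wheel.finalize_options(self)
        self.root_is_pure = False

    def get_tag(self):
        # Keep the interpreter and platform tags, drop the ABI tag.
        python, abi, plat = _bdist_wheel.get_tag(self)
        return python, 'none', plat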
The modules setuptools, distutils and wheel decide whether a python distribution is pure by checking if it has ext_modules.
If you build an external module on your own, you can still list it in ext_modules so that the building tools know it exists. The trick is to provide an empty list of sources so that setuptools and distutils will not try to build it. For example,
setup(
    ...,
    ext_modules=[
        setuptools.Extension(
            name='your.external.module',
            sources=[]
        )
    ]
)
This solution worked better for me than patching the bdist_wheel command. The reason is that bdist_wheel calls the install command internally and that command checks again for the existence of ext_modules to decide between purelib or platlib install. If you don't list the external module, you end up with the lib installed in a purelib subfolder inside the wheel. That causes problems when using auditwheel repair, which complains about the extensions being installed in a purelib folder.
You can also specify/spoof a specific platform name when building wheels by specifying a --plat-name:
python setup.py bdist_wheel --plat-name=manylinux1_x86_64
Neither the root_is_pure trick nor the empty ext_modules trick worked for me, but after MUCH searching, I finally found a working solution in 'pip setup.py bdist_wheel' no longer builds forced non-pure wheels.
Basically, you override the 'has_ext_modules' function in the Distribution class, and set distclass to point to the overriding class. At that point, setup.py will believe you have a binary distribution, and will create a wheel with the specific version of python, the ABI, and the current architecture. As suggested by https://stackoverflow.com/users/5316090/py-j:
from setuptools import setup
from setuptools.dist import Distribution

DISTNAME = "packagename"
DESCRIPTION = ""
MAINTAINER = ""
MAINTAINER_EMAIL = ""
URL = ""
LICENSE = ""
DOWNLOAD_URL = ""
VERSION = '1.2'
PYTHON_VERSION = (2, 7)

# Tested with wheel v0.29.0
class BinaryDistribution(Distribution):
    """Distribution which always forces a binary package with platform name"""
    def has_ext_modules(foo):
        return True

setup(name=DISTNAME,
      description=DESCRIPTION,
      maintainer=MAINTAINER,
      maintainer_email=MAINTAINER_EMAIL,
      url=URL,
      license=LICENSE,
      download_url=DOWNLOAD_URL,
      version=VERSION,
      packages=["packagename"],
      # Include pre-compiled extension
      package_data={"packagename": ["_precompiled_extension.pyd"]},
      distclass=BinaryDistribution)
I find Anthony Sottile's answer great, but it didn't work for me.
My case is that I have to force the wheel to be created for x86_64, but for any Python 3, so making the root impure actually caused my wheel to be tagged py36-cp36 :(
A better way, IMO, in general is just to use sys.argv:
from setuptools import setup
import sys

sys.argv.extend(['--plat-name', 'x86_64'])

setup(name='example-wheel')

In a setup.py involving Cython, if install_requires, then how can from library import something?

This doesn't make sense to me. How can I use setup.py to install Cython and then also use setup.py to compile a library proxy?
import sys, imp, os, glob
from setuptools import setup
from Cython.Build import cythonize  # this isn't installed yet

setup(
    name='mylib',
    version='1.0',
    package_dir={'mylib': 'mylib', 'mylib.tests': 'tests'},
    packages=['mylib', 'mylib.tests'],
    ext_modules=cythonize("mylib_proxy.pyx"),  # how can we call cythonize here?
    install_requires=['cython'],
    test_suite='tests',
)
Later:
python setup.py build

Traceback (most recent call last):
  File "setup.py", line 3, in <module>
    from Cython.Build import cythonize
ImportError: No module named Cython.Build
It's because cython isn't installed yet.
What's odd is that a great many projects are written this way. A quick github search reveals as much: https://github.com/search?utf8=%E2%9C%93&q=install_requires+cython&type=Code
As I understand it, this is where PEP 518 comes in - also see some clarifications by one of its authors.
The idea is that you add yet another file to your Python project / package: pyproject.toml. It is supposed to contain information on build environment dependencies (among other stuff, long term). pip (or just any other package manager) could look into this file and before running setup.py (or any other build script) install the required build environment. A pyproject.toml could therefore look like this:
[build-system]
requires = ["setuptools", "wheel", "Cython"]
It is a fairly recent development and, as of yet (January 2019), it is not finalized / approved by the Python community, though (limited) support was added to pip in May 2017 / the 10.0 release.
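With such a pyproject.toml in place, a setup.py like the one in the question becomes valid as written, because pip installs the requires entries into an isolated build environment before it ever runs the script. A minimal sketch:
# setup.py - safe to import Cython at the top: PEP 518 guarantees it is
# installed into the build environment before this file is executed.
from setuptools import setup
from Cython.Build import cythonize

setup(
    name='mylib',
    version='1.0',
    ext_modules=cythonize("mylib_proxy.pyx"),
)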
One solution to this is to not make Cython a build requirement, and instead distribute the Cython-generated C files with your package. I'm sure there is a simpler example somewhere, but this is what pandas does: it conditionally imports Cython, and if Cython is not present the package can be built from the C files.
https://github.com/pandas-dev/pandas/blob/3ff845b4e81d4dde403c29908f5a9bbfe4a87788/setup.py#L433
Edit: The doc link from #danny has an easier-to-follow example:
http://docs.cython.org/en/latest/src/reference/compilation.html#distributing-cython-modules
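The pattern those docs describe boils down to something like the following sketch (module names reused from the question): cythonize the .pyx when Cython is importable, and otherwise build from the shipped, pre-generated C file:
from setuptools import setup, Extension

try:
    from Cython.Build import cythonize
    # Cython is available: regenerate the C from the .pyx source.
    extensions = cythonize([Extension("mylib_proxy", ["mylib_proxy.pyx"])])
except ImportError:
    # No Cython: fall back to the distributed, pre-generated C file.
    extensions = [Extension("mylib_proxy", ["mylib_proxy.c"])]

setup(
    name='mylib',
    version='1.0',
    ext_modules=extensions,
)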
When you use setuptools, you should add cython to setup_requires (and also to install_requires if cython is used by the installation), i.e.
# don't import cython, it isn't yet there
from setuptools import setup, Extension

# use Extension, rather than cythonize (it is not yet available)
cy_extension = Extension(name="mylib_proxy", sources=["mylib_proxy.pyx"])

setup(
    name='mylib',
    ...
    ext_modules=[cy_extension],
    setup_requires=["cython"],
    ...
)
Cython isn't imported (it is not yet available when setup.py starts); setuptools.Extension is used instead of cythonize to add the Cython extension to the setup.
It should work now. The reason: setuptools will try to import cython, after setup_requires are fulfilled:
...
try:
    # Attempt to use Cython for building extensions, if available
    from Cython.Distutils.build_ext import build_ext as _build_ext
    # Additionally, assert that the compiler module will load
    # also. Ref #1229.
    __import__('Cython.Compiler.Main')
except ImportError:
    _build_ext = _du_build_ext
...
It becomes more complicated if your Cython extension uses numpy, but that too is possible; see this SO post.
It doesn't make sense in general. It is, as you suspect, an attempt to use something that (possibly) has yet to be installed. If tested on a system that already has the dependency installed, you might not notice this defect. But run it on a system where your dependency is absent, and you will certainly notice.
There is another setup() keyword argument, setup_requires, that can appear to be parallel in form and use to install_requires, but this is an illusion. Whereas install_requires triggers a lovely ballet of automatic installation in environments that lack the dependencies it names, setup_requires is more documentation than automation. It won't auto-install, and certainly not magically jump back in time to auto-install modules that have already been called for in import statements.
There's more on this at the setuptools docs, but the quick answer is that you're right to be confused by a module that is trying to auto-install its own setup pre-requisites.
For a practical workaround, try installing cython separately, and then run this setup. While it won't fix the metaphysical illusions of this setup script, it will resolve the requirements and let you move on.
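In shell terms, that workaround is simply:
pip install cython
python setup.py build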
