I have created a simple C function (using the guide here: Create Numpy ufunc) and I'm now trying to distribute my package on PyPI. For it to work, the C file(s) need to be compiled into a .so file, which can then be imported from Python. The compilation needs the header file numpy/ndarraytypes.h from NumPy.
When building and installing locally, it works fine, because I know where the header files are located. But when distributing the package, where can we find the NumPy folder? It is obvious from the logs that numpy gets installed before my package is built and installed, so I just need to include the correct NumPy folder.
from setuptools import setup
from setuptools import Extension

if __name__ == "__main__":
    setup(
        name="myPack",
        install_requires=[
            "numpy>=1.22.3",  # <- where is this one located after it gets installed?
        ],
        ext_modules=[
            Extension(
                'npufunc',
                sources=['npufunc.c'],
                include_dirs=[
                    # what to put here for it to find the "numpy/ndarraytypes.h" header file when installing?
                    "/usr/local/Cellar/numpy/1.22.3_1/lib/python3.9/site-packages/numpy/core/include/"  # <- this one I have locally; when added, installation works fine
                ]
            ),
        ]
    )
You can query numpy for the include directory from Python via
import numpy
numpy.get_include()
This should return a string (/usr/lib/python3.10/site-packages/numpy/core/include on my system) which you can add to include_dirs. See here for the docs.
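Applied to the setup.py above, a minimal sketch (assuming numpy is already importable when setup.py runs, e.g. because it was installed beforehand) could look like this:

from setuptools import setup, Extension
import numpy  # assumes numpy is installed before setup.py executes

setup(
    name="myPack",
    ext_modules=[
        Extension(
            'npufunc',
            sources=['npufunc.c'],
            # query numpy for its header location instead of hard-coding a path
            include_dirs=[numpy.get_include()],
        ),
    ],
)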
As for your question: numpy is a build dependency for your project. The old setup.py method is rather bad at handling those, which is why it has been superseded by the more modern pyproject.toml approach, which (among other things) makes such specifications possible. A fairly minimal pyproject.toml for your setting (setuptools + numpy) would look like this:
[build-system]
requires = ["setuptools", "wheel", "oldest-supported-numpy"]
build-backend = "setuptools.build_meta"
Given such a pyproject.toml you can build the extension using the python build module by calling
python -m build
which should produce a wheel with a compiled .so inside it.
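For example (the build module itself has to be installed first; the wheel filename below is only illustrative):

python -m pip install build
python -m build
python -m pip install dist/myPack-*.whl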
Related
I'm trying to use the CFFI package in Python to create a Python interface for already existing C code.
I am able to compile a C library by following this blog post. Now I want to make it so that this Python library is available without any fancy updates to sys.path.
I found that creating a distribution through Python's setuptools setup() function might accomplish this, and I got it mostly working by creating a setup.py file like this:
import os
import sys
from setuptools import setup, find_packages

os.chdir(os.path.dirname(sys.argv[0]) or ".")

setup(
    name="SobelFilterTest",
    version="0.1",
    description="An example project using Python's CFFI",
    packages=find_packages(),
    install_requires=["cffi>=1.0.0"],
    setup_requires=["cffi>=1.0.0"],
    cffi_modules=[
        "./src/build_sobel.py:ffi",
        "./src/build_file_operations.py:ffi",
    ],
)
However, I run into this error:
build/temp.linux-x86_64-3.5/_sobel.c:492:19: fatal error: sobel.h: No such file or directory
From what I can tell, the problem is that the sobel.h file does not get copied into the build folder created by setuptools.setup(). I looked for suggestions of what to do, including using Extension() and writing a MANIFEST.in file, and both seem to add a relative path to the correct header files:
MANIFEST.in
setup.py
SobelFilterTest.egg-info/PKG-INFO
SobelFilterTest.egg-info/SOURCES.txt
SobelFilterTest.egg-info/dependency_links.txt
SobelFilterTest.egg-info/requires.txt
SobelFilterTest.egg-info/top_level.txt
src/file_operations.h
src/macros.h
src/sobel.h
But I still get the same error message. Is there a correct way to go about adding the header file to the build folder? Thanks!
It's actually not pip that is missing the .h file, but rather the compiler (like gcc). Therefore it's not about adding the missing file to the setup, but rather about making sure that cffi can find it. One way (as mentioned in the comments) is to make it available to the compiler through environment variables, but there is another way.
When setting the source with cffi you can add directories for the compiler like this:
from cffi import FFI
ffibuilder = FFI()
ffibuilder.set_source("<YOUR SOURCE HERE>", include_dirs=["./src"])
# ... Rest of your code
"""
This doesn't make sense to me. How can I use the setup.py to install Cython and then also use the setup.py to compile a library proxy?
import sys, imp, os, glob
from setuptools import setup
from Cython.Build import cythonize  # this isn't installed yet

setup(
    name='mylib',
    version='1.0',
    package_dir={'mylib': 'mylib', 'mylib.tests': 'tests'},
    packages=['mylib', 'mylib.tests'],
    ext_modules=cythonize("mylib_proxy.pyx"),  # how can we call cythonize here?
    install_requires=['cython'],
    test_suite='tests',
)
Later:
python setup.py build
Traceback (most recent call last):
  File "setup.py", line 3, in <module>
    from Cython.Build import cythonize
ImportError: No module named Cython.Build
It's because cython isn't installed yet.
What's odd is that a great many projects are written this way. A quick github search reveals as much: https://github.com/search?utf8=%E2%9C%93&q=install_requires+cython&type=Code
As I understand it, this is where PEP 518 comes in - also see some clarifications by one of its authors.
The idea is that you add yet another file to your Python project / package: pyproject.toml. It is supposed to contain information on build environment dependencies (among other things, long term). pip (or any other package manager) can look into this file and, before running setup.py (or any other build script), install the required build environment. A pyproject.toml could therefore look like this:
[build-system]
requires = ["setuptools", "wheel", "Cython"]
It is a fairly recent development and, as of this answer (January 2019), the tooling around it is still maturing, though (limited) support was added to pip with the 10.0 release in April 2018.
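With the build requirements declared this way, pip installs Cython into an isolated build environment before executing setup.py, so the import at the top of the file succeeds. A minimal sketch of the accompanying setup.py (module names taken from the question):

from setuptools import setup, Extension
from Cython.Build import cythonize  # safe: pip installs Cython first, per pyproject.toml

setup(
    name='mylib',
    version='1.0',
    ext_modules=cythonize([Extension("mylib_proxy", ["mylib_proxy.pyx"])]),
)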
One solution to this is to not make Cython a build requirement, and instead distribute the Cython-generated C files with your package. I'm sure there is a simpler example somewhere, but this is what pandas does - it conditionally imports Cython, and if Cython is not present the package is built from the C files.
https://github.com/pandas-dev/pandas/blob/3ff845b4e81d4dde403c29908f5a9bbfe4a87788/setup.py#L433
Edit: The doc link from #danny has an easier-to-follow example.
http://docs.cython.org/en/latest/src/reference/compilation.html#distributing-cython-modules
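The pattern from those docs, roughly, is the following (a sketch reusing the module names from the question):

from setuptools import setup, Extension

try:
    from Cython.Build import cythonize
except ImportError:
    cythonize = None

if cythonize is not None:
    # Cython available: compile from the .pyx source
    ext_modules = cythonize([Extension("mylib_proxy", ["mylib_proxy.pyx"])])
else:
    # Cython missing: fall back to the shipped, pre-generated C file
    ext_modules = [Extension("mylib_proxy", ["mylib_proxy.c"])]

setup(
    name='mylib',
    version='1.0',
    ext_modules=ext_modules,
)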
When you use setuptools, you should add cython to setup_requires (and also to install_requires if cython is used by the installed package), i.e.
# don't import cython, it isn't there yet
from setuptools import setup, Extension

# use Extension rather than cythonize (it is not yet available)
cy_extension = Extension(name="mylib_proxy", sources=["mylib_proxy.pyx"])

setup(
    name='mylib',
    ...
    ext_modules=[cy_extension],
    setup_requires=["cython"],
    ...
)
Cython isn't imported (it is not yet available when setup.py starts), but setuptools.Extension is used instead of cythonize to add the Cython extension to the setup.
It should work now. The reason: setuptools will try to import cython, after setup_requires are fulfilled:
...
try:
    # Attempt to use Cython for building extensions, if available
    from Cython.Distutils.build_ext import build_ext as _build_ext
    # Additionally, assert that the compiler module will load
    # also. Ref #1229.
    __import__('Cython.Compiler.Main')
except ImportError:
    _build_ext = _du_build_ext
...
It becomes more complicated, if your Cython-extension uses numpy, but also this is possible - see this SO post.
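The commonly cited workaround from that post (a sketch, not the exact code) defers the numpy import until setuptools has fulfilled setup_requires, by overriding build_ext:

from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext as _build_ext

class build_ext(_build_ext):
    def finalize_options(self):
        _build_ext.finalize_options(self)
        # numpy is importable by now: setup_requires has been fulfilled
        import numpy
        self.include_dirs.append(numpy.get_include())

setup(
    name='mylib',
    ext_modules=[Extension("mylib_proxy", ["mylib_proxy.pyx"])],
    cmdclass={'build_ext': build_ext},
    setup_requires=["cython", "numpy"],
)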
It doesn't make sense in general. It is, as you suspect, an attempt to use something that (possibly) has yet to be installed. If tested on a system that already has the dependency installed, you might not notice this defect. But run it on a system where your dependency is absent, and you will certainly notice.
There is another setup() keyword argument, setup_requires, that can appear parallel in form and use to install_requires, but this is an illusion. Whereas install_requires triggers a lovely ballet of automatic installation in environments that lack the dependencies it names, setup_requires is resolved only once setup() actually executes - and it certainly can't magically jump back in time to satisfy modules that have already been named in import statements at the top of the script.
There's more on this at the setuptools docs, but the quick answer is that you're right to be confused by a module that is trying to auto-install its own setup prerequisites.
For a practical workaround, try installing cython separately, and then run this setup. While it won't fix the metaphysical illusions of this setup script, it will resolve the requirements and let you move on.
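Concretely, the workaround is just:

pip install cython
python setup.py build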
I am building a package in Cython. I am using the following as the structure for setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
import numpy
import scipy

extensions = [
    Extension("xxxxx", ["xxxx/xxxxx.pyx"],
              include_dirs=[numpy.get_include(), "."]),
    Extension("nnls", ["xxxxx/xxxxx.pyx"],
              include_dirs=[numpy.get_include(), "."]),
]

setup(
    name='xxxxxx',
    version='0.0.0',
    description='''********''',
    url='xxxxxxx',
    author='xxxxx',
    author_email='xxxxx',
    packages=[
        'xxxxx',
    ],
    install_requires=[
        'cython',
        'numpy',
        'scipy',
    ],
    ext_modules=cythonize(extensions),
)
However, I am getting an error upon installation in Python 3. It works in Python 2; in Python 3, however, it fails to compile with the following error:
dynamic module does not define module export function
How can I solve this problem? Is the structure of the setup.py the reason why this is not compiling?
You need to call setup.py with Python 3 (python3 setup.py build_ext, maybe with --inplace). This is because Python 3 defines a different name for the init function that is called when the module first loads, so you need to build it using Python 3 to ensure the correct name is generated.
See dynamic module does not define init function (PyInit_fuzzy) and How to specify Python 3 source in Cython's setup.py? for slightly more detail (it's bordering on a duplicate of these questions, but isn't quite in my view)
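For example, rebuilding under Python 3 and (on Linux) checking that the expected init symbol is present - using the fuzzy module from the linked question for illustration:

python3 setup.py build_ext --inplace
nm -D fuzzy*.so | grep PyInit   # should list PyInit_fuzzy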
I experienced this and found that I had to use the same name for the .pyx file as the module name, e.g.
makefile:
# (default)
# INSTALL_DIR:=/usr/lib/python3.6/site-packages
# (my venv)
INSTALL_DIR:=/home/<username>/python3_venv/lib/python3.6/site-packages

all:
	sudo python3 setup_myproj.py install --install-lib ${INSTALL_DIR}
setup_myproj.py
from distutils.core import setup, Extension
from Cython.Build import cythonize

ext = Extension("myproj",
                sources=["myproj.pyx", "myCppProjFacade.cpp"],
                <etc>
                language="c++")

setup(name="myproj",
      version="0.0.1",
      ext_modules=cythonize(ext))
client module, run after installing to venv
import myproj as myCppProjWrapper
...
I also found that if the "myproj" names are different, the .so and .egg-info names under <python-lib-dir>/<python-vers>/site-packages differ as well, and the client fails to load the module.
In addition I found that the client's environment does not need to have the cython package installed.
I had the same error for torchvision. Fixed it by downgrading the installed versions:
pip install torch==1.2.0+cu92 torchvision==0.4.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
I am working on a fork of a Python project (tryton) which uses setuptools for packaging. I am trying to extend the server part of the project, and would like to be able to use the existing modules with my fork.
Those modules are distributed with setuptools packaging and require the base project for installation.
I need a way to make my fork be considered an acceptable requirement for those modules.
EDIT: Here is what I used in my setup.py:

from setuptools import setup

setup(
    ...
    provides=["trytond (2.8.2)"],
    ...
)
The modules I want to be able to install have those requirements :
from setuptools import setup

setup(
    ...
    install_requires=["trytond>=2.8"],
    ...
)
As it is, with my package installed, trying to install a module triggers the installation of the trytond package.
Don't use provides; it comes from a packaging specification (a metadata PEP) that is not implemented by any tool. The requirements in the install_requires argument map to the name in your other setup.py. IOW, replace your provides with setup(name='trytond', version='2.8.2').
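A sketch of such a setup.py (keeping the rest of your fork's original arguments):

from setuptools import setup

setup(
    name='trytond',   # match the distribution name the modules require
    version='2.8.2',
    # ... the rest of the fork's original setup() arguments
)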
If you are building rpms, it is possible to use the setup.cfg as follows:
[bdist_rpm]
provides = your-package = 0.8
obsoletes = your-package
I would like to create a setup.py script for a python package with several submodules that depend on both cython and f2py. I have attempted to use setuptools and numpy.distutils, but have so far failed:
Using setuptools
I am able to compile my cython extensions (and create an installation for the rest of the package) using setuptools. I have, however, been unable to figure out how to use setuptools to generate the f2py extension. After extensive searching, I only found rather old messages like this one that state that f2py modules must be compiled using numpy.distutils.
Using numpy.distutils
I am able to compile my f2py extensions (and create an installation for the rest of the package) using numpy.distutils. I have, however, been unable to figure out how to get numpy.distutils to compile my Cython extensions, as it always attempts to use Pyrex to compile them (and I am using extensions specific to Cython). I have searched for how to get numpy.distutils to handle Cython files and - at least as of a year ago - the recommendation was to apply a monkey patch to numpy.distutils. It seems applying such a monkey patch also restricts the options that can be passed to Cython.
My question is: what is the recommended way to write a setup.py script for packages that depend on both f2py and cython? Is applying a patch to numpy.distutils really the way to go still?
You can just call each separately in your setup.py as in
http://answerpot.com/showthread.php?601643-cython%20and%20f2py
# Cython extension
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

setup(
    ext_modules=[Extension('cext', ['cext.pyx'])],
    cmdclass={'build_ext': build_ext},
    script_args=['build_ext', '--inplace'],
)

# Fortran extension
from numpy.distutils.core import setup, Extension

setup(
    ext_modules=[Extension('fext', ['fext.f90'])],
)
Your calling context (namespace) determines which Extension object and which setup() function are in scope.
In the first setup() call, they are distutils.extension.Extension and distutils.core.setup().
In the second setup() call, they are numpy.distutils.core.Extension and numpy.distutils.core.setup().
Turns out this is no longer true. With both setuptools and distutils (at least the numpy version) it is possible to have extensions with C, Cython and f2py. The only caveat is that to compile f2py modules one must always use numpy.distutils for both the setup and Extension functions. But setuptools can still be used for the installation (for example, allowing the installation of a developer version with python setup.py develop).
To use distutils exclusively you use the following:
from numpy.distutils.core import setup
from numpy.distutils.extension import Extension
To use setuptools, you need to import it before the distutils imports:
import setuptools
And then the rest of the code is identical:
from numpy import get_include
from Cython.Build import cythonize

NAME = 'my_package'
NUMPY_INC = get_include()

extensions = [
    Extension(name=NAME + ".my_cython_ext",
              include_dirs=[NUMPY_INC, "my_c_dir"],
              sources=["my_cython_ext.pyx", "my_c_dir/my_ext_c_file.c"]),
    Extension(name=NAME + ".my_f2py_ext",
              sources=["my_f2py_ext.f"]),
]

extensions = cythonize(extensions)

setup(..., ext_modules=extensions)
Obviously you need to put all your other stuff in the setup() call. The above assumes that you'll use numpy with Cython, along with an external C file (my_ext_c_file.c) located under my_c_dir/, and that the f2py module is a single Fortran file. Adjust as needed.
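The build is then driven as usual; as noted above, the setuptools import also enables a development install:

python setup.py build_ext --inplace
python setup.py develop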