I have a packaged project mytools which uses setuptools' setup() to store its version in a setup.py project file, e.g.:
import setuptools
setuptools.setup(
    name='mytools',
    version='0.1.0',
)
I'd like to get the common mytools.__version__ attribute based on that version value, e.g.:
>>> import mytools
>>> mytools.__version__
'0.1.0'
Is there a native / simple way in setuptools to do this? I couldn't find a reference to __version__ in setuptools.
Furthermore, I don't want to store the version in __init__.py because I'd prefer to keep the version in its current place (setup.py).
The many answers to similar questions do not speak to my specific problem, e.g. How can I get the version defined in setup.py (setuptools) in my package?
Adding __version__ to all top-level modules and packages is a recommendation from PEP 396.
Lately I have seen growing concerns raised about this recommendation and its actual usefulness, for example here:
https://gitlab.com/python-devs/importlib_resources/-/issues/100
https://gitlab.com/python-devs/importlib_metadata/-/merge_requests/125
some more that I can't find right now...
With that said...
Such a thing is often solved like the following:
# my_top_level_module/__init__.py
import importlib.metadata
__version__ = importlib.metadata.version('MyProject')
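One caveat: importlib.metadata.version() raises PackageNotFoundError when the distribution is not installed (e.g., when running straight from a source checkout), so a guarded variant of the snippet above is sometimes used. A minimal sketch (the fallback string is an arbitrary placeholder):
# my_top_level_module/__init__.py
import importlib.metadata

try:
    __version__ = importlib.metadata.version('MyProject')
except importlib.metadata.PackageNotFoundError:
    # Distribution not installed, e.g. running from a source tree
    __version__ = '0.0.0.dev0'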
References:
https://docs.python.org/3/library/importlib.metadata.html
https://importlib-metadata.readthedocs.io/en/latest/using.html#distribution-versions
Related
I love the way CMake allows me to single-source the version of my C/C++ projects, by letting me say:
project(Tutorial VERSION 1.0)
in my CMakeLists.txt file and then use placeholders of the form:
#define ver_maj @Tutorial_VERSION_MAJOR@
#define ver_min @Tutorial_VERSION_MINOR@
in my *.h.in file, which, when run through configure_file(), becomes:
#define ver_maj 1
#define ver_min 0
in my equivalent *.h file.
I'm then able to include that file anywhere I need access to my project version numbers.
This is decidedly different from the experience I have with Python projects.
In that case, I'm often rebuilding because I forgot to sync up the version numbers in my pyproject.toml and <module>/__init__.py files.
What is the preferred way to achieve something similar to CMake style single source project versioning in a Python project?
In the __init__.py you can set the __version__ like this:
from importlib.metadata import version as _get_version
# Set PEP 396 version attribute
__version__ = _get_version('<put-your-package-name-here>')
Or, if your package targets older Python versions (< 3.8):
import sys

if sys.version_info >= (3, 8):
    from importlib.metadata import version as _get_version
else:
    from importlib_metadata import version as _get_version
# …
and a package requirement:
importlib-metadata = {version = ">= 3.6", markers="python_version < '3.8'"}
(use the proper syntax for your build tool)
Also, there are extensions for build tools (e.g., setuptools-scm or hatch-vcs) that allow you to avoid setting an explicit version in pyproject.toml and instead derive it from the VCS (e.g., git). The VCS then becomes the single source of the version.
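For example, a minimal pyproject.toml using setuptools-scm might look like the following sketch (assuming a setuptools build backend; the project name is a placeholder):
[build-system]
requires = ["setuptools>=64", "setuptools_scm>=8"]
build-backend = "setuptools.build_meta"

[project]
name = "my-project"
dynamic = ["version"]

[tool.setuptools_scm]
# version is derived from the VCS, e.g. the latest git tag v1.2.3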
Given a PyPI package name, like PyYAML, how can one programmatically
determine the modules available within the package (distribution package) that could be imported?
Detail
I'm not specifically interested in PyYAML, it's just a good example of a popular PyPI package which has a different
package name (PyYAML)
from its primary module name (yaml)
such that you can't easily guess the module name from the package name.
I've seen other answers to questions that sound like this but are different, likely because of a naming collision:
package meaning a Python construct allowing for a collection of modules
package meaning a "Distribution Package", an archive file that
contains Python packages, modules, and other resource files that are used to distribute a Release.
My question is about the relationship between distribution packages and the modules within.
Possible Solution Spaces
Areas that seem like they might be fruitful (but which I've not had success with yet) are:
The pydoc.help function
(surfaced as the help built-in)
outputs a complete list of all available modules when called as help('modules'). This
shows modules that have not been imported but could be. It writes human-readable output to stdout, and I've been unable to figure out how the pydoc code enumerates the modules.
I could imagine calling this, gathering the module list, installing a new distribution package into a virtualenv with pip programmatically, calling it again, and diffing the results (see the sketch after this list).
Programmatically installing a distribution package with pip in order to iterate through the elements of the Python path to find modules
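For illustration only, a rough sketch of the list-and-diff idea above, using pkgutil (roughly what pydoc relies on) instead of parsing the text printed by help('modules'); PyYAML is just an example distribution, and this should be run in a throwaway virtualenv:
import importlib
import pkgutil
import subprocess
import sys

def top_level_names():
    # Names of all top-level modules/packages currently importable
    return {info.name for info in pkgutil.iter_modules()}

before = top_level_names()
subprocess.check_call([sys.executable, "-m", "pip", "install", "PyYAML"])
importlib.invalidate_caches()
after = top_level_names()
print(sorted(after - before))  # for PyYAML, something like ['_yaml', 'yaml']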
My project johnnydep provides exactly this feature:
$ johnnydep --fields=import_names PyYAML
name import_names
------ --------------
PyYAML yaml
Note that some distributions export multiple top-level names, some distributions export none at all, and there is not necessarily any obvious relationship between the distribution name (used with a pip install command) and the package name (used with an import statement) - though it is a common convention for them to be matched.
For example, the popular project setuptools exposes three top-level names:
$ johnnydep --fields=import_names setuptools
name import_names
---------- ---------------------------------------
setuptools easy_install, pkg_resources, setuptools
API usage is via attribute access:
>>> from johnnydep.lib import JohnnyDist
>>> jdist = JohnnyDist("setuptools")
>>> jdist.import_names
['easy_install', 'pkg_resources', 'setuptools']
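If a stdlib-only approach is acceptable and you are on Python 3.10+, importlib.metadata.packages_distributions() maps top-level import names to the distributions providing them; a small sketch inverting that mapping for a given distribution name (the exact output can vary by version and environment):
import importlib.metadata

def import_names_for(dist_name):
    # packages_distributions(): {top-level module name: [distribution names]}
    mapping = importlib.metadata.packages_distributions()
    return sorted(mod for mod, dists in mapping.items() if dist_name in dists)

print(import_names_for("PyYAML"))  # e.g. ['_yaml', 'yaml']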
If you are interested in submodule names, not top-level names, that's possible with the stdlib pkgutil, for example:
>>> import pkgutil, requests
>>> [name for finder, name, ispkg in pkgutil.walk_packages(requests.__path__)]
['__version__',
'_internal_utils',
'adapters',
'api',
'auth',
'certs',
'compat',
'cookies',
'exceptions',
'help',
'hooks',
'models',
'packages',
'sessions',
'status_codes',
'structures',
'utils']
I'm trying to use the CFFI package in Python to create a Python interface for already existing C code.
I am able to compile a C library by following this blog post. Now I want to make this Python library available without any fancy updates to sys.path.
I found that creating a distribution through Python's setuptools setup() function might accomplish this, and I got it mostly working by creating a setup.py file like this:
import os
import sys
from setuptools import setup, find_packages
os.chdir(os.path.dirname(sys.argv[0]) or ".")
setup(
    name="SobelFilterTest",
    version="0.1",
    description="An example project using Python's CFFI",
    packages=find_packages(),
    install_requires=["cffi>=1.0.0"],
    setup_requires=["cffi>=1.0.0"],
    cffi_modules=[
        "./src/build_sobel.py:ffi",
        "./src/build_file_operations.py:ffi",
    ],
)
However, I run into this error:
build/temp.linux-x86_64-3.5/_sobel.c:492:19: fatal error: sobel.h: No such file or directory
From what I can tell, the problem is that the sobel.h file does not get copied into the build folder created by setuptools.setup(). I looked for suggestions of what to do, including using Extension() and writing a MANIFEST.in file, and both seem to add a relative path to the correct header files:
MANIFEST.in
setup.py
SobelFilterTest.egg-info/PKG-INFO
SobelFilterTest.egg-info/SOURCES.txt
SobelFilterTest.egg-info/dependency_links.txt
SobelFilterTest.egg-info/requires.txt
SobelFilterTest.egg-info/top_level.txt
src/file_operations.h
src/macros.h
src/sobel.h
But I still get the same error message. Is there a correct way to go about adding the header file to the build folder? Thanks!
It's actually not pip that is missing the .h file, but rather the compiler (like gcc). Therefore it's not about adding the missing file to the setup, but rather about making sure that cffi can find it. One way (as mentioned in the comments) is to make it available to the compiler through environment variables, but there is another way.
When setting the source with cffi you can add directories for the compiler like this:
from cffi import FFI
ffibuilder = FFI()
ffibuilder.set_source("<YOUR SOURCE HERE>", include_dirs=["./src"])
# ... Rest of your code
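For context, a fuller (hypothetical) build script for the sobel example might look like the sketch below; the module name, header contents, and the sobel_filter signature are assumptions made for illustration, not taken from the question:
# src/build_sobel.py (sketch)
from cffi import FFI

ffi = FFI()

# Declarations the Python side needs; assumed signature for illustration
ffi.cdef("int sobel_filter(const char *in_path, const char *out_path);")

ffi.set_source(
    "_sobel",                    # name of the generated extension module
    '#include "sobel.h"',        # C source that pulls in the real header
    include_dirs=["./src"],      # lets the compiler find sobel.h
    sources=["./src/sobel.c"],   # compile the implementation too
)

if __name__ == "__main__":
    ffi.compile(verbose=True)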
"""
This doesn't make sense to me. How can I use the setup.py to install Cython and then also use the setup.py to compile a library proxy?
import sys, imp, os, glob
from setuptools import setup
from Cython.Build import cythonize # this isn't installed yet
setup(
    name='mylib',
    version='1.0',
    package_dir={'mylib': 'mylib', 'mylib.tests': 'tests'},
    packages=['mylib', 'mylib.tests'],
    ext_modules=cythonize("mylib_proxy.pyx"),  # how can we call cythonize here?
    install_requires=['cython'],
    test_suite='tests',
)
Later:
python setup.py build
Traceback (most recent call last):
  File "setup.py", line 3, in <module>
    from Cython.Build import cythonize
ImportError: No module named Cython.Build
It's because cython isn't installed yet.
What's odd is that a great many projects are written this way. A quick GitHub search reveals as much: https://github.com/search?utf8=%E2%9C%93&q=install_requires+cython&type=Code
As I understand it, this is where PEP 518 comes in - also see some clarifications by one of its authors.
The idea is that you add yet another file to your Python project / package: pyproject.toml. It is supposed to contain information on build environment dependencies (among other stuff, long term). pip (or just any other package manager) could look into this file and before running setup.py (or any other build script) install the required build environment. A pyproject.toml could therefore look like this:
[build-system]
requires = ["setuptools", "wheel", "Cython"]
It is a fairly recent development and, as of yet (January 2019), it is not finalized / approved by the Python community, though (limited) support was added to pip in May 2017 / the 10.0 release.
One solution to this is to not make Cython a build requirement, and instead distribute the Cython-generated C files with your package. I'm sure there is a simpler example somewhere, but this is what pandas does - it conditionally imports Cython, and if it is not present, the package is built from the shipped C files.
https://github.com/pandas-dev/pandas/blob/3ff845b4e81d4dde403c29908f5a9bbfe4a87788/setup.py#L433
Edit: The doc link from #danny has an easier to follow example.
http://docs.cython.org/en/latest/src/reference/compilation.html#distributing-cython-modules
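A minimal sketch of that pattern (use Cython when it is installed, otherwise fall back to the shipped, pre-generated C file) could look like this; the module and file names are placeholders taken from the question:
# setup.py - "ship the generated C" approach (sketch)
from setuptools import setup, Extension

try:
    from Cython.Build import cythonize
    USE_CYTHON = True
except ImportError:
    USE_CYTHON = False

ext = ".pyx" if USE_CYTHON else ".c"
extensions = [Extension("mylib_proxy", ["mylib_proxy" + ext])]

if USE_CYTHON:
    # Regenerate the C file from the .pyx source
    extensions = cythonize(extensions)

setup(
    name="mylib",
    version="1.0",
    ext_modules=extensions,
)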
When you use setuptools, you should add cython to setup_requires (and also to install_requires if cython is needed by the installation), i.e.:
# don't import cython, it isn't yet there
from setuptools import setup, Extension
# use Extension, rather than cythonize (it is not yet available)
cy_extension = Extension(name="mylib_proxy", sources=["mylib_proxy.pyx"])
setup(
    name='mylib',
    ...
    ext_modules=[cy_extension],
    setup_requires=["cython"],
    ...
)
Cython isn't imported (it is not yet available when setup.py starts), but setuptools.Extension is used instead of cythonize to add the Cython extension to the setup.
It should work now. The reason: setuptools will try to import Cython after the setup_requires requirements are fulfilled:
...
try:
    # Attempt to use Cython for building extensions, if available
    from Cython.Distutils.build_ext import build_ext as _build_ext
    # Additionally, assert that the compiler module will load
    # also. Ref #1229.
    __import__('Cython.Compiler.Main')
except ImportError:
    _build_ext = _du_build_ext
...
It becomes more complicated if your Cython extension uses numpy, but that is also possible - see this SO post.
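One common way to handle that case is to defer the numpy import until build time by overriding build_ext; a condensed sketch under that assumption (module names reused from above, details simplified):
# setup.py fragment (sketch): numpy headers resolved at build time
from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext as _build_ext

class build_ext(_build_ext):
    def finalize_options(self):
        super().finalize_options()
        import numpy  # safe here: setup_requires has already been processed
        self.include_dirs.append(numpy.get_include())

cy_extension = Extension(name="mylib_proxy", sources=["mylib_proxy.pyx"])

setup(
    name="mylib",
    ext_modules=[cy_extension],
    cmdclass={"build_ext": build_ext},
    setup_requires=["cython", "numpy"],
)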
It doesn't make sense in general. It is, as you suspect, an attempt to use something that (possibly) has yet to be installed. If tested on a system that already has the dependency installed, you might not notice this defect. But run it on a system where your dependency is absent, and you will certainly notice.
There is another setup() keyword argument, setup_requires, that can appear to be parallel in form and use to install_requires, but this is an illusion. Whereas install_requires triggers a lovely ballet of automatic installation in environments that lack the dependencies it names, setup_requires is more documentation than automation. It won't auto-install, and certainly not magically jump back in time to auto-install modules that have already been called for in import statements.
There's more on this at the setuptools docs, but the quick answer is that you're right to be confused by a module that is trying to auto-install its own setup pre-requisites.
For a practical workaround, try installing cython separately, and then run this setup. While it won't fix the metaphysical illusions of this setup script, it will resolve the requirements and let you move on.
I am working on a fork of a Python project (Tryton) which uses setuptools for packaging. I am trying to extend the server part of the project, and would like to be able to use the existing modules with my fork.
Those modules are distributed with setuptools packaging, and are requiring the base project for installation.
I need a way to make it so that my fork is considered an acceptable requirement for those modules.
EDIT: Here is what I used in my setup.py:
from setuptools import setup
setup(
    ...
    provides=["trytond (2.8.2)"],
    ...
)
The modules I want to be able to install have those requirements :
from setuptools import setup
setup(
    ...
    install_requires=["trytond>=2.8"],
    ...
)
As it is, with my package installed, trying to install a module triggers the installation of the trytond package.
Don’t use provides; it comes from a packaging specification (a metadata PEP) that is not implemented by any tool. The requirements in the install_requires argument map to the name in your other setup.py. In other words, replace your provides with setup(name='trytond', version='2.8.2').
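Concretely, a sketch of the fork's setup.py under that suggestion (keeping the upstream distribution name so the modules' trytond>=2.8 requirement is satisfied by the fork):
from setuptools import setup

setup(
    name='trytond',     # keep the upstream distribution name
    version='2.8.2',    # a version that satisfies trytond>=2.8
    # ... the rest of the fork's packaging arguments
)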
If you are building RPMs, it is possible to use setup.cfg as follows:
[bdist_rpm]
provides = your-package = 0.8
obsoletes = your-package