SITUATION:
I have a python library which is version-controlled with git and bundled with distutils/setuptools. I want to automatically generate the version number from git tags, both for setup.py sdist and similar commands, and for the library itself.
For the first task I can use git describe or similar solutions (see How can I get the version defined in setup.py (setuptools) in my package?).
And when, for example, I am at tag '0.1' and run 'setup.py sdist', I get 'mylib-0.1.tar.gz'; or 'mylib-0.1-3-abcd.tar.gz' if I have altered the code after tagging. This is fine.
THE PROBLEM IS:
The problem comes when I want to have this version number available in the library itself, so it can send it in the User-Agent HTTP header as 'mylib/0.1-3-abcd'.
If I add a setup.py version command as in How can I get the version defined in setup.py (setuptools) in my package?, then version.py is generated AFTER the tag is made, since it uses the tag as its value. But in that case I need to make one more commit after the version tag to make the code consistent, which, in turn, requires a new tag for further bundling.
THE QUESTION IS:
How to break this circle of dependencies (generate-commit-tag-generate-commit-tag-...)?
You could also reverse the dependency: put the version in mylib/__init__.py, parse that file in setup.py to get the version parameter, and use git tag $(setup.py --version) on the command line to create your tag.
git tag -a v$(python setup.py --version) -m 'description of version'
Is there anything more complicated you want to do that I haven’t understood?
A classic issue when toying with keyword expansion ;)
The key is to realize that your tag is part of the release management process, not part of the development (and its version control) process.
In other words, you cannot include release management data in a development repository, because of the loop you illustrate in your question.
You need, when generating the package (which is the "release management" part), to write that information into a file that your library will look for and use (if that file exists) for its User-Agent HTTP header.
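A minimal sketch of that idea, assuming a file name (RELEASE-VERSION) and helper names that are mine, not the original answer's: the packaging step captures git describe into a file shipped with the library, and the library falls back to a default when the file is absent:

# release step (run when building the package, not at import time)
import subprocess

def write_release_version(path="mylib/RELEASE-VERSION"):
    """Write the output of `git describe` into a file bundled with the sdist."""
    version = subprocess.check_output(["git", "describe", "--tags"]).decode().strip()
    with open(path, "w") as f:
        f.write(version + "\n")

# mylib/version.py (used by the library at run time)
import os

def get_version(default="unknown"):
    """Return the version written at release time, or a default if absent."""
    path = os.path.join(os.path.dirname(__file__), "RELEASE-VERSION")
    try:
        with open(path) as f:
            return f.read().strip()
    except IOError:
        return default

The library can then build its header as 'mylib/%s' % get_version() without ever committing the generated file.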
Since this topic is still alive and sometimes comes up in search results, I would like to mention another solution, which first appeared in 2012 and is now more or less usable:
https://github.com/warner/python-versioneer
It works differently from all the solutions mentioned above: you add git tags manually, and the library (and setup.py) reads the tags and builds the version string dynamically.
The version string includes the latest tag, the distance from that tag, the current commit hash, "dirtiness", and some other info. It supports a few different version formats.
But it still has no branch name for so-called "custom builds"; and the commit distance can be confusing when two branches are based on the same commit, so it is better to tag & release from only one selected branch (master).
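For reference, the typical wiring looks roughly like this (a sketch based on versioneer's documented usage; check its README for the current API):

# setup.py
import versioneer
from setuptools import setup

setup(
    name="mylib",
    version=versioneer.get_version(),    # built dynamically from git tags
    cmdclass=versioneer.get_cmdclass(),  # teaches sdist/build to embed the version
)

# mylib/__init__.py
from ._version import get_versions
__version__ = get_versions()["version"]  # e.g. "0.1+3.gabcd123.dirty"
del get_versions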
Eric's idea was the simple way to go; in case it is useful, here is the code I used (Flask's team did it this way):
import ast
import re

from setuptools import setup

# Pull the version out of app_name/__init__.py without importing the package.
_version_re = re.compile(r'__version__\s+=\s+(.*)')

with open('app_name/__init__.py', 'rb') as f:
    version = str(ast.literal_eval(_version_re.search(
        f.read().decode('utf-8')).group(1)))

setup(
    name='app-name',
    version=version,
    .....
)
If you found versioneer excessively convoluted, you can try bump2version.
Just add a simple bumpversion configuration file in the root of your library. This file indicates where in your repository there are strings storing the version number. Then, to update the version in all the indicated places for a minor release, just type:
bumpversion minor
Use patch or major if you want to release a patch or a major version.
This is not all bumpversion can do. There are other flags and config options, such as automatically tagging the repository, for which you can check the official documentation.
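For illustration, a minimal .bumpversion.cfg could look like this (the file path and version number are placeholders; see the official documentation for the full option list):

[bumpversion]
current_version = 0.1.0
commit = True
tag = True

[bumpversion:file:mylib/__init__.py]

With commit = True and tag = True, bumpversion minor rewrites the version strings, commits the change, and tags the repository in one step.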
Following OGHaza's solution in a similar SO question, I keep a file _version.py that I parse in setup.py. With the version string from there, I git tag in setup.py. Then I set the setup version variable to a combination of the version string plus the git commit hash. Here is the relevant part of setup.py:
from setuptools import setup, find_packages
from codecs import open
from os import path
import os
import re
import subprocess

here = path.abspath(path.dirname(__file__))

# Parse the version string out of _version.py.
VERSIONFILE = os.path.join(here, "_version.py")
verstrline = open(VERSIONFILE, "rt").read()
VSRE = r"^__version__ = ['\"]([^'\"]*)['\"]"
mo = re.search(VSRE, verstrline, re.M)
if mo:
    verstr = mo.group(1)
else:
    raise RuntimeError("Unable to find version string in %s." % (VERSIONFILE,))

if os.path.exists(os.path.join(here, '.git')):
    git_hash = subprocess.check_output(
        ['git', 'rev-parse', '--verify', '--short', 'HEAD']).decode().strip()
    # Tag git with the version from _version.py, unless that tag already exists.
    gitverstr = 'v' + verstr
    tags = subprocess.check_output(['git', 'tag']).decode().split()
    if gitverstr not in tags:
        subprocess.check_call(
            ['git', 'tag', '-a', gitverstr, git_hash,
             '-m', 'tagged by setup.py to %s' % verstr])
    # Use the git hash in the setup version.
    verstr += ', git hash: %s' % git_hash

setup(
    name='a_package',
    version=verstr,
    ....
As was mentioned in another answer, this is related to the release process, not the development process; as such it is not a git issue in itself, but more a question of how your release process works.
A very simple variant is to use this:
python setup.py egg_info -b ".`date '+%Y%m%d'`git`git rev-parse --short HEAD`" build sdist
The portion between the quotes is up for customization; however, I tried to follow the typical Fedora/RedHat package naming.
Of note, even though egg_info suggests a relation to .egg, it is actually used throughout the toolchain, for example for bdist_wheel as well, and it has to be specified at the beginning.
In general, your pre-release and post-release versions should live outside setup.py or any imported version.py. The topic of versioning with egg_info is covered in detail here.
Example:
v1.3.4dev.20200813gitabcdef0
The v1.3.4 part lives in setup.py, or in any other variation you would like
The dev and 20200813gitabcdef0 parts are generated during the build process (example above)
None of the files generated during the build are checked into git (they are usually filtered out by default in .gitignore); sometimes there is a separate "deployment" repository, or similar, completely separate from the source one
A more complex way would be to encode your release work process in a Makefile, which is outside the scope of this question; however, good sources of inspiration can be found here and here. You will find good correspondences between Makefile targets and setup.py commands.
The library I'm working on generates python files by invoking an executable (which turns ANTLRv4 .g4 files into python files), and I have the following install step:
import os
import subprocess

from setuptools import setup
from setuptools.command.install import install


class AntlrInstallCommand(install):
    def run(self):
        output_dir = compile_grammar()
        print(f"Compiled ANTLRv4 grammar in {output_dir}")
        install.run(self)


def compile_grammar():
    parser_dir = os.path.join(os.path.dirname(__file__), "my_project/grammar")
    subprocess.check_output(
        ["antlr", "MyGrammar.g4", "-Dlanguage=Python3", "-visitor", "-o", "gen"],
        cwd=parser_dir,
    )
    # The result is created in the subfolder `gen`
    return os.path.join(parser_dir, "gen")


setup(
    ...
    install_requires=[
        "setuptools",
        "antlr4-python3-runtime==4.9.2",
        ...
    ],
    cmdclass={"install": AntlrInstallCommand},
    license="MIT",
    python_requires=">=3.6",
)
This works great if I'm pip install'ing the project on a machine that has antlr installed (since I'm calling it via subprocess).
Ideally, attempting to do this on a machine that doesn't have antlr installed would first install the executable (with the correct version) in either a system directory like /usr/bin or whatever relevant python bin directory we're working in, but right now it errors out with the following message (which is expected):
running install
error: [Errno 2] No such file or directory: 'antlr'
----------------------------------------
ERROR: Failed building wheel for my_project
I see a couple of solutions, each with slight caveats:
sympy uses ANTLR, but it requires the user to install antlr first. See here
setuptools-antlr allows me to download an antlr jar as a giant blob in a python package, and then I can invoke it from there. However, its version doesn't match mine (which is 4.9.2).
java2python precompiles the files for me and writes them into the github repo. However, these files are extremely large and very hard to read, as they're autogenerated. If I slightly modified the grammar but not the parser, it would also lead to unexpected bugs. As a result, I would like to keep this complexity out of the repository, as it's tangential to development.
If I can get the right version of the antlr binary and be able to invoke it at install time, that would be optimal. Otherwise I'm okay with picking one of these alternatives. Any suggestions for either case would be appreciated.
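One possible direction, sketched below under stated assumptions: the download URL follows ANTLR's published pattern for "complete" jars, the helper name antlr_command is mine, and a Java runtime must be present on the build machine. The idea is to fall back to fetching the matching jar when the antlr executable is missing:

import os
import shutil
import urllib.request

ANTLR_VERSION = "4.9.2"
# Assumed URL pattern for ANTLR's "complete" jar releases.
ANTLR_JAR_URL = "https://www.antlr.org/download/antlr-%s-complete.jar" % ANTLR_VERSION

def antlr_command(download_dir="."):
    """Prefer a system-wide `antlr`; otherwise fetch the jar and run it via java."""
    if shutil.which("antlr"):
        return ["antlr"]
    jar = os.path.join(download_dir, "antlr-%s-complete.jar" % ANTLR_VERSION)
    if not os.path.exists(jar):
        urllib.request.urlretrieve(ANTLR_JAR_URL, jar)
    return ["java", "-jar", jar]

# compile_grammar() would then build its call on top of this, e.g.:
#   subprocess.check_output(antlr_command() + ["MyGrammar.g4", "-Dlanguage=Python3",
#                                              "-visitor", "-o", "gen"], cwd=parser_dir)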
I'm banging my head against the wall on this - mostly because I'm really new to Yocto and just getting into the swing of things. I've been building the image from github.com/EttusResearch/oe-manifests, and successfully.
Now I'd like to add tensorflow as a package. To avoid its bazel and java dependencies, I decided to create a recipe of my own, using the whl for armv7.
I've followed this article: Yocto recipe python whl package
And used this whl repo: https://github.com/lhelontra/tensorflow-on-arm/releases
I created a layer and then added a recipe named tensorflow_2.0.0.bb, which contains:
SRC_URI = "https://github.com/lhelontra/tensorflow-on-arm/releases/download/v2.0.0/tensorflow-2.0.0-cp37-none-linux_armv7l.whl;downloadfilename=v2.0.0.zip;subdir=${BP}"
SRC_URI[md5sum] = "0af281677f40e4aa1da7bb1b2ba72e18"
SRC_URI[sha256sum] = "3cb1be51fe3081924ddbe69e92a51780458accafd12e39482a872b27b3afff8c"
LICENSE = "BSD-3-Clause"
inherit nativesdk python3-dir
LIC_FILES_CHKSUM = "file:///${S}/tensorflow-2.0.0.dist-info/LICENSE;md5=64a34301f8e355f57ec992c2af3e5157"
PV ="2.0.0"
PN = "tensorflow"
do_unpack[depends] += "unzip-native:do_populate_sysroot"
PROVIDES += "tensorflow"
DEPENDS += "python3"
FILES_${PN} += "\
${libdir}/${PYTHON_DIR}/site-packages/* \
"
do_install() {
install -d ${D}${libdir}/${PYTHON_DIR}/site-packages/tensorflow-2.0.0.dist-info
install -d ${D}${libdir}/${PYTHON_DIR}/site-packages/tensorflow
install -m 644 ${S}/tensorflow/* ${D}${libdir}/${PYTHON_DIR}/site-packages/tensorflow/
install -m 644 ${S}/tensorflow-2.0.0.dist-info/* ${D}${libdir}/${PYTHON_DIR}/site-packages/tensorflow-2.0.0.dist-info/
}
The problem is, during building this recipe I get the following error:
ERROR: Nothing PROVIDES 'virtual/x86_64-oesdk-linux-compilerlibs' (but /home/sudilav/oe-core/../meta-tensorflow/recipes-devtools/tensorflow/tensorflow_2.0.0.bb DEPENDS on or otherwise requires it). Close matches:
virtual/nativesdk-x86_64-oesdk-linux-compilerlibs
virtual/x86_64-oesdk-linux-go-crosssdk
virtual/x86_64-oesdk-linux-gcc-crosssdk
ERROR: Required build target 'tensorflow' has no buildable providers.
Missing or unbuildable dependency chain was: ['tensorflow', 'virtual/x86_64-oesdk-linux-compilerlibs']
Given that I'm just downloading and unzipping a whl, I can't see why it's flagging up these dependencies. I think the whl does compile, but it's a lot of code to check through. Has anyone seen this before? There's not much from google on this error :/
Bitbake has a tool to create files describing the dependency tree.
bitbake -g
or, for a specific recipe:
bitbake -g {recipe name}
There are dedicated tools to display these trees, like kgraphviewer, and also online tools.
I personally just open these files with a text editor, they are pretty easy to read.
Just search the file for "virtual/x86_64-oesdk-linux-compilerlibs" and see who needs it.
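For example (the exact set of files that bitbake -g generates varies between Yocto releases; task-depends.dot is a common one):

bitbake -g tensorflow
grep 'virtual/x86_64-oesdk-linux-compilerlibs' task-depends.dot

The matching lines show which recipe and task pull in the dependency; in the recipe above, the inherit nativesdk line is one plausible source of the oesdk-linux dependency, since nativesdk builds the package for the SDK host rather than the target.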
Hope this helps.
The following is my setup.py:
from setuptools import setup, find_packages

packages = find_packages("src")

setup(name='myapp',
      version='0.2.0',
      url='http://loom.st',
      author='Loom',
      author_email='admin@loom.st',
      package_dir={'': 'src'},
      packages=packages,
      )
I built an rpm with the command python setup.py bdist_rpm and got these files:
myapp-0.2.0-1.noarch.rpm
myapp-0.2.0-1.src.rpm
myapp-0.2.0.tar.gz
Why do I have 1 in the rpm file names, and how can I control what appears in that place?
The 1 is called the release number. As you can see in the documentation, when you call setup.py you can pass it the --release option to define the release number, like this:
python setup.py bdist_rpm --release=0
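If you don't want to pass the flag on every build, distutils also reads bdist_rpm options from setup.cfg; a minimal example (the value 2 is only illustrative):

[bdist_rpm]
release = 2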
This number is called the release number. For the same version (0.2.0 in your case) you can have various releases, e.g. because the ABI of some dependency changed and you need to rebuild against the updated dependency, or because you added a security patch. Part of the release number is usually the dist tag, e.g. myapp-0.2.0-1.el6.noarch.rpm and myapp-0.2.0-1.el5.noarch.rpm. The ".el5" and ".el6" are in fact part of the release number, and they help describe what release it actually is: %{python_sitelib} on el5 differs from the path on el6, so the binary RPMs are different.
Release numbers usually start with 1.
You can find more information at https://fedoraproject.org/wiki/Packaging:NamingGuidelines#Release_Tag
BTW, you will get a better result if you use pyp2rpm for generating rpm packages.
I need to package a virtualenv as an rpm. I found a sample spec file for plone here
My project uses python 2.7, which I've built from source. Therefore I changed some of the spec file to:
/usr/local/bin/virtualenv-3.4 --no-site-packages --distribute %{_builddir}/usr/local/virtualenvs/%{shortname}
I'm getting the following error when running rpmbuild -bb requirements.spec:
+ /usr/sbin/prelink -u /var/lib/jenkins/rpmbuild/BUILDROOT/requirements-1.0-1.x86_64/usr/local/virtualenvs/requirements/bin/python
/usr/sbin/prelink: /var/lib/jenkins/rpmbuild/BUILDROOT/requirements-1.0-1.x86_64/usr/local/virtualenvs/requirements/bin/python does not have .gnu.prelink_undo section
I'm assuming I need to rebuild python and enable prelinking during ./configure. How can I do that?
I had a similar issue recently with a SPEC file that was also based on this example from plone.
In my case I'm using python27 RPMs from IUS repository and want to avoid building it from source.
My workaround was to disable prelink completely in my SPEC file:
add this: %define __prelink_undo_cmd %{nil}
comment out this:
# # This avoids prelink & RPM helpfully breaking the package signatures:
# /usr/sbin/prelink -u $RPM_BUILD_ROOT/usr/local/virtualenvs/%{shortname}/bin/python
Scripts generated by zc.buildout using zc.recipe.egg, in our <package>/bin/ directory, look like this:
#! <python shebang> -S
import sys
sys.path[0:0] = [
    ... # some paths derived from the eggs
    ... # some other paths included with zc.recipe.egg `extra-path`
]
# some user initialization code from zc.recipe.egg `initialization`
# import function, call function
What I have not been able to find is a way to programmatically prepend a path to the sys.path construction introduced in every script. Is this possible?
Why: I have one version of my python project installed globally and another version installed locally (off the buildout tree). I want to be able to switch between these two versions.
Note: Clearly, one can use the zc.recipe.egg/initialization property to add something like:
initialization = sys.path[0:0] = ['/add/path/to/my/eggs']
But, is there any other way? Extra points for an example!
Finally, I got a working environment by creating my own buildout recipe, which you can find here: https://github.com/idiap/local.bob.recipe. The file that contains the recipe is this one: https://github.com/idiap/local.bob.recipe/blob/master/config.py. There are lots of checks specific to our software in the class constructor, and some extra improvements as well, but don't be bothered by that. The "real meat (TM)" is in the install() method of that class. It goes more or less like this:
# Inside the recipe's install() method: write an .egg-link file that points
# buildout at the externally built package.
egg_link = os.path.join(self.buildout['buildout']['eggs-directory'],
                        'external-package.egg-link')
f = open(egg_link, 'wt')
f.write(self.options['install-directory'] + '\n')
f.close()
self.options.created(egg_link)
return self.options.created()
This does the trick. My external (CMake-based) package now only has to create the right .egg-info file alongside the python package(s) it builds. Then, using the above recipe, I can tie in a specific package installation like this:
[buildout]
parts = external_package python
develop = .
eggs = my_project
       external_package
       recipe.as.above

[external_package]
recipe = recipe.as.above:config
install-directory = ../path/to/my/local/package/build

[python]
recipe = zc.recipe.egg
interpreter = python
eggs = ${buildout:eggs}
If you wish to switch installations, just change the install-directory property above. If you wish to use the default installation available system-wide, just remove the recipe.as.above constructions from your buildout.cfg file altogether. Buildout will then find the global installation without requiring any extra configuration. Uninstallation works properly as well, so switching between builds just works.
Here is a fully working buildout .cfg file that we use here: https://github.com/idiap/bob.project.example/blob/master/localbob.cfg
The question is: Is there an easier way to achieve the same w/o having this external recipe?
Well, what you are missing is probably the most useful buildout extension, mr.developer.
Typically the package, let's say foo.bar, will be in some repo, let's say git.
Your buildout will look like
[buildout]
extensions = mr.developer

[sources]
foo.bar = git git@github.com:foo/foo.bar.git
If you don't have your package in a repo, you can use fs instead of git (see the example below); have a look at the documentation for details.
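A minimal fs source might look like this, assuming the package lives in a directory named foo.bar next to the buildout (check mr.developer's docs for the exact syntax):

[sources]
foo.bar = fs foo.bar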
Activating the "local" version is done by
./bin/develop a foo.bar
Deactivating by
./bin/develop d foo.bar
There are quite a few other things you can do with mr.developer, do check it out!