Increase version number if the Travis CI build on GitHub was successful - Python

I wrote a simple script in Python.
Now I would like Travis to check my code. After Travis succeeds, the version number should be increased.
My script has no version number yet. I can store it wherever it makes sense for the auto-increment workflow.
How do I do this for Python code?
Update
It works now:
run tests
bumpversion
push tag to master
Unfortunately Travis does not support an "after all" stage. This means that if I want to run the tests for several Python versions, I have no way to run bumpversion only after the tests of all Python versions have passed.
In my case I will test on Python 2.7 only until Travis resolves this issue: https://github.com/travis-ci/travis-ci/issues/929
Here is my simple script: https://github.com/guettli/compare-with-remote
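Until that is resolved, one possible workaround (a sketch, assuming bumpversion is already configured for the project) is to guard the bump step so that only the Python 2.7 job of the build matrix performs it, using the TRAVIS_PYTHON_VERSION variable that Travis sets:
import os
import subprocess

# Only the Python 2.7 job bumps the version, so the other jobs in the
# build matrix cannot trigger competing bumps.
if os.environ.get('TRAVIS_PYTHON_VERSION') == '2.7':
    subprocess.check_call(['bumpversion', 'patch'])
    subprocess.check_call(['git', 'push', '--tags'])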
Solved :-)
It works now:
Developer pushes to github
Travis-CI runs
If all tests are successful, bumpversion increases the version
The new version in setup.py gets pushed to the GitHub repo
A new release of the Python package gets uploaded to PyPI with the tool twine.
I explain the way I do CI with github, travis and pypi here: https://github.com/guettli/github-travis-bumpversion-pypi
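For illustration only, here is a minimal sketch of such a release step driven from Python (assuming bumpversion is configured in the repo and twine credentials are provided via environment variables; this is not the exact script from the linked repository):
import glob
import subprocess

def release():
    # Bump the patch version recorded in setup.py and create a git tag.
    subprocess.check_call(['bumpversion', 'patch'])
    # Push the bumped setup.py and the new tag back to the GitHub repo.
    subprocess.check_call(['git', 'push'])
    subprocess.check_call(['git', 'push', '--tags'])
    # Build the distribution and upload it to PyPI with twine.
    subprocess.check_call(['python', 'setup.py', 'sdist', 'bdist_wheel'])
    subprocess.check_call(['twine', 'upload'] + glob.glob('dist/*'))

if __name__ == '__main__':
    release()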

If you accept having an extra commit for your versioning, you could add this script as continuous_integration/increment_version.py:
import os
import pkg_resources

if __name__ == "__main__":
    version = pkg_resources.get_distribution("compare_with_remote").version
    split_version = version.split('.')
    try:
        split_version[-1] = str(int(split_version[-1]) + 1)
    except ValueError:
        # do something about the letters in the last field of version
        pass
    new_version = '.'.join(split_version)
    os.system("sed -i \"s/version='[0-9.]\+'/version='{}'/\" setup.py"
              .format(new_version))
    os.system("git add -u")
    os.system("git commit -m '[ci skip] Increase version to {}'"
              .format(new_version))
    os.system("git push")
And change your .travis.yml to:
after_success:
  - python continuous_integration/increment_version.py
I am not sure how to make the git push part work, as it needs some testing with the repository permissions, but I assume you can set something up to allow Travis to push to your repo; you could look into that post, for instance.
Also note that I used Python to perform the git operations, but they can be added as extra lines in the after_success field:
after_success:
  - python continuous_integration/increment_version.py
  - git add -u
  - git commit -m "[ci skip] version changed"
  - git push
I just find it convenient to put the version number in the commit message.
Also, it is very important to add [ci skip] to the commit message to avoid an infinite loop of version-bump builds. It might be even safer to trigger the version change only on a specific commit message tag, as sketched below.
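For example, a small guard along those lines (the '[bump version]' marker is just a hypothetical convention, not something Travis defines) could read the last commit message and only run the increment script when asked to:
import subprocess

# Message of the commit that triggered this build.
message = subprocess.check_output(
    ['git', 'log', '-1', '--pretty=%B']).decode().strip()

# Never react to our own bump commits, and only bump on explicit request.
if '[ci skip]' not in message and '[bump version]' in message:
    subprocess.check_call(
        ['python', 'continuous_integration/increment_version.py'])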

Not Python-specific, but this tutorial explains auto-incrementing version numbers by adding .travis.yml entries that update git tags with each successful build. It seems like a good balance between manual and auto-versioning.
While the tutorial uses npm's package.json for the initial version check, you could implement a simple equivalent (in Python or otherwise).
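For instance, a rough sketch of such a check, assuming the repository keeps its version in a plain VERSION file (that file name is my assumption, not something the tutorial prescribes):
import subprocess

# Version declared in the repository.
with open('VERSION') as f:
    declared = f.read().strip()

# Latest reachable tag, e.g. "v1.2.3"; drop the leading "v" for comparison.
latest_tag = subprocess.check_output(
    ['git', 'describe', '--tags', '--abbrev=0']).decode().strip().lstrip('v')

if declared == latest_tag:
    raise SystemExit('VERSION was not bumped; refusing to tag this build.')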

Assuming that:
only commits with successful Travis CI builds get merged into the master branch (e.g. using pull requests)
the package is always installed with pip from a git repo using, for instance,
pip install git+https://github.com/user/package.git
then for an auto-incrementing version number, one could simply define the version as the number of commits in the master branch. This can be done with the following lines in setup.py:
from subprocess import check_output
from setuptools import setup

minor_version = check_output(['git', 'rev-list',
                              '--count', 'master']).decode('latin-1').strip()
setup(version='0.{}'.format(minor_version), ...)

Related

Poetry Install crashes because excluded dependency cannot be found

One of our repositories relies on another first-party one. Because we're in the middle of a migration from (a privately hosted) gitlab to azure, some of our dependencies aren't available in gitlab, which is where the problem comes up.
Our pyproject.toml file has this poetry group:
# pyproject.toml
[tool.poetry.group.cli.dependencies]
cli-parser = { path = "../cli-parser" }
In the Gitlab-CI, this cannot resolve. Therefore, we want to run the pipelines without this dependency. There is no code being run that actually relies on this library, nor files being imported. Therefore, we factored it out into a separate poetry group. In the gitlab-ci, that looks like this:
# .gitlab-ci.yml
install-poetry-requirements:
  stage: install
  script:
    - /opt/poetry/bin/poetry --version
    - /opt/poetry/bin/poetry install --without cli --sync
As shown, poetry is instructed to omit the cli dependency group. However, it still crashes on it:
# Gitlab CI logs
$ /opt/poetry/bin/poetry --version
Poetry (version 1.2.2)
$ /opt/poetry/bin/poetry install --without cli --sync
Directory ../cli-parser does not exist
If I comment out the cli-parser line in pyproject.toml, it will install successfully (and the pipeline passes), but we cannot do that because we need it in production.
I can't find another way to tell poetry to omit this library. Is there something I missed, or is there a workaround?
Good, Permanent Solution
As finswimmer mentioned in a comment, Poetry 1.4 should handle this perfectly well. If you're reading this after Poetry 1.4 has been released, the setup in the question should work as-is.
Hacky, Bad, Temporary Solution
Since the original problem was in gitlab CI pipelines, I used a workaround there. Right in front of the install command, I used the following command:
sed -i '/cli-parser/d' pyproject.toml
This modifies the project's pyproject.toml in place to remove the line that has my module. This prevents poetry from ever parsing the dependency.
See the sed man page for more information on how this works.
Keep in mind that if your pipeline has any permanent effects, for example building your package into an installable wheel or producing build artifacts used elsewhere, this WILL break your setup.
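If sed is not available in the CI image, a Python sketch of the same workaround (with the same caveats) could be:
import pathlib

# Drop every line of pyproject.toml that mentions cli-parser, in place,
# so poetry never tries to resolve the path dependency.
path = pathlib.Path('pyproject.toml')
lines = path.read_text().splitlines(keepends=True)
path.write_text(''.join(line for line in lines if 'cli-parser' not in line))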

Releasing for Python (with pbr): version not generated

This is something I am new to, but I made a small Python library on GitHub and am looking to release it on PyPI. The pbr library is supposed to make things easier by taking versions from git tags, etc.
However, pbr is not deriving the version number from the git tag.
Here is what I tried:
Push code to GitHub and create a release with the semantic tag name v1.0.0
Make sure the tag is also in my local repository: git fetch --tags
Generate and upload a release: python setup.py sdist upload -r pypi
The release is made and pbr seems to work fine, except that the version number is 0.0.1.dev2. The last number seems to increase with the number of commits.
I have tried to explicitly check out the revision at the tag: git checkout tags/v1.0.1, but that made no difference.
Why is pbr not following my Git tags?
edit: this is the code on Github
Note: pbr expects Git tags to be signed for use in calculating versions.
See https://docs.openstack.org/pbr/latest/user/features.html#version
You have to sign your tags with GPG:
git tag -s $version
If your version tag contains a 'v', make sure you use pbr >= 4.0.0; for me this was the problem when deploying to PyPI from Travis. Updating pbr before deploying fixed it.
See also:
https://bugs.launchpad.net/pbr/+bug/1744478
https://github.com/openstack-dev/pbr/commit/4c775e7890e90fc2ea77c66020659e52d6a61414
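A quick way to check locally which version pbr will derive before uploading (assuming setup.py is the usual pbr-enabled one) is to ask setup.py for it:
import subprocess

# With pbr, "setup.py --version" prints the version derived from the git
# tags, so a missing or unsigned tag shows up here before you upload.
derived = subprocess.check_output(
    ['python', 'setup.py', '--version']).decode().strip()
print('pbr-derived version: ' + derived)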

how to pip uninstall with virtualenv on heroku cedar stack?

I tried to uninstall a module on heroku with:
heroku run bin/python bin/pip uninstall whatever
Pip shows the module in the /app tree, then claims to have uninstalled the module, but running the same command again shows it installed in the same location in the /app tree.
Is there a way to get pip uninstall to succeed?
Heroku run instantiates a new dyno and runs the specified command in that dyno only. Dynos are ephemeral, which is why the results of the pip uninstall don't stick.
Updated 2013-09-30: the current way to clear the virtualenv seems to be to specify a different Python runtime version in runtime.txt, as stated on GitHub and in Heroku's Dev Center reference.
Be aware that Heroku currently "only endorses and supports the use of Python 2.7.4 and 3.3.2", so unless your application supports both Python 2.7.4 and 3.3.2, you may want to test it with the runtime you'll want to switch to (currently available at http://envy-versions.s3.amazonaws.com/$PYTHON_VERSION.tar.bz2, though it shouldn't be an issue to switch e.g. between 2.7.4 and 2.7.3 in most cases).
Thanks @Jesse for your up-to-date answer and to the commenters who made me aware of the issue.
The answer below was up to date in ~November 2012 (I haven't updated the linked buildpack since; my pull request was closed and the CLEAN_VIRTUALENV feature was dropped at some point by the official buildpack):
As David explained, you cannot pip uninstall one package but you can purge and reinstall the whole virtualenv. Use the user-env-compile lab feature with the CLEAN_VIRTUALENV option to purge the virtualenv:
heroku labs:enable user-env-compile
heroku config:add CLEAN_VIRTUALENV=true
Currently this won't work because there is a bug. You'll need to use my fork of the buildpack until this gets fixed upstream (the pull request was closed):
heroku config:add BUILDPACK_URL=https://github.com/blaze33/heroku-buildpack-python.git
Now push your new code and you'll notice that the whole virtualenv gets reinstalled.
Andrey's answer no longer works as of March 23, 2012. The new-style virtualenv commit moved the virtualenv from /app to /app/.heroku/venv, but the purge branch wasn't updated to catch up, so you end up with a virtualenv that is not in PYTHONHOME.
To avoid reinstalling everything after each push, disable the option:
heroku labs:disable user-env-compile
heroku config:remove CLEAN_VIRTUALENV BUILDPACK_URL
There is now a simpler way to clear the pip cache. Just change the runtime environment, for example from 'python-2.7.3' to 'python-2.7.2', or vice versa.
To do this, add a file called runtime.txt to the root of your repository that contains just the runtime string (as shown above).
For this to work you need to have turned on the Heroku labs user-env-compile feature. See https://devcenter.heroku.com/articles/labs-user-env-compile
By default virtualenv is cached between deploys.
To avoid caching of packages you can run:
heroku config:add BUILDPACK_URL=git@github.com:heroku/heroku-buildpack-python.git#purge
That way everything will be built from scratch after you push some changes. To enable the caching just remove the BUILDPACK_URL config variable.
Now to uninstall specific package(s):
Remove the corresponding record(s) from the requirements.txt;
Commit and push the changes.
Thanks to Lincoln from Heroku Support Team for the clarifications.
I've created some fabfile recipes for Maxime's and Jesse's answers that allow re-installing the requirements with one fab command: https://gist.github.com/littlepea/5096814 (look at the docstrings for explanation and examples).
For Maxime's answer I've created a task 'heroku_clean' (or 'hc'); it looks something like this:
fab heroku_clean
Or using an alias and specifying a heroku app:
fab hc:app=myapp
For Jesse's answer I've created a task 'heroku_runtime' (or 'hr'); it sets the Heroku Python runtime and commits runtime.txt (also creating it if it didn't exist):
fab heroku_runtime:2.7.2
If a runtime version is not passed, it will just toggle between 2.7.2 and 2.7.3, so the easiest way to change and commit the runtime is:
fab hr
Then you can just deploy (push to the heroku origin) your app and the virtualenv will be rebuilt. I've also added a 'heroku_deploy' task ('hd') that I use for heroku push and scale, which can also be used together with the 'heroku_runtime' task. This is my preferred method of deploying and rebuilding the virtualenv - everything happens in one command and I can choose when to rebuild it; I don't like to do it every time as Maxime's answer suggests because it can take a long time:
fab hd:runtime=yes
This is an equivalent of:
fab heroku_runtime
fab heroku_deploy
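For illustration, here is a rough sketch of what such a heroku_runtime task might look like with Fabric 1.x (this is my guess at the approach, not the code from the gist):
from fabric.api import local, task

@task
def heroku_runtime(version='2.7.2'):
    # Write the requested runtime and commit runtime.txt so the next
    # push makes Heroku rebuild the virtualenv.
    with open('runtime.txt', 'w') as f:
        f.write('python-{}\n'.format(version))
    local('git add runtime.txt')
    local('git commit -m "Set Heroku runtime to python-{}"'.format(version))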

Heroku Cedar Python: requirement in github - clone fails with error 128

I wanted to use the pyfire GitHub library https://github.com/mariano/pyfire
This is what pip freeze produced for me:
-e git+ssh://git@github.com/mariano/pyfire.git#db856bb666c4b381c08f2f4bf7c9ac7aaa233221#egg=pyfire-dev
But when doing so, the clone that is done to install the dependencies fails with error code 128, and I can't run
heroku run console
to check out the full error log...
Any experience with this, or ideas?
Thanks a lot in advance.
pip freeze appears to produce the wrong result; you should be able to modify your requirements.txt to:
git+https://github.com/mariano/pyfire.git#db856bb666c4b381c08f2f4bf7c9ac7aaa233221#egg=pyfire-dev
I realize this was already solved for you, but this didn't work for me, @amrox, or @tomtaylor.
If you remove the commit part it works for me, i.e. changing the line in requirements.txt to:
git+https://github.com/mariano/pyfire.git
When I install the git repo including the specific commit locally, git seems to realize the end part is a specific commit, but when I try this on Heroku and track the progress, it is pretty clear that it is treating the commit part as a tag. Since there is no tag with that name, it fails. They may be using a different version of git.

git-python get commit feed from a repository

I am working on some code in which I would like to retrieve the commits from a repository on GitHub. I am not entirely sure how to do such a thing; I got git-python, but most of the APIs are for opening a local git repository on the same file system.
Can someone advise?
Regards,
For me the following worked best:
Imports:
import os
import datetime
import git
Get current repository, assuming that you're there:
repo = git.Repo(os.getcwd())
Get active branch:
master = repo.head.reference
Current branch:
master.name
Latest commit id:
master.commit.hexsha
Latest commit message:
master.commit.message
Latest commit date:
datetime.datetime.fromtimestamp(master.commit.committed_date)
Latest commit author email:
master.commit.author.email
Latest commit author name:
master.commit.author.name
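And to get an actual feed of recent commits rather than just the latest one, iter_commits on the same repo object works (a short sketch; the count of ten is arbitrary):
import datetime
import os
import git

repo = git.Repo(os.getcwd())
# Print the last ten commits on the active branch, newest first.
for commit in repo.iter_commits(repo.head.reference, max_count=10):
    print('{} {} {}'.format(
        commit.hexsha[:8],
        datetime.datetime.fromtimestamp(commit.committed_date),
        commit.summary))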
It seems the easiest thing here is to use the command line (I'm assuming Linux or any other Unix here, but it should be the same on Windows) to clone an existing repository first:
git clone git://github.com/forsberg/misctools.git
This will create the misctools directory.
Now, from python, you can open this repository and update it using pull:
#!/usr/bin/env python
from git import *
repo = Repo("misctools")
o = repo.remotes.origin
o.pull()
master = repo.head.reference
print master.log()
It's all documented at http://packages.python.org/GitPython/0.3.2/tutorial.html
I really advise using only the command-line git; git-python is used for macros or complicated things, not just for pulling, pushing or cloning :)
If that's what you're after, I have a bash script to send myself emails about the latest git commits. It runs as a cronjob.
https://github.com/martinxyz/config/blob/master/scripts/email-git-commit-summary.sh
