I tried to uninstall a module on heroku with:
heroku run bin/python bin/pip uninstall whatever
Pip shows the module in the /app tree, then claims to have uninstalled it, but running the same command again shows it installed in the same location in the /app tree.
Is there a way to get pip uninstall to succeed?
heroku run instantiates a new dyno and runs the specified command in that dyno only. Dynos are ephemeral, which is why the results of the pip uninstall don't stick.
Updated 2013-09-30: the current way to clear the virtualenv seems to be to specify a different Python runtime version in runtime.txt, as stated on GitHub and in the Heroku Dev Center reference.
Be aware that Heroku currently "only endorses and supports the use of Python 2.7.4 and 3.3.2", so unless your application supports both versions, you may want to test it with the runtime you'll be switching to (currently available at http://envy-versions.s3.amazonaws.com/$PYTHON_VERSION.tar.bz2), though switching e.g. between 2.7.4 and 2.7.3 shouldn't be an issue in most cases.
Thanks @Jesse for your up-to-date answer, and to the commenters who made me aware of the issue.
This was up-to-date as of ~November 2012 (I haven't updated the linked buildpack since; my pull request was closed and the CLEAN_VIRTUALENV feature was dropped at some point by the official buildpack):
As David explained, you cannot pip uninstall a single package, but you can purge and reinstall the whole virtualenv. Use the user-env-compile lab feature with the CLEAN_VIRTUALENV option to purge the virtualenv:
heroku labs:enable user-env-compile
heroku config:add CLEAN_VIRTUALENV=true
Currently this won't work because of a bug. You'll need to use my fork of the buildpack until this gets fixed upstream (the pull request was closed):
heroku config:add BUILDPACK_URL=https://github.com/blaze33/heroku-buildpack-python.git
Now push your new code and you'll notice that the whole virtualenv gets reinstalled.
Andrey's answer no longer works as of March 23, 2012. The new-style virtualenv commit moved the virtualenv from /app to /app/.heroku/venv, but the purge branch wasn't updated to catch up, so you end up with a virtualenv that isn't on PYTHONHOME.
To avoid reinstalling everything after each push, disable the option:
heroku labs:disable user-env-compile
heroku config:remove CLEAN_VIRTUALENV BUILDPACK_URL
There is now a simpler way to clear the pip cache. Just change the runtime environment, for example from 'python-2.7.3' to 'python-2.7.2', or vice versa.
To do this, add a file called runtime.txt to the root of your repository that contains just the runtime string (as shown above).
For this to work you need to have turned on the Heroku labs user-env-compile feature. See https://devcenter.heroku.com/articles/labs-user-env-compile
By default virtualenv is cached between deploys.
To avoid caching of packages you can run:
heroku config:add BUILDPACK_URL=git@github.com:heroku/heroku-buildpack-python.git#purge
That way everything will be built from scratch after you push changes. To re-enable caching, just remove the BUILDPACK_URL config variable.
Now to uninstall specific package(s):
Remove the corresponding record(s) from the requirements.txt;
Commit and push the changes.
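The two steps above amount to rewriting requirements.txt without the package's line. A rough sketch of that (the helper name and the specifier parsing are my own illustration, not part of pip or Heroku tooling):

```python
import re

def remove_requirement(text, package):
    """Return requirements.txt content without the given package's line.

    Each requirement line starts with the package name, optionally followed
    by a version specifier (==, >=, ...), extras, or an environment marker.
    """
    kept = []
    for line in text.splitlines():
        name = re.split(r'[=<>!~\[; ]', line.strip(), maxsplit=1)[0]
        if name.lower() != package.lower():
            kept.append(line)
    return '\n'.join(kept) + ('\n' if kept else '')
```

After writing the filtered content back to requirements.txt, commit and push as usual.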
Thanks to Lincoln from Heroku Support Team for the clarifications.
I've created some fabfile recipes for Maxime's and Jesse's answers that allow re-installing the requirements with one fab command: https://gist.github.com/littlepea/5096814 (see the docstrings for explanations and examples).
For Maxime's answer I've created a task 'heroku_clean' (or 'hc'); it looks something like this:
fab heroku_clean
Or using an alias and specifying a heroku app:
fab hc:app=myapp
For Jesse's answer I've created a task 'heroku_runtime' (or 'hr') that sets the Heroku Python runtime and commits runtime.txt (creating it if it doesn't exist):
fab heroku_runtime:2.7.2
If a runtime version is not passed, it just toggles between 2.7.2 and 2.7.3, so the easiest way to change and commit the runtime is:
fab hr
Then you can just deploy (push to the heroku origin) your app and the virtualenv will be rebuilt. I've also added a 'heroku_deploy' task ('hd') that I use to push to Heroku and scale, and that can be used together with the 'heroku_runtime' task. This is my preferred method of deploying and rebuilding the virtualenv: everything happens in one command, and I can choose when to rebuild. I don't like doing it on every push, as Maxime's answer suggests, because it can take a long time:
fab hd:runtime=yes
This is an equivalent of:
fab heroku_runtime
fab heroku_deploy
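The no-argument toggle that heroku_runtime performs can be sketched like this (my own sketch of the idea, not the actual code from the gist):

```python
def toggle_runtime(runtime_txt_content):
    """Given the current contents of runtime.txt, return the runtime
    string the task would switch to when no version is passed."""
    current = runtime_txt_content.strip()
    return 'python-2.7.3' if current == 'python-2.7.2' else 'python-2.7.2'
```

Writing the new string to runtime.txt and committing it is what triggers the virtualenv rebuild on the next push.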
One of our repositories relies on another first-party one. Because we're in the middle of a migration from (a privately hosted) GitLab to Azure, some of our dependencies aren't available in GitLab, which is where the problem comes up.
Our pyproject.toml file has this poetry group:
# pyproject.toml
[tool.poetry.group.cli.dependencies]
cli-parser = { path = "../cli-parser" }
In GitLab CI, this path cannot be resolved. Therefore, we want to run the pipelines without this dependency. No code that relies on this library actually runs, and no files import it, so we factored it out into a separate Poetry group. In the .gitlab-ci.yml, that looks like this:
# .gitlab-ci.yml
install-poetry-requirements:
  stage: install
  script:
    - /opt/poetry/bin/poetry --version
    - /opt/poetry/bin/poetry install --without cli --sync
As shown, Poetry is instructed to omit the cli dependency group. However, it still crashes on it:
# Gitlab CI logs
$ /opt/poetry/bin/poetry --version
Poetry (version 1.2.2)
$ /opt/poetry/bin/poetry install --without cli --sync
Directory ../cli-parser does not exist
If I comment out the cli-parser line in pyproject.toml, it installs successfully (and the pipeline passes), but we cannot do that, because we need it in production.
I can't find another way to tell poetry to omit this library. Is there something I missed, or is there a workaround?
Good, Permanent Solution
As finswimmer mentioned in a comment, Poetry 1.4 should handle this perfectly well. If you're reading this after Poetry 1.4 is out, the configuration in the question should work without any workaround.
Hacky, Bad, Temporary Solution
Since the original problem occurred in GitLab CI pipelines, I used a workaround there. Right before the install command, I run the following:
sed -i '/cli-parser/d' pyproject.toml
This modifies the project's pyproject.toml in place to remove the line that references my module, which prevents Poetry from ever parsing the dependency.
See the sed man page for more information on how this works.
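If sed is not available in the CI image, the same line-drop can be done with a few lines of Python (an equivalent sketch, not something Poetry provides):

```python
def strip_dependency(pyproject_text, needle='cli-parser'):
    """Drop every line mentioning the path dependency, mirroring
    `sed -i '/cli-parser/d' pyproject.toml`."""
    kept = [line for line in pyproject_text.splitlines() if needle not in line]
    return '\n'.join(kept) + '\n'
```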
Keep in mind that if your pipeline has any permanent effects, such as turning your package into an installable wheel or producing build artifacts used elsewhere, this WILL break your setup.
I am attempting to make a web app with Flask, and when I run my script from the command line, I get "ModuleNotFoundError: No module named 'google.cloud'". However, when I run the script in Sublime, I do not get this error.
I have already attempted installing google, google-cloud, and conda using pip.
Here are the lines involved in importing from google.cloud. The console states that the first line is the one the import fails at.
from google.cloud import vision
from google.cloud.vision import types
I was expecting the app to be served on my localhost, but this error is preventing that.
The library|package that you need is called google-cloud-vision, see:
https://pypi.org/project/google-cloud-vision/
You could add this directly to your project (at its current version) using:
pip install "google-cloud-vision==0.36.0"
However...
Your problem may be a consequence of different Python environments, and I encourage you to review virtualenv:
https://virtualenv.pypa.io/en/latest/
Among other things, virtualenv enables (a) the creation of isolated python environments; (b) "clean room" like behavior wherein you can recreate python environments easily and predictably. This latter benefit may help with your "it works .... but doesn't work ... " issue.
One additional good practice, with|without virtualenv, is to persist your pip install ... dependencies to (conventionally) requirements.txt, and install them with:
pip install -r requirements.txt
And, in this case, to have requirements.txt similar to:
flask==1.0.2
google-cloud-vision==0.36.0
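As a sanity check, the pinned requirements can be verified against the active environment. This sketch uses pkg_resources, which ships with setuptools; the helper name is my own:

```python
import pkg_resources

def check_requirements(path='requirements.txt'):
    """Raise a pkg_resources error if any pinned requirement is missing
    from, or at the wrong version in, the current environment."""
    with open(path) as f:
        requirements = [line.strip() for line in f
                        if line.strip() and not line.startswith('#')]
    pkg_resources.require(requirements)
```

Running this at startup surfaces the "works in one environment but not the other" problem immediately instead of at import time.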
I wrote a simple script in Python.
Now I would like Travis to check my code. After Travis succeeds, the version number should be increased.
Up to now my script has no version number yet. I can store it anywhere it makes sense for the auto-increment workflow.
How can I do this for Python code?
Update
It works now:
run tests
bumpversion
push tag to master
Unfortunately Travis does not support "after all". This means that if I want to run the tests on several Python versions, I have no way to run bumpversion after the tests on all Python versions have succeeded.
In my case I will check on Python 2.7 only until Travis resolves this issue: https://github.com/travis-ci/travis-ci/issues/929
Here is my simple script: https://github.com/guettli/compare-with-remote
Solved :-)
It works now:
Developer pushes to github
Travis-CI runs
If all tests are successful, bumpversion increases the version
The new version in setup.py gets pushed to the GitHub repo
A new release of the Python package gets uploaded to PyPI with the tool twine.
I explain the way I do CI with GitHub, Travis and PyPI here: https://github.com/guettli/github-travis-bumpversion-pypi
If you accept having an extra commit for each version bump, you can add this script as continuous_integration/increment_version.py:
import os
import pkg_resources

if __name__ == "__main__":
    version = pkg_resources.get_distribution("compare_with_remote").version
    split_version = version.split('.')
    try:
        split_version[-1] = str(int(split_version[-1]) + 1)
    except ValueError:
        # the last field of the version contains letters; leave it unchanged
        pass
    new_version = '.'.join(split_version)
    os.system("sed -i \"s/version='[0-9.]\+'/version='{}'/\" setup.py"
              .format(new_version))
    os.system("git add -u")
    os.system("git commit -m '[ci skip] Increase version to {}'"
              .format(new_version))
    os.system("git push")
And change your .travis.yml to
after_success:
  - python continuous_integration/increment_version.py
I am not sure how to make the git push part work, as it needs some testing with the repo permissions, but I assume you could set something up to allow Travis to push to your repo. You could look into that post, for instance.
Also note that I used Python to perform the git operations, but they can be added as extra lines in the after_success field:
after_success:
  - python continuous_integration/increment_version.py
  - git add -u
  - git commit -m "[ci skip] version changed"
  - git push
I just find it convenient to put the version number in the commit message.
Also, it is very important to add [ci skip] to the commit message to avoid an infinite increment loop. It might be safer to trigger the version change on a specific commit-message tag.
Not Python-specific, but this tutorial explains auto-incrementing version numbers by adding .travis.yml entries which update git tags with each successful build. It seems like a good balance between manual and auto-versioning.
While the tutorial does use npm's package.json for the initial version-check, you could implement a simple equivalent (in Python or otherwise).
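A minimal Python equivalent of that version check could compare the package version against the latest git tag. The function below is my own illustration and assumes plain dotted version strings with an optional leading "v":

```python
def needs_new_tag(package_version, latest_tag):
    """True when the package version has moved past the latest git tag,
    i.e. the build should push a new tag."""
    as_tuple = lambda v: tuple(int(part) for part in v.lstrip('v').split('.'))
    return as_tuple(package_version) > as_tuple(latest_tag)
```

In a CI step, latest_tag would typically come from `git describe --tags --abbrev=0`.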
Assuming that:
only commits with successful Travis CI builds get merged into the master branch (e.g. using Pull Requests)
the package is always installed with pip from a git repo using, for instance,
pip install git+https://github.com/user/package.git
For an auto-incrementing version number, one could simply define the version as the number of commits on the master branch. This can be done with the following lines in setup.py:
from subprocess import check_output

minor_version = check_output(['git', 'rev-list',
                              '--count', 'master']).decode('latin-1').strip()

setup(version='0.{}'.format(minor_version), ...)
I am working on a product with a large number of python dependencies within a corporation that does not permit servers to contact external machines. Any attempt to circumvent this rule would be judged harshly.
The application is deployed via a batch-script (it's 32 bit windows) into a virtualenv. This batch script (ideally) should do nothing more than
# Precondition: Source code has been checked-out into myprog/src
cd myprog/src
setup.py install # <-- fails because of dependencies
myprog.exe
The problem comes with managing the dependencies: since it's impossible for the server to connect to the outside world, my only solution is to have the script easy_install each of the dependencies before the setup starts, something like this:
cd myprog/deps/windows32
easy_install foo-1.2.3.egg
easy_install bar-2.3.4.egg
easy_install baz-3.4.5.egg <-- works but is annoying/wrong
cd ../../myprog/src
setup.py install
myprog.exe
What I'd like to do is make the setup.py script know where to fetch its dependencies from. Ideally this should be set as a command-line argument or an environment variable, so that I'm not hard-coding the location of the dependencies into the project.
Ideally I'd like all of the eggs to be part of a 'distributions' directory: this can be on a network drive, shared on a web server, or possibly even deployed to a local folder on each of the servers.
Can this be done?
I think what you are looking for are these pip options: --no-index and --find-links:
--no-index
--find-links /my/local/archives
--find-links http://some.archives.com/archives
Docs are here.
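For the batch script in the question, the archive directory could come from an environment variable so nothing is hard-coded. A sketch of that idea (the DEPS_DIR variable name and the default path are my own illustration):

```python
import os

def pip_install_command(requirements='requirements.txt'):
    """Build a pip invocation that resolves packages only from a local
    archive directory, never from PyPI."""
    deps = os.environ.get('DEPS_DIR', 'deps/windows32')
    return ['pip', 'install', '--no-index',
            '--find-links', deps, '-r', requirements]
```

The resulting list can be passed to subprocess.check_call, or the same flags can be used directly in the batch script.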
I wanted to use the pyfire GitHub library https://github.com/mariano/pyfire
This is what pip freeze produced for me:
-e git+ssh://git@github.com/mariano/pyfire.git@db856bb666c4b381c08f2f4bf7c9ac7aaa233221#egg=pyfire-dev
but when doing so, the clone that is done to install the dependencies fails with an error code, and I can't run
heroku run console
to check out the full error log...
Any experience with this, or ideas?
Thanks a lot in advance.
pip freeze appears to produce the wrong result; you should be able to modify your requirements.txt to:
git+https://github.com/mariano/pyfire.git@db856bb666c4b381c08f2f4bf7c9ac7aaa233221#egg=pyfire-dev
I realized this was already solved for you, but this didn't work for me or @amrox or @tomtaylor.
If you remove the commit part, it works for me, i.e. changing the line in requirements.txt to:
git+https://github.com/mariano/pyfire.git
When I install the git repo including the specific commit locally, git seems to recognize that the end part is a specific commit, but when I try this on Heroku and track the progress, it is pretty clear it treats the commit part as a tag. Since there is no tag with that name, the install fails. They may be using a different version of git.