pipenv install installs dependencies every time / PyCharm doesn't recognize them - python

I'm having a variety of problems with my pipenv setup (see my other question here: differences between users even after using Pipfile and Pipfile.lock with explicit versions), and I just noticed something else that seems funky.
It turns out that in my project folder (with both a Pipfile and a Pipfile.lock present, an initial pipenv install having been run, and without pipenv shell invoked), I can run pipenv install as many times as I want, and every time it says it is installing 74 dependencies. Does this mean the pipenv install isn't taking effect, or does it just mean it's running through the dependencies to make sure they're installed?
It seems like there might be a problem, because when I open PyCharm for that project folder, it gives me the alert below ("Package requirements..." with the option to install requirements from Pipfile.lock).
I'm on the latest PyCharm, which is set up to use the pipenv environment I created with pipenv install, and I can confirm it's using that environment via PyCharm -> Preferences -> Project -> Project Interpreter, which shows the right virtualenv for this folder.
So it seems that neither pipenv install nor PyCharm thinks the dependencies have been installed.

To answer your second question: no, the requirements are not being installed again. Every time you run pipenv install it will say it's installing all requirements from your Pipfile.lock, but if you run pipenv install -v to get verbose output, you'll see lines like the following:
Installed version (4.1.2) is most up-to-date (past versions: 4.1.2)
Requirement already up-to-date: whitenoise==4.1.2 in c:\users\mihai\.virtualenvs\pipenvtest-1zyry8jn\lib\site-packages (from -r C:\Users\Mihai\AppData\Local\Temp\pipenv-1th31ie1-requirements\pipenv-r4e3zcr7-requirement.txt (line 1)) (4.1.2)
Since it is already installed, we are trusting this package without checking its hash. To ensure a completely repeatable environment, install into an empty virtualenv.
Cleaning up...
Removed build tracker 'C:\\Users\\Mihai\\AppData\\Local\\Temp\\pip-req-tracker-ip_gjf7h'
So, to answer your question: it just runs through them to check whether they're installed, and installs them only if necessary.
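As an aside, if you just want to install exactly what Pipfile.lock pins, without re-resolving or re-locking anything, recent pipenv versions also provide a sync subcommand. A minimal check might look like:

pipenv sync    # install exactly the versions pinned in Pipfile.lock
pipenv graph   # confirm what actually ended up in the virtualenv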

Related

Virtual Environment Being Ignored or Overridden

I have been having an issue where pip and python seem to be ignoring the fact that they are in a virtualenv. I found the following:
pip command in virtualenv ignored in favour of system pip command
pip not pointing to virtual environment, in virtual environment
But neither seem to answer my issue.
The key issue seems to be that when pip tries to install a module, it detects the module at the system level and thinks it is already there; when it then tries to uninstall it, it fails because the module isn't in the virtualenv.
For example, I need to update Werkzeug, so I try:
[venv] me@somemachine: pip install werkzeug
which results in:
Requirement already satisfied: werkzeug in /python/3.7.2/rh60_64/modules (0.15.2)
The weird bit starts if I run an uninstall:
[venv] me@somemachine: pip uninstall werkzeug
When I get:
Not uninstalling werkzeug at /python/3.7.2/rh60_64/modules, outside environment /venv
Can't uninstall 'Werkzeug'. No files were found to uninstall.
Here's the kicker: due to the way the server is built, I'm having to use the 'sw' command to get python (in this case 3.7.2). I'm wondering if this has something to do with the venv getting confused about where it is supposed to be looking. I do not have root access to the server, nor am I likely to get it, so I can't mess with the system-installed modules.
Other useful info:
The base system is RHEL 6.6
pip -V output:
pip 22.0.4 from /venv/lib/python3.7/site-packages/pip (python 3.7)
Any pointers or ideas that might help this are more than welcome.
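One way to narrow this down (a sketch, assuming a POSIX shell; werkzeug is just the package from the example above) is to check which interpreter and pip are actually being resolved, and then to invoke pip through the venv's interpreter so the two can't disagree:

which python pip                                     # both should resolve inside /venv
python -m pip -V                                     # pip's location as seen by this interpreter
python -m site                                       # which site-packages directories are on the path
python -m pip install --ignore-installed werkzeug    # ignore the system-level copy and install into the venv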

pip uninstall fails with "owned by OS" - even under sudo

I'm working on a DevOps project for a client who's using Python. Though I've never used Python professionally, I know a few things, such as how to use virtualenv and pip - though not in great detail.
When I looked at the staging box, which I am trying to prepare for running a functional test suite, I saw chaos: tons of packages installed globally, and the ones installed inside the virtualenv not matching the project's requirements.txt. OK, I thought, there's a lot of cleaning up to do, starting with the global packages.
However, I ran into a problem at once:
➜ ~ pip uninstall PyYAML
Not uninstalling PyYAML at /usr/lib/python2.7/dist-packages, owned by OS
OK, someone must've done a 'sudo pip install PyYAML'. I think I know how to fix it:
➜ ~ sudo pip uninstall PyYAML
Not uninstalling PyYAML at /usr/lib/python2.7/dist-packages, owned by OS
Uh, apparently I don't.
A search revealed some similar conflicts caused by users installing packages bypassing pip, but I'm not convinced - why would pip even know about them, if that was the case? Unless the "other" way is placing them in the same location pip would use - but if that's the case, why would it fail to uninstall under sudo?
Pip refuses to uninstall these packages because the Debian developers patched it to behave that way; this lets you use pip and apt side by side. The "original" pip program doesn't have this behaviour.
Update: my answer is relevant only to old versions of pip. Recent versions of pip are configured to modify only files that reside in pip's own "home directory" - that is, /usr/local/lib/python3.* on Debian. With current tools you will get errors like these when you try to delete a package installed by apt:
For pip 9.0.1-2.3~ubuntu1 (installed from Ubuntu repository):
Not uninstalling pyyaml at /usr/lib/python3/dist-packages, outside environment /usr
For pip 10.0.1 (original, installed from pypi.org):
Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
The point is not that pip cannot uninstall the package because you don't have enough permissions, but that it is not a package installed through pip, so pip doesn't want to uninstall it.
dist-packages is where packages installed by the OS package manager reside; since they are handled by another package manager (e.g. apt on Ubuntu/Debian, pacman on Arch, rpm/yum on CentOS, ...), pip won't touch them (but it still has to know about them, as they are installed packages and can satisfy dependencies of pip-installed packages).
You should also probably avoid touching them unless you use the correct package manager, and even then they may have been installed automatically to satisfy the dependencies of some program, so you may not be able to remove them without breaking it. This is usually easy to check, although the exact way depends on the Linux distribution you are using.
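For example, on Debian/Ubuntu you can ask apt which package owns a file and remove it through apt instead of pip (a sketch; the exact path and the python-yaml package name are assumptions based on the PyYAML example above):

dpkg -S /usr/lib/python2.7/dist-packages/yaml/__init__.py   # which apt package owns this file?
apt-cache rdepends python-yaml                              # what depends on it, i.e. what might break?
sudo apt-get remove python-yaml                             # remove it via apt, not pip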

Pipfile.lock version not matching installed package version

I'm using pipenv in a new project I'm working on. The initial pipenv install was for Django, with the Pipfile showing:
[packages]
django = "*"
and Pipfile.lock showing:
"version": "==1.11.7"
pipenv graph and pip list (from within the pipenv virtualenv) both show that Django version 1.11.7 is installed.
However, when I do a subsequent pipenv install new-package the Pipfile.lock is updated to show:
"version": "==2.0"
for Django, even though pipenv graph and pip list both show that version 1.11.7 is still installed locally. This obviously causes problems, as the local Django version is different to that which will be installed in a fresh environment, based on the Pipfile.lock.
It seems like pipenv install new-package is updating the pinned versions of packages that have already been installed, without actually upgrading those packages - which seems counterintuitive to me. As far as I can see, the only way to keep the Pipfile.lock in sync with the local environment is either to pin every package version in the Pipfile, or to follow every pipenv install ... with a pipenv update - neither of which seems like a particularly intuitive workflow.
I haven't been able to find any documentation or useful answers online which really clarify this behaviour. Is this the expected behaviour, or am I missing something? What is the 'recommended' workflow for handling this situation using pipenv?
This seems to be the same issue (or a similar one) as described in these pipenv issues. My reading of the responses in the older issue is that this behaviour is as expected, and that:
In order to keep the pipenv-generated environment up to date with the Pipfile.lock contents, a call to pipenv update is required.
In order to prevent updates of 'unrelated' packages during a pipenv install new-package, it's necessary to pin versions in the Pipfile.
It would appear from current responses to this issue that there are no immediate plans to change this behaviour.
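If pinning is the route you take, note that passing a version specifier to pipenv install writes the pin into the Pipfile for you (a sketch using the Django version from above; new-package is the hypothetical package name from the question):

pipenv install "django==1.11.7"   # writes django = "==1.11.7" into the Pipfile
pipenv install new-package        # django's pin now survives the re-lock
pipenv sync                       # install exactly what Pipfile.lock specifies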

What's the difference between direct pip install and the requirements.txt?

I'm confused. I've got a working pip install command (meaning: it installs a version of a library from GitHub which works for me), and a non-working way (meaning: it installs a version of the library which does not work for me) of putting that requirement into a requirements.txt file.
More concretely:
If I type on the command line
pip install -e 'git://github.com/mozilla/elasticutils.git#egg=elasticutils'
and then test my program, all works fine. If I put this line into my requirements.txt:
-e git://github.com/mozilla/elasticutils.git#egg=elasticutils
and then run my program, it breaks with an error (only the library should have changed, so I guess something changed in that library between the two versions).
But shouldn't both versions do exactly the same thing? (Of course I've done my best to remove the installed version of the library between the two tests, using pip uninstall elasticutils.)
Any information welcome …
Yep, as I wrote in my comment above, there seems to be a dependency override when the requirements.txt states something different from the dependencies declared in the package. In my case, installing the package manually also installed the newer version of requests, namely 1.2.0, while using the requirements.txt always installed (due to the override) version 0.14.2 of requests.
Problem solved by updating the requests version in the requirements.txt :-)
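In other words, pinning the transitive dependency explicitly in requirements.txt resolves the conflict. A sketch, based on the versions mentioned above:

-e git://github.com/mozilla/elasticutils.git#egg=elasticutils
requests==1.2.0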
Well, I don't know exactly what the difference is, but when I want something in a requirements.txt to be installed from a git repo, I use the following line (note: no leading #, which would make pip treat the line as a comment):
git+https://github.com/user/package_name.git
and then install with:
pip install -r requirements.txt

How can I get pip install's -I flag to work with a requirements file?

I feel like there must be a way to do this, but for the life of me I can't figure out how: I want to run pip against a requirements file in a virtualenv so that, no matter what packages are in the virtualenv beforehand, the requirements file is totally fulfilled (including specific versions) afterwards.
The problem now is that if I have an older version of a package installed in the virtualenv than is listed in the requirements file, pip complains about the version mismatch and exits (it should just update the package to the given version). The command I'm running is pip install -I -r requirements.txt, and according to pip's help, -I is supposed to make pip "ignore the installed packages (reinstalling instead)", but it definitely isn't doing that.
What am I missing?
(It'd be nice if pip skipped the packages that are already fulfilled too.)
I figured out the cause of my pip problems. Long story short: source left over in the virtualenv's build directory was causing an error that made package upgrades fail. What I should have been doing was clearing out that directory (which pip doesn't always do, apparently) before running pip install; paired with the --upgrade/-U flag, it now does everything I want.
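A minimal sketch of that fix, assuming an activated virtualenv whose leftover build directory lives at $VIRTUAL_ENV/build (the location older pip versions used):

rm -rf "$VIRTUAL_ENV/build"           # clear out stale source left by earlier installs
pip install -U -r requirements.txt    # upgrade everything to the versions in the requirements file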
