I'm using pipenv in a new project I'm working on. The initial pipenv install was Django, with the Pipfile showing:
[packages]
django = "*"
and Pipfile.lock showing:
"version": "==1.11.7"
pipenv graph and pip list (from within the pipenv virtualenv) both show that Django version 1.11.7 is installed.
However, when I do a subsequent pipenv install new-package, the Pipfile.lock is updated to show:
"version": "==2.0"
for Django, even though pipenv graph and pip list both show that version 1.11.7 is still installed locally. This obviously causes problems, as the local Django version differs from the one that will be installed in a fresh environment based on the Pipfile.lock.
It seems like pipenv install new-package is updating the specified version of packages which have already been installed, without actually updating those packages to the latest version, which seems counterintuitive to me. As far as I can see, the only way to keep the Pipfile.lock in sync with the local environment would be to either pin all package versions in the Pipfile, or to follow up every pipenv install ... with a pipenv update. Neither of these seems like a particularly intuitive workflow.
I haven't been able to find any documentation or useful answers online which really clarify this behaviour. Is this the expected behaviour, or am I missing something? What is the 'recommended' workflow for handling this situation using pipenv?
This seems to be the same as, or similar to, the issue described in these pipenv issues. My reading of the responses in the older issue is that this behaviour is expected, and that:
In order to keep the pipenv-generated environment up to date with the Pipfile.lock contents, a call to pipenv update is required.
In order to prevent updates of 'unrelated' packages during a pipenv install new-package, it's necessary to pin versions in the Pipfile (see the sketch below).
It would appear from current responses to this issue that there are no immediate plans to change this behaviour.
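For example, pinning Django to the version recorded in the original Pipfile.lock keeps a later pipenv install new-package from bumping it in the lock file. A minimal Pipfile sketch, using the versions from this question:
[packages]
django = "==1.11.7"
new-package = "*"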
Related
Is there a best practice for using pipenv for deterministic builds when you're going to be developing and running application code on multiple platforms (i.e. Windows, Linux, and Mac)?
For instance, the requirements for pytest define the atomicwrites library as a conditional dependency if you are installing pytest in a Windows-based Python environment.
However, if I define...
[dev-packages]
pytest = "*"
as a requirement in my project's Pipfile and run pipenv install --dev to generate my initial Pipfile.lock in a Linux-based Python environment, then atomicwrites is neither installed nor specified in any way in the resulting Pipfile.lock file.
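For reference, when a platform marker does get captured, the corresponding Pipfile.lock entry looks something like this (a sketch; the version shown is hypothetical):
"atomicwrites": {
    "markers": "sys_platform == 'win32'",
    "version": "==1.4.0"
}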
Later, after I git commit my new Linux-generated Pipfile.lock, either I or someone else will eventually pull that committed Pipfile.lock file down onto their Windows machine and will run pipenv install --dev to generate their own local pipenv environment.
However, when they go to run the pytest test-runner, the pytest command will fail because atomicwrites will not have been installed in their pipenv environment.
What's more, my Windows test build will also fail on a CI service like GitHub Actions or Azure Pipelines, because pipenv will fail to install the atomicwrites dependency there too (it is not specified anywhere in the repo's Pipfile.lock).
This pytest example is a super simple example of this issue. In this case, it'd be easy enough just to add atomicwrites as one of my [dev-packages] requirements in my Pipfile so that it gets installed regardless of the platform, or even to add sys_platform = "== 'win32'" to specify that it should only be installed by pipenv on Windows platforms (sketched below).
However, these platform-conditional dependencies become a much harder issue to deal with when my project has many dependencies, all with their own platform-conditional dependencies.
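For illustration, the single-package fix mentioned above would look something like this in the Pipfile (a sketch; the inline marker form is the one pipenv accepts for conditional installs):
[dev-packages]
pytest = "*"
atomicwrites = {version = "*", sys_platform = "== 'win32'"}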
I've seen this issue discussed in a couple of different locations, such as here and here.
However, I have yet to find any straightforward method for dealing with this (short of not using pipenv or deleting the Pipfile.lock file prior to running pipenv install --dev on a different platform).
Do any pipenv users out there have a recommended best practice for dealing with this multiple-os Pipfile.lock install issue?
This is obviously too late to be helpful to you, but I have found that generating the lock file in these multiple environments while keeping outdated versions resolves most of the problems with pipenv.
For example, our CI/CD uses GitHub images, so in a Windows Subsystem for Linux shell:
pipenv install --dev
Then, in a Windows shell:
pipenv install --dev --keep-outdated
However, running in Windows can sometimes pin a dependency to that platform (colorama currently does this as of this answer's writing). To avoid that, you can regenerate the lock file in WSL:
pipenv lock --dev --keep-outdated
This will keep the "outdated" packages from a Windows-only environment, but will generally fix the platform conditionals.
Note that the dance above is ultimately not foolproof; I found this question because this exact method was not working for atomicwrites. However, it seems to resolve the vast majority of issues, and those it doesn't catch can generally be solved by adding the package to your dependencies manually.
Why is my pipenv stuck in the "Locking..." stage when installing [numpy|opencv|pandas]?
When running pipenv install pandas or pipenv update it hangs for a really long time with a message and loading screen that says it's still locking. Why? What do I need to do?
Your package(s) are being installed, and a wheel is being built
Perhaps better terminology to describe this state would be 'Building and Locking...' or something similar.
This is especially likely to happen if you're installing numpy, opencv, pandas, or other large packages.
What's going on in the background is that pipenv is downloading your package and possibly building a wheel from source.
The remedy in this case is often a strong dose of patience.
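If you'd rather watch progress than a spinner, you can run the install in verbose mode to see the underlying download/build output (using pandas from the question as the example):
$ pipenv install pandas -v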
What is Locking?
To understand more about what "Locking" is in the pipenv context you can read more here: https://pipenv.kennethreitz.org/en/latest/basics/#pipenv-lock
$ pipenv lock is used to create a Pipfile.lock, which declares all dependencies (and sub-dependencies) of your project, their latest available versions, and the current hashes for the downloaded files. This ensures repeatable, and most importantly deterministic, builds.
However, there are times when it is not just a slow/large install, but instead an issue with your Pipfile[.lock]. If you're fairly certain that this is the problem, try pipenv lock --clear and rerun your pipenv install command; also check this thread for more information.
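In that case, a minimal sequence to try (again using pandas as the example package):
$ pipenv lock --clear
$ pipenv install pandas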
I had this happen to me just now. Pipenv got stuck locking forever, 20+ minutes with no end in sight, and pipenv --rm didn't help.
In the end, the problem was that I had run pipenv install "boto3~=1.21.14" to upgrade boto3 from boto3 = "==1.17.105". But I had other conflicting requirements (in my case, botocore = "==1.20.105" and s3transfer = "==0.4.2") which are boto3 dependencies. The new version of boto3 required newer versions of these two packages, but the == requirements didn't allow that. Pipenv didn't explain this, and just spun "Locking…" forever.
So if you run into this, I would advise you to carefully look at your Pipenv packages, see if there are any obvious conflicts, and loosen or remove package requirements where possible.
In my case, I was able to just remove the s3transfer and botocore packages from the list entirely, and rely on boto3 to fetch the necessary versions.
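Concretely, the Pipfile change looked roughly like this. Before, with the conflicting pins:
[packages]
boto3 = "==1.17.105"
botocore = "==1.20.105"
s3transfer = "==0.4.2"
After, letting boto3 resolve its own dependencies:
[packages]
boto3 = "~=1.21.14"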
This is an open issue with pipenv
https://github.com/pypa/pipenv/issues/3827
I suggest going back to pip.
For folks trying to use pipenv with an existing requirements.txt file in the working dir, you may find this GitHub post helpful.
Note: I also used pipenv --rm before attempting what I show.
HTH ;)
P.S. Here's a shout out to Zebradil's script to create a requirements.txt, in case you're collaborating with others who don't use pipenv.
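If you'd rather not rely on an external script, pipenv can do the conversion itself; the export command differs by version, so check which form your pipenv supports:
$ pipenv install -r requirements.txt       # import an existing requirements.txt into a Pipfile
$ pipenv requirements > requirements.txt   # export (newer pipenv versions)
$ pipenv lock -r > requirements.txt        # export (older pipenv versions)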
Try doing pipenv --rm, which removes the virtual environment.
Then pipenv shell, which will initiate the virtual environment again.
Then pipenv install, which installs all the packages again.
Worked for me.
I'm having a variety of problems with my pipenv setup (see my other question here: differences between users even after using Pipfile and Pipfile.lock with explicit versions), and I just noticed something else that seems funky.
It turns out that in my project folder (with both a Pipfile and Pipfile.lock created, an initial pipenv install having been run, and without pipenv shell invoked), I can run pipenv install as many times as I want, and every time it says that it is installing 74 dependencies. Does this mean that the pipenv install isn't taking effect, or does it just mean it's running through the dependencies to make sure they are installed?
It seems like there might be a problem, because when I open PyCharm for that project folder, it gives me an alert ("Package requirements..." with the option to install requirements from Pipfile.lock).
I'm on the latest PyCharm, which is set up to use the pipenv environment that I created with pipenv install, and I can confirm that it's using that environment based on PyCharm->Preferences->Project->Project Interpreter, where it shows the right virtualenv for this folder.
But it seems that neither pipenv install nor PyCharm thinks the dependencies have been installed.
To answer your second question, the requirements are not being installed again. Every time you run pipenv install it will say that it's installing all requirements from your Pipfile.lock file, but if you run pipenv install -v to make it verbose and see the output, you'll see things like the following:
Installed version (4.1.2) is most up-to-date (past versions: 4.1.2)
Requirement already up-to-date: whitenoise==4.1.2 in c:\users\mihai\.virtualenvs\pipenvtest-1zyry8jn\lib\site-packages (from -r C:\Users\Mihai\AppData\Local\Temp\pipenv-1th31ie1-requirements\pipenv-r4e3zcr7-requirement.txt (line 1))
(4.1.2)
Since it is already installed, we are trusting this package without checking its hash. To ensure a completely repeatable environment, install into an empty virtualenv.
Cleaning up...
Removed build tracker 'C:\\Users\\Mihai\\AppData\\Local\\Temp\\pip-req-tracker-ip_gjf7h'
So to answer your question, it just runs through them to check if they're installed, installing them only if necessary.
I wonder if it's possible to grab the change logs of updated packages after running
pip install [package_name] --upgrade
or
pipenv update
Most packages keep change logs in their repos, e.g.:
https://github.com/kennethreitz/requests-html/releases
https://github.com/pytest-dev/pytest/blob/master/CHANGELOG.rst
https://github.com/urllib3/urllib3/blob/master/CHANGES.rst
It would be more productive if I could get the latest changes via the CLI.
As a workaround, I have been using the changelogs module. It uses heuristics to attempt to fetch a changelog, but of course, with plenty of packages doing their own thing, it will never work for all of them.
I did test it on your three sample packages (requests-html, pytest, and urllib3) and successfully printed out a changelog for each.
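For anyone wanting to script this, here is a minimal sketch using the changelogs module's get() function (assuming its dict-of-versions return shape; the package name is just an example):

import changelogs

# Fetch release notes for a package; returns a dict mapping
# version strings to changelog text (empty if nothing was found).
logs = changelogs.get("pytest")

# Versions are plain strings, so this sort is lexicographic,
# which is good enough for a quick look.
for version, notes in sorted(logs.items(), reverse=True):
    print(version)
    print(notes)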
I am using requirements.txt to specify the package dependencies used in my Python application. Everything seems to work fine for packages that either have no nested dependencies or whose nested dependencies are not already installed.
The issue occurs when I try to install a package which has a nested dependency on some other package, an older version of which is already installed.
I know I can avoid this when installing a package manually by using pip install -U --no-deps <package_name>. I want to understand how to do this using requirements.txt, as the deployment and requirements installation is an automated process.
Note:
The already-installed package is not something I am directly using in my project, but it is part of a different project on the same server.
Thanks in advance.
Dependency resolution is a fairly complicated problem. A requirements.txt just specifies your dependencies, with optional version ranges. If you want to "lock" your transitive dependencies (dependencies of dependencies) in place, you would have to produce a requirements.txt that contains exact versions of every package you install, with something like pip freeze. This doesn't solve the problem, but it would at least surface conflicting dependencies at install time so that you can manually pick the right versions.
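For example, a fully pinned requirements file can be produced from a known-good environment and then replayed elsewhere:
$ pip freeze > requirements.txt        # capture exact versions of everything installed
$ pip install -r requirements.txt      # reproduce that exact set in another environment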
That being said, the new (as of writing) officially supported tool for managing application dependencies is Pipenv. This tool will both manage the exact versions of transitive dependencies for you (so you won't have to maintain a requirements.txt manually) and isolate the packages your code requires from the rest of the system. (It does this using the virtualenv tool under the hood.) This isolation should fix your problems with breaking a colocated project, since your project can have different versions of libraries than the rest of the system.
(TL;DR Try using Pipenv and see if your problem just disappears)
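A minimal first-steps sketch, using requests purely as a hypothetical example package:
$ pipenv install requests   # records requests in the Pipfile and installs it into an isolated virtualenv
$ pipenv lock               # pins every transitive dependency, with hashes, in Pipfile.lock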