Why is my pipenv stuck in the "Locking..." stage when installing [numpy|opencv|pandas]?
When running pipenv install pandas or pipenv update, it hangs for a really long time with a message and spinner saying it's still locking. Why? What do I need to do?
Your package(s) are being installed and your wheel is being built
Perhaps better terminology to describe this state would be 'Building and Locking...' or something similar.
This is especially likely to happen if you're installing numpy, opencv, pandas, or other large packages.
What's going on in the background is that pipenv is downloading your package and, if no prebuilt wheel is available for your platform, building it from source.
The remedy in this case is often a strong dose of patience.
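If you want reassurance that pipenv is actually working rather than hung, rerunning the install with the verbose flag will show the underlying pip download and build output (pandas below is just the question's example package):

pipenv install pandas --verbose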
What is Locking?
To understand more about what "Locking" is in the pipenv context you can read more here: https://pipenv.kennethreitz.org/en/latest/basics/#pipenv-lock
$ pipenv lock is used to create a Pipfile.lock, which declares all dependencies (and sub-dependencies) of your project, their latest available versions, and the current hashes for the downloaded files. This ensures repeatable, and most importantly deterministic, builds.
However, there are times when it is not just a slow/large install, but an actual issue with your Pipfile[.lock]. If you're fairly certain that this is the problem, try pipenv lock --clear and rerun your pipenv install command; also check this thread for more information.
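In practice, the recovery sequence usually looks something like this (pandas is again just an example):

pipenv lock --clear
pipenv install pandas

The --clear flag discards pipenv's dependency resolution cache, so the subsequent install re-resolves everything from scratch.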
I had this happen to me just now. Pipenv got stuck locking forever, 20+ minutes with no end in sight, and pipenv --rm didn't help.
In the end, the problem was that I had run pipenv install "boto3~=1.21.14" to upgrade boto3 from boto3 = "==1.17.105". But I had other conflicting requirements (in my case, botocore = "==1.20.105" and s3transfer = "==0.4.2") which are boto3 dependencies. The new version of boto3 required newer versions of these two packages, but the == requirements didn't allow that. Pipenv didn't explain this, and just spun "Locking…" forever.
So if you run into this, I would advise you to look carefully at the packages in your Pipfile, check for any obvious conflicts, and loosen or remove version pins where possible.
In my case, I was able to just remove the s3transfer and botocore packages from the list entirely, and rely on boto3 to fetch the necessary versions.
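If the conflict isn't obvious from reading the Pipfile, pipenv graph prints the dependency tree with each requirement's allowed range next to the version actually installed, which makes stale pins like the ones above stand out. The output below is illustrative rather than copied from a real run:

$ pipenv graph
boto3==1.21.14
  - botocore [required: >=1.24.14,<1.25.0, installed: 1.20.105]
  - s3transfer [required: >=0.5.0,<0.6.0, installed: 0.4.2]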
This is an open issue with pipenv
https://github.com/pypa/pipenv/issues/3827
I suggest going back to pip.
For folks trying to use pipenv with an existing requirements.txt file in the working dir, you may find this GitHub post helpful.
Note: I also used pipenv --rm before attempting what I show.
HTH ;)
P.S. Here's a shout-out to Zebradil's script for creating a requirements.txt, in case you're collaborating with others who don't use pipenv.
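Depending on your pipenv version, you may not even need a separate script: newer releases can export the lock file natively via a pipenv requirements subcommand, and older ones had pipenv lock -r. For example:

pipenv requirements > requirements.txt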
Try doing pipenv --rm, which removes the virtual environment; then pipenv shell, which will initiate the virtual env again; then pipenv install, which installs all the packages again.
Worked for me.
Related
Is there a best practice for using pipenv for deterministic builds when you're going to be developing and running application code on multiple platforms (i.e. Windows, Linux, and Mac)?
For instance, the requirements for pytest defines the atomicwrites library as a conditional dependency if you are installing pytest in a Windows-based Python environment.
However, if I define...
[dev-packages]
pytest = "*"
as a requirement in my project's Pipfile and run pipenv install --dev to generate my initial Pipfile.lock in a Linux-based Python environment, then atomicwrites is neither installed nor specified in any way in the resulting Pipfile.lock file.
Later, after I git commit my new Linux-generated Pipfile.lock, either I or someone else will eventually pull that committed Pipfile.lock file down onto their Windows machine and will run pipenv install --dev to generate their own local pipenv environment.
However, when they go to run the pytest test-runner, it will fail because atomicwrites will not have been installed in their pipenv environment.
What's more, my Windows test-build will also fail when using a CI service like GitHub Actions or Azure Pipelines because pipenv will fail to install the atomicwrites dependency there too (because it will not be specified in the repo's Pipfile.lock specifications).
This pytest example is a super simple example of this issue. In this case, it'd be easy enough just to add atomicwrites as one of my [dev-packages] requirements in my Pipfile so that it gets installed regardless of the platform, or to even add sys_platform = "== 'win32'" to specify that it should only be installed by pipenv on Windows platforms.
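For reference, that second approach would make the Pipfile entry look roughly like this (a sketch; atomicwrites stands in for any Windows-only dependency):

[dev-packages]
pytest = "*"
atomicwrites = {version = "*", sys_platform = "== 'win32'"}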
However, these platform-conditional dependencies become a much harder issue to deal with when my project has many dependencies, all with their own platform-conditional dependencies.
I've seen this issue discussed in a couple different locations, such as here, and here.
However, I have yet to find any straightforward method for dealing with this (short of not using pipenv or deleting the Pipfile.lock file prior to running pipenv install --dev on a different platform).
Do any pipenv users out there have a recommended best practice for dealing with this multiple-os Pipfile.lock install issue?
This is obviously too late to be helpful to you, but I have found that generating the lock file in these multiple environments while keeping outdated versions resolves most of the problems with pipenv.
For example, our CI/CD uses GitHub images, so in a Windows Subsystem for Linux shell:
pipenv install --dev
Then, in a Windows shell:
pipenv install --dev --keep-outdated
However, running in Windows can sometimes pin a dependency to that platform (colorama currently does this as of this answer's writing). To avoid that, you can regenerate the lock file again in WSL:
pipenv lock --dev --keep-outdated
This will keep the "outdated" packages from a Windows-only environment, but will generally fix the platform conditionals.
Note that the dance above is ultimately not foolproof; I found this question because this exact method was not working for atomicwrites. However, it resolves the vast majority of issues, and those it doesn't can generally be solved by adding the package to your dependencies manually.
Poetry has a very good version solver, too good sometimes :) I'm trying to use Poetry in a project that uses two incompatible packages. However, they are incompatible only by declaration: one of them is no longer developed, but otherwise they work together just fine.
With pip I'm able to install both in one environment (with an error printed) and it works. Poetry will declare that the dependency versions can't be resolved and refuse to install anything.
Is there a way to force poetry to install these incompatible dependencies? Thank you!
No.
Alternative solutions might be:
- contacting the offending package's maintainers and asking for a fix + release
- forking the package and releasing a fix yourself
- vendoring the package in your source code: there is no need to install it if it's already there, and many of the usual downsides of vendoring disappear if the project in question is no longer maintained
- installing the package by hand after poetry install, with an installer that can ignore the dependency resolver, such as pip (similar to what you're already doing; see the sketch below)
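A minimal sketch of that last option, using pip's --no-deps flag so the package's stale dependency pins are ignored entirely (legacy-package is a placeholder name):

poetry install
poetry run pip install --no-deps legacy-package

Running pip through poetry run targets Poetry's virtualenv, and --no-deps tells pip not to touch the package's declared dependencies at all.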
The answers to this question seemed to ignore the --save-dev part.
What is pip's equivalent of `npm install package --save-dev`?
I am using pipenv and have several packages that I only want to install during local development (e.g. pytest, unittest, matplotlib).
How can I achieve this using pipenv? I can't see anything about that on the manual page.
You can achieve this through pipenv: https://pipenv-fork.readthedocs.io/en/latest/basics.html#pipenv-install
You can use pipenv install --dev whatever.
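For example, to record two of the question's packages as development-only dependencies and later restore them on another machine:

pipenv install --dev pytest matplotlib
pipenv install --dev

The first command records the packages under [dev-packages] in the Pipfile; the bare pipenv install --dev then installs both the default and the dev dependencies from an existing Pipfile.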
But I actually would not recommend pipenv, as it handles locking very slowly, which makes it almost useless.
I wonder if it's possible to grab the change logs of updated packages after running
pip install [package_name] --upgrade
or
pipenv update
Most packages keep change logs in their repos, e.g.:
https://github.com/kennethreitz/requests-html/releases
https://github.com/pytest-dev/pytest/blob/master/CHANGELOG.rst
https://github.com/urllib3/urllib3/blob/master/CHANGES.rst
It would be more productive if I could get the latest updates via the CLI.
As a workaround, I have been using the Changelogs module. It uses heuristics to attempt to fetch a changelog, but of course, with plenty of packages doing their own thing, it will never work for all packages.
I did test it on your three sample packages (requests, pytest, urllib3) and it successfully printed a changelog for each.
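For reference, a minimal invocation, assuming the get() helper documented in the changelogs README (the shape of the output varies by package):

pip install changelogs
python -c "import changelogs; print(changelogs.get('pytest'))"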
I'm using pip to install modules from a requirements file produced with pip freeze. The problem is that sometimes it's unable to download or install one module, and then everything fails and nothing gets installed. Is there a way to make it install whichever modules it can?
With pip alone, I would say no. pip and Python packages in general are designed such that you may need dependencies installed in order to install a package itself, so there is no option to keep going despite failures.
However, pip install -r requirements.txt simply goes through the file line by line. You can iterate over every single item yourself and call pip install for it, without caring about the result (whether the installation succeeded or not). With shell scripting this could be done e.g.:
cat requirements.txt | xargs -n 1 pip install
The example does not understand comments, spaces, etc., so you might need to have something more complex in place for a real-life scenario.
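For example, a slightly more robust bash sketch that skips blank lines and comments and keeps going on failures (still not handling every requirements.txt feature, such as -r includes):

while IFS= read -r req; do
  # skip blank lines and comment lines
  [[ -z "$req" || "$req" == \#* ]] && continue
  pip install "$req" || echo "FAILED: $req" >&2
done < requirements.txt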
Alternatively, you can simply run pip in a loop until it gives a successful return value.
But as a real solution, I would recommend setting up your own Python package mirror server, or a local cache, which would be another question.