We have a lock file which has not changed since April 2021. Recently, we have started seeing the following error on pipenv install --deploy:
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
gunicorn==20.1.0 from https://files.pythonhosted.org/packages/e4/dd/5b190393e6066286773a67dfcc2f9492058e9b57c4867a95f1ba5caf0a83/gunicorn-20.1.0-py3-none-any.whl (from -r /tmp/pipenv-g7_1pdnq-requirements/pipenv-d64a8p6k-hashed-reqs.txt (line 32)):
Expected sha256 e0a968b5ba15f8a328fdfd7ab1fcb5af4470c28aaf7e55df02a99bc13138e6e8
Got 9dcc4547dbb1cb284accfb15ab5667a0e5d1881cc443e0677b4882a4067a807e
We have opened an issue on the project's GitHub: https://github.com/benoitc/gunicorn/issues/2889
We believe it would be unsafe to use this new version without confirmation that it is correct and safe, in case someone has maliciously updated the package in the package repository.
Is there a way we can grab the wheel file from a previous docker build and force that to be used for the time being so we can safely build with the existing version and checksum?
Thanks
Thanks to @Ouroborus for the answer:
e0... is for the .tar.gz (source) package, 9d... is for the .whl package. (See the "view hashes" links on PyPI's gunicorn files page) I'm not sure why your systems are choosing to download the wheel now when they downloaded the source previously. However, those are both valid hashes for that module and version.
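If you want to confirm a downloaded artifact yourself, you can compute its SHA-256 locally and compare it with the hashes listed on PyPI's "view hashes" page or in your lock file. A minimal sketch (the file name in the comment is illustrative):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result with the hash recorded in Pipfile.lock, e.g.:
# sha256_of_file("gunicorn-20.1.0-py3-none-any.whl")
```

If the digest matches the 9d... value, the wheel is the one published on PyPI; the e0... value should instead match the .tar.gz sdist.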
Related
I uploaded a package to PyPI, but ran into some trouble after the upload, so I deleted it completely and tried to re-upload it. The second upload fails with this error:
error: HTTP Error 400: This filename has previously been used, you should use a different version.
It seems PyPI can track upload activity: I deleted the project and my account and uploaded again, but I can still see the previous record. Why?
How can I solve this problem?
In short, you cannot re-upload a distribution with the same filename, for stability reasons. You can read more about this issue at https://github.com/pypa/packaging-problems/issues/74.
You need to change the distribution's file name, which is usually done by increasing the version number, and upload it again.
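Since only the distribution's file name (and therefore its version) has to change, a small hypothetical helper like this can compute the next version number before you rebuild and upload again:

```python
def bump_patch(version: str) -> str:
    """Increment the last numeric component of a dotted version string."""
    parts = version.split(".")
    parts[-1] = str(int(parts[-1]) + 1)
    return ".".join(parts)

# e.g. bump_patch("0.0.2") returns "0.0.3"; put the new value in setup.py,
# rebuild the sdist/wheel, and upload again
```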
Yes, you can re-upload the package with the same name.
I faced a similar issue. What I did was increase the version number in setup.py and delete the folders generated by running python setup.py sdist, i.e. dist and your_package_name.egg-info, then run python setup.py sdist again to make the package ready to upload.
I think PyPI tracks the release from the folders generated by sdist, i.e. dist and your_package_name.egg-info, so you have to delete them.
If you are running your own local PyPI server, you can use the -o, --overwrite option, which allows overwriting existing package files:
pypi-server -p 8080 --overwrite ~/packages &
When installing a package using pip, I get the following message:
Obtaining some-package from git+git://github.com/some-user/some-package.git#commit-hash#egg=some_package-dev (from -r requirements.txt (line 3))
git clone in /Users/me/Development/some-env/src/some-package exists with
URL https://github.com/some-user/some-package.git
The plan is to install the git repository git://github.com/some-user/some-package.git
What to do? (s)witch, (i)gnore, (w)ipe, (b)ackup
I see that this particular case is probably caused by a change of protocol in URL (new requirement uses git://, while the one already installed uses https://).
However, I wonder what exactly happens with each of the choices (switch, ignore, wipe, backup). I'm unable to find an explanation in the pip documentation.
A patch explaining this option was merged into the pip documentation, but it was not released until pip 6.0 (2014-12-22) (https://github.com/pypa/pip/commit/b5e54fc61c06268c131f1fad3bb4471e8c37bb25). Here is what that patch says:
--exists-action option
This option specifies default behavior when path already exists.
Possible cases: downloading files or checking out repositories for installation,
creating archives. If --exists-action is not defined, pip will prompt
when decision is needed.
(s)witch
Only relevant to VCS checkout. Attempt to switch the checkout
to the appropriate url and/or revision.
(i)gnore
Abort current operation (e.g. don't copy file, don't create archive,
don't modify a checkout).
(w)ipe
Delete the file or VCS checkout before trying to create, download, or checkout a new one.
(b)ackup
Rename the file or checkout to {name}{'.bak' * n}, where n is some number
of .bak extensions, such that the file didn't exist at some point.
So the most recent backup will be the one with the largest number after .bak.
And here is a link to description of that option in the now-updated documentation: https://pip.pypa.io/en/stable/cli/pip/#exists-action-option.
I recently uploaded a package to PyPI under a name with mixed-case letters, QualysAPI.
In retrospect I think it'd be better to have the package name be all lowercase per PEP 8. Is there a way I can change it?
Here's what happens when I try to manually edit the package name on PyPI:
Forbidden
Package name conflicts with existing package 'QualysAPI'
Here's what happens when I try to edit the package name via python setup.py sdist upload:
Upload failed (403): You are not allowed to edit 'qualysapi' package information
I deleted the package and re-uploaded it with an all-lowercase name. This lost all the package's history, but that doesn't matter since GitHub has the revisions online.
I would like to use distutils (setup.py) to install a Python package from a local repository which requires another package from a different local repository. Since I am lacking decent documentation of the setup command (I only found some examples
here and here, and was confused by the setup terms extras_require, install_requires and dependency_links found here and here), does anyone have a complete setup.py file that shows how this can be handled, i.e. how distutils can handle the installation of a package found in some SVN repository when the main package I am installing right now requires it?
More detailed explanation: I have two local svn (or git) repositories, basicmodule and extendedmodule. Now I check out extendedmodule and run python setup.py install. This setup.py file knows that extendedmodule requires basicmodule, and automatically downloads it from the repository and installs it (in case it is not installed yet). How can I solve this with setup.py? Or maybe there is another, better way to do this?
EDIT: Followup question
Based on the answer by Tom I have tried to use a setup.py as follows:
from setuptools import setup

setup(
    name="extralibs",
    version="0.0.2",
    description="Some extra libs.",
    packages=["extralib"],
    install_requires=["basiclib==1.9dev-r1234"],
    dependency_links=["https://source.company.xy/svn/MainDir/SVNDir/basiclib/trunk#20479#egg=basiclib-1.9dev-r1234"],
)
When trying to install this as a normal user I get the following error:
error: Can't download https://source.company.xy/svn/MainDir/SVNDir/basiclib/trunk#20479: 401 Authorization Required
But when I do a normal svn checkout with the exact same link it works:
svn co https://source.company.xy/svn/MainDir/SVNDir/basiclib/trunk#20479
Any suggestion how to solve this without changing ANY configuration of the svn repository?
I think the problem is that your svn client is authenticated (it caches the realm somewhere in the ~/.subversion directory), which your distutils HTTP client doesn't know how to do.
Distutils supports svn+http link types in dependency links, so you may try adding svn+ before your dependency link and providing a username and password:
dependency_links =
["svn+https://user:password@source.company.xy/svn/MainDir/SVNDir/basiclib/trunk#20479#egg=basiclib-1.9dev-r1234"]
For security reasons you should not put your username and password in your setup.py file. One way to avoid that is to fetch the authentication information from an environment variable, or even to try to fetch it from your Subversion configuration directory (~/.subversion).
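As a sketch of the environment-variable approach (the variable names SVN_USER and SVN_PASS are just an illustration), the link could be assembled when setup.py runs:

```python
import os

# Hypothetical variable names; export SVN_USER / SVN_PASS before running setup.py
user = os.environ.get("SVN_USER", "")
password = os.environ.get("SVN_PASS", "")

url = ("svn+https://{user}:{password}@source.company.xy/svn/MainDir/"
       "SVNDir/basiclib/trunk#20479#egg=basiclib-1.9dev-r1234"
       ).format(user=user, password=password)
dependency_links = [url]
```

This keeps the credentials out of version control while producing the same svn+https link.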
Hope that helps.
Check out the answers to these two questions. They both give specific examples on how install_requires and dependency_links work together to achieve what you want.
Can Pip install dependencies not specified in setup.py at install time?
Can a Python package depend on a specific version control revision of another Python package?
We're using a requirements.txt file to store all the external modules needed. Every module but one is fetched from the internet; the other one is stored in a folder under the one holding the requirements.txt file.
By the way, this module can be easily installed with pip install.
I've tried using this:
file:folder/module
or this:
file:./folder/module
or even this:
folder/module
but every one of them throws an error.
Does anyone know which is the right way to do this?
Thanks
In the current version of pip (1.2.1) the way relative paths in a requirements file are interpreted is ambiguous and semi-broken. There is an open issue on the pip repository which explains the various problems and ambiguities in greater detail:
https://github.com/pypa/pip/issues/328
Long story short the current implementation does not match the description in the pip documentation, so as of this writing there is no consistent and reliable way to use relative paths in requirements.txt.
THAT SAID, placing the following in my requirements.txt:
./foo/bar/mymodule
works when there is a setup.py at the top level of the mymodule directory. Note the lack of the file: protocol designation and the inclusion of the leading ./. This path is not relative to the requirements.txt file, but rather to the current working directory. Therefore it is necessary to navigate into the same directory as the requirements.txt and then run the command:
pip install -r requirements.txt
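The resolution described above can be sketched in a few lines (an approximation, not pip's actual code):

```python
import os

# A line like "./foo/bar/mymodule" in requirements.txt is resolved against
# the current working directory, not against the requirements.txt location.
req_line = "./foo/bar/mymodule"
resolved = os.path.abspath(req_line)  # depends on os.getcwd()
```

This is why the same requirements file can work from one directory and fail from another.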
It's based on the current working directory (find it with os.getcwd() if needed) plus the relative path you provide in the requirements file.
Your requirements file should look like this:
fabric==1.13.1
./some_fir/some_package.whl
packaging==16.8
Note this will only work for .whl files, not .exe.
Remember to keep an eye on the pip install output for errors.
For me, only the file: directive worked. It even works with AWS SAM, i.e. sam build. Here is my requirements.txt, where englishapps is my own custom Python package that I need in AWS Lambda:
requests
file:englishapps-0.0.1-py3-none-any.whl
As was mentioned before, the files are relative to the current working directory, not to the requirements.txt.
Since pip 10.0, requirements files support environment variables in the format ${VAR_NAME}. This can be used as a mechanism to specify a file location relative to the requirements.txt. For example:
# Set REQUIREMENTS_DIRECTORY outside of pip
${REQUIREMENTS_DIRECTORY}/folder/module
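The expansion pip performs can be approximated like this (a sketch, not pip's actual implementation; pip only recognizes the ${VAR_NAME} form with uppercase letters, digits, and underscores):

```python
import os
import re

def expand_env(line: str) -> str:
    """Replace ${VAR_NAME} tokens with values from the environment,
    leaving unknown variables untouched."""
    return re.sub(
        r"\$\{([A-Z0-9_]+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        line,
    )
```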
Another option is to use the environment manager called Pipenv to manage this use case.
After you run pipenv install for a new project, the steps are:
pipenv install -e app/deps/fastai (-e is editable, and is optional)
then you will see the following line in your Pipfile:
fastai = {editable = true, path = "./app/deps/fastai"}
Here are similar issues:
https://github.com/pypa/pipenv/issues/209#issuecomment-337409290
https://stackoverflow.com/a/53507226/7032846
A solution that worked for me for both local and remote files (via a Windows share). Here is an example requirements.txt:
file:////REMOTE_SERVER/FOLDER/myfile.whl