I am writing a python application and I have the following dependencies:
Requires-Dist: keyring (==23.9.3)
Requires-Dist: keyrings.alt (==4.2.0)
Requires-Dist: pandas (==1.3.5)
Requires-Dist: pyarrow (==10.0.0)
Requires-Dist: requests (==2.28.1)
Requires-Dist: requests-toolbelt (==0.10.1)
Requires-Dist: toml (==0.10.2)
In the above list, each dependency has its own transitive dependencies. For example, "requests" depends on urllib3, and that version must be above 1.21.1 and below 1.27:
- requests [required: ==2.28.1, installed: 2.28.1]
  - certifi [required: >=2017.4.17, installed: 2022.9.14]
  - charset-normalizer [required: >=2,<3, installed: 2.0.4]
  - idna [required: >=2.5,<4, installed: 3.3]
  - urllib3 [required: >=1.21.1,<1.27, installed: 1.25.8]
When my application's wheel file is installed (using pip install), is there any way I can make sure that the highest required/supported version of each transitive dependency is also installed automatically?
Currently my application fails to run because the APIs I am using from "requests" expect urllib3 version 1.26 or above:
ERROR - __init__() got an unexpected keyword argument 'allowed_methods'
If there were a way to automatically install the highest required/supported version of the transitive dependencies, this problem would be solved. The same issue can occur with other transitive dependencies as well, so any help is appreciated. Thanks in advance.
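One workaround I am considering (only a sketch; "myapp" is a placeholder, and the idea is simply to promote the transitive pin to a direct one) is to declare urllib3 directly in my own install_requires so pip has to satisfy it:

from setuptools import setup

setup(
    name="myapp",                      # placeholder application name
    install_requires=[
        "requests==2.28.1",
        "urllib3>=1.26,<1.27",         # transitive dependency pinned explicitly
        # ... remaining pins from the Requires-Dist list above ...
    ],
)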
Related
I am having some trouble with dependencies within my Python conda environment. I need both the Tornado and msgpack-rpc-python libraries, but they do not seem to be compatible.
Here are the errors I am receiving:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
nbclassic 0.4.8 requires tornado>=6.1, but you have tornado 4.5.3 which is incompatible.
jupyter-server 1.21.0 requires tornado>=6.1.0, but you have tornado 4.5.3 which is incompatible.
jupyter-client 7.4.7 requires tornado>=6.2, but you have tornado 4.5.3 which is incompatible.
ipyparallel 8.4.1 requires tornado>=5.1, but you have tornado 4.5.3 which is incompatible.
ipykernel 6.15.2 requires tornado>=6.1, but you have tornado 4.5.3 which is incompatible.
HOWEVER, when I try to upgrade tornado I get this error:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
msgpack-rpc-python 0.4.1 requires tornado<5,>=3, but you have tornado 6.2 which is incompatible.
I also tried to downgrade the packages in the list above (jupyter-client, ...), but the environment turned out to be unstable and non-functional. I tried several times to create a new environment but had no luck; I suppose I am doing something wrong, but I am not sure what.
I have been trying to find a solution to this circular problem, but I have been stuck on it for a long time and am running out of ideas. I downloaded Snyk to help out, but it tells me the same thing, which is still circular: upgrade tornado.
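To make the clash concrete, this is roughly the sequence that ends in it (a sketch; the pins come straight from the errors above):

conda create -n py38 python=3.8
conda activate py38
pip install "msgpack-rpc-python==0.4.1"   # drags in tornado<5,>=3
pip install "jupyter-client==7.4.7"       # wants tornado>=6.2, so pip complains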
Background information:
Linux
Ubuntu 20.04
Coding in Visual Studio
Environment py38 (3.8.15)
Any help is appreciated!!
Thanks so much,
When I install my requirements.txt file I get the following error messages:
ERROR: Cannot install PyJWT==2.0.0 and djangorestframework-jwt==1.11.0 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested PyJWT==2.0.0
djangorestframework-jwt 1.11.0 depends on PyJWT<2.0.0 and >=1.5.2
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
The two dependencies are written like the following:
PyJWT==2.0.0
djangorestframework-jwt==1.11.0
But what I'm most confused about is the part of the error message saying: djangorestframework-jwt 1.11.0 depends on PyJWT<2.0.0 and >=1.5.2
Wouldn't the PyJWT version 2.0.0 be good enough?
This kind of conflict is a pain in the ass.
Pip says that the version must satisfy PyJWT<2.0.0 and >=1.5.2, so you cannot use exactly 2.0.0.
Downgrade it to PyJWT==1.7.1 and it should work!
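In other words, a requirements.txt along these lines should resolve (1.7.1 being, as far as I know, the newest release below 2.0.0):

djangorestframework-jwt==1.11.0
PyJWT==1.7.1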
I am trying to install cuckoo sandbox(malware analysis tool).
I am doing pip install -U cuckoo as stated in the cuckoo documentation, but it gives me the following error:
pandas 0.23.3 has requirement python-dateutil>=2.5.0, but you'll have python-dateutil 2.4.2 which is incompatible
So I thought maybe there is a package named python-dateutil: pandas needs some version >= 2.5.0, but cuckoo needs version 2.4.2, so to avoid instability it is not getting installed.
So I thought of creating a virtualenv venv and installing cuckoo in that. Since pandas is not present in venv/lib/python2.7/site-packages, installing an older version of python-dateutil shouldn't be a problem. But again I am getting the same error, and I can't figure out where the problem is.
I have a similar import error on Spark executors as described here, just with psycopg2: ImportError: No module named numpy on spark workers
Here it says "Although pandas is too complex to distribute as a *.py file, you can create an egg for it and its dependencies and send that to executors".
So the question is "How do I create an egg file from a package and its dependencies?" Or a wheel, in case eggs are legacy. Is there any command for this in pip?
You want to be making a wheel. Wheels are newer and more robust than eggs, and they are supported by both Python 2 and 3.
For something as popular as numpy, you don't need to bother making the wheel yourself. They package wheels in their distribution, so you can just download it. Many python libraries will have a wheel as part of their distribution. See here: https://pypi.python.org/pypi/numpy
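For example, to fetch the prebuilt wheel from PyPI instead of building it yourself, a sketch using pip's download command:

pip download numpy --only-binary :all: --dest wheels/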
If you're curious, see here how to make one in general: https://pip.pypa.io/en/stable/reference/pip_wheel/.
Alternatively, you could just install numpy on your target workers.
EDIT:
After your comments, I think it's pertinent to mention the pipdeptree utility. If you need to see by hand what the pip dependencies are, this utility will list them for you. Here's an example:
$ pipdeptree
3to2==1.1.1
anaconda-navigator==1.2.1
ansible==2.2.1.0
- jinja2 [required: <2.9, installed: 2.8]
- MarkupSafe [required: Any, installed: 0.23]
- paramiko [required: Any, installed: 2.1.1]
- cryptography [required: >=1.1, installed: 1.4]
- cffi [required: >=1.4.1, installed: 1.6.0]
- pycparser [required: Any, installed: 2.14]
- enum34 [required: Any, installed: 1.1.6]
- idna [required: >=2.0, installed: 2.1]
- ipaddress [required: Any, installed: 1.0.16]
- pyasn1 [required: >=0.1.8, installed: 0.1.9]
- setuptools [required: >=11.3, installed: 23.0.0]
- six [required: >=1.4.1, installed: 1.10.0]
- pyasn1 [required: >=0.1.7, installed: 0.1.9]
- pycrypto [required: >=2.6, installed: 2.6.1]
- PyYAML [required: Any, installed: 3.11]
- setuptools [required: Any, installed: 23.0.0]
If you're using Pyspark and need to package your dependencies, pip can't do this for you automatically. Pyspark has its own dependency management that pip knows nothing about. The best you can do is list the dependencies and shove them over by hand, as far as I know.
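If it helps, PySpark does expose a hook for shoving them over yourself from the driver; a minimal sketch, where the archive path is a placeholder for whatever egg/zip you built:

from pyspark import SparkContext

sc = SparkContext.getOrCreate()
# ship a pre-built dependency archive to every executor by hand
sc.addPyFile("deps/psycopg2_deps.zip")   # placeholder path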
Additionally, Pyspark isn't dependent on numpy or psycopg2, so pip can't possibly tell you that you'd need them if all you're telling pip is your version of Pyspark. That dependency has been introduced by you, so you're responsible for giving it to Pyspark.
As a side note, we use bootstrap scripts that install our dependencies (like numpy) before we boot our clusters. It seems to work well. That way you list the libs you need once in a script, and then you can forget about it.
HTH.
You can install wheel using pip install wheel.
Then create a .whl using python setup.py bdist_wheel. You'll find it in the dist directory in root directory of the python package. You might also want to pass --universal if you want a single .whl file for both python 2 and python 3.
More info on wheel.
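For reference, the smallest setup.py that gives bdist_wheel something to build looks roughly like this (the name, version, and dependency are placeholders):

from setuptools import setup, find_packages

setup(
    name="mypackage",                 # placeholder package name
    version="0.1.0",
    packages=find_packages(),
    install_requires=["numpy"],       # recorded in the wheel's metadata
)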
My goal is simple: I want to get the dependencies of a PyPI package remotely without needing to download it completely.
Reading the pip code, I seem to understand that when resolving dependencies pip reads the egg metadata only once the package has been downloaded...
Is there any other way?
Use pipdeptree to view dependencies of installed PyPI packages.
Install:
pip install pipdeptree
Then run:
pipdeptree
You'll see something like this:
Warning!!! Possible conflicting dependencies found:
* Mako==0.9.1 -> MarkupSafe [required: >=0.9.2, installed: 0.18]
Jinja2==2.7.2 -> MarkupSafe [installed: 0.18]
------------------------------------------------------------------------
Lookupy==0.1
wsgiref==0.1.2
argparse==1.2.1
psycopg2==2.5.2
Flask-Script==0.6.6
- Flask [installed: 0.10.1]
- Werkzeug [required: >=0.7, installed: 0.9.4]
- Jinja2 [required: >=2.4, installed: 2.7.2]
- MarkupSafe [installed: 0.18]
- itsdangerous [required: >=0.21, installed: 0.23]
alembic==0.6.2
- SQLAlchemy [required: >=0.7.3, installed: 0.9.1]
- Mako [installed: 0.9.1]
- MarkupSafe [required: >=0.9.2, installed: 0.18]
ipython==2.0.0
slugify==0.0.1
redis==2.9.1
As jinghli notes, there isn't currently a reliable way to get the dependencies of an arbitrary PyPI package remotely without downloading it completely. And in fact the dependencies sometimes depend on your environment, so an approach like Brian's of executing the setup.py code is needed in the general case.
The way the Python ecosystem handles dependencies started evolving in the 1990s before the problem was well understood. PEP 508 -- Dependency specification for Python Software Packages sets us on course to improve the situation, and an "aspirational" draft approach in PEP 426 -- Metadata for Python Software Packages 2.0 may improve it more in the future, in conjunction with the reimplementation of PyPI as Warehouse.
The current situation is described well in the document Python Dependency Resolution.
PyPI does provide a json interface to download metadata for each package. The info.requires_dist object contains a list of names of required packages with optional version restrictions etc. It is often missing, but it is one place to start.
E.g. Django (json) indicates:
{
  "info": {
    ...
    "requires_dist": [
      "bcrypt; extra == 'bcrypt'",
      "argon2-cffi (>=16.1.0); extra == 'argon2'",
      "pytz"
    ],
    ...
  }
}
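A small sketch of pulling that field programmatically (the URL is PyPI's JSON endpoint; Django is just the example package):

import requests

resp = requests.get("https://pypi.org/pypi/Django/json")
resp.raise_for_status()
requires_dist = resp.json()["info"].get("requires_dist") or []
for requirement in requires_dist:
    print(requirement)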
I've just needed to find a way to do this and this is what I came up with (stolen from pip).
import distutils.core

def dist_metadata(setup_py):
    '''Get the dist object for a setup.py file'''
    with open(setup_py) as f:
        d = f.read()
    try:
        # we have to do this with current globals else
        # imports will fail. secure? not really. A
        # problem? not really if your setup.py sources are
        # trusted
        exec(d, globals(), globals())
    except SystemExit:
        pass
    # distutils stashes the Distribution it just built in this private attribute
    return distutils.core._setup_distribution
https://stackoverflow.com/a/12505166/3332282 answers why the exec incantation is subtle and hard to get right.
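Hypothetical usage, assuming a setuptools-based setup.py (the path is a placeholder, and install_requires is only present when setuptools was actually used):

dist = dist_metadata("path/to/setup.py")
print(getattr(dist, "install_requires", None))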
Sadly, pip doesn't have this function. The metadata available for packages on PyPI does not include information about the dependencies.
Normally, you can find the detailed dependencies in the README file on the project website.
pip search can give some information about the package. It can tell you what it is based on.
$ pip search flask
Flask - A microframework based on Werkzeug, Jinja2 and good intentions