Version mismatch between scipy and poetry - python

I'm using the poetry dependency manager for some of my development (an RTL-SDR application). However, when I try to add scipy to the environment (by calling poetry add scipy inside Windows 11 PowerShell), I get the following output:
Using version ^1.10.0 for scipy
Updating dependencies
Resolving dependencies...
The current project's Python requirement (>=3.11,<4.0) is not compatible with some of the required packages Python requirement:
- scipy requires Python <3.12,>=3.8, so it will not be satisfied for Python >=3.12,<4.0
Because no versions of scipy match >1.10.0,<2.0.0
and scipy (1.10.0) requires Python <3.12,>=3.8, scipy is forbidden.
So, because sdr1 depends on scipy (^1.10.0), version solving failed.
• Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties
For scipy, a possible solution would be to set the `python` property to ">=3.11,<3.12"
https://python-poetry.org/docs/dependency-specification/#python-restricted-dependencies,
https://python-poetry.org/docs/dependency-specification/#using-environment-markers
However, using py -V, I verify that my Python version is 3.11.0. So, everything should work, right?
Suggestions on resolving this would be most appreciated.

The solution I came up with isn't pretty, and it doesn't answer my questions, but it does work. As Imre_G suggested, I looked at [tool.poetry.dependencies] in pyproject.toml. There, I changed the Python requirement from
python="^3.11.0"
to
python="==3.11.0"
Apparently, Poetry accepted this as effectively excluding version 3.12 where "^3.11.0" did not.
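For reference, the relevant part of pyproject.toml ends up looking roughly like this (other entries omitted; the scipy constraint is the ^1.10.0 that poetry add picked):
[tool.poetry.dependencies]
python = "==3.11.0"
scipy = "^1.10.0"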
Thanks for the help.

As for where the "Python >=3.12,<4.0" comes from: specifying python="^3.11.0" is equivalent to >=3.11,<4.0.
This declares that the project is expected to work with every version in that range, not just the one you happen to have installed.
Since scipy 1.10.0 does not support 3.12, the project cannot support 3.12 either, and therefore the requirement cannot be met.
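If you would rather keep the wider project range, the hint Poetry prints points at the other option: restrict the dependency itself. A minimal, untested sketch of what that would look like in pyproject.toml, based on the suggestion in the error output:
[tool.poetry.dependencies]
python = "^3.11"
scipy = { version = "^1.10.0", python = ">=3.11,<3.12" }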

Related

Can I force pip to install a package even with a version conflict?

During an installation, pip is throwing an error due to version conflicts
ERROR: Could not find a version that satisfies the requirement XXX==1.2.1
This is due to a package made by the company I work for (not open source work).
The reason it is blocking is that the package is locked to Python 3.6, and I am attempting to use Python 3.9.
Is it possible to ask pip to force-install a package even though it was not built/tested for this specific version of Python?
To be clear, I am fully aware that this is normally not a good idea. However, I have few alternatives, as the team that managed that specific package no longer exists. It is a dependency we are attempting to remove, but until we can, we need to use it.
Can I ask pip to use the latest version of the package even though it may break? I'll add that there are no other package conflicts, just this one with the Python version, so it should - in theory - be the only issue.
I haven't tried it myself yet, but according to the documentation, pip install has the following argument that may help to bypass it:
--ignore-requires-python
Ignore the Requires-Python information.
https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-ignore-requires-python
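So, untested on my side, something along these lines might get past the check (using the requirement from the question as a placeholder):
pip install --ignore-requires-python XXX==1.2.1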

SolverProblemError on installing Tensorflow with poetry

When I add tensorflow with poetry (poetry add tensorflow) I get this error:
Using version ^2.7.0 for tensorflow
Updating dependencies
Resolving dependencies... (0.8s)
SolverProblemError
The current project's Python requirement (>=3.6,<4.0) is not compatible with some of the required packages Python requirement:
- tensorflow-io-gcs-filesystem requires Python >=3.6, <3.10, so it will not be satisfied for Python >=3.10,<4.0
- tensorflow-io-gcs-filesystem requires Python >=3.6, <3.10, so it will not be satisfied for Python >=3.10,<4.0
- tensorflow-io-gcs-filesystem requires Python >=3.6, <3.10, so it will not be satisfied for Python >=3.10,<4.0
....
For tensorflow-io-gcs-filesystem, a possible solution would be to set the `python` property to ">=3.6,<3.10"
For tensorflow-io-gcs-filesystem, a possible solution would be to set the `python` property to ">=3.6,<3.10"
For tensorflow-io-gcs-filesystem, a possible solution would be to set the `python` property to ">=3.6,<3.10"
The error message tells you that your project aims to be compatible with Python >=3.6,<4.0 (you probably have ^3.6 in your pyproject.toml), but tensorflow-io-gcs-filesystem says it is only compatible with >=3.6,<3.10. That is only a subset of the range in your project definition.
Poetry doesn't care about your current environment; it cares that the project definition as a whole is valid.
The solution is already suggested within the error message: set the version range for python in your pyproject.toml to >=3.6,<3.10.
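As a rough sketch (the ranges and versions are taken from the error output above, nothing beyond that is verified here), the relevant part of pyproject.toml would then look like:
[tool.poetry.dependencies]
python = ">=3.6,<3.10"
tensorflow = "^2.7.0"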
There is a conflict between the Python and the Tensorflow version in your project. As the message suggests, you can set the Python version between 3.6 and 3.10. I can confirm that python=3.8 works well with tensorflow=2.7.0 today on Ubuntu 20.04. This Tensorflow version was released last month, and it fixes the recent AlreadyExists error that happens with Tensorflow and Keras in other versions, so I can recommend using this combination.

Why isn't pip installing the latest version of a package, even when a newer version is on PyPI?

I was trying to upgrade to the latest version of a package I had installed with pip, but for some reason it won't get the latest version. I've tried uninstalling the package in question, or even reinstalling pip entirely, but it still refuses to get the latest version from PyPI. When I try to pin the package version (e.g. pip install package==0.10.0) it says that it "Could not find a version that satisfies the requirement package==0.10.0 (from versions: ...)"
pip search package even acknowledges that the installed version isn't the latest, labeling the two versions for me.
I've seen other questions with external files or local versions, but I've tried the respective solutions (--allow-external doesn't exist anymore, and --no-cache-dir doesn't help) and I'm still stuck on the older version.
I was trying to upgrade Quart. Maybe other packages have something else going on.
In this particular case, Quart had dropped support for Python 3.6 (the version I had installed) and only supported 3.7 or later. (This was a fairly recent change to the project, so I just didn't see the news.)
However, when attempting to install a package only supported by a later Python, pip doesn't really explain why it couldn't find a version to satisfy the requirement - instead, it just lists all the versions that should work with the current Python, without indicating that more exist and just can't be installed.
The only real options to fix this are:
Update your Python to meet the package's requirements.
Ask/help the maintainer to backport the package to the Python version you have.
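If you are unsure which interpreter your pip is tied to, a quick way to check, and to run pip explicitly under a newer Python if one is installed (the 3.7 interpreter here is just an example, not something specific to Quart):
pip --version
python3.7 -m pip install --upgrade quart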

Anaconda installing tensorflow and fancyimpute

As a premise, I should mention that I am new to Python, so please forgive any inaccuracies.
So, I have recently installed Anaconda, and updated the Python version to 3.7.1.
In order to impute some missing values in my dataset using KNN, I've found a useful function in a package called fancyimpute.
However, this package is not among those already available (that is, from Spyder, the IDE I'm using, I cannot simply import it), so I need to install it.
So, opening the Anaconda prompt and typing "conda install fancyimpute" doesn't work; it returns the following:
"PackagesNotFoundError: The following packages are not available from current channels:
fancyimpute
Current channels:
(here a list of some channels)
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page".
Trying, therefore, "pip install fancyimpute" (although the difference between conda install and pip install is still not clear to me), after a while I obtain the following:
"Collecting tensorflow (from fancyimpute)
Could not find a version that satisfies the requirement tensorflow (from fancyimpute) (from versions: )
No matching distribution found for tensorflow (from fancyimpute)"
I have therefore now run "conda install tensorflow", and it has been stuck on "Solving environment" for about 30 minutes already.
What can I do? How can I obtain the desired package and avoid similar problems in the future? Many thanks, and I hope I was clear in explaining the problem.
UPDATE: https://anaconda.org/search?q=fancyimpute From here, it seems that fancyimpute isn't available for my platform, win-64. How can I overcome this problem?
SOLVED: Apparently, I have solved the problem.
I have first created an ad hoc environment and installed tensorflow using conda.
Then, I pip installed fancyimpute: at this point, I got a couple of new errors ("Failed building wheel for fastcache", and the same for cvxpy), both solved by installing Microsoft Visual C++ Build Tools. So, finally, I was able to install fancyimpute as well.
Nonetheless, at this point, I couldn't import it (ImportError: DLL load failed: The specified module could not be found. Failed to load the native TensorFlow runtime). After uninstalling and reinstalling tensorflow using conda-forge as the channel, it now works.
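For anyone hitting the same wall, the steps above roughly amount to the following sequence (the environment name is arbitrary, the Python version is the 3.7 mentioned earlier, and I have not re-run it end to end):
conda create -n imputeenv python=3.7
conda activate imputeenv
conda install tensorflow
pip install fancyimpute
conda remove tensorflow
conda install -c conda-forge tensorflow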
User brittainhard on anaconda.org had the same idea. To use his/her version of the library (hosted on anaconda.org):
conda install -c brittainhard fancyimpute

How to verify a locally installed version of a module is the one used by pip?

I have manually installed datatable (from h2o.ai) https://github.com/h2oai/datatable from the HEAD of master:
make build
make install
They were successful. However, when running pip3 freeze I see the (very old) default version (0.6.0) that had been installed via
pip3 install datatable
some months back:
$pip3 freeze | grep datatable
datatable==0.6.0
I am uncertain whether:
the locally built version of datatable is not being used
the locally built version of datatable is being used but not reported by pip3
if that is the case: how can I verify whether the locally built/installed version is being used (or not)?
Tips appreciated.
Updates
Based on (great) comments below:
import datatable then print(datatable.__version__)
0.6.0
But the datatable.__file__ shows the local version:
In [3]: print(datatable.__file__)
/git/datatable/datatable/__init__.py
Does this possibly mean that the local installation is being used, but that the version reported by that locally built one is still the same (very old) one that was published to the pip repositories months earlier?
To look precisely at the module being used, the best way, as mentioned by @duhaime, is to use import datatable; print(datatable.__file__).
If your local installation was done correctly, then you should also make sure that 1) the location where you installed it is on your PYTHONPATH, and 2) if it is, that it is placed before the standard paths (lookup is sequential).
An easy way to check whether it is on the path, if you don't know where to look, is simply to uninstall the version installed through pip.
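Putting this together, a minimal check (module name as in the question):
import sys
import datatable

print(datatable.__file__)     # which copy of the module is actually imported
print(datatable.__version__)  # the version string reported by that copy
print(sys.path)               # lookup is sequential, so the order here matters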
EDIT
Based on the edit to the question, yes, the version is still the same (see here)
