If I've installed a python package using both conda and pip then when I call a function from that package, will I be using the version from conda or from pip?
My situation is as follows: I'm trying to use the from_estimator method released in scikit-learn version 1.0 this past September. However, my current version of scikit-learn was 0.24.2, so I decided to update the package. I had previously installed conda but never actually used it to update or install a package; I always used pip instead. This time I decided to try installing scikit-learn 1.0 with conda. However, when I ran conda install scikit-learn=1.0 in the terminal, I got the error
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- scikit-learn=1.0
Frustrated, I decided to simply install scikit-learn version 1.0 with pip by typing pip install -U scikit-learn in the terminal. This gave me the message
Requirement already satisfied: scikit-learn in /home/eturok/anaconda3/lib/python3.7/site-packages (0.24.2)
Collecting scikit-learn
Downloading scikit_learn-1.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (23.1 MB)
|████████████████████████████████| 23.1 MB 17.2 MB/s
Requirement already satisfied: joblib>=0.11 in /home/eturok/anaconda3/lib/python3.7/site-packages (from scikit-learn) (1.0.1)
Requirement already satisfied: scipy>=1.1.0 in /home/eturok/anaconda3/lib/python3.7/site-packages (from scikit-learn) (1.7.1)
Requirement already satisfied: numpy>=1.14.6 in /home/eturok/anaconda3/lib/python3.7/site-packages (from scikit-learn) (1.20.3)
Requirement already satisfied: threadpoolctl>=2.0.0 in /home/eturok/anaconda3/lib/python3.7/site-packages (from scikit-learn) (2.2.0)
Installing collected packages: scikit-learn
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 0.24.2
Uninstalling scikit-learn-0.24.2:
Successfully uninstalled scikit-learn-0.24.2
Successfully installed scikit-learn-1.0
Now, typing pip list and conda list in the terminal returns scikit-learn 1.0 and scikit-learn 1.0 pypi_0 pypi, respectively.
My questions are:
If I only updated scikit-learn with pip, why is scikit-learn also updated with conda?
Why was I initially unable to update scikit-learn with conda? This version of scikit-learn was released pretty recently; is it possible that conda simply hasn't gotten around to supporting the new release yet?
In general, how does conda decide which packages to support?
If I've installed a python package using both conda and pip then when I call a function from that package, will I be using the version from conda or from pip?
Not a conda expert, but since no one is bothering to answer:
I don't think it was. You got an error and the installation was aborted. Might it be that conda list simply sees the pip-installed package and reports it as coming from PyPI?
Yes, the default conda repo for scikit-learn does not have 1.0 yet; 0.24 is the latest (see here: https://anaconda.org/anaconda/scikit-learn). The conda-forge repo for scikit-learn does have 1.0 (see here: https://anaconda.org/conda-forge/scikit-learn).
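If you want to check availability yourself, a quick sketch is to search both channels with conda:
conda search scikit-learn                  # versions available on the default channels
conda search -c conda-forge scikit-learn   # also check the conda-forge channel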
I don't know how packages move from conda-forge to the default channel. You should probably just use conda-forge, as in conda install -c conda-forge scikit-learn. Or, even better, always use pip.
No idea; don't do it :) Always use either conda or venv virtual environments, and in cases like this just burn the environment with fire and start anew.
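As a quick check (a sketch, not part of the answers above), you can ask Python directly which copy it will actually import:
python -c "import sklearn; print(sklearn.__version__, sklearn.__file__)"
Whatever path and version this prints is the copy your code uses, regardless of what pip list or conda list report.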
Related
I'm experimenting with my Python package. I use Anaconda and, inside my environment, the packages I need are already installed:
conda list | egrep "rdkit|numpy"
returns
numpy 1.19.2 py39h89c1606_0
numpy-base 1.19.2 py39h2ae0177_0
rdkit 2022.03.1b1.0 py39he30056e_1 rdkit/label/beta
My setup.py has two dependencies without specifying versions:
install_requires=[
    'rdkit',
    'numpy',
],
But when I run pip install ., it installs another version of rdkit:
Collecting rdkit
Using cached rdkit-2022.3.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (22.9 MB)
The question is: how do I prevent pip from doing this? I want something like: if a package is already installed, whatever the version, just keep it and do nothing.
You could try installing with the --no-cache-dir flag to avoid cached versions:
pip install --no-cache-dir .
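With newer pip versions the wheel cache can also be cleared entirely (a sketch; the pip cache subcommand was added in pip 20.1):
pip cache purge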
Check this link; it might have what you need:
pip uses incorrect cached package version, instead of the user-specified version
This question already has answers here:
Pip install from pypi works, but from testpypi fails (cannot find requirements)
I have put a package on test.pypi that requires tensorflow>=1.15.0. However, when I install it using
pip install -i https://test.pypi.org/simple/ kmeanstf==0.7.0a4
I get the message
Looking in indexes: https://test.pypi.org/simple/
Collecting kmeanstf==0.7.0a4
Downloading https://testfiles.pythonhosted.org/packages/75/80/faf86ac10310e12015709d9763de9c0ebcf33df1f0bc884448993001ae8e/kmeanstf-0.7.0a4-py3-none-any.whl
ERROR: Could not find a version that satisfies the requirement tensorflow>=1.15.0 (from
kmeanstf==0.7.0a4) (from versions: 0.12.1, 2.0.0a0)
ERROR: No matching distribution found for tensorflow>=1.15.0 (from kmeanstf==0.7.0a4)
However, on PyPI all versions of tensorflow are present (including 1.15.0 and 2.0.0): https://pypi.org/project/tensorflow/#history
When I lower the requirement to just 'tensorflow' (as is done in kmeanstf==0.7.0a1), tensorflow 0.12.1 is installed, which is far too old for my package. It is, however, one of the two versions mentioned in the error message above. Is 0.12.1 really the default on PyPI?
What can I do here (apart from asking the users of my package to install tensorflow themselves)?
pip --version
pip 19.3.1 from
/home/.../miniconda2/envs/empty/lib/python3.6/site-packages/pip (python 3.6)
You forced the index URL to be https://test.pypi.org/simple/, so pip looks for tensorflow at https://test.pypi.org/project/tensorflow/, and there are only two versions there with downloadable wheels suitable for your platform.
If you want to install kmeanstf from test.pypi.org and tensorflow from pypi.org, you need to provide an extra index URL:
pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ kmeanstf==0.7.0a4
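If this should apply to every install, the same setup can be written into a requirements file (a sketch of one possible layout, not from the original answer):
# requirements.txt
-i https://test.pypi.org/simple/
--extra-index-url https://pypi.org/simple/
kmeanstf==0.7.0a4
and then installed with pip install -r requirements.txt.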
I have been trying to install pandas and numpy in my virtualenv. The box is an Amazon Linux AMI instance. This is my command log:
Activate venv and check packages
[ec2-user@ip-0-0-0-0 www]$ source datasci_venv/bin/activate
(datasci_venv) [ec2-user@ip-0-0-0-0 www]$ pip freeze
Django==1.11
requests==2.20.1
Then I used pip install to install pandas and numpy. Note that Django and requests had installed successfully earlier:
(datasci_venv) [ec2-user@ip-0-0-0-0 www]$ pip install pandas && pip install numpy
Collecting pandas
Using cached https://files.pythonhosted.org/packages/e1/d8/feeb346d41f181e83fba45224ab14a8d8af019b48af742e047f3845d8cff/pandas-0.23.4-cp36-cp36m-manylinux1_x86_64.whl
Requirement already satisfied: pytz>=2011k in ./datasci_venv/lib/python3.6/dist-packages (from pandas) (2018.7)
Collecting numpy>=1.9.0 (from pandas)
Using cached https://files.pythonhosted.org/packages/ff/7f/9d804d2348471c67a7d8b5f84f9bc59fd1cefa148986f2b74552f8573555/numpy-1.15.4-cp36-cp36m-manylinux1_x86_64.whl
Requirement already satisfied: python-dateutil>=2.5.0 in ./datasci_venv/lib/python3.6/dist-packages (from pandas) (2.7.5)
Requirement already satisfied: six>=1.5 in ./datasci_venv/lib/python3.6/dist-packages (from python-dateutil>=2.5.0->pandas) (1.11.0)
Installing collected packages: numpy, pandas
Successfully installed numpy-1.15.4 pandas-0.23.4
Collecting numpy
Using cached https://files.pythonhosted.org/packages/ff/7f/9d804d2348471c67a7d8b5f84f9bc59fd1cefa148986f2b74552f8573555/numpy-1.15.4-cp36-cp36m-manylinux1_x86_64.whl
tabula-py 1.3.0 requires pandas, which is not installed.
Installing collected packages: numpy
Successfully installed numpy-1.15.4
So it seems that they are correctly installed, since there are no error messages. However, when I check my packages again, they are not there:
(datasci_venv) [ec2-user@ip-0-0-0-0 www]$ pip freeze
Django==1.11
requests==2.20.1
So I checked which commands I am using, and they point to the programs in the venv:
(datasci_venv) [ec2-user@ip-0-0-0-0 www]$ which pip
/var/www/datasci_venv/bin/pip
(datasci_venv) [ec2-user@ip-0-0-0-0 www]$ which python
/var/www/datasci_venv/bin/python
(datasci_venv) [ec2-user@ip-0-0-0-0 www]$
So I'm rather lost as to what to do or check next. Any help or solution is appreciated.
I had the same issue; it appears that pip is installing the package into the lib64 folder of your virtualenv rather than into the lib folder. You need to force the target folder like this:
pip install --target datasci_venv/lib/python3.6/dist-packages/ numpy
Hope this helps!
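To confirm where pip actually put the package (a quick diagnostic sketch, assuming the venv layout shown above):
pip show numpy | grep Location                                    # where pip says numpy lives
ls datasci_venv/lib64/python3.6/dist-packages/ | grep -i numpy    # check the lib64 side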
I tried to install pandas from cmd and this is the output:
Requirement already satisfied: pandas in c:\users\name\anaconda3\lib\site-packages (0.23.0)
Requirement already satisfied: python-dateutil>=2.5.0 in c:\users\name\anaconda3\lib\site-packages (from pandas) (2.7.3)
Requirement already satisfied: pytz>=2011k in c:\users\name\anaconda3\lib\site-packages (from pandas) (2018.4)
Requirement already satisfied: numpy>=1.9.0 in c:\users\name\anaconda3\lib\site-packages (from pandas) (1.14.3)
Requirement already satisfied: six>=1.5 in c:\users\name\anaconda3\lib\site-packages (from python-dateutil>=2.5.0->pandas) (1.11.0)
**distributed 1.21.8 requires msgpack, which is not installed.**
This last line is in red.
I'm on Windows 10 and I installed Anaconda.
This seems to work for me.
First I tried
pip install msgpack
And if you need this too,
pip install msgpack-python
Then install whatever you need. In your case,
conda install pandas
You should install msgpack and then install pandas again.
How are you installing pandas? If you're using Anaconda, then
conda install pandas
is typically enough to make everything work. This is because Anaconda uses binary installs - it ships prebuilt code and has already done the combinatorics to make everything work together - and it pulls in everything a package needs.
Sometimes, of course, you run into a tough dependency combination, or you are pulling from non-core Anaconda repos, etc. In that case, you can try
conda install msgpack
# or
pip install msgpack
# or
conda install -c conda-forge msgpack
The right choice depends somewhat on what you're doing. Using the -c flag with conda gives you access to non-core repositories - these carry fewer guarantees about working together, but usually give you access to many more versions of a package.
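Whichever route you take, a quick sanity check (a sketch) is to confirm that msgpack is importable afterwards:
python -c "import msgpack; print(msgpack.version)"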
I was getting a similar error when trying to install pymc3. I solved it by using conda rather than pip.
The first time I used pip install pymc3 and I got the same error as you:
distributed 1.21.8 requires msgpack, which is not installed
Then I installed using conda instead: conda install pymc3, and it installed fine.
My understanding is that conda handles all the dependent packages for you, which pip does not.
I found this on the Anaconda site:
Use anaconda to install msgpack for python with this command:
conda install -c conda-forge msgpack-python
It seems to have worked for me.
Run these commands:
conda install pip
pip uninstall -y msgpack-python
pip install msgpack
I am a package maintainer and have a library on pip - Django-spaghetti-and-meatballs.
It's at version 0.2.0.
It is listed on pip as version 0.2.0
setup.py imports some code that clearly states it is version 0.2.0
But when users try to run pip install django-spaghetti-and-meatballs==0.2.0 they (and I) get:
legostormtroopr:~/workspace $ pip install django-spaghetti-and-meatballs==0.2.0
Downloading/unpacking django-spaghetti-and-meatballs==0.2.0
Could not find a version that satisfies the requirement django-spaghetti-and-meatballs==0.2.0 (from versions: 0.1.0, 0.1.0rc5, 0.1.1)
Cleaning up...
No distributions matching the version for django-spaghetti-and-meatballs==0.2.0
Storing debug log for failure in /home/ubuntu/.pip/pip.log
What have I done wrong?
PS: It feels a little spammy to link to the library and such, but it's a real problem and I felt it better to point at it directly.
By looking at the list at https://pypi.python.org/simple/django-spaghetti-and-meatballs/ you can see that the 0.2.0 wheel had not been uploaded.
Running:
python setup.py bdist_wheel upload
will rebuild the wheel and upload it.
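On newer toolchains setup.py upload is deprecated; a roughly equivalent modern flow (a sketch, assuming the build and twine packages are installed) is:
python -m build        # builds the sdist and wheel into dist/
twine upload dist/*    # uploads them to PyPI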