I installed Ansible 2.8.2 using dnf on Fedora 30. I have an Ansible plug-in that requires a library, which I installed with pip3 install.
When I run ansible-playbook directly, I see a ModuleNotFoundError for that module.
But if I run python3 /usr/bin/ansible-playbook, the module is found.
How can I get Ansible as installed by dnf to see this library?
Edit: further info: as installed from dnf, the main Ansible script has a shebang line of /usr/bin/python3 -s. If I remove the -s, the problem is solved.
What's the benefit that the repo maintainers were seeking in adding this -s flag?
Is there a case to be made for asking the repo maintainers to omit the flag?
How can I get pip3 to install the library I need into a directory that will be seen when the -s flag is in effect?
Edit: Here's the output of ansible --version, and thanks for asking.
ansible 2.8.2
config file = /home/jdashton/proj/ansible-ccharacter/ansible.cfg
configured module search path = ['/home/jdashton/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 16:32:37) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
This is different from the suggested duplicate question because that question describes a "remote" task being run on localhost, in which case Ansible uses the default "remote" python interpreter. In this question, the library is being called from a local plugin, under Ansible's own python process. The -s flag in the shebang line at the top of the /usr/bin/ansible script is preventing Ansible from seeing some local libraries.
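A quick way to see exactly what the flag changes (a hedged sketch, not Ansible-specific) is to compare sys.path with and without -s:

python3 -c 'import sys; print("\n".join(sys.path))'
python3 -s -c 'import sys; print("\n".join(sys.path))'
# the first listing includes the user site-packages directory (e.g. ~/.local/lib/python3.7/site-packages);
# the second omits it, which is why libraries installed with "pip3 install --user" become invisible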
Edit: Following the suggestion from @zigarn I tried removing the library and reinstalling it as root. This resulted in the library being reinstalled into the same directory /usr/local/lib/... as before. Is there a way to get pip3 to install into the system library?
Here are the commands I attempted:
# pip3 uninstall tenacity
Uninstalling tenacity-5.0.4:
Would remove:
/usr/local/lib/python3.7/site-packages/tenacity-5.0.4.dist-info/*
/usr/local/lib/python3.7/site-packages/tenacity/*
Proceed (y/n)? y
Successfully uninstalled tenacity-5.0.4
# pip3 install tenacity
Collecting tenacity
Using cached https://files.pythonhosted.org/packages/6a/93/dfcf5b1b46ab29196274b78dcba69fab5e54b6dc303a7eed90a79194d277/tenacity-5.0.4-py2.py3-none-any.whl
Requirement already satisfied: six>=1.9.0 in /usr/lib/python3.7/site-packages (from tenacity) (1.12.0)
Installing collected packages: tenacity
Successfully installed tenacity-5.0.4
# pip show tenacity
Name: tenacity
Version: 5.0.4
Summary: Retry code until it succeeds
Home-page: https://github.com/jd/tenacity
Author: Julien Danjou
Author-email: julien@danjou.info
License: Apache 2.0
Location: /usr/local/lib/python3.7/site-packages
Requires: six
Required-by:
Also, for clarity, here is the source for the Ansible installation:
# dnf list ansible
Last metadata expiration check: 0:09:48 ago on Mon 12 Aug 2019 11:12:58 AM EDT.
Installed Packages
ansible.noarch 2.8.2-1.fc30 @updates
According to the documentation, the -s option of python means "Don't add the user site-packages directory to sys.path."
So I would guess that you ran pip3 install as your user instead of as root, so the library was installed into your user site-packages instead of system-wide.
Try reinstalling the library with pip3 as root and it should be OK.
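For example (a hedged sketch using tenacity, the library from the question), installing as root and then checking the import under the same interpreter and flags Ansible uses would look like:

sudo pip3 install tenacity
/usr/bin/python3 -s -c "import tenacity; print(tenacity.__file__)"   # should print a system path, not one under ~/.local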
Update 2:
the main problem turned out to be different from what I had thought it was when I asked for help here. I moved the new question to a new post:
Install custom python package in virtualenv
Update:
ok, so I screwed up my non-virtualenv by accident.
I could easily fix the non-virtualenv (normal bash) environment by removing the manually installed (via pip) lxml and running
conda install lxml --force
But for some reason, that doesn't work in the virtualenv.
There, running
conda install lxml --force
works without an error message, but when I run python and simply type
>>> import lxml
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named lxml
Any suggestions??
old message:
I'm trying to use virtualenv for my python flask application.
The python code runs perfectly fine without the virtualenv.
I've installed the packages I need in the virtualenv, but after installing lxml via
pip install lxml
Installing collected packages: lxml
Successfully installed lxml-3.6.0
I get the following error message when running my code:
File "/Users/XXX/xxx/flask-aws/lib/python2.7/site-packages/docx-0.2.4-py2.7.egg/docx.py", line 17, in <module>
from lxml import etree
ImportError: dlopen(/Users/XXX/xxx/flask-aws/lib/python2.7/site-packages/lxml/etree.so, 2): Library not loaded: libxml2.2.dylib
Referenced from: /Users/XXX/xxx/flask-aws/lib/python2.7/site-packages/lxml/etree.so
Reason: Incompatible library version: etree.so requires version 12.0.0 or later, but libxml2.2.dylib provides version 10.0.0
I have seen other people report similar problems on Stack Overflow, and one guy remarked that the problem might be related to the virtualenv, but there was no solution.
Once again: The python code runs perfectly fine without virtualenv! But inside virtualenv, I can't get it to work.
I'm using Anaconda Python 2.7 on a Mac.
I'd appreciate any help guys!
I had the same error and stumbled upon this link after searching for the incompatible-library error "libxml2.2.dylib provides version 10.0.0".
Installing libxml2 is what worked for me:
brew install libxml2
brew link --force libxml2
The solution that works for me in a virtual environment is to force pip to recompile lxml:
pip install lxml --force-reinstall --ignore-installed --no-binary :all:
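After either approach, a quick sanity check (a sketch; run it inside the virtualenv) is to confirm which libxml2 the freshly built lxml is using:

python -c "from lxml import etree; print(etree.LXML_VERSION, etree.LIBXML_VERSION, etree.LIBXML_COMPILED_VERSION)"   # lxml version, plus the libxml2 it runs against and was compiled against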
Trying to install xgboost is failing. Why?
The version is Anaconda 2.1.0 (64-bit) on Windows Enterprise.
How do I proceed? I have been using R, and it seems quite easy to install a new package in R from RStudio, but not so in Spyder, as I need to go to a command window to do it, and in this case it fails.
import sys
print (sys.version)
2.7.8 |Anaconda 2.1.0 (64-bit)| (default, Jul 2 2014, 15:12:11) [MSC v.1500 64 bit (AMD64)]
C:\anaconda\Lib\site-packages>pip install -U xgboost
Downloading/unpacking xgboost
Could not find a version that satisfies the requirement xgboost (from versions: 0.4a12, 0.4a13)
Cleaning up...
No distributions matching the version for xgboost
Storing debug log for failure in C:\Users\c_kazum\pip\pip.log
------------------------------------------------------------
C:\Users\c_kazum\AppData\Local\Continuum\Anaconda\Scripts\pip-script.py run on 08/27/15 12:52:30
Downloading/unpacking xgboost
Getting page https://pypi.python.org/simple/xgboost/
URLs to search for versions for xgboost:
* https://pypi.python.org/simple/xgboost/
Analyzing links from page https://pypi.python.org/simple/xgboost/
Found link https://pypi.python.org/packages/source/x/xgboost/xgboost-0.4a12.tar.gz#md5=4d768e034a28590497bb79279f036946 (from https://pypi.python.org/simple/xgboost/), version: 0.4a12
Found link https://pypi.python.org/packages/source/x/xgboost/xgboost-0.4a13.tar.gz#md5=5f53d51e4305c679192b3cabda2b0dbe (from https://pypi.python.org/simple/xgboost/), version: 0.4a13
Ignoring link https://pypi.python.org/packages/source/x/xgboost/xgboost-0.4a12.tar.gz#md5=4d768e034a28590497bb79279f036946 (from https://pypi.python.org/simple/xgboost/), version 0.4a12 is a pre-release (use --pre to allow).
Ignoring link https://pypi.python.org/packages/source/x/xgboost/xgboost-0.4a13.tar.gz#md5=5f53d51e4305c679192b3cabda2b0dbe (from https://pypi.python.org/simple/xgboost/), version 0.4a13 is a pre-release (use --pre to allow).
Could not find a version that satisfies the requirement xgboost (from versions: 0.4a12, 0.4a13)
Cleaning up...
Removing temporary dir c:\users\c_kazum\appdata\local\temp\pip_build_c_kazum...
No distributions matching the version for xgboost
Exception information:
Traceback (most recent call last):
File "C:\Users\c_kazum\AppData\Local\Continuum\Anaconda\lib\site-packages\pip\basecommand.py", line 122, in main
status = self.run(options, args)
File "C:\Users\c_kazum\AppData\Local\Continuum\Anaconda\lib\site-packages\pip\commands\install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "C:\Users\c_kazum\AppData\Local\Continuum\Anaconda\lib\site-packages\pip\req.py", line 1177, in prepare_files
url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
File "C:\Users\c_kazum\AppData\Local\Continuum\Anaconda\lib\site-packages\pip\index.py", line 322, in find_requirement
raise DistributionNotFound('No distributions matching the version for %s' % req)
DistributionNotFound: No distributions matching the version for xgboost
I'm a bit late to answer but I would still go ahead and answer it for anyone who still has an issue with the installation. I followed the steps listed in
https://www.kaggle.com/c/otto-group-product-classification-challenge/forums/t/13043/run-xgboost-from-windows-and-python
There is a concise version of these steps at https://github.com/dmlc/xgboost/tree/master/windows. I will summarize what I did below.
1) Download Visual Studio. You can download the Community edition from the Visual Studio website; there is a "Free Visual Studio" button in the upper right corner.
2) Copy all content from the GitHub repository xgboost/tree/master/windows and open the existing project in Visual Studio.
3) There are a couple of drop-down menus where you need to select "Release" and "x64", then select Build --> Build All from the upper menu. It should look something like the attached screenshot.
4) If you see the message ========== Build: 3 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========, it is all good.
5) Browse to the python-package folder where the setup file for xgboost resides and run the install command 'python setup.py install' (sketched below).
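A rough sketch of that last step (the path is an assumption; use wherever you cloned xgboost):

cd C:\path\to\xgboost\python-package
python setup.py install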
Hope this helps.
This is an xgboost issue, not an Anaconda issue as you originally tagged (I don't use Anaconda but I got this too).
EDIT: from your updates, your breakage is caused by a 32-bit msys somewhere on your path, whereas you have a 64-bit install of Python. Mine and all other people's reported breakage since Aug 25 was due to the 0.4a12/13 prereleases:
ORIGINAL ANSWER - Based on the limited information you provided (here, as opposed to on the Kaggle thread) and no verbose fail logs:
Apparently the latest versions of xgboost on pypi, 0.4a12 and 0.4a13 are both pre-releases, which pip will not use by default, unless you do pip install --pre xgboost.
I found this all out by digging around with pip install -v xgboost, which shows helpful verbose information on why an attempted install failed (below); then use pip help and pip install -h to see all install options:
pip install -v xgboost
Downloading/unpacking xgboost
Ignoring link https://pypi.python.org/packages/source/x/xgboost/xgboost-0.4a12.tar.gz#md5=4d768e034a28590497bb79279f036946 (from https://pypi.python.org/simple/xgboost/), version 0.4a12 is a pre-release (use --pre to allow).
Ignoring link https://pypi.python.org/packages/source/x/xgboost/xgboost-0.4a13.tar.gz#md5=5f53d51e4305c679192b3cabda2b0dbe (from https://pypi.python.org/simple/xgboost/), version 0.4a13 is a pre-release (use --pre to allow).
Then pip install -h tells you:
Install Options:
-e, --editable <path/url> Install a project in editable mode ...
...
--pre Include pre-release and development versions. By default, pip only finds stable versions.
And finally:
pip install --pre xgboost
(PS xgboost maintainers made a recent change in Aug 2015)
This solved my problem:
$ sudo apt-get install gcc-5
$ export CC=gcc CXX=g++
$ pip install xgboost
Finally, this worked for me (macOS).
Follow the steps below in a terminal:
Run the whole Homebrew install command below:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
Note: see https://brew.sh/ for details on the command above.
brew install libomp
pip3 install xgboost
I'm installing several Python packages in Ubuntu 12.04 using the following requirements.txt file:
numpy>=1.8.2,<2.0.0
matplotlib>=1.3.1,<2.0.0
scipy>=0.14.0,<1.0.0
astroML>=0.2,<1.0
scikit-learn>=0.14.1,<1.0.0
rpy2>=2.4.3,<3.0.0
and these two commands:
$ pip install --download=/tmp -r requirements.txt
$ pip install --user --no-index --find-links=/tmp -r requirements.txt
(the first one downloads the packages and the second one installs them).
The process is frequently stopped with the error:
Could not find a version that satisfies the requirement <package> (from matplotlib<2.0.0,>=1.3.1->-r requirements.txt (line 2)) (from versions: )
No matching distribution found for <package> (from matplotlib<2.0.0,>=1.3.1->-r requirements.txt (line 2))
which I fix manually with:
pip install --user <package>
and then run the second pip install command again.
But that only works for that particular package. When I run the second pip install command again, the process stops, now complaining about another required package, and I need to repeat the process: install the new required package manually (with the command above) and then run the second pip install command again.
So far I've had to manually install six, pytz, nose, and now it's complaining about needing mock.
Is there a way to tell pip to automatically install all needed dependencies so I don't have to do it manually one by one?
Add: This only happens in Ubuntu 12.04 BTW. In Ubuntu 14.04 the pip install commands applied on the requirements.txt file work without issues.
Although it doesn't really answer this specific question, others got the same error message from this mistake.
For those who, like me, initially forgot the -r: use pip install -r requirements.txt; the -r is essential for the command.
The original answer:
https://stackoverflow.com/a/42876654/10093070
I had installed python3, but my python at /usr/bin/python was still the old 2.7 version.
This worked (<pkg> was pyserial in my case):
python3 -m pip install <pkg>
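A quick way to confirm which interpreter a given pip invocation installs for (a small sketch):

python -m pip --version    # e.g. "pip 9.0.1 from ... (python 2.7)"
python3 -m pip --version   # should report the Python 3.x interpreter you actually want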
This approach (having all dependencies in a directory and not downloading from an index) only works when the directory contains all packages. The directory should therefore contain all dependencies but also all packages that those dependencies depend on (e.g., six, pytz etc).
You should therefore manually include these in requirements.txt (so that the first step downloads them explicitly) or you should install all packages using PyPI and then pip freeze > requirements.txt to store the list of all packages needed.
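One way to build that complete list (a hedged sketch; requirements-full.txt is just a name I made up) is to do a normal online install once, freeze the result, and then feed the frozen file to the download/offline steps:

$ pip install --user -r requirements.txt                  # online install pulls in six, pytz, nose, mock, ...
$ pip freeze > requirements-full.txt                      # snapshot of everything actually installed
$ pip install --download=/tmp -r requirements-full.txt
$ pip install --user --no-index --find-links=/tmp -r requirements-full.txt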
Just a reminder for those who google this error and end up here.
Let's say I get this error:
$ python3 example.py
Traceback (most recent call last):
File "example.py", line 7, in <module>
import aalib
ModuleNotFoundError: No module named 'aalib'
Since it mentions aalib, I thought to try installing aalib:
$ python3.8 -m pip install aalib
ERROR: Could not find a version that satisfies the requirement aalib (from versions: none)
ERROR: No matching distribution found for aalib
But that is actually the wrong package name; check pip search (the service was disabled at the time of writing), Google, or a search on the PyPI site to get the accurate package name:
Then the install succeeds:
$ python3.8 -m pip install python-aalib
Collecting python-aalib
Downloading python-aalib-0.3.2.tar.gz (14 kB)
...
As pip --help states:
$ python3.8 -m pip --help
...
-v, --verbose Give more output. Option is additive, and can be used up to 3 times.
To have a systematic way of figuring out the root cause instead of relying on luck, you can append the -vvv option to the pip command to see details, e.g.:
$ python3.8 -u -m pip install aalib -vvv
User install by explicit request
Created temporary directory: /tmp/pip-ephem-wheel-cache-b3ghm9eb
Created temporary directory: /tmp/pip-req-tracker-ygwnj94r
Initialized build tracking at /tmp/pip-req-tracker-ygwnj94r
Created build tracker: /tmp/pip-req-tracker-ygwnj94r
Entered build tracker: /tmp/pip-req-tracker-ygwnj94r
Created temporary directory: /tmp/pip-install-jfurrdbb
1 location(s) to search for versions of aalib:
* https://pypi.org/simple/aalib/
Fetching project page and analyzing links: https://pypi.org/simple/aalib/
Getting page https://pypi.org/simple/aalib/
Found index url https://pypi.org/simple
Getting credentials from keyring for https://pypi.org/simple
Getting credentials from keyring for pypi.org
Looking up "https://pypi.org/simple/aalib/" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "GET /simple/aalib/ HTTP/1.1" 404 13
[hole] Status code 404 not in (200, 203, 300, 301)
Could not fetch URL https://pypi.org/simple/aalib/: 404 Client Error: Not Found for url: https://pypi.org/simple/aalib/ - skipping
Given no hashes to check 0 links for project 'aalib': discarding no candidates
ERROR: Could not find a version that satisfies the requirement aalib (from versions: none)
Cleaning up...
Removed build tracker: '/tmp/pip-req-tracker-ygwnj94r'
ERROR: No matching distribution found for aalib
Exception information:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pip/_internal/cli/base_command.py", line 186, in _main
status = self.run(options, args)
File "/usr/lib/python3/dist-packages/pip/_internal/commands/install.py", line 357, in run
resolver.resolve(requirement_set)
File "/usr/lib/python3/dist-packages/pip/_internal/legacy_resolve.py", line 177, in resolve
discovered_reqs.extend(self._resolve_one(requirement_set, req))
File "/usr/lib/python3/dist-packages/pip/_internal/legacy_resolve.py", line 333, in _resolve_one
abstract_dist = self._get_abstract_dist_for(req_to_install)
File "/usr/lib/python3/dist-packages/pip/_internal/legacy_resolve.py", line 281, in _get_abstract_dist_for
req.populate_link(self.finder, upgrade_allowed, require_hashes)
File "/usr/lib/python3/dist-packages/pip/_internal/req/req_install.py", line 249, in populate_link
self.link = finder.find_requirement(self, upgrade)
File "/usr/lib/python3/dist-packages/pip/_internal/index/package_finder.py", line 926, in find_requirement
raise DistributionNotFound(
pip._internal.exceptions.DistributionNotFound: No matching distribution found for aalib
From the above log, it is pretty obvious that the URL https://pypi.org/simple/aalib/ returns 404 Not Found. From there you can guess the possible reasons for that 404, i.e. a wrong package name. Another thing: with the above log, I can also modify the relevant Python files of the pip modules to debug further. To edit a .whl file, you can use the wheel command to unpack and repack it.
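For reference, the wheel tool mentioned above works roughly like this (a sketch; the file name is made up):

pip install wheel
wheel unpack example_pkg-1.0-py3-none-any.whl    # extracts to ./example_pkg-1.0/
# ...edit the extracted files...
wheel pack example_pkg-1.0/                      # rebuilds the .whl from the edited directory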
After 2 hours of searching, I found a way to fix it with just one command. You need to know the version of the package (just search for "PACKAGE version").
Command:
python3 -m pip install --pre --upgrade PACKAGE==VERSION.VERSION.VERSION
The command below worked for me:
python -m pip install flask
Not always, but in some cases the package already exists on your system. For example getpass, which is part of the Python standard library: it is not listed by "pip list", but it can be imported and used.
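A small sketch to confirm it ships with the interpreter itself rather than coming from PyPI:

python3 -c "import getpass; print(getpass.__file__)"    # prints a path inside the standard library, so there is nothing for pip to download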
If I try to pip install getpass I get the following error:
"Could not find a version that satisfies the requirement getpass"
Try installing Flask through PowerShell using the following command:
pip install --isolated Flask
This runs the installation in isolated mode, ignoring environment variables and user configuration.
If you are facing this issue at your workplace, this might be the solution for you.
pip install -U <package_name> --user --proxy=<your proxy>
Or pip install from pypi.org explicitly:
pip install -U -i https://pypi.org/simple package
One possible cause: the package requires a Python interpreter version that you are not using.
I ran into the same problem; it occurred only when I ran commands from my Docker image (or Dockerfile). Many hours later I managed to solve it by updating my Python interpreter. It turned out that my pip package required python>=3.7, but my Docker image was using Python 3.6.
Tip: To check whether you have a similar problem, just compare the package's requirements with your Python version. Private pip packages have their interpreter requirements written down inside setup.py or setup.cfg. Public pip packages are usually hosted on pypi.org, where you can check the interpreter requirements in your browser. To check your Python interpreter version, just run, for example, python --version or python3 --version in your console.
General problem description
As other answers point out, there can also be other requirements that you are not satisfying, and that is why pip cannot find a suitable package version for you. All the requirements are written down in the package's documentation and can easily be read from https://pypi.org/project/graphene-django/your-package
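If you prefer the command line to the browser, the PyPI JSON API exposes the interpreter requirement (a sketch; graphene-django is used only because it is the package mentioned above):

python3 -c "import json, urllib.request; info = json.load(urllib.request.urlopen('https://pypi.org/pypi/graphene-django/json'))['info']; print(info['requires_python'])"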
I got this error while installing awscli on Windows 10 in anaconda (python 3.7).
While troubleshooting, I went to the answer https://stackoverflow.com/a/49991357/6862405 and then to https://stackoverflow.com/a/54582701/6862405. Finally I found that I needed to install the libraries PyOpenSSL, cryptography, enum34, idna and ipaddress. After installing these (using a simple pip install command), I was able to install awscli.
When I lost my internet connection, I had this error.
Since it's a pretty annoying problem that can block beginners for a long time, here is a complete guide.
If you are running pip install PACKAGE or python -m pip install PACKAGE and a "no matching version found" error is reported, here's how to solve the problem.
Search for your package in a browser; for example, my package is pycrypto, so here I search for "pycrypto pypi".
Find your package, open its link on PyPI, and click "Download files".
Open a Python shell and import any of your installed packages; for example, I have installed Pillow before.
>>> import PIL
>>> PIL.__path__
['/Applications/MAMP/htdocs/canvas/src/zzd/env/lib/python3.7/site-packages/PIL']
The PACKAGE.__path__ attribute gives you the site-packages path where all installed packages go.
PLUS:
If you have no idea what packages you installed before, run pip list to get a list of installed packages.
After obtaining the path, open a shell and cd to it:
cd /Applications/MAMP/htdocs/canvas/src/zzd/env/lib/python3.7/site-packages/
Unzip the downloaded file and drag it into site-packages.
Then cd into the unpacked directory and run setup.py to install:
cd pycrypto-2.6.1
python setup.py install
Then you should be able to import and use the package in python.
Same error in slightly different circumstances, on macOS. Apparently setuptools versions past 45 can expose some issues, and this command got me past it:
pip3 install setuptools==45
If the package is local, don't miss the relative path.
E.g.
pip install ./<pkg>
finally worked in my case, while
pip install <pkg>
yielded:
ERROR: Could not find a version that satisfies the requirement <pkg> (from versions: none)
ERROR: No matching distribution found for <pkg>
I had a problem installing pandas-1.4.3, and the problem was my python patch version. pandas-1.4.3 required python version 3.8.13 and did not work with 3.8.9:
pip install -r requirements.txt # or pip install pandas==1.4.3
# -> Could not find a version that satisfies...
conda activate my_project # activate the project's conda environment
conda install python=3.8.13 # installing the new python version
python --version # displays 3.8.13
pip install -r python/requirements.txt
# -> pandas installed as expected
Search on Google to see whether some other version of that package is available and use that.
For example, I was getting errors using glob, so I used glob2 instead.