The Goal
Install the ssdeep PyPI package on an M1 MacBook Pro.
The Problem
When I run pip install ssdeep I get two errors.
The first error occurs because fuzzy.h cannot be found.
warnings.warn(
running egg_info
creating /private/var/folders/0f/4c82tsj50n10zqq89fndcslc0000gn/T/pip-pip-egg-info-ai0atrdv/ssdeep.egg-info
writing /private/var/folders/0f/4c82tsj50n10zqq89fndcslc0000gn/T/pip-pip-egg-info-ai0atrdv/ssdeep.egg-info/PKG-INFO
writing dependency_links to /private/var/folders/0f/4c82tsj50n10zqq89fndcslc0000gn/T/pip-pip-egg-info-ai0atrdv/ssdeep.egg-info/dependency_links.txt
writing requirements to /private/var/folders/0f/4c82tsj50n10zqq89fndcslc0000gn/T/pip-pip-egg-info-ai0atrdv/ssdeep.egg-info/requires.txt
writing top-level names to /private/var/folders/0f/4c82tsj50n10zqq89fndcslc0000gn/T/pip-pip-egg-info-ai0atrdv/ssdeep.egg-info/top_level.txt
writing manifest file '/private/var/folders/0f/4c82tsj50n10zqq89fndcslc0000gn/T/pip-pip-egg-info-ai0atrdv/ssdeep.egg-info/SOURCES.txt'
src/ssdeep/__pycache__/_ssdeep_cffi_a28e5628x27adcb8d.c:266:14: fatal error: 'fuzzy.h' file not found
#include "fuzzy.h"
^~~~~~~~~
1 error generated.
Traceback (most recent call last):
File "/Users/user/proj/.venv/lib/python3.11/site-packages/setuptools/_distutils/unixccompiler.py", line 186, in _compile
self.spawn(compiler_so + cc_args + [src, '-o', obj] + extra_postargs)
File "/Users/user/proj/.venv/lib/python3.11/site-packages/setuptools/_distutils/ccompiler.py", line 1007, in spawn
spawn(cmd, dry_run=self.dry_run, **kwargs)
File "/Users/user/proj/.venv/lib/python3.11/site-packages/setuptools/_distutils/spawn.py", line 70, in spawn
raise DistutilsExecError(
distutils.errors.DistutilsExecError: command '/usr/bin/clang' failed with exit code 1
The second error has to do with setuptools.installer being deprecated. I'm not sure this is all that important though. I think resolving the first error would resolve this one as well.
/Users/user/proj/.venv/lib/python3.11/site-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
Attempted Solutions
Solution 1: Install ssdeep with Homebrew: brew install ssdeep
Result: pip install ssdeep fails with the same error about fuzzy.h missing.
Solution 2: Build the bundled copy of ssdeep: BUILD_LIB=1 pip install ssdeep
Result: The error about fuzzy.h goes away, but the second error regarding setuptools.installer being deprecated remains.
References
Compiling SSDeep and pydeep on Mac OS X 10.9+ (this was pretty out of date, though).
SSDeep Documentation
The ssdeep package on PyPI is a Python wrapper for the ssdeep library, which is written in C. So first you have to compile and install ssdeep, then the other python-ssdeep requirements, and then compile and install python-ssdeep.
I found a solution. Essentially, what's going on is that Homebrew installs ssdeep in a location that the ssdeep PyPI package is not expecting. You can point the PyPI package to the correct locations with the following steps; the full command sequence is collected after step 5.
1: Install ssdeep with Homebrew: brew install ssdeep
2: List the Homebrew directories for ssdeep: brew ls ssdeep
This produces output like
/opt/homebrew/Cellar/ssdeep/2.14.1/bin/ssdeep
/opt/homebrew/Cellar/ssdeep/2.14.1/include/ (2 files)
/opt/homebrew/Cellar/ssdeep/2.14.1/lib/libfuzzy.2.dylib
/opt/homebrew/Cellar/ssdeep/2.14.1/lib/ (2 other files)
/opt/homebrew/Cellar/ssdeep/2.14.1/share/man/man1/ssdeep.1
3: Set the LDFLAGS environment variable to the path of the ssdeep lib directory from the output in step 2.
export LDFLAGS="-L/opt/homebrew/Cellar/ssdeep/2.14.1/lib"
4: Set the C_INCLUDE_PATH environment variable to the path of the ssdeep include directory from the output in step 2.
export C_INCLUDE_PATH=/opt/homebrew/Cellar/ssdeep/2.14.1/include
5: Install ssdeep from PyPI: pip install ssdeep
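Putting the steps together, a minimal end-to-end sketch (assuming Homebrew's default Apple Silicon prefix and ssdeep 2.14.1; substitute whatever paths brew ls ssdeep prints for you):
brew install ssdeep
brew ls ssdeep                                                      # note the lib/ and include/ paths
export LDFLAGS="-L/opt/homebrew/Cellar/ssdeep/2.14.1/lib"           # lib directory from brew ls
export C_INCLUDE_PATH=/opt/homebrew/Cellar/ssdeep/2.14.1/include    # include directory from brew ls
pip install ssdeep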
@HopAlongPolly's answer almost worked for me, but I got an error whose root cause seems to be an architecture mismatch (a quick way to check it is shown at the end of this answer):
ld: warning: ignoring file /opt/homebrew/Cellar/ssdeep/2.14.1/lib/libfuzzy.dylib, building for macOS-x86_64 but attempting to link with file built for macOS-arm64
To solve this I ran BUILD_LIB=1 pip install ssdeep.
If your environment is fairly new, you will get the following errors:
/bin/sh: libtoolize: command not found
/bin/sh: automake: command not found
To solve this you need to run brew install libtool automake and then create a symlink somewhere in your PATH from libtoolize to the glibtoolize binary that Homebrew installed (this is needed because the build process looks for libtoolize, but Homebrew installs it as glibtoolize). There is probably a cleaner way to point to the correct binary, but the symlink did the job ;)
In summary, do the steps that @HopAlongPolly recommended, then run:
brew install libtool automake
ln -s /opt/homebrew/bin/glibtoolize /opt/homebrew/bin/libtoolize
BUILD_LIB=1 pip install ssdeep
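If you are unsure which side of the mismatch you are on, a quick check (using the Homebrew paths from the earlier answer; a suggestion, not part of the original answer) is:
file /opt/homebrew/Cellar/ssdeep/2.14.1/lib/libfuzzy.dylib          # reports arm64 or x86_64
python3 -c "import platform; print(platform.machine())"             # what your interpreter runs as
If the two disagree (e.g. an arm64 dylib but an x86_64 Python running under Rosetta), BUILD_LIB=1 sidesteps the Homebrew library entirely.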
I have a Flask application that I want to deploy to Heroku.
I have made a Procfile as such:
web: gunicorn routes:app
and a requirements.txt:
click==6.7
Flask==0.12
gunicorn==19.6.0
itsdangerous==0.24
Jinja2==2.8.1
MarkupSafe==1.0
Werkzeug==0.14.1
but whenever I try to run the command:
git push heroku master
I always get this error: App not compatible with buildpack: https://buildpack-registry.s3.amazonaws.com/buildpacks/heroku/python.tgz
even though I have set the buildpack to Python. My main Python file is called routes.py, so the Procfile should be correct, and I have done a lot of research and all the dependencies seem to be there. What could be the issue?
For all my dependencies I also have a Pipfile and a Pipfile.lock.
I tried using pip install --upgrade -r requirements.txt
and this resulted in this error:
No matching distribution found for adium-theme-ubuntu==0.3.4 (from -r /tmp/build_622384b275f7a5f640333152a3b25ba1/requirements.txt (line 1))
Output of git status:
On branch master
Your branch is ahead of 'origin/master' by 3 commits.
(use "git push" to publish your local commits)
nothing to commit, working directory clean
Errors I get from my requirements.txt:
File "/app/.heroku/python/lib/python3.6/site-packages/setuptools/__init__.py", line 5, in <module>
import distutils.core
File "/app/.heroku/python/lib/python3.6/distutils/core.py", line 16, in <module>
from distutils.dist import Distribution
File "/app/.heroku/python/lib/python3.6/distutils/dist.py", line 9, in <module>
import re
File "/app/.heroku/python/lib/python3.6/re.py", line 142, in <module>
class RegexFlag(enum.IntFlag):
AttributeError: module 'enum' has no attribute 'IntFlag'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-oqscorl3/enum34/
Push rejected, failed to compile Python app.
When I recently received similar errors, I was able to drop the version numbers from my requirements.txt to get a successful deploy.
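For example (an illustration based on the question's file, not the answerer's), dropping the pins means listing only the package names and letting pip resolve current versions:
click
Flask
gunicorn
itsdangerous
Jinja2
MarkupSafe
Werkzeug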
For anyone still seeking an answer: the requirements.txt lists the dependency adium-theme-ubuntu==0.3.4, as stated in the error message. That package doesn't exist on PyPI, so pip install won't work, which is why the deployment failed.
In this case, it's often helpful to run pip freeze again to see whether any new dependency caused the error.
If nothing helps, it's best to do a clean reinstall of everything:
rm -rf venv
python -m venv venv
source venv/bin/activate        # activate the fresh environment before installing
pip install DEPENDENCY-NAME
Then try deploying again.
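Since the failed deploy traced back to a stale entry in requirements.txt, it usually also helps to regenerate the file from the clean environment and commit it before pushing again (an extra step, not spelled out in the original answer):
pip freeze > requirements.txt        # rewrite requirements.txt from the clean venv
git add requirements.txt
git commit -m "regenerate requirements"
git push heroku master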
I'm trying to install XGBoost on my AWS Ubuntu machine.
I followed the instructions and installed GCC and cmake. However, when I write
pip install xgboost
I get the following error
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-sorzhu8y/xgboost/setup.py", line 29, in <module>
LIB_PATH = libpath['find_lib_path']()
File "/tmp/pip-build-sorzhu8y/xgboost/xgboost/libpath.py", line 45, in find_lib_path
'List of candidates:\n' + ('\n'.join(dll_path)))
XGBoostLibraryNotFound: Cannot find XGBoost Libarary in the candicate path, did you install compilers and run build.sh in root path?
List of candidates:
/tmp/pip-build-sorzhu8y/xgboost/xgboost/libxgboost.so
/tmp/pip-build-sorzhu8y/xgboost/xgboost/../../lib/libxgboost.so
/tmp/pip-build-sorzhu8y/xgboost/xgboost/./lib/libxgboost.so
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-sorzhu8y/xgboost/
Any ideas what could be causing this?
According to the Python documentation:
This module makes available standard errno system symbols. The value
of each symbol is the corresponding integer value. The names and
descriptions are borrowed from linux/include/errno.h, which should be
pretty all-inclusive.
So error code 1 is defined in errno.h and means Operation not permitted.
Your setuptools does not appear to be installed properly.
So, try upgrading the Python packaging tools:
pip install --upgrade setuptools
If it's already up to date, check that the ez_setup module is not missing. If it is, then:
pip install ez_setup
Then try again:
pip install unroll
If it's still not working, maybe pip didn't install/upgrade setuptools properly, so you might want to try:
easy_install -U setuptools
and again:
pip install unroll
For me it was resolved after upgrading pip.
Let us know how it goes for you.
There are instructions on the site: http://xgboost.readthedocs.io/en/latest/build.html
First:
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost
make -j4
Then:
cd python-package
sudo python setup.py install
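If both steps finish without errors, a quick (hypothetical) sanity check that the package is importable:
python -c "import xgboost; print(xgboost.__version__)"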
I'm installing several Python packages in Ubuntu 12.04 using the following requirements.txt file:
numpy>=1.8.2,<2.0.0
matplotlib>=1.3.1,<2.0.0
scipy>=0.14.0,<1.0.0
astroML>=0.2,<1.0
scikit-learn>=0.14.1,<1.0.0
rpy2>=2.4.3,<3.0.0
and these two commands:
$ pip install --download=/tmp -r requirements.txt
$ pip install --user --no-index --find-links=/tmp -r requirements.txt
(the first one downloads the packages and the second one installs them).
The process is frequently stopped with the error:
Could not find a version that satisfies the requirement <package> (from matplotlib<2.0.0,>=1.3.1->-r requirements.txt (line 2)) (from versions: )
No matching distribution found for <package> (from matplotlib<2.0.0,>=1.3.1->-r requirements.txt (line 2))
which I fix manually with:
pip install --user <package>
and then run the second pip install command again.
But that only works for that particular package. When I run the second pip install command again, the process stops, now complaining about another required package, and I need to repeat the process: install the new required package manually (with the command above) and then run the second pip install command again.
So far I've had to manually install six, pytz, nose, and now it's complaining about needing mock.
Is there a way to tell pip to automatically install all needed dependencies so I don't have to do it manually one by one?
Addendum: this only happens on Ubuntu 12.04, BTW. On Ubuntu 14.04 the pip install commands applied to the requirements.txt file work without issues.
Although it doesn't really answer this specific question, others have gotten the same error message from this mistake.
For those who, like me, initially forgot the -r: use pip install -r requirements.txt; the -r is essential for the command.
The original answer:
https://stackoverflow.com/a/42876654/10093070
I had installed python3, but my python in /usr/bin/python was still the old 2.7 version.
This worked (<pkg> was pyserial in my case):
python3 -m pip install <pkg>
This approach (having all dependencies in a directory and not downloading from an index) only works when the directory contains all the packages. It must therefore contain not only your direct dependencies but also everything those dependencies depend on (e.g. six, pytz).
You should therefore either manually include these in requirements.txt (so that the first step downloads them explicitly), or install all packages from PyPI and then run pip freeze > requirements.txt to store the full list of packages needed.
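On newer pip versions the download step is a separate pip download command, which also resolves and fetches dependencies and so avoids the one-by-one problem; a sketch under that assumption:
pip download -r requirements.txt -d /tmp                             # packages plus their dependencies
pip install --user --no-index --find-links=/tmp -r requirements.txt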
Just a reminder for those who google this error and land here.
Let's say I get this error:
$ python3 example.py
Traceback (most recent call last):
File "example.py", line 7, in <module>
import aalib
ModuleNotFoundError: No module named 'aalib'
Since it mentions aalib, I naturally tried to install aalib:
$ python3.8 -m pip install aalib
ERROR: Could not find a version that satisfies the requirement aalib (from versions: none)
ERROR: No matching distribution found for aalib
But that is actually the wrong package name; use pip search (the service was disabled at the time of writing), Google, or a search on the PyPI site to find the accurate package name.
Then it installs successfully:
$ python3.8 -m pip install python-aalib
Collecting python-aalib
Downloading python-aalib-0.3.2.tar.gz (14 kB)
...
As pip --help states:
$ python3.8 -m pip --help
...
-v, --verbose Give more output. Option is additive, and can be used up to 3 times.
To have a systematic way to figure out the root cause instead of relying on luck, you can append the -vvv option to the pip command to see details, e.g.:
$ python3.8 -u -m pip install aalib -vvv
User install by explicit request
Created temporary directory: /tmp/pip-ephem-wheel-cache-b3ghm9eb
Created temporary directory: /tmp/pip-req-tracker-ygwnj94r
Initialized build tracking at /tmp/pip-req-tracker-ygwnj94r
Created build tracker: /tmp/pip-req-tracker-ygwnj94r
Entered build tracker: /tmp/pip-req-tracker-ygwnj94r
Created temporary directory: /tmp/pip-install-jfurrdbb
1 location(s) to search for versions of aalib:
* https://pypi.org/simple/aalib/
Fetching project page and analyzing links: https://pypi.org/simple/aalib/
Getting page https://pypi.org/simple/aalib/
Found index url https://pypi.org/simple
Getting credentials from keyring for https://pypi.org/simple
Getting credentials from keyring for pypi.org
Looking up "https://pypi.org/simple/aalib/" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "GET /simple/aalib/ HTTP/1.1" 404 13
[hole] Status code 404 not in (200, 203, 300, 301)
Could not fetch URL https://pypi.org/simple/aalib/: 404 Client Error: Not Found for url: https://pypi.org/simple/aalib/ - skipping
Given no hashes to check 0 links for project 'aalib': discarding no candidates
ERROR: Could not find a version that satisfies the requirement aalib (from versions: none)
Cleaning up...
Removed build tracker: '/tmp/pip-req-tracker-ygwnj94r'
ERROR: No matching distribution found for aalib
Exception information:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pip/_internal/cli/base_command.py", line 186, in _main
status = self.run(options, args)
File "/usr/lib/python3/dist-packages/pip/_internal/commands/install.py", line 357, in run
resolver.resolve(requirement_set)
File "/usr/lib/python3/dist-packages/pip/_internal/legacy_resolve.py", line 177, in resolve
discovered_reqs.extend(self._resolve_one(requirement_set, req))
File "/usr/lib/python3/dist-packages/pip/_internal/legacy_resolve.py", line 333, in _resolve_one
abstract_dist = self._get_abstract_dist_for(req_to_install)
File "/usr/lib/python3/dist-packages/pip/_internal/legacy_resolve.py", line 281, in _get_abstract_dist_for
req.populate_link(self.finder, upgrade_allowed, require_hashes)
File "/usr/lib/python3/dist-packages/pip/_internal/req/req_install.py", line 249, in populate_link
self.link = finder.find_requirement(self, upgrade)
File "/usr/lib/python3/dist-packages/pip/_internal/index/package_finder.py", line 926, in find_requirement
raise DistributionNotFound(
pip._internal.exceptions.DistributionNotFound: No matching distribution found for aalib
From the above log, it is pretty obvious that the URL https://pypi.org/simple/aalib/ returned 404 Not Found. You can then guess the possible reasons for that 404, i.e. a wrong package name. Another thing: with the above log I can modify the relevant Python files of the pip modules to debug further. To edit a .whl file, you can use the wheel command to unpack and repack it.
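For reference, the wheel package provides unpack and pack subcommands for exactly that (the filename below is a placeholder):
pip install wheel
wheel unpack some_package-1.0-py3-none-any.whl    # extracts to some_package-1.0/
# edit the extracted files as needed, then repack:
wheel pack some_package-1.0/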
After two hours of searching, I found a way to fix it with just one command. You need to know the version of the package (just search for "PACKAGE version").
Command:
python3 -m pip install --pre --upgrade PACKAGE==VERSION.VERSION.VERSION
The command below worked for me:
python -m pip install flask
Not always, but in some cases the package already exists. For example, getpass: it is not listed by pip list, but it can be imported and used.
If I try to pip install getpass I get the following error:
"Could not find a version that satisfies the requirement getpass"
Try installing Flask through PowerShell using the following command:
pip install --isolated Flask
This runs the installation in isolated mode, ignoring environment variables and user configuration.
If you are facing this issue at your workplace, this might be the solution for you.
pip install -U <package_name> --user --proxy=<your proxy>
Pip install from pypi.org.
pip install -U -i https://pypi.org/simple package
One possible cause: the pip package requires a Python interpreter version that you are not using.
I ran into the same problem; it occurred only when I ran commands from my Docker image (or Dockerfile). Many hours later I finally managed to solve it by updating my Python interpreter. It turned out that my pip package required python>=3.7, but my Docker image was using Python 3.6.
Tip: to check whether you have a similar problem, just compare the pip package's requirements with your Python version. Private pip packages have their interpreter requirements written down in setup.py or setup.cfg. Public pip packages are usually hosted on pypi.org, where you can check the interpreter requirements in your browser. To check your Python interpreter version, just run, for example, python --version or python3 --version in your console.
General problem description
As other answers point out, there can also be other requirements that you are not satisfying, which is why pip cannot find a suitable package version for you. All the requirements are written down in the pip package's documentation and can easily be read from https://pypi.org/project/graphene-django/your-package
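One way to check a package's declared interpreter requirement from the terminal is PyPI's JSON API (shown here for graphene-django, the package linked above; the field may be empty if the project doesn't declare one):
curl -s https://pypi.org/pypi/graphene-django/json | python3 -c "import sys, json; print(json.load(sys.stdin)['info']['requires_python'])"
python3 --version    # compare against your own interpreter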
I got this error while installing awscli on Windows 10 in anaconda (python 3.7).
While troubleshooting, I went to the answer https://stackoverflow.com/a/49991357/6862405 and then to https://stackoverflow.com/a/54582701/6862405. I finally found that I needed to install the libraries PyOpenSSL, cryptography, enum34, idna, and ipaddress. After installing these (with a simple pip install command), I was able to install awscli.
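The commands were roughly the following (package names as published on PyPI; exact versions were whatever pip resolved at the time):
pip install pyopenssl cryptography enum34 idna ipaddress
pip install awscli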
When I lost my internet connection, I had this error.
Since it's a pretty annoying problem that may keep beginners stuck for a long time, here is a complete guide.
If you are running pip install PACKAGE or python -m pip install PACKAGE and a "no matching version found" error is reported, here's how to solve the problem.
Search for your package in a browser; for example, my package is pycrypto, so I search for pycrypto pypi.
Find your package, open its page on PyPI, and click "Download files".
Open a Python shell and import any package you have already installed; for example, I installed Pillow before.
>>> import PIL
>>> PIL.__path__
['/Applications/MAMP/htdocs/canvas/src/zzd/env/lib/python3.7/site-packages/PIL']
PACKAGE.__path__ gives you the site-packages path where all packages go.
PLUS:
if you have no idea what packages you installed before, run pip list to get a list of installed packages.
After we obtain the path, open a shell and cd to it:
cd /Applications/MAMP/htdocs/canvas/src/zzd/env/lib/python3.7/site-packages/
Open that directory, unzip the downloaded file, and drag it into site-packages.
cd into the unpacked directory and run setup.py to install:
cd pycrypto-2.6.1
python setup.py install
Then you should be able to import and use the package in Python.
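For pycrypto specifically, the import name is Crypto, so a quick check (a suggestion, not part of the original steps) is:
python -c "import Crypto; print(Crypto.__version__)"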
Same error in slightly different circumstances, on macOS. Apparently setuptools versions past 45 can expose some issues, and this command got me past it:
pip3 install setuptools==45
If the package is local, don't omit the relative path prefix.
E.g.
pip install ./<pkg>
finally worked in my case, while
pip install <pkg>
yielded:
ERROR: Could not find a version that satisfies the requirement <pkg> (from versions: none)
ERROR: No matching distribution found for <pkg>
I had a problem installing pandas-1.4.3, and the problem was my Python patch version: pandas-1.4.3 required Python 3.8.13 and did not work with 3.8.9:
pip install -r requirements.txt    # or pip install pandas==1.4.3
# -> Could not find a version that satisfies...
conda activate my_project    # switch to the project's conda environment
conda install python=3.8.13 # installing the new python version
python --version # displays 3.8.13
pip install -r python/requirements.txt
# -> pandas installed as expected
Search on Google to see whether another version of that package is available, and use that instead.
For example, I was getting errors using glob, so I used glob2 instead.
I am trying to install a Python package and I get a dependency error but I am sure I have fulfilled that requirement.
It says that it can't find libdickinson.so, but this library is already installed (system wide) and its files are in /usr/local/lib/. What am I doing wrong?
This is my console output:
(iwidget)chris#mint-desktop ~ $ pip install pthelma
Downloading/unpacking pthelma
Downloading pthelma-0.7.2.tar.gz (50kB): 50kB downloaded
Running setup.py egg_info for package pthelma
libdickinson.so: cannot open shared object file: No such file or directory
Please make sure you have installed dickinson
(see http://dickinson.readthedocs.org/).
Complete output from command python setup.py egg_info:
libdickinson.so: cannot open shared object file: No such file or directory
Please make sure you have installed dickinson
(see http://dickinson.readthedocs.org/).
----------------------------------------
Command python setup.py egg_info failed with error code 1 in /home/chris/.virtualenvs/iwidget/build/pthelma
Storing complete log in /home/chris/.pip/pip.log
(iwidget)chris#mint-desktop ~ $ ls /usr/local/lib/
libdickinson.a libdickinson.la libdickinson.so libdickinson.so.0 libdickinson.so.0.0.0 python2.7/ python3.2/ site_ruby/
(iwidget)chris#mint-desktop ~ $
Also try the above command as superuser:
sudo pip install pthelma
and just go through the thread given below:
Why can't Python find shared objects that are in directories in sys.path?
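If libdickinson.so really is in /usr/local/lib, the dynamic linker cache may simply not know about it yet; a minimal sketch for a Debian/Ubuntu-style system (not from the original answer):
sudo ldconfig                                        # refresh the shared-library cache
# or, for the current shell only:
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
pip install pthelma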
Try building it yourself and installing from the Git repo:
git clone https://github.com/openmeteo/pthelma.git
Also, try running pip as superuser:
sudo pip install pthelma
It looks like it can't see the libdickinson.so file, but if you're confident it's installed and set up correctly you can, as I said, try cloning the source and building it that way.
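A rough sketch of that clone-and-build route, assuming the standard setup.py workflow (the original answer only shows the clone step) and that the dickinson C library is already installed:
git clone https://github.com/openmeteo/pthelma.git
cd pthelma
sudo python setup.py install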