I have tried to install Scrapy on Mac OS X 10.8.2. Here's what I did:
In Terminal, I ran this command from within the myuser directory:
pip install --user scrapy
I got the following message in Terminal:
Successfully installed scrapy
Cleaning up...
Next, I ran the following from the same myuser directory:
scrapy shell http://example.com
Here's the error I am getting:
-bash: scrapy: command not found
I believe this is a path issue; scrapy has been installed in /Library/Python/2.7/lib/python/site-packages. How do I get scrapy to run?
The --user option is used when you want to install a package into the local user's $HOME; on a Mac that means $HOME/Library/Python/2.7/lib/python/site-packages.
The scrapy executable can then be found at $HOME/Library/Python/2.7/bin/scrapy. So, you should edit your .bash_login file and modify the PATH environment variable:
PATH="$HOME/Library/Python/2.7/bin/:$PATH"
Or, just reinstall scrapy without the --user flag.
Hope that helps.
I am trying to get scrapyd to deploy, but every time I run the command
sudo scrapyd-deploy local
I get the following error:
Unable to execute /usr/local/bin/scrapyd-deploy: No such file or directory
I did the following to try to troubleshoot:
reinstall python
pip install scrapy
pip install scrapyd
pip install scrapyd-client
I checked /usr/local/bin and found that the following files exist:
scrapy
scrapyd
scrapyd-deploy
I'm not sure why the scrapy files exist in the folder, yet when I try to run scrapyd-deploy local it cannot find them.
I had upgraded to macOS Mojave, after which all of these errors started. When I first tried to run scrapyd, Python had been installed via brew.
I was able to resolve the issue by starting over.
I did the following:
brew uninstall python
pip uninstall scrapy
pip uninstall scrapyd
pip uninstall scrapyd-client
I deleted Docker.
I then reinstalled scrapyd-client, after which the error was resolved and I was able to deploy with scrapyd.
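A plausible explanation, though the answer above doesn't state it: upgrading to Mojave and replacing brew's Python can leave /usr/local/bin/scrapyd-deploy with a shebang line pointing at an interpreter that no longer exists, which produces exactly this "No such file or directory" error for a file that is plainly there. A quick way to check:
head -1 /usr/local/bin/scrapyd-deploy    # the first line names the interpreter the script needs
# then verify that the printed path actually exists, e.g. with: ls -l <that path>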
So I have been using the regular Windows command prompt and wanted to try using bash, as most forums give commands in bash and it's a little cumbersome to find the Windows translation. I am currently trying out the Spotify API and want to run a virtual environment.
I run the following Windows command and everything works fine:
[WINDOWS]
python -m pip install virtualenv
This, however, does not:
[BASH]
pip install virtualenv
and I get back bash: pip: command not found.
So I try to install pip using sudo easy_install pip and get back bash: sudo: command not found.
I am running Cmder as admin in bash, so I thought OK, I will try easy_install pip, which returned bash: easy_install: command not found. So I went to the actual Python directory and tried to install pip from there, with no luck.
Any insight on how I can address this?
You can try to install pip by downloading get-pip.py from here and then running it with python get-pip.py.
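For example (the official location of the script is https://bootstrap.pypa.io/get-pip.py; this assumes curl is available, but you can also download it in a browser):
curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py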
After that, you might need to set your environment variables so that pip is on your PATH. You can use Environment Variables in Control Panel and add the path under System variables.
I ran into this issue as well. I'm not sure what causes it, but switching to cmd.exe and running pip install ... worked without issue.
I installed scrapy via sudo pip install scrapy. It installed the Python modules into site-packages, and I can import scrapy in my Python environment. However, attempting to use the command-line tool throws an error:
scrapy startproject demo
fails with The program 'scrapy' is not currently installed. and tells me to install python-scrapy.
whereis scrapy has no output. I got tired of trying to track down the install path, so I ran find -name "*crap*", which also turned up nothing useful. It seems the command-line tool wasn't installed by pip. What am I missing with this pip install?
The problem is that sudo pip install scrapy installs Scrapy in a directory that is not accessible by the current user if you are not root.
You need to remove Scrapy first:
sudo pip uninstall scrapy
then reinstall with sudo's -H flag:
sudo -H pip install scrapy
This makes the installation detectable from your command line.
This also does not answer the question of why the scrapy command-line tool is not available, but if scrapy is importable, as you mention in the comments, you can use:
$ python -m scrapy.cmdline version -v
$ python -m scrapy.cmdline shell <url>
scrapy is in fact an alias for this, as specified in the entry_points section of Scrapy's setup.py, and it should have been set up by pip install.
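For reference, the relevant declaration in Scrapy's setup.py looks roughly like this (quoted from memory, so treat it as a sketch):
entry_points={
    'console_scripts': ['scrapy = scrapy.cmdline:execute']
},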
This doesn't answer the question of what's wrong with the pip install either, but for anyone with a working scrapy package and a non-functional command-line command, you can create a script that runs the scrapy command-line tool for you:
#!/usr/bin/python2.7
# Path to Python 2.7 (Python 3 doesn't work well with scrapy at the moment).
import sys
import scrapy.cmdline

# Delegate to the same entry point that the real `scrapy` command uses.
sys.exit(scrapy.cmdline.execute())
Save it in a file (with execute permissions) called scrapy somewhere in your $PATH.
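For example (assuming you keep personal scripts in ~/bin and that directory is on your $PATH):
chmod +x ~/bin/scrapy
scrapy version    # should now print Scrapy's version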
Verify whether you have these packages:
w3lib, cssselect, parsel, attrs, pyasn1-modules, service-identity, PyDispatcher, queuelib, zope.interface, constantly, incremental, Twisted, scrapy
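A quick way to check (assuming pip itself is on your PATH):
$ pip list | grep -i -E 'scrapy|twisted|parsel|w3lib|queuelib'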
I used:
$ pip install scrapy
on Ubuntu 16.04, and all these packages were installed by it. After this I tried:
$ scrapy startproject demo
and it worked for me with this output:
New Scrapy project 'demo', using template directory '/home/*machine_name*/anaconda2/lib/python2.7/site-packages/scrapy/templates/project', created in:
/home/*machine_name*/demo
You can start your first spider with:
cd demo
scrapy genspider example example.com
Scrapy is not installed on your machine. To install it, first run this command, which installs the development headers and libraries Scrapy needs on your system:
sudo apt-get install build-essential libssl-dev libffi-dev python-dev libxml2-dev
Before this command, you should have run the update commands:
sudo apt-get update
and
sudo apt-get upgrade
After these, run:
pip install scrapy
When it finishes, run the following to check whether Scrapy is installed:
scrapy version
If a version number is printed, you have installed Scrapy successfully.
I'm trying to run a scraping program I wrote in Python using Scrapy on an Ubuntu machine. Scrapy is installed; I can import it in Python no problem, and when I try pip install scrapy I get
Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages
When I try to run scrapy from the command line, with scrapy crawl ... for example, I get:
The program 'scrapy' is currently not installed.
What's going on here? Are the symbolic links messed up? Any thoughts on how to fix it?
Without sudo, pip installs into $HOME/.local/bin, $HOME/.local/lib, etc. Add the following line to your ~/.bashrc or ~/.profile (or the appropriate place for other shells):
export PATH="${PATH}:${HOME}/.local/bin"
then open a new terminal or reload .bashrc, and it should find the command.
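A quick way to verify (a sanity check, assuming bash and ~/.bashrc):
source ~/.bashrc
which scrapy    # should now print something like /home/<you>/.local/bin/scrapy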
I had the same error. Running scrapy in a virtual environment solved it.
1. Create a virtual env: python3 -m venv env
2. Activate your env: source env/bin/activate
3. Install Scrapy with pip: pip install scrapy
4. Start your crawler: scrapy crawl your_project_name_here
For example, my project name was kitten, so in step 4 I just ran:
scrapy crawl kitten
NOTE: I did this on macOS running Python 3+.
I tried sudo pip install scrapy, but was promptly advised by Ubuntu 16.04 that it was already installed.
I had to first use sudo pip uninstall scrapy, then sudo pip install scrapy for it to install successfully.
Now you should be able to run scrapy successfully.
I faced the same problem and solved it using the following method. I think scrapy was not usable by the current user.
Uninstall scrapy:
sudo pip uninstall scrapy
Install scrapy again using the -H flag:
sudo -H pip install scrapy
It should work properly now.
If you install scrapy only in a virtualenv, then the scrapy command doesn't exist in your system bin directory. You can check it like this:
$ which scrapy
For me it is in (because I installed it with sudo):
/usr/local/bin/scrapy
You can try the full path to your scrapy. For example, if it is installed in a virtualenv:
(env) linux#hero:~dev/myscrapy$ python env/bin/scrapy
Note (from the Scrapy docs): "We recommend installing Scrapy inside a virtual environment on all platforms."
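Alternatively, activate the virtualenv so that scrapy lands on your PATH (a minimal sketch, assuming the env directory is named env):
source env/bin/activate
which scrapy    # should now point into env/bin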
I had the same issue. sudo pip install scrapy fixed my problem, although I don't know why I had to use sudo.
Make sure you run the virtualenv's activate command, that is:
Scripts\activate.bat
A good way to work around this is to use pyenv to manage the Python version.
$ brew install pyenv
# Any version 3.6 or above
$ pyenv install 3.7.3
$ pyenv global 3.7.3
# Update Environment to reflect correct python version controlled by pyenv
$ echo -e '\nif command -v pyenv 1>/dev/null 2>&1; then\n eval "$(pyenv init -)"\nfi' >> ~/.zshrc
# Refresh Terminal
# or: source ~/.zshrc
$ exec $0
$ which python
/Users/mbbroberg/.pyenv/shims/python
$ python -V
Python 3.7.3
# Install scrapy
$ pip install scrapy
$ scrapy version
Reference:
https://opensource.com/article/19/5/python-3-default-mac
scrapy crawl is not how you start a scrapy program. You start one by running
scrapy startproject myprojectname
Then, to actually start the scrapy program, go into myprojectname/spiders, where you can call
scrapy crawl "yourspidername"
To have scrapy create a spider for you, cd into your project directory and execute
scrapy genspider mydomain mydomain.com
Additionally, you can test whether your scrapy installation actually works by executing
scrapy shell "google.com"
All this information can be found in their documentation.
If something happens, then you have actually installed scrapy and you are crawling (haha) your way to success!
P.S. Scrapy does not work well on Python 3, so if you're running it there and still have trouble, use Python 2.7!
I want to install Scrapy like this
pip install scrapy
However, I'm getting
-bash: /usr/local/bin/pip: /usr/bin/python: bad interpreter: No such file or directory
How can I fix this?
It might be that you do not have a Python interpreter installed, or that it lives in a non-standard location.
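You can confirm this by looking at pip's shebang line and then checking whether the interpreter it names actually exists:
head -1 /usr/local/bin/pip    # prints the interpreter path, here #!/usr/bin/python
ls -l /usr/bin/python         # check whether that interpreter is really there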
A similar issue occurs with YUM:
How to fix "Bad interpreter" error when using yum?