I'm learning Python and have Anaconda installed, and I'm trying to familiarize myself with the process of getting an Eye-Color Detection project working.
I'm running into the following error after going through the README:
Eye-Color-Detection git:(master) ✗ sudo pip install -r requirements.txt
WARNING: The directory '/Users/{user}/Library/Caches/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
When trying to update:
(tweet) ➜ Eye-Color-Detection git:(master) ✗ conda update --all
[Errno 13] Permission denied: '/Users/{user}/opt/anaconda3/envs/tweet/lib/python3.8/site-packages/wrapt/__init__.py' -> '/Users/{user}/opt/anaconda3/envs/tweet/lib/python3.8/site-packages/wrapt/__init__.py.c~'
Q: How might I go about doing this correctly within the same Conda environment?
A sudo pip install is almost never what you really want. While in some cases it may "appear" to work and solve your immediate problem, more often than not you've just broken your system Python without knowing it.
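If you're ever unsure what a bare pip would touch before reaching for sudo, a quick sanity check (the exact paths will differ on your machine) is:
$ which -a pip
$ python3 -m pip --version    # shows which interpreter and site-packages this pip belongs to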
In the context of that repo, I'd ignore the repo's README and do this.
$ git clone https://github.com/ghimiredhikura/Eye-Color-Detection
$ cd Eye-Color-Detection
Create a conda environment, changing yourenvname as you like:
$ conda create -n yourenvname python=3.x
$ conda activate yourenvname
Install the dependencies and run the code
$ pip install -r requirements.txt
$ python3 eye-color.py --input_path=sample/2.jpg --input_type=image
Fixing your conda environment may be difficult to debug, depending on what else you've sudo'd while attempting to resolve the issue. If you happen to be familiar with "regular" virtualenvs created using Python's built-in virtual environment tooling, then you could also try this to get going:
$ python3 -m venv .venv --copies
$ source .venv/bin/activate
$ pip install -r requirements.txt
$ python3 eye-color.py --input_path=sample/2.jpg --input_type=image
What you need to do is change the directory permissions so that it is writable.
You can do that using this command,
$ sudo chmod 777 /Users/{user}/Library/Caches/
or, to change permissions recursively,
$ sudo chmod -R 777 /Users/{user}/Library/Caches/
or you can take ownership of that directory by using this command,
$ sudo chown OWNER:GROUP /Users/{user}/Library/Caches/
where OWNER is your username, which you can find in the terminal by running:
$ whoami
GROUP is optional.
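For example, to take ownership of pip's cache directory in one step (a sketch; substitute your own username if you prefer to type it out):
$ sudo chown -R "$(whoami)" ~/Library/Caches/pip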
Related
When running pip install django-cron I get the following error:
ERROR: Error [Errno 13] Permission denied: '/vagrant/.venv/bin/python' while executing command python setup.py egg_info
ERROR: Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/vagrant/.venv/bin/python'
Consider using the `--user` option or check the permissions.
However, if I use --user, I get a different error saying:
ERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv.
My venv is activated.
When I previously tried installing libraries, everything worked. If I use the sudo command, I get the following warning:
WARNING: The directory '/home/vagrant/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Sadly, using -H doesn't resolve the issue. I am not sure how I can change my access to the .venv directory; any help would be appreciated.
I only get this error for Python modules django-cron and django-crontab, but other modules like pillow can be installed successfully.
Edit 4:
My setup is a bit janky: I am using Vagrant, but I also have PyCharm Community Edition, so I end up downloading the packages twice, once so the editor recognizes them and once for Vagrant, where I actually run the program. When I installed the packages through PyCharm, that part worked.
This is the Vagrantfile I used:
Vagrant.configure("2") do |config|
config.vm.box = "bento/ubuntu-18.04"
config.vm.network "forwarded_port", guest: 8080, host: 8080
config.vm.provision "shell", inline: <<-SHELL
sudo apt-get install python3-distutils -y
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
sudo python3 get-pip.py
rm get-pip.py
sudo pip install virtualenv
cd /vagrant
virtualenv -p /usr/bin/python3 .venv --always-copy
echo "cd /vagrant && source /vagrant/.venv/bin/activate" >> /home/vagrant/.profile
SHELL
end
By default, Vagrant provisioning scripts are executed as root. Since you create the virtual environment during provisioning, the directories are owned by root and are not accessible to the normal user (vagrant).
To solve this, you should set the shell provisioning option "privileged" to false.
Change this line:
config.vm.provision "shell", inline: <<-SHELL
to:
config.vm.provision "shell", privileged: false, inline: <<-SHELL
Alternatively, you could modify your provisioning script to run the virtualenv command as the vagrant user using the following command:
sudo -u vagrant virtualenv -p /usr/bin/python3 .venv --always-copy
UPDATE:
Although the above is generally true, it's not the cause of the problem in your case, since you installed the virtual environment inside /vagrant, which is a virtual mount of the directory on your host machine (the directory where your Vagrantfile is stored). Normal file permissions do not apply, or at least not in the usual way, to this directory.
It seems that the Python modules django-cron and django-crontab have an issue with this mount, for whatever reason (might be a bug).
Creating the virtual environment inside the VM file system instead of the host file system solves the problem. You could use the following Vagrantfile. I tested this and I could install django-cron without errors.
Vagrant.configure("2") do |config|
config.vm.box = "bento/ubuntu-18.04"
config.vm.network "forwarded_port", guest: 8080, host: 8080
config.vm.provision "shell", privileged: false, inline: <<-SHELL
sudo apt-get install python3-distutils -y
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
sudo python3 get-pip.py
rm get-pip.py
sudo pip install virtualenv
virtualenv -p /usr/bin/python3 /home/vagrant/venv --always-copy
echo "cd /vagrant && source /home/vagrant/venv/bin/activate" >> /home/vagrant/.profile
SHELL
end
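Note that shell provisioners only run on the first vagrant up, so if you already created the VM with the old Vagrantfile you'll likely want to recreate it, for example:
$ vagrant destroy -f
$ vagrant up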
This usually happens when you don't have write access to your /vagrant/.venv folder. You can check the permissions with the ls -l command.
If that's the case, you should change the permissions on your /vagrant/.venv folder.
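For example (a sketch, assuming an ordinary directory where the default vagrant user should own the environment):
$ ls -ld /vagrant/.venv
$ sudo chown -R vagrant:vagrant /vagrant/.venv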
Just try using the pip command directly in your terminal:
pip install <packagename>
I'm trying to create a virtual environment so that I can install NumPy, SciPy and Matplotlib.
I ran this: python3.8 -m venv work3.8
and the result was: Error: Command '['/home/mohammed/work3.8/bin/python3.8', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1.
You probably got this message because the venv package for python3.8 is not present on your system, or because you need sudo in front of the command.
Solve it by installing the venv package for python3.8:
apt install python3.8-venv
Then create a new venv virtual environment:
python3.8 -m venv ~/.venvs/work3.8
The above assumes that inside your home directory you have previously created a directory called .venvs, inside of which the venv virtual environments will be stored. If not, it can be created with mkdir ~/.venvs
You might need to prepend sudo to the commands, depending on what privileges your user has, like so:
sudo apt install python3.8-venv
sudo python3.8 -m venv ~/.venvs/work3.8
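Once the environment exists, activating it and installing the packages mentioned in the question would look something like this (a sketch; the path matches the example above):
$ source ~/.venvs/work3.8/bin/activate
(work3.8) $ pip install numpy scipy matplotlib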
Please try: python3 -m venv [your directory]
After that, if you want to use your new virtual environment, type: source [your directory]/bin/activate
Then if you want to leave your venv type: deactivate
(checked on Debian)
I hope that this helps you,
Kind regards
I'm trying to automate the deployment of my Python-Flask app on Ubuntu 18.04 using Bash by going through the motion of preparing all the necessary files/directories and cloning the source code from Github followed by creating the virtual environment, installing the pre-requisite modules and etc.
Because I have to execute my Bash script using sudo, the entire script runs as root except where I specify otherwise using sudo -u myuser. When it comes to activating my virtual environment, I get the following output: sudo: source: command not found, and my subsequent pip installs all end up outside of the virtual environment. Excerpts of my code are below:
#!/bin/bash
...
sudo -u "$user" python3 -m venv .env
sudo -u $SUDO_USER source /srv/www/www.mydomain.com/.env/bin/activate
sudo -u "$user" pip install wheel
sudo -u "$user" pip install uwsgi
sudo -u "$user" pip install -r requirements.txt
...
Now for the life of me, I can't figure out how to activate the virtual environment so that the subsequent commands actually run inside it, if this makes any sense.
I've scoured the web and most of the questions/answers I found revolves around how to activate the virtual environment in a Bash script but not how to activate the virtual environment as a separate user within a Bash script that was executed as sudo.
That's because source is not an executable file but a built-in bash command. It won't work with sudo, since the latter accepts a program name (i.e. an executable file) as its argument.
P.S. It's not clear why you have to execute the whole script as root. If you need to execute only a few commands as root (e.g. for starting/stopping a service) and run the remaining majority as a regular user, you can use sudo only for those commands. E.g. the following script
#!/bin/bash
# The `whoami` command outputs the current username. Unlike `source`, this is
# a full-fledged executable file, not a built-in command
whoami
sudo whoami
sudo -u postgres whoami
on my machine outputs
trolley813
root
postgres
P.P.S. You probably don't need to activate an environment as root.
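In fact, for a script like this you can often skip activation entirely: calling the environment's own pip installs into that environment, or you can wrap the activation and the installs in a single bash -c invocation run as the target user. A rough sketch, reusing the paths from the question:
sudo -u "$user" /srv/www/www.mydomain.com/.env/bin/pip install wheel uwsgi
sudo -u "$user" bash -c 'source /srv/www/www.mydomain.com/.env/bin/activate && pip install -r requirements.txt'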
If you should ever encounter the following error when creating a Python virtual environment using the pyvenv command:
user$ pyvenv my_venv_dir
Error: Command '['/home/user/my_venv_dir/bin/python', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1
... then the answer (below) provides a simple way to work around it, without resorting to setuptools and its related acrobatics.
Here's an approach that is fairly O/S agnostic...
Both the pyvenv and python commands themselves include a --without-pip option that enables you to work around this issue, without resorting to setuptools or other headaches. Taking note of my inline comments below, here's how to do it; it's very easy to understand:
user$ pyvenv --without-pip ./pyvenv.d # Create virtual environment this way;
user$ python -m venv --without-pip ./pyvenv.d # --OR-- this newer way. Both work.
user$ source ./pyvenv.d/bin/activate # Now activate this new virtual environment.
(pyvenv.d) user$
# Within it, invoke this well-known script to manually install pip(1) into /pyvenv.d:
(pyvenv.d) user$ curl https://bootstrap.pypa.io/get-pip.py | python
(pyvenv.d) user$ deactivate # Next, reactivate this virtual environment,
user$ source ./pyvenv.d/bin/activate # which will now include the pip(1) command.
(pyvenv.d) user$
(pyvenv.d) user$ which pip # Verify that pip(1) is indeed present.
/path/to/pyvenv.d/bin/pip
(pyvenv.d) user$ pip install --upgrade pip # And finally, upgrade pip(1) itself;
(pyvenv.d) user$ # although it will likely be the
# latest version already.
# And that's it!
I hope this helps. \(◠﹏◠)/
In 2020, on Python 3.8 under WSL2 (Ubuntu), the following solved this for me:
sudo apt install python3.8-venv
If you have a module whose file is named after a Python standard library module, in the folder in which you invoke python -m venv venv, then this command will fail without any hint about the cause. For instance, suppose you name a file email.py.
What I did to find this was to write a bash script that moves the .py files out of the current directory one by one (to a holdspace/ subdirectory) and tries the venv creation after each move. If the python -m venv venv command exits with code 0, it succeeded, and the last file moved was the culprit.
#!/bin/bash
if [ ! -d ./holdspace ]
then
    mkdir holdspace/
fi

for file in *.py
do
    mv "$file" holdspace/
    python -m venv venv >/dev/null 2>&1
    if [ $? -eq 0 ]
    then
        echo "$file was the culprit."
        rm -rf venv/
        break
    else
        echo "$file isn't the culprit."
    fi
    rm -rf venv/
done

mv holdspace/*.py .
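A quicker check, if you already suspect a particular module is being shadowed (a sketch using the email.py example above), is to ask Python where it imports the module from:
$ python -c "import email; print(email.__file__)"    # a path in the current directory means a local file is shadowing the standard library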
The following should fix it
brew update
brew upgrade
brew install zlib
I am attempting to install Portia, a python app from Github: https://github.com/scrapinghub/portia
I use the following steps at the command line:
set up new virtualenv 'portia' in Mac terminal
git clone https://github.com/scrapinghub/portia.git
follow the README instructions:
cd slyd
pip install -r requirements.txt
run Portia
cd slyd
twistd -n slyd
But every time I attempt the last step to run the program, I get the following error:
ImportError: No module named scrapy
Any idea why this error is occurring? All previous steps seem to install correctly. Is it an error earlier in my install process?
Thanks!
I don't have the rep to upvote Alagappan's answer but he's correct. Also, if you're as inexperienced as I am, you may need further clarity on this.
You have to create, activate and navigate into the virtualenv before installing anything (including cloning portia from github). Here's the whole thing working from start to finish:
1: cd to wherever you’d like to store your project...
and Install virtualenv:
$ pip install virtualenv
2: Create the virtual environment. (I called mine “portia” but this can be anything.):
$ virtualenv portia
3: Activate the virtual environment you created (change the path to reflect the name you used here if not “portia”.):
$ source portia/bin/activate
At this point your terminal should display the virtualenv name in parentheses before the standard directory path prompt:
(name-of-virtualenv) [your-machine]:[current-directory]: [user]$
...and if you list the files within your pwd you’ll see the name of your virtualenv there.
4: cd into your virtualenv (“portia” for me):
$ cd portia
5: Now you can clone portia from github into your virtualenv...
$ git clone https://github.com/scrapinghub/portia
6: cd into the cloned portia/slyd...
$ cd portia/slyd
7/8: pip install twisted and Scrapy...
$ pip install twisted
$ pip install Scrapy
Your virtualenv should still be activated, and you should still be in [virtualenv-name]/portia/slyd.
9: Install the requirements.txt:
$ pip install -r requirements.txt
10: Run slyd:
$ twistd -n slyd
--- No more scrapy error! ---
Another Installation Method For Portia: Using Vagrant
Here is the method that allowed me to install Portia with ease. It works on Mac, Windows, and Linux. With a few commands and clicks, you'll get a fully functional web scraper.
Things Needed:
VirtualBox
Vagrant
Clone the repo for Portia or download the zip file.
Additional Steps To Take:
Install VirtualBox.
Install Vagrant
Open your terminal and navigate to where you cloned the Portia repo or where you've extracted it (in case of a zip file).
Then run the command vagrant up. This will download and set up a VirtualBox guest VM for you, install all the necessary requirements for Portia, and install Portia itself from start to finish.
After the above process, you may now open your browser and navigate to
http://the-virtualbox-ip:8000/static/main.html
And you're set up.
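If you're not sure what the-virtualbox-ip is (it depends on how the box's networking is configured), one way to look it up is from inside the VM:
$ vagrant ssh -c "hostname -I"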
It's quite simple: you just need to install the Python module scrapy, in the same way that the Twitter API requires setuptools:
pip install scrapy
I suspect the issue you are facing is because of the virtualenv. Once you set up a new virtual environment, you need to run the activate script in order to start using it. In your case you'll have to run the following command:
$ source portia/bin/activate
On successful activation, your prompt will look like:
(portia) $
Can you check if you activated your virtual environment before you installed the packages using pip? I believe doing so will fix your issue.
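A quick way to double-check is to see which pip the shell resolves after activation (the path shown here is only illustrative):
(portia) $ which pip
/path/to/portia/bin/pip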