Permission denied while installing django-cron in Vagrant - python

When running pip install django-cron I get the following error:
ERROR: Error [Errno 13] Permission denied: '/vagrant/.venv/bin/python' while executing command python setup.py egg_info
ERROR: Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/vagrant/.venv/bin/python'
Consider using the `--user` option or check the permissions.
However, if I use --user, I get a different error saying:
ERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv.
My venv is activated.
When I previously tried installing libraries, everything worked. If I use the sudo command, I get the following warning:
WARNING: The directory '/home/vagrant/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Sadly, using -H doesn't resolve the issue. I am not sure how I can change my access to the .venv folder; any help would be appreciated.
I only get this error for Python modules django-cron and django-crontab, but other modules like pillow can be installed successfully.
Edit 4:
My setup is a bit janky: I am using Vagrant, but I also have PyCharm Community Edition, so I end up downloading each package twice, once just so the editor recognizes it and once for Vagrant, where I run the program. When I installed these packages through PyCharm, they worked in PyCharm.
This is the Vagrantfile I used:
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"
  config.vm.network "forwarded_port", guest: 8080, host: 8080
  config.vm.provision "shell", inline: <<-SHELL
    sudo apt-get install python3-distutils -y
    curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
    sudo python3 get-pip.py
    rm get-pip.py
    sudo pip install virtualenv
    cd /vagrant
    virtualenv -p /usr/bin/python3 .venv --always-copy
    echo "cd /vagrant && source /vagrant/.venv/bin/activate" >> /home/vagrant/.profile
  SHELL
end

By default, Vagrant provisioning scripts are executed as root. Since you create the virtual environment during provisioning, the directories are owned by root and are not accessible to the normal user (vagrant).
To solve this, you should set the shell provisioning option "privileged" to false.
Change this line:
config.vm.provision "shell", inline: <<-SHELL
to:
config.vm.provision "shell", privileged: false, inline: <<-SHELL
Alternatively, you could modify your provisioning script to run the virtualenv command as the vagrant user using the following command:
sudo -u vagrant virtualenv -p /usr/bin/python3 .venv --always-copy
UPDATE:
Although the above is generally true, it's not the cause of the problem in your case, since you installed the virtual environment inside /vagrant, which is a virtual mount of the directory on your host machine (the directory where your Vagrantfile is stored). Normal file permissions do not apply, or at least not in the usual way, for this directory.
It seems that the Python modules django-cron and django-crontab have an issue with this mount, for whatever reason (might be a bug).
Creating the virtual environment inside the VM file system instead of the host file system solves the problem. You could use the following Vagrantfile. I tested this and I could install django-cron without errors.
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"
  config.vm.network "forwarded_port", guest: 8080, host: 8080
  config.vm.provision "shell", privileged: false, inline: <<-SHELL
    sudo apt-get install python3-distutils -y
    curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
    sudo python3 get-pip.py
    rm get-pip.py
    sudo pip install virtualenv
    virtualenv -p /usr/bin/python3 /home/vagrant/venv --always-copy
    echo "cd /vagrant && source /home/vagrant/venv/bin/activate" >> /home/vagrant/.profile
  SHELL
end
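To sanity-check that the environment really lives on the VM's own filesystem rather than the synced folder, a small sketch using the standard df and ls utilities can help. It is shown against $HOME so it runs anywhere; inside the VM you would point it at /home/vagrant/venv:

```shell
# Print the filesystem backing a directory. On the VM,
# /home/vagrant/venv should be backed by a local disk, while
# /vagrant is a synced-folder mount (e.g. vboxsf or nfs).
target="${1:-$HOME}"   # stand-in for /home/vagrant/venv
backing=$(df -P "$target" | tail -n 1 | awk '{print $1}')
echo "Filesystem backing $target: $backing"
```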

This usually happens when you don't have write access to the /vagrant/.venv folder. You can check the permissions with the ls -l command.
If that is the case, you should change the ownership or permissions of your /vagrant/.venv folder.
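As a sketch of that check (using a temporary directory as a stand-in for /vagrant/.venv, since the stat flags differ between GNU and BSD):

```shell
# Check who owns a directory; a root-owned .venv would explain the
# Errno 13 error. The temp dir here stands in for /vagrant/.venv.
dir=$(mktemp -d)
owner=$(stat -c %U "$dir" 2>/dev/null || stat -f %Su "$dir")
echo "$dir is owned by $owner"
# If the owner were root instead of vagrant, you would reclaim it with:
#   sudo chown -R vagrant:vagrant /vagrant/.venv
```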

Just try using the pip command in your terminal:
pip install <packagename>

Related

Anaconda Sudo PIP permissions Problems

I'm learning Python and have Anaconda installed, and I'm trying to familiarize myself with the process of getting an Eye-Color Detection project working.
I'm running into the following error after going through readme:
Eye-Color-Detection git:(master) ✗ sudo pip install -r requirements.txt
WARNING: The directory '/Users/{user}/Library/Caches/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
When trying to update:
(tweet) ➜ Eye-Color-Detection git:(master) ✗ conda update --all
[Errno 13] Permission denied: '/Users/{user}/opt/anaconda3/envs/tweet/lib/python3.8/site-packages/wrapt/__init__.py' -> '/Users/{user}/opt/anaconda3/envs/tweet/lib/python3.8/site-packages/wrapt/__init__.py.c~'
Q: How might I go about doing this correctly within the same Conda environment?
A sudo pip install is almost never what you really want. While in some cases it may "appear" to work and solve your immediate problem, more often than not you've just broken your system Python without knowing it.
In the context of that repo, I'd ignore the repo's README and do this.
$ git clone https://github.com/ghimiredhikura/Eye-Color-Detection
$ cd Eye-Color-Detection
Create a virtualenv environment, change yourenvname as you like.
$ conda create -n yourenvname python=3.x
$ conda activate yourenvname
Install the dependencies and run the code
$ pip install -r requirements.txt
$ python3 eye-color.py --input_path=sample/2.jpg --input_type=image
Fixing your conda environment may be difficult to debug, depending on what else you've sudo'd while attempting to resolve your issue. If you happen to be familiar with "regular" virtualenvs created using Python's built-in virtual environment tooling, then you could also try this to get going.
$ python3 -m venv .venv --copies
$ source .venv/bin/activate
$ pip install -r requirements.txt
$ python3 eye-color.py --input_path=sample/2.jpg --input_type=image
What you need to do is change the directory permission to writable.
You can do that using this command:
$ sudo chmod 7777 /Users/{user}/library/caches/
to change permissions recursively,
$ sudo chmod 7777 -R /Users/{user}/library/caches/
or you can own that directory by using this command,
$ sudo chown OWNER:GROUP /Users/{user}/library/caches/
where OWNER is your username, which you can find in the terminal using this command:
$ whoami
GROUP is optional.

Activating a python virtual environment within a bash script fails with "sudo: source: command not found"

I'm trying to automate the deployment of my Python-Flask app on Ubuntu 18.04 using Bash by going through the motion of preparing all the necessary files/directories and cloning the source code from Github followed by creating the virtual environment, installing the pre-requisite modules and etc.
Now, because I have to execute my Bash script using sudo, the entire script runs as root except where I specify otherwise using sudo -u myuser. When it comes to activating my virtual environment, I get the following output: sudo: source: command not found, and my subsequent pip installs are all installed outside of the virtual environment. Excerpts of my code below:
#!/bin/bash
...
sudo -u "$user" python3 -m venv .env
sudo -u $SUDO_USER source /srv/www/www.mydomain.com/.env/bin/activate
sudo -u "$user" pip install wheel
sudo -u "$user" pip install uwsgi
sudo -u "$user" pip install -r requirements.txt
...
Now for the life of me, I can't figure out how to activate the virtual environment in the context of the virtual environment if this makes any sense.
I've scoured the web and most of the questions/answers I found revolves around how to activate the virtual environment in a Bash script but not how to activate the virtual environment as a separate user within a Bash script that was executed as sudo.
That's because source is not an executable file, but a built-in bash command. It won't work with sudo, since the latter accepts a program name (i.e. executable file) as argument.
P.S. It's not clear why you have to execute the whole script as root. If you need to execute only a few commands as root (e.g. for starting/stopping a service) and run the remaining majority as a regular user, you can use sudo only for those commands. For example, the following script
#!/bin/bash
# The `whoami` command outputs the current username. Unlike `source`, this is
# a full-fledged executable file, not a built-in command
whoami
sudo whoami
sudo -u postgres whoami
on my machine outputs
trolley813
root
postgres
P.P.S. You probably don't need to activate an environment as root.
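To make the point concrete, here is a small sketch. The sudo line is shown commented out because it only makes sense on the asker's server, with the paths and $user variable taken from the question:

```shell
#!/bin/bash
# 'source' is a bash builtin, not a file on $PATH, which is exactly
# why "sudo -u user source ..." fails with "command not found":
type -t source    # prints: builtin

# Workaround: run the activation and the installs inside one child
# bash started via sudo, so 'source' executes as that shell's builtin.
# sudo -u "$user" bash -c '
#   source /srv/www/www.mydomain.com/.env/bin/activate &&
#   pip install wheel uwsgi &&
#   pip install -r requirements.txt
# '
```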

Add Miniconda binaries to path in Docker container

I'm using the following commands in my Dockerfile to install Miniconda. After I install it, I want to use the binaries in ~/miniconda3/bin like python and conda. I tried exporting the PATH with this new path prepended to it, but the subsequent pip command fails (pip is located in ~/miniconda3/bin).
Curiously, if I run the container in interactive terminal mode, the path is set correctly and I'm able to call the binaries as expected. It seems as though the issue is only when building the container itself.
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y python3.7
RUN apt-get install -y curl
RUN curl https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh --output miniconda.sh
RUN bash miniconda.sh -b
RUN export PATH="~/miniconda3/bin:$PATH"
RUN pip install pydub # errors out when building
Here's the result of echo $PATH
~/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Here's the error I get
/bin/sh: 1: pip: not found
export won't work. Try ENV
Replace
RUN export PATH="~/miniconda3/bin:$PATH"
with
ENV PATH="~/miniconda3/bin:$PATH"
Even though Miniconda is referenced via ~, during a Docker build it runs as root, so it actually installs to /root/miniconda3 unless another location is specified. And since a RUN export does not persist across layers, the working instruction is:
ENV PATH="/root/miniconda3/bin:$PATH"
It looks like your export PATH ... command is putting the literal symbol ~ into the path. Try this:
ENV PATH="$HOME/miniconda3/bin:$PATH"
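Putting those corrections together, a minimal sketch of the relevant Dockerfile portion might look like this (the installer's -b -p flags pin the install location explicitly so the PATH entry is unambiguous; the /opt/miniconda3 path is an assumption, not from the question):

```dockerfile
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y curl
RUN curl https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
        --output miniconda.sh \
    && bash miniconda.sh -b -p /opt/miniconda3 \
    && rm miniconda.sh
# ENV persists into later layers and the final image; RUN export does not.
ENV PATH="/opt/miniconda3/bin:$PATH"
RUN pip install pydub
```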

How to run pytest in jenkins

I am trying to run pytest in jenkins.
When I try to install pytest in the build step in Jenkins, it says pip command not found. I even tried setting up a virtual env, but with no success.
I AM RUNNING JENKINS IN DOCKER CONTAINER
#!/bin/bash
cd /usr/bin
pip install pytest
py.test test_11.py
#!/bin/bash
source env1/bin/activate
pip install pytest
py.test test_11.py
Dockerfile
FROM Jenkins
USER root
Errors:
Started by user admin
Running as SYSTEM
Building on master in workspace /var/jenkins_home/workspace/pyproject
[pyproject] $ /bin/bash /tmp/jenkins5312265766264018610.sh
/tmp/jenkins5312265766264018610.sh: line 4: pip: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Started by user admin
Running as SYSTEM
Building on master in workspace /var/jenkins_home/workspace/pyproject
[pyproject] $ /bin/bash /tmp/jenkins6002566555689593419.sh
/tmp/jenkins6002566555689593419.sh: line 4: pip: command not found
/tmp/jenkins6002566555689593419.sh: line 5: py.test: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Well, the error is crystal clear: pip is not installed in the running environment.
I did some digging myself and found out that the jenkins image has only Python 2.7 installed, and pip is not installed.
I would start by installing pip first and continue from there, so modify Dockerfile to:
FROM jenkins
USER root
RUN apt-get update && apt-get install -y python-pip && rm -rf /var/lib/apt/lists/*
Hope this helps you find your way.
More helpful information would be:
your Jenkins pipeline script (at least up to the 'Execute shell' step)
the Python version you intend to use
how and where you run the virtualenv creation command
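Once a suitable Python is available in the image, an "Execute shell" build step along these lines avoids depending on whatever is on the global PATH. This is only a sketch and assumes Python 3 with the venv module is installed; WORKSPACE is set by Jenkins, and the mktemp fallback is just so the sketch runs outside Jenkins too:

```shell
#!/bin/bash
# Create a per-build virtualenv and call its tools by absolute path,
# so the build does not rely on pip/pytest being globally installed.
WORKSPACE="${WORKSPACE:-$(mktemp -d)}"   # Jenkins sets WORKSPACE itself
python3 -m venv "$WORKSPACE/venv"
"$WORKSPACE/venv/bin/pip" --version      # pip ships inside the venv
# "$WORKSPACE/venv/bin/pip" install pytest
# "$WORKSPACE/venv/bin/pytest" test_11.py
```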

Homebrew/Python errors after trying to install Certbot

I'm trying to install Certbot on my macOS machine (10.14.4) to generate a certificate, but as usual, some Homebrew errors are standing in the way.
After running brew update and brew install certbot, I tried a command based on sudo certbot certonly -a manual -d example.com --email your@email.com, but I get sudo: certbot: command not found. I also tried brew upgrade.
brew doctor shows:
Warning: The following directories do not exist:
/usr/local/sbin
You should create these directories and change their ownership to your account.
sudo mkdir -p /usr/local/sbin
sudo chown -R $(whoami) /usr/local/sbin
Warning: You have unlinked kegs in your Cellar.
Leaving kegs unlinked can lead to build-trouble and cause brews that depend on
those kegs to fail to run properly once built. Run `brew link` on these:
python#2
python
brew link python returns Linking /usr/local/Cellar/python/3.7.3... Error: Permission denied @ dir_s_mkdir - /usr/local/Frameworks.
For some reason, it looks like I have 2 versions of Python installed now and I don't want to run any of the commands that Homebrew suggests until I know I need to. python --version returns Python 2.7.10.
Should I uninstall one of my Pythons? Is one of them the system version or is that a third installation somewhere else? Which one should I symlink and how do I get the certbot command working? Thanks in advance
sudo mkdir /usr/local/Frameworks
sudo chmod 1777 /usr/local/Frameworks
then
brew link python3
This will link python3 on your Mac.
I would not uninstall Python 2.7, because there are still a lot of scripts that depend on Python 2.7!
