I'm running some commands through a Makefile. I have to create a new Python virtualenv, activate it, and install requirements in one Make recipe/goal, then reuse it in other recipes/goals. The problem is that I have to activate that env in every subsequent Make goal, like so:
SHELL := bash
# For Lack of a better mechanism will have to activate the venv on every Make recipe because every step is in its own shell
.ONESHELL:
install:
	python -m venv .venv
	source .venv/bin/activate
	pip install -r requirements.txt
test: install
	source .venv/bin/activate
	pytest
synth: install
	source .venv/bin/activate
	cdk synth --no-color
diff: install
	source .venv/bin/activate
	cdk diff --no-color
bootstrap: install
	source .venv/bin/activate
	cdk bootstrap --no-color
deploy: install
	source .venv/bin/activate
	cdk deploy --no-color
.PHONY: install test synth diff bootstrap deploy
Is there a better way to do this, i.e. one that saves me from having to source .venv/bin/activate in every single goal?
In other words, can I basically run all Make goals in a Makefile in the same shell?
You cannot run all goals in the same shell (that cannot work, since the shell must exit after each recipe completes, otherwise make cannot know whether the commands in the recipe succeeded, and hence whether the rule succeeded).
You can of course put the sourcing in a variable so you don't have to type it out. You could also put the entire thing in a function, like this:
run = . .venv/bin/activate && $1
then:
install:
	python -m venv .venv
	$(call run,pip install -r requirements.txt)
test: install
	$(call run,pytest)
etc.
If you don't want to do any of that, your only choice is to use recursive make invocations. However, this is going to be tricky because you only want to do it after the install rule has run. I'm not sure it will actually lead to a more understandable makefile.
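As a hedged alternative sketch (not something the answer above spells out): you can often skip activation entirely and call the virtualenv's executables by path, since activate mostly just prepends .venv/bin to PATH. In plain shell the recipe bodies would reduce to lines like:
.venv/bin/pip install -r requirements.txt
.venv/bin/pytest
PATH="$PWD/.venv/bin:$PATH" cdk synth --no-color
Note that inside an actual Makefile recipe the $ signs would have to be written as $$ so make doesn't expand them itself.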
I am building a Python project -- potion. I want to use GitHub Actions to automate some linting & testing before merging a new branch to master.
To do that, I am using a slight modification of a GitHub-recommended Python Actions starter workflow -- Python Application.
During the "Install dependencies" step within the job, I am getting an error. This is because pip is trying to install my local package potion and failing.
The code that is failing is: if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
The corresponding error is:
ERROR: git+https@github.com:<github_username>/potion.git@82210990ac6190306ab1183d5e5b9962545f7714#egg=potion is not a valid editable requirement. It should either be a path to a local project or a VCS URL (beginning with bzr+http, bzr+https, bzr+ssh, bzr+sftp, bzr+ftp, bzr+lp, bzr+file, git+http, git+https, git+ssh, git+git, git+file, hg+file, hg+http, hg+https, hg+ssh, hg+static-http, svn+ssh, svn+http, svn+https, svn+svn, svn+file).
Error: Process completed with exit code 1.
Most likely, the job is not able to install the package potion because it is not able to find it. I installed it on my own computer using pip install -e . and later used pip freeze > requirements.txt to create the requirements file.
Since I use this package in the tests, I need to install it so that pytest can run its tests properly.
How can I install a local package (which is under active development) on Github Actions?
Here is part of the GitHub workflow file python-app.yml
...
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python 3.8
      uses: actions/setup-python@v2
      with:
        python-version: 3.8
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install flake8 pytest
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
    - name: Lint with flake8
...
Note 1: I have already tried changing from git+git@github.com:<github_username>... to git+git@github.com/<github_username>.... Pay attention to the / instead of :.
Note 2: I have also tried using other protocols such as git+https, git+ssh, etc.
Note 3: I have also tried removing the alphanumeric @8221... after the git URL ...potion.git
The "package under test", potion in your case, should not be part of the requirements.txt. Instead, simply add your line
pip install -e .
after the line with pip install -r requirements.txt in it. That installs the already checked out package in development mode and makes it available locally for an import.
Alternatively, you could put that line at the latest needed point, i.e. right before you run pytest.
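For illustration, the "Install dependencies" run block from the question would then look roughly like this (the only change is the added last line, installing the checked-out potion package in development mode):
python -m pip install --upgrade pip
pip install flake8 pytest
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
pip install -e .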
I'm learning Python and have Anaconda installed, and I'm trying to familiarize myself with the process of getting an Eye-Color Detection project working.
I'm running into the following error after going through the README:
Eye-Color-Detection git:(master) ✗ sudo pip install -r requirements.txt
WARNING: The directory '/Users/{user}/Library/Caches/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
When trying to update:
(tweet) ➜ Eye-Color-Detection git:(master) ✗ conda update --all
[Errno 13] Permission denied: '/Users/{user}/opt/anaconda3/envs/tweet/lib/python3.8/site-packages/wrapt/__init__.py' -> '/Users/{user}/opt/anaconda3/envs/tweet/lib/python3.8/site-packages/wrapt/__init__.py.c~'
Q: How might I go about doing this correctly within the same Conda environment?
A sudo pip install is almost never what you really want. While in some cases it may "appear" to work and solve your immediate problem, more often than not you've just broken your system Python without knowing it.
In the context of that repo, I'd ignore the repo's README and do this.
$ git clone https://github.com/ghimiredhikura/Eye-Color-Detection
$ cd Eye-Color-Detection
Create a conda environment; change yourenvname as you like.
$ conda create -n yourenvname python=3.x
$ conda activate yourenvname
Install the dependencies and run the code
$ pip install -r requirements.txt
$ python3 eye-color.py --input_path=sample/2.jpg --input_type=image
Fixing your conda environment may be difficult to debug, depending on what else you've sudo'd while attempting to resolve your issue. If you happen to be familiar with "regular" virtualenvs created using Python's built-in virtual environment tooling, then you could also try this to get you going.
$ python3 -m venv .venv --copies
$ source .venv/bin/activate
$ pip install -r requirements.txt
$ python3 eye-color.py --input_path=sample/2.jpg --input_type=image
What you need to do is change the directory permission to writable.
You can do it using this command:
$ sudo chmod 777 /Users/{user}/Library/Caches/
or, to change permissions recursively:
$ sudo chmod -R 777 /Users/{user}/Library/Caches/
Alternatively, you can take ownership of that directory using this command:
$ sudo chown OWNER:GROUP /Users/{user}/Library/Caches/
where OWNER is your username, which you can find in the terminal by using this command:
$ whoami
GROUP is optional.
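For what it's worth, the ownership variant can be collapsed into a single command; $(whoami) is expanded by your own (non-root) shell before sudo runs, so it fills in your username automatically:
$ sudo chown -R "$(whoami)" /Users/{user}/Library/Caches/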
I am trying to build a CI/CD process for my Python scripts and applications. I am able to build my venv within the testing container but when I rsync it over to the target server, the version of Python seems to break. This is what I am trying:
- cp -a ./. $APP_DIR
- cd $APP_DIR
- python3 -m venv venv
- source venv/bin/activate
- pip3 install -r requirements.txt
...
- rsync...
All environments involved are running Python 3.6.8
When I activate the venv on the target server and run which python3 I get /usr/bin/python3 which is incorrect.
Why? Why does venv break when deployed to a server via rsync?
I'm new to Python development and the virtual environment process. Should venv's only be created on the server (or container) that they need to run on? Sometimes my target servers don't have python3-venv installed on them. Is it possible to deploy a venv with the code and use it to run my scripts?
When you create an environment via venv, it stores the absolute path of the environment in bin/activate. Additionally, some symlinks are created in the new environment pointing to the existing Python installation.
As a consequence, the environment is only valid on the host and path where venv was executed. This is also stated in the documentation (some parts omitted):
Running this command creates the target directory [...] and places a pyvenv.cfg file in it with a home key pointing to the Python installation from which the command was run. It also creates a bin [...] subdirectory containing a copy/symlink of the Python binary/binaries (as appropriate for the platform or arguments used at environment creation time).
You can easily check this fact by these commands:
mkdir /tmp/example_dir_for_stackoverflow
cd /tmp/example_dir_for_stackoverflow
python3 -m venv venv
grep stackoverflow venv/bin/activate
It will output:
VIRTUAL_ENV="/tmp/example_dir_for_stackoverflow/venv"
If you rsync this environment to another system, onto a different path and/or a different Python installation, the settings in bin/activate no longer match and it doesn't work.
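Besides bin/activate, the other host-specific pieces mentioned above are easy to inspect (illustrative commands; the exact output depends on your system):
cat venv/pyvenv.cfg      # the 'home' key points to the Python installation the venv was created from
ls -l venv/bin/python*   # typically symlinks back to that same installation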
In my opinion your best bet is to exclude the venv folder from rsync with
rsync --exclude 'venv' source/ destination/
The requirements.txt file is your best friend for keeping your dependencies satisfied everywhere.
I also suggest you install the python3-venv package from your Linux distribution if you're satisfied with the Python version it provides; otherwise, install another Python version altogether (you'll find instructions on the Internet for installing a different Python on your distribution).
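On Debian/Ubuntu, for example, that package can be installed like this (the package name may differ on other distributions):
sudo apt install python3-venv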
For example:
Host 1 (This is where you develop and you may add something to your venv)
cd /tmp/
mkdir app_base # base folder for venv/ and app_code/
cd app_base/
mkdir app_code # base folder for code only
# LOCAL virtual environment creation and activation
python3 -m venv venv
source venv/bin/activate
# Just an example of whatever you may need
pip install numpy
# Let's say that it could be enough for your app to work.
# Create requirements.txt
pip3 freeze >requirements.txt
Server, Container, whatever remote..
SETUP
This should run once (or at least before rsync). It's the same first 5 lines from the above snippet.
cd /tmp/
mkdir app_base
cd app_base/
mkdir app_code
python3 -m venv venv
Now that you've done the setup on the remote host, let's return to Host 1, where you develop.
You need to rsync your app_code and requirements.txt (and maybe some other stuff), but not the venv folder
Host 1
You can wrap this in a cron job
rsync -xav -e ssh --exclude 'venv' /tmp/app_base/ user@X.X.X.X:/tmp/app_base/
Then, finally, you can keep the server's virtual environment up to date with your needs by running this directly on the server.
Server, Container, whatever remote..
cd /tmp/app_base
source venv/bin/activate
pip3 install -r requirements.txt
Now, on the remote host, you should be able to run (the unit test of) your code.
The 'strict' answer to your bolded question
Why? Why does venv break when deployed to a server via rsync?
is: some Python packages (like the numpy I've used in the example) provide binary routines, for performance reasons. Copying the virtual environment folder will only work on the very same Linux distribution or Windows version, with the very same architecture and Python version. And that's not the purpose virtual environments were created for.
I'm trying to automate the deployment of my Python-Flask app on Ubuntu 18.04 using Bash, by going through the motions of preparing all the necessary files/directories, cloning the source code from GitHub, creating the virtual environment, installing the prerequisite modules, etc.
Because I have to execute my Bash script using sudo, the entire script runs as root except where I specify otherwise using sudo -u myuser. When it comes to activating my virtual environment, I get the following output: sudo: source: command not found, and my subsequent pip installs all end up outside of the virtual environment. Excerpts of my code below:
#!/bin/bash
...
sudo -u "$user" python3 -m venv .env
sudo -u $SUDO_USER source /srv/www/www.mydomain.com/.env/bin/activate
sudo -u "$user" pip install wheel
sudo -u "$user" pip install uwsgi
sudo -u "$user" pip install -r requirements.txt
...
Now, for the life of me, I can't figure out how to activate the virtual environment in the context of the non-root user, if that makes any sense.
I've scoured the web, and most of the questions/answers I found revolve around how to activate a virtual environment in a Bash script, but not how to activate it as a separate user within a Bash script that was executed with sudo.
That's because source is not an executable file, but a built-in bash command. It won't work with sudo, since the latter accepts a program name (i.e. executable file) as argument.
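One workaround that follows directly from this (a sketch using the paths from the question): hand sudo a shell to run, and let that shell do the sourcing:
sudo -u "$user" bash -c 'source /srv/www/www.mydomain.com/.env/bin/activate && pip install -r requirements.txt'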
P.S. It's not clear why you have to execute the whole script as root. If you need to execute only a few commands as root (e.g. for starting/stopping a service) and run the remaining majority as a regular user, you can use sudo for only those commands. E.g. the following script
#!/bin/bash
# The `whoami` command outputs the current username. Unlike `source`, this is
# a full-fledged executable file, not a built-in command
whoami
sudo whoami
sudo -u postgres whoami
on my machine outputs
trolley813
root
postgres
P.P.S. You probably don't need to activate an environment as root.
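Alternatively (again just a sketch based on the paths in the question): skip activation altogether and call the virtual environment's own pip by path; it installs into that environment regardless of which user's shell invokes it:
sudo -u "$user" python3 -m venv /srv/www/www.mydomain.com/.env
sudo -u "$user" /srv/www/www.mydomain.com/.env/bin/pip install wheel uwsgi
sudo -u "$user" /srv/www/www.mydomain.com/.env/bin/pip install -r requirements.txt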
I am attempting to install Portia, a python app from Github: https://github.com/scrapinghub/portia
I use the following steps at the command line:
set up new virtualenv 'portia' in Mac terminal
git clone https://github.com/scrapinghub/portia.git
follow readme instructions:
cd slyd
pip install -r requirements.txt
run Portia
cd slyd
twistd -n slyd
But every time I attempt the last step to run the program, I get the following error:
ImportError: No module named scrapy
Any idea why this error is occurring? All previous steps seem to install correctly. Is it an error earlier in my install process?
Thanks!
I don't have the rep to upvote Alagappan's answer but he's correct. Also, if you're as inexperienced as I am, you may need further clarity on this.
You have to create, activate and navigate into the virtualenv before installing anything (including cloning Portia from GitHub). Here's the whole thing working from start to finish:
1: cd to wherever you’d like to store your project...
and Install virtualenv:
$ pip install virtualenv
2: Create the virtual environment. (I called mine “portia” but this can be anything.):
$ virtualenv portia
3: Activate the virtual environment you created (change the path to reflect the name you used here if not “portia”.):
$ source portia/bin/activate
At this point your terminal should display the virtualenv name in parentheses before the standard directory path prompt:
(name-of-virtualenv) [your-machine]:[current-directory]: [user]$
...and if you list the files within your pwd you’ll see the name of your virtualenv there.
4: cd into your virtualenv (“portia” for me):
$ cd portia
5: Now you can clone portia from github into your virtualenv...
$ git clone https://github.com/scrapinghub/portia
6: cd into the cloned portia/slyd...
$ cd portia/slyd
7/8: pip install twisted and Scrapy...
$ pip install twisted
$ pip install Scrapy
Your virtualenv should still be activated and you should still be in [virtualenv-name]/portia/slyd
9: Install the requirements.txt:
$ pip install -r requirements.txt
10: Run slyd:
$ twistd -n slyd
--- No more scrapy error! ---
Another Installation Method For Portia: Using Vagrant
Here is the method that let me install Portia with ease. It works with Mac, Windows and Linux. With a few commands and clicks, you'll get a fully functional web scraper.
Things Needed:
VirtualBox
Vagrant
Clone the repo for Portia or download the zip file.
Additional Steps To Take:
Install VirtualBox.
Install Vagrant
Open your terminal and navigate to where you cloned the Portia repo or where you've extracted it (in case of a zip file).
Then run the command vagrant up. This will download and set up a VirtualBox guest VM for you, install all the necessary requirements for Portia, and install Portia from start to finish.
After the above process, you may now open your browser and navigate to
http://the-virtualbox-ip:8000/static/main.html
And you're all set up.
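Condensed into commands, the whole procedure boils down to roughly this (assuming VirtualBox and Vagrant are installed and the Vagrantfile this answer relies on sits in the repo root):
cd portia      # the cloned repo (or the extracted zip), which contains the Vagrantfile
vagrant up     # downloads and provisions the guest VM, then installs Portia
# then browse to http://the-virtualbox-ip:8000/static/main.html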
It's quite simple: you just need to install the Python module scrapy, in the same way that the Twitter API requires setuptools
pip install scrapy
I suppose the issue you are facing is because of the virtualenv. Once you set up a new virtual environment, you need to run the activate script in order to start using it. In your case you'll have to run the following command:
$ source portia/bin/activate
On successful activation, your prompt will look like:
(portia) $
Can you check if you activated your virtual environment before you installed the packages using pip? I believe doing so will fix your issue.
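As a quick sanity check (assuming the environment is named portia as above), you can confirm the activation worked before installing anything:
$ source portia/bin/activate
(portia) $ which python     # should point into .../portia/bin/
(portia) $ pip --version    # the reported location should be inside the portia environment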