Execute `pipenv` commands from within a Python script

I am trying to automate the setup of a bunch of Python repos:
import os
import subprocess

from git import Repo
from git.exc import GitError

# (this snippet runs inside a setup function, hence the `return`)
repo = Repo(repository_path)
# switch to the `master` branch
try:
    repo.git.checkout("master")
except GitError:
    logger.error("Branch `master` does not exist.")
    return
# if the virtual environment directory does not exist, run `pipenv install`
if not repository_path.joinpath(".venv").exists():
    os.chdir(repository_path)
    subprocess.call(["pipenv", "install"])
I am executing this from another venv, and it seems that the packages are installed inside the current venv. In fact, I need to call pipenv update to update Pipfile.lock on a bunch of repos, but there are cases where the venv is not set up at all. Any hint on how to run pipenv from within a Python script without interfering with the current venv?
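One minimal sketch of a workaround, assuming the interference comes from pipenv inheriting the VIRTUAL_ENV variable of the venv this script runs in, and that pipenv honours PIPENV_IGNORE_VIRTUALENVS (the run_pipenv helper name is made up):
import os
import subprocess

def run_pipenv(repository_path, *args):
    # Copy the current environment, but drop the marker of the venv that
    # this script itself is running in, so pipenv does not reuse it.
    env = os.environ.copy()
    env.pop("VIRTUAL_ENV", None)
    env["PIPENV_IGNORE_VIRTUALENVS"] = "1"  # make pipenv use/create the repo's own venv
    # Run inside the target repository via cwd= instead of os.chdir().
    subprocess.check_call(["pipenv", *args], cwd=str(repository_path), env=env)

For example, run_pipenv(repository_path, "install") or run_pipenv(repository_path, "update").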

Related

Why are .pth files loaded twice in a Python virtual environment?

In the user site .pth files are processed once at interpreter startup:
$ echo 'import sys; sys.stdout.write("hello world\n")' > ~/.local/lib/python3.8/site-packages/hello.pth
$ python3.8 -c ""
hello world
And it's the same behaviour in the system site, e.g. /usr/local/lib/python3.8/site-packages/.
But in venv they are processed twice:
$ rm ~/.local/lib/python3.8/site-packages/hello.pth
$ /usr/local/bin/python3.8 -m venv .venv
$ source .venv/bin/activate
(.venv) $ echo 'import sys; sys.stdout.write("hello world\n")' > .venv/lib/python3.8/site-packages/hello.pth
(.venv) $ python -c ""
hello world
hello world
Why are path configuration files processed twice in a virtual environment?
Looks like it all happens in the site module (not so surprising). In particular in the site.main() function.
The loading of the .pth files happens either in site.addsitepackages() or in site.addusersitepackages(), depending on which folder the file is placed in. More precisely, both of these functions call site.addpackage(), which is where it actually happens.
In your first example, outside a virtual environment, the file is placed in the directory for user site packages. So the console output happens when site.main() calls site.addusersitepackages().
In the second example, within a virtual environment, the file is placed in the virtual environment's own site-packages directory. So the console output happens when site.main() calls site.addsitepackages() directly, and also a couple of lines earlier via site.venv(), which likewise calls site.addsitepackages() if it detects that the interpreter is running inside a virtual environment, i.e. if it finds a pyvenv.cfg file.
So in short: inside a virtual environment site.addsitepackages() runs twice.
As to the intention behind this behavior, there is a note in site.venv():
# Doing this here ensures venv takes precedence over user-site
addsitepackages(known_paths, [sys.prefix])
From what I can tell, this matters when the virtual environment has been configured to allow system site packages.
Maybe it could have been solved differently so that the path configuration files are not loaded twice.
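A quick way to see the effect of that ordering, assuming a venv created with --system-site-packages, is to compare sys.path with the user site directory from inside the venv (output paths differ per machine):
import site
import sys

# The venv's own site-packages should appear before the user site
# directory, which is what the early addsitepackages() call ensures.
print("user site:", site.getusersitepackages())
for path in sys.path:
    print(path)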

Pipenv: Multiple Environments

Right now I'm using virtualenv and am just switching over to Pipenv. Today, in virtualenv, I load different environment variables and settings depending on whether I'm in development, production, or testing, by setting DJANGO_SETTINGS_MODULE to myproject.settings.development, myproject.settings.production, or myproject.settings.testing.
I'm aware that I can set up a .env file, but how can I have multiple versions of that .env file?
I'm far from a Python guru, but one solution I can think of would be to create Pipfile scripts that run shell scripts to set PIPENV_DOTENV_LOCATION and run your startup commands.
Example Pipfile scripts:
[scripts]
development = "./scripts/development.sh"
Example development.sh:
#!/bin/sh
PIPENV_DOTENV_LOCATION=/path/to/.development_env pipenv run python test.py
Then run pipenv run development
You should create different .env files with different prefixes depending on the environment, such as production.env or testing.env. With pipenv, you can use the PIPENV_DONT_LOAD_ENV=1 environment variable to prevent pipenv shell from automatically loading the .env file, and combine this with export $(cat .env | xargs).
export $(cat production.env | xargs) && PIPENV_DONT_LOAD_ENV=1 pipenv shell would configure your environment variables for production and then start a shell in the virtual environment.
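If you would rather do the same from Python, here is a minimal stdlib-only equivalent of the export $(cat production.env | xargs) step, assuming the file holds simple KEY=value lines; load_dotenv and DOTENV_FILE are made-up names for this sketch:
import os

def load_dotenv(path):
    # Read KEY=value lines and put them into the environment,
    # skipping blank lines and comments.
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

# Pick the file per environment, e.g. production.env or testing.env.
load_dotenv(os.environ.get("DOTENV_FILE", "development.env"))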

virtualenvwrapper: how to update project path?

When I move a project folder, I have to manually update the project path in the .project file to get the workon command to work. Is it possible to update the path automatically?
According to the docs you can use setvirtualenvproject. This will automatically move you to the project folder if you use the workon command:
bono~$: setvirtualenvproject ~/.virtualenvs/your-virtual-env/ ~/path/to/your/project
Or, as beruic mentioned, it's easier to activate the environment and move to your desired working directory first. Please note that this does not always work on my system, but it is a lot easier if it does work for you:
$ workon your-virtual-env
$ cd ~/path/to/your/project
$ setvirtualenvproject
In the future it might also be handy to specify the project path for the virtualenv on creation. You have to specify the project with the -a flag.
The -a option can be used to associate an existing project directory with the new environment.
You can use it something like this:
bono~$: cd ~/your/project
bono~$: mkvirtualenv my-project -a $(pwd)
The next time you use workon you will automatically be moved to your project directory.
Alternative
If you want to automatically detect directory changes and activate the corresponding virtualenv then and there, you can have a look at this post. It's a bit too extensive to go into detail here, but I think you can find what you're looking for there if that's what you meant.
You can just activate your virtual environment, go to the folder you want as project folder and call setvirtualenvproject:
$ workon [your_project]
$ cd [desired_project_folder]
$ setvirtualenvproject
Then the current folder will be set as project folder in the current virtualenv.
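Since workon only reads the project path from the .project file inside the virtualenv directory, the update after moving a project can also be scripted; a rough sketch with placeholder paths and a made-up helper name:
from pathlib import Path

def set_project_path(venv_dir, project_dir):
    # Overwrite the .project file that workon/cdproject consult.
    Path(venv_dir, ".project").write_text(str(Path(project_dir).resolve()) + "\n")

set_project_path(Path.home() / ".virtualenvs" / "my-project",
                 Path.home() / "path" / "to" / "project")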

Installing Django app from GitHub [duplicate]

This question already has answers here:
How to run cloned Django project?
(7 answers)
create virtualenv
create project (chat)
follow the instructions at https://github.com/qubird/django-chatrooms, after which there is a src folder in the root of the virtualenv
navigate to virtualenv/src/chatrooms and run python setup.py install; this installs the app folder, with all its files and folders, at virtualEnv/src/chatrooms/chatrooms
How do I get this to install to my project, not to virtualEnv/src/chatrooms/chatrooms? I also checked
Can't install Django app from git
and
how can i download code from git hub using command line but am still stuck.
Just follow these steps:
cd into the directory where you want your project to store your source code, e.g. home/.
Then run django-admin startproject chat
This will create a chat directory in your current directory
Now cd to the chat directory.
run virtualenv env
This will create a directory named env. Now just activate the virtualenv by running source env/bin/activate (if you are in the chat dir).
Now that you have your virtualenv ready and activated, just install all your apps by running pip install … and you are ready to go.
And don't worry about the env folder and its content or where your installed app code goes (until you want to change something in the installed app, which is usually not the case).
All you have to check is whether your installed app works.
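For the pip step above, a hedged sketch of installing the app straight from its GitHub repository into the active virtualenv (so nothing ends up under src/), relying on pip's git+ VCS support:
import subprocess
import sys

# Use the interpreter of the active virtualenv so the package lands in
# that environment's site-packages rather than in a src/ checkout.
subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "git+https://github.com/qubird/django-chatrooms.git",
])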
The method below installs directly to the project-level directory with no need to manually move files, i.e. it is not installed to src/chatrooms/chatrooms:
1. create virtualenv (optional)
2. create project
3. cd to the project
4. run git init
5. run git clone "{protocol:url}"
6. add the app to settings and its urls to the main URLconf file

How to use virtualenvwrapper in Supervisor?

When I was developing and testing my project, I used to use virtualenvwrapper to manage the environment and run it:
workon myproject
python myproject.py
Of course, once I was in the right virtualenv, I was using the right version of Python, and other corresponding libraries for running my project.
Now, I want to use Supervisord to manage the same project as it is ready for deployment. The question is: what is the proper way to tell Supervisord to activate the right virtualenv before executing the script? Do I need to write a separate bash script that does this and call that script in the command field of the Supervisord config file?
One way to use your virtualenv from the command line is to use the python executable located inside your virtualenv.
For me, my virtualenvs are in the .virtualenvs directory. For example:
/home/ubuntu/.virtualenvs/yourenv/bin/python
There is no need to workon.
For a supervisor.conf managing a Tornado app I do:
command=/home/ubuntu/.virtualenvs/myapp/bin/python /usr/share/nginx/www/myapp/application.py --port=%(process_num)s
Add your virtualenv/bin path to your supervisord.conf's environment:
[program:myproj-uwsgi]
process_name=myproj-uwsgi
command=/home/myuser/.virtualenvs/myproj/bin/uwsgi
--chdir /home/myuser/projects/myproj
-w myproj:app
environment=PATH="/home/myuser/.virtualenvs/myproj/bin:%(ENV_PATH)s"
user=myuser
group=myuser
killasgroup=true
startsecs=5
stopwaitsecs=10
First, run
$ workon myproject
$ dirname `which python`
/home/username/.virtualenvs/myproject/bin
Add the following
environment=PATH="/home/username/.virtualenvs/myproject/bin"
to the related supervisord.conf under the [program:blabla] section.
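If you would rather not rely on which, the same bin path can be read from the interpreter itself; a tiny sketch (the printed path is only an example):
import os
import sys

# Run this with the virtualenv activated (workon myproject) to get the
# directory to put into supervisord's PATH.
print(os.path.dirname(sys.executable))  # e.g. /home/username/.virtualenvs/myproject/bin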
