Virtual Environments: python -m venv vs. echo layout python3

I'm fairly new to python, but have built a few small projects. I have been taught, and have always used, the following commands to start a virtual environment: echo layout python3 > .envrc and then direnv allow.
What are the differences or advantages to using python -m venv <virtualenv name> versus echo layout?

Those two commands do entirely different things.
venv
The python -m venv <env_name> command creates a virtual environment as a subdirectory full of files in your filesystem. When it's done, a new virtual environment is sitting there ready for you to activate and use, but this command doesn't actually activate it yet.
Activating the virtual environment so you can use it is a separate step. The command to do this depends on which operating system and which shell you're using (see the "Command to activate virtual environment" table in the venv documentation).
The activation command alters only your current command-line shell session. This is why you have to re-activate the virtual environment in every shell session you start. This kind of annoyance is also what direnv exists to solve.
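For concreteness, here is a minimal sketch of that two-step create-then-activate flow on Linux/macOS with bash or zsh (the environment name myenv and the package requests are just placeholders; on Windows the activation command is myenv\Scripts\activate instead):
python3 -m venv myenv            # step 1: create the environment (does not activate it)
source myenv/bin/activate        # step 2: activate it, for this shell session only
python -m pip install requests   # installs into myenv rather than the system Python
deactivate                       # leave the environment again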
direnv and .envrc
First, about that echo command...
In MS-DOS, Unix/Linux, and macOS alike, echo layout python3 simply prints the string "layout python3".
The > redirects the echo command's output to a file, in this case .envrc. The redirection creates the file if it doesn't already exist, and then replaces its contents (if any) with that string. The end result is a file in your current working directory containing just:
layout python3
The .envrc file, and direnv allow
.envrc is a config file used by the direnv application. Whenever you cd into a directory containing a .envrc file, direnv reads it and executes the direnv instructions found inside.
direnv allow is a security feature. Since malicious .envrc files could be hidden almost anywhere (especially in world-writable directories like /var/tmp/), you could cd into a seemingly innocent directory and get a nasty surprise from someone else's .envrc land mine. The allow command specifically white-lists a directory's .envrc file, and apparently un-lists it if it discovers the .envrc file has changed since it was allowed.
Finally, back to direnv
I don't use direnv, but layout <language> is a direnv command to adjust your environment for developing in language, in this case activating a Python 3 virtual environment. The docs hint that it's more "helpful" than just that, but they don't go into any detail. (Also, you could have written your own direnv function called python3 that does something completely different.)
The goal of all that is to automatically enable your Python virtual environment as soon as you cd into its directory. This eliminates one kind of human error, namely forgetting to enable the virtual environment. For details, see Richard North's "Practical direnv", especially the "Automatic Python virtualenv switching" section.
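Putting that together, a minimal sketch of the typical direnv setup (the project path is just a placeholder):
cd ~/my-project                  # hypothetical project directory
echo 'layout python3' > .envrc   # tell direnv to manage a Python 3 virtualenv here
direnv allow                     # whitelist this particular .envrc
# From now on, cd-ing into ~/my-project activates the environment,
# and cd-ing out of it unloads it again automatically.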
(Dis-)Advantages and Opinions
If that's the kind of mistake you've made frequently, and you trust that the direnv command will never fall prey to a malicious .envrc file (or otherwise "helpfully" mess up something you're working on), then it might be worth it to you.
The biggest down-side I see to direnv (aside from the security implications) is that it trains you to forget about a vital step in using Python virtual environments... namely, actually using the virtual environment. This goes double for any other "help" it silently provides without telling you. (The fact that I keep putting "help" in quotes should suggest what I think of utilities like this.)
If you ever find yourself working somewhere direnv isn't installed, the odds are good that you'll forget to activate your virtual environments, or forget whatever else direnv has been doing for you. And the odds are even better that you'll have forgotten how to do it.

Related

Environment variables in pyenv-virtualenv

I have created a virtual environment with pyenv virtualenv 3.5.9 projectname for developing a django project.
How can I set environment variables for my code to use?
I tried to add the environment variable DATABASE_USER in /Users/developer/.pyenv/versions/projectname/bin/activate
like this:
export DATABASE_USER="dbuser"
When I try to echo $DATABASE_USER, an empty string gets printed.
I then tried installing zsh-autoenv.
Now I can echo $DATABASE_USER and get the value set in the .autoenv.zsh file.
But I can't seem to get the environment variable to be available to my django code:
If I run os.getenv('DATABASE_USER', '') in the Python shell inside the virtualenv, I get ''.
What could be wrong? Are the zsh-autoenv variables just available to the zsh shell and not to python manage.py shell?
I was wondering a similar thing and stumbled across a Reddit thread where someone else had asked the same question and eventually followed up with some interesting finds.
As you noticed, pyenv doesn't seem to actually use the bin/activate file. They didn't say what the activation method is, but like you, adding environment variables there yielded no results.
In the end, they wound up installing autoenv, which bills itself as directory-based environments. It allows you to create an .env file in your directory, and when you cd to that directory, it runs the .env file. You can use it for environment variables, or you could add anything else to it.
I noticed on the autoenv page that they say you should probably use direnv instead, as it has better features and is higher quality software. Neither of these are Python or pyenv specific, and if you call your python code from outside of the directory, they may not work. Since you're using pyenv, you're probably running your code from within the directory anyway, so I think there's a good chance either one could work.
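As a hedged sketch of how that looks with direnv (the variable values are placeholders): put the exports in an .envrc at the project root, allow it once, and any process started from that shell, including python manage.py shell, inherits them.
# .envrc in the Django project root
export DATABASE_USER="dbuser"
export DATABASE_PASSWORD="changeme"   # placeholder value

# one-time approval, then a quick check:
#   direnv allow
#   python -c "import os; print(os.getenv('DATABASE_USER'))"   # -> dbuser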

Make VS Code terminal match debug environment on a Mac

I'm teaching a beginners' Python class; the environment is Anaconda, VS Code and git (plus a few extras from a requirements.txt).
For the Windows students this runs perfectly; however, the Mac students have an existing Python (2.7) to contend with.
For the Windows students (i.e. those with a Windows computer), the environment they debug in matches their console environment. The Mac students, however, seem to be locked to their 2.7 environment.
I've tried aliasing, as suggested here and here
alias python2='python'
alias python='python3'
alias pip2='pip'
alias pip='pip3'
I've modified the .bash_profile file
echo 'export PATH="/Users/$USER/anaconda3/bin:$PATH"' >>.bash_profile
Both of these seem to work perfectly to modify their Terminal environments when launched externally to VS Code. Neither seems to do anything to the environment launched from [cmd]+[`].
I've also tried conda activate base in the terminal, which seems to have no effect on a python --version or a which python
They can run things using python 3, but that means that they need to remember that they are different to the other 2/3 of the students. It's othering for them, and more work for me!
The students are doing fine, launching things from their external terminal, but it would streamline things greatly if the environments could be as consistent as possible across the OSs.
Whilst they are complete beginners, they can run a shell script. They currently have one that installs pip requirements and vs code extensions.
Is there a configuration that will keep the terminal in line with the debug env?
In my opinion the best practice is to create Python virtual environments (personally I love using conda environments, especially on a Mac where you're stuck with an unremovable old Python version). Then VS Code will automatically (after installing the very powerful Python extension) find all your virtual environments. This way you will teach your students the good practice of handling the Python zoo, a.k.a. package incompatibilities. The terminal environment settings will be consistent with VS Code, without depending on aliases that are no longer needed. The virtual-environment workflow is also the same across operating systems, so you will be more consistent and remove unnecessary confusion between different students.
The additional bonus of virtual environments is that you can create one exactly according to your requirements.txt and switch from one to another with a single click (in the terminal it takes two commands: deactivate -> activate).
You can read more about how to handle Python virtual environments on the VS Code site.
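For example, a minimal sketch of creating such an environment from the class requirements.txt (the environment name and Python version are arbitrary choices):
conda create -n class-env python=3.8   # isolated environment with its own Python
conda activate class-env
pip install -r requirements.txt        # class dependencies go into class-env, not the system Python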
If the aliases are only run once and are not made persistent in .bash_profile, python will keep targeting the default interpreter rather than the expected conda python3 interpreter.
Try to symlink conda's python3 executable to capture the python namespace
ln -sf /Users/$USER/anaconda3/bin/python3 /Users/$USER/anaconda3/bin/python
This will create or update the symlink. Use the same approach for pip and pip3.
The Python extension in VS Code lets you select which interpreter will be used to run the scripts.
It is in the settings under "python.pythonPath"; just set it to point to the interpreter of your choice.
It can be set on a per-project basis as well (which is how you ensure that a project with a virtual environment executes using that interpreter and its packages): just select Workspace in the settings pane and add the desired Python interpreter there.
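As a hedged illustration, the per-workspace setting can be written from the shell like this (the interpreter path is a placeholder, and newer versions of the Python extension use python.defaultInterpreterPath rather than python.pythonPath):
mkdir -p .vscode
cat > .vscode/settings.json <<'EOF'
{
    "python.pythonPath": "/Users/student/anaconda3/bin/python"
}
EOF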

activating virtualenv with direnv doesn't activate virtualenv

I'm using direnv to source my virtualenv when I change into the directory.
/project
    .envrc
    /env        <--- my virtualenv

My .envrc contains:
source env/bin/activate
When I change directory into /project I get the output:
direnv: loading .envrc
direnv: export +VIRTUAL_ENV -PS2 ~PATH
It prepends the env directory to my PATH environment variable, so which python and which pip both point to the python and pip inside my env directory:
=> which python
/USER/project/env/bin/python
=> which pip
/USER/project/env/bin/pip
However it doesn't seem to run source env/bin/activate as I expect it to. I expect it to activate my virtualenv by adding the virtualenv name (env) to my CLI prompt and by giving me access to the deactivate command; neither of those happens. Is there something I'm misunderstanding about how direnv and virtualenv work? I'm new to Python, so I'm not sure if there are existing tools to do something like this.
I think it's important to understand how direnv works to form a proper mental model first: direnv doesn't load the .envrc directly in the current shell. Instead, it starts a new bash shell, executes the .envrc in there, records the changes in the environment, and exports the diff back into the current shell.
What is happening here is that:
virtualenv is using $PS1 to set the prompt. This is a local variable and thus not re-exported. direnv also filters PS1 because it causes segfaults on the old macOS bash when it's unset.
The deactivate() function is not exported from the bash sub-shell as it's not an environment variable.
In practice the activation worked as you noticed. python is in the right path and running pip or easy_install is going to install things in the virtualenv. deactivation is not necessary as direnv will automatically unload the environment when cd-ing out of the directory.
To restore the custom prompt, there is more info available on the wiki: https://github.com/direnv/direnv/wiki/Python#restoring-the-ps1
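The gist of that wiki suggestion is to rebuild the prompt prefix yourself from $VIRTUAL_ENV, which direnv does export; a minimal sketch for ~/.bashrc (the function name is illustrative):
show_virtual_env() {
  if [[ -n "$VIRTUAL_ENV" ]]; then
    echo "($(basename "$VIRTUAL_ENV")) "
  fi
}
PS1='$(show_virtual_env)'"$PS1"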
There is a "hidden" feature to do what you want in direnv. You have to take a look at the toolbox that direnv loads for you to use in .envrc files. You can use the layout command with python (layout python3) to activate a virtualenv on entering the directory and deactivate it when exiting the directory. It will even take care of creating the virtualenv the first time.
Also take a look at source_up, which keeps loading .envrc files from higher up in the file system. I start all my projects by creating a .envrc file with the following:
layout python3
source_up
This will create, activate and deactivate a Python virtualenv automatically, and keep reading variables from higher-level .envrc files. Environment variables for the current project only go in the local .envrc.

virtualenvwrapper: when does `workon` change to the project directory?

I use virtualenvwrapper to manage my environments. I create my projects with the -a <path-to-project> argument and PROJECT_HOME is not set, because my projects don't share a common path. VIRTUALENVWRAPPER_WORKON_CD is set to 1 though and a valid .project file exist in the virtual environments.
When I use the workon command, it only changes the working directory to the path of the project sometimes, while at other times, the directory stays the same, despite the environment being activated correctly.
So when and how does the directory change through workon happen? And are there things I have to do or set for it to work?
I have a small solution, albeit a kinda hacky one.
Go to the folder that the $VIRTUALENVWRAPPER_HOOK_DIR variable points to. You might need to be in a virtual environment to see that variable, but usually it has the same value as $WORKON_HOME.
In this folder you should see a script called postactivate
Add this as the last line in the script: cdproject
Now every time you type workon <project_name>, this script runs after the virtual environment is activated, and your working directory switches to the directory for that project.
For more life-cycle hooks, see here!
NOTE
I didn't fully test this to make sure it works no matter how the project was created, but to be sure it works, I recommend creating your virtual environments with mkvirtualenv -a <path-to-project> <env_name>, or, if the environment already exists, activating it, going to its project folder and running setvirtualenvproject. The next time you do workon ..., the script will kick in and take you to your project folder.
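A hedged example of both routes (the paths and names are placeholders):
# create a new env that is already linked to a project directory:
mkvirtualenv -a ~/code/myproject myproject-env

# or link an existing, activated env to the current directory:
workon myproject-env
cd ~/code/myproject
setvirtualenvproject        # record the current dir as this env's project dir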
Addendum
Since this is a shell script, you can do fancier tweaks than just that one line. For example, you might want to only do something if the activated project matches a certain pattern, or is part of a certain group of projects. Check out some of the $VIRTUALENVWRAPPER_* variables to see what other useful information you can get.
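For instance, a hedged sketch of a postactivate hook that only auto-cds for environments following a made-up naming convention:
# $VIRTUALENVWRAPPER_HOOK_DIR/postactivate -- sourced after every `workon <env>`
env_name=$(basename "$VIRTUAL_ENV")
if [[ "$env_name" == dj-* ]]; then
    cdproject   # virtualenvwrapper helper: jump to the env's recorded project dir
fi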
setvirtualenvproject is now called setprojectdir. To use the current directory, run setprojectdir .

Can I move a virtualenv?

This question is not a duplicate.
It pertains not just to renaming a virtual environment, but to actually moving it to a different directory, including, potentially, a different user's directory.
This is not the same as merely renaming a virtual environment, especially to people unfamiliar with virtualenvs.
If I create a virtualenv, and I move it to a different folder, will it still work?
$ virtualenv -p /usr/bin/python3 /home/me/Env/my-python-venv
$ source Env/my-python-venv/bin/activate
(my-python-venv) $
...later that day, the virtual environment MOVED...
(my-python-venv) $ deactivate
$ mkdir -p /home/me/PeskyPartyPEnvs
$ mv /home/me/Env/my-python-venv /home/me/PeskyPartyPEnvs/
Question:
Will this work?
$ source /home/me/PeskyPartyPEnvs/my-python-venv/bin/activate
(my-python-venv) $ /home/me/PeskyPartyPEnvs/my-python-venv/bin/pip3 install foaas
I mean this as less of a question about the wisdom of trying this (unless that wisdom is humorous, of course), and more about whether it's possible. I really want to know whether it's possible to do in Python 3, or whether I just have to suck it up and clone it.
Can I just mv a virtualenv like that without sadness? I do want to avoid sadness.
Yes. It is possible to move it on the same platform. You can use --relocatable on an existing environment.
From --help:
--relocatable -- Make an EXISTING virtualenv environment relocatable.
This fixes up scripts and makes all .pth files relative.
HOWEVER, this does NOT seem to change the activate script, and rather only changes the pip* and easy_install* scripts. In the activate script, the $VIRTUAL_ENV environment variable is hardcoded as the original /path/to/original/venv. The $VIRTUAL_ENV variable is used to set the PATH of your active environment too, so it must be changed to the new location in order to call python, pip, etc. without an absolute path.
To fix this issue, you can change the $VIRTUAL_ENV environment variable in the activate script (for example using sed), and everything should be good to go.
An example of usage:
$ cd ~/first
$ virtualenv my-venv
$ grep 'VIRTUAL_ENV=' my-venv/bin/activate
VIRTUAL_ENV="/home/username/first/my-venv"
$ virtualenv --relocatable my-venv
Making script my-venv/bin/easy_install relative
Making script my-venv/bin/easy_install-2.7 relative
Making script my-venv/bin/pip relative
Making script my-venv/bin/pip2 relative
Making script my-venv/bin/pip2.7 relative
### Note that `activate` has not been touched
$ mkdir ~/second
$ mv my-venv ~/second
$ cd ~/second
$ grep 'VIRTUAL_ENV=' my-venv/bin/activate
VIRTUAL_ENV=/home/username/first/my-venv
### (This variable hasn't been changed, it still refers to the old, now non-existent directory!)
$ sed -i -e 's|username/first|username/second|' my-venv/bin/activate
## sed can be used to change the path.
## Note that the `-i` (in place) flag won't work on all machines.
$ source my-venv/bin/activate
(my-venv) $ pip install foaas
...
(my-venv) $ python
[...]
>>> import foaas
Hooray, now you can install things and load them into your newly located virtual environment.
For Python 3.3+ (with new venv built-in module)
Short Answer (regardless of version):
There's no clean, direct way to move a virtual environment
Just recreate, it's easy!!
Long Answer:
As of Python v3.3, the standard library includes a built-in venv module that covers the core functionality of the third-party virtualenv package.
The --relocatable option mentioned in other answers has not been included in venv, and currently there is no good, safe way that I'm aware of to either rename or relocate a Python virtual environment.
However, it is fairly simple to recreate a virtual environment, with all its currently installed packages. See this answer, or see the section below. During the process you can recreate the new environment in whatever location and with whatever name you desire.
In the answer linked above, the author mentions some third-party packages which may support direct renames or moves. If you are settled on pursuing a way to move a virtual environment, you could look into whether those work with venv as well.
Note: that answer focuses on virtualenv rather than venv. See the next section for how to translate.
venv vs. older virtualenv command syntax
The command to use venv is:
python -m venv
rather than just virtualenv, which installs as a command in the original package. Here "python" refers to however you run your Python executable, which could be a variety of things, such as:
python
py, or py -3.7, or similar (the Python Launcher for Windows, available for Python 3.3+ and bundled with Python for Windows; a separately installable py launcher also exists for Linux [and macOS?])
python3 (the convention for Linux environments that have Python 2 and 3 installed side by side)
If you are having issues, use the absolute path to the python executable you want to run: e.g. c:\program files\python37\python.exe
If you are unsure which version is being run, you can always python --version to find out.
How to recreate a virtual environment
Creating/recreating a virtual environment is easy and should become second nature after you work with them for a bit. This process mirrors what you would do to distribute your script as a package (with its dependencies) in the first half, and then what someone would do to install your script/package for further development.
First, get an updated list of what is in the virtual environment. With it active, get the Python version it uses and save out the list of dependencies to a file.
Use python --version with the virtual environment activated to see what version of Python it is using.
This is for clarity - you may want to update the Python version for various reasons - at least to the latest patch version
For example, if the existing venv is using Python v3.7.4, but now v3.7.6 is out, use v3.7.6 instead, which should include only non-breaking security and bug fixes.
Use python -m pip freeze > requirements.txt to create the list of current package dependencies and put them into the requirements.txt file. This command works in Linux or the Git Bash for sure - not 100% sure about Powershell or Command Line in Windows.
Now create a new virtual environment and then add the dependencies from the old one.
Make your new venv.
Make sure you are using the correct version of python that you want to install to the venv.
If you want it to be exactly the same Python version:
While in the old venv, type "python --version", then make sure you create the new venv with that version of the python command.
For the new venv folder entry in the command:
Either add an absolute or relative path to the desired final folder location.
Use python -m venv my_new_venv to create a new virtual environment in the current working directory in a new my_new_venv folder.
The name of the venv folder will be the name of the venv (what shows up in the prompt when it is activated).
Install your dependencies from the requirements.txt file.
python -m pip install -r requirements.txt
You might need to reinstall local packages that are in development mode.
Note, if you ever need to see the specific location a package is installed to, use:
python -m pip list -v
The -v or "verbose" option will add some extra information about each package that is installed, including the path it is installed in. This is useful to make sure you are keeping virtual, user, and system installed packages straight.
At this point you can just delete the old venv folder and all its contents. I recommend using a GUI for that - file deletions are often permanent from the Linux command line, and a small typo can be bad news.
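Putting those steps together, here is a minimal sketch of the whole recreate-in-a-new-location flow on Linux/macOS, reusing the paths from the question:
# in the OLD environment:
source /home/me/Env/my-python-venv/bin/activate
python --version                           # note the Python version in use
python -m pip freeze > requirements.txt    # snapshot the installed packages
deactivate

# create the NEW environment where you actually want it, then restore the packages:
python3 -m venv /home/me/PeskyPartyPEnvs/my-python-venv
source /home/me/PeskyPartyPEnvs/my-python-venv/bin/activate
python -m pip install -r requirements.txt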
BUT ALAS:
No, you can't simply mv. There are workarounds, but it might be easier to reinstall.
(my-python-venv)$ /home/me/PeskyPartyPEnvs/pip3 install foaas
zsh: /home/me/PeskyPartyPEnvs/pip3: bad interpreter: /home/me/Env/my-python-venv/bin/python3: no such file or directory
(my-python-venv)$ deactivate
$
... presses enter a lot in frustration, and the following works
$
$
$ pip3 search foaas
Except it is not from my-python-venv, ergo sadness.
Want to mv your virtualenv and use it, otherwise unmodified?
Short Answer:
Well, ya can't - not cleanly, anyway. The --relocatable argument to virtualenv appears to allow you to do this, but (as noted above) it doesn't fix everything.
YES, YOU CAN! (In Windows)
The workaround is easy: just move your virtual environment anywhere, then edit activate.bat inside the Scripts\ folder:
Move the virtual environment to the desired directory.
Right-click and edit activate.bat located at venv_folder\Scripts.
Change VIRTUAL_ENV variable from:
set VIRTUAL_ENV=C:\old_directory\venv_name
into
set VIRTUAL_ENV=C:\new_directory\venv_name
Save the edited batch file, and that's it!
NOTE: My solution should work and save Windows users from setting up new virtual environments. I doubt this will work on other operating systems, since .bat files are Windows-specific.
Yes, this should be possible if you haven't done anything that depends on the current directory of the virtualenv.
However, if you have the choice, the best thing to do is to create new virtualenv and start using the new virtualenv instead. This is the safest choice and least likely to cause issues later.
The documentation does mention that:
Each virtualenv has path information hard-coded into it,
For example, if you have run setvirtualenvproject then it won't be able to switch to the right directory after you run workon ... so in that case you'd need to fix that manually.
In general a virtualenv is little more than a directory with the necessary Python interpreter files plus packages that you need.
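If you do move one anyway, a quick (hedged) way to see which files still hard-code the old path, reusing the question's paths:
grep -rl "/home/me/Env/my-python-venv" /home/me/PeskyPartyPEnvs/my-python-venv/bin/
# typically flags bin/activate, bin/activate.csh, bin/activate.fish and the pip/easy_install scripts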
Using answers from this and other threads on a similar topic, I've made a bash script that, placed in and sourced from within the virtualenv directory itself, will help with your virtualenv moves.
After doing virtualenv --relocatable yourenv you'll need to change your VIRTUAL_ENV variable every time you move the directory, so if you don't want to change it manually, use this.
#!/bin/bash
# Source this from inside the (moved) virtualenv directory itself, e.g. `. ./fix-venv.sh`
# (the script name is just an example), so the final activation persists in your shell.
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"   # absolute path of this directory
EXISTING=$(grep 'VIRTUAL_ENV=' bin/activate)    # the old VIRTUAL_ENV=... line
NEWDIR=VIRTUAL_ENV=\"$DIR\"                     # the replacement line
sed -i -e "s|$EXISTING|$NEWDIR|" bin/activate   # rewrite the hardcoded path
source bin/activate
I hope it helps.
I wrote a venv-move script.
The first argument is the path to the venv. It deletes any __pycache__ under that path.
It detects the old path, and replaces it with the current path, after confirming. It seems to work okay, even when moving to a different machine of the same type.
It would make sense to re-write this in Python, but the program would be longer.
#!/bin/bash -eu
# Usage: venv-move <path-to-venv>  (run from the venv's new parent directory)
venv=$1
# Extract the old path that is hard-coded in bin/activate.
old=`perl -ne '/VIRTUAL_ENV="(.*?)"/ && print "$1\n"' "$venv/bin/activate"`
new=$PWD/$venv
# Stale byte-code caches may embed the old path; remove them.
find "$venv" -name __pycache__ | xargs rm -rf --
# Every file that still mentions the old path.
files=`fgrep -r "$old" "$venv" -l`
echo "replace $old with $new in:"
echo "$files"
read -p "[yn] ? " YN
[ "$YN" = y ]
# Rewrite the old path to the new one in all of those files.
sed -i "s:$old:$new:g" $files
TL;DR
virtualenv-clone comes as part of virtualenvwrapper
virtualenv-clone /path/to/old/venv /path/to/new/venv
Alternatively
You could also try cpvirtualenv
cpvirtualenv /path/to/old/venv /path/to/new/venv
But cpvirtualenv expects /path/to/old/venv to exist inside $WORKON_HOME, and it fails if it doesn't. Since it just calls virtualenv-clone, you may as well use that directly and avoid errors like:
mark#Desktop:~/venvs$ cpvirtualenv ./random/ $WORKON_HOME/random
Copying random as /home/mark/.virtualenvs/venvs/random...
Usage: virtualenv-clone [options] /path/to/existing/venv /path/to/cloned/venv
virtualenv-clone: error: src dir '/home/mark/.virtualenvs/venvs/random' does not exist
Warning as per virtualenvwrapper documentation
Copying virtual environments is not well supported. Each virtualenv has path information hard-coded into it, and there may be cases where the copy code does not know it needs to update a particular file. Use with caution.
What does it actually do?
As per the virtualenv-clone PyPI page:
A script for cloning a non-relocatable virtualenv.
Virtualenv provides a way to make virtualenv's relocatable, which could then be copied as we wanted. However, making a virtualenv relocatable this way breaks the no-site-packages isolation of the virtualenv as well as other aspects that come with relative paths and /usr/bin/env shebangs that may be undesirable.
Also, the .pth and .egg-link rewriting doesn't seem to work as intended. This attempts to overcome these issues and provide a way to easily clone an existing virtualenv.
It performs the following:
- copies the sys.argv[1] dir to sys.argv[2]
- updates the hardcoded VIRTUAL_ENV variable in the activate script to the new repo location (--relocatable doesn't touch this)
- updates the shebangs of the various scripts in bin to the new Python if they pointed to the old Python (version numbering is retained)
- it can also change /usr/bin/env python shebangs to be absolute too, though this functionality is not exposed at present
- checks sys.path of the cloned virtualenv and, if any of the paths are from the old environment, finds any .pth or .egg-link files within sys.path located in the new environment and makes sure any absolute paths to the old environment are updated to the new environment
- finally, it double-checks sys.path again and will fail if there are still paths from the old environment present
NOTE: This script requires Python 2.7 or 3.4+
