Why are .pth files loaded twice in a Python virtual environment? - python

In the user site .pth files are processed once at interpreter startup:
$ echo 'import sys; sys.stdout.write("hello world\n")' > ~/.local/lib/python3.8/site-packages/hello.pth
$ python3.8 -c ""
hello world
And it's the same behaviour in the system site, e.g. /usr/local/lib/python3.8/site-packages/.
But in venv they are processed twice:
$ rm ~/.local/lib/python3.8/site-packages/hello.pth
$ /usr/local/bin/python3.8 -m venv .venv
$ source .venv/bin/activate
(.venv) $ echo 'import sys; sys.stdout.write("hello world\n")' > .venv/lib/python3.8/site-packages/hello.pth
(.venv) $ python -c ""
hello world
hello world
Why are path configuration files processed twice in a virtual environment?

Looks like it all happens in the site module (not so surprising), in particular in the site.main() function.
The loading of the .pth files happens either in site.addsitepackages() or in site.addusersitepackages(), depending on which folder the file is placed in. More precisely, both of these functions call site.addpackage(), which is where the file is actually processed.
In your first example, outside a virtual environment, the file is placed in the directory for user site packages, so the console output happens when site.main() calls site.addusersitepackages().
In the second example, within a virtual environment, the file is placed in the virtual environment's own site packages directory. Here the console output happens twice: once when site.main() calls site.addsitepackages() directly, and once a couple of lines earlier via site.venv(), which also calls site.addsitepackages() if it detects that the interpreter is running inside a virtual environment, i.e. if it finds a pyvenv.cfg file.
So in short: inside a virtual environment, site.addsitepackages() runs twice.
As to what the intention is for this behavior, there is a note:
# Doing this here ensures venv takes precedence over user-site
addsitepackages(known_paths, [sys.prefix])
Which, from what I can tell, matters when the virtual environment has been configured to allow system site packages (e.g. created with --system-site-packages).
Maybe it could have been solved differently so that the path configuration files are not loaded twice.
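One way to confirm this yourself (a quick check of my own, not from the original answer) is to have the .pth line dump the call stack each time it is processed; inside a venv you should see two tracebacks, both passing through site.addsitepackages():
$ echo 'import traceback, sys; sys.stderr.write("".join(traceback.format_stack()) + "---\n")' > .venv/lib/python3.8/site-packages/trace.pth
$ python -c ""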

Related

zsh always show Python virtual env

I currently have this script to show my GitHub branch and virtual env:
setopt PROMPT_SUBST
autoload -Uz vcs_info
precmd() { vcs_info }
zstyle ':vcs_info:git:*' formats '(%b)'
MYPS1=''
MYPS1+='%F{green}'
MYPS1+='${${(%):-%n}:0:1}'
MYPS1+='#'
MYPS1+='${${(%):-%m}:(-4)}' # Get the last 4 chars of the machine name
MYPS1+=':'
MYPS1+='%F{yellow}'
MYPS1+='%1~' # Show only the name of the working directory or ~ if it is the home directory
MYPS1+='%F{magenta}'
MYPS1+='${vcs_info_msg_0_}' # Show git branch if any
MYPS1+='%f'
MYPS1+='%# '
PS1=$MYPS1
Sometimes I need to update my .zshrc so I run:
source ~/.zshrc
The problem is, whenever I reload my shell, I cannot see my Python virtual environment anymore even though it's still active.
# After activating virtual env
(my-ve-3.7.13) u#m1:repo-name(github-branch)%
# After reloading my zsh
u#m1:repo-name(github-branch)%
I use pyenv and virtualenvs.
How can I keep the virtual env name in my prompt?
Following @chepner's comment, I figured it out:
Use env to see the list of all environment variables; pyenv stores the active environment name in PYENV_VERSION.
Add it to the prompt, wrapped in parentheses to get the same look pyenv gives you.
...
MYPS1=''
MYPS1+='($PYENV_VERSION) '
MYPS1+='%F{green}'
...
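A small refinement of my own (not from the original answer): with PROMPT_SUBST you can make the prefix conditional, so the parentheses disappear entirely when no pyenv version is set. This assumes PYENV_VERSION is only set while an environment is active:
MYPS1+='${PYENV_VERSION:+($PYENV_VERSION) }'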

Pipenv: Multiple Environments

Right now I'm using virtualenv and just switching over to Pipenv. Today, with virtualenv, I load different environment variables and settings depending on whether I'm in development, production, or testing, by setting DJANGO_SETTINGS_MODULE to myproject.settings.development, myproject.settings.production, or myproject.settings.testing.
I'm aware that I can set an .env file, but how can I have multiple versions of that .env file?
I'm far from a Python guru, but one solution I can think of would be to create Pipenv scripts that run shell scripts to change the PIPENV_DOTENV_LOCATION and run your startup commands.
Example Pipfile scripts:
[scripts]
development = "./scripts/development.sh"
Example development.sh:
#!/bin/sh
PIPENV_DOTENV_LOCATION=/path/to/.development_env pipenv run python test.py
Then run pipenv run development
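The same pattern extends to the other environments; here is a minimal sketch (the script names and .env paths are just placeholders for your own layout):
[scripts]
development = "./scripts/development.sh"
production = "./scripts/production.sh"
testing = "./scripts/testing.sh"
with each script pointing PIPENV_DOTENV_LOCATION at its own file, e.g. scripts/production.sh:
#!/bin/sh
PIPENV_DOTENV_LOCATION=/path/to/.production_env pipenv run python test.py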
You should create different .env files with different prefixes depending on the environment, such as production.env or testing.env. With pipenv, you can use the PIPENV_DONT_LOAD_ENV=1 environment variable to prevent pipenv shell from automatically exporting the .env file and combine this with export $(cat .env | xargs).
export $(cat production.env | xargs) && PIPENV_DONT_LOAD_ENV=1 pipenv shell would configure your environment variables for production and then start a shell in the virtual environment.

What shebang to use for Python scripts run under a pyenv virtualenv

When a Python script is supposed to be run from a pyenv virtualenv, what is the correct shebang for the file?
As an example test case, the default Python on my system (OS X) does not have pandas installed. The pyenv virtualenv venv_name does. I tried getting the path of the Python executable from the virtualenv.
pyenv activate venv_name
which python
Output:
/Users/username/.pyenv/shims/python
So I made my example script.py:
#!/Users/username/.pyenv/shims/python
import pandas as pd
print 'success'
But when I tried running the script (from within 'venv_name'), I got an error:
./script.py
Output:
./script.py: line 2: import: command not found
./script.py: line 3: print: command not found
Although running that path directly on the command line (from within 'venv_name') works fine:
/Users/username/.pyenv/shims/python script.py
Output:
success
And:
python script.py # Also works
Output:
success
What is the proper shebang for this? Ideally, I want something generic so that it will point at the Python of whatever my current venv is.
I don't really know why calling the interpreter with the full path wouldn't work for you. I use it all the time. But if you want to use the Python interpreter that is in your environment, you should do:
#!/usr/bin/env python
That way you search your environment for the Python interpreter to use.
As you expected, you should be able to use the full path to the virtual environment's Python executable in the shebang to choose/control the environment the script runs in regardless of the environment of the controlling script.
In the comments on your question, VPfB & you find that the /Users/username/.pyenv/shims/python is a shell script that does an exec $pyenv_python. You should be able to echo $pyenv_python to determine the real python and use that as your shebang.
See also: https://unix.stackexchange.com/questions/209646/how-to-activate-virtualenv-when-a-python-script-starts
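As a side note (my own addition, not from the original answer), pyenv can also resolve the shim for you: pyenv which python prints the real interpreter behind the shim for the currently active environment, and that path can go straight into the shebang. The path below is just illustrative:
$ pyenv activate venv_name
$ pyenv which python
/Users/username/.pyenv/versions/venv_name/bin/python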
Try pyenv virtualenvs to find a list of virtual environment directories.
And then you might find that a shebang something like this:
#!/Users/username/.pyenv/python/versions/venv_name/bin/python
import pandas as pd
print 'success'
... will enable the script to work using the chosen virtual environment in other (virtual or not) environments:
(venv_name) $ ./script.py
success
(venv_name) $ pyenv activate non_pandas_venv
(non_pandas_venv) $ ./script.py
success
(non_pandas_venv) $ . deactivate
$ ./script.py
success
The trick is that if you call out the virtual environment's Python binary specifically, the Python interpreter looks around that binary's path location for the supporting files and ends up using the surrounding virtual environment. (See "How does virtualenv work?")
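A quick way to see that effect (my own sketch; the shebang path is illustrative, use the one for your environment): print sys.executable and sys.prefix, which should point inside venv_name no matter which environment is active in the shell.
#!/Users/username/.pyenv/versions/venv_name/bin/python
import sys
print(sys.executable)  # the virtualenv's own interpreter
print(sys.prefix)      # the virtualenv's root, not the system prefix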
If you need to use more shell than you can put in the #! shebang line, you can start the file with a simple shell script which launches Python on the same file.
#!/bin/bash
"exec" "pyenv" "exec" "python" "$0" "$#"
# the rest of your Python script can be written below
Because of the quoting, bash strips the quotes and execs pyenv exec python on the script, while Python sees that second line as a concatenation of string literals that just becomes the module docstring, so it is effectively ignored.
You can see more here.
To expand this to an answer, yes, in 99% of the cases if you have a Python executable in your environment, you can just use:
#!/usr/bin/env python
However, for a custom venv on Linux the same syntax did not work for me, since the venv creates a symlink to the Python interpreter it was created from, so I had to use the following:
#!/path/to/the/venv/bin/python
Essentially, whatever command you can use to call the Python interpreter in your terminal is what you put after the #!.
It's not exactly answering the question, but this suggestion by ephiement I think is a much better way to do what you want. I've elaborated a bit and added some more of an explanation as to how this works and how you can dynamically select the Python executable to use:
#!/bin/sh
#
# Choose the Python executable we need. Explanation:
# a) '''\' translates to \ in shell, and starts a Python multi-line string
# b) "" strings are treated as string concatenation by Python; the shell ignores them
# c) the "true" command ignores its arguments
# d) the exec calls (or the final exit) stop the shell before the closing '''
# e) reassigning __doc__ afterwards replaces the shell code with the real docstring
#
"true" '''\'
PREFERRED_PYTHON=/Library/Frameworks/Python.framework/Versions/2.7/bin/python
ALTERNATIVE_PYTHON=/Library/Frameworks/Python.framework/Versions/3.6/bin/python3
FALLBACK_PYTHON=python3
if [ -x "$PREFERRED_PYTHON" ]; then
    echo "Using preferred python $PREFERRED_PYTHON"
    exec "$PREFERRED_PYTHON" "$0" "$@"
elif [ -x "$ALTERNATIVE_PYTHON" ]; then
    echo "Using alternative python $ALTERNATIVE_PYTHON"
    exec "$ALTERNATIVE_PYTHON" "$0" "$@"
else
    echo "Using fallback python $FALLBACK_PYTHON"
    exec "$FALLBACK_PYTHON" "$0" "$@"
fi
exit 127
'''
__doc__ = """What this file does"""
print(__doc__)
import platform
print(platform.python_version())
If you want just a single script with a simple selection of your pyenv virtualenv, you may use a Bash script with your source as a heredoc as follows:
#!/bin/bash
PYENV_VERSION=<your_pyenv_virtualenv_name> python - "$@" <<EOF
import sys
print(sys.argv)
exit
EOF
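For reference, a quick check of my own (the script name run_in_venv.sh is a placeholder): because Python reads the program from stdin, sys.argv[0] is '-' and your arguments follow it.
$ ./run_in_venv.sh foo bar
['-', 'foo', 'bar']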
I did some additional testing. The following works too (note that the -S option requires a reasonably recent env; GNU coreutils gained it in version 8.30):
#!/usr/bin/env -S PYENV_VERSION=<virtual_env_name> python
/usr/bin/env python won't work, since it doesn't know about the virtual environment.
Assuming that you have main.py living next to a ./venv directory, you can point the shebang at the venv's interpreter:
#!venv/bin/python
Now you can do:
./main.py
Note that the relative path is resolved against the current working directory, not the script's location, so this only works when you run the script from the directory that contains venv.
Maybe you also need to check the file permissions:
sudo chmod +x script.py

Shell : workon not found [duplicate]

So, once again, I make a nice Python program which makes my life that much easier and saves a lot of time. Of course, this involves a virtualenv, made with the mkvirtualenv function of virtualenvwrapper. The project has a requirements.txt file with a few required libraries (requests too :D) and the program won't run without these libraries.
I am trying to add a bin/run-app executable shell script which would be in my path (symlink actually). Now, inside this script, I need to switch to the virtualenv before I can run this program. So I put this in
#!/bin/bash
# cd into the project directory
workon "$(cat .venv)"
python main.py
A file named .venv contains the virtualenv's name. But when I run this script, I get a "workon: command not found" error.
Of course, I have the virtualenvwrapper.sh sourced in my bashrc but it doesn't seem to be available in this shell script.
So, how can I access those virtualenvwrapper functions here? Or am I doing this the wrong way? How do you launch your python tools, each of which has its own virtualenv!?
Just source the virtualenvwrapper.sh script in your script to import the virtualenvwrapper's functions. You should then be able to use the workon function in your script.
And maybe better, you could create a shell script (you could name it venv-run.sh for example) to run any Python script into a given virtualenv, and place it in /usr/bin, /usr/local/bin, or any directory which is in your PATH.
Such a script could look like this:
#!/bin/bash
# if virtualenvwrapper.sh is in your PATH (i.e. installed with pip)
source "$(which virtualenvwrapper.sh)"
# source /path/to/virtualenvwrapper.sh  # if it's not in your PATH
workon "$1"
python "$2"
deactivate
It could then be used simply like: venv-run.sh my_virtualenv /path/to/script.py
I couldn't find a way to call the virtualenvwrapper commands from a shell script, but this trick helps: assuming your environment name is myenv, put the following lines at the beginning of the script:
ENV=myenv
source "$WORKON_HOME/$ENV/bin/activate"
This is a super old thread and I had a similar issue. I started digging for a simpler solution out of curiosity.
gnome-terminal --working-directory='/home/exact/path/here' --tab --title="API" -- bash -ci "workon aaapi && python manage.py runserver 8001; exec bash;"
The --working-directory flag forces the tab to open there by default, and -ci makes bash behave like an interactive shell, which gets around the issue of virtualenvwrapper's functions not being available in a non-interactive shell.
You can run as many of these in sequence as you want. It will open the tabs, give each one a title, and run the script you want.
Personally I dropped an alias into my bashrc to do this when I type startdev in my terminal.
I like this because it's easy, simple to replicate, flexible, and doesn't require any fiddling with variables and whatnot.
It's a known issue. As a workaround, you can make the content of the script a function and place it in either ~/.bashrc or ~/.profile
function run-app() {
workon "$(cat .venv)"
python main.py
}
If your Python script requires a particular virtualenv, then put/install it in the virtualenv's bin directory. If you need access to that script outside of the environment, you can make a symlink.
main.py in the virtualenv's bin:
#!/path/to/virtualenv/bin/python
import yourmodule
if __name__ == "__main__":
    yourmodule.main()
Symlink in your PATH:
pymain -> /path/to/virtualenv/bin/main.py
In bin/run-app:
#!/bin/sh
# cd into the project directory
pymain arg1 arg2 ...
Apparently, I was doing this the wrong way. Instead of saving the virtualenv's name in the .venv file, I should be putting the virtualenv's directory path.
(cdvirtualenv && pwd) > .venv
and in the bin/run-app, I put
source "$(cat .venv)/bin/activate"
python main.py
And yay!
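Put together, bin/run-app ends up looking something like the sketch below (the project path is a placeholder; this is just how I'd assemble the pieces above):
#!/bin/bash
# cd into the project directory (adjust the path for your project)
cd /path/to/project || exit 1
# .venv contains the virtualenv's directory path, written by: (cdvirtualenv && pwd) > .venv
source "$(cat .venv)/bin/activate"
python main.py "$@"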
add these lines to your .bashrc or .bash_profile
export WORKON_HOME=~/Envs
source /usr/local/bin/virtualenvwrapper.sh
then reopen your terminal and try again.
You can also call the virtualenv's python executable directly. First find the path to the executable:
$ workon myenv
$ which python
/path/to/virtualenv/myenv/bin/python
Then call from your shell script:
#!/bin/bash
/path/to/virtualenv/myenv/bin/python myscript.py

How do I install a script to run anywhere from the command line?

Say I have a basic Python script, with its hashbang and what-not in place, so that from the terminal on Linux I can run
/path/to/file/MyScript [args]
without explicitly invoking the interpreter or using any file extension, and it will execute the program.
How would I install this script so that I can simply type
MyScript [args]
anywhere on the system and it will run? Can this be implemented for all users on the system, or must it be redone for each one? Do I simply place the script in a specific directory, or are other things necessary?
The best place to put things like this is /usr/local/bin.
This is the normal place to put custom installed binaries, and should be early in your PATH.
Simply copy the script there (probably using sudo), and it should work for any user.
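For concreteness, a minimal sketch of that, using the path from the question:
$ sudo cp /path/to/file/MyScript /usr/local/bin/
$ sudo chmod 755 /usr/local/bin/MyScript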
Walkthrough of making a python script available anywhere:
Make a python script:
cd /home/el/bin
touch stuff.py
chmod +x stuff.py
Find out where your python is:
which python
/usr/bin/python
Put this code in there:
#!/usr/bin/python
print("hi")
Run it in the same directory:
python stuff.py
Go up a directory and it's not available:
cd ..
stuff.py
-bash: stuff.py: command not found
Not found! Just as we expect. Now add the directory containing the script to $PATH:
vi ~/.bashrc
Add this line:
export PATH=$PATH:/home/el/bin
Save it, re-apply the .bashrc, and retry:
source ~/.bashrc
Try again:
cd /home/el
stuff.py
Prints:
hi
The trick is that the bash shell knows the language of the file via the shebang.
You can also use setuptools (https://pypi.org/project/setuptools/).
Your script will be:
def hi():
    print("hi")
(suppose the file name is hello.py)
Also add an __init__.py file next to your script (with nothing in it).
Add a setup.py script with this content:
#!/usr/bin/env python3
import setuptools

install_requires = [
    # whatever packages you need go here
]

setuptools.setup(
    name="some_utils",
    version="1.1",
    packages=setuptools.find_packages(),
    install_requires=install_requires,
    entry_points={
        'console_scripts': [
            'cool_script = hello:hi',
        ],
    },
    include_package_data=True,
)
You can now run python setup.py develop in this folder (or, equivalently with recent tooling, pip install -e .).
Then, from anywhere, run cool_script and your script will run.
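With the entry point above, the result should look something like this (my own quick check, after the develop install):
$ cool_script
hi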
Just create ~/bin and put export PATH=$PATH:$HOME/bin in your bashrc/profile. Don't mess with the system, it will bite you back, trust me.
A few more things (relevant to the question but not part of the answer):
The other way around, export PATH=$HOME/bin:$PATH, is NOT safe: bash will look in your ~/bin folder for executables first, and if a name matches another executable in your original $PATH you will be surprised by unexpected or broken command execution.
Don't forget to chmod +x when you save your script in ~/bin.
Be aware of what you put in your ~/bin folder; if you are just testing something or working on an unfinished script, it's always better to run it with ./$SCRIPT_NAME from your CWD than to put it under ~/bin.
The quick answer is to symlink your script to any directory included in your system $PATH.
The long answer is described below with a walk through example, (this is what I normally do):
a) Create the script e.g. $HOME/Desktop/myscript.py:
#!/usr/bin/python
print("Hello Pythonista!")
b) Change the permission of the script file to make it executable:
$ chmod +x myscript.py
c) Add a customized directory to the $PATH (see why in the notes below) to use it for the user's scripts:
$ export PATH="$PATH:$HOME/bin"
d) Create a symbolic link to the script (make sure $HOME/bin exists first) as follows:
$ ln -s $HOME/Desktop/myscript.py $HOME/bin/hello
Notice that hello (can be anything) is the name of the command that you will use to invoke your script.
Note:
i) The reason to use $HOME/bin instead of /usr/local/bin is to keep your local scripts separate from those of other users (if you wish to) and from other installed stuff.
ii) To create the symlink you should use the complete, correct path, i.e.
$HOME/bin is GOOD, ~/bin is NO GOOD!
Here is a complete example:
$ pwd
~/Desktop
$ cat > myscript.py << EOF
> #!/usr/bin/python
> print("Hello Pythonista!")
> EOF
$ export PATH="$PATH:$HOME/bin"
$ mkdir -p $HOME/bin
$ ln -s $HOME/Desktop/myscript.py $HOME/bin/hello
$ chmod +x myscript.py
$ hello
Hello Pythonista!
Just create symbolic link to your script in /usr/local/bin/:
sudo ln -s /path/to/your/script.py /usr/local/bin/script
Putting the script somewhere in the PATH (like /usr/local/bin) is a good solution, but this forces all the users of your system to use/see your script.
Adding an alias in /etc/profile is another way to do what you want, while still allowing users of your system to undo it with the unalias command. The line to add would be:
alias MyScript=/path/to/file/MyScript
I find a simple alias in my ~/.bash_profile or ~/.zshrc is the easiest:
alias myscript="python path/to/my/script.py"
Type echo $PATH in a shell. Those are the directories searched when you type a command, so put your script in one of those.
Edit: Apparently you shouldn't use /usr/bin; use /usr/local/bin instead.
According to the FHS, /usr/local/bin/ is the right place for custom scripts.
I prefer to make them 755 root:root after copying them there.
