Using subprocess in anaconda environment - python

I use Python 3.6.6 :: Anaconda, Inc.
I am trying to use subprocess to call another Python script.
subprocess.run("python -V", shell=True)
I tried this code, but the outcome is
Python 2.7.12
My local Python gets picked up instead.
I also tried
subprocess.run("bash -c 'source /home/hclee/anaconda3/envs/calamari/lib/python3.6/venv/scripts/common/activate
/home/hclee/anaconda3/envs/calamari && python -V && source deactivate'", shell=True)
but I got the same result

Run source activate root in Linux, or activate root in Windows, to activate the environment before running your code.
If this does not help, try a quick and dirty fix, e.g.:
subprocess.run('bash -c "source activate root; python -V"', shell=True)
The reason you need to call bash is that Python's run does not use your normal bash environment but a more constrained shell that does not provide source, so here we need to call bash explicitly... But as mentioned, if you need this command, either you are doing something special, or something is wrong with your environment...
deactivate is not needed; it does nothing here, because the shell it ran in is destroyed as soon as the command finishes...
Note: for newer conda versions, the code above will still work, but there are also these options that work similarly:
conda deactivate:
subprocess.run('bash -c "conda deactivate; python -V"', shell=True)
conda activate root or base:
subprocess.run('bash -c "conda activate root; python -V"', shell=True)
subprocess.run('bash -c "conda activate base; python -V"', shell=True)

I don't think sourcing a conda env in every subprocess call of your code is a good idea.
Instead you can find the bin dir of your current sourced env and grab the full path to binaries from there. Then pass these to subprocess when you want to call them.
import os
import sys
import subprocess
# what conda env am I in (e.g., where is my Python process from)?
ENVBIN = sys.exec_prefix
# what binaries am I looking for that are installed in this env?
BIN1 = os.path.join(ENVBIN, "bin", "ipython")
BIN2 = os.path.join(ENVBIN, "bin", "python")
BIN3 = os.path.join(ENVBIN, "bin", "aws")
# let's make sure they exist, no typos.
for bin in (BIN1, BIN2, BIN3):
    assert os.path.exists(bin), "missing binary {} in env {}".format(bin, ENVBIN)
# then use their full paths when making subprocess calls
for bin in (BIN1, BIN2, BIN3):
    cmd = ["which", bin]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    print(proc.communicate()[0].decode())
The printed results show that subprocess is using the binaries from the conda environment instead of from the base (default) env.
.../miniconda3/envs/aws/bin/ipython
.../miniconda3/envs/aws/bin/python
.../miniconda3/envs/aws/bin/aws
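Once those paths are verified, the same full paths can be passed to subprocess to actually run things in the env; a sketch reusing BIN2 from above (other_script.py is a hypothetical file):
subprocess.run([BIN2, "-V"])               # prints this env's Python version
subprocess.run([BIN2, "other_script.py"])  # runs a script with this env's Python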

If your console/notebook is already using the correct environment, you can call subprocess with sys.executable to use the current environment:
import sys
import subprocess
subprocess.run(f"{sys.executable} -V", shell=True)

Related

How can I activate the virtual environment from a python script and execute further instructions while inside of it?

Here's what I would like to do with my Python script:
Create virtual environment
Change directory into environment
Activate environment
Install django
Do other stuff...
I've managed to create the environment and change directory with the following:
import subprocess
import os
env_name = "env_new"
subprocess.run(["py", "-m", "venv", env_name])
os.chdir(env_name)
But activating the environment is a different story:
subprocess.run(["source", "./Scripts/activate"]) # Also tried with activate.bat
Result:
FileNotFoundError: [WinError 2] The system cannot find the file specified
subprocess.run([".", "./Scripts/activate"]) # Also tried with activate.bat
Result:
PermissionError: [WinError 5] Access is denied
Just to clarify, I made sure I was in the correct directory by using print(os.getcwd()).
After this I'd then like to install django. I think it has to happen inside the same run() method like so:
subprocess.run([".", "./Scripts/activate", "&&", "pip", "install", "django"]) # Or something...
Is this even possible?
There are a number of things wrong here. When you run a subprocess, the environment it creates disappears when the subprocess exits.
What you can do is wrap Python inside Python, something like
import subprocess
env_name = "env_new"
subprocess.run(["py", "-m", "venv", env_name])
subprocess.run(["%s/bin/python" % env_name, "-c", """
the rest of your Python code here
"""])
which of course is just a rather pointless complication, and better written as a shell script.
#!/bin/bash
py -m venv env_new
. ./env_new/Scripts/activate
pip install -r requirements.txt
python ./your_real_script_here.py
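If you would rather stay in Python, a minimal sketch of the same idea that skips activation entirely and calls the new environment's own interpreter directly (assuming the Windows-style Scripts/ layout from the question; on Linux/macOS it is bin/):
import os
import subprocess
env_name = "env_new"
subprocess.run(["py", "-m", "venv", env_name])
# Full path to the environment's own interpreter; no activation needed.
env_python = os.path.join(env_name, "Scripts", "python.exe")
subprocess.run([env_python, "-m", "pip", "install", "django"])
subprocess.run([env_python, "your_real_script_here.py"])  # hypothetical script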
Original poster here. I just want to expand on triplee's answer above by displaying the full code required to achieve my desired result.
Python Script #1 - django-setup
#!/usr/bin/env python
import subprocess
import sys
if __name__ == '__main__':
    if len(sys.argv) > 1:
        subprocess.call(f"intermediary.sh {sys.argv[1]}", shell=True)
    else:
        print("No arguments given")
Shell Script - intermediary.sh
#!/bin/bash
py -m venv "env_$1"
cd "env_$1"
. ./Scripts/activate
pip install django
mkdir "$1"
cd "$1"
django-setup-2
Python Script #2 - django-setup-2
#!/usr/bin/env python
import subprocess
subprocess.run(['django-admin', 'startproject', 'config', '.'])
print("More code here!")
Executing the command django-setup blog would achieve the following result:
env_blog/
    Scripts/
    Include/
    Lib/
    blog/
        config/
        manage.py
    pyvenv.cfg
For creating a virtual environment and using it, follow here.
Alternatively, you may use Conda to create and manage your virtual environments.

Can I use a single python script to create a virtualenv and install requirements.txt?

I am trying to create a script where I create a virtualenv if it has not already been created, and then install requirements.txt into it.
I can't call the normal source /env/bin/activate to activate it and then use pip to install requirements.txt. Is there a way to activate the virtualenv and then install my requirements from a single Python script?
My code at the moment:
if not os.path.exists(env_path):
    call(['virtualenv', env_path])
else:
    print "INFO: %s exists." %(env_path)
try:
    call(['source', os.path.join(env_path, 'bin', 'activate')])
except Exception as e:
    print e
The error is "No such file or directory".
Thanks
source is a shell builtin command, not a program. It cannot and shouldn't be executed with subprocess. You can activate your fresh virtual env by executing activate_this.py in the current process:
if not os.path.exists(env_path):
    call(['virtualenv', env_path])
    activate_this = os.path.join(env_path, 'bin', 'activate_this.py')
    execfile(activate_this, dict(__file__=activate_this))
else:
    print "INFO: %s exists." %(env_path)
The source or . command causes the current shell to execute the given source file in its environment. You'll need a shell in order to use it. This probably isn't as clean as you'd like, since it uses a string instead of a list to represent the command, but it should work.
import subprocess
subprocess.check_call(['virtualenv', 'env-dir'])
subprocess.check_call(
    '. env-dir/bin/activate && pip install python-dateutil',
    shell=True
)
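If you would rather avoid the shell entirely, an equivalent sketch calls the environment's own pip by full path (assuming the Linux bin/ layout; requirements.txt is whatever your project uses):
import os
import subprocess
env_path = 'env-dir'
if not os.path.exists(env_path):
    subprocess.check_call(['virtualenv', env_path])
# Calling the env's own pip makes activation unnecessary.
pip_bin = os.path.join(env_path, 'bin', 'pip')
subprocess.check_call([pip_bin, 'install', '-r', 'requirements.txt'])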
Just to expand on phd's comment, here is an example for the Python 3 version:
exec(open('env/Scripts/activate_this.py').read(), {'__file__': 'env/Scripts/activate_this.py'})
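Putting the accepted approach together for Python 3 (exec() instead of execfile, print as a function), a sketch; note that activate_this.py is shipped by virtualenv, not the stdlib venv, and the bin/ path assumes Linux:
import os
from subprocess import call
env_path = 'env-dir'
if not os.path.exists(env_path):
    call(['virtualenv', env_path])
activate_this = os.path.join(env_path, 'bin', 'activate_this.py')
exec(open(activate_this).read(), {'__file__': activate_this})
# PATH now starts with the env's bin/, so plain "pip" resolves to the env's pip.
call(['pip', 'install', '-r', 'requirements.txt'])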

What shebang to use for Python scripts run under a pyenv virtualenv

When a Python script is supposed to be run from a pyenv virtualenv, what is the correct shebang for the file?
As an example test case, the default Python on my system (OS X) does not have pandas installed. The pyenv virtualenv venv_name does. I tried getting the path of the Python executable from the virtualenv.
pyenv activate venv_name
which python
Output:
/Users/username/.pyenv/shims/python
So I made my example script.py:
#!/Users/username/.pyenv/shims/python
import pandas as pd
print 'success'
But when I tried running the script (from within 'venv_name'), I got an error:
./script.py
Output:
./script.py: line 2: import: command not found
./script.py: line 3: print: command not found
Although running that path directly on the command line (from within 'venv_name') works fine:
/Users/username/.pyenv/shims/python script.py
Output:
success
And:
python script.py # Also works
Output:
success
What is the proper shebang for this? Ideally, I want something generic so that it will point at the Python of whatever my current venv is.
I don't really know why calling the interpreter with the full path wouldn't work for you. I use it all the time. But if you want to use the Python interpreter that is in your environment, you should do:
#!/usr/bin/env python
That way you search your environment for the Python interpreter to use.
As you expected, you should be able to use the full path to the virtual environment's Python executable in the shebang to choose/control the environment the script runs in regardless of the environment of the controlling script.
In the comments on your question, VPfB and you found that /Users/username/.pyenv/shims/python is a shell script that does an exec $pyenv_python. You should be able to echo $pyenv_python to determine the real python and use that as your shebang.
See also: https://unix.stackexchange.com/questions/209646/how-to-activate-virtualenv-when-a-python-script-starts
Try pyenv virtualenvs to find a list of virtual environment directories.
And then you might find that using a shebang something like this:
#!/Users/username/.pyenv/python/versions/venv_name/bin/python
import pandas as pd
print 'success'
... will enable the script to work using the chosen virtual environment in other (virtual or not) environments:
(venv_name) $ ./script.py
success
(venv_name) $ pyenv activate non_pandas_venv
(non_pandas_venv) $ ./script.py
success
(non_pandas_venv) $ . deactivate
$ ./script.py
success
The trick is that if you call out the virtual environment's Python binary specifically, the Python interpreter looks around that binary's path location for the supporting files and ends up using the surrounding virtual environment. (See How does virtualenv work?)
If you need to use more shell than you can put in the #! shebang line, you can start the file with a simple shell script which launches Python on the same file.
#!/bin/bash
"exec" "pyenv" "exec" "python" "$0" "$#"
# the rest of your Python script can be written below
Because of the quoting, Python doesn't execute the first line, and instead joins the strings together for the module docstring... which effectively ignores it.
You can see more here.
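For completeness, a minimal self-contained file using that trick might look like this (a sketch; it assumes pyenv is on your PATH):
#!/bin/bash
"exec" "pyenv" "exec" "python" "$0" "$@"
# Everything below is ordinary Python, run by whatever interpreter pyenv resolves.
import sys
print(sys.version)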
To expand this to an answer, yes, in 99% of the cases if you have a Python executable in your environment, you can just use:
#!/usr/bin/env python
However, for a custom venv on Linux the same syntax did not work for me, since the venv only contains a link to the Python interpreter it was created from, so I had to do the following:
#!/path/to/the/venv/bin/python
Essentially, whatever command you use to call the Python interpreter in your terminal is what you would put after the #!.
It's not exactly answering the question, but this suggestion by ephiement I think is a much better way to do what you want. I've elaborated a bit and added some more of an explanation as to how this works and how you can dynamically select the Python executable to use:
#!/bin/sh
#
# Choose the Python executable we need. Explanation:
# a) '''\' translates to \ in shell, and starts a python multi-line string
# b) "" strings are treated as string concatenation by Python; the shell ignores them
# c) "true" command ignores its arguments
# d) exit before the ending ''' so the shell reads no further
# e) reassign __doc__ so the multi-line shell code is not kept as the docstring
#
"true" '''\'
PREFERRED_PYTHON=/Library/Frameworks/Python.framework/Versions/2.7/bin/python
ALTERNATIVE_PYTHON=/Library/Frameworks/Python.framework/Versions/3.6/bin/python3
FALLBACK_PYTHON=python3
if [ -x $PREFERRED_PYTHON ]; then
    echo Using preferred python $PREFERRED_PYTHON
    exec $PREFERRED_PYTHON "$0" "$@"
elif [ -x $ALTERNATIVE_PYTHON ]; then
    echo Using alternative python $ALTERNATIVE_PYTHON
    exec $ALTERNATIVE_PYTHON "$0" "$@"
else
    echo Using fallback python $FALLBACK_PYTHON
    exec python3 "$0" "$@"
fi
exit 127
'''
__doc__ = """What this file does"""
print(__doc__)
import platform
print(platform.python_version())
If you want just a single script with a simple selection of your pyenv virtualenv, you may use a Bash script with your source as a heredoc as follows:
#!/bin/bash
PYENV_VERSION=<your_pyenv_virtualenv_name> python - "$@" <<EOF
import sys
print(sys.argv)
exit
EOF
I did some additional testing. The following works too:
#!/usr/bin/env -S PYENV_VERSION=<virtual_env_name> python
/usr/bin/env python won't work, since it doesn't know about the virtual environment.
Assuming that you have main.py living next to a ./venv directory, you need to use Python from the venv directory. Or in other words, use this shebang:
#!venv/bin/python
Now you can do:
./main.py
Maybe you need to check the file permissions:
sudo chmod +x script.py

Writing Shell Commands to VirtualEnv?

Is there a way I can write commands to a virtual environment after it's been activated? For example, let's say I have a Python or Bash script which does some stuff, i.e.:
Makes a virtualenv
Activates it
Executes commands in the shell of the newly created virtual environment
For example I am doing something like this:
activate_this = subprocess.call("/bin/bash --rcfile " + "/home/" + os.getlogin() + "/mission-control/venv/bin/activate", shell=True)
process = execfile(activate_this, dict(__file__=activate_this))
process.communicate(subprocess.call(virtualenv.create_bootstrap_script(textwrap.dedent
("""
import subprocess
subprocess.call("pip install -r " + os.environ['VIRTUAL_ENV'] + "/requirements.txt", shell=True)
"""
))))
I would like to install the requirements.txt file after I activate the environment; however, I can't get the subprocess module to communicate with the shell after the virtual environment is created. I think it might have to do with me creating a new virtual environment via execfile, which therefore creates a new process.
Also I know shell=True is bad practice but as of right now I am not concerned with the possibility of unsanitized input.
. "$VIRTUAL_ENV/bin/activate"
pip install -r "$VIRTUAL_ENV/requirements.txt"
First of all, thanks to @Ryne Everett for the help. I solved this by ditching the Python-only solution and creating a Bash file which I call from subprocess in my Python script. The subprocess call executes the Bash file, which handles creating and executing within the virtualenv. I am not sure how to solve this using just Python; I am sure there is a way, but this seems like a simpler solution. The Bash script is the following:
#!/bin/bash
MISSION_CONTROL="$PWD"
if [ ! -d "$MISSION_CONTROL/venv" ]; then
virtualenv $MISSION_CONTROL/venv --no-site-packages
echo "Welcome to Mission Control..."
/bin/bash --rcfile $MISSION_CONTROL/venv/bin/activate
fi
if [ -d "$MISSION_CONTROL/venv" ]; then
pip install -r $MISSION_CONTROL/requirements.txt
fi
EDIT: This may also be useful for people who are trying to do something similar: How to source virtualenv activate in a Bash script

How to set virtualenv for a crontab?

I want to set up a crontab to run a Python script.
Say the script is something like:
#!/usr/bin/python
print "hello world"
Is there a way I could specify a virtualenv for that Python script to run in? In shell I'd just do:
~$ workon myenv
Is there something equivalent I could do in crontab to activate a virtualenv?
Is there something equivalent I could do in crontab to activate a virtualenv?
This works well for me...
## call virtualenv python from crontab
0 9 * * * /path/to/virtenv/bin/python /path/to/your_cron_script.py
I prefer using python directly from the virtualenv instead of hard-coding the virtualenv's path into the script's shebang... or sourcing the venv's activate.
If you're using "workon" you're actually using "virtualenv wrapper" which is another layer of abstraction that sits on top of virtualenv. virtualenv alone can be activated by cd'ing to your virtualenv root directory and running:
source bin/activate
workon is a command provided by virtualenvwrapper, not virtualenv, and it does some additional stuff that is not necessarily required for plain virtualenv. All you really need to do is source the bin/activate file in your virtualenv root directory to "activate" a virtualenv.
You can setup your crontab to invoke a bash script which does this:
#! /bin/bash
cd my/virtual/env/root/dir
source bin/activate
# virtualenv is now active, which means your PATH has been modified.
# Don't try to run python from /usr/bin/python, just run "python" and
# let the PATH figure out which version to run (based on what your
# virtualenv has configured).
python myScript.py
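The crontab entry then only needs to point at that wrapper script (path and schedule here are hypothetical; remember to make the script executable with chmod +x):
0 9 * * * /home/me/bin/run_in_virtualenv.sh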
With bash, you can create a generic virtual env wrapper that you can use to invoke any command, much like how time can wrapper any command.
virt_env_wrapper.bash:
#!/bin/bash
source path/to/virtual/env/bin/activate
"$#"
Bash's magical incantation "$#" re-escapes all tokens on the original command line so that if you were to invoke:
virt_env_wrapper.bash python foo.py bar 'baz blap'
foo.py would see a sys.argv of ['bar', 'baz blap']
I'm not sure about workon, but it's pretty straightforward for venv. The only thing to remember is that crontab uses sh by default, not bash, so you need to use the . command instead of source.
Here are examples if you have a file ~/myproject/main.py:
* * * * * cd ~/myproject && . .venv/bin/activate && python main.py > /tmp/out1 2>&1
You could also directly call the specific path of the python in the venv directory, then you don't need to call activate.
* * * * * ~/myproject/.venv/bin/python ~/myproject/main.py > /tmp/out2 2>&1
The downside of that is you would need to specify the project path twice, which makes maintenance trickier. To avoid that, you could use a shell variable so you only specify the project path once:
* * * * * project_dir=~/myproject ; $project_dir/.venv/bin/python $project_dir/main.py > /tmp/out3 2>&1
