Question
My default Python is 2.7, but I have a script that requires Python 3.4. I am trying to create a function in R that will:
Switch to Python 3.4
Run this script
Switch back to Python 2.7
Import results into R
To switch between Python versions, I use my cluster's "dotkit" system, like this:
use Python-2.7
use Python-3.4
"use" is a bash function that is imported in my .bashrc file. It sets all of my path variables (PATH, LIBRARY_PATH, LD_LIBRARY_PATH, CPATH, C_INCLUDE_PATH, etc). The problem is that when I try to call this function in R, I get the following error:
system('use Python-3.4')
sh: use: command not found
It seems like this is a problem with my PATH. I am using the correct shell:
system('echo $SHELL')
/bin/bash
My $PATH variable also looks good. However, when I create a script that essentially does the same thing:
load_py34.sh:
#!/bin/bash
source ~/.bashrc
use Python-3.4
and call this script from R, it actually runs, but for some reason it doesn't change my Python version within R. (I have verified that the script works from the command line.)
> R
> system('python --version')
Python 2.7.1
> system('sh load_py34.sh')
Prepending: R-3.4 (ok)
> system('python --version')
Python 2.7.1
So I'm a little confused, but if anyone can help, I would really appreciate it.
Suggested fixes
When I combine them into a single command, I still have the same problem:
> system("sh load_py34.sh; python --version")
Prepending: Python-3.4 (already loaded)
Python 2.7.1
When I try calling bash directly, I still have a problem with the PATH:
> system("bash -c 'use Python-3.4; python --version'")
bash: use: command not found
Python 2.7.1
.bashrc is only loaded for interactive bash sessions.
"use" is a bash function that is imported in my .bashrc file. It sets
all of my path variables.
If set via export, the environment of the calling process will not be altered.
export [-fn] [name[=word]] ... The supplied names are marked for automatic export to the environment of subsequently executed commands. (https://man7.org/linux/man-pages/man1/bash.1.html)
Child processes do not normally have access to the parent process' environment. (This poses a problem because system() creates a sub-process.)
The source and . built-ins execute commands in the current shell environment, which is why your script works when you run it from the command line.
Other commands (executables, non-shell-builtins) are executed by the fork-and-exec mechanism: the executing shell process forks, creating a child process with an identical environment and state, and the command is executed in that child. Changes to the child's environment are not replicated back to the parent's environment.
This means that you will not be able to rely on system('...') to modify the environment of the R process, or that of processes spawned by subsequent system() invocations.
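You can see this directly from R: each system() call gets a fresh shell, so an assignment made in one call is gone by the next (FOO is just an illustrative variable):
> system("FOO=bar")         # FOO is set in a shell that exits immediately
> system("echo FOO=$FOO")   # a brand-new shell; FOO was never set here
FOO=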
In a single invocation to system(), you can construct a command-line that changes the environment of the spawned shell like so:
bash -c 'source ~/.bashrc; use Python-3.4; python --version'
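From R, the same idea covers the whole round trip, with results passed back through a file rather than through the environment (the script and output file names below are placeholders):
# One shell loads the function, switches versions, and runs the script.
system("bash -c 'source ~/.bashrc; use Python-3.4; python myscript.py'")
# The R session's environment is untouched, so read the script's output instead.
results <- read.table("myscript_output.txt")   # hypothetical output file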
Mind you, ~/.bashrc is not really the best place to put this functionality (might be subjective).
When you call system(), it uses /bin/sh, not /bin/bash. sh doesn't read your .bashrc file when it starts up, so it doesn't know any of the functions you've defined there.
To use the function from your .bashrc, you must get bash to run it instead. Note that bash -c also starts a non-interactive shell, which does not read .bashrc on its own, so source it explicitly:
system("bash -c 'source ~/.bashrc; use Python-3.4; python --version'")
Related
I am trying to run Python code on a build server. In order to keep the agent clean, I'm creating a virtual environment which can be deleted after the task. The Python script calls python via subprocess. The questions are:
why does the call to subprocess not use the same python virtual env the actual script was called in?
How can this be achieved?
Minimal example:
tmp.py:
from subprocess import check_output
import sys
# python interpreter used to call this script
print(sys.executable)
# check which python interpreter is used when calling subprocess
print(check_output('python -c "import sys; print(sys.executable)"').decode())
run.bat:
@echo off
python -m venv .\test_venv
call .\test_venv\Scripts\activate.bat
python tmp.py
output, where the second line is the default Python installation on my computer:
λ run.bat
D:\tmp\pytest\test_venv\Scripts\python.exe
D:\tools\python\python.exe
desired output:
λ run.bat
D:\tmp\pytest\test_venv\Scripts\python.exe
D:\tmp\pytest\test_venv\Scripts\python.exe
I am on 64-bit Windows 10.
The subprocess you create uses the operating system's general PATH traversal to find and run the commands you specify, and doesn't know anything about the parent process.
You already know the value of sys.executable; if that's specifically what you want to run, say so:
print(check_output([sys.executable, "-c", "import sys\nprint(sys.executable)"], text=True))
(This also avoids the shell, which was providing no value at all. Without an explicit shell=True, your code would only work on Windows.)
(Conversely, on any sane platform, the environment, including the virtual environment, would be inherited by child processes.)
However, Python calling Python is almost always an antipattern. Instead, you want to refactor the code so you can import it and run it in the same process.
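As a rough sketch of that refactoring (work() is a hypothetical stand-in for whatever the child script did):
# tmp.py, refactored: no subprocess, so no second interpreter to pick wrongly
import sys

def work():
    # Runs in the same interpreter, and therefore in the same virtual environment.
    return sys.executable

print(sys.executable)   # D:\tmp\pytest\test_venv\Scripts\python.exe
print(work())           # the same path: no new process was launched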
I have the virtualenv created and installed. I have also installed the jsnapy tool inside my virtual env.
This is the script that we are using:
Filename : venv.py
import os
os.system('/bin/bash --rcfile ~/TestAutomation/End2EndAutomation/bin/activate')
os.system('End2EndAutomation/bin/jsnapy')
ubuntu@server:~/TestAutomation$ python venv.py
(End2EndAutomation) ubuntu@sdno-server:~/TestAutomation$ ^C
What we need to know is how to enter the virtualenv, run a command, and deactivate it, all from a Python script.
[EDIT1]
I used the code given in the comment. It just enters the virtual env. When I issue exit, it runs the jsnapy command.
ubuntu@server:~/TestAutomation$ python venv.py
(End2EndAutomation) ubuntu@server:~/TestAutomation$ exit
exit
usage:
This tool enables you to capture and audit runtime environment of
networked devices running the Junos operating system (Junos OS)
Tool to capture snapshots and compare them
It supports four subcommands:
--snap, --check, --snapcheck, --diff
1. Take snapshot:
jsnapy --snap pre_snapfile -f main_configfil
Each call to os.system() will create a new bash instance and terminate the previous one. To run all the commands in one bash instance you could put all your commands inside a single bash script and call that from os.system()
run.sh
source ~/TestAutomation/End2EndAutomation/bin/activate
End2EndAutomation/bin/jsnapy
deactivate
Python
os.system('bash run.sh')  # bash, because "source" inside the script is a bashism
Alternatively, you could write a multiline bash command, as long as it's all in one os.system() call.
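For example (paths as in the question):
os.system('bash -c "source ~/TestAutomation/End2EndAutomation/bin/activate; End2EndAutomation/bin/jsnapy"')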
Two successive calls to os.system() will create two independent processes, one after the other. The second will run when the first finishes, and any effects of commands executed in the first process will have been forgotten by the time the second runs.
You want to run the activation and the command which needs to be run in the virtualenv in the same process, i.e. the same single shell instance.
To do that, you can use bash -c '...' to run a sequence of commands. See below.
However, a better solution is to simply activate the virtual environment from within Python itself.
import os
import subprocess

p = os.path.expanduser('~/TestAutomation/End2EndAutomation/bin/activate_this.py')
execfile(p, dict(__file__=p))  # Python 2; on Python 3: exec(open(p).read(), dict(__file__=p))
subprocess.check_call(['./End2EndAutomation/bin/jsnapy'])
For completeness, here is the Bash solution, with comments.
import subprocess
subprocess.check_call(['bash', '-c', """
. ~/TestAutomation/End2EndAutomation/bin/activate  # "." is the POSIX spelling of "source"
./End2EndAutomation/bin/jsnapy"""])
Preferring subprocess over os.system is recommended even in the os.system documentation.
There is no need to explicitly deactivate; when the bash command finishes, that will implicitly also deactivate the virtual environment.
The --rcfile trick is a nice idea, but it doesn't work when the shell you are calling isn't interactive.
subprocess.run('export FOO=BAR', shell=True)
This simply doesn't work, and I have no idea why.
All I am trying to do is set an environment variable from my Python (3.5.1) script, and when I run the above line, nothing happens. No errors are raised, and when I check the environment variable myself, it has not been set.
Other shell commands with subprocess.run() do work, such as ls and pwd, but not export.
.run() was added in Python 3.5 (in case you didn't recognise it), but I have also tried the above line with .call() and .Popen(), with no change in results.
I am aware that I can set environment variables in python with os.environ['FOO'] = "BAR", but I will be using shell commands a lot in my project, and I expect that I will need to string multiple commands together, which will make using export easier than os.environ.
My project will run on Linux, which is what my machine is running on.
It works fine; however, the variable setting only exists in the subprocess. You cannot affect the environment of the local process from a child.
os.environ is the correct solution, as it changes the environment of the local process, and those changes will be inherited by any process started with subprocess.run.
You can also use the env argument to run:
subprocess.run(["cmdname", "arg1", "arg number 2"], env=dict(FOO='BAR', **os.environ))
This runs the command in a modified environment that includes FOO=BAR without modifying the current environment.
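A minimal sketch of that inheritance (POSIX shell assumed; FOO is just an illustrative variable):
import os
import subprocess

os.environ['FOO'] = 'BAR'                    # changes this process's environment
subprocess.run('echo "$FOO"', shell=True)    # the child shell inherits it and prints BAR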
I want to set an environment variable with a Python script, influencing the shell I am starting the script in. Here is what I mean:
python -c "import os;os.system('export TESTW=1')"
But the command
echo ${TESTW}
returns nothing. Also with the expression
python -c "import os;os.environ['TEST']='1'"
it does not work.
Is there another way to do this in the direct sense? Or is it better to write the variables in a file which I execute from 'outside' of the Python script?
You can modify the environment via os.putenv (or os.environ), but it will not influence the caller's environment, only the environment of forked children.
It's really much better to set up the environment before launching the Python script.
One variant I can propose: create a bash script and a Python script, and in the bash script call the Python script with a parameter, one parameter per environment variable. E.g.:
#!/bin/bash
export TESTV1=$(python you_program.py testv1)
export TESTV2=$(python you_program.py testv2)
where you_program.py testv1 prints the value for just that one environment variable.
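A minimal sketch of what you_program.py might look like under this scheme (the values are placeholders):
# you_program.py: print the value for the variable named on the command line
import sys

values = {'testv1': '1', 'testv2': 'two'}   # placeholder values
print(values[sys.argv[1]])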
I would strongly suggest using the solution proposed by chepner and Maxym (where the Python script provides the values and your shell exports the variables). If that is not an option for you, you could still use eval to execute what the python script writes in your current Bash process:
eval $( python -c "print('export TESTW=1')" )
Caution: eval is usually read "evil" in Bash programming. As a general rule of thumb, one should avoid "blindly" executing code that is not fully under one's control. That includes being generated by another program at runtime as in this case. See also Stack Overflow question Why should eval be avoided in Bash, and what should I use instead?.
I want to run a csh file from a Python script, for example:
#!/usr/bin/python
import os
os.system("source path/to/file.csh")
and I want this file to run in the same shell as the Python script, because the file.csh script sets some environment variables that I need.
Does anyone know how to do this in Python?
A child process cannot affect the environment of the parent process. The best you can do is to run your csh script in a separate process, get the environment variables that it defines, then set each of those variables in your Python script.
Even with that, the Python script won't be able to affect the shell from which you run it.
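A rough sketch of that approach (the script path is from the question; the parsing is simplistic and will mishandle multi-line values):
import os
import subprocess

# Source the csh script in a throwaway csh, then dump the resulting environment.
out = subprocess.check_output(['csh', '-c', 'source path/to/file.csh && env'])

# Copy each variable back into this Python process.
for line in out.decode().splitlines():
    name, _, value = line.partition('=')
    if name:
        os.environ[name] = value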
The common way to solve this (AFAIK) is to have your script emit shell commands to set the environment, then from the main shell you run the script and eval what you get back.
For more information you might want to check out this question: can a shell script set environment variables of the calling shell
You can kludge it this way:
#!/usr/bin/env python
# This is kludge.py
print "setenv VARNAME \"the value\""
In your case, you can have file.csh print the setenv line.
Then from csh:
$ eval `./kludge.py`
$ echo $VARNAME
the value
This isn't clean, but it is the only way to have a child process affect the environment of its parent, and only because the parent process explicitly allows it with eval.