I have a shell function that I would like to test from within a python script. It contains the double-bracket [[ syntax, which can only be interpreted by bash/zsh/ksh etc., but not by plain sh. In the original code I read the function from an sh file using the builtins.open function. I have simplified the case a bit here and inlined the function as a string, exactly the way it is loaded in the original file. I then pass it to subprocess with the shell argument set to True:
shell_function = """example_shell_function () {
    # calling a python script which prints values to stdout
    output_string=$(python3 test.py);
    output_snippet=$(echo $output_string | tail -n1)
    test_sign="#"
    # if output_snippet contains "#" then enter condition
    if [[ "$output_snippet" =~ "$test_sign" ]]
    then
        echo "condition met"
    else
        echo "condition not met"
    fi
}"""
import subprocess

shell_commands = "\n".join([shell_function, "example_shell_function"])
process = subprocess.Popen(shell_commands,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE, shell=True)
stdout, stderr = process.communicate()
I am running zsh on my machine, but subprocess uses the regular sh binary and returns the error [[: not found on the line where the double brackets appear in the script. I have tried modifying the subprocess call as follows, in order to make sure the function is interpreted by zsh instead of sh:
shell_commands = "\n".join([". /bin/zsh", shell_function, "example_shell_function"])
This returns the error /bin/sh: 2: /bin/zsh: : not found, in spite of the zsh binary being present at that location. What is the best way to run this function from within my python script?
The solution proposed by @MarkSetchell worked:
Use executable='/usr/bin/zsh' in your subprocess() call.
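Applied to the snippet above, that looks something like this (a minimal sketch; adjust the path if your zsh binary lives elsewhere, e.g. /bin/zsh on my machine):

# Same call as before, but point shell=True at zsh instead of the
# default /bin/sh so the [[ ]] syntax is understood.
process = subprocess.Popen(shell_commands,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           shell=True,
                           executable="/bin/zsh")
stdout, stderr = process.communicate()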
I have a bash script that looks like this:
python myPythonScript.py
python myOtherScript.py $VarFromFirstScript
and myPythonScript.py looks like this:
print("Running some code...")
VarFromFirstScript = someFunc()
print("Now I do other stuff")
The question is, how do I get the variable VarFromFirstScript back to the bash script that called myPythonScript.py.
I tried os.environ['VarFromFirstScript'] = VarFromFirstScript but this doesn't work (I assume this means that the python environment is a different env from the calling bash script).
You cannot propagate an environment variable to the parent process. But you can print the variable and assign it back to the variable name from your shell:
VarFromFirstScript=$(python myPythonScript.py)
You must not print anything else in your script; write any other output to stderr instead:
import sys

sys.stderr.write("Running some code...\n")
VarFromFirstScript = someFunc()
sys.stdout.write(VarFromFirstScript)
An alternative would be to create a file with the variables to set, and have your shell parse it (you could create a shell script that the parent shell sources):
import shlex

with open("shell_to_source.sh", "w") as f:
    f.write("VarFromFirstScript={}\n".format(shlex.quote(VarFromFirstScript)))
(shlex.quote avoids shell code injection from Python; courtesy of Charles Duffy)
then after calling python:
source ./shell_to_source.sh
You can only pass environment variables from parent process to child.
When the child process is created the environment block is copied to the child - the child has a copy, so any changes in the child process only affects the child's copy (and any further children which it creates).
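For instance, a minimal sketch demonstrating the copy:

import os
import subprocess

os.environ["DEMO"] = "from parent"
# The child receives a copy of the environment; changes it makes
# do not propagate back to this process.
subprocess.run(["bash", "-c", 'echo "child sees: $DEMO"; DEMO=changed'])
print("parent still sees:", os.environ["DEMO"])  # still "from parent"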
To communicate with the parent the simplest way is to use command substitution in bash where we capture stdout:
Bash script:
#!/bin/bash
var=$(python myPythonScript.py)
echo "Value in bash: $var"
Python script:
print("Hollow world!")
Sample run:
$ bash gash.sh
Value in bash: Hollow world!
Since you have other print statements in python, you will need to filter the output down to only the data you require, possibly by marking the data with a well-known prefix.
If you have many print statements in python then this solution is not scalable, so you might need to use process substitution, like this:
Bash script:
#!/bin/bash
while read -r line
do
    if [[ $line = ++++* ]]
    then
        # Strip out the marker
        var=${line#++++}
    else
        echo "$line"
    fi
done < <(python myPythonScript.py)
echo "Value in bash: $var"
Python script:
def someFunc():
    return "Hollow World"
print("Running some code...")
VarFromFirstScript = someFunc()
# Prefix our data with a well-known marker
print("++++" + VarFromFirstScript)
print("Now I do other stuff")
Sample Run:
$ bash gash.sh
Running some code...
Now I do other stuff
Value in bash: Hollow World
I would source your script; this is the most commonly used method. It executes the script in the current shell instead of loading another one, and because it uses the same shell, any env variables you set will still be accessible when it exits. Both . /path/to/script.sh and source /path/to/script.sh will work; . sometimes works where source doesn't.
My perl script is at path:
a/perl/perlScript.pl
my python script is at path:
a/python/pythonScript.py
pythonScript.py gets an argument from stdin and returns its result to stdout. From perlScript.pl, I want to run pythonScript.py with the argument hi on stdin, and save the result in some variable. That's what I tried:
my $ret = `../python/pythonScript.py < hi`;
but I got the following error:
The system cannot find the path specified.
Can you explain why the path can't be found?
The qx operator (backticks) starts a shell (sh), in which prog < input syntax expects a file named input from which it will read lines and feed them to the program prog. But you want the python script to receive on its STDIN the string hi instead, not lines of a file named hi.
One way is to directly do that, my $ret = qx(echo "hi" | python_script).
But I'd suggest to consider using modules for this. Here is a simple example with IPC::Run3
use warnings;
use strict;
use feature 'say';
use IPC::Run3;
my @cmd = ('program', 'arg1', 'arg2');
my $in = "hi";
run3 \@cmd, \$in, \my $out;
say "script's stdout: $out";
The program is the path to your script if it is executable, or perhaps python script.py. It is run by system, so the output is obtained once the command completes, which is consistent with the attempt in the question. See the documentation for the module's operation.
This module is intended to be simple while it can "satisfy 99% of the need for using system, qx, and open3 [...]". For far more power and control, see IPC::Run.
You're getting this error because you're using shell redirection instead of just passing an argument:
../python/pythonScript.py < hi
tells your shell to read input from a file called hi in the current directory, rather than using it as an argument. What you mean to do is
my $ret = `../python/pythonScript.py hi`;
Which correctly executes your python script with the hi argument, and returns the result to the variable $ret.
Some of the other answers assume that hi must be passed as a command-line parameter to the Python script, but the asker says it comes from stdin.
Thus:
my $ret = `echo "hi" | ../python/pythonScript.py`;
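For reference, a hypothetical stand-in for pythonScript.py matching that description (reads its argument from stdin, writes the result to stdout):

# pythonScript.py (hypothetical stand-in)
import sys

arg = sys.stdin.readline().strip()
print("Hello {}!".format(arg))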
To launch your external script you can do
system "python ../python/pythonScript.py hi";
and then in your python script
import sys

def yourFct(a):
    ...

if __name__ == "__main__":
    yourFct(sys.argv[1])
You can find more information on the Python part here.
I am using the line_profiler, which allows you to drop @profile decorators anywhere in a python codebase and returns line-by-line output.
However, if you try to execute python code that contains such an @profile decorator without loading the line_profiler module, the code will fail with a NameError, for the decorator is defined and injected by this external library.
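For example, a file like this fails immediately when run with plain python:

# demo.py: kernprof normally injects `profile` into builtins;
# without it, the decorator lookup fails at definition time.
@profile            # NameError: name 'profile' is not defined
def work():
    return 42

work()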
I'd like a bash command that attempts to run my python script with vanilla python. Then, if and only if the error consists of NameError, I want to give it a second try. This is what I have got so far:
python -u $file || python -m kernprof -l -v --outfile=/dev/null $file
The problem is of course that if my python code has ANY error at all, be it a ValueError or an IndentationError or anything else, it tries the profiler. I want to run the profiler ONLY if the string NameError: name 'profile' is not defined is found within stderr.
Wouldn't it be better to monkey-patch profile when line_profiler is not present?
Something like
try:
    import line_profiler
except ImportError:
    import warnings
    warnings.warn("Profile disabled")

    def profile(fn):
        def wrapper(*args, **kw):
            return fn(*args, **kw)
        return wrapper
This way your code runs in either case without complicating matters.
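With that fallback defined at the top of the module, usage might look like this (hot_loop is a hypothetical example):

@profile
def hot_loop(n):
    # Under kernprof this gets line-by-line timings; otherwise the
    # no-op wrapper above just calls straight through.
    return sum(i * i for i in range(n))

print(hot_loop(1000))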
Here's a usable Bash solution that preserves stdout and stderr as separate streams (with the caveat that stderr appears after stdout) and only checks stderr for the error message (which probably is overkill though).
It goes the easy route and simply saves the stderr output to a file. It also handles script names that contain spaces (by properly quoting variable expansions where needed) and/or start with - (by passing -- before the filename to switch off flag processing) as it's an OCD pet peeve of mine.
On success or if there is an error that is not the expected error, the stderr of the first python command is shown. Otherwise (for the expected error), it is hidden.
Usage is $ ./check <script>.
#!/bin/bash

if [[ $# -ne 1 ]]; then
    echo "Expected one argument: the script" >&2
    exit 1
fi

script=$1

if [[ ! -f $script ]]; then
    echo "'$script' does not exist or is not a regular file" >&2
    exit 1
fi

if ! python -- "$script" 2>saved_stderr &&
       grep -q "NameError: name 'profile' is not defined" saved_stderr; then
    # Try again with the kernprof module.
    python -m kernprof -l -v --outfile=/dev/null -- "$script"
else
    # Either success or an unexpected error. Show stderr.
    cat saved_stderr >&2
fi

rm saved_stderr
To check if the return status of a command is zero (i.e., success), it suffices to do
if <cmd>; then <if successful>; fi
! negates the exit status, so if ! <cmd> ... can be used to check for failure. ! only applies to the python command above, not all of python ... && grep ....
>&2 redirects stdout to stderr. (It's the same as 1>&2 but saves a single character, which is a bit silly, but I included it for illustrative purposes as it's a common idiom.)
Creating a simple Python wrapper would seem a lot more straightforward, because inside Python, you have access to the things which go wrong.
Assuming your $file uses the common __name__ == '__main__' idiom something like this:
if __name__ == '__main__':
    main()
you can create a wrapper something like
import yourfile

try:
    yourfile.main()
except NameError:
    import kernprof
    # hack hack, quickly constructed from looking at main() in kernprof.py
    prof = kernprof.ContextualProfile()
    execfile_ = execfile
    ns = locals()
    try:
        prof.runctx('execfile_(%r, globals())' % (yourfile.__file__,), ns, ns)
    finally:
        prof.print_stats()
There are a variety of posts and resources explaining how to use Python to get output of an outside call. I am familiar with using these--I've used Python to get output of jars and exec several times, when it was not realistic or economical to re-implement the functionality of that jar/exec inside Python itself.
I am trying to call a Perl script via Python's subprocess module, but I have had no success with this particular Perl script. I carefully followed the answers here, Call Perl script from Python, but had no results.
I was able to get the output of this test Perl script from this question/answer: How to call a Perl script from Python, piping input to it?
#!/usr/bin/perl
use strict;
use warnings;
my $name = shift;
print "Hello $name!\n";
Using this block of Python code:
import subprocess
var = "world"
args_test = ['perl', 'perl/test.prl', var]
pipe = subprocess.Popen(args_test, stdout=subprocess.PIPE)
out, err = pipe.communicate()
print out, err
However, if I swap out the arguments and the Perl script with the one I need output from, I get no output at all.
args = ['perl', 'perl/my-script.prl', '-a', 'perl/file-a.txt',
'-t', 'perl/file-t.txt', 'input.txt']
which runs correctly when entered on the command line, e.g.
>perl perl/my-script.prl -a perl/file-a.txt -t perl/file-t.txt input.txt
but this produces no output when called via subprocess:
pipe = subprocess.Popen(args, stdout=subprocess.PIPE)
out, err = pipe.communicate()
print out, err
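One thing I notice: since only stdout is piped here, err is always None, so any diagnostics the Perl script writes to stderr never reach Python. Piping stderr as well (a small variation on the call above) would at least surface them:

pipe = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = pipe.communicate()
print out, err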
I've done another sanity check as well. This correctly outputs the help message of Perl as a string:
import subprocess
pipe = subprocess.Popen(['perl', '-h'], stdout=subprocess.PIPE)
out, err = pipe.communicate()
print out, err
As shown here:
>>> ================================ RESTART ================================
>>>
Usage: perl [switches] [--] [programfile] [arguments]
-0[octal] specify record separator (\0, if no argument)
-a autosplit mode with -n or -p (splits $_ into @F)
-C[number/list] enables the listed Unicode features
-c check syntax only (runs BEGIN and CHECK blocks)
-d[:debugger] run program under debugger
-D[number/list] set debugging flags (argument is a bit mask or alphabets)
-e program one line of program (several -e's allowed, omit programfile)
-f don't do $sitelib/sitecustomize.pl at startup
-F/pattern/ split() pattern for -a switch (//'s are optional)
-i[extension] edit <> files in place (makes backup if extension supplied)
-Idirectory specify @INC/#include directory (several -I's allowed)
-l[octal] enable line ending processing, specifies line terminator
-[mM][-]module execute "use/no module..." before executing program
-n assume "while (<>) { ... }" loop around program
-p assume loop like -n but print line also, like sed
-P run program through C preprocessor before compilation
-s enable rudimentary parsing for switches after programfile
-S look for programfile using PATH environment variable
-t enable tainting warnings
-T enable tainting checks
-u dump core after parsing program
-U allow unsafe operations
-v print version, subversion (includes VERY IMPORTANT perl info)
-V[:variable] print configuration summary (or a single Config.pm variable)
-w enable many useful warnings (RECOMMENDED)
-W enable all warnings
-x[directory] strip off text before #!perl line and perhaps cd to directory
-X disable all warnings
None
str = "blah -l"
cpuinfo = subprocess.Popen(str.split(),stdout=PIPE,stderr=PIPE)
tuples = cpuinfo.communicate()
In the above code, when I set str to some valid command, the output goes into tuples. When I give an invalid command, I expect the error to be captured by the PIPE, but it is still thrown onto the console. I am not quite sure where I went wrong.
Thanks.
I'm not sure if you are seeing the stderr actually appear on the console, or are simply running into Python's failure to spawn a process named "blah", which is produced when running the example that you provided.
The output of the example would be Python raising an OSError: [Errno 2] No such file or directory, which is to be expected unless you have an executable script called "blah" in the PATH.
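For instance, a minimal reproduction of that spawn failure:

import subprocess

try:
    proc = subprocess.Popen(["blah", "-l"],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
except OSError as e:
    # Raised by Python itself: "blah" never ran, so there is no
    # stderr from a child process to capture.
    print("spawn failed:", e)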
I did a simple test, and wrote a bash script like this:
#!/bin/bash
echo "This is stdout"
echo "This is a failure on stderr" >&2
exit 1
After giving that script executable permissions, I repeated your example but instead called my script (named fail.sh in the local directory) as such:
import subprocess
cmd = './fail.sh'
proc = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
proc.communicate()
This returned ('This is stdout\n', 'This is a failure on stderr\n') as expected.
So perhaps what you're really seeing here is that whatever program you're trying to call (if it's not blah), simply doesn't exist on your PATH.
Also, a note on using str as a name in Python: str is a built-in type and should not be used as a name for a variable or function, unless you specifically want to shadow the built-in. The same goes for string, which is a standard library module.
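A quick illustration of why that shadowing bites:

str = "blah -l"       # the name str now refers to this string
words = str.split()   # still fine: a method on the string instance
text = str(42)        # TypeError: 'str' object is not callable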