Handling specific Python error within Bash call?

I am using line_profiler, which lets you drop @profile decorators anywhere in a Python codebase and get line-by-line timing output.
However, if you execute code containing such a @profile decorator without loading the line_profiler module, the code fails with a NameError, because the decorator is defined and injected by that external library.
I'd like a bash command that attempts to run my python script with vanilla python. Then, if and only if the error consists of NameError, I want to give it a second try. This is what I have got so far:
python -u $file || python -m kernprof -l -v --outfile=/dev/null $file
The problem is of course that if my Python code has ANY error at all, be it a ValueError or an IndentationError or anything else, it tries the profiler. I want to run the profiler ONLY if the string NameError: name 'profile' is not defined is found within stderr.

Wouldn't it be better to monkey-patch profile when line_profiler is not present?
Something like
try:
    import line_profiler
except ImportError:
    import warnings
    warnings.warn("Profile disabled")

    def profile(fn):
        def wrapper(*args, **kw):
            return fn(*args, **kw)
        return wrapper
This way your code runs in either case without complicating matters.
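One caveat worth adding: even when line_profiler is installed, a plain python run still leaves profile undefined, because it's kernprof that injects the name into builtins at run time. A slightly more defensive variant of the same shim (a sketch, assuming Python 3) therefore keys off builtins rather than the import:
import builtins

# Define a no-op profile only if kernprof hasn't already injected one.
if not hasattr(builtins, 'profile'):
    def profile(fn):
        return fn  # pass the function through unchanged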

Here's a usable Bash solution that preserves stdout and stderr as separate streams (with the caveat that stderr appears after stdout) and checks only stderr for the error message (which is probably overkill, though).
It goes the easy route and simply saves the stderr output to a file. It also handles script names that contain spaces (by properly quoting variable expansions where needed) and/or start with - (by passing -- before the filename to switch off flag processing), as it's an OCD pet peeve of mine.
On success or if there is an error that is not the expected error, the stderr of the first python command is shown. Otherwise (for the expected error), it is hidden.
Usage is $ ./check <script>.
#!/bin/bash

if [[ $# -ne 1 ]]; then
    echo "Expected one argument: the script" >&2
    exit 1
fi

script=$1

if [[ ! -f $script ]]; then
    echo "'$script' does not exist or is not a regular file" >&2
    exit 1
fi

if ! python -- "$script" 2>saved_stderr &&
        grep -q "NameError: name 'profile' is not defined" saved_stderr; then
    # Try again with the kernprof module.
    python -m kernprof -l -v --outfile=/dev/null -- "$script"
else
    # Either success or an unexpected error. Show stderr.
    cat saved_stderr >&2
fi

rm saved_stderr
To check whether the return status of a command is zero (i.e., success), it suffices to do
if <cmd>; then <if successful>; fi
! negates the exit status, so if ! <cmd> ... can be used to check for failure. The ! applies only to the python command above, not to the whole python ... && grep ... compound.
>&2 redirects stdout to stderr. (It's the same as 1>&2 but saves a single character, which is a bit silly, but I included it for illustrative purposes as it's a common idiom.)
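As a tiny standalone illustration of both idioms (the grep target is arbitrary):
if ! grep -q root /etc/passwd; then
    echo "no root entry found" >&2    # diagnostic goes to stderr, not stdout
fi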

Creating a simple Python wrapper would seem a lot more straightforward, because inside Python you have access to the things that go wrong.
Assuming your $file uses the common __name__ == '__main__' idiom, something like this:
if __name__ == '__main__':
    main()
you can create a wrapper something like
import yourfile

try:
    yourfile.main()
except NameError:
    import kernprof
    # hack hack, quickly constructed from looking at main() in kernprof.py
    prof = kernprof.ContextualProfile()
    execfile_ = execfile  # Python 2 builtin
    ns = locals()
    try:
        prof.runctx('execfile_(%r, globals())' % ('yourfile.py',), ns, ns)
    finally:
        prof.print_stats()
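Note that execfile exists only on Python 2. On Python 3, a wrapper in the same spirit could use runpy instead; here is a minimal sketch (yourfile.py is a placeholder, and the retry injects a no-op profile rather than wiring up kernprof's profiler):
import builtins
import runpy

try:
    runpy.run_path('yourfile.py', run_name='__main__')
except NameError:
    # Inject a pass-through profile into builtins and run the script again.
    builtins.profile = lambda fn: fn
    runpy.run_path('yourfile.py', run_name='__main__')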

Related

Assign variable in gnu make from Python sys.exit string

I would like to assign a variable in gnu make to the sys.exit() from a Python script. A simple Python script, let's call it string_gen.py, might look like:
#!/usr/bin/python
import sys

def string_gen():
    return "string_file.txt"

if __name__ == "__main__":
    sys.exit(string_gen())
In the make file, a target might look like
.PRECIOUS: $(FILE_STRINGS)
$(STRING_DICT): $(STRING_DIR)/string_gen.py $(PYTHON)
	$(V)if [[ $(IF_BUILD_STRING_DICT) ]]; then STRING_FILE=$($(PYTHON) $< $(@D)); fi
	$(V)if [[ $(IF_BUILD_STRING_DICT) ]]; then $(PYTHON) $(STRING_DIR)/report.py $(STRING_FILE); fi
I would like to assign STRING_FILE to the result of sys.exit() when set in the Python script. I can run the report.py from the command line and it does print "string_file.txt" to the console, but this result is not saved to the STRING_FILE variable in the make file. Is there a way to pass the result of running a Python script and assign it to variable in gnu make?
Edit: The makefile is only to provide some context and isn't the full makefile. I took out just one small, very small, part in an attempt to show what I am trying to do.
You have many many issues here:
.PRECIOUS: $(FILE_STRINGS)
I'm assuming this is related to some part of the makefile you haven't shown us; it has no relevance to the recipe below.
$(STRING_DICT): $(STRING_DIR)/string_gen.py $(PYTHON)
	$(V)if [[ $(IF_BUILD_STRING_DICT) ]]; then STRING_FILE=$($(PYTHON) $< $(@D)); fi
First, note that if $(IF_BUILD_STRING_DICT) expands to the empty string, this will be a syntax error. You should quote it, for example '$(IF_BUILD_STRING_DICT)'.
Second, you should not use [[ ... ]]: make always runs /bin/sh, and this condition syntax is not POSIX: it's supported by bash and some other shells, but not by all POSIX shells. So if you run it on a system where /bin/sh is a strictly POSIX shell, this will fail. You should either use the POSIX form, like [ -n "$(IF_BUILD_STRING_DICT)" ], or else, if you want to require that anyone using your makefile use bash, add SHELL := /bin/bash to your makefile.
Third, this syntax is wrong: $($(PYTHON) $< $(@D)). This will expand the make variable named python .../string-gen.py ..., which is certainly empty/not set.
The $ character is special to make, so if you want to pass that character to the shell, for command substitution, you have to escape it by writing it as $$.
Fourth, as others have pointed out python sys.exit() writes to stderr and command substitution captures only stdout, so this won't work.
Fifth, this assigns the shell variable STRING_FILE: when writing makefiles it's critical to keep firmly in your mind the difference between make variables and shell variables. They are not the same at all. Recipes run in the shell and can only set shell variables.
Sixth, every logical line in a recipe is run in a separate shell which means that when the logical line ends the shell exits and all variables, etc. you have set will disappear. If you want the same variable to be used across multiple shell commands then you have to put them all into the same logical line.
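A throwaway target makes this visible (a hypothetical example, not from the original makefile; recipe lines must start with a tab):
demo:
	@FOO=hello
	@echo "first: '$$FOO'"              # new shell: prints first: ''
	@FOO=hello; echo "second: '$$FOO'"  # same shell: prints second: 'hello'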
	$(V)if [[ $(IF_BUILD_STRING_DICT) ]]; then $(PYTHON) $(STRING_DIR)/report.py $(STRING_FILE); fi
As above, $(STRING_FILE) here is a make variable reference, but in the previous line you set the shell variable STRING_FILE. Which, anyway, is gone because the shell in the previous line exited.
You need to write this as:
$(STRING_DICT): $(STRING_DIR)/string_gen.py $(PYTHON)
	$(V)if [ -n '$(IF_BUILD_STRING_DICT)' ]; then \
	    STRING_FILE=$$($(PYTHON) $< $(@D) 2>&1); \
	    $(PYTHON) $(STRING_DIR)/report.py $$STRING_FILE; \
	fi
Or, if you wanted to do it with less typing:
$(STRING_DICT): $(STRING_DIR)/string_gen.py $(PYTHON)
	$(V)[ -z '$(IF_BUILD_STRING_DICT)' ] \
	|| $(PYTHON) $(STRING_DIR)/report.py $$($(PYTHON) $< $(@D) 2>&1)
The only reason I can think of for wanting to print the filename to stderr is that you have other text going to stdout. If that's the case (you don't show that in your example) then you should use 3>&2 2>&1 1>&3 instead of just 2>&1.
According to the sys.exit documentation:
any other object is printed to stderr and results in an exit code of 1.
Your output is printed to stderr, but you are reading from stdout.
If the purpose of your program is to simply output the name of a file, just use print. A string argument to sys.exit is intended as an error message, and is written to standard error instead of standard output (which is what the command substitution captures).
#!/usr/bin/python
def string_gen():
    return "string_file.txt"

if __name__ == "__main__":
    print(string_gen())

Pass variable from Python to Bash

I am writing a bash script with a small python script embedded in it. I want to pass a variable from python to bash. After some searching, I only found methods based on os.environ.
I just cannot make it work. Here is my simple test.
#!/bin/bash
export myvar='first'
python - <<EOF
import os
os.environ["myvar"] = "second"
EOF
echo $myvar
I expected it to output second; however, it still outputs first. What is wrong with my script? Also, is there any way to pass a variable without export?
summary
Thanks for all the answers. Here is my summary.
A python script embedded inside bash runs as a child process, which by definition cannot affect the parent bash environment.
The solution is to pass assignment strings out from python and eval them subsequently in bash.
An example is
#!/bin/bash
a=0
b=0
assignment_string=$(python -<<EOF
var1=1
var2=2
print('a={};b={}'.format(var1,var2))
EOF
)
eval $assignment_string
echo $a
echo $b
Unless Python is used to do some kind of operation on the original data, there's no need to import anything. The answer could be as lame as:
myvar=$(python - <<< "print('second')") ; echo "$myvar"
Suppose for some reason Python is needed to spit out a bunch of bash variables and assignments, or (cautiously) compose code on-the-fly. An eval method:
myvar=first
eval "$(python - <<< "print('myvar=second')" )"
echo "$myvar"
Complementing Cyrus's useful comment on the question: you just can't do it. Here is why:
Setting an environment variable sets it only for the current process and any child processes it launches. os.environ sets it only inside the python process that the shell launched to run your snippet. When that process finishes, it goes away, and so does the environment variable.
You can pretty much achieve this by writing the assignments to a shell script and sourcing it, so they take effect in the current shell.
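A minimal sketch of that approach (vars.sh is just a scratch file name):
#!/bin/bash
python - <<'EOF' > vars.sh
print("myvar=second")
EOF
source vars.sh   # runs in the current shell, so the assignment survives
echo "$myvar"    # prints: second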
There are a few "dirty" ways of getting something like this done. Here is an example:
#!/bin/bash
myvar=$(python - <<EOF
print "second"
EOF
)
echo "$myvar"
The output of the python process is stored in a bash variable. It gets a bit messy if you want to return more complex stuff, though.
You can make python return a value and pass it to bash:
pfile.py
print(100)
bfile.sh
var=$(python pfile.py)
echo "$var"
output: 100
Well, this may not be what you want, but one option could be running the other bash commands from python using subprocess:
import subprocess

x = 400
subprocess.call(["echo", str(x)])
But this is more of a temporary workaround. The other solutions are more along the lines of what you are looking for.
Hope I was able to help!

Getting console output of a Perl script through Python

There are a variety of posts and resources explaining how to use Python to get output of an outside call. I am familiar with using these--I've used Python to get output of jars and exec several times, when it was not realistic or economical to re-implement the functionality of that jar/exec inside Python itself.
I am trying to call a Perl script via Python's subprocess module, but I have had no success with this particular Perl script. I carefully followed the answers here, Call Perl script from Python, but had no results.
I was able to get the output of this test Perl script from this question/answer: How to call a Perl script from Python, piping input to it?
#!/usr/bin/perl
use strict;
use warnings;
my $name = shift;
print "Hello $name!\n";
Using this block of Python code:
import subprocess
var = "world"
args_test = ['perl', 'perl/test.prl', var]
pipe = subprocess.Popen(args_test, stdout=subprocess.PIPE)
out, err = pipe.communicate()
print out, err
However, if I swap out the arguments and the Perl script with the one I need output from, I get no output at all.
args = ['perl', 'perl/my-script.prl', '-a', 'perl/file-a.txt',
'-t', 'perl/file-t.txt', 'input.txt']
which runs correctly when entered on the command line, e.g.
>perl perl/my-script.prl -a perl/file-a.txt -t perl/file-t.txt input.txt
but this produces no output when called via subprocess:
pipe = subprocess.Popen(args, stdout=subprocess.PIPE)
out, err = pipe.communicate()
print out, err
I've done another sanity check as well. This correctly outputs the help message of Perl as a string:
import subprocess
pipe = subprocess.Popen(['perl', '-h'], stdout=subprocess.PIPE)
out, err = pipe.communicate()
print out, err
As shown here:
>>> ================================ RESTART ================================
>>>
Usage: perl [switches] [--] [programfile] [arguments]
-0[octal] specify record separator (\0, if no argument)
-a autosplit mode with -n or -p (splits $_ into @F)
-C[number/list] enables the listed Unicode features
-c check syntax only (runs BEGIN and CHECK blocks)
-d[:debugger] run program under debugger
-D[number/list] set debugging flags (argument is a bit mask or alphabets)
-e program one line of program (several -e's allowed, omit programfile)
-f don't do $sitelib/sitecustomize.pl at startup
-F/pattern/ split() pattern for -a switch (//'s are optional)
-i[extension] edit <> files in place (makes backup if extension supplied)
-Idirectory specify @INC/#include directory (several -I's allowed)
-l[octal] enable line ending processing, specifies line terminator
-[mM][-]module execute "use/no module..." before executing program
-n assume "while (<>) { ... }" loop around program
-p assume loop like -n but print line also, like sed
-P run program through C preprocessor before compilation
-s enable rudimentary parsing for switches after programfile
-S look for programfile using PATH environment variable
-t enable tainting warnings
-T enable tainting checks
-u dump core after parsing program
-U allow unsafe operations
-v print version, subversion (includes VERY IMPORTANT perl info)
-V[:variable] print configuration summary (or a single Config.pm variable)
-w enable many useful warnings (RECOMMENDED)
-W enable all warnings
-x[directory] strip off text before #!perl line and perhaps cd to directory
-X disable all warnings
None
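One thing worth checking (an aside, not part of the original post): none of the Popen calls above capture stderr, so anything the Perl script writes there, errors included, is silently discarded; the None printed for err above appears precisely because stderr was never piped. A variant that captures both streams:
pipe = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = pipe.communicate()
print(out)
print(err)   # now holds the Perl script's stderr instead of None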

Unit Test for Bash completion script

I would like to write a unit test for a (rather complex) Bash completion script, preferably with Python; just something that gets the values of a Bash completion programmatically.
The test should look like this:
def test_completion():
    # trigger_completion should return what a user should get on triggering
    # Bash completion like this: 'pbt createkvm<TAB>'
    assert trigger_completion('pbt createkvm') == "module1 module2 module3"
How can I simulate Bash completion programmatically to check the completion values inside a testsuite for my tool?
Say you have a bash-completion script in a file called asdf-completion, containing:
_asdf() {
    COMPREPLY=()
    local cur prev
    cur=$(_get_cword)
    COMPREPLY=( $( compgen -W "one two three four five six" -- "$cur" ) )
    return 0
}
complete -F _asdf asdf
This uses the shell function _asdf to provide completions for the fictional asdf command. If we set the right environment variables (from the bash man page), then we can get the same result, which is the placement of the potential expansions into the COMPREPLY variable. Here's an example of doing that in a unittest:
import subprocess
import unittest

class BashTestCase(unittest.TestCase):
    def test_complete(self):
        completion_file = "asdf-completion"
        partial_word = "f"
        cmd = ["asdf", "other", "arguments", partial_word]
        cmdline = ' '.join(cmd)
        out = subprocess.Popen(
            ['bash', '-i', '-c',
             r'source {compfile}; COMP_LINE="{cmdline}" COMP_WORDS=({cmdline}) COMP_CWORD={cword} COMP_POINT={cmdlen} $(complete -p {cmd} | sed "s/.*-F \\([^ ]*\\) .*/\\1/") && echo ${{COMPREPLY[*]}}'.format(
                 compfile=completion_file,
                 cmdline=cmdline,
                 cmdlen=len(cmdline),
                 cmd=cmd[0],
                 cword=cmd.index(partial_word))],
            stdout=subprocess.PIPE)
        stdout, stderr = out.communicate()
        self.assertEqual(stdout, "four five\n")

if __name__ == '__main__':
    unittest.main()
This should work for any completions that use -F, but may work for others as well.
je4d's comment to use expect is a good one for a more complete test.
bonsaiviking's solution almost worked for me. I had to change the bash script string: I added an extra ';' separator to the executed bash script, otherwise the execution wouldn't work on Mac OS X. Not really sure why.
I also generalized the initialization of the various COMP_ arguments a bit to handle the various cases I ended up with.
The final solution is a helper class to test bash completion from python so that the above test would be written as:
import unittest
from completion import BashCompletionTest

class AdsfTestCase(BashCompletionTest):
    def test_orig(self):
        self.run_complete("other arguments f", "four five")

    def run_complete(self, command, expected):
        completion_file = "adsf-completion"
        program = "asdf"
        super(AdsfTestCase, self).run_complete(completion_file, program, command, expected)

if __name__ == '__main__':
    unittest.main()
The completion lib is located under https://github.com/lacostej/unity3d-bash-completion/blob/master/lib/completion.py

How to pass variables from python script to bash script

I have a bash script, a.sh, and in it I call a python script, b.py.
The python script calculates something, and I want it to return a value that will be used later in a.sh .
I know I can do
In a.sh:
var=`python b.py`
In b.py:
print(x) # where x is the value I want to pass
But this is not so convenient, because I also print other messages in b.py.
Is there any better way to do it?
Edit:
What I'm doing now is just
var=`python b.py | tail -n 1`
It means I can print many things inside b.py, but only the last line (the last print command, assuming it doesn't contain "\n" in it) will be stored in var.
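For instance (a made-up inline script standing in for b.py):
var=$(python - <<'EOF' | tail -n 1
print("log: starting up")
print("log: crunching numbers")
print(42)   # the value we actually want, printed last
EOF
)
echo "$var"   # prints: 42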
Thanks for all the answers!
I would print it to a file chosen on the command line, then I'd get that value in bash with something like cat.
So you'd go:
python b.py tempfile.txt
var=`cat tempfile.txt`
rm tempfile.txt
[EDIT, another idea based on other answers]
Your other option is to format your output carefully so you can use utilities like head/tail to pipe only the first/last lines into your next program.
I believe the answer is
.py
import sys

a = ['zero', 'one', 'two', 'three']
b = int(sys.argv[1])
# your python script can still print to stderr if it likes to
print("I am now converting", file=sys.stderr)
result = a[b]
print(result)
.sh
#!/bin/sh
num=2
text=`python numtotext.py $num`
echo "$num as text is $text"
In your python script, redirect the other messages to stderr, and print only x to stdout:
import sys
...
print("another message", file=sys.stderr)
print(x)
in the bash script:
...
var=`python b.py 2>/dev/null`
Also, if x is an integer between 0 and 255, you can use the exit code to pass it to bash:
import sys
...
sys.exit(x)
in bash:
python b.py
var=$?
Please note that the exit code is conventionally used to indicate errors (0 means no error), so this breaks that convention.
I'm not sure about "better", but you could write the result to a file then read it back in in Bash and delete it afterwards.
This is definitely ugly, but it's something to keep in mind in case nothing else does the trick.
In bash, backticks work. I usually do something like:
PIP_PATH=`python -c "from distutils.sysconfig \
import get_python_lib; print(get_python_lib())"`
POWERLINE_PATH=$PIP_PATH"/powerline"
echo $POWERLINE_PATH
You can write the output to a temporary file, and have the shell read and delete that file. This is even less convenient, but reserves stdout for communication with the user.
Alternatively, you can use some kind of format for stdout: the first n lines are certain variables, the rest will be echoed by the parent shell to the user. Also not convenient, but avoids using tempfiles.
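A sketch of that convention (assuming the value of interest is printed on the first line):
out=$(python b.py)
var=$(printf '%s\n' "$out" | head -n 1)   # first line: the value for bash
printf '%s\n' "$out" | tail -n +2         # remaining lines: echoed to the user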
In a shell script you can use something like python_ret=$(python b.py 2>&1).
It then contains all print messages and error output from b.py. You can then search it for a string you are looking for. For example, if you are looking for 'Exception', you can do it like this:
if [[ $python_ret == *"Exception:"* ]]; then
    echo "Got some exception."
    exit 1
fi
Better to forward the printed value from the python script to a temp file before assigning it to a bash variable. I believe there's no need to remove the file in this case.
#!/bin/bash
python b.py > tempfile.txt
var=`cat tempfile.txt`
Then, get the value:
echo $var
