Assign variable in GNU make from Python sys.exit string - python

I would like to assign a variable in gnu make to the sys.exit() from a Python script. A simple Python script, let's call it string_gen.py, might look like:
#!/usr/bin/python
import sys
def string_gen():
    return "string_file.txt"

if __name__ == "__main__":
    sys.exit(string_gen())
In the make file, a target might look like
.PRECIOUS: $(FILE_STRINGS)
$(STRING_DICT): $(STRING_DIR)/string_gen.py $(PYTHON)
	$(V)if [[ $(IF_BUILD_STRING_DICT) ]]; then STRING_FILE=$($(PYTHON) $< $(@D)); fi
	$(V)if [[ $(IF_BUILD_STRING_DICT) ]]; then $(PYTHON) $(STRING_DIR)/report.py $(STRING_FILE); fi
I would like to assign STRING_FILE to the result of sys.exit() when it is set in the Python script. I can run string_gen.py from the command line and it does print "string_file.txt" to the console, but this result is not saved to the STRING_FILE variable in the makefile. Is there a way to take the result of running a Python script and assign it to a variable in GNU make?
Edit: The makefile snippet is just for context and is not the full makefile. I pulled out one small, very small, part to show what I am trying to do.

You have many many issues here:
.PRECIOUS: $(FILE_STRINGS)
I'm assuming this is related to some part of the makefile you haven't shown us; it has no relevance to the recipe below.
$(STRING_DICT): $(STRING_DIR)/string_gen.py $(PYTHON)
	$(V)if [[ $(IF_BUILD_STRING_DICT) ]]; then STRING_FILE=$($(PYTHON) $< $(@D)); fi
First, note that if $(IF_BUILD_STRING_DICT) expands to the empty string, this will be a syntax error. You should quote it, for example '$(IF_BUILD_STRING_DICT)'.
Second, you should not use [[ ... ]]: make always runs /bin/sh, and this condition syntax is not POSIX: it's supported by bash and some other shells, but not by all POSIX shells. So if you run this on a system where /bin/sh is a strictly POSIX shell, it will fail. You should either use the POSIX form, like [ -n "$(IF_BUILD_STRING_DICT)" ], or else, if you want to require that anyone using your makefile use bash, add SHELL := /bin/bash to your makefile.
Third, this syntax is wrong: $($(PYTHON) $< $(@D)). This will expand the make variable named python .../string_gen.py ..., which is certainly empty/not set.
The $ character is special to make, so if you want to pass that character to the shell, for command substitution, you have to escape it by writing it as $$.
Fourth, as others have pointed out, Python's sys.exit() writes its string argument to stderr, and command substitution captures only stdout, so this won't work.
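To see this concretely, here is a minimal sketch (using subprocess to stand in for the shell's command substitution; the inline -c script is hypothetical) showing that a string passed to sys.exit() lands on stderr with exit status 1, not on stdout:
import subprocess
import sys

# sys.exit("...") prints its argument to stderr and exits with status 1
r = subprocess.run(
    [sys.executable, "-c", "import sys; sys.exit('string_file.txt')"],
    capture_output=True, text=True)
print(repr(r.stdout))   # '' -- nothing on stdout, so $(...) captures nothing
print(repr(r.stderr))   # 'string_file.txt\n' -- the message went to stderr
print(r.returncode)     # 1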
Fifth, this assigns the shell variable STRING_FILE: when writing makefiles it's critical to keep firmly in your mind the difference between make variables and shell variables. They are not the same at all. Recipes run in the shell and can only set shell variables.
Sixth, every logical line in a recipe is run in a separate shell which means that when the logical line ends the shell exits and all variables, etc. you have set will disappear. If you want the same variable to be used across multiple shell commands then you have to put them all into the same logical line.
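A quick illustration of that last point, sketched with Python's subprocess (each run() below starts its own /bin/sh, just as each logical recipe line does):
import subprocess

# Two separate shells: the variable set in the first is gone in the second.
subprocess.run("STRING_FILE=string_file.txt", shell=True)
subprocess.run('echo "got: ${STRING_FILE:-<unset>}"', shell=True)  # got: <unset>

# One shell: both commands see the same variable.
subprocess.run('STRING_FILE=string_file.txt; echo "got: $STRING_FILE"',
               shell=True)  # got: string_file.txt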
	$(V)if [[ $(IF_BUILD_STRING_DICT) ]]; then $(PYTHON) $(STRING_DIR)/report.py $(STRING_FILE); fi
As above, $(STRING_FILE) here is a make variable reference, but in the previous line you set the shell variable STRING_FILE. Which, anyway, is gone because the shell in the previous line exited.
You need to write this as:
$(STRING_DICT): $(STRING_DIR)/string_gen.py $(PYTHON)
	$(V)if [ -n '$(IF_BUILD_STRING_DICT)' ]; then \
		STRING_FILE=$$($(PYTHON) $< $(@D) 2>&1); \
		$(PYTHON) $(STRING_DIR)/report.py $$STRING_FILE; \
	fi
Or, if you wanted to do it with less typing:
$(STRING_DICT): $(STRING_DIR)/string_gen.py $(PYTHON)
	$(V)[ -z '$(IF_BUILD_STRING_DICT)' ] \
		|| $(PYTHON) $(STRING_DIR)/report.py $$($(PYTHON) $< $(@D) 2>&1)
The only reason I can think of for wanting to print the filename to stderr is that you have other text going to stdout. If that's the case (you don't show that in your example) then you should use 3>&2 2>&1 1>&3 instead of just 2>&1.
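Here is a small sketch of that descriptor swap in action (wrapped in subprocess only to keep it self-contained; the inner sh line is the interesting part). The swap captures what the script writes to stderr while the stdout text escapes the capture:
import subprocess

# Inside $(...), stdout is captured. 3>&2 2>&1 1>&3 swaps the two streams:
# the filename on stderr is captured; the other text bypasses the capture.
r = subprocess.run(
    ["sh", "-c",
     "STRING_FILE=$( { echo progress...; echo string_file.txt >&2; }"
     " 3>&2 2>&1 1>&3 ); echo \"captured: $STRING_FILE\""],
    capture_output=True, text=True)
print(r.stdout)   # captured: string_file.txt
print(r.stderr)   # progress... (the stdout text, rerouted to the original stderr)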

According to the sys.exit documentation:
any other object is printed to stderr and results in an exit code of 1.
Your output is printed to stderr, but you are reading from stdout.

If the purpose of your program is to simply output the name of a file, just use print. A string argument to sys.exit is intended as an error message, and is written to standard error instead of standard output (which is what the command substitution captures).
#!/usr/bin/python
def string_gen():
    return "string_file.txt"

if __name__ == "__main__":
    print(string_gen())
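With print, the filename goes to stdout, which is exactly what command substitution captures; a quick check, again sketched with subprocess:
import subprocess
import sys

# The filename is now on stdout, which $(...) in the makefile captures.
r = subprocess.run([sys.executable, "string_gen.py"],
                   capture_output=True, text=True)
print(repr(r.stdout))   # 'string_file.txt\n'
print(r.returncode)     # 0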

Related

Running a zsh script from a python script

I have a shell function that I would like to test from within a python script. It contains the double bracket [[ syntax which can only be interpreted by bash/zsh/ksh etc., but not the regular shell. In the original file, I read the function from a sh file using the builtins.open function. I simplified this case a bit and already added the file to the script as a string exactly the way it is loaded in the original file. I then paste it into subprocess with the shell argument set to True:
shell_function = """example_shell_function () {
#calling a python script which prints values to stdout
output_string=$(python3 test.py);
output_snippet=$(echo $output_string | tail -n1)
test_sign="#"
#if output_snippet contains "#" then enter condition
if [[ "$output_snippet" =~ "$test_sign" ]]
then
echo "condition met"
else
echo "condition not met"
fi
}"""
shell_commands = "\n".join(shell_function+["example_shell_function"])
process = subprocess.Popen(shell_commands_test_argument,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE, shell = True)
stdout, stderr = process.communicate()
I am running zsh on my machine, but subprocess uses the regular sh binary and returns the error [[: not found on the line where the double brackets appear in the script. I have tried modifying the subprocess call as follows, in order to make sure the function is interpreted by zsh instead of sh:
shell_commands = "\n".join([". /bin/zsh"]+shell_function+["example_shell_function"])
This returns the error /bin/sh: 2: /bin/zsh: : not found, in spite of the zsh binary being present at that location. What is the best way to run this function from within my python script?
Solution proposed by @MarkSetchell worked:
Use executable='/usr/bin/zsh' in your subprocess() call.
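Putting it together, a minimal sketch (assuming zsh lives at /usr/bin/zsh; adjust the path for your system):
import subprocess

shell_function = 'example_shell_function () { echo "condition met"; }'
shell_commands = "\n".join([shell_function, "example_shell_function"])

# shell=True normally runs /bin/sh; executable= swaps in zsh instead.
process = subprocess.Popen(shell_commands,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                           shell=True, executable="/usr/bin/zsh", text=True)
stdout, stderr = process.communicate()
print(stdout)   # condition met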

Variable notation when running python commands with arguments in a bash script

I have a bash script which runs a bunch of Python scripts, all with arguments. In order to keep the code clean, I wanted to use variables throughout the script:
#!/bin/bash
START=0
SCRIPT_PATH="/opt/scripts/"
IP="192.168.1.111"
if [ "$START" = "0" ]; then
printf "%s: Starting\n" "$DATE_TIME"
PORT=1234
TEST_FILE="$SCRIPT_PATH/Test Scripts/test.TXT"
SCRIPT="$SCRIPT_PATH/script1.py"
ARGS="-P $SCRIPT_PATH/script2.py -n 15 -p $PORT -i $IP"
python "$SCRIPT" ${ARGS} -f "${TEST_FILE}" > ./out.log 2>&1 &
fi
This code actually works, but there are a few things I don't understand:
Why, if I add quotes around ${ARGS}, are the arguments not parsed correctly by python? What would be the best way to write this?
What is the best method to add -f "${TEST_FILE}" to the ARGS variable without the whitespace tripping up python and throwing the error: "$SCRIPT_PATH/Test " not found?
When you wrap quotes around an argument list, the argument vector receives a single argument containing everything inside the quotes, so the argument parser fails to do its job properly, and you get your issue.
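You can see the difference by printing the argument vector; a sketch (the hypothetical arguments mirror the ones in the question):
import subprocess
import sys

show_argv = "import sys; print(sys.argv[1:])"

# Unquoted $ARGS in bash: word splitting yields separate arguments.
subprocess.run([sys.executable, "-c", show_argv, "-n", "15", "-p", "1234"])
# ['-n', '15', '-p', '1234']

# Quoted "$ARGS": the whole string arrives as one argument.
subprocess.run([sys.executable, "-c", show_argv, "-n 15 -p 1234"])
# ['-n 15 -p 1234']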
Regarding your second question, it is not easy to embed the quotes into the variable, because the quotes are parsed before being stored, and when you later expand the variable to run the command, they are missing and it fails. I have tried this several times with no success.
An alternative approach would mean that you modify a little your script to use a custom internal field separator (IFS) to manually tell what should be considered an argument and what not:
#!/bin/bash
START=0
SCRIPT_PATH="/opt/scripts/"
IP="192.168.1.111"
if [ "$START" = "0" ]; then
printf "%s: Starting\n" "$DATE_TIME"
PORT=1234
TEST_FILE="$SCRIPT_PATH/Test Scripts/test.TXT"
SCRIPT="$SCRIPT_PATH/script1.py"
OLD_IFS=$IFS
IFS=';'
ARGS="$SCRIPT;-P;$SCRIPT_PATH/script2.py;-n;15;-p;$PORT;-i;$IP;-f;$TEST_FILE"
python ${ARGS} > ./out.log 2>&1 &
IFS=$OLD_IFS
fi
As you can see, I replaced the spaces in ARGS with semicolons. This way, the contents of TEST_FILE will be considered a single argument by bash and will be properly placed in the argument vector. I also moved the script itself into the argument vector for simplicity; otherwise, due to this modification of IFS, python would not get the proper script path and would fail.
I was thinking something like this (with some cruft edited out to make it a standalone example):
#!/bin/bash
SCRIPT_PATH="/opt/scripts/"
IP="192.168.1.111"
PORT=1234
TEST_FILE="$SCRIPT_PATH/Test Scripts/test.TXT"
SCRIPT="$SCRIPT_PATH/script1.py"
declare -a ARGS
ARGS=(-P "$SCRIPT_PATH/script2.py" -n 15 -p "$PORT" -i "$IP")
ARGS+=(-f "${TEST_FILE}")
python3 -c "import sys; print(*enumerate(sys.argv), sep='\n')" "${ARGS[@]}"

Pass variable from Python to Bash

I am writing a bash script in which a small Python script is embedded. I want to pass a variable from Python to bash. After some searching, I only found methods based on os.environ.
I just cannot make it work. Here is my simple test.
#!/bin/bash
export myvar='first'
python - <<EOF
import os
os.environ["myvar"] = "second"
EOF
echo $myvar
I expected it to output second; however, it still outputs first. What is wrong with my script? Also, is there any way to pass the variable without export?
Summary
Thanks for all answers. Here is my summary.
A Python script embedded inside bash runs as a child process, which by definition cannot affect the parent bash environment.
The solution is to have Python print assignment strings and then eval them in bash.
An example is
#!/bin/bash
a=0
b=0
assignment_string=$(python -<<EOF
var1=1
var2=2
print('a={};b={}'.format(var1,var2))
EOF
)
eval $assignment_string
echo $a
echo $b
Unless Python is used to do some kind of operation on the original data, there's no need to import anything. The answer could be as lame as:
myvar=$(python - <<< "print 'second'") ; echo "$myvar"
Suppose for some reason Python is needed to spit out a bunch of bash variables and assignments, or (cautiously) compose code on-the-fly. An eval method:
myvar=first
eval "$(python - <<< "print('myvar=second')" )"
echo "$myvar"
Complementing Cyrus's useful comment on the question: you just can't do it. Here is why.
Setting an environment variable sets it only for the current process and any child processes it launches. os.environ sets it only for the Python process that is running your embedded script. When that process finishes, it goes away, and so does the environment variable.
You can pretty much do that with a shell script itself and just source it to reflect it on the current shell.
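A quick sketch of the direction in which environment variables flow, parent to child only:
import os
import subprocess

os.environ["myvar"] = "second"   # visible here and in any children we launch
subprocess.run("echo child sees: $myvar", shell=True)   # child sees: second
# But the bash process that launched this script still has myvar='first':
# nothing a child process does can rewrite its parent's environment.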
There are a few "dirty" ways of getting something like this done. Here is an example:
#!/bin/bash
myvar=$(python - <<EOF
print "second"
EOF
)
echo "$myvar"
The output of the python process is stored in a bash variable. It gets a bit messy if you want to return more complex stuff, though.
You can make Python return a value and pass it to bash:
pfile.py
print(100)
bfile.sh
var=$(python pfile.py)
echo "$var"
output: 100
Well, this may not be what you want, but one option could be running the other bash commands in Python using subprocess:
import subprocess
x = 400
subprocess.call(["echo", str(x)])
But this is more of a temporary workaround. The other solutions are more along the lines of what you are looking for.
Hope I was able to help!

Handling specific Python error within Bash call?

I am using line_profiler, which allows you to drop @profile decorators anywhere in a python codebase and returns line-by-line output.
However, if you try to execute python code that contains such a @profile decorator without loading the line_profiler module, the code will fail with a NameError, since the decorator is defined and injected by this external library.
I'd like a bash command that attempts to run my python script with vanilla python. Then, if and only if the error consists of NameError, I want to give it a second try. This is what I have got so far:
python -u $file || python -m kernprof -l -v --outfile=/dev/null $file
The problem is of course that if my python code has ANY error at all, be it ValueError or IndentationError or anything, it tries the profiler. I want to run the profiler ONLY if the string NameError: name 'profile' is not defined is found within stderr.
Wouldn't it be better to monkey-patch profile when line_profiler is not present?
Something like
try:
    import line_profiler
except ImportError:
    import warnings
    warnings.warn("Profile disabled")

    def profile(fn):
        def wrapper(*args, **kw):
            return fn(*args, **kw)
        return wrapper
This way your code runs in either case without complicating matters.
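One wrinkle (not shown above): kernprof injects profile as a builtin at runtime, so even with line_profiler installed, plain python leaves the name undefined. A variant sketch that guards on the name itself covers both cases:
# Fallback: make @profile a no-op when kernprof hasn't injected it.
try:
    profile                # injected into builtins by kernprof
except NameError:
    def profile(fn):
        return fn

@profile
def work():
    return sum(range(100))

print(work())   # runs under both `python script.py` and `kernprof -l script.py`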
Here's a usable Bash solution that preserves stdout and stderr as separate streams (with the caveat that stderr appears after stdout) and only checks stderr for the error message (which probably is overkill though).
It goes the easy route and simply saves the stderr output to a file. It also handles script names that contain spaces (by properly quoting variable expansions where needed) and/or start with - (by passing -- before the filename to switch off flag processing) as it's an OCD pet peeve of mine.
On success or if there is an error that is not the expected error, the stderr of the first python command is shown. Otherwise (for the expected error), it is hidden.
Usage is $ ./check <script>.
#!/bin/bash
if [[ $# -ne 1 ]]; then
    echo "Expected one argument: the script" >&2
    exit 1
fi

script=$1

if [[ ! -e $script ]]; then
    echo "'$script' does not exist or is not a regular file" >&2
    exit 1
fi

if ! python -- "$script" 2>saved_stderr &&
        grep -q "NameError: name 'profile' is not defined" saved_stderr; then
    # Try again with the kernprof module.
    python -m kernprof -l -v --outfile=/dev/null -- "$script"
else
    # Either success or an unexpected error. Show stderr.
    cat saved_stderr >&2
fi
rm saved_stderr
To check if the return status of a command is zero (i.e., success), it suffices to do
if <cmd>; then <if successful>; fi
! negates the exit status, so if ! <cmd> ... can be used to check for failure. ! only applies to the python command above, not all of python ... && grep ....
>&2 redirects stdout to stderr. (It's the same as 1>&2 but saves a single character, which is a bit silly, but I included for illustrative purposes as it's a common idiom.)
Creating a simple Python wrapper would seem a lot more straightforward, because inside Python, you have access to the things which go wrong.
Assuming your $file uses the common __name__ == '__main__' idiom something like this:
if __name__ == '__main__':
    main()
you can create a wrapper something like
import yourfile

try:
    yourfile.main()
except NameError:
    import kernprof
    # hack hack, quickly constructed from looking at main() in kernprof.py
    prof = kernprof.ContextualProfile()
    execfile_ = execfile
    ns = locals()
    try:
        prof.runctx('execfile_(%r, globals())' % (yourfile.__file__,), ns, ns)
    finally:
        prof.print_stats()

Getting console output of a Perl script through Python

There are a variety of posts and resources explaining how to use Python to get output of an outside call. I am familiar with these; I've used Python to get the output of jars and executables several times, when it was not realistic or economical to re-implement the functionality of that jar/executable inside Python itself.
I am trying to call a Perl script via Python's subprocess module, but I have had no success with this particular Perl script. I carefully followed the answers here, Call Perl script from Python, but had no results.
I was able to get the output of this test Perl script from this question/answer: How to call a Perl script from Python, piping input to it?
#!/usr/bin/perl
use strict;
use warnings;
my $name = shift;
print "Hello $name!\n";
Using this block of Python code:
import subprocess
var = "world"
args_test = ['perl', 'perl/test.prl', var]
pipe = subprocess.Popen(args_test, stdout=subprocess.PIPE)
out, err = pipe.communicate()
print out, err
However, if I swap out the arguments and the Perl script with the one I need output from, I get no output at all.
args = ['perl', 'perl/my-script.prl', '-a', 'perl/file-a.txt',
'-t', 'perl/file-t.txt', 'input.txt']
which runs correctly when entered on the command line, e.g.
>perl perl/my-script.prl -a perl/file-a.txt -t perl/file-t.txt input.txt
but this produces no output when called via subprocess:
pipe = subprocess.Popen(args, stdout=subprocess.PIPE)
out, err = pipe.communicate()
print out, err
I've done another sanity check as well. This correctly outputs the help message of Perl as a string:
import subprocess
pipe = subprocess.Popen(['perl', '-h'], stdout=subprocess.PIPE)
out, err = pipe.communicate()
print out, err
As shown here:
>>> ================================ RESTART ================================
>>>
Usage: perl [switches] [--] [programfile] [arguments]
-0[octal] specify record separator (\0, if no argument)
-a autosplit mode with -n or -p (splits $_ into @F)
-C[number/list] enables the listed Unicode features
-c check syntax only (runs BEGIN and CHECK blocks)
-d[:debugger] run program under debugger
-D[number/list] set debugging flags (argument is a bit mask or alphabets)
-e program one line of program (several -e's allowed, omit programfile)
-f don't do $sitelib/sitecustomize.pl at startup
-F/pattern/ split() pattern for -a switch (//'s are optional)
-i[extension] edit <> files in place (makes backup if extension supplied)
-Idirectory specify @INC/#include directory (several -I's allowed)
-l[octal] enable line ending processing, specifies line terminator
-[mM][-]module execute "use/no module..." before executing program
-n assume "while (<>) { ... }" loop around program
-p assume loop like -n but print line also, like sed
-P run program through C preprocessor before compilation
-s enable rudimentary parsing for switches after programfile
-S look for programfile using PATH environment variable
-t enable tainting warnings
-T enable tainting checks
-u dump core after parsing program
-U allow unsafe operations
-v print version, subversion (includes VERY IMPORTANT perl info)
-V[:variable] print configuration summary (or a single Config.pm variable)
-w enable many useful warnings (RECOMMENDED)
-W enable all warnings
-x[directory] strip off text before #!perl line and perhaps cd to directory
-X disable all warnings
None
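One thing worth checking (an assumption, since the question doesn't show the script's error handling): err prints as None in the runs above because stderr was never piped. Capturing it as well would reveal any message the Perl script writes there:
import subprocess

args = ['perl', 'perl/my-script.prl', '-a', 'perl/file-a.txt',
        '-t', 'perl/file-t.txt', 'input.txt']

# Also pipe stderr so an error message from the script isn't silently lost.
pipe = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = pipe.communicate()
print(out)   # whatever reached stdout
print(err)   # no longer None: stderr is captured too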
