I have the following in a Python script (using Python 3.4), which runs if there is any failure:
exit(1)
The purpose of this python script is to print() some value, so that it may be assigned to a shell variable like so:
#!/bin/bash
set -e
result=$(python my-script.py)
echo $?
However, if the script fails, the echo above prints 0. I want set -e to immediately interrupt and fail the script if the python script fails (i.e. returns a non-zero exit code).
How can I make this work? I tried set -eo pipefail as well in my bash script and I've noticed no difference.
I'm running Ubuntu 15.
EDIT
My python script (verbatim) just in case it is the suspect here...
import re
import sys
version_regex = r'(?:(?:release|hotfix)\/|^)([\d.]+)-\d+-g[a-f\d]+$'
result = re.search(version_regex, sys.argv[1])
if not result:
    exit(1)
print(result.group(1))
Note that I have tried both sys.exit(1) and exit(1) in my experiments but I never saw my bash script fail.
EDIT 2
parsed_version=$(python parse-git-describe.py $describe_result; echo $?)
echo $parsed_version
echo $?
The above bash script gives me the following output:
1
0
Note that the script parse-git-describe.py is the same as the python script provided earlier.
EDIT 3
Apparently local causes this to break. EDIT 2 above was wrong; that was the result with local inside a shell function:
foo()
{
    local parsed_version=$(python parse-git-describe.py $describe_result; echo $?)
    echo $parsed_version
    echo $?
}
Result is:
1
0
But if I remove local it works fine?
foo()
{
    parsed_version=$(python parse-git-describe.py $describe_result; echo $?)
    echo $parsed_version
    echo $?
}
Result is:
1
1
Why does local matter?
You can set the exit code by finishing your script with a call to sys.exit(code).
In bash, local can only be used within a function. (So your bash script code above doesn't work outside one either.)
Why don't you call your python script in the bash script and check the return value and then jump out of your script if necessary?
$ python -c "exit(0)"
$ [ $? == 0 ] || echo "fail"
$ python -c "exit(1)"
$ [ $? == 0 ] || echo "fail"
fail
You can easily adapt this to your shell script, I guess.
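For instance, a minimal sketch adapted to the original script (reusing my-script.py from the question; the describe-string argument is assumed):

result=$(python my-script.py "$1")
if [ $? -ne 0 ]; then
    echo "my-script.py failed" >&2
    exit 1   # abort the calling script
fi
echo "version: $result"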
Why does local matter?
The local keyword specifies the variable's scope. The following sample script should illustrate:
#!/bin/bash
HELLO=Hello # global variable, HELLO
function hello {
    local HELLO=World # local variable with same name; diff scope
    echo $HELLO # this is the local, not global!
}
echo $HELLO # print global
hello # function call!
echo $HELLO # global unchanged!
Running this code from the shell (placed in test.sh script) produces:
➜ Projects ./test.sh
Hello
World
Hello
When you use local together with a command substitution, as in local parsed_version=$(...), the $? you check afterwards is the exit status of the local builtin itself, which is 0 whenever the declaration succeeds; the non-zero status of the command substitution is masked. With a plain assignment (no local), $? is the exit status of the command substitution, which is why removing local makes the failure visible.
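If you want to keep the variable local and still see the command's status, declare and assign in two steps; a minimal sketch based on the question's function:

foo()
{
    local parsed_version
    parsed_version=$(python parse-git-describe.py "$describe_result")
    echo $?   # exit status of the command substitution, not of local
}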
Ubuntu 18.04:
I created a script to check if a python script is running and to start it if not. I'm using echo statements for debugging that output to verify.txt.
The debug statements inside the IF statement are not executing. I believe it's the syntax comparing a variable to an int. Does this look correct?
# Script to check if python script is running
echo "(verify.sh): checking to see if scripts are running..." > verify.txt
output="$(pgrep -f -c myprogram.py)"
echo "(verify.sh): assigned the output correctly as $output" > verify.txt
if [[$output -eq 0]];
then
    echo "(verify.sh): entered the if loop" > verify.txt
    python /home/User/myprogram.py &
    echo "(verify.sh): started myprogram.py" > verify.txt
fi
Note: The file name is verify.sh, so I added it to the echo just to keep track of who was writing to the debug file.
You need to add some spaces, it should look like if [[ $output -eq 0 ]].
Try it out.
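For reference, a minimal corrected sketch; note the spaces inside [[ ]]. (Using >> so each debug line appends rather than overwrites verify.txt is a side suggestion, not part of the fix.)

output="$(pgrep -f -c myprogram.py)"
if [[ $output -eq 0 ]]; then
    echo "(verify.sh): entered the if block" >> verify.txt
    python /home/User/myprogram.py &
fi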
I have a bash script that looks like this:
python myPythonScript.py
python myOtherScript.py $VarFromFirstScript
and myPythonScript.py looks like this:
print("Running some code...")
VarFromFirstScript = someFunc()
print("Now I do other stuff")
The question is, how do I get the variable VarFromFirstScript back to the bash script that called myPythonScript.py?
I tried os.environ['VarFromFirstScript'] = VarFromFirstScript but this doesn't work (I assume this means that the python environment is a different env from the calling bash script).
You cannot propagate an environment variable to the parent process. But you can print the variable and assign that output back to a variable name in your shell:
VarFromFirstScript=$(python myOtherScript.py $VarFromFirstScript)
You must not print anything else to stdout in your code; use stderr for any other messages:
import sys

sys.stderr.write("Running some code...\n")
VarFromFirstScript = someFunc()
sys.stdout.write(VarFromFirstScript)
An alternative would be to create a file with the variables to set, and have your shell parse it (you could generate a shell script that the parent shell then sources):
import shlex

with open("shell_to_source.sh", "w") as f:
    f.write("VarFromFirstScript={}\n".format(shlex.quote(VarFromFirstScript)))
(shlex.quote avoids shell code injection from Python; courtesy of Charles Duffy.)
then after calling python:
source ./shell_to_source.sh
You can only pass environment variables from parent process to child.
When the child process is created, the environment block is copied to the child. The child has a copy, so any changes in the child process affect only the child's copy (and any further children it creates).
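A quick demonstration of the copy semantics (FOO is just an illustrative name):

export FOO=parent
bash -c 'FOO=child; echo "in child: $FOO"'   # the child changes only its own copy
echo "in parent: $FOO"                       # still prints: parent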
To communicate with the parent, the simplest way is to use command substitution in bash, where we capture stdout:
Bash script:
#!/bin/bash
var=$(python myPythonScript.py)
echo "Value in bash: $var"
Python script:
print("Hollow world!")
Sample run:
$ bash gash.sh
Value in bash: Hollow world!
If you have other print statements in Python, you will need to filter the output down to only the data you require, possibly by marking the data with a well-known prefix.
If you have many print statements in Python then this solution is not scalable, so you might need to use process substitution, like this:
Bash script:
#!/bin/bash
while read -r line
do
    if [[ $line = ++++* ]]
    then
        # Strip out the marker
        var=${line#++++}
    else
        echo "$line"
    fi
done < <(python myPythonScript.py)
echo "Value in bash: $var"
Python script:
def someFunc():
    return "Hollow World"
print("Running some code...")
VarFromFirstScript = someFunc()
# Prefix our data with a well-known marker
print("++++" + VarFromFirstScript)
print("Now I do other stuff")
Sample Run:
$ bash gash.sh
Running some code...
Now I do other stuff
Value in bash: Hollow World
I would source your script; this is a commonly used method. Sourcing executes the script in the current shell instead of loading another one, and because it runs in the same shell, the env variables you set will still be accessible when it exits. Either . /path/to/script.sh or source /path/to/script.sh will work; . sometimes works where source doesn't.
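For example, a minimal sketch (vars.sh is a hypothetical file containing plain VAR=value assignments):

. ./vars.sh            # or: source ./vars.sh
echo "$VarFromFirstScript"   # set inside vars.sh, visible in the current shell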
I am writing a bash script in which a small python script is embedded. I want to pass a variable from python to bash. After a few search I only found method based on os.environ.
I just cannot make it work. Here is my simple test.
#!/bin/bash
export myvar='first'
python - <<EOF
import os
os.environ["myvar"] = "second"
EOF
echo $myvar
I expected it to output second, however it still outputs first. What is wrong with my script? Also is there any way to pass variable without export?
Summary
Thanks for all the answers. Here is my summary.
A Python script embedded inside bash runs as a child process, which by definition cannot affect the parent bash environment.
The solution is to have Python print assignment strings and then eval them in bash.
An example is
#!/bin/bash
a=0
b=0
assignment_string=$(python -<<EOF
var1=1
var2=2
print('a={};b={}'.format(var1,var2))
EOF
)
eval $assignment_string
echo $a
echo $b
Unless Python is used to do some kind of operation on the original data, there's no need to import anything. The answer could be as lame as:
myvar=$(python - <<< "print('second')") ; echo "$myvar"
Suppose for some reason Python is needed to spit out a bunch of bash variables and assignments, or (cautiously) compose code on-the-fly. An eval method:
myvar=first
eval "$(python - <<< "print('myvar=second')" )"
echo "$myvar"
Complementing Cyrus's useful comment on the question: you just can't do it. Here is why.
Setting an environment variable sets it only for the current process and any child processes it launches. os.environ sets it only inside the Python process that runs your command. When that process finishes, it goes away, and so does the environment variable.
You can pretty much do the same with a shell script itself and just source it to reflect the variables in the current shell.
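A minimal sketch of that approach (env_out.sh is a hypothetical file name):

python - <<'EOF'
with open("env_out.sh", "w") as f:
    f.write("myvar=second\n")
EOF
. ./env_out.sh
echo "$myvar"   # prints: second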
There are a few "dirty" ways of getting something like this done. Here is an example:
#!/bin/bash
myvar=$(python - <<EOF
print "second"
EOF
)
echo "$myvar"
The output of the python process is stored in a bash variable. It gets a bit messy if you want to return more complex stuff, though.
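One sketch for slightly more complex returns: print one value per line and read them together (the values here are placeholders):

{ read -r first; read -r second; } < <(python - <<'EOF'
print("alpha")
print("beta")
EOF
)
echo "$first $second"   # prints: alpha beta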
You can make Python print a value and pass it to bash:
pfile.py
print(100)
bfile.sh
var=$(python pfile.py)
echo "$var"
output: 100
Well, this may not be what you want, but one option could be running the other batch commands in Python using subprocess:
import subprocess

x = 400
subprocess.call(["echo", str(x)])
But this is more of a temporary work around. The other solutions are more along what you are looking for.
Hope I was able to help!
I am using line_profiler, which allows you to drop @profile decorators anywhere in a python codebase and get per-line timing output.
However, if you try to execute python code that contains such an @profile decorator without loading the line_profiler module, the code will fail with a NameError, since the decorator is defined and injected by that external library.
I'd like a bash command that attempts to run my python script with vanilla python. Then, if and only if the error consists of NameError, I want to give it a second try. This is what I have got so far:
python -u $file || python -m kernprof -l -v --outfile=/dev/null $file
The problem is of course that if my python code has ANY errors at all, be it ValueError or IndentationError or anything, it tries the profiler. I want to run the profiler ONLY if the string NameError: name 'profile' is not defined is found within stderr.
Wouldn't it be better to monkey-patch profile when line_profiler is not present?
Something like
try:
    import line_profiler  # only checking that it is available
except ImportError:
    import warnings
    warnings.warn("Profiling disabled")

    def profile(fn):
        # no-op stand-in for the decorator kernprof normally injects
        def wrapper(*args, **kw):
            return fn(*args, **kw)
        return wrapper
This way your code runs in either case without complicating matters.
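Hypothetical usage: once profile is defined either way (injected by kernprof, or by the fallback above when the import fails), code like this runs in both cases:

@profile
def hot_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total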
Here's a usable Bash solution that preserves stdout and stderr as separate streams (with the caveat that stderr appears after stdout) and only checks stderr for the error message (which is probably overkill, though).
It goes the easy route and simply saves the stderr output to a file. It also handles script names that contain spaces (by properly quoting variable expansions where needed) and/or start with - (by passing -- before the filename to switch off flag processing), as that's an OCD pet peeve of mine.
On success or if there is an error that is not the expected error, the stderr of the first python command is shown. Otherwise (for the expected error), it is hidden.
Usage is $ ./check <script>.
#!/bin/bash

if [[ $# -ne 1 ]]; then
    echo "Expected one argument: the script" >&2
    exit 1
fi

script=$1

if [[ ! -f $script ]]; then
    echo "'$script' does not exist or is not a regular file" >&2
    exit 1
fi

if ! python -- "$script" 2>saved_stderr &&
        grep -q "NameError: name 'profile' is not defined" saved_stderr; then
    # Try again with the kernprof module.
    python -m kernprof -l -v --outfile=/dev/null -- "$script"
else
    # Either success or an unexpected error. Show stderr.
    cat saved_stderr >&2
fi

rm saved_stderr
To check if the return status of a command is zero (i.e., success), it suffices to do
if <cmd>; then <if successful>; fi
! negates the exit status, so if ! <cmd> ... can be used to check for failure. ! only applies to the python command above, not all of python ... && grep ....
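A quick illustration of that scoping (the negation binds only to the first command):

if ! false && true; then
    echo "negated the first command; the second ran normally"
fi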
>&2 redirects stdout to stderr. (It's the same as 1>&2 but saves a single character, which is a bit silly, but I included for illustrative purposes as it's a common idiom.)
Creating a simple Python wrapper would seem a lot more straightforward, because inside Python, you have access to the things which go wrong.
Assuming your $file uses the common __name__ == '__main__' idiom something like this:
if __name__ == '__main__':
    main()
you can create a wrapper something like
try:
    import yourfile  # @profile is evaluated at import time, so the NameError can surface here
    yourfile.main()
except NameError:
    import kernprof
    # hack hack, quickly constructed from looking at main() in kernprof.py
    prof = kernprof.ContextualProfile()
    execfile_ = execfile  # Python 2's execfile
    ns = locals()
    try:
        prof.runctx('execfile_(%r, globals())' % ('yourfile.py',), ns, ns)
    finally:
        prof.print_stats()
I have a batch file that runs a python script. I am running Python 3.2. I want to send a variable like an integer or string from the python script back to the batch file, is this possible?
I know I can accept command line arguments in the Python script with sys.argv. Was hoping there was some feature that allows me to do the reverse.
In your Python script, just write to standard out: sys.stdout.write(...)
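A minimal sketch (remember to import sys first; the value here is a placeholder):

import sys

value = 42
sys.stdout.write(str(value))   # the calling process captures this from stdout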
I'm not sure what scripting language you are using; maybe you could elaborate on that. For now I'll assume you are using bash (a unix shell).
So, in your batch script you can store the output of the python script into a variable like this:
# run the script and store the output into $val
val=`python your_python_script.py`
# print $val
echo $val
EDIT: it turns out it is Windows batch:
python your_python_script.py > tmpFile
set /p myvar= < tmpFile
del tmpFile
echo %myvar%
If an int is enough for you, then you can use
sys.exit(value)
in your python script. That exits the application with a status code of value.
In your batch file you can then read it as the %errorlevel% environment variable.
You can't "send" a string. You can print it out and have the calling process capture it, but you can only directly return numbers from 0 through 255.
Ignacio is dead on. The only thing you can return is your exit status. What I've done previously is have the python script (or EXE in my case) output the next batch file to be run, then you can put in whatever values you'd like and run it. The batch file that calls the python script then calls the batch file you create.
You can try this batch script for this issue, as an example:
@echo off
REM %1 - This is the parameter we pass with the desired return code for the Python script that will be captured by the ErrorLevel env. variable.
REM A value of 0 is the default exit code, meaning it has all gone well. A value greater than 0 implies an error
REM and this value can be captured and used for any error control logic and handling within the script
set ERRORLEVEL=
set RETURN_CODE=%1
echo (Before Python script run) ERRORLEVEL VALUE IS: [ %ERRORLEVEL% ]
echo.
call python -c "import sys; exit_code = %RETURN_CODE%; print('(Inside python script now) Setting up exit code to ' + str(exit_code)); sys.exit(exit_code)"
echo.
echo (After Python script run) ERRORLEVEL VALUE IS: [ %ERRORLEVEL% ]
echo.
And when you run it a couple of times with different return code values you can see the expected behaviour:
PS C:\Scripts\ScriptTests> & '.\TestPythonReturnCodes.cmd' 5
(Before Python script run) ERRORLEVEL VALUE IS: [ 0 ]
(Inside python script now) Setting up exit code to 5
(After Python script run) ERRORLEVEL VALUE IS: [ 5 ]
PS C:\Scripts\ScriptTests> & '.\TestPythonReturnCodes.cmd' 3
(Before Python script run) ERRORLEVEL VALUE IS: [ 0 ]
(Inside python script now) Setting up exit code to 3
(After Python script run) ERRORLEVEL VALUE IS: [ 3 ]
PS C:\Scripts\ScriptTests