How to pass variables from a Python script to a bash script - python

I have a bash script, a.sh, and in it I call a Python script, b.py.
The Python script calculates something, and I want it to return a value to be used later in a.sh.
I know I can do
In a.sh:
var=`python b.py`
In b.py:
print(x)  # where x is the value I want to pass
But this is not so convenient, because I also print other messages in b.py.
Is there a better way to do it?
Edit:
What I'm doing now is just
var=`python b.py | tail -n 1`
This means I can print many things inside b.py, but only the last line (the output of the last print call, assuming it doesn't contain "\n") will be stored in var.
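For example, with a b.py like this minimal sketch (contents assumed, not from the original question), only the 42 ends up in var:
# b.py - sketch: progress messages first, the wanted value printed last
print("doing step 1")
print("doing step 2")
print(42)  # only this last line survives the tail -n 1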
Thanks for all the answers!

I would print it to a file chosen on the command line, then read that value back in bash with something like cat.
So you'd go:
python b.py tempfile.txt
var=`cat tempfile.txt`
rm tempfile.txt
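The Python side would then write its result to the file named on the command line. A minimal sketch of what b.py might look like under this scheme (the argv handling is an assumption, not shown in the answer):
# b.py - sketch: write the result to the output file given as the first argument
import sys

result = 42  # stand-in for whatever the script actually computes
print("some progress message")  # ordinary messages still go to stdout

with open(sys.argv[1], "w") as f:
    f.write(str(result))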
[EDIT, another idea based on other answers]
Your other option is to format your output carefully so you can use utilities like head/tail to pipe only the first/last lines into your next program.

I believe the answer is
numtotext.py
import sys

a = ['zero', 'one', 'two', 'three']
b = int(sys.argv[1])
# your Python script can still print to stderr if it needs to
print("I am now converting", file=sys.stderr)
result = a[b]
print(result)
.sh
#!/bin/sh
num=2
text=`python numtotext.py $num`
echo "$num as text is $text"

In your Python script, redirect the other messages to stderr, and print x to stdout:
import sys
...
print("another message", file=sys.stderr)
print(x)
in the bash script:
...
var=`python b.py 2>/dev/null`
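Note that if you want the user to keep seeing those messages, you can simply omit the redirection; command substitution captures only stdout, so stderr still reaches the terminal:
var=$(python b.py)  # stderr messages are displayed; only stdout lands in var
echo "captured: $var"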
Also, if x is an integer between 0 and 255, you can use the exit code to pass it to bash:
import sys
...
sys.exit(x)
in bash:
python b.py
var=$?
Please note that the exit code is conventionally used to indicate errors (0 means no error), so repurposing it this way breaks that convention.
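As a quick illustration of why the 0-255 limit matters: on POSIX systems the exit status is truncated to one byte, so values outside that range wrap around (the value here is hypothetical):
# sketch: the exit status is taken modulo 256
import sys
sys.exit(300)  # the shell sees $? == 44, not 300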

I'm not sure about "better", but you could write the result to a file, read it back in Bash, and delete the file afterwards.
This is definitely ugly, but it's something to keep in mind in case nothing else does the trick.

In bash, backticks work.
I usually do something like
PIP_PATH=`python -c "from distutils.sysconfig \
import get_python_lib; print(get_python_lib())"`
POWERLINE_PATH=$PIP_PATH"/powerline"
echo $POWERLINE_PATH
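On recent Pythons (3.12+), where distutils has been removed, an equivalent under the assumption that the site-packages ('purelib') path is what you want:
PIP_PATH=$(python -c "import sysconfig; print(sysconfig.get_paths()['purelib'])")
POWERLINE_PATH="$PIP_PATH/powerline"
echo "$POWERLINE_PATH"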

You can write the output to a temporary file, and have the shell read and delete that file. This is even less convenient, but reserves stdout for communication with the user.
Alternatively, you can use some kind of format for stdout: the first n lines are certain variables, the rest will be echoed by the parent shell to the user. Also not convenient, but avoids using tempfiles.

In a shell script you can capture the output like this: python_ret=$(python b.py).
It contains all the print output from b.py. You can then search it for a string you are looking for. For example, if you are looking for 'Exception', you can do it like this:
if [[ $python_ret == *"Exception:"* ]]; then
    echo "Got some exception."
    exit 1
fi

It's better to redirect the printed value from the Python script to a temp file before assigning it to a bash variable. I believe there's no need to remove the file in this case.
#!/bin/bash
python b.py > tempfile.txt
var=`cat tempfile.txt`
Then, get the value:
echo $var

Related

Escape ampersand & in string when sent as argument - python

I have written two Python scripts, A.py and B.py. B.py gets called in A.py like this:
import json
import os
from collections import OrderedDict

config_object = {}
with open(config_filename) as data:
    config_object = json.load(data, object_pairs_hook=OrderedDict)
command = './scripts/B.py --config-file={} --token-a={} --token-b={}'.format(promote_config_filename, config_object['username'], config_object['password'])
os.system(command)
Here config_object['password'] contains an & in it. Say it is something like S01S0lQb1T3&BRn2^Qt3.
Now when this value gets passed to B.py, it receives the password as S01S0lQb1T3, so everything after the & is ignored.
How do I solve this?
os.system runs a shell. You can escape arbitrary strings for the shell with shlex.quote() ... but a much superior solution is to use subprocess instead, as the os.system documentation also recommends.
subprocess.run(
    ['./scripts/B.py',
     '--config-file={}'.format(promote_config_filename),
     '--token-a={}'.format(config_object['username']),
     '--token-b={}'.format(config_object['password'])])
Because there is no shell=True, the strings are now passed to the subprocess verbatim.
Perhaps see also Actual meaning of shell=True in subprocess
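If you do need to stay with os.system, a sketch of the shlex.quote() route mentioned above (variable names taken from the question's code):
import os
import shlex

# quoting the values makes the & (and the ^) safe for the shell
command = './scripts/B.py --config-file={} --token-a={} --token-b={}'.format(
    shlex.quote(promote_config_filename),
    shlex.quote(config_object['username']),
    shlex.quote(config_object['password']))
os.system(command)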
@tripleee has good suggestions. In terms of why this is happening: if you are running Linux/Unix, the & starts a background process. You can search for "linux job control" for more on that. The shortest (but not best) solution is to wrap the special characters in single or double quotes in the final command.
See this bash session for a simple example:
$ echo foo&bar
[1] 20054
foo
Command 'bar' not found, but can be installed with:
sudo apt install bar
[1]+ Done echo foo
$ echo "foo&bar"
foo&bar

Executing string of python code within bash script

I've come across a situation where it would be convenient to use python within a bash script I'm writing. I call some executables within my script, then want to do a bit of light data processing with python, then carry on. It doesn't seem worth it to me to write a dedicated script for the processing.
So what I want to do is something like the following:
# do some stuff in bash script
# write some data into datafile.d
python_fragment= << EOF
f = open("datafile.d")
# do some stuff with the opened file
print(result)
EOF
result=$(execute_python_fragment $python_fragment) # <- what I want to do
# do some stuff with result
Basically all I want to do is execute a string containing python code. I could of course just make another file containing the python code and execute that, but I'd prefer not to do so. I could do something like echo $python_fragment > temp_code_file, then execute temp_code_file, but that seems inelegant. I just want to execute the string directly, if that's possible.
What I want to do seems simple enough, but I haven't figured it out or found a solution online.
Thanks!
You can run a Python command directly from the command line with the -c option:
python -c 'from foo import hello; print (hello())'
Then with bash you could do something like
result=$(python -c "$python_fragment")
You only have to redirect that here-string/here-document to python:
python <<< "print('Hello')"
or
python <<EOF
print('Hello')
EOF
and encapsulate that in a function
execute_python_fragment() {
    python <<< "$1"
}
and now you can do your
result=$(execute_python_fragment "${python_fragment}")
You should also add some kind of error handling and input sanitizing... how much security this function needs is up to you.
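If the fragment itself needs data from the shell, one reasonably safe option (a sketch; the DATAFILE name is invented for illustration) is to pass it through the environment instead of splicing it into the code:
export DATAFILE="datafile.d"  # hypothetical input consumed by the fragment
result=$(execute_python_fragment 'import os; print(os.environ["DATAFILE"])')
echo "$result"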
If the string contains exact Python code, then Python's built-in eval() works.
Here's a really basic example:
>>> eval("print(2)")
2
Hope that helps.
Maybe something like
result=$(echo "$python_fragment" | python3)
The only problem is that the heredoc assignment in the question doesn't work either. But https://stackoverflow.com/a/1167849 suggests a way to do it, if that is what you want:
python_fragment=$(cat <<EOF
print('test message')
EOF
)
result=$(echo "$python_fragment" | python3)
echo "result was $result"

Pass variable from Python to Bash

I am writing a bash script with a small Python script embedded in it. I want to pass a variable from Python to bash. After some searching, I only found methods based on os.environ.
I just cannot make it work. Here is my simple test:
#!/bin/bash
export myvar='first'
python - <<EOF
import os
os.environ["myvar"] = "second"
EOF
echo $myvar
I expected it to output second; however, it still outputs first. What is wrong with my script? Also, is there any way to pass a variable without export?
summary
Thanks for all answers. Here is my summary.
A Python script embedded inside bash runs as a child process, which by definition cannot affect the parent shell's environment.
The solution is to have Python print assignment strings and then eval them in bash.
An example is
#!/bin/bash
a=0
b=0
assignment_string=$(python -<<EOF
var1=1
var2=2
print('a={};b={}'.format(var1,var2))
EOF
)
eval "$assignment_string"
echo $a
echo $b
Unless Python is used to do some kind of operation on the original data, there's no need to import anything. The answer could be as lame as:
myvar=$(python - <<< "print('second')"); echo "$myvar"
Suppose for some reason Python is needed to spit out a bunch of bash variables and assignments, or (cautiously) to compose code on the fly. An eval method:
myvar=first
eval "$(python - <<< "print('myvar=second')" )"
echo "$myvar"
Complementing Cyrus's useful comment on the question: you just can't do it. Here is why.
Setting an environment variable sets it only for the current process and any child processes it launches. os.environ sets it only inside the Python process that the shell spawned to run your code. When that process finishes, its environment goes away, and so does the variable.
What you can do instead is write the assignments to a shell script and source it, so they take effect in the current shell.
There are a few "dirty" ways of getting something like this done. Here is an example:
#!/bin/bash
myvar=$(python - <<EOF
print "second"
EOF
)
echo "$myvar"
The output of the python process is stored in a bash variable. It gets a bit messy if you want to return more complex stuff, though.
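If you do need several values back, one hedged approach is to print them on one line and split them with read (the variable names here are invented for illustration):
#!/bin/bash
# sketch: read two whitespace-separated values printed by Python
read -r first second < <(python3 - <<EOF
print("one two")
EOF
)
echo "$first / $second"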
You can make Python print a value and capture it in bash:
pfile.py
print(100)
bfile.sh
var=$(python pfile.py)
echo "$var"
output: 100
Well, this may not be what you want, but one option could be to run the other shell commands from Python using subprocess:
import subprocess
x = 400
subprocess.call(["echo", str(x)])
But this is more of a temporary workaround. The other solutions are closer to what you are looking for.
Hope I was able to help!

Handling specific Python error within Bash call?

I am using line_profiler, which allows you to drop @profile decorators anywhere in a Python codebase and returns per-line output.
However, if you try to execute Python code containing such a @profile decorator without loading the line_profiler machinery, the code fails with a NameError, since the decorator is defined and injected by that external library.
I'd like a bash command that attempts to run my python script with vanilla python. Then, if and only if the error consists of NameError, I want to give it a second try. This is what I have got so far:
python -u "$file" || python -m kernprof -l -v --outfile=/dev/null "$file"
The problem is of course that if my Python code has ANY error at all, be it a ValueError or an IndentationError or anything else, it tries the profiler. I want to run the profiler ONLY if the string NameError: name 'profile' is not defined is found within stderr.
Wouldn't it be better to monkey-patch profile when line_profiler is not present?
Something like
try:
    import line_profiler
except ImportError:
    import warnings
    warnings.warn("Profile disabled")
    def profile(fn):
        def wrapper(*args, **kw):
            return fn(*args, **kw)
        return wrapper
This way your code runs in either case without complicating matters.
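With that fallback in place, a decorated function in the same file runs either way (a sketch; the function is invented for illustration):
@profile  # resolved by kernprof when profiling, by the no-op wrapper otherwise
def compute():
    return sum(range(1000))

print(compute())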
Here's a usable Bash solution that preserves stdout and stderr as separate streams (with the caveat that stderr appears after stdout) and only checks stderr for the error message (which probably is overkill though).
It goes the easy route and simply saves the stderr output to a file. It also handles script names that contain spaces (by properly quoting variable expansions where needed) and/or start with - (by passing -- before the filename to switch off flag processing) as it's an OCD pet peeve of mine.
On success or if there is an error that is not the expected error, the stderr of the first python command is shown. Otherwise (for the expected error), it is hidden.
Usage is $ ./check <script>.
#!/bin/bash
if [[ $# -ne 1 ]]; then
    echo "Expected one argument: the script" >&2
    exit 1
fi
script=$1
if [[ ! -e $script ]]; then
    echo "'$script' does not exist or is not a regular file" >&2
    exit 1
fi
if ! python -- "$script" 2>saved_stderr &&
        grep -q "NameError: name 'profile' is not defined" saved_stderr; then
    # Try again with the kernprof module.
    python -m kernprof -l -v --outfile=/dev/null -- "$script"
else
    # Either success or an unexpected error. Show stderr.
    cat saved_stderr >&2
fi
rm saved_stderr
To check if the return status of a command is zero (i.e., success), it suffices to do
if <cmd>; then <if successful>; fi
! negates the exit status, so if ! <cmd> ... can be used to check for failure. ! only applies to the python command above, not all of python ... && grep ....
>&2 redirects stdout to stderr. (It's the same as 1>&2 but saves a single character, which is a bit silly, but I included it for illustrative purposes as it's a common idiom.)
Creating a simple Python wrapper would seem a lot more straightforward, because inside Python, you have access to the things which go wrong.
Assuming your $file uses the common __name__ == '__main__' idiom, something like this:
if __name__ == '__main__':
    main()
you can create a wrapper something like
import yourfile

try:
    yourfile.main()
except NameError:
    import kernprof
    # hack hack, quickly constructed from looking at main() in kernprof.py
    prof = kernprof.ContextualProfile()
    execfile_ = execfile  # Python 2; on Python 3 you'd use exec(open(f).read()) instead
    ns = locals()
    try:
        prof.runctx('execfile_(%r, globals())' % (yourfile.__file__,), ns, ns)
    finally:
        prof.print_stats()

Run function from the command line and pass arguments to function

I'm using a similar approach to call a Python function from my shell script:
python -c 'import foo; print(foo.hello())'
But I don't know how I can pass arguments to the Python script in this case, and also, is it possible to call a function with parameters from the command line?
python -c 'import foo, sys; print(foo.hello()); print(sys.argv[1])' "This is a test"
or
echo "Wham" | python -c 'print(raw_input(""));'
There's also argparse (py3 link), which can be used to capture arguments; note that when running with -c, sys.argv[0] is set to '-c'.
A second library exists but is discouraged: getopt.getopt.
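A minimal argparse sketch for a standalone script (the script name and the --name option are invented for illustration):
# greet.py - sketch: parse a --name option and print a greeting
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--name", default="world")
args = parser.parse_args()
print("Hello, {}".format(args.name))
Called as python greet.py --name Alice, it prints Hello, Alice.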
You don't want to do that in a shell script.
Try this. Create a file named "hello.py" and put the following code in it (assuming you are on a unix system):
#!/usr/bin/env python
print("Hello World")
and in your shell script, write something like this
#!/bin/sh
python hello.py
and you should see Hello World in the terminal.
That's how you should invoke a script in shell/bash.
To the main question: how do you pass arguments?
Take this simple example:
#!/usr/bin/env python
import sys

def hello(name):
    print("Hello, " + name)

if __name__ == "__main__":
    if len(sys.argv) > 1:
        hello(sys.argv[1])
    else:
        raise SystemExit("usage: python hello.py <name>")
We expect the length of sys.argv to be at least two. As in shell programming, the first element (index 0) is always the file name.
Now modify the shell script to include the second argument (the name) and see what happens.
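That modified shell script might look like this (the name argument is just an example):
#!/bin/sh
python hello.py "World"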
I haven't tested my code yet, but conceptually that's how you should go about it.
Edit:
If you just have a line or two of simple Python code, sure, -c works fine and is neat. But if you need more complex logic, please put the code into a module (.py file).
You need to create a .py file.
Then you call it this way:
python file.py argv1 argv2
Inside your file, sys.argv gives you the list of arguments.
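A minimal sketch of that (file name as above):
# file.py - print each command-line argument with its index
import sys

for i, arg in enumerate(sys.argv):
    print(i, arg)
Running python file.py argv1 argv2 prints the script name at index 0 and the two arguments after it.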
