I have a python script (not created by me), let's call it myscript, which I call with several parameters.
So I run the script like this in Windows cmd:
Code:
/wherever/myscript --username=whoever /some/other/path/parameter
And then a prompt appears where I can type commands into the Python script:
Process started successfully, blabla
Python 2.7.2 blabla
(LoggingConsole)
>>>
And I type my commands, then quit to get back to cmd:
>>> command1()
>>> command2()
>>> quit()
I suspect errors occur in this part, but only about once in a hundred trials, so I want to automate it with a script.
I want to pipe the internal commands command1 and command2 to this script, so that I can run the test a thousand times and see when it breaks. I have the following piece of code:
echo 'command1()' | py -i /wherever/myscript --username=whoever /some/other/path/parameter
Unfortunately this doesn't produce the same behaviour as entering the commands manually.
Can I simulate the manual behaviour with pipes/redirection? Why doesn't it work? I expected the text 'command1()' to be entered when the script waits for commands, but it seems I'm wrong.
Thanks!
EDIT 16/02/2021 3:33PM:
I was looking for the cmd shell way to solve this, no Python stuff.
The piece of script
echo 'command1()' | py -i /wherever/myscript --username=whoever /some/other/path/parameter
is almost correct; just remove the single quotes (cmd's echo passes them through literally, unlike a Unix shell):
echo command1() | py -i /wherever/myscript --username=whoever /some/other/path/parameter
My issues were coming from myscript itself. Once I fixed the weird things on that side, this part worked fine. You can even put all the commands together:
echo command1();command2();quit(); | py -i /wherever/myscript --username=whoever /some/other/path/parameter
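If you then want to run the test a thousand times as described above, one way is to drive it from a small Python script instead of typing the pipeline by hand. Here is a minimal sketch using subprocess; the loop count and the failure test are made up, the paths are the placeholders from the question, and subprocess.run with capture_output needs Python 3.7+:
#!/usr/bin/env python3
import subprocess

# the command sequence that would be typed at the (LoggingConsole) prompt
CMDS = 'command1();command2();quit();\n'

for i in range(1000):
    # feed the commands on stdin, exactly like the echo | py -i pipeline
    proc = subprocess.run(
        ['py', '-i', '/wherever/myscript',
         '--username=whoever', '/some/other/path/parameter'],
        input=CMDS, capture_output=True, text=True)
    if proc.returncode != 0 or 'Traceback' in proc.stderr:
        print(f'run {i} failed (exit {proc.returncode}):')
        print(proc.stderr)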
This question is adapted from a question asked by gplayersv on 23/08/2012 on unix.com, but the original purpose left it unanswered.
Pipes are easy to get working.
If you want to read the standard input:
import sys

data = sys.stdin.read()
print(f'the standard input was\n{data}')
sys.stderr.write('This is an error message that will not be captured by the pipe\n')
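For example, assuming the snippet above is saved as stdin_reader.py (a hypothetical name), piping text into it gives:
$ echo hello | python3 stdin_reader.py 2>/dev/null
the standard input was
hello
The 2>/dev/null is only there to hide the stderr line; without it the message still appears on the terminal, because stderr is not part of the stdout pipe.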
If you want to use the standard input as an argument instead:
echo param | xargs myprogram.py
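For instance, here is a minimal sketch of a hypothetical myprogram.py showing where the piped word ends up: xargs turns each whitespace-separated token from the pipe into a command-line argument, so it arrives in sys.argv rather than on stdin.
#!/usr/bin/env python3
import sys

# with `echo param | xargs ./myprogram.py`, sys.argv[1:] is ['param']
print(f'arguments were: {sys.argv[1:]}')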
Python's built-in fileinput module makes this simple and concise:
#!/usr/bin/env python3
import fileinput
with fileinput.input() as f:
    for line in f:
        print(line, end='')
Then you can feed it input by whichever mechanism is easiest for you (fileinput reads the files named on the command line, falling back to standard input when none are given):
$ ls | ./filein.py
$ ./filein.py /etc/passwd
$ ./filein.py < <(uname -r)
Related
I've come across a situation where it would be convenient to use python within a bash script I'm writing. I call some executables within my script, then want to do a bit of light data processing with python, then carry on. It doesn't seem worth it to me to write a dedicated script for the processing.
So what I want to do is something like the following:
# do some stuff in bash script
# write some data into datafile.d
python_fragment= << EOF
f = open("datafile.d")
# do some stuff with the opened file
print(result)
EOF
result=$(execute_python_fragment $python_fragment) # <- what I want to do
# do some stuff with result
Basically all I want to do is execute a string containing python code. I could of course just make another file containing the python code and execute that, but I'd prefer not to do so. I could do something like echo $python_fragment > temp_code_file, then execute temp_code_file, but that seems inelegant. I just want to execute the string directly, if that's possible.
What I want to do seems simple enough, but haven't figured it out or found the solution online.
Thanks!
You can run a Python command directly from the command line with the -c option:
python -c 'from foo import hello; print (hello())'
Then with bash you could do something like this (note the double quotes, so that the variable is actually expanded):
result=$(python -c "$python_fragment")
You only have to redirect that here-string/document to python
python <<< "print('Hello')"
or
python <<EOF
print('Hello')
EOF
and encapsulate that in a function
execute_python_fragment() {
    python <<< "$1"
}
and now you can do your
result=$(execute_python_fragment "${python_fragment}")
You should also add some kind of error control and input sanitizing; it's up to you what level of security you need in this function.
If the string contains exact Python code, then the built-in eval() function works (note that eval() only handles expressions; use exec() for statements).
Here's a really basic example:
>>> eval("print(2)")
2
Hope that helps.
Maybe something like:
result=$(echo "$python_fragment" | python3)
(the double quotes around the variable matter; without them the newlines in the fragment are collapsed into spaces)
The only problem is that the heredoc assignment in the question doesn't work either. But https://stackoverflow.com/a/1167849 suggests a way to do it, if that is what you want:
python_fragment=$(cat <<EOF
print('test message')
EOF
)
result=$(echo "$python_fragment" | python3)
echo "result was $result"
I've been struggling for the past two hours with this simple example:
I have this line:
python -c 'import yum, pprint; yb = yum.YumBase(); pprint.pprint(yb.conf.yumvar, width=1)'
Which gives:
Loaded plugins: product-id
{'arch': 'ia32e',
'basearch': 'x86_64',
'releasever': '7Server',
'uuid': 'd68993fd-059a-4753-a7ab-1c4a601d206f',
'yum8': 'rhel',
'yum9': '7.1'}
And now, I would just like to get the line with the 'releasever'.
Directly on my Linux, there is no problem:
$ python -c 'import yum, pprint; yb = yum.YumBase(); pprint.pprint(yb.conf.yumvar, width=1)' | grep releasever
'releasever': '7Server',
I have the answer I am looking for.
But when it comes to putting it in a script, I am helpless.
Currently, I have:
#!/bin/ksh
check="$(python -c 'import yum, pprint; yb = yum.YumBase(); pprint.pprint(yb.conf.yumvar, width=1)')"
echo "${check}"
# The echo works as expected. But now, I would like to do a grep on that variable:
check2=${check}|grep releasever
echo "${check2}"
And the result is empty.
I've tried a lot of different things: brackets, parentheses, single quotes, double quotes, an all-in-one command, but I can't get what I want.
I don't know what's happening behind that code, which is very simple. But still…
Can someone help me?
There seem to be three options.
First, yb.conf.yumvar looks like a dictionary, so you could print yb.conf.yumvar['releasever'] directly in your Python script, in which case the grep would be unnecessary.
Second, don't use grep. Instead, have your Python script serialize yb.conf.yumvar to JSON and print it, then use jq on the outside to extract the value of releasever.
Third, and this is the sanest way, do what you want in Python directly. ksh scripts will be harder to maintain over a longer period of time, so just do what you want to do in Python and use os.system to execute external programs.
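For instance, here are minimal sketches of the first two options (yum is Python 2, hence the print statements; the jq route assumes jq is installed and that nothing else, such as the 'Loaded plugins' line, ends up on stdout):
# option 1: print the value directly, no grep needed
check="$(python -c 'import yum; yb = yum.YumBase(); print yb.conf.yumvar["releasever"]')"
# option 2: emit JSON and let jq extract the field
python -c 'import json, yum; yb = yum.YumBase(); print json.dumps(yb.conf.yumvar)' | jq -r '.releasever'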
Your problem is that you are not feeding check to grep's standard input. This would work fine:
check2=$(echo "$check" | grep releasever)
or you could write the content of check to a file and do something like grep <string> < <file>.
I need to extend a shell script (bash). As I am much more familiar with python I want to do this by writing some lines of python code which depends on variables from the shell script. Adding an extra python file is not an option.
result=`python -c "import stuff; print('all $code in one very long line')"`
is not very readable.
I would prefer to specify my python code as a multiline string and then execute it.
Use a here-doc (with an unquoted EOF delimiter the shell expands variables such as $code inside it; quote the delimiter as 'EOF' if you want the text passed verbatim):
result=$(python <<EOF
import stuff
print('all $code in one very long line')
EOF
)
Thanks to this SO answer I found the answer myself:
#!/bin/bash
# some bash code
END_VALUE=10
PYTHON_CODE=$(cat <<END
# python code starts here
import math
for i in range($END_VALUE):
    print(i, math.sqrt(i))
# python code ends here
END
)
# run the generated Python code
res="$(python3 -c "$PYTHON_CODE")"
# continue with bash code
echo "$res"
This question already has answers here: 'yes' reporting error with subprocess communicate()
I'm trying to launch several bash routines from GUI-based software. The problem I'm facing is a piping issue.
Here the test bash-script (bashScriptTest.sh):
#!/bin/bash
#---------- Working
ls | sort | grep d > testFile.txt
cat testFile.txt
#---------- NOT working
echo $RANDOM > testFile2.txt
for i in `seq 1 15000`; do
echo $RANDOM >> testFile2.txt
done
awk '{print $1}' testFile2.txt | sort -g | head -1
And here is the Python script that triggers the error:
import subprocess
#
with open('log.txt','w') as outfile:
CLEAN=subprocess.Popen("./bashScriptTest.sh", stdout=outfile, stderr=outfile)
print CLEAN.pid
OUTSEE=subprocess.Popen(['x-terminal-emulator', '-e','tail -f '+outfile.name])
As you can see from running the Python script, the broken-pipe error is encountered not in the first pipeline (first line) but after the huge amount of work done by awk.
I need to manage huge quantities of routines and subroutines in bash, and using the shell=True flag doesn't change a thing.
I tried to write everything in the most Pythonic way, but unfortunately there is no chance I can rewrite all the piping steps inside Python.
Another thing to mention: if you run the bash script inside a terminal, everything works fine.
Any help would be really appreciated. Thanks in advance!
EDIT 1:
The log file containing the error says:
bashScriptTest.sh
log.txt
stack.txt
testFile2.txt
test.py
3
sort: write failed: standard output: Broken pipe
sort: write error
Okay so this is a little bit obscure, but it just so happens that I ran across a similar issue while researching a question on the python-tutor mailing list some time ago.
The reason you're seeing different behavior when running your script via the subprocess module (in python) vs. bash directly, is that python overrides the disposition of SIGPIPEs to SIG_IGN (ignore) for all child processes (globally).
When the following pipeline is executed ...
awk '{print $1}' testFile2.txt | sort -g | head -1
... head will exit after it prints the first line of stdout from the sort command, due to the -1 flag. When the sort command attempts to write more lines to its stdout, a SIGPIPE is raised.
The default action of a SIGPIPE, when the pipeline is executed in a shell like bash, for example, is to terminate the sort command.
As stated earlier, python overrides the default action with SIG_IGN (ignore), so we end up with this bizarre, and somewhat inexplicable, behavior.
That's all well and good, but you might be wondering what to do now? It depends on the version of Python you're using ...
For Python 3.2 and greater, you're already set. subprocess.Popen in 3.2 added the restore_signals parameter, which defaults to True, and effectively solves the issue without further action.
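In other words, on 3.2+ the Popen call from the question can stay exactly as it is, because restore_signals=True makes the child start with SIGPIPE back at its default disposition:
# Python >= 3.2: nothing extra needed, restore_signals defaults to True
CLEAN = subprocess.Popen("./bashScriptTest.sh", stdout=outfile, stderr=outfile)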
For previous versions, you can supply a callable to the preexec_fn argument of subprocess.Popen, as in ...
import signal
import subprocess

def default_sigpipe():
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

# ...

with open('log.txt', 'w') as outfile:
    CLEAN = subprocess.Popen("./bashScriptTest.sh",
                             stdout=outfile, stderr=outfile,
                             preexec_fn=default_sigpipe)
I hope that helps!
EDIT: It should probably be noted that your program is actually functioning properly, AFAICT, as is. You're just seeing additional error messages that you wouldn't normally see when executing the script in a shell directly (for the reasons stated above).
See Also:
https://mail.python.org/pipermail/python-dev/2007-July/073831.html
https://bugs.python.org/issue1652
I know that I can run a python script from my bash script using the following:
python python_script.py
But what if I want to pass a variable/argument to my Python script from my bash script? How can I do that?
Basically bash will work out a filename and then python will upload it, but I need to send the filename from bash to python when I call it.
To execute a Python script from a bash script you call the same command that you would type in a terminal. For instance:
> python python_script.py var1 var2
To access these variables within python you will need
import sys
print(sys.argv[0]) # prints python_script.py
print(sys.argv[1]) # prints var1
print(sys.argv[2]) # prints var2
Besides sys.argv, also take a look at the argparse module, which helps define options and arguments for scripts.
The argparse module makes it easy to write user-friendly command-line interfaces.
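For example, here is a minimal argparse sketch for the upload use case described above (the script and option names are made up):
#!/usr/bin/env python3
import argparse

parser = argparse.ArgumentParser(description='Upload the file worked out by the bash script.')
parser.add_argument('filename', help='path of the file to upload')
parser.add_argument('--verbose', action='store_true', help='print progress messages')
args = parser.parse_args()

print(args.filename)  # e.g. run as: python python_script.py myfile.txt --verbose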
Use
python python_script.py filename
and in your Python script
import sys
print sys.argv[1]
Embedded option:
Wrap python code in a bash function.
#!/bin/bash
function current_datetime {
python - <<END
import datetime
print datetime.datetime.now()
END
}
# Call it
current_datetime
# Call it and capture the output
DT=$(current_datetime)
echo Current date and time: $DT
Use environment variables to pass data into your embedded Python script.
#!/bin/bash
function line {
PYTHON_ARG="$1" python - <<END
import os
line_len = int(os.environ['PYTHON_ARG'])
print '-' * line_len
END
}
# Do it one way
line 80
# Do it another way
echo $(line 80)
http://bhfsteve.blogspot.se/2014/07/embedding-python-in-bash-scripts.html
Use in the script:
echo $(python python_script.py arg1 arg2) > /dev/null
or
python python_script.py "string arg" > /dev/null
The script will be executed without output.
I have a bash script that calls a small Python routine to display a message window. As I need to use killall to stop the Python script, I can't use the above method: that would mean running killall python, which could take out other Python programs. So I use:
pythonprog.py "$argument" &  # the & returns control straight to the bash script
As long as the Python script will run from the CLI by name rather than as python pythonprog.py, this works within the script. If you need more than one argument, just use a space between each one within the quotes.
Also take a look at the getopt module. It works quite well for me!
Print all args without the filename:
import sys

for i in range(1, len(sys.argv)):
    print(sys.argv[i])
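Equivalently, and a little more idiomatically, you can iterate over the slice directly:
import sys

# sys.argv[0] is the script name, so skip it
for arg in sys.argv[1:]:
    print(arg)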