I want to convert the following shell evaluation to Python 2.6 (I can't upgrade). I can't figure out how to evaluate the output of the command.
Here's the shell version:
status=`$hastatus -sum |grep $hostname |grep Grp| awk '{print $6}'`
if [ $status != "ONLINE" ]; then
exit 1
fi
I tried os.popen and it returns ['ONLINE\n'].
value = os.popen("hastatus -sum | grep `hostname` | grep Grp | awk '{print $6}'").readlines()
print value
Try the subprocess module. Note that subprocess.call only returns the exit status, so to capture the output you need a pipe, e.g. via Popen:
import subprocess
p = subprocess.Popen("hastatus -sum | grep `hostname` | grep Grp | awk '{print $6}'",
                     shell=True, stdout=subprocess.PIPE)
value = p.communicate()[0]
print(value)
Documentation is found here:
https://docs.python.org/2.6/library/subprocess.html?highlight=subprocess#module-subprocess
The recommended way is to use the subprocess module.
The following section of the documentation is instructive:
Replacing shell pipeline
I reproduce it here for reference:
output=dmesg | grep hda
becomes:
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
The p1.stdout.close() call after starting p2 is important in order for p1 to receive a SIGPIPE if p2 exits before p1.
Alternatively, for trusted input, the shell’s own pipeline support may still be used directly:
output=dmesg | grep hda
becomes:
output=check_output("dmesg | grep hda", shell=True)
And here is the recipe for translating os.popen to the subprocess module:
Replacing os.popen()
So in your case you could do something like
from subprocess import check_output
output = check_output("hastatus -sum | grep `hostname` | grep Grp | awk '{print $6}'", shell=True)
(Note that check_output was only added in Python 2.7; on 2.6 you can get the same result with Popen(..., shell=True, stdout=PIPE).communicate()[0].) Or you can chain the Popen objects as shown in the documentation above (probably what I would do); see the sketch below.
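A minimal sketch of that chained version (it also works on 2.6, since it avoids check_output); using socket.gethostname() in place of the hostname command is my assumption:
import socket
from subprocess import Popen, PIPE
hostname = socket.gethostname()  # assumes this matches what `hostname` prints
p1 = Popen(["hastatus", "-sum"], stdout=PIPE)
p2 = Popen(["grep", hostname], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # allow p1 to receive SIGPIPE if p2 exits first
p3 = Popen(["grep", "Grp"], stdin=p2.stdout, stdout=PIPE)
p2.stdout.close()
p4 = Popen(["awk", "{print $6}"], stdin=p3.stdout, stdout=PIPE)
p3.stdout.close()
output = p4.communicate()[0]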
Then to test the output you could just use (with either approach):
import sys
import subprocess
....
if 'ONLINE' not in output:
    sys.exit(1)
Hi, I am trying to run this command with Python's subprocess and shlex.split, but I haven't found anything helpful for this particular case:
ifconfig | grep "inet " | grep -v 127.0.0.1 | grep -v 192.* | awk '{print $2}'
I get an error from ifconfig because the split does not handle the single and double quotes, or even the whitespace before the $ sign, correctly.
Please help.
You can use shell=True (the shell will then interpret the |) and a triple-quoted string literal (otherwise you would need to escape the " and ' inside the string literal):
import subprocess
cmd = r"""ifconfig | grep "inet " | grep -v 127\.0\.0\.1 | grep -v 192\. | awk '{print $2}'"""
subprocess.call(cmd, shell=True)
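subprocess.call only prints the result and returns an exit code; if you want the addresses in a Python list instead, check_output accepts the same string (a small sketch reusing cmd from above):
from subprocess import check_output
ips = check_output(cmd, shell=True).splitlines()
print(ips)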
or you can do it the harder way (see Replacing shell pipeline in the subprocess module documentation):
from subprocess import Popen, PIPE, call
p1 = Popen(['ifconfig'], stdout=PIPE)
p2 = Popen(['grep', 'inet '], stdin=p1.stdout, stdout=PIPE)
p3 = Popen(['grep', '-v', r'127\.0\.0\.1'], stdin=p2.stdout, stdout=PIPE)
p4 = Popen(['grep', '-v', r'192\.'], stdin=p3.stdout, stdout=PIPE)
call(['awk', '{print $2}'], stdin=p4.stdout)
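Similarly, if you want the addresses in a Python variable rather than printed, the final call above can be swapped for one more Popen, for instance:
p5 = Popen(['awk', '{print $2}'], stdin=p4.stdout, stdout=PIPE)
p4.stdout.close()
ips = p5.communicate()[0].splitlines()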
I know there are already posts on how to use subprocess in Python to run Linux commands, but I just can't get the syntax correct for this one. Please help. This is the command I need to run...
/sbin/ifconfig eth1 | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}'
OK, this is what I have at the moment, which gives a syntax error...
import subprocess
self.ip = subprocess.Popen([/sbin/ifconfig eth1 | grep "inet addr" | awk -F: '{print $2}' | awk '{print $1}'])
Any help greatly appreciated.
This has been gone over many, many times before; but here is a simple pure Python replacement for the inefficient postprocessing.
from subprocess import Popen, PIPE
eth1 = Popen(['/sbin/ifconfig', 'eth1'], stdout=PIPE)
out, err = eth1.communicate()
for line in out.split('\n'):
    line = line.lstrip()
    if line.startswith('inet addr:'):
        # the field looks like "addr:10.0.0.1"; drop the "addr:" prefix
        ip = line.split()[1][5:]
Here's how to construct the pipe in Python (rather than resorting to shell=True, which is more difficult to secure).
from subprocess import PIPE, Popen
# Do `which` to get correct paths
GREP_PATH = '/usr/bin/grep'
IFCONFIG_PATH = '/usr/bin/ifconfig'
AWK_PATH = '/usr/bin/awk'
awk2 = Popen([AWK_PATH, '{print $1}'], stdin=PIPE)
awk1 = Popen([AWK_PATH, '-F:', '{print $2}'], stdin=PIPE, stdout=awk2.stdin)
grep = Popen([GREP_PATH, 'inet addr'], stdin=PIPE, stdout=awk1.stdin)
ifconfig = Popen([IFCONFIG_PATH, 'eth1'], stdout=grep.stdin)
# Close the parent's copies of the write ends so each reader sees EOF
# once the process feeding it exits (otherwise the waits below can hang).
grep.stdin.close()
awk1.stdin.close()
awk2.stdin.close()
procs = [ifconfig, grep, awk1, awk2]
for proc in procs:
    print(proc)
    proc.wait()
It'd be better to do the string processing in Python using re. Do this to get the stdout of ifconfig.
from subprocess import check_output
stdout = check_output(['/usr/bin/ifconfig', 'eth1'])
print(stdout)
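For example, a rough sketch of that re step; the pattern assumes the classic Linux ifconfig output with an "inet addr:x.x.x.x" field:
import re
from subprocess import check_output
stdout = check_output(['/usr/bin/ifconfig', 'eth1'])
match = re.search(r'inet addr:(\S+)', stdout)  # adjust if your ifconfig prints "inet x.x.x.x" instead
if match:
    print(match.group(1))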
I am trying to automate the process of executing a command. When I run this command:
ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10
in a terminal, I get the response:
%CPU PID USER COMMAND
5.7 25378 stackusr whttp
4.8 25656 stackusr tcpproxy
But when I execute this section of code I get an error regarding the format specifier:
import subprocess

if __name__ == '__main__':
    fullcmd = ['ps', '-eo', 'pcpu,pid,user,args | sort -k 1 -r | head -10']
    print fullcmd
    sshcmd = subprocess.Popen(fullcmd,
                              shell=False,
                              stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT)
    out = sshcmd.communicate()[0].split('\n')
    #print 'here'
    for lin in out:
        print lin
This is the error shown:
ERROR: Unknown user-defined format specifier "|".
********* simple selection ********* ********* selection by list *********
-A all processes -C by command name
-N negate selection -G by real group ID (supports names)
-a all w/ tty except session leaders -U by real user ID (supports names)
-d all except session leaders -g by session OR by effective group name
-e all processes -p by process ID
T all processes on this terminal -s processes in the sessions given
a all w/ tty, including other users -t by tty
g OBSOLETE -- DO NOT USE -u by effective user ID (supports names)
r only running processes U processes for specified users
x processes w/o controlling ttys t by tty
I have tried placing a \ before the | but this has no effect.
You would need to use shell=True to use the pipe character; if you are going to go down that route, then using check_output would be the simplest approach to get the output:
from subprocess import check_output, STDOUT
out = check_output("ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10", shell=True, stderr=STDOUT)
You can also simulate a pipe with Popen and shell=False, something like:
from subprocess import Popen, PIPE, STDOUT
sshcmd = Popen(['ps', '-eo', "pcpu,pid,user,args"],
stdout=PIPE,
stderr=STDOUT)
p2 = Popen(["sort", "-k", "1", "-r"], stdin=sshcmd.stdout, stdout=PIPE)
sshcmd.stdout.close()
p3 = Popen(["head", "-10"], stdin=p2.stdout, stdout=PIPE,stderr=STDOUT)
p2.stdout.close()
out, err = p3.communicate()
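In both cases out is a single string; to reproduce the line-by-line printing from the question you can simply split it:
for line in out.splitlines():
    print line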
How do I place the output of a bash command into a Python variable?
I am writing a Python script into which I want to pull the output of this
bash command:
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH} %{VENDOR}\n' | grep -v 'Red Hat' | wc -l
and place it in a Python variable, let's say R.
After that I want to run some Linux command from Python if R != 0.
How do I achieve that?
There are various options, but the easiest is probably using subprocess.check_output() with shell=True, although this can be a security hazard if you don't fully control what command is passed in.
import subprocess
var = subprocess.check_output("rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH} %{VENDOR}\\n' | grep -v 'Red Hat' | wc -l", shell=True)
var = int(var)
You need to use shell=True as otherwise the pipes would not be interpreted.
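To then run a Linux command only when the count is non-zero (the R != 0 check from the question), a minimal sketch; the command shown is just a stand-in for whatever you actually need to run:
R = var
if R != 0:
    subprocess.call("echo 'found non-Red Hat packages'", shell=True)  # stand-in command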
If you need more control you might want to look at plumbum where you can do:
from plumbum.cmd import rpm, grep, wc
chain = rpm["-qa", "--qf", r"%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH} %{VENDOR}\n"] | grep["-v", "Red Hat"] | wc["-l"]
R = int(chain())
Although I would probably not invoke wc, and would instead get the whole output and count its lines within Python (it's easier to check that you got just the lines you expected; piping through wc -l throws away all of the details).
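For example, a sketch of that variant, reusing the plumbum imports above (calling a chain runs it and returns its stdout as a string):
chain = rpm["-qa", "--qf", r"%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH} %{VENDOR}\n"] | grep["-v", "Red Hat"]
lines = chain().splitlines()
R = len(lines)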
I would recommend envoy, primarily because the API is much more intuitive to use for 90% of the use cases.
import envoy
r = envoy.run('ls ', data='data to pipe in', timeout=2)
print r.status_code # returns the status code
print r.std_out     # returns the output
See the Envoy Github page for more details.
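As far as I recall, envoy also accepts piped commands directly, so the question's pipeline might look roughly like the sketch below; treat the pipe handling as an assumption and check it against the envoy documentation:
import envoy
r = envoy.run("rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH} %{VENDOR}\\n' | grep -v 'Red Hat' | wc -l")
R = int(r.std_out)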
You can use stdin.
#!/usr/bin/python
import sys
s = sys.stdin.read()
print s
Then you will run a bash command like this
echo "Hello" | ./myscript.py
Output
Hello
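Applied to the question's pipeline, that might be wired up like this (myscript.py is just the example name used above):
#!/usr/bin/python
import sys
# invoked as:
#   rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH} %{VENDOR}\n' | grep -v 'Red Hat' | wc -l | ./myscript.py
R = int(sys.stdin.read())
if R != 0:
    print "count is", R  # run your follow-up command here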
You can replace the shell pipeline using Popen:
from subprocess import PIPE,Popen
p1 = Popen(["rpm", "-qa", "--qf", '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH} %{VENDOR}\n'],stdout=PIPE)
p2 = Popen(["grep", "-v", 'Red Hat'],stdin=p1.stdout,stdout=PIPE)
p1.stdout.close()
p3 = Popen(["wc", "-l"],stdin=p2.stdout,stdout=PIPE)
p2.stdout.close()
out,err = p3.communicate()
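Since the question wants the count as a number, you can then convert the captured output, for example:
R = int(out)
if R != 0:
    pass  # run the follow-up Linux command here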
If you just want to check whether grep returned any matches, then forget the wc -l and just check what grep returns:
p1 = Popen(["rpm", "-qa", "--qf", '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH} %{VENDOR}\n'],stdout=PIPE)
p2 = Popen(["grep", "-v", 'Red Hat'],stdin=p1.stdout,stdout=PIPE)
p1.stdout.close()
out,err = p2.communicate()
if out:
    ...
Or just use check_output to run the rpm command and check its lines for "Red Hat":
from subprocess import check_output
out = check_output(["rpm", "-qa", "--qf", '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH} %{VENDOR}\n'])
if any("Red Hat" not in line for line in out.splitlines()):
    ....
Which is the same as inverting the search with grep -v and then checking whether there are any matches.
I tested the following command in bash (Linux) and it works fine:
awk '/string1\/parameters\/string2/' RS= myfile | grep Value | sed 's/.*"\(.*\)"[^"]*$/\1/'
Now I have to call it from a Python script, where string1 and string2 are Python variables.
I tried it with os.popen but I couldn't figure out how to concatenate the variables into the command.
Any ideas how to solve this issue?
Thank you in advance for your help!
You can replace the shell pipeline with Popen:
from subprocess import PIPE, Popen
from shlex import split
# RS= (empty record separator) and myfile must stay separate arguments, and the
# sed program needs a raw string so Python doesn't turn \1 into a control character.
p1 = Popen(["awk", r"/string1\/parameters\/string2/", "RS=", "myfile"], stdout=PIPE)
p2 = Popen(["grep", "Value"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
p3 = Popen(split(r"""sed 's/.*"\(.*\)"[^"]*$/\1/'"""), stdin=p2.stdout, stdout=PIPE)
p2.stdout.close() # Allow p2 to receive a SIGPIPE if p3 exits.
output = p3.communicate()[0]
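Since string1 and string2 are Python variables in your case, you can build the awk pattern with format() before starting p1; the values here are just placeholders:
string1, string2 = "foo", "bar"  # placeholders for your actual values
pattern = r"/{0}\/parameters\/{1}/".format(string1, string2)
p1 = Popen(["awk", pattern, "RS=", "myfile"], stdout=PIPE)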
You can use subprocess.check_output() with the variables being substituted into the command using format():
cmd = """awk '/{}\/parameters\/{}/' RS= myfile | grep Value | sed 's/.*"\(.*\)"[^"]*$/\1/'""".format('string1', 'string2')
cmd_output = subprocess.check_output(cmd, shell=True)
But note the warnings regarding the use of shell=True in the referenced documentation.
An alternative is to set up the pipeline yourself using Popen():
import shlex
from subprocess import Popen, PIPE
awk_cmd = """awk '/{}\/parameters\/{}/' RS= myfile""".format('s1', 's2')
grep_cmd = 'grep Value'
sed_cmd = r"""sed 's/.*"\(.*\)"[^"]*$/\1/'"""
p_awk = Popen(shlex.split(awk_cmd), stdout=PIPE)
p_grep = Popen(shlex.split(grep_cmd), stdin=p_awk.stdout, stdout=PIPE)
p_sed = Popen(shlex.split(sed_cmd), stdin=p_grep.stdout, stdout=PIPE)
for p in p_awk, p_grep:
    p.stdout.close()
stdout, stderr = p_sed.communicate()
print stdout