I am trying to automate the process of executing a command. When I type this command:
ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10
into a terminal I get the response:
%CPU PID USER COMMAND
5.7 25378 stackusr whttp
4.8 25656 stackusr tcpproxy
But when I execute this section of code I get an error regarding the format specifier:
if __name__ == '__main__':
    fullcmd = ['ps', '-eo', 'pcpu,pid,user,args | sort -k 1 -r | head -10']
    print fullcmd
    sshcmd = subprocess.Popen(fullcmd,
                              shell=False,
                              stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT)
    out = sshcmd.communicate()[0].split('\n')
    #print 'here'
    for lin in out:
        print lin
This is the error shown:
ERROR: Unknown user-defined format specifier "|".
********* simple selection ********* ********* selection by list *********
-A all processes -C by command name
-N negate selection -G by real group ID (supports names)
-a all w/ tty except session leaders -U by real user ID (supports names)
-d all except session leaders -g by session OR by effective group name
-e all processes -p by process ID
T all processes on this terminal -s processes in the sessions given
a all w/ tty, including other users -t by tty
g OBSOLETE -- DO NOT USE -u by effective user ID (supports names)
r only running processes U processes for specified users
x processes w/o controlling ttys t by tty
I have tried placing a \ before the | but this has no effect.
You would need to use shell=True to use the pipe character. If you are going to go down that route, then check_output would be the simplest approach to get the output:
from subprocess import check_output, STDOUT
out = check_output("ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10",
                   shell=True, stderr=STDOUT)
You can also simulate a pipe with Popen and shell=False, something like:
from subprocess import Popen, PIPE, STDOUT

sshcmd = Popen(['ps', '-eo', 'pcpu,pid,user,args'],
               stdout=PIPE,
               stderr=STDOUT)
p2 = Popen(['sort', '-k', '1', '-r'], stdin=sshcmd.stdout, stdout=PIPE)
sshcmd.stdout.close()
p3 = Popen(['head', '-10'], stdin=p2.stdout, stdout=PIPE, stderr=STDOUT)
p2.stdout.close()
out, err = p3.communicate()
I am trying to figure out how to do this:
command = f"adb -s {i} shell"
proc = Popen(command, stdin=PIPE, stdout=PIPE)
out, err = proc.communicate(f'dumpsys package {app_name} | grep version'.encode('utf-8'))
but with run(), like this:
command = f"adb -s {i} shell"
proc = run(command, stdin=PIPE, stdout=PIPE, shell=True)
out, err = run(f'dumpsys package {app_name} | grep version', shell=True, text=True, stdin=proc.stdout )
The idea is to run a command that requires input of some kind (for example, entering a shell) and afterwards send another command to that shell.
I've found a way online with communicate, but I wonder how to do it with the run() function.
Thanks!
You only need to call run once -- pass the remote command in the input argument (and don't use shell=True in places where you don't need it).
import subprocess, shlex

proc = subprocess.run(['adb', '-s', i, 'shell'],
                      capture_output=True,
                      text=True,
                      input=f'dumpsys package {shlex.quote(app_name)} | grep version')
shlex.quote prevents an app name that contains $(...), ;, etc. from running unwanted commands on your device. (Note that text=True is needed here because input is passed as a str rather than bytes.)
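As a quick illustration of what shlex.quote does (a minimal sketch; the malicious app name here is made up):

```python
import shlex

app_name = "evil; rm -rf /"  # hypothetical malicious input
cmd = f'dumpsys package {shlex.quote(app_name)} | grep version'
# the dangerous characters end up wrapped in single quotes,
# so the remote shell sees them as a literal argument
print(cmd)
```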
How would one call a shell command from Python which contains a pipe and capture the output?
Suppose the command was something like:
cat file.log | tail -1
The Perl equivalent of what I am trying to do would be something like:
my $string = `cat file.log | tail -1`;
Use a subprocess.PIPE, as explained in the subprocess docs section "Replacing shell pipeline":
import subprocess
p1 = subprocess.Popen(["cat", "file.log"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["tail", "-1"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output, err = p2.communicate()
Or, using the sh module, piping becomes composition of functions:
import sh
output = sh.tail(sh.cat('file.log'), '-1')
import subprocess
task = subprocess.Popen("cat file.log | tail -1", shell=True, stdout=subprocess.PIPE)
data = task.stdout.read()
assert task.wait() == 0
Note that this does not capture stderr. If you want to capture stderr as well, you'll need to use task.communicate(); calling task.stdout.read() and then task.stderr.read() can deadlock if the buffer for stderr fills. If you want them combined, you can use 2>&1 as part of the shell command.
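A minimal sketch of the 2>&1 approach (the file name is made up so that cat actually produces an error on stderr):

```python
import subprocess

# 2>&1 inside the shell command merges cat's stderr into the pipe,
# so the error message flows through tail -1 into stdout
task = subprocess.Popen("cat missing.log 2>&1 | tail -1",
                        shell=True, stdout=subprocess.PIPE)
out, _ = task.communicate()  # communicate avoids the stderr-buffer deadlock
print(out.decode())
```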
But given your exact case,
task = subprocess.Popen(['tail', '-1', 'file.log'], stdout=subprocess.PIPE)
data = task.stdout.read()
assert task.wait() == 0
avoids the need for the pipe at all.
This:
import subprocess
p = subprocess.Popen("cat file.log | tail -1", shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdin=subprocess.PIPE)
# for shell=False, use absolute paths
p_stdout = p.stdout.read()
p_stderr = p.stderr.read()
print p_stdout
Or this works, although os.system returns the command's exit status and the output goes straight to the terminal rather than into the variable:
import os
result = os.system("cat file.log | tail -1")
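If you actually want the output as a string in one line, subprocess.getoutput does what os.system cannot (a sketch; printf stands in for the log file so the example is self-contained):

```python
import subprocess

# runs through the shell, so the pipe works; returns stdout as a str
# with the trailing newline stripped
last = subprocess.getoutput("printf 'a\\nb\\nc\\n' | tail -1")
print(last)  # → c
```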
Another way similar to Popen would be:
command=r"""cat file.log | tail -1 """
output=subprocess.check_output(command, shell=True)
This is a fork of #chown's answer with some improvements:
- an alias for import subprocess makes it easier to set parameters
- if you just want the output, you don't need to set stderr or stdin when calling Popen
- for better formatting, it's recommended to decode the output
- shell=True is necessary in order to have a shell interpret the command line
#!/usr/bin/python3
import subprocess as sp
p = sp.Popen("cat app.log | grep guido", shell=True, stdout=sp.PIPE)
output = p.stdout.read()
print(output.decode('utf-8'))
$ cat app.log
2017-10-14 22:34:12, User Removed [albert.wesker]
2017-10-26 18:14:02, User Removed [alexei.ivanovich]
2017-10-28 12:14:56, User Created [ivan.leon]
2017-11-14 09:22:07, User Created [guido.rossum]
$ python3 subproc.py
2017-11-14 09:22:07, User Created [guido.rossum]
A simple function to run a shell command containing multiple pipes.
Usage:
res, err = eval_shell_cmd('pacman -Qii | grep MODIFIED | grep -v UN | cut -f 2')
Function
import subprocess

def eval_shell_cmd(command, debug=False):
    """
    Eval shell command with pipes and return result
    :param command: Shell command
    :param debug: Debug flag
    :return: Result string
    """
    processes = command.split(' | ')
    if debug:
        print('Processes:', processes)
    for index, value in enumerate(processes):
        args = value.split(' ')
        if debug:
            print(index, args)
        if index == 0:
            p = subprocess.Popen(args, stdout=subprocess.PIPE)
        else:
            p = subprocess.Popen(args, stdin=p.stdout, stdout=subprocess.PIPE)
        if index == len(processes) - 1:
            result, error = p.communicate()
    return result.decode('utf-8'), error
I want to convert the following shell evaluation to Python 2.6 (I can't upgrade). I can't figure out how to evaluate the output of the command.
Here's the shell version:
status=`$hastatus -sum |grep $hostname |grep Grp| awk '{print $6}'`
if [ $status != "ONLINE" ]; then
exit 1
fi
I tried os.popen and it returns ['ONLINE\n'].
value = os.popen("hastatus -sum |grep `hostname` |grep Grp| awk '{print $6}'").readlines()
print value
Try the subprocess module:
import subprocess
p = subprocess.Popen("hastatus -sum |grep `hostname` |grep Grp| awk '{print $6}'",
                     shell=True, stdout=subprocess.PIPE)
value = p.communicate()[0]
print(value)
Note that subprocess.call only returns the exit status, and without shell=True the pipes would not work; Popen with communicate also works on 2.6, which predates check_output.
Documentation is found here:
https://docs.python.org/2.6/library/subprocess.html?highlight=subprocess#module-subprocess
The recommended way is to use the subprocess module.
The following section of the documentation is instructive:
replacing shell pipeline
I reproduce it here for reference:
output=dmesg | grep hda
becomes:
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
The p1.stdout.close() call after starting the p2 is important in order for p1 to receive a SIGPIPE if p2 exits before p1.
Alternatively, for trusted input, the shell’s own pipeline support may still be used directly:
output=dmesg | grep hda
becomes:
output=check_output("dmesg | grep hda", shell=True)
And here the recipe to translate os.popen to the subprocess module:
replacing os.popen()
So in your case you could do something like
from subprocess import check_output
output = check_output("hastatus -sum |grep `hostname` |grep Grp| awk '{print $6}'", shell=True)
(note that check_output was added in Python 2.7; on 2.6 you can get the same result with Popen(..., shell=True, stdout=PIPE).communicate()[0])
or
concatenating the Popens as shown in the documentation above (probably what I would do).
Then to test the output you could just use, assuming you're using the first approach:
import sys
import subprocess
....
if 'ONLINE' in output:
sys.exit(1)
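Putting the pieces together, a minimal runnable sketch of the whole check (echo ONLINE stands in for the site-specific hastatus pipeline, which only exists on the real host):

```python
import subprocess
import sys

# on the real host, replace "echo ONLINE" with:
#   "hastatus -sum | grep `hostname` | grep Grp | awk '{print $6}'"
output = subprocess.check_output("echo ONLINE", shell=True).decode().strip()
if output != 'ONLINE':
    sys.exit(1)
print(output)  # → ONLINE
```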
The output of
ps uaxw | egrep 'kms' | grep -v 'grep'
yields:
user1 8148 0.0 0.0 128988 3916 pts/8 S+ 18:34 0:00 kms
user2 11782 0.7 0.3 653568 56564 pts/14 Sl+ 20:29 0:01 kms
Clearly two processes running the program. I want to store this number (2 here) as a variable. Any suggestions on how to do this in python?
I tried the following:
procs = subprocess.check_output("ps uaxw | egrep 'kmns' |grep -v 'grep'",shell=True)
But I get the following (I think when the jobs are not currently running, so the number of processes running the jobs is zero):
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/subprocess.py", line 573, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command 'ps uaxw | egrep 'kmns' |grep -v 'grep'' returned non-zero exit status 1
How do I get around this?
Btw, here is the function I wrote to detect if my system is busy (meaning the number of running jobs, or the load average plus a margin, exceeds the number of installed CPUs):
def busy():
    import subprocess
    output = subprocess.check_output("uptime", shell=False)
    words = output.split()
    sys.stderr.write("%s\n" % (output))
    procs = subprocess.check_output("ps uaxw | egrep '(kmns)' | grep -v 'grep'", shell=True)
    kmns_wrds = procs.split("\n")
    wrds = words[9]
    ldavg = float(wrds.strip(',')) + 0.8
    sys.stderr.write("%s %s\n" % (ldavg, len(kmns_wrds)))
    return max(ldavg, len(kmns_wrds)) > ncpus
The above is called by:
def wait_til_free(myseconds):
    while busy():
        import time
        import sys
        time.sleep(myseconds)
        """ sys.stderr.write("Waiting %s seconds\n" % (myseconds)) """
which basically tells the system to wait while all cpus are taken.
Any suggestions?
Many thanks!
If you're going to do this all with a big shell command, just add the -c argument to grep, so it gives you a count of lines instead of the actual lines:
$ ps uaxw |grep python |grep -v grep
abarnert 1028 0.0 0.3 2529488 55252 s000 S+ 9:46PM 0:02.80 /Library/Frameworks/Python.framework/Versions/3.4/Resources/Python.app/Contents/MacOS/Python /Library/Frameworks/Python.framework/Versions/3.4/bin/ipython3
abarnert 9639 0.0 0.1 2512928 19228 s002 T 3:06PM 0:00.40 /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /usr/local/bin/ipython2
$
$ ps uaxw |grep python |grep -c -v grep
2
Of course you could make this more complicated by adding a | wc -l to the end, or by counting the lines in Python, but why?
Alternatively, why even involve the shell? You can search within Python just as easily as you can run grep—and then you don't have the problem that you've accidentally created a grep process that ps will report as matching your search, which you then need to grep -v back out:
procs = subprocess.check_output(['ps', 'uaxw']).splitlines()
kms_procs = [proc for proc in procs if 'kms' in proc]
count = len(kms_procs)
Or, even more simply, don't ask ps to give you a whole bunch of information that you don't want and then figure out how to ignore it, just ask for the information you want:
procs = subprocess.check_output(['ps', '-a', '-c', '-ocomm=']).splitlines()
count = procs.count('kms')
Or, simplest of all, install psutil and don't even try to run subprocesses and parse their output:
count = sum(1 for proc in psutil.process_iter() if proc.name() == 'kms')
If you want to simulate pipes you can use Popen:
p1 = Popen(["ps", "uaxw"], stdout=PIPE)
p2 = Popen(["grep", 'kms'], stdout=PIPE, stdin=p1.stdout)
p1.stdout.close()
out,_ = p2.communicate()
print(len(out.splitlines()))
Or use pgrep if it is available:
count = check_output(["pgrep", "-c", "kms"])
You may get different output from the two, since pgrep matches only against executable names; the same kind of difference applies between ps -aux and ps -a.
How do I perform
echo xyz | ssh [host]
(send xyz to host)
with Python?
I have tried pexpect
pexpect.spawn('echo xyz | ssh [host]')
but it's performing
echo 'xyz | ssh [host]'
Maybe another package would be better?
http://pexpect.sourceforge.net/pexpect.html#spawn
Gives an example of running a command with a pipe :
shell_cmd = 'ls -l | grep LOG > log_list.txt'
child = pexpect.spawn('/bin/bash', ['-c', shell_cmd])
child.expect(pexpect.EOF)
You don't need pexpect to simulate a simple shell pipeline. The simplest way to emulate the pipeline is the os.system function:
os.system("echo xyz | ssh [host]")
A more Pythonic approach is to use the subprocess module:
p = subprocess.Popen(["ssh", "host"],
stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write("xyz\n")
output = p.communicate()[0]
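On Python 3 the same idea is usually written with subprocess.run and text=True (note that on Python 3, p.stdin.write above would need bytes). A sketch, with cat standing in for the ssh command so it runs anywhere:

```python
import subprocess

# "cat" echoes its stdin back, standing in here for ["ssh", "host"]
result = subprocess.run(["cat"], input="xyz\n",
                        capture_output=True, text=True)
print(result.stdout)  # → xyz
```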