Call multi-quoted unix command in python's subprocess - python

How do I call a unix command such as df -Ph | awk 'NR>=2 {print $6","$5","$4}' using subprocess? Would it make sense to use shlex.split here?
Thanks for any assistance here.

You're using a pipe, so it needs to run in the shell. So just use the string form and make sure to specify shell=True. As for the quoting, it's easiest to use a triple quote here:
cmd = """df -Ph | awk 'NR>=2 {print $6","$5","$4}'"""

Just have subprocess pass it to a shell by setting shell=True. Note that the command itself ends with a single quote, so triple single quotes would collide with it; use triple double quotes instead:
subprocess.call("""df -Ph | awk 'NR>=2 {print $6","$5","$4}'""", shell=True)
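As for the shlex.split part of the question: it won't help here. shlex.split only tokenizes the string, and subprocess treats no token specially, so the | would just become a literal argument to df. A quick illustration:
import shlex

cmd = """df -Ph | awk 'NR>=2 {print $6","$5","$4}'"""
print(shlex.split(cmd))
# ['df', '-Ph', '|', 'awk', 'NR>=2 {print $6","$5","$4}']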

You can also do it like this, chaining the two commands without a shell. Do not forget to import subprocess:
import subprocess
def linuxOperation():
    p = subprocess.Popen(["df", "-Ph"], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["awk", 'NR>=2 {print $6","$5","$4}'], stdin=p.stdout, stdout=subprocess.PIPE, universal_newlines=True)
    p.stdout.close()
    out, err = p2.communicate()
    print(out)

linuxOperation()

Related

Kill application in linux using python

I need help with killing an application in Linux.
As a manual process I can use the command ps -ef | grep "app_name" | awk '{print $2}'.
It gives me the job IDs, and then I kill them using the command kill -9 jobid.
I want a python script which can do this task.
I have written this code:
import os
os.system("ps -ef | grep app_name | awk '{print $2}'")
This prints the job IDs, but os.system only returns an int (the exit status), so I am not able to capture them and kill the application.
Can you please help here?
Thank you
import subprocess
temp = subprocess.run("ps -ef | grep 'app_name' | awk '{print $2}'", stdin=subprocess.PIPE, shell=True, stdout=subprocess.PIPE)
job_ids = temp.stdout.decode("utf-8").strip().split("\n")
# sample job_ids will be: ['59899', '68977', '68979']
# convert them to integers
job_ids = list(map(int, job_ids))
# job_ids = [59899, 68977, 68979]
Then iterate through the job IDs and kill them with os.kill():
import os

for job_id in job_ids:
    os.kill(job_id, 9)
subprocess.run docs: https://docs.python.org/3/library/subprocess.html#subprocess.run
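One caveat: ps -ef | grep app_name typically matches the grep process itself, so its own PID can show up in job_ids. pgrep avoids this; a minimal sketch, assuming app_name is the pattern to match:
import subprocess

# pgrep -f matches against the full command line and prints one PID per line
temp = subprocess.run(["pgrep", "-f", "app_name"], stdout=subprocess.PIPE)
job_ids = list(map(int, temp.stdout.split()))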
To kill a process in Python, call os.kill(pid, sig), with sig = 9 (signal number for SIGKILL) and pid = the process ID (PID) to kill.
To get the process ID, use os.popen instead of os.system above. Alternatively, use subprocess.Popen(..., stdout=subprocess.PIPE). In both cases, call the .readline() method (on the object os.popen returns, or on the Popen object's .stdout), and convert its return value to an integer with int(...).
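A minimal sketch of the os.popen variant, assuming a single matching process:
import os

# os.popen runs the pipeline in a shell and returns a file-like object
pid = int(os.popen("ps -ef | grep app_name | awk '{print $2}'").readline())
os.kill(pid, 9)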

Python run bash script with flags and pipes

I have a bash script that returns the admin email for a domain, like the following.
whois -h $(whois "stackoverflow.com" | grep 'Registrar WHOIS Server:' | cut -f2- -d:) "stackoverflow.com" | grep 'Admin Email:' | cut -f2- -d:
I want to run this in a python file. I believe I need to use a subprocess but can't seem to get it working with the pipes and flags. Any help?
Yes, you can use subprocess with pipes.
I will illustrate with an example:
ps = subprocess.Popen(('whois', 'stackoverflow.com'), stdout=subprocess.PIPE)
output = subprocess.check_output(('grep', 'Registrar WHOIS'), stdin=ps.stdout)
ps.wait()
You can adjust it to your needs.
The easiest solution is to write the commands into a script file and execute that file.
If you don't want that, you can execute any command with
bash -c 'command'
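In Python that looks something like this (a sketch, using the pipeline from the question):
import subprocess

# bash parses the $(), the pipes and the quoting; Python just hands over the script
script = """whois -h $(whois "stackoverflow.com" | grep 'Registrar WHOIS Server:' | cut -f2- -d:) "stackoverflow.com" | grep 'Admin Email:' | cut -f2- -d:"""
subprocess.call(['bash', '-c', script])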
This is covered in the Replacing Older Functions with the subprocess Module section of the docs.
The example there is this bash pipeline:
output=`dmesg | grep hda`
rewritten for subprocess as;
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
Note that in many cases, you don't need to handle all of the same edge cases that the shell handles in exactly the same way. But if you don't know what you need, it's better to be fully general like this.
Your $() does the same thing as the backticks in that example, your pipes are the same as the example's pipes, and your arguments aren't anything special.
So:
whois = Popen(['whois', 'stackoverflow.com'], stdout=PIPE)
grep = Popen(['grep', 'Registrar WHOIS Server:'], stdin=whois.stdout, stdout=PIPE)
whois.stdout.close()
cut = Popen(['cut', '-f2-', '-d:'], stdin=grep.stdout, stdout=PIPE)
grep.stdout.close()
inneroutput, _ = cut.communicate()
whois = Popen(['whois', '-h', inneroutput.strip(), 'stackoverflow.com'], stdout=PIPE)  # strip the trailing newline before reusing the hostname
grep = Popen(['grep', 'Admin Email:'], stdin=whois.stdout, stdout=PIPE)
whois.stdout.close()
cut = Popen(['cut', '-f2-', '-d:'], stdin=grep.stdout)
grep.stdout.close()
cut.communicate()
If this seems like a mess, consider that:
Your original shell command is a mess.
If you actually know exactly what you're expecting the pipeline to do, you can skip a lot of it.
All of the stuff you're doing here could just be done directly in Python without the need for this whole mess.
You may be happier using a third-party library like plumbum.
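For example, with plumbum (a sketch, assuming plumbum is installed; the pipeline mirrors the inner command):
from plumbum.cmd import whois, grep, cut

# each command object takes its arguments via [...]; | chains them into a pipeline
pipeline = whois['stackoverflow.com'] | grep['Registrar WHOIS Server:'] | cut['-f2-', '-d:']
registrar = pipeline().strip()  # calling the pipeline runs it and returns its stdout
print(registrar)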
How could you write the whole thing in Python without all this piping? For example, instead of using grep, you could use Python's re module. Or, since you're not even using a regular expression at all, just a simple in check. And likewise for cut:
whois = subprocess.run(['whois', 'stackoverflow.com'],
                       check=True, stdout=PIPE, encoding='utf-8').stdout
for line in whois.splitlines():
    if 'Registrar WHOIS Server:' in line:
        registrar = line.split(':', 1)[1].strip()
        break
inner = subprocess.run(['whois', '-h', registrar, 'stackoverflow.com'],
                       check=True, stdout=PIPE, encoding='utf-8').stdout
for line in inner.splitlines():
    if 'Admin Email:' in line:
        admin = line.split(':', 1)[1].strip()
        break

Python Popen shell script but fail

I want to execute the bash command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
in python using Popen. The code does not raise any exception, but no change is made to jruby.log after execution. The python code is shown below.
>>> command='/bin/echo </verbosegc> >> '+fullpath
>>> command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
>>> process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
>>> output= process.communicate()[0]
>>> output
'</verbosegc> >> /tmp/jruby.log\n'
I also printed process.pid and then checked the pid using ps -ef | grep pid; the result shows that the process has already finished.
Just pass a file object if you want to append the output to a file; you cannot use shell redirection unless you set shell=True:
command = ['/bin/echo', '</verbosegc>']
with open('/tmp/jruby.log', "a") as f:
    subprocess.check_call(command, stdout=f, stderr=subprocess.STDOUT)
The first argument to subprocess.Popen is the array ['/bin/echo', '</verbosegc>', '>>', '/tmp/jruby.log']. When the first argument to subprocess.Popen is an array, it does not launch a shell to run the command, and the shell is what's responsible for interpreting >> /tmp/jruby.log to mean "write output to jruby.log".
In order to make the >> redirection work in this command, you'll need to pass command directly to subprocess.Popen() without splitting it into a list, and set shell=True. You'll also need to quote the first argument (or else the shell will interpret the "<" and ">" characters in ways you don't want):
command = '/bin/echo "</verbosegc>" >> /tmp/jruby.log'
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
Consider the following:
command = ['printf "%s\n" "$1" >>"$2"',  # shell script to execute
           '',                           # $0 in shell
           '</verbosegc>',               # $1
           '/tmp/jruby.log']             # $2
subprocess.Popen(command, shell=True)
The first argument is a shell script referring to $1 and $2, which are in turn passed as separate arguments. Keeping data separate from code, rather than trying to substitute the former into the latter, is a precaution against shell injection (think of this as an analog to SQL injection).
Of course, don't actually do anything like this in Python -- the native primitives for file IO are far more appropriate.
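For instance, the whole command collapses to plain Python file IO:
# equivalent of: echo '</verbosegc>' >> /tmp/jruby.log
with open('/tmp/jruby.log', 'a') as f:
    f.write('</verbosegc>\n')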
Have you tried without splitting the command and using shell=True? My usual format is:
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
output = process.stdout.read() # or .readlines()

Using multiple commands in `subprocess.call`

I want to create a shell pipeline using call. For example, I want to get the number of lines containing 123.
The shell command would be:
grep "123" myfile | wc -l > sum.txt
But "123" is a variable, so I want to use python:
A = ["123", "1234", "12345"]
for i in A:
    call(["grep", i, "| wc >> sum.txt"])
This code does not work!
call only calls one executable. What you want is for that executable to be a shell (e.g. bash), which parses the command line; the shell is also what handles pipes. You can do this with the shell=True option, which is off by default.
When you give it an array like you do, you are calling that executable with those arguments. A pipe is not an argument; grep does not know how to pipe, nor how to invoke wc.
You can do
call(["grep '%s' myfile | wc >> sum.txt" % i, shell=True)
If you are using the pipe character you need shell=True; pass i each time using str.format:
call('grep "{}" myfile | wc >> sum.txt'.format(i),shell=True)
You can also do it without shell=True and using python to open the file instead of shell redirection:
from subprocess import Popen, PIPE, check_call
p = Popen(["grep", i, "myfile"], stdout=PIPE)
with open('sum.txt', "a") as f:
    check_call(["wc"], stdin=p.stdout, stdout=f)
Also, > and >> are not the same: > truncates the file while >> appends to it, so the mode you open the file in ("w" or "a") depends on which one you actually want to replicate.
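And if all you need is the count of matching lines, a pure-Python sketch (using the question's myfile and pattern) avoids subprocess entirely:
# equivalent of: grep "123" myfile | wc -l
with open('myfile') as f:
    count = sum(1 for line in f if '123' in line)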

sort and uniq in python

I want to run some shell commands from python. I have a main.py which calls successive functions, and I find some of them easier to do in the shell. The problem: I want to do all of this automatically!
I want to do this kind of code :
sort fileIn | uniq > fileOut
My problem is doing it with the pipe character. I tried:
from subprocess import call
call(['sort ',FileOut,'|',' uniq '])
or
p1 = subprocess.Popen(['sort ', FileOut], stdout=subprocess.PIPE)
p2 = subprocess.Popen([" wc","-l"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output,err = p2.communicate()
But none of this worked.
(NB: FileOut is a string)
You need to use shell=True, which causes your command to be run by the shell instead of via an exec syscall:
call('sort {0} | uniq'.format(FileOut), shell=True)
It's worth noting that, if you simply want the unique lines of a file in python (in no particular order), it may be easier to do so without the shell:
unique_lines = set(open('filename').readlines())
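And if you also want the sorted output that sort | uniq produces, written to a file as in the question:
# equivalent of: sort fileIn | uniq > fileOut
with open('fileIn') as fin, open('fileOut', 'w') as fout:
    fout.writelines(sorted(set(fin)))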
I got tired of always looking up the Popen documentation, so this is an abridged version of the utility function I use to wrap Popen. You can take the so parameter of the first call and pass it as the input to the next call. You can also do error checking/parsing if you need to.
import subprocess

def run(command, input=None):
    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE, shell=True)
    if input:
        so, se = process.communicate(input)
    else:
        so, se = process.communicate()
    rc = process.returncode
    return so, se, rc
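For the question's pipeline, usage would look like this (so and se come back as bytes, since the wrapper doesn't enable text mode):
so, se, rc = run('sort fileIn | uniq')
unique_lines = so.decode().splitlines()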
