I need help with killing an application on Linux.
As a manual process I can use the command: ps -ef | grep "app_name" | awk '{print $2}'
It gives me the job IDs, and then I kill each one with the command "kill -9 jobid".
I want to have python script which can do this task.
I have written code as
import os
os.system("ps -ef | grep app_name | awk '{print $2}'")
This prints the job IDs, but os.system() only returns an "int" (the exit status), so I am not able to capture the IDs and kill the application.
Can you please help here?
Thank you
import os
import subprocess
temp = subprocess.run("ps -ef | grep 'app_name' | awk '{print $2}'", stdin=subprocess.PIPE, shell=True, stdout=subprocess.PIPE)
job_ids = temp.stdout.decode("utf-8").strip().split("\n")
# sample job_ids will be: ['59899', '68977', '68979']
# convert them to integers
job_ids = list(map(int, job_ids))
# job_ids = [59899, 68977, 68979]
Then iterate through the job IDs and kill them, using os.kill():
for job_id in job_ids:
    os.kill(job_id, 9)
subprocess.run docs - https://docs.python.org/3/library/subprocess.html#subprocess.run
To kill a process in Python, call os.kill(pid, sig), with sig = 9 (signal number for SIGKILL) and pid = the process ID (PID) to kill.
To get the process ID, use os.popen instead of os.system above. Alternatively, use subprocess.Popen(..., stdout=subprocess.PIPE). In both cases, call the .readline() method, and convert the return value of that to an integer with int(...).
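For example, a minimal sketch of the os.popen variant described above, assuming a single matching process; "app_name" is a placeholder, and the extra grep -v grep keeps the grep process itself out of the match:

import os
import signal

# Read the first matching PID from the ps/grep/awk pipeline and convert it to an int.
pipe = os.popen("ps -ef | grep app_name | grep -v grep | awk '{print $2}'")
pid = int(pipe.readline())
pipe.close()

os.kill(pid, signal.SIGKILL)  # equivalent to kill -9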
I'm running strandtest.py to power a NeoPixel matrix on a Django server.
A button click runs the program
I'd like a second button click to kill it.
I've read a bunch of different solutions but I am having trouble following the instructions. Is there another subprocess command I can execute while the program runs?
This is how I start the program
from django.http import HttpResponse
import subprocess

def test(request):
    sp = subprocess.Popen('sudo python /home/pi/strandtest.py', shell=True)
    return HttpResponse()
I want to kill the process and clear the matrix.
You can try running the command below with subprocess.Popen:
kill -9 `ps aux | grep /home/pi/strandtest.py | awk -F' ' '{print $2}'`
kill -9 - kills your process
ps aux combined with grep - finds only the process you need
awk - gets only the ID of that process
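For example, a minimal sketch of a second Django view doing that (stop_matrix is a hypothetical name; the script path comes from the question, and the added grep -v grep keeps the grep process itself out of the match):

from django.http import HttpResponse
import subprocess

def stop_matrix(request):
    # Back-quotes substitute the PID(s) found by ps/grep/awk into kill -9.
    cmd = "kill -9 `ps aux | grep /home/pi/strandtest.py | grep -v grep | awk -F' ' '{print $2}'`"
    subprocess.Popen(cmd, shell=True).wait()
    return HttpResponse()

Clearing the matrix would still need a separate step, for example running a small script that writes all pixels off, since killing the process leaves the LEDs in their last state.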
I need to get the duration of a video for a Django application, so I'll have to do this in Python. But I'm really a beginner at this, so it would be nice if you could help.
This is what I have got so far:
import subprocess
task = subprocess.Popen("avconv -i video.mp4 2>&1 | grep Duration | cut -d ' ' -f 4 | sed -r 's/([^\.]*)\..*/\1/'", shell=True, stdout=subprocess.PIPE)
time = task.communicate()[0]
print time
I want to solve it with avconv because I'm already using it at another point. The shell command works well so far and gives me output like:
HH:MM:SS.
But when I execute the Python code I just get a non-interpretable symbol on the shell.
Thanks a lot already for your help!
Found a solution. The problem was the sed part:
import os
import subprocess
task = subprocess.Popen("avconv -i video.mp4 2>&1 | grep Duration | cut -d ' ' -f 4 | sed -e 's/.\{4\}$//'", shell=True, stdout=subprocess.PIPE)
time = task.communicate()[0]
print time
Because it is always the same part, it was enough to just cut the last 4 characters.
From the Python documentation:
Warning
Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
So you should really use communicate() for that:
import subprocess
# A raw string keeps sed's \1 backreference intact (otherwise Python turns "\1" into a control character).
task = subprocess.Popen(r"avconv -i video.mp4 2>&1 | grep Duration | cut -d ' ' -f 4 | sed -r 's/([^\.]*)\..*/\1/'", shell=True, stdout=subprocess.PIPE)
time = task.communicate()[0]
print time
That way you can also catch the stderr message, if any.
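For instance, a minimal sketch (using the corrected sed expression from the accepted fix) that also captures stderr, so any error messages from the pipeline end up in err instead of on the terminal:

import subprocess

task = subprocess.Popen(
    r"avconv -i video.mp4 2>&1 | grep Duration | cut -d ' ' -f 4 | sed -e 's/.\{4\}$//'",
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = task.communicate()
print(out)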
My requirement is to run a shell function or script in parallel with multiprocessing. Currently I get it done with the script below, which doesn't use multiprocessing. Also, when I start 10 jobs in parallel, one job might get completed early and then has to wait for the other 9 jobs to complete. I want to eliminate this with the help of multiprocessing in Python.
i=1
total=`cat details.txt |wc -l`
while [ $i -le $total ]
do
name=`cat details.txt | head -$i | tail -1 | awk '{print $1}'`
age=`cat details.txt | head -$i | tail -1 | awk '{print $2}'`
./new.sh $name $age &
if (( $i % 10 == 0 )); then wait; fi
i=$(( i + 1 ))
done
wait
I want to run ./new.sh $name $age inside a Python script with multiprocessing enabled (taking the number of CPUs into account). As you can see, the values of $name and $age change on each execution. Kindly share your thoughts.
First, your whole shell script could be replaced with:
awk '{ print $1; print $2; }' details.txt | xargs -d'\n' -n 2 -P 10 ./new.sh
A simple python solution would be:
from subprocess import check_call
from multiprocessing.dummy import Pool
def call_script(args):
    name, age = args  # unpack arguments
    check_call(["./new.sh", name, age])

def main():
    with open('details.txt') as inputfile:
        args = [line.split()[:2] for line in inputfile]
    pool = Pool(10)
    # pool = Pool() would use the number of available processors instead
    pool.map(call_script, args)
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()
Note that this uses multiprocessing.dummy.Pool (a thread pool) to call the external script, which in this case is preferable to a process pool, since all the call_script method does is invoke the script and wait for its return. Doing that in a worker process instead of a worker thread wouldn't increase performance since this is an IO-bound operation. It would only increase the overhead for process creation and interprocess communication.
How do I call a Unix command such as df -Ph | awk 'NR>=2 {print $6","$5","$4}' using subprocess? Would it make sense to use shlex.split here?
Thanks for any assistance here.
You're using a pipe, so it needs to run in the shell. So just use the string form and make sure to specify shell=True. As for the quoting, it's easiest to use a triple quote here:
cmd = """df -Ph | awk 'NR>=2 {print $6","$5","$4}'"""
Just have subprocess pass it to a shell by setting shell=True:
subprocess.call("""df -Ph | awk 'NR>=2 {print $6","$5","$4}'""", shell=True)
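Putting it together, a minimal sketch that also captures the command's output (subprocess.call only returns the exit status; check_output returns what the pipeline printed):

import subprocess

cmd = """df -Ph | awk 'NR>=2 {print $6","$5","$4}'"""
# shell=True is needed because of the pipe; universal_newlines gives back str instead of bytes.
output = subprocess.check_output(cmd, shell=True, universal_newlines=True)
print(output)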
Hi, you can also do it like this. Do not forget to import subprocess.
import subprocess
def linuxOperation():
    p = subprocess.Popen(["df", "-Ph"], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["awk", 'NR>=2 {print $6","$5","$4}'], stdin=p.stdout, stdout=subprocess.PIPE, universal_newlines=True)
    p.stdout.close()
    out, err = p2.communicate()
    print(out)

linuxOperation()
I am trying to find the process ID on a Linux OS with a Python script, with the following:
from subprocess import Popen, PIPE

PID = Popen("ps -elf | grep <proc_name> | grep -v grep | awk '{print $4}'", shell=True, stdout=PIPE).stdout
pid = PID.read()
pid = int(pid)
However, the script does not work if there is more than one PID with the same name.
The program exits at the int() call because '123\n146\n' is not a base-10 integer.
I then tried the following:
pid = PID.read().split()
print len(pid)
print pid[0]
It seems to work on the Python command line and forms an array of pid = ['123', '156'], but somehow it does not work in the script.
Any suggestions? Thanks.
Are you trying to find out your own process id? If so, use os.getpid()
You could use subprocess.check_output() and str.splitlines():
from subprocess import check_output as qx
pids = map(int, qx(["pgrep", procname]).splitlines())
To do it without an external process you could try psutil:
import psutil # pip install psutil
pids = [p.pid for p in psutil.process_iter() if p.name == procname]
Experiment with p.name, p.cmdline and various comparisons with procname to get what you need in your particular case.
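In recent psutil releases (2.0 and later) name and cmdline are methods rather than attributes; a sketch of the same idea using process_iter with attrs (available from psutil 5.3 onwards), where procname is whatever name you are searching for:

import psutil  # pip install psutil

procname = "app_name"  # placeholder

# attrs prefetches the requested fields; unreadable values come back as None
pids = [
    p.info["pid"]
    for p in psutil.process_iter(attrs=["pid", "name", "cmdline"])
    if p.info["name"] == procname or procname in " ".join(p.info["cmdline"] or [])
]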
And there is also os.getpid() to return the current process id.