Invoking tasklist /m with subprocess hangs - Python

Here is something curious which I cannot explain. Consider this code segment:
import subprocess
command = ['tasklist.exe', '/m', 'imm32.dll']
p = subprocess.Popen(command, stdout=subprocess.PIPE)
out, err = p.communicate()
The call to communicate() hung and I had to break out with Ctrl+C. Afterwards the tasklist process was still running, and I had to kill it with taskkill.exe. Take a look at the behavior of tasklist.exe in the table below:
| Command | Behavior |
|-------------------------------------|----------|
| ['tasklist.exe', '/m', 'imm32.dll'] | Hung |
| ['tasklist.exe', '/m'] | Hung |
| ['tasklist.exe'] | OK |
It seems that when the /m flag is present, the process does not return; it just runs forever. I also tried the following, and it hung as well:
os.system('tasklist.exe /m imm32.dll')
How can I launch this command and not hang?
Update
It turns out that the communicate() call did not hang, but took up to 10 minutes to finish, something that takes a couple of seconds when I run the command from the cmd.exe prompt. I believe this has to do with buffering, but I have not found a way to make it finish faster.
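For what it's worth, a minimal sketch (an assumption, not a confirmed fix) that reads the output line by line as it arrives; it won't make tasklist any faster, but it shows whether the command is actually producing output or is genuinely stuck:
import subprocess
command = ['tasklist.exe', '/m', 'imm32.dll']
# Merge stderr into stdout and read incrementally instead of waiting for communicate()
p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                     universal_newlines=True)
for line in p.stdout:
    print(line.rstrip())
p.wait()
print('exit code:', p.returncode)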

Related

Kill application in linux using python

I need some help with killing an application in Linux.
As a manual process I can use the command ps -ef | grep "app_name" | awk '{print $2}'.
It gives me the job IDs, and then I kill them with kill -9 jobid.
I want a Python script that can do this task.
I have written this code:
import os
os.system("ps -ef | grep app_name | awk '{print $2}'")
This collects the job IDs, but what os.system() actually returns is an "int" (the exit status), so I am not able to kill the application.
Can you please help here?
Thank you
import os
import subprocess
temp = subprocess.run("ps -ef | grep 'app_name' | awk '{print $2}'", stdin=subprocess.PIPE, shell=True, stdout=subprocess.PIPE)
job_ids = temp.stdout.decode("utf-8").strip().split("\n")
# sample job_ids will be: ['59899', '68977', '68979']
# convert them to integers
job_ids = list(map(int, job_ids))
# job_ids = [59899, 68977, 68979]
Then iterate through the job IDs and kill them with os.kill():
for job_id in job_ids:
    os.kill(job_id, 9)
subprocess.run() documentation: https://docs.python.org/3/library/subprocess.html#subprocess.run
To kill a process in Python, call os.kill(pid, sig), with sig = 9 (signal number for SIGKILL) and pid = the process ID (PID) to kill.
To get the process ID, use os.popen instead of os.system above. Alternatively, use subprocess.Popen(..., stdout=subprocess.PIPE). In both cases, call the .readline() method, and convert the return value of that to an integer with int(...).
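A minimal sketch of that suggestion, using the pipeline from the question (grep -v grep is added here so the grep process itself is not matched):
import os
# os.popen gives a file-like object, so the first PID can be read with readline()
pipe = os.popen("ps -ef | grep 'app_name' | grep -v grep | awk '{print $2}'")
pid = int(pipe.readline())
pipe.close()
os.kill(pid, 9)  # 9 = SIGKILL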

Why does subprocess keep running after communicate() is finished?

I have an older Python 2.7.5 script which suddenly causes problems on Red Hat Enterprise Linux Server release 7.6 (Maipo). As far as I can see, it runs fine on Red Hat Enterprise Linux Server release 7.4 (Maipo).
The script basically implements something like
cat /proc/cpuinfo | grep -m 1 -i 'cpu MHz'
by creating two subprocesses and piping the output of the first into the second (see code example below). On the newer OS version, the cat processes stay open until the script terminates.
It seems that the pipe to grep somehow holds the cat process open, and I can't find any documentation on how to explicitly close it.
The issue can be reproduced by pasting this code into the python CLI and then checking the ps process list for a static process 'cat /proc/cpuinfo'.
The code is breaking down what's originally happening inside a loop, so please don't argue about its style. ;-)
import shlex
from subprocess import *
cmd1 = "cat /proc/cpuinfo"
cmd2 = "grep -m 1 -i 'cpu MHz'"
args1 = shlex.split(cmd1) # split into args
args2 = shlex.split(cmd2) # split into args
# first process uses default stdin
ps1 = Popen(args1, stdout=PIPE)
# then use the output of the previous process as stdin
ps2 = Popen(args2, stdin=ps1.stdout, stdout=PIPE)
out, err = ps2.communicate()
print(out)
Afterwards check the process list in a second session(!) with:
ps -eF |grep -v grep|grep /proc/cpuinfo
On RHEL7.4 I find no open process in the process list, whereas on RHEL 7.6 after some attempts it looks like this:
[reinski#myhost ~]$ ps -eF |grep -v grep|grep /proc/cpuinfo
reinski 2422 89459 0 26993 356 142 18:46 pts/3 00:00:00 cat /proc/cpuinfo
reinski 2597 139605 0 26993 352 31 18:39 pts/3 00:00:00 cat /proc/cpuinfo
reinski 7809 139605 0 26993 352 86 18:03 pts/3 00:00:00 cat /proc/cpuinfo
These processes only disappear when I close the python CLI, in which case I get errors like this (I left the formatting messed up as it was):
cat: write error: Broken pipe
cat: write errorcat: write error: Broken pipe
: Broken pipe
Why is cat obviously still wanting to write to the pipe, even though it should have already output the whole /proc/cpuinfo and should have terminated itself?
Or more important: How can I prevent this from happening?
Thanks for any help!
Example 2:
Given the suggestion from VPfB, it turned out that my example was a little unlucky, since the expected result can be achieved by a single grep command.
So here is a modified example to show the problem with piping in another way:
import shlex
from subprocess import *
cmd1 = "grep -m 1 -i 'cpu MHz' /proc/cpuinfo"
cmd2 = "awk '{print $4}'"
args1 = shlex.split(cmd1) # split into args
args2 = shlex.split(cmd2) # split into args
# first process uses default stdin
ps1 = Popen(args1, stdout=PIPE)
# then use the output of the previous process as stdin
ps2 = Popen(args2, stdin=ps1.stdout, stdout=PIPE)
out, err = ps2.communicate()
print(out)
This time, the result is a single zombie process for the grep process (169731 is the pid of the python session):
[reinski#myhost ~]$ ps -eF|grep 169731
reinski 169731 189499 0 37847 6024 198 17:51 pts/2 00:00:00 python
reinski 193999 169731 0 0 0 142 17:53 pts/2 00:00:00 [grep] <defunct>
So, is this just another symptom of the same problem or am I doing something completely wrong here?
OK, it seems I just found a solution for the zombie processes left over from the examples:
I simply need to do a
ps1.communicate()
It seems this is required to close the pipe properly.
I'd expect this to happen when the second process's communicate() is called and it reads the pipe from the first process.
Can someone maybe point out to me, what I am missing here?
I am always willing to learn... ;-)
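For reference, the pipeline recipe in the subprocess documentation closes ps1.stdout in the parent once it has been handed to the second process; a sketch of the first example rewritten that way:
from subprocess import Popen, PIPE
ps1 = Popen(["cat", "/proc/cpuinfo"], stdout=PIPE)
ps2 = Popen(["grep", "-m", "1", "-i", "cpu MHz"], stdin=ps1.stdout, stdout=PIPE)
ps1.stdout.close()  # allow cat to receive SIGPIPE if grep exits first
out, err = ps2.communicate()
ps1.wait()  # reap cat so it does not linger
print(out)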

Kill a running subprocess with a button click

I'm running strandtest.py to power a NeoPixel matrix on a Django server.
A button click runs the program, and I'd like a second button click to kill it.
I've read a bunch of different solutions but I am having trouble following the instructions. Is there another subprocess command I can execute while the program runs?
This is how I start the program
import subprocess
def test(request):
    sp = subprocess.Popen('sudo python /home/pi/strandtest.py', shell=True)
    return HttpResponse()
I want to kill the process and clear the matrix.
You can try running the command below with subprocess.Popen:
kill -9 `ps aux | grep /home/pi/strandtest.py | awk -F' ' '{print $2}'`
kill -9 - kills your process
ps aux combined with grep - finds only the process you need
awk - gets only the ID of that process
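A sketch of that wired into a second Django view (the view name is a placeholder, and grep -v grep is added so the kill does not target the grep itself):
import subprocess
from django.http import HttpResponse

def stop(request):
    # Kill every process whose command line mentions strandtest.py
    # (may itself need sudo, since strandtest.py was started with sudo)
    subprocess.Popen(
        "kill -9 `ps aux | grep /home/pi/strandtest.py | grep -v grep | awk -F' ' '{print $2}'`",
        shell=True)
    return HttpResponse()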

Why subprocess.Popen returncode differs for similar commands with bash

Why
import subprocess
p = subprocess.Popen(["/bin/bash", "-c", "timeout -s KILL 1 sleep 5 2>/dev/null"])
p.wait()
print(p.returncode)
returns
[stderr:] /bin/bash: line 1: 963663 Killed timeout -s KILL 1 sleep 5 2> /dev/null
[stdout:] 137
when
import subprocess
p = subprocess.Popen(["/bin/bash", "-c", "timeout -s KILL 1 sleep 5"])
p.wait()
print(p.returncode)
returns
[stdout:] -9
If you change bash to dash, you'll get 137 in both cases. I know that -9 is the KILL signal number and 137 is 128 + 9, but it seems weird for similar code to get different return codes.
This happens on Python 2.7.12 and Python 3.4.3.
It looks like Popen.wait() does not call Popen._handle_exitstatus (https://github.com/python/cpython/blob/3.4/Lib/subprocess.py#L1468) when using /bin/bash, but I could not figure out why.
This is due to how bash executes timeout, depending on whether redirection, pipes, or any other bash features are involved:
With redirection
python starts bash
bash starts timeout, monitors the process and does pipe handling.
timeout transfers itself into a new process group and starts sleep
After one second, timeout sends SIGKILL into its process group
As the process group died, bash returns from waiting for timeout, sees the SIGKILL and prints the message pasted above to stderr. It then sets its own exit status to 128+9 (a behaviour simulated by timeout).
Without redirection
python starts bash.
bash sees that it has nothing to do on its own and calls execve() to effectively replace itself with timeout.
timeout acts as above, the whole process group dies with SIGKILL.
python gets an exit status of 9 and does some mangling to turn this into -9 (SIGKILL)
In other words, without redirection/pipes/etc. bash withdraws itself from the call-chain. Your second example looks like subprocess.Popen() is executing bash, yet effectively it does not. bash is no longer there when timeout does its deed, which is why you don't get any messages and an unmangled exit status.
If you want consistent behaviour, use timeout --foreground; you'll get an exit status of 124 in both cases.
I don't know about dash, yet I suppose it does not do any execve() trickery to replace itself with the only program it's executing. Therefore you always see the mangled exit status of 128+9 in dash.
Update: zsh shows the same behaviour, but it drops out even for simple redirections such as timeout -s KILL 1 sleep 5 >/tmp/foo and the like, giving you an exit status of -9. timeout -s KILL 1 sleep 5 && echo $? will give you status 137 in zsh as well.
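If the goal is just to detect the kill regardless of which shell did the mangling, a small normalisation sketch (128+N is the usual shell encoding, a negative returncode is how subprocess itself reports death by signal):
import signal
import subprocess

p = subprocess.Popen(["/bin/bash", "-c", "timeout -s KILL 1 sleep 5"])
p.wait()
rc = p.returncode
if rc < 0:
    sig = -rc        # subprocess saw the child die from signal N and reports -N
elif rc > 128:
    sig = rc - 128   # the shell saw the signal and encoded it as 128+N
else:
    sig = None       # normal exit
print(rc, sig == signal.SIGKILL)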

Get parent and child process IDs from a process ID in Python

I am trying to get the ppid of the process that I want.
I used the following code to get the PID:
import subprocess
proc = subprocess.Popen('ps -ae | grep ruby', shell=True, stdout=subprocess.PIPE)
output=proc.communicate()[0]
str = output.split()
Now in str[0] I have the PID of the process (say ruby). I want to get the parent process ID (PPID) and the child process IDs of the same process.
I need this solution to run on Solaris as well as Red Hat Enterprise Linux 6.0.
Is there any way to get those, like getppid() and getchildid()? Or do I need to do it with a grep command again and split the output?
Using this code is a bad idea; it will not work on Solaris.
You can use the psutil library; that way you keep your code independent of the OS.
https://github.com/giampaolo/psutil
import psutil

p = psutil.Process(7055)
parent_pid = p.ppid()
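psutil also exposes the rest of the process tree, so parent and children can be read from the same kind of object (7055 is just the example PID from above):
import psutil

p = psutil.Process(7055)
print(p.ppid())                       # parent PID
print(p.parent())                     # parent as a psutil.Process
print([c.pid for c in p.children()])  # PIDs of the direct children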
I presume there's nothing wrong with os.getppid().
Shrug.
http://docs.python.org/3/library/os.html#process-parameters
The answer depends on your system's ps command. On Linux, ps will include the PPID for each process with the -l flag (among others), so ps -ale | grep ruby will include the ruby process id in str[3] and ruby's PPID in str[4].
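A sketch of that on Linux (grep -v grep is added so the grep process itself is skipped; the column positions differ on other systems such as Solaris):
import subprocess

proc = subprocess.Popen('ps -ale | grep ruby | grep -v grep', shell=True, stdout=subprocess.PIPE)
output = proc.communicate()[0].decode()
for line in output.splitlines():
    cols = line.split()
    # in "ps -l" output on Linux, cols[3] is the PID and cols[4] is the PPID
    print('pid:', cols[3], 'ppid:', cols[4])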
