I am currently trying to write (in Python 2.7.3) a kind of wrapper for GDB that will allow me to dynamically switch from scripted input to interactive communication with GDB.
So far I use
self.process = subprocess.Popen(["gdb vuln"], stdin = subprocess.PIPE, shell = True)
to start gdb within my script. (vuln is the binary I want to examine)
Since a key feature of GDB is to pause execution of the attached process on receiving SIGINT (Ctrl+C) and let the user inspect registers and memory, I need some way to pass a SIGINT signal to it.
Neither
self.process.send_signal(signal.SIGINT)
nor
os.kill(self.process.pid, signal.SIGINT)
or
os.killpg(self.process.pid, signal.SIGINT)
work for me.
When I use one of these functions, there is no response. I suppose this problem arises from the use of shell=True, but at this point I am really out of ideas.
Even my old friend Google couldn't really help me out this time, so maybe you can help me. Thanks in advance.
Cheers, Mike
Here is what worked for me:
import signal
import subprocess
try:
    p = subprocess.Popen(...)
    p.wait()
except KeyboardInterrupt:
    p.send_signal(signal.SIGINT)
    p.wait()
I looked deeper into the problem and found some interesting things. Maybe these findings will help someone in the future.
When calling gdb vuln using subprocess.Popen(), it does in fact create three processes, where the pid returned is the one of sh (5180).
ps -a
5180 pts/0 00:00:00 sh
5181 pts/0 00:00:00 gdb
5183 pts/0 00:00:00 vuln
Consequently, sending a SIGINT to that pid will in fact send SIGINT to sh.
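As a side note, a minimal sketch of one way to remove the sh layer entirely (my assumption, not something I have verified in the wrapper yet): pass the command as a list and drop shell=True, so that Popen.pid refers to gdb itself:
import signal
import subprocess

# Without shell=True there is no intermediate /bin/sh, so process.pid is gdb's pid.
process = subprocess.Popen(["gdb", "vuln"], stdin=subprocess.PIPE)

# A signal sent via the Popen object now goes to gdb itself rather than to sh.
# Whether gdb then interrupts vuln still depends on the ptrace behaviour described below.
process.send_signal(signal.SIGINT)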
Besides, I continued looking for an answer and stumbled upon this post
https://bugzilla.kernel.org/show_bug.cgi?id=9039
To keep it short, what is mentioned there is the following:
When pressing Ctrl+C while using gdb normally, SIGINT is in fact sent to the examined program (in this case vuln); ptrace then intercepts it and passes it to gdb.
This means that if I use self.process.send_signal(signal.SIGINT), it will never reach gdb this way.
Temporary Workaround:
I managed to work around this problem by simply calling subprocess.Popen() as follows:
subprocess.Popen("killall -s INT " + self.binary, shell = True)
This is nothing more than a first workaround: when multiple applications with the same name are running, it might do some serious damage. Besides, it somehow fails if shell=True is not set.
If someone has a better fix (e.g. how to get the pid of the process started by gdb), please let me know.
Cheers, Mike
EDIT:
Thanks to Mark for pointing out that I should look at the ppid of the process.
I managed to narrow down the process to which SIGINT has to be sent using the following approach:
out = subprocess.check_output(['ps', '-Aefj'])
for line in out.splitlines():
    if self.binary in line:
        l = line.split(" ")
        while "" in l:
            l.remove("")
        # Get sid and pgid of the child process (/bin/sh)
        sid = os.getsid(self.process.pid)
        pgid = os.getpgid(self.process.pid)
        # Only true for the target process: same session, but different process group
        if l[4] == str(sid) and l[3] != str(pgid):
            os.kill(int(l[1]), signal.SIGINT)  # l[1] is the PID column of ps -Aefj
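If installing a third-party package is an option, psutil (not part of the standard library) can walk the children of the shell instead of parsing ps output. A rough sketch, assuming the process name of the debuggee matches self.binary:
import signal
import psutil  # third-party package ('pip install psutil'), not in the standard library

def interrupt_debuggee(shell_pid, binary_name):
    # Walk every descendant of the /bin/sh wrapper that Popen(..., shell=True) started
    # and send SIGINT to the one whose process name matches the debugged binary.
    parent = psutil.Process(shell_pid)
    for child in parent.children(recursive=True):
        if child.name() == binary_name:
            child.send_signal(signal.SIGINT)

# Hypothetical usage with the attributes from the snippet above:
# interrupt_debuggee(self.process.pid, self.binary)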
I have done something like the following in the past, and if I recollect correctly it seemed to work for me:
import os
import subprocess

def detach_process_group():
    os.setpgrp()

subprocess.Popen(command,
                 stdout=subprocess.PIPE,
                 stderr=subprocess.PIPE,
                 preexec_fn=detach_process_group)
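Once the child runs in its own process group like this, the whole group (the command and anything it spawns) can be interrupted in one go. A small, self-contained sketch, with "some_command" standing in for the real command:
import os
import signal
import subprocess

def detach_process_group():
    os.setpgrp()

p = subprocess.Popen(["some_command"],          # hypothetical command
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     preexec_fn=detach_process_group)

# Later: interrupt every process in the child's group (its pgid equals its pid
# because of the os.setpgrp() call in preexec_fn).
os.killpg(os.getpgid(p.pid), signal.SIGINT)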
Related
I'm launching a number of subprocesses with subprocess.Popen in Python.
I'd like to check whether one such process has completed. I've found two ways of checking the status of a subprocess, but both seem to force the process to complete.
One is using process.communicate() and printing the returncode, as explained here: checking status of process with subprocess.Popen in Python.
Another is simply calling process.wait() and checking that it returns 0.
Is there a way to check if a process is still running without waiting for it to complete if it is?
Question: ... a way to check if a process is still running ...
You can do it, for instance, like this:
p = subprocess.Popen(...)

# A None value indicates that the process hasn't terminated yet.
poll = p.poll()
if poll is None:
    # The subprocess is still alive.
    pass
Python 3.6.1 documentation: Popen objects
Tested with Python 3.4.2.
Doing
myProcessIsRunning = p.poll() is None
as suggested by the main answer is the recommended and simplest way to check whether a process is running (and it works in Jython as well).
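For instance, a self-contained sketch (sleep is only a stand-in for the real command):
import subprocess
import time

p = subprocess.Popen(["sleep", "2"])      # stand-in for any long-running command

while p.poll() is None:                   # None means it has not terminated yet
    print("still running...")
    time.sleep(0.5)

print("finished with return code %s" % p.returncode)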
If you do not have the process instance in hand to check it, then use the operating system's TaskList / ps commands.
On Windows, my command looks as follows:
filterByPid = "PID eq %s" % pid
pidStr = str(pid)
commandArguments = ['cmd', '/c', "tasklist", "/FI", filterByPid, "|", "findstr", pidStr ]
This is essentially doing the same thing as the following command line:
cmd /c "tasklist /FI "PID eq 55588" | findstr 55588"
And on Linux, I do exactly the same using:
pidStr = str(pid)
commandArguments = ['ps', '-p', pidStr ]
The ps command already returns exit code 0 or 1 depending on whether the process was found, while on Windows you need the findstr command.
This is the same approach that is discussed on the following stack overflow thread:
Verify if a process is running using its PID in JAVA
NOTE:
If you use this approach, remember to wrap your command call in a try/except:
try:
    foundRunningProcess = subprocess.check_output(argumentsArray, **kwargs)
    return True
except Exception as err:
    return False
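Putting the pieces together, here is a rough cross-platform sketch of the idea (it simply mirrors the tasklist/findstr and ps fragments above; treat it as a starting point rather than a finished implementation):
import os
import subprocess

def is_pid_running(pid):
    # Check the OS process table for the pid: tasklist piped through findstr on
    # Windows, ps -p elsewhere; any failure (non-zero exit) is treated as "not running".
    pid_str = str(pid)
    if os.name == 'nt':
        arguments = ['cmd', '/c', 'tasklist', '/FI', 'PID eq %s' % pid_str,
                     '|', 'findstr', pid_str]
    else:
        arguments = ['ps', '-p', pid_str]
    try:
        subprocess.check_output(arguments)
        return True
    except Exception:
        return False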
Note: be careful if you are developing with VS Code and using pure Python and Jython.
In my environment, I was under the illusion that the poll() method did not work, because a process that I suspected must have ended was in fact still running.
This process had launched WildFly, and after I had asked WildFly to stop, the shell was still waiting for the user to "Press any key to continue . . .".
In order to finish off this process, in pure Python the following code worked:
process.stdin.write(os.linesep)
On Jython, I had to change this code to look as follows:
print >>process.stdin, os.linesep
With this difference, the process did indeed finish, and poll() under Jython started reporting that the process had terminated.
As suggested by the other answers, None is the designed placeholder for the return code when no code has been returned yet by the subprocess.
The documentation for the returncode attribute backs this up (emphasis mine):
The child return code, set by poll() and wait() (and indirectly by communicate()). A None value indicates that the process hasn’t terminated yet.
A negative value -N indicates that the child was terminated by signal N (POSIX only).
An interesting place where this None value occurs is when using the timeout parameter for wait or communicate.
If the process does not terminate after timeout seconds, a TimeoutExpired exception will be raised.
If you catch that exception and check the returncode attribute, it will indeed be None:
import subprocess

with subprocess.Popen(['ping', '127.0.0.1']) as p:
    try:
        p.wait(timeout=3)
    except subprocess.TimeoutExpired:
        assert p.returncode is None
If you look at the source for subprocess you can see the exception being raised.
https://github.com/python/cpython/blob/47be7d0108b4021ede111dbd15a095c725be46b7/Lib/subprocess.py#L1930-L1931
If you search that source for self.returncode is, you'll find many places where the library authors lean on that None return-code design to infer whether an app is running or not. The returncode attribute is initialized to None and only ever changes in a few spots, mainly through invocations of _handle_exitstatus, which passes on the actual return code.
You could use subprocess.check_output to have a look at your output.
Try this code:
import subprocess
subprocess.check_output('your command here', shell=True, stderr=subprocess.STDOUT)
Hope this helped!
I am working on some scripts (in the company I work in) that are loaded/unloaded into hypervisors to fire a piece of code when an event occurs. The only way to actually unload a script is to hit Ctrl-C. I am writing a function in Python that automates the process.
As soon as it sees the string "done" in the output of the program, it should kill the vprobe.
I am using subprocess.Popen to execute the command:
lineList = buff.readlines()
cmd = "vprobe /vprobe/myhello.emt"
p = subprocess.Popen(args=cmd, shell=True, stdout=buff, universal_newlines=True, preexec_fn=os.setsid)
while not re.search("done", lineList[-1]):
    print "waiting"
os.kill(p.pid, signal.CTRL_C_EVENT)
As you can see, I am writing the output to the buff file descriptor, which is opened in read+write mode. I check the last line; if it has 'done', I kill the process. Unfortunately, CTRL_C_EVENT is only valid on Windows.
What can I do for Linux?
I think you can just send the Linux equivalent, signal.SIGINT (the interrupt signal).
(Edit: I used to have something here discouraging the use of this strategy for controlling subprocesses, but on more careful reading it sounds like you've already decided you need control-C in this specific case... So, SIGINT should do it.)
On Linux, a Ctrl-C keyboard interrupt can be sent programmatically to a process using the Popen.send_signal(signal.SIGINT) function. For example:
import subprocess
import signal
..
process = subprocess.Popen(..)
..
process.send_signal(signal.SIGINT)
..
Don't use Popen.communicate() for blocking commands.
Maybe I misunderstand something, but the way you are doing it makes it difficult to get the desired result.
Whatever buff is, you query it first, then use it in the context of Popen(), and then you hope that lineList magically fills itself up.
What you probably want is something like
import os
import re
import signal
import subprocess

logfile = open("mylogfile", "a")
p = subprocess.Popen(['vprobe', '/vprobe/myhello.emt'], stdout=subprocess.PIPE,
                     universal_newlines=True, preexec_fn=os.setsid)
for line in p.stdout:
    logfile.write(line)
    if re.search("done", line):
        break
    print "waiting"
os.kill(p.pid, signal.CTRL_C_EVENT)  # Windows-only; on Linux see the note below
This gives you a pipe end fed by your vprobe script, which you can read line by line and act upon appropriately.
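On Linux there is no CTRL_C_EVENT; since the child was started with preexec_fn=os.setsid it leads its own session and process group, so one option (a sketch, not tested against vprobe) is to interrupt the whole group instead:
import os
import signal

# Deliver SIGINT to every process in the child's session/process group.
os.killpg(os.getpgid(p.pid), signal.SIGINT)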
I am trying to detect when an installation program finishes executing from within a Python script. Specifically, the application is the Oracle 10gR2 Database. Currently I am using the subprocess module with Popen. Ideally, I would simply use the wait() method to wait for the installation to finish executing; however, the documented command actually spawns child processes to handle the actual installation. Here is a sample of the failing code:
import subprocess

OUI_DATABASE_10GR2_SUBPROCESS = ['sudo',
                                 '-u',
                                 'oracle',
                                 os.path.join(DATABASE_10GR2_TMP_PATH,
                                              'database',
                                              'runInstaller'),
                                 '-ignoreSysPrereqs',
                                 '-silent',
                                 '-noconfig',
                                 '-responseFile ' + ORACLE_DATABASE_10GR2_SILENT_RESPONSE]
oracle_subprocess = subprocess.Popen(OUI_DATABASE_10GR2_SUBPROCESS)
oracle_subprocess.wait()
There is a similar question here: Killing a subprocess including its children from python, but the selected answer does not address the children issue; instead, it instructs the user to call the application to wait for directly. I am looking for a specific solution that will wait for all children of the subprocess. What if there is an unknown number of subprocesses? I will select the answer that addresses the issue of waiting for all child subprocesses to finish.
More clarity on the failure: the child processes continue executing after the wait() call, since that call only waits for the top-level process (in this case, sudo). Here is a simple diagram of the known child processes in this problem:
Python subprocess module -> Sudo -> runInstaller -> java -> (unknown)
Ok, here is a trick that will work only under Unix. It is similar to one of the answers to this question: Ensuring subprocesses are dead on exiting Python program. The idea is to create a new process group. You can then wait for all processes in the group to terminate.
pid = os.fork()
if pid == 0:
    # Child: become a process group leader, run the installer, wait for it, exit.
    os.setpgrp()
    oracle_subprocess = subprocess.Popen(OUI_DATABASE_10GR2_SUBPROCESS)
    oracle_subprocess.wait()
    os._exit(0)
else:
    # Parent: a negative pid means "wait for children in that process group".
    os.waitpid(-pid, 0)
I have not tested this. It creates an extra subprocess to be the leader of the process group, but avoiding that is (I think) quite a bit more complicated.
I found this web page to be helpful as well. http://code.activestate.com/recipes/278731-creating-a-daemon-the-python-way/
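As a hedged variant of the same idea without the extra fork (untested; the command list below is only a stand-in for OUI_DATABASE_10GR2_SUBPROCESS): start the installer in its own process group via preexec_fn, then poll the group with signal 0 until it has no members left, assuming its children stay in that group:
import errno
import os
import signal
import subprocess
import time

# Hypothetical command list standing in for OUI_DATABASE_10GR2_SUBPROCESS.
proc = subprocess.Popen(['sudo', '-u', 'oracle', './runInstaller', '-silent'],
                        preexec_fn=os.setpgrp)   # installer becomes its own process group leader
pgid = os.getpgid(proc.pid)
proc.wait()                                      # reap the direct child

# Signal 0 delivers nothing; it only checks whether the group still has members,
# so this loop ends once every descendant in the group has exited.
while True:
    try:
        os.killpg(pgid, 0)
    except OSError as e:
        if e.errno == errno.ESRCH:               # no such process group left
            break
        if e.errno == errno.EPERM:               # group still exists, owned by another user
            pass
        else:
            raise
    time.sleep(1)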
You can just use os.waitpid with the pid set to -1; this waits for child processes of the current process until they finish:
import os
import sys
import subprocess

proc = subprocess.Popen([sys.executable,
                         '-c',
                         'import subprocess;'
                         'subprocess.Popen("sleep 5", shell=True).wait()'])
pid, status = os.waitpid(-1, 0)
print pid, status
This is the output of pstree <pid>, showing the different subprocesses that were forked:
python───python───sh───sleep
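Note that a single os.waitpid(-1, 0) call returns as soon as one direct child exits (grandchildren such as the sh and sleep above are reaped by their own parent); to keep reaping until no direct children are left, a loop like this can be used (a sketch):
import errno
import os

# Reap every direct child of the current process; os.waitpid(-1, 0) returns
# after just one child exits, so loop until ECHILD says none are left.
while True:
    try:
        pid, status = os.waitpid(-1, 0)
        print pid, status
    except OSError as e:
        if e.errno == errno.ECHILD:   # no more children to wait for
            break
        raise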
Hope this can help :)
Check out the following link http://www.oracle-wiki.net/startdocsruninstaller which details a flag you can use for the runInstaller command.
This flag is definitely available for 11gR2, but I have not got a 10g database to try out this flag for the runInstaller packaged with that version.
Regards
Everywhere I look seems to say it's not possible to solve this in the general case. I've whipped up a library called 'pidmon' that combines some answers for Windows and Linux and might do what you need.
I'm planning to clean this up and put it on github, possibly called 'pidmon' or something like that. I'll post a link if/when I get it up.
EDIT: It's available at http://github.com/dbarnett/python-pidmon.
I made a special waitpid function that accepts a graft_func argument so that you can loosely define what sort of processes you want to wait for when they're not direct children:
import pidmon
pidmon.waitpid(oracle_subprocess.pid, recursive=True,
               graft_func=(lambda p: p.name == '???' and p.parent.pid == ???))
or, as a shotgun approach, to just wait for any processes started since the call to waitpid to stop again, do:
import pidmon
pidmon.waitpid(oracle_subprocess.pid, graft_func=(lambda p: True))
Note that this is still barely tested on Windows and seems very slow on Windows (but did I mention it's on github where it's easy to fork?). This should at least get you started, and if it works at all for you, I have plenty of ideas on how to optimize it.
Prior to this, I ran two commands in a for loop, like:
for x in $set:
    command
In order to save time, I want to run these two commands at the same time, like the parallel method in a makefile.
Thanks
Lyn
The threading module won't give you much performance-wise because of the Global Interpreter Lock.
I think the best way to do this is to use the subprocess module and open each command with its own stdout.
import select
import subprocess

processes = {}
for cmd in ['cmd1', 'cmd2', 'cmd3']:
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    processes[p.stdout] = p

while len(processes):
    rfds, _, _ = select.select(processes.keys(), [], [])
    for fd in rfds:
        process = processes[fd]
        print fd.read()
        if process.poll() is not None:
            print "Process {0} returned with code {1}".format(process.pid, process.returncode)
            del processes[fd]
You basically have to use select to see which file descriptors are ready and you have to check their returncode to see if doing a "read" caused them to exit. Processes basically go into a wait state until their stdout is closed. If you would like to do some things while you're waiting, you can put a timeout on select.select() so you'll stop waiting after so long. You can test the length of rfds and if it is 0 then you know that the timeout happened.
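For example, the timeout variant could look like this (a sketch reusing the processes dict from the snippet above):
import select

# Wait at most 5 seconds; an empty rfds means the timeout expired and nothing is readable yet.
rfds, _, _ = select.select(processes.keys(), [], [], 5.0)
if not rfds:
    print "no output for 5 seconds, still waiting..."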
The twisted or select module is probably what you're after.
If all you want to do is run a bunch of batch commands, a shell script, i.e.
#!/bin/sh
for i in command1 command2 command3; do
    $i &
done
Might work better. Alternately, a Makefile like you said.
Look at the threading module.
I want to capture stdout from a long-ish running process started via subprocess.Popen(...) so I'm using stdout=PIPE as an arg.
However, because it's a long running process I also want to send the output to the console (as if I hadn't piped it) to give the user of the script an idea that it's still working.
Is this at all possible?
Cheers.
The buffering your long-running sub-process is probably performing will make your console output jerky and very bad UX. I suggest you consider instead using pexpect (or, on Windows, wexpect) to defeat such buffering and get smooth, regular output from the sub-process. For example (on just about any unix-y system, after installing pexpect):
>>> import sys, pexpect
>>> child = pexpect.spawn('/bin/bash -c "echo ba; sleep 1; echo bu"', logfile=sys.stdout); x=child.expect(pexpect.EOF); child.close()
ba
bu
>>> child.before
'ba\r\nbu\r\n'
The ba and bu will come with the proper timing (about a second between them). Note the output is not subject to normal terminal processing, so the carriage returns are left in there -- you'll need to post-process the string yourself (just a simple .replace!-) if you need \n as end-of-line markers (the lack of processing is important just in case the sub-process is writing binary data to its stdout -- this ensures all the data's left intact!-).
S. Lott's comment points to Getting realtime output using subprocess and Real-time intercepting of stdout from another process in Python
I'm curious that Alex's answer here is different from his answer 1085071.
My simple little experiments with the answers in the two other referenced questions have given good results...
I went and looked at wexpect as per Alex's answer above, but I have to say that reading the comments in the code did not leave me with a very good feeling about using it.
I guess the meta-question here is when will pexpect/wexpect be one of the Included Batteries?
Can you simply print it as you read it from the pipe?
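A minimal sketch of that idea (ping is only a stand-in for the long-running command): read the pipe line by line, echo each line to the console, and keep a copy for the script:
import subprocess
import sys

p = subprocess.Popen(["ping", "-c", "3", "127.0.0.1"],   # stand-in for the real command
                     stdout=subprocess.PIPE,
                     universal_newlines=True)

captured = []
for line in p.stdout:          # read lines as the process produces them
    sys.stdout.write(line)     # echo to the console right away
    captured.append(line)      # and keep a copy for later use
p.wait()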
Inspired by the pty.openpty() suggestion somewhere above; tested on Python 2.6 and Linux. Publishing it since it took a while to make this work properly, without buffering...
import os
import sys

def call_and_peek_output(cmd, shell=False):
    import pty, subprocess
    master, slave = pty.openpty()
    p = subprocess.Popen(cmd, shell=shell, stdin=None, stdout=slave, close_fds=True)
    os.close(slave)
    line = ""
    while True:
        try:
            ch = os.read(master, 1)
        except OSError:
            # We get this exception when the spawned process closes all references to the
            # pty descriptor which we passed to it to use for stdout
            # (typically when it and its children exit)
            break
        line += ch
        sys.stdout.write(ch)
        if ch == '\n':
            yield line
            line = ""
    if line:
        yield line
    ret = p.wait()
    if ret:
        raise subprocess.CalledProcessError(ret, cmd)

for l in call_and_peek_output("ls /", shell=True):
    pass
Alternatively, you can pipe your process into tee and capture only one of the streams.
Something along the lines of sh -c 'process interesting stuff' | tee /dev/stderr.
Of course, this only works on Unix-like systems.
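For instance, something along these lines (the command itself is a placeholder): tee duplicates the output to /dev/stderr, which is normally the terminal, while the pipe still hands a copy to Python:
import subprocess

# 'some_long_command' stands in for the real command; tee echoes the output to
# /dev/stderr while the pipe still delivers it to Python for capture.
p = subprocess.Popen("some_long_command | tee /dev/stderr",
                     shell=True, stdout=subprocess.PIPE)
captured, _ = p.communicate()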