I want to make a simple Windows executable-loading program,
implemented simply with os.system('./calc.exe') in Python,
or WinExec(...) / CreateProcess(...) in the Windows API...
This would be a VERY simple and easy task.
However, I want to receive the detailed error report if my child process crashes.
I know I can get the numeric error code as the return value of
functions such as subprocess.call() in Python, or something similar...
But when a Windows binary crashes, I can see the detailed error report,
which contains the name of the crashed module, the violation code (0xC0000005, etc.),
the offset within the crashed module, the time, and so on...
How can I get this information from the parent process, and what would be the easiest and simplest way to implement it?
Thank you in advance.
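One way to at least detect the crash from the parent (a minimal sketch, assuming Python on Windows and that the child, here calc.exe as in the question, is on PATH) is to inspect the exit code: a crashed Windows process normally exits with the NTSTATUS exception code (0xC0000005 for an access violation), and Popen exposes that value via returncode. The richer report (faulting module name, offset, time) comes from Windows Error Reporting or a minidump, which this sketch does not cover.
import subprocess

STATUS_ACCESS_VIOLATION = 0xC0000005

proc = subprocess.Popen(['calc.exe'])  # hypothetical child binary
proc.wait()
code = proc.returncode & 0xFFFFFFFF    # view the exit status as an unsigned DWORD
if code == STATUS_ACCESS_VIOLATION:
    print('child crashed with an access violation (0x%08X)' % code)
elif code >= 0xC0000000:               # other NTSTATUS error codes also indicate a crash
    print('child crashed with exception code 0x%08X' % code)
else:
    print('child exited with code %d' % code)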
I haven't tested this, but something like this should do the trick:
import logging
import shlex
import subprocess

cmd = "ls -al /directory/that/does/not/exist"  # <- or Windows equivalent
logging.info(cmd)
try:
    process = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as err:
    logging.error(err)
    raise
(stdout, stderr) = process.communicate()
logging.debug(stdout)
if stderr:
    logging.error(stderr)
I have in my root directory
$ cat pssa.py
import subprocess, sys

p = subprocess.Popen(["powershell.exe", ".\\pre-commit.ps1"],
                     stdout=sys.stdout, stderr=sys.stderr, shell=True)
p.communicate()
pre-commit.ps1 returns 1, so it's in error, but
python pssa.py
returns 0.
Forgive the complete lack of Python skills, but I'm stuck. I'd be grateful for suggestions on how python pssa.py can return the error code from the PowerShell script.
I think I read somewhere that Popen does not wait for the script to finish. So is there another method I can use that does wait, and that in turn can read the return code from PowerShell?
Python is installed on Windows. The idea with the above is to be able to use, for example, pre-commit run meaningfully on Windows. Right now, pre-commit run executes the PowerShell script but does not fail as I would like it to.
Popen.communicate waits for the subprocess to finish and fills in Popen.returncode. You can use it like this:
import subprocess, sys

p = subprocess.Popen(["powershell.exe", ".\\pre-commit.ps1"],
                     stdout=sys.stdout, stderr=sys.stderr, shell=True)
outs, errs = p.communicate()
code = p.returncode
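If the goal is for pssa.py itself to exit with that code, so that callers such as pre-commit see the failure, a minimal sketch (assuming the numeric status is all you need to propagate) is to pass it straight on:
sys.exit(code)  # make pssa.py's own exit status mirror the PowerShell script's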
I have the following Python (2.7) code:
import os, subprocess

try:
    FNULL = open(os.devnull, 'w')
    subprocess.check_call(["tar", "-czvf", '/folder/archive.tar.gz', '/folder/some_other_folder'],
                          stdout=FNULL, stderr=subprocess.STDOUT)
except Exception as e:
    print str(e)
The problem I face is that, when there is no more space for the archive, print str(e) prints Command '['tar', '-czvf', '/folder/archive.tar.gz', '/folder/some_other_folder']' returned non-zero exit status 1, which is true, but I want to catch the real error here, that is gzip: write error: No space left on device (I got this error when I ran the same tar command manually). Is that possible somehow? I assume that gzip is another process within tar. Am I wrong? Please keep in mind that upgrading to Python 3 is not possible.
EDIT: I also tried to use subprocess.check_output() and print the contents of e.output, but that also didn't work.
Python 3 solution for sane people
On Python 3, the solution is simple, and you should be using Python 3 for new code anyway (Python 2.7 ended all support nearly a year ago):
The problem is that the program is echoing the error to stderr, so check_output doesn't capture it (either normally, or in the CalledProcessError). The best solution is to use subprocess.run (which check_call/check_output are just a thin wrapper over) and ensure you capture both stdout and stderr. The simplest approach is:
import subprocess
import sys

try:
    subprocess.run(["tar", "-czvf", '/folder/archive.tar.gz', '/folder/some_other_folder'],
                   check=True,
                   stdout=subprocess.DEVNULL,  # ignore stdout
                   stderr=subprocess.PIPE)     # capture stderr so e.stderr is populated if needed
except subprocess.CalledProcessError as e:
    print("tar exited with exit status {}:".format(e.returncode), e.stderr, file=sys.stderr)
Python 2 solution for people who like unsupported software
If you must do this on Python 2, you have to handle it all yourself by manually invoking Popen, as none of the high-level functions available there will cover you (CalledProcessError didn't gain a stderr attribute until 3.5, because no high-level API that raised it was designed to handle stderr at all):
from __future__ import print_function  # because Python 2 print statements are terrible
import os
import subprocess
import sys

with open(os.devnull, 'wb') as f:
    proc = subprocess.Popen(["tar", "-czvf", '/folder/archive.tar.gz', '/folder/some_other_folder'],
                            stdout=f, stderr=subprocess.PIPE)
    _, stderr = proc.communicate()
if proc.returncode != 0:
    print("tar exited with exit status {}:".format(proc.returncode), stderr, file=sys.stderr)
I am currently trying to write (Python 2.7.3) kind of a wrapper for GDB, which will allow me to dynamically switch from scripted input to interactive communication with GDB.
So far I use
self.process = subprocess.Popen(["gdb vuln"], stdin = subprocess.PIPE, shell = True)
to start gdb within my script. (vuln is the binary I want to examine)
Since a key feature of gdb is to pause the execution of the attached process and allow the user to inspect registers and memory on receiving SIGINT (Ctrl+C), I need some way to pass a SIGINT signal to it.
Neither
self.process.send_signal(signal.SIGINT)
nor
os.kill(self.process.pid, signal.SIGINT)
nor
os.killpg(self.process.pid, signal.SIGINT)
work for me.
When I use one of these functions there is no response. I suppose this problem arises from the use of shell=True. However, at this point I am really out of ideas.
Even my old friend Google couldn't really help me out this time, so maybe you can help me. Thanks in advance.
Cheers, Mike
Here is what worked for me:
import signal
import subprocess
try:
    p = subprocess.Popen(...)
    p.wait()
except KeyboardInterrupt:
    p.send_signal(signal.SIGINT)
    p.wait()
I looked deeper into the problem and found some interesting things. Maybe these findings will help someone in the future.
When calling gdb vuln using subprocess.Popen(), it does in fact create three processes, where the pid returned is the one of sh (5180).
ps -a
5180 pts/0 00:00:00 sh
5181 pts/0 00:00:00 gdb
5183 pts/0 00:00:00 vuln
Consequently sending a SIGINT to the process will in fact send SIGINT to sh.
Besides, I continued looking for an answer and stumbled upon this post
https://bugzilla.kernel.org/show_bug.cgi?id=9039
To keep it short, what is mentioned there is the following:
When pressing Ctrl+C while using gdb normally, SIGINT is in fact sent to the program being examined (in this case vuln); ptrace then intercepts it and passes it on to gdb.
What this means is that if I use self.process.send_signal(signal.SIGINT), it will never reach gdb this way.
Temporary Workaround:
I managed to work around this problem by simply calling subprocess.Popen() as follows:
subprocess.Popen("killall -s INT " + self.binary, shell = True)
This is nothing more than a first workaround. When multiple applications with the same name are running, it might do some serious damage. Besides, it somehow fails if shell=True is not set.
If someone has a better fix (e.g. how to get the pid of the process started by gdb), please let me know.
Cheers, Mike
EDIT:
Thanks to Mark for pointing out that I should look at the ppid of the process.
I managed to narrow down the processes to which SIGINT is sent using the following approach:
out = subprocess.check_output(['ps', '-Aefj'])
for line in out.splitlines():
    if self.binary in line:
        l = line.split(" ")
        while "" in l:
            l.remove("")
        # Get sid and pgid of the child process (/bin/sh)
        sid = os.getsid(self.process.pid)
        pgid = os.getpgid(self.process.pid)
        # Only true for the target process (ps -efj columns: UID PID PPID PGID SID ...)
        if l[4] == str(sid) and l[3] != str(pgid):
            os.kill(int(l[1]), signal.SIGINT)  # l[1] is the PID column
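Following Mark's ppid hint, a shorter variant might look like the sketch below. It assumes Linux procps ps (for the --ppid option) and that shell=True is dropped, so the Popen pid belongs to gdb itself and its direct child is the debuggee:
import os
import signal
import subprocess

# Start gdb directly (no shell), so proc.pid is gdb's pid rather than /bin/sh's.
proc = subprocess.Popen(["gdb", "vuln"], stdin=subprocess.PIPE)

# List gdb's direct children (the inferior, once it has been started) and send SIGINT to them.
children = subprocess.check_output(["ps", "-o", "pid=", "--ppid", str(proc.pid)])
for child_pid in children.split():
    os.kill(int(child_pid), signal.SIGINT)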
I have done something like the following in the past and, if I recollect correctly, it seemed to work for me:
import os
import subprocess

def detach_processGroup():
    os.setpgrp()

subprocess.Popen(command,
                 stdout=subprocess.PIPE,
                 stderr=subprocess.PIPE,
                 preexec_fn=detach_processGroup)
I'm launching a number of subprocesses with subprocess.Popen in Python.
I'd like to check whether one such process has completed. I've found two ways of checking the status of a subprocess, but both seem to force the process to complete.
One is using process.communicate() and printing the returncode, as explained here: checking status of process with subprocess.Popen in Python.
Another is simply calling process.wait() and checking that it returns 0.
Is there a way to check if a process is still running without waiting for it to complete if it is?
Question: ... a way to check if a process is still running ...
You can do it, for instance, like this:
p = subprocess.Popen(...)

# A None value indicates that the process hasn't terminated yet.
poll = p.poll()
if poll is None:
    # the subprocess is still running
    pass
Python 3.6.1 documentation: Popen Objects
Tested with Python 3.4.2
Doing
myProcessIsRunning = p.poll() is None
as suggested by the main answer is the recommended and simplest way to check if a process is running (and it works in Jython as well).
If you do not have the process instance in hand to check it, then use the operating system's tasklist / ps commands.
On Windows, my command looks as follows:
filterByPid = "PID eq %s" % pid
pidStr = str(pid)
commandArguments = ['cmd', '/c', "tasklist", "/FI", filterByPid, "|", "findstr", pidStr ]
This is essentially doing the same thing as the following command line:
cmd /c "tasklist /FI "PID eq 55588" | findstr 55588"
And on Linux, I do exactly the same using:
pidStr = str(pid)
commandArguments = ['ps', '-p', pidStr ]
The ps command already returns exit code 0 or 1 depending on whether the process is found, while on Windows you need the findstr command.
This is the same approach that is discussed on the following stack overflow thread:
Verify if a process is running using its PID in JAVA
NOTE:
If you use this approach, remember to wrap your command call in a try/except:
try:
    foundRunningProcess = subprocess.check_output(argumentsArray, **kwargs)
    return True
except Exception as err:
    return False
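Put together, a minimal version of the Linux check could look like this (is_pid_running is a hypothetical helper name, not something from the thread above):
import subprocess

def is_pid_running(pid):
    # ps -p exits with status 0 if the PID exists and 1 otherwise;
    # check_output turns the non-zero status into CalledProcessError.
    try:
        subprocess.check_output(['ps', '-p', str(pid)])
        return True
    except subprocess.CalledProcessError:
        return False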
Note: be careful if you are developing with VS Code and using pure Python and Jython.
On my environment, I was under the illusion that the poll() method did not work, because a process that I suspected must have ended was in fact still running.
This process had launched Wildfly, and after I had asked Wildfly to stop, the shell was still waiting for the user to "Press any key to continue . . .".
In order to finish off this process, in pure Python the following code was working:
process.stdin.write(os.linesep)
On Jython, I had to change this code to look as follows:
print >>process.stdin, os.linesep
With this difference the process did indeed finish,
and poll() under Jython started telling me that the process had indeed finished.
As suggested by the other answers None is the designed placeholder for the "return code" when no code has been returned yet by the subprocess.
The documentation for the returncode attribute backs this up (emphasis mine):
The child return code, set by poll() and wait() (and indirectly by communicate()). A None value indicates that the process hasn’t terminated yet.
A negative value -N indicates that the child was terminated by signal N (POSIX only).
An interesting place where this None value occurs is when using the timeout parameter for wait or communicate.
If the process does not terminate after timeout seconds, a TimeoutExpired exception will be raised.
If you catch that exception and check the returncode attribute, it will indeed be None:
import subprocess

with subprocess.Popen(['ping', '127.0.0.1']) as p:
    try:
        p.wait(timeout=3)
    except subprocess.TimeoutExpired:
        assert p.returncode is None
If you look at the source for subprocess you can see the exception being raised.
https://github.com/python/cpython/blob/47be7d0108b4021ede111dbd15a095c725be46b7/Lib/subprocess.py#L1930-L1931
If you search that source for self.returncode is, you'll find many places where the library authors lean on that None return-code design to infer whether an app is running or not. The returncode attribute is initialized to None and only ever changes in a few spots, mainly via calls to _handle_exitstatus, which records the actual return code.
You could use subprocess.check_output to have a look at your output.
Try this code:
import subprocess
subprocess.check_output('your command here', shell=True, stderr=subprocess.STDOUT)
Hope this helped!
Well, I have a Python script running on Mac OS X. Now I need to modify it to support updating my SVN working copy to a specified time. However, after some research I've found that SVN commands only support updating the working copy to a specified revision.
So I wrote a function to grab the information from the command svn log XXX, in order to find the revision corresponding to the specified time. Here is my solution:
process = os.popen('svn log XXX')
print process.readline()
print process.readline()
process.close()
To keep the problem simple, I just print the first 2 lines of the output. However, when I executed the script, I got the error message: svn: Write error: Broken pipe
I think the reason I got the message is that the svn command was still executing when I closed the pipe, so the error message arose.
Can anyone help me solve the problem, or give me an alternative solution to reach the goal? Thanks!
I get that error whenever I use svn log | head, too; it's not Python-specific. Try something like:
from subprocess import PIPE, Popen

process = Popen(['svn', 'log', 'XXX'], stdout=PIPE, stderr=PIPE)
print process.stdout.readline()
print process.stdout.readline()
to suppress the stderr. You could also just use
stdout, stderr = Popen('svn log XXX | head -n2', stdout=PIPE, stderr=PIPE, shell=True).communicate()
print stdout
Please use pysvn. It is quite easy to use. Or use subprocess.
Does your error still occur if you instead read all of the output, i.e.:
print process.read()
And it is better to wait for the child process to finish: close() does that for os.popen, and wait() does it for a subprocess.Popen object.
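For example, with subprocess that could look like the following sketch (Python 2 style, matching the code above); svn may still complain about a broken pipe on stderr when its output is cut short, but the parent reads only what it needs and then reaps the child:
from subprocess import PIPE, Popen

process = Popen(['svn', 'log', 'XXX'], stdout=PIPE, stderr=PIPE)
print process.stdout.readline()
print process.stdout.readline()
process.stdout.close()  # stop reading early; svn may then hit a broken pipe, which is expected
process.wait()          # reap the child so it does not linger as a zombie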