subprocess child traceback - python

I want to access the traceback of a Python program running in a subprocess.
The documentation says:
Exceptions raised in the child process, before the new program has started to execute, will be re-raised in the parent. Additionally, the exception object will have one extra attribute called child_traceback, which is a string containing traceback information from the child’s point of view.
Contents of my_sub_program.py:
raise Exception("I am raised!")
Contents of my_main_program.py:
import sys
import subprocess
try:
    subprocess.check_output([sys.executable, "my_sub_program.py"])
except Exception as e:
    print e.child_traceback
If I run my_main_program.py, I get the following error:
Traceback (most recent call last):
  File "my_main_program.py", line 6, in <module>
    print e.child_traceback
AttributeError: 'CalledProcessError' object has no attribute 'child_traceback'
How can I access the traceback of the subprocess without modifying the subprocess program code? That is, I want to avoid wrapping my whole sub-program in a large try/except clause, and instead handle error logging from my main program.
Edit: sys.executable should be replaceable with an interpreter differing from the one running the main program.
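Worth noting before the answers: the child_traceback attribute documented above is only set for exceptions raised while the child is being set up (before the new program executes, e.g. in preexec_fn); the CalledProcessError that check_output raises for a nonzero exit status never has it. A minimal sketch of a workaround (not from the answers below) that works with any interpreter, assuming the traceback the child prints to stderr is sufficient:

import subprocess
import sys

# "other_python" is a placeholder for whatever interpreter should run the child.
other_python = sys.executable

proc = subprocess.Popen([other_python, "my_sub_program.py"],
                        stderr=subprocess.PIPE)
_, stderr = proc.communicate()
if proc.returncode != 0:
    # stderr holds the child's traceback text, whatever the interpreter was.
    raise RuntimeError("child failed:\n" + stderr.decode())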

As you're starting another Python process, you can also try to use the multiprocessing Python module; by subclassing the Process class it is quite easy to get exceptions from the target function:
from multiprocessing import Process, Pipe
import traceback
import functools

class MyProcess(Process):
    def __init__(self, *args, **kwargs):
        Process.__init__(self, *args, **kwargs)
        self._pconn, self._cconn = Pipe()
        self._exception = None

    def run(self):
        try:
            Process.run(self)
            self._cconn.send(None)
        except Exception as e:
            tb = traceback.format_exc()
            self._cconn.send((e, tb))
            # raise e  # You can still raise this exception if you need to

    @property
    def exception(self):
        if self._pconn.poll():
            self._exception = self._pconn.recv()
        return self._exception

p = MyProcess(target=functools.partial(execfile, "my_sub_program.py"))
p.start()
p.join()  # wait for sub-process to end
if p.exception:
    error, traceback = p.exception
    print 'you got', traceback
The trick is to have the target function execute the Python sub-program; this is done using functools.partial.
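Note that execfile only exists on Python 2. Under Python 3 a rough equivalent of the same trick (a sketch, assuming the standard runpy module is acceptable) would be:

import functools
import runpy

# runpy.run_path executes the file like a script, so the sub-program's
# exception propagates into MyProcess.run as before.
p = MyProcess(target=functools.partial(runpy.run_path, "my_sub_program.py"))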

Related

Python APScheduler - Catching raised exceptions in callback method

I am using the Python APScheduler and I wish to catch the exceptions raised from the callback method, but I am unsure how to do this. I have provided my code so far, but I am stuck on how to do this appropriately. Any help would be appreciated.
import time
from apscheduler.schedulers.background import BackgroundScheduler

def expiry_callback():
    raise ValueError
    print("inside job")

sched = BackgroundScheduler(daemon=True)
sched.add_job(expiry_callback, 'interval', seconds=1)

try:
    sched.start()
except ValueError as e:
    print(f'ran into an issue!! {e}')

try:
    while True:
        time.sleep(5)
except (KeyboardInterrupt, SystemExit):
    sched.shutdown()
stacktrace:
/Users/me/Documents/environments/my_env/bin/python3.9 /Users/me/PycharmProjects/pythonProject2/run.py
Job "expiry_callback (trigger: interval[0:00:01], next run at: 2021-08-24 22:33:26 MDT)" raised an exception
Traceback (most recent call last):
  File "/Users/me/Documents/environments/my_env/lib/python3.9/site-packages/apscheduler/executors/base.py", line 125, in run_job
    retval = job.func(*job.args, **job.kwargs)
  File "/Users/me/PycharmProjects/pythonProject2/run.py", line 6, in expiry_callback
    raise ValueError
ValueError
Process finished with exit code 0
Calling sched.start() only starts the background thread that executes the callback function; it does not call the callback itself, so the try/except around sched.start() will never see the exception.
If you're looking to handle exceptions from callback functions in a consistent way, you can instead call the callback via a wrapper function that catches a given exception and reports the error in a defined manner:
# insert definition of expiry_callback before this line

def catch_exception(func, exception):
    def wrapper():
        try:
            func()
        except exception as e:
            print(f'ran into an issue!! {e}')
    return wrapper

sched = BackgroundScheduler(daemon=True)
sched.add_job(catch_exception(expiry_callback, ValueError), 'interval', seconds=1)
sched.start()

# insert idle code after this line
Demo: https://replit.com/#blhsing/BlueCreepyMethod
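If the callback takes arguments, the same idea generalizes. A sketch (a hypothetical extension, not part of the answer above) written as a decorator, using functools.wraps so the job keeps the callback's name:

import functools

def catch_exception(exception):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exception as e:
                print(f'ran into an issue!! {e}')
        return wrapper
    return decorator

@catch_exception(ValueError)
def expiry_callback():
    raise ValueError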

KeyboardInterrupt cannot be caught in parent class methods

I have a parent class that has a try clause in it, and a child class that overrides a method inside the try clause. Normal exceptions can be caught when the child raises them. However, the keyboard interrupt exception cannot be. Moreover, it can be caught inside the child method, but not in the parent method.
Please see the following example code, where the interrupt cannot be caught in bar1, but can be caught after bar2 changes it into an assertion error. I reproduced this problem with Python 3.6 on both Linux and Windows.
import time

class foo:
    def body(self):
        pass

    def run(self):
        try:
            self.body()
        except Exception as ex:
            print("caught exception:", str(ex))

class bar1(foo):
    def body(self):
        while(1):
            print(1)
            time.sleep(0.1)

class bar2(foo):
    def body(self):
        interrupted = False
        while(1):
            assert not interrupted, "assert not interrupted"
            try:
                print(1)
                time.sleep(0.1)
            except KeyboardInterrupt as ex:
                print("received interrupt")
                interrupted = True
Interrupting the run method of class bar1 gets
1
....
1
Traceback (most recent call last):
  File "tmp.py", line 34, in <module>
    b.run()
  File "tmp.py", line 7, in run
    self.body()
  File "tmp.py", line 15, in body
    time.sleep(0.1)
KeyboardInterrupt
However, interrupting bar2 gets
1
...
1
received interrupt
caught exception: assert not interrupted
I have searched StackOverflow and found some questions regarding keyboard interrupt handling with threads and stdIO, but I did not find any like this.
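For context (a fact about the exception hierarchy, not part of the original question): since Python 2.5, KeyboardInterrupt derives from BaseException rather than Exception, so the except Exception clause in foo.run never matches it. A minimal sketch of a run method that would catch it:

class foo:
    def body(self):
        pass

    def run(self):
        try:
            self.body()
        except KeyboardInterrupt:
            # KeyboardInterrupt subclasses BaseException, not Exception,
            # so it needs its own clause (or an `except BaseException`).
            print("caught interrupt")
        except Exception as ex:
            print("caught exception:", str(ex))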

Handling keyboard interrupt when using subprocess

I have a Python script called monitiq_install.py which calls other scripts (or modules) using the subprocess Python module. However, if the user sends a keyboard interrupt (CTRL + C) it exits, but with an exception. I want it to exit, but nicely.
My Code:
import os
import sys
from os import listdir
from os.path import isfile, join
from subprocess import Popen, PIPE
import json

# Run a module and capture output and exit code
def runModule(module):
    try:
        # Run Module
        process = Popen(os.path.dirname(os.path.realpath(__file__)) + "/modules/" + module, shell=True, stdout=PIPE, bufsize=1)
        for line in iter(process.stdout.readline, b''):
            print line,
        process.communicate()
        exit_code = process.wait();
        return exit_code;
    except KeyboardInterrupt:
        print "Got keyboard interupt!";
        sys.exit(0);
The error I'm getting is below:
python monitiq_install.py -a
Invalid module filename: create_db_user_v0_0_0.pyc
Not Running Module: '3parssh_install' as it is already installed
######################################
Running Module: 'create_db_user' Version: '0.0.3'
Choose username for Monitiq DB User [MONITIQ]
^CTraceback (most recent call last):
  File "/opt/monitiq-universal/install/modules/create_db_user-v0_0_3.py", line 132, in <module>
    inputVal = raw_input("");
Traceback (most recent call last):
  File "monitiq_install.py", line 40, in <module>
KeyboardInterrupt
    module_install.runModules();
  File "/opt/monitiq-universal/install/module_install.py", line 86, in runModules
    exit_code = runModule(module);
  File "/opt/monitiq-universal/install/module_install.py", line 19, in runModule
    for line in iter(process.stdout.readline, b''):
KeyboardInterrupt
A solution or some pointers would be helpful :)
--EDIT
With the try/except in place:
Running Module: 'create_db_user' Version: '0.0.0'
Choose username for Monitiq DB User [MONITIQ]
^CGot keyboard interupt!
Traceback (most recent call last):
  File "monitiq_install.py", line 36, in <module>
    module_install.runModules();
  File "/opt/monitiq-universal/install/module_install.py", line 90, in runModules
    exit_code = runModule(module);
  File "/opt/monitiq-universal/install/module_install.py", line 29, in runModule
    sys.exit(0);
NameError: global name 'sys' is not defined
Traceback (most recent call last):
  File "/opt/monitiq-universal/install/modules/create_db_user-v0_0_0.py", line 132, in <module>
    inputVal = raw_input("");
KeyboardInterrupt
If you press Ctrl + C in a terminal then SIGINT is sent to all processes within the process group. See child process receives parent's SIGINT.
That is why you see the traceback from the child process despite try/except KeyboardInterrupt in the parent.
You could suppress the stderr output from the child process: stderr=DEVNULL. Or start it in a new process group: start_new_session=True:
import sys
from subprocess import call

try:
    call([sys.executable, 'child.py'], start_new_session=True)
except KeyboardInterrupt:
    print('Ctrl C')
else:
    print('no exception')
If you remove start_new_session=True in the above example then KeyboardInterrupt may be raised in the child too and you might get the traceback.
If subprocess.DEVNULL is not available, you could use DEVNULL = open(os.devnull, 'r+b', 0). If the start_new_session parameter is not available, you could use preexec_fn=os.setsid on POSIX.
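A sketch combining those fallbacks (assuming a POSIX system and an older Python where subprocess.DEVNULL and start_new_session are unavailable):

import os
import sys
from subprocess import Popen

DEVNULL = open(os.devnull, 'r+b', 0)

proc = Popen([sys.executable, 'child.py'],
             stderr=DEVNULL,        # suppress the child's traceback output
             preexec_fn=os.setsid)  # new session, so Ctrl-C doesn't reach the child
proc.wait()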
You can do this using try and except as below:
import subprocess

try:
    proc = subprocess.Popen("dir /S", shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while proc.poll() is None:
        print proc.stdout.readline()
except KeyboardInterrupt:
    print "Got Keyboard interrupt"
As a security best practice, you could avoid shell=True here.
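For instance (a sketch; the command itself is illustrative, and since dir is a cmd.exe built-in it needs an explicit "cmd /c" once shell=True is dropped):

import subprocess

proc = subprocess.Popen(["cmd", "/c", "dir", "/S"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)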
This code spawns a child process and hands signals like SIGINT to it, just like shells (bash, zsh, ...) do.
This means KeyboardInterrupt is no longer seen by the Python process, but the child receives this and is killed correctly.
It works by running the process in a new foreground process group set by Python.
import os
import signal
import subprocess
import sys
import termios

def run_as_fg_process(*args, **kwargs):
    """
    the "correct" way of spawning a new subprocess:
    signals like C-c must only go
    to the child process, and not to this python.

    the args are the same as subprocess.Popen

    returns Popen().wait() value

    Some side-info about "how ctrl-c works":
    https://unix.stackexchange.com/a/149756/1321

    fun fact: this function took a whole night
    to be figured out.
    """

    old_pgrp = os.tcgetpgrp(sys.stdin.fileno())
    old_attr = termios.tcgetattr(sys.stdin.fileno())

    user_preexec_fn = kwargs.pop("preexec_fn", None)

    def new_pgid():
        if user_preexec_fn:
            user_preexec_fn()

        # set a new process group id
        os.setpgid(os.getpid(), os.getpid())

        # generally, the child process should stop itself
        # before exec so the parent can set its new pgid.
        # (setting pgid has to be done before the child execs).
        # however, Python 'guarantees' that `preexec_fn`
        # is run before `Popen` returns.
        # this is because `Popen` waits for the closure of
        # the error relay pipe '`errpipe_write`',
        # which happens at child's exec.
        # this is also the reason the child can't stop itself
        # in Python's `Popen`, since the `Popen` call would never
        # terminate then.
        # `os.kill(os.getpid(), signal.SIGSTOP)`

    try:
        # fork the child
        child = subprocess.Popen(*args, preexec_fn=new_pgid,
                                 **kwargs)

        # we can't set the process group id from the parent since the child
        # will already have exec'd. and we can't SIGSTOP it before exec,
        # see above.
        # `os.setpgid(child.pid, child.pid)`

        # set the child's process group as new foreground
        os.tcsetpgrp(sys.stdin.fileno(), child.pid)
        # revive the child,
        # because it may have been stopped due to SIGTTOU or
        # SIGTTIN when it tried using stdout/stdin
        # after setpgid was called, and before we made it
        # the foreground process by tcsetpgrp.
        os.kill(child.pid, signal.SIGCONT)

        # wait for the child to terminate
        ret = child.wait()

    finally:
        # we have to mask SIGTTOU because tcsetpgrp
        # raises SIGTTOU to all current background
        # process group members (i.e. us) when switching tty's pgrp
        # if we didn't do that, we'd get SIGSTOP'd
        hdlr = signal.signal(signal.SIGTTOU, signal.SIG_IGN)
        # make us tty's foreground again
        os.tcsetpgrp(sys.stdin.fileno(), old_pgrp)
        # now restore the handler
        signal.signal(signal.SIGTTOU, hdlr)
        # restore terminal attributes
        termios.tcsetattr(sys.stdin.fileno(), termios.TCSADRAIN, old_attr)

    return ret


# example:
run_as_fg_process(['openage', 'edit', '-f', 'random_map.rms'])

Post mortem debugging of the multi-thread scripts

I am trying to debug a multi-threaded script. Once an exception is
raised, I want to:
report it to a monitoring system (just print in the following example)
stop the whole script (including all other threads)
invoke a post-mortem debugger prompt in the context of the raised exception
I prepared a fairly complicated example to show how I tried to solve it:
#!/usr/bin/env python

import threading
import inspect
import traceback
import sys
import os
import time

def POST_PORTEM_DEBUGGER(type, value, tb):
    traceback.print_exception(type, value, tb)
    print
    if hasattr(sys, 'ps1') or not sys.stderr.isatty():
        import rpdb
        rpdb.pdb.pm()
    else:
        import pdb
        pdb.pm()

sys.excepthook = POST_PORTEM_DEBUGGER

class MyThread(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.exception = None
        self.info = None
        self.the_calling_script_name = os.path.abspath(inspect.currentframe().f_back.f_code.co_filename)

    def main(self):
        "Virtual method to be implemented by inherited worker"
        return self

    def run(self):
        try:
            self.main()
        except Exception as exception:
            self.exception = exception
            self.info = traceback.extract_tb(sys.exc_info()[2])[-1]
            # because of bug http://bugs.python.org/issue1230540
            # I cannot use just "raise" under threading.Thread
            sys.excepthook(*sys.exc_info())

    def __del__(self):
        print 'MyThread via {} catch "{}: {}" in {}() from {}:{}: {}'.format(self.the_calling_script_name, type(self.exception).__name__, str(self.exception), self.info[2], os.path.basename(self.info[0]), self.info[1], self.info[3])

class Worker(MyThread):

    def __init__(self):
        super(Worker, self).__init__()

    def main(self):
        """ worker job """
        counter = 0
        while True:
            counter += 1
            print self
            time.sleep(1.0)
            if counter == 3:
                pass  # print 1/0

def main():
    Worker().start()

    counter = 1
    while True:
        counter += 1
        time.sleep(1.0)
        if counter == 3:
            pass  # print 1/0

if __name__ == '__main__':
    main()
The trick with
sys.excepthook = POST_PORTEM_DEBUGGER
works perfectly if no threads are involved. I found that in the case of a
multi-threaded script I can use rpdb for debugging by calling:
import rpdb; rpdb.set_trace()
It works perfectly for a defined breakpoint, but I want to debug the
multi-threaded script post mortem (after the uncaught exception is
raised). When I try to use rpdb in the POST_PORTEM_DEBUGGER function
with a multi-threaded application I get the following:
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
    self.run()
  File "./demo.py", line 49, in run
    sys.excepthook(*sys.exc_info())
  File "./demo.py", line 22, in POST_PORTEM_DEBUGGER
    pdb.pm()
  File "/usr/lib/python2.7/pdb.py", line 1270, in pm
    post_mortem(sys.last_traceback)
AttributeError: 'module' object has no attribute 'last_traceback'
It looks like the
sys.excepthook(*sys.exc_info())
does not set up everything that the raise statement does.
I want the same behavior as when the exception is raised in main(), even
when it is raised inside a started thread.
(I haven't tested my answer, but it seems to me that...)
The call to pdb.pm (pm="post mortem") fails simply because there had been no "mortem" prior to it. I.e. the program is still running.
Looking at the pdb source code, you find the implementation of pdb.pm:
def pm():
    post_mortem(sys.last_traceback)
which makes me guess that what you actually want to do is call pdb.post_mortem() with no args. Looks like the default behavior does exactly what you need.
Some more source code (notice the t = sys.exc_info()[2] line):
def post_mortem(t=None):
    # handling the default
    if t is None:
        # sys.exc_info() returns (type, value, traceback) if an exception is
        # being handled, otherwise it returns None
        t = sys.exc_info()[2]
    if t is None:
        raise ValueError("A valid traceback must be passed if no "
                         "exception is being handled")

    p = Pdb()
    p.reset()
    p.interaction(None, t)
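Applied to the question's hook, the fix would look roughly like this (a sketch: pdb.post_mortem is given the traceback explicitly, so it no longer depends on sys.last_traceback, which is not set when the hook is invoked manually from a thread):

import sys
import traceback
import pdb

def POST_PORTEM_DEBUGGER(type, value, tb):
    traceback.print_exception(type, value, tb)
    pdb.post_mortem(tb)

sys.excepthook = POST_PORTEM_DEBUGGER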
Building on @shx2's answer above, I now use the following pattern in the context of multithreading.
import sys, pdb

try:
    ...  # logic that may fail
except Exception as exc:
    pdb.post_mortem(exc.__traceback__)
Here is a more verbose alternative:
import sys, pdb

try:
    ...  # logic that may fail
except Exception as exc:
    if hasattr(sys, "last_traceback"):
        pdb.pm()
    else:
        pdb.post_mortem(exc.__traceback__)
This can help:
import sys
from IPython.core import ultratb

sys.excepthook = ultratb.FormattedTB(mode='Verbose', color_scheme='Linux',
                                     call_pdb=True, ostream=sys.__stdout__)

Make Python unittest fail on exception from any thread

I am using the unittest framework to automate integration tests of multi-threaded python code, external hardware and embedded C. Despite my blatant abuse of a unittesting framework for integration testing, it works really well. Except for one problem: I need the test to fail if an exception is raised from any of the spawned threads. Is this possible with the unittest framework?
A simple but non-workable solution would be to either a) refactor the code to avoid multi-threading or b) test each thread separately. I cannot do that because the code interacts asynchronously with the external hardware. I have also considered implementing some kind of message passing to forward the exceptions to the main unittest thread. This would require significant testing-related changes to the code being tested, and I want to avoid that.
Time for an example. Can I modify the test script below to fail on the exception raised in my_thread without modifying the x.ExceptionRaiser class?
import unittest
import x

class Test(unittest.TestCase):
    def test_x(self):
        my_thread = x.ExceptionRaiser()
        # Test case should fail when thread is started and raises
        # an exception.
        my_thread.start()
        my_thread.join()

if __name__ == '__main__':
    unittest.main()
At first, sys.excepthook looked like a solution. It is a global hook which is called every time an uncaught exception is thrown.
Unfortunately, this does not work. Why? Well, threading wraps your run function in code which prints the lovely tracebacks you see on screen (notice how it always tells you Exception in thread {Name of your thread here}? This is how it's done).
Starting with Python 3.8, there is a function which you can override to make this work: threading.excepthook
... threading.excepthook() can be overridden to control how uncaught exceptions raised by Thread.run() are handled
So what do we do? Replace this function with our logic, and voilà:
For Python >= 3.8
import traceback
import threading
import os

class GlobalExceptionWatcher(object):
    def _store_excepthook(self, args):
        '''
        Used as an exception handler which stores any uncaught exceptions.
        '''
        self.__org_hook(args)
        formated_exc = traceback.format_exception(args.exc_type, args.exc_value, args.exc_traceback)
        self._exceptions.append('\n'.join(formated_exc))
        return formated_exc

    def __enter__(self):
        '''
        Register us to the hook.
        '''
        self._exceptions = []
        self.__org_hook = threading.excepthook
        threading.excepthook = self._store_excepthook

    def __exit__(self, type, value, traceback):
        '''
        Remove us from the hook, ensure no exceptions were thrown.
        '''
        threading.excepthook = self.__org_hook
        if len(self._exceptions) != 0:
            tracebacks = os.linesep.join(self._exceptions)
            raise Exception(f'Exceptions in other threads: {tracebacks}')
For older versions of Python, this is a bit more complicated.
Long story short, it appears that the threading module has an undocumented import which does something along the lines of:
threading._format_exc = traceback.format_exc
Not very surprisingly, this function is only called when an exception is thrown from a thread's run function.
So for Python <= 3.7
import threading
import os

class GlobalExceptionWatcher(object):
    def _store_excepthook(self):
        '''
        Used as an exception handler which stores any uncaught exceptions.
        '''
        formated_exc = self.__org_hook()
        self._exceptions.append(formated_exc)
        return formated_exc

    def __enter__(self):
        '''
        Register us to the hook.
        '''
        self._exceptions = []
        self.__org_hook = threading._format_exc
        threading._format_exc = self._store_excepthook

    def __exit__(self, type, value, traceback):
        '''
        Remove us from the hook, ensure no exceptions were thrown.
        '''
        threading._format_exc = self.__org_hook
        if len(self._exceptions) != 0:
            tracebacks = os.linesep.join(self._exceptions)
            raise Exception('Exceptions in other threads: %s' % tracebacks)
Usage:
my_thread = x.ExceptionRaiser()
# will fail when thread is started and raises an exception.
with GlobalExceptionWatcher():
    my_thread.start()
    my_thread.join()
You still need to join yourself, but upon exit, the with-statement's context manager will check for any exception thrown in other threads, and will raise an exception appropriately.
THE CODE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED
This is an undocumented, sort-of-horrible hack. I tested it on linux and windows, and it seems to work. Use it at your own risk.
I've come across this problem myself, and the only solution I've been able to come up with is subclassing Thread to include an attribute for whether or not it terminates without an uncaught exception:
from threading import Thread

class ErrThread(Thread):
    """
    A subclass of Thread that will store the exception if the thread does
    not exit normally
    """
    def run(self):
        try:
            Thread.run(self)
        except Exception as err:
            self.err = err
        else:
            self.err = None

class TaskQueue(object):
    """
    A utility class to run ErrThread objects in parallel and raise an exception
    in the event that *any* of them fail.
    """
    def __init__(self, *tasks):
        self.threads = []
        for t in tasks:
            try:
                self.threads.append(ErrThread(**t))  # passing in a dict of target and args
            except TypeError:
                self.threads.append(ErrThread(target=t))

    def run(self):
        for t in self.threads:
            t.start()
        for t in self.threads:
            t.join()
            if t.err:
                raise Exception('Thread %s failed with error: %s' % (t.name, t.err))
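For illustration, usage might look like this (the task functions are hypothetical):

def task_a():
    pass  # hypothetical work that succeeds

def task_b():
    raise ValueError('boom')  # hypothetical work that fails

# runs both threads in parallel; TaskQueue.run() re-raises task_b's failure
TaskQueue(task_a, {'target': task_b}).run()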
I've been using the accepted answer above for a while now, but since Python 3.8 the solution no longer works, because the threading module doesn't have this _format_exc import anymore.
On the other hand, as of Python 3.8 the threading module has a nice way to register custom except hooks, so here is a simple solution for running unit tests which assert that some exceptions are raised inside threads:
def test_in_thread():
    import threading

    exceptions_caught_in_threads = {}

    def custom_excepthook(args):
        thread_name = args.thread.name
        exceptions_caught_in_threads[thread_name] = {
            'thread': args.thread,
            'exception': {
                'type': args.exc_type,
                'value': args.exc_value,
                'traceback': args.exc_traceback
            }
        }

    # Registering our custom excepthook to catch the exception in the threads
    threading.excepthook = custom_excepthook

    # dummy function that raises an exception
    def my_function():
        raise Exception('My Exception')

    # running the function in a thread
    thread_1 = threading.Thread(name='thread_1', target=my_function, args=())
    thread_1.start()
    thread_1.join()

    assert 'thread_1' in exceptions_caught_in_threads  # there was an exception in thread 1
    assert exceptions_caught_in_threads['thread_1']['exception']['type'] == Exception
    assert str(exceptions_caught_in_threads['thread_1']['exception']['value']) == 'My Exception'
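One caveat (an observation, not from the original answer): the test replaces threading.excepthook globally and never restores it, so the hook leaks into later tests. A minimal sketch of restoring it, with a hypothetical quiet_hook standing in for custom_excepthook:

import threading

def quiet_hook(args):
    # stand-in for the custom_excepthook defined in the test above
    print(f'caught {args.exc_type.__name__} in {args.thread.name}')

original_hook = threading.excepthook
threading.excepthook = quiet_hook
try:
    t = threading.Thread(target=lambda: 1 / 0)
    t.start()
    t.join()
finally:
    threading.excepthook = original_hook  # restore so other tests are unaffected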
