I am using the Python APScheduler library and I want to catch exceptions raised in the callback method, but I am unsure how to do this. I have included my code so far, but I am stuck on how to handle this appropriately. Any help would be appreciated.
import time
from apscheduler.schedulers.background import BackgroundScheduler

def expiry_callback():
    raise ValueError
    print("inside job")

sched = BackgroundScheduler(daemon=True)
sched.add_job(expiry_callback, 'interval', seconds=1)
try:
    sched.start()
except ValueError as e:
    print(f'ran into an issue!! {e}')

try:
    while True:
        time.sleep(5)
except (KeyboardInterrupt, SystemExit):
    sched.shutdown()
stacktrace:
/Users/me/Documents/environments/my_env/bin/python3.9 /Users/me/PycharmProjects/pythonProject2/run.py
Job "expiry_callback (trigger: interval[0:00:01], next run at: 2021-08-24 22:33:26 MDT)" raised an exception
Traceback (most recent call last):
File "/Users/me/Documents/environments/my_env/lib/python3.9/site-packages/apscheduler/executors/base.py", line 125, in run_job
retval = job.func(*job.args, **job.kwargs)
File "/Users/me/PycharmProjects/pythonProject2/run.py", line 6, in expiry_callback
raise ValueError
ValueError
Process finished with exit code 0
Calling sched.start() only starts the background thread that executes the callback function; it does not call the callback itself, so it will never produce the exception.
If you want to handle exceptions from callback functions in a consistent way, you can instead call them via a wrapper function that catches a given exception and reports the error in a uniform manner:
# insert definition of expiry_callback before this line

def catch_exception(func, exception):
    def wrapper():
        try:
            func()
        except exception as e:
            print(f'ran into an issue!! {e}')
    return wrapper

sched = BackgroundScheduler(daemon=True)
sched.add_job(catch_exception(expiry_callback, ValueError), 'interval', seconds=1)
sched.start()

# insert idle code after this line
Demo: https://replit.com/#blhsing/BlueCreepyMethod
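As an alternative to wrapping each callback, APScheduler also exposes an event system that notifies a listener whenever a job raises. A minimal sketch of that approach, assuming APScheduler 3.x (error_listener is an illustrative name; the event object carries the job's exception):

from apscheduler.events import EVENT_JOB_ERROR

def error_listener(event):
    # event.exception holds the exception raised by the job
    print(f'job {event.job_id} raised {event.exception!r}')

sched.add_listener(error_listener, EVENT_JOB_ERROR)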
Related
I've taken over some code a former colleague wrote, which frequently got stuck when one or more parallelised functions threw a NameError exception that was never caught. (The parallelisation is handled by multiprocessing.Pool.) Because the exception is due to certain arguments not being defined, the only way I've been able to catch it is to put the pool.apply_async calls into try...except blocks, like so:
import os
from multiprocessing import Pool

# Define worker functions
def workerfn1(args1):
    ...  # commands

def workerfn2(args2):
    ...  # more commands

def workerfn3(args3):
    ...  # even more commands

# Execute worker functions in parallel
with Pool(processes=os.cpu_count()-1) as pool:
    try:
        r1 = pool.apply_async(workerfn1, args1)
    except NameError as e:
        print("Worker function r1 failed")
        print(e)
    try:
        r2 = pool.apply_async(workerfn2, args2)
    except NameError as e:
        print("Worker function r2 failed")
        print(e)
    try:
        r3 = pool.apply_async(workerfn3, args3)
    except NameError as e:
        print("Worker function r3 failed")
        print(e)
Obviously, the try...except blocks are not parallelised, but the interpreter has to read the apply_async calls sequentially anyway while it assigns them to different CPUs... so will these three functions still be executed in parallel (if they don't throw the NameError exception), or does the use of try...except prevent this from happening?
First, you need to be more careful when posting code: what you posted is full of spelling and other errors.
Method multiprocessing.pool.Pool.apply_async (not apply_sync) returns a multiprocessing.pool.AsyncResult instance. It is only when you call method get on this instance that you get either the return value from your worker function or, if the worker raised, that exception re-raised. So:
from multiprocessing import Pool

# Define worker functions
def workerfn1(args1):
    ...

def workerfn2(args2):
    ...

def workerfn3(args3):
    raise NameError('Some name goes here.')

# Required for Windows:
if __name__ == '__main__':
    # Execute worker functions in parallel
    with Pool(processes=3) as pool:
        result1 = pool.apply_async(workerfn1, args=(1,))
        result2 = pool.apply_async(workerfn2, args=(1,))
        result3 = pool.apply_async(workerfn3, args=(1,))
        try:
            return_value1 = result1.get()
        except NameError as e:
            print("Worker function workerfn1 failed:", e)
        try:
            return_value2 = result2.get()
        except NameError as e:
            print("Worker function workerfn2 failed:", e)
        try:
            return_value3 = result3.get()
        except NameError as e:
            print("Worker function workerfn3 failed:", e)
Prints:
Worker function workerfn3 failed: Some name goes here.
Note
Without calling get on the AsyncResult returned from apply_async, you are not waiting for the completion of the submitted task, and there is no point in surrounding the call with try/except. When you then fall out of the with block, an implicit call to terminate is made on the pool instance, which immediately kills all running pool processes: any running tasks are halted and any tasks waiting to run are purged. You can instead call pool.close() followed by pool.join() within the block, and that sequence will wait for all submitted tasks to complete. But without explicitly calling get on the AsyncResult instances you will not be able to retrieve return values or exceptions.
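For example, a minimal sketch of that close/join sequence (worker is an illustrative stand-in for real work):

from multiprocessing import Pool

def worker(x):
    # stands in for real work
    return 2 * x

if __name__ == '__main__':
    with Pool(processes=2) as pool:
        results = [pool.apply_async(worker, args=(i,)) for i in range(4)]
        pool.close()  # no more tasks may be submitted
        pool.join()   # wait for all submitted tasks to complete
        # get() now returns immediately; it would re-raise any worker exception
        print([r.get() for r in results])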
I have a parent class that has a try clause in it, and a child class that overrides a method inside the try clause. Normal exceptions can be caught when the child raises them. However, the KeyboardInterrupt exception cannot: it can be caught inside the child method, but not in the parent method.
See the example code below, where the interrupt cannot be caught in bar1, but can be caught after bar2 converts it into an assertion error. I reproduced this problem with Python 3.6 on both Linux and Windows.
import time

class foo:
    def body(self):
        pass

    def run(self):
        try:
            self.body()
        except Exception as ex:
            print("caught exception:", str(ex))

class bar1(foo):
    def body(self):
        while(1):
            print(1)
            time.sleep(0.1)

class bar2(foo):
    def body(self):
        interrupted = False
        while(1):
            assert not interrupted, "assert not interrupted"
            try:
                print(1)
                time.sleep(0.1)
            except KeyboardInterrupt as ex:
                print("received interrupt")
                interrupted = True
Interrupting the run method of class bar1 gets
1
....
1
Traceback (most recent call last):
File "tmp.py", line 34, in <module>
b.run()
File "tmp.py", line 7, in run
self.body()
File "tmp.py", line 15, in body
time.sleep(0.1)
KeyboardInterrupt
However, interrupting bar2 gets
1
...
1
received interrupt
caught exception: assert not interrupted
I have searched StackOverflow and found some questions about KeyboardInterrupt handling with threads and standard IO, but none covering this problem.
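For reference, the behaviour in bar1 follows from the exception hierarchy: KeyboardInterrupt derives from BaseException, not Exception, so an except Exception clause never sees it. A minimal sketch illustrating this:

# KeyboardInterrupt is not an Exception subclass
print(issubclass(KeyboardInterrupt, Exception))      # False
print(issubclass(KeyboardInterrupt, BaseException))  # True

try:
    raise KeyboardInterrupt
except Exception:
    print("never reached")
except BaseException:
    print("caught here")  # this branch runs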
I'm using GLib.MainLoop() from PyGObject in my Python application and have a question.
Is it possible to handle a Python exception that is raised inside loop.run()?
For example, I'm calling a function using GLib.MainContext.invoke_full():
import traceback, gi
from gi.repository import GLib

try:
    loop = GLib.MainLoop()

    def handler(self):
        print('handler')
        raise Exception('from handler with love')

    loop.get_context().invoke_full(GLib.PRIORITY_DEFAULT, handler, None)
    loop.run()
except Exception:
    print('catched!')
I thought that handler() would be called somewhere inside loop.run(), so raise Exception('from handler with love') should be caught by except Exception:. However, it is not:
$ python test.py
handler
Traceback (most recent call last):
File "test.py", line 9, in handler
raise Exception('from handler with love')
Exception: from handler with love
It seems that handler() is called from the middle of nowhere (from GLib's C code?), and is not caught by except Exception:.
Is it possible to catch all Python exceptions raised inside GLib.MainLoop.run()? I have a dozen handlers called like that, so I would rather not add the same try: ... except OneException: ... except AnotherException: ... wrapper to each handler.
No, the exception is not propagated. It is caught and printed. No exception in a Python callback causes the loop to exit.
You can handle these types of errors through sys.excepthook.
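A minimal sketch of that approach (log_unhandled is an illustrative name; that your PyGObject version routes callback exceptions through sys.excepthook is an assumption here):

import sys, traceback

def log_unhandled(exc_type, exc_value, exc_tb):
    # runs for exceptions that would otherwise just be printed and swallowed
    traceback.print_exception(exc_type, exc_value, exc_tb)
    print('logged from excepthook')

sys.excepthook = log_unhandled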
I want to access the traceback of a Python program running in a subprocess.
The documentation says:
Exceptions raised in the child process, before the new program has started to execute, will be re-raised in the parent. Additionally, the exception object will have one extra attribute called child_traceback, which is a string containing traceback information from the child’s point of view.
Contents of my_sub_program.py:
raise Exception("I am raised!")
Contents of my_main_program.py:
import sys
import subprocess

try:
    subprocess.check_output([sys.executable, "my_sub_program.py"])
except Exception as e:
    print e.child_traceback
If I run my_main_program.py, I get the following error:
Traceback (most recent call last):
File "my_main_program.py", line 6, in <module>
print e.child_traceback
AttributeError: 'CalledProcessError' object has no attribute 'child_traceback'
How can I access the traceback of the subprocess without modifying the subprocess program code? That is, I want to avoid wrapping my whole sub-program code in a large try/except clause, and instead handle error logging from my main program.
Edit: sys.executable should be replaceable with an interpreter differing from the one running the main program.
As you're starting another Python process, you can also try to use the multiprocessing Python module; by sub-classing the Process class it is quite easy to get exceptions from the target function:
from multiprocessing import Process, Pipe
import traceback
import functools

class MyProcess(Process):
    def __init__(self, *args, **kwargs):
        Process.__init__(self, *args, **kwargs)
        self._pconn, self._cconn = Pipe()
        self._exception = None

    def run(self):
        try:
            Process.run(self)
            self._cconn.send(None)
        except Exception as e:
            tb = traceback.format_exc()
            self._cconn.send((e, tb))
            # raise e  # You can still raise this exception if you need to

    @property
    def exception(self):
        if self._pconn.poll():
            self._exception = self._pconn.recv()
        return self._exception

p = MyProcess(target=functools.partial(execfile, "my_sub_program.py"))
p.start()
p.join()  # wait for sub-process to end
if p.exception:
    error, traceback = p.exception
    print 'you got', traceback
The trick is to have the target function execute the Python sub-program, which is done here using functools.partial.
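Note that execfile only exists on Python 2; on Python 3 you could substitute runpy from the standard library, along the lines of this sketch:

import functools
import runpy

# runpy.run_path executes the file much like running it as a script
p = MyProcess(target=functools.partial(runpy.run_path, "my_sub_program.py"))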
I am trying to debug a multi-threaded script. Once an exception is raised, I want to:
report it to a monitoring system (just print in the following example)
stop the whole script (including all other threads)
open a post-mortem debugger prompt in the context of the raised exception
I prepared a fairly complicated example to show how I tried to solve it:
#!/usr/bin/env python

import threading
import inspect
import traceback
import sys
import os
import time

def POST_PORTEM_DEBUGGER(type, value, tb):
    traceback.print_exception(type, value, tb)
    print
    if hasattr(sys, 'ps1') or not sys.stderr.isatty():
        import rpdb
        rpdb.pdb.pm()
    else:
        import pdb
        pdb.pm()

sys.excepthook = POST_PORTEM_DEBUGGER

class MyThread(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.exception = None
        self.info = None
        self.the_calling_script_name = os.path.abspath(inspect.currentframe().f_back.f_code.co_filename)

    def main(self):
        "Virtual method to be implemented by inherited worker"
        return self

    def run(self):
        try:
            self.main()
        except Exception as exception:
            self.exception = exception
            self.info = traceback.extract_tb(sys.exc_info()[2])[-1]
            # because of bug http://bugs.python.org/issue1230540
            # I cannot use just "raise" under threading.Thread
            sys.excepthook(*sys.exc_info())

    def __del__(self):
        print 'MyThread via {} catch "{}: {}" in {}() from {}:{}: {}'.format(self.the_calling_script_name, type(self.exception).__name__, str(self.exception), self.info[2], os.path.basename(self.info[0]), self.info[1], self.info[3])

class Worker(MyThread):

    def __init__(self):
        super(Worker, self).__init__()

    def main(self):
        """ worker job """
        counter = 0
        while True:
            counter += 1
            print self
            time.sleep(1.0)
            if counter == 3:
                pass  # print 1/0

def main():
    Worker().start()

    counter = 1
    while True:
        counter += 1
        time.sleep(1.0)
        if counter == 3:
            pass  # print 1/0

if __name__ == '__main__':
    main()
The trick with
sys.excepthook = POST_PORTEM_DEBUGGER
works perfectly if no threads are involved. I found that in the case of a multi-threaded script I can use rpdb for debugging by calling:
import rpdb; rpdb.set_trace()
This works perfectly for a defined breakpoint, but I want to debug the multi-threaded script post mortem (after the uncaught exception is raised). When I try to use rpdb in the POST_PORTEM_DEBUGGER function with a multi-threaded application, I get the following:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
self.run()
File "./demo.py", line 49, in run
sys.excepthook(*sys.exc_info())
File "./demo.py", line 22, in POST_PORTEM_DEBUGGER
pdb.pm()
File "/usr/lib/python2.7/pdb.py", line 1270, in pm
post_mortem(sys.last_traceback)
AttributeError: 'module' object has no attribute 'last_traceback'
It looks like the
sys.excepthook(*sys.exc_info())
call did not set up everything that the raise statement does.
I want the same behavior as when the exception is raised in main(), even when it is raised inside a started thread.
(I haven't tested my answer, but it seems to me that...)
The call to pdb.pm (pm="post mortem") fails simply because there had been no "mortem" prior to it. I.e. the program is still running.
Looking at the pdb source code, you find the implementation of pdb.pm:
def pm():
    post_mortem(sys.last_traceback)
which makes me guess that what you actually want to do is call pdb.post_mortem() with no args. Looks like the default behavior does exactly what you need.
Some more source code (notice the t = sys.exc_info()[2] line):
def post_mortem(t=None):
    # handling the default
    if t is None:
        # sys.exc_info() returns (type, value, traceback) if an exception is
        # being handled, otherwise it returns None
        t = sys.exc_info()[2]
    if t is None:
        raise ValueError("A valid traceback must be passed if no "
                         "exception is being handled")

    p = Pdb()
    p.reset()
    p.interaction(None, t)
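Applied to the hook from the question, that suggests a sketch like the following (post_mortem_hook is an illustrative name; it passes the traceback to pdb.post_mortem instead of calling pdb.pm):

import sys, traceback, pdb

def post_mortem_hook(exc_type, exc_value, tb):
    # print the traceback, then open the debugger on that same traceback
    traceback.print_exception(exc_type, exc_value, tb)
    pdb.post_mortem(tb)

sys.excepthook = post_mortem_hook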
Building on @shx2's answer above, I now use the following pattern in the context of multithreading.
import sys, pdb

try:
    ...  # logic that may fail
except Exception as exc:
    pdb.post_mortem(exc.__traceback__)
Here is a more verbose alternative:
import sys, pdb

try:
    ...  # logic that may fail
except Exception as exc:
    if hasattr(sys, "last_traceback"):
        pdb.pm()
    else:
        pdb.post_mortem(exc.__traceback__)
This can help:
import sys
from IPython.core import ultratb

sys.excepthook = ultratb.FormattedTB(mode='Verbose', color_scheme='Linux',
                                     call_pdb=True, ostream=sys.__stdout__)