Twisted unhandled error - python

When the twisted reactor is running and an exception occurs within a deferred that isn't caught, "Unhandled Error" is printed to the terminal along with a traceback and the exception. Is it possible to handle/intercept these exceptions (e.g., set a callback or override a method)?
EDIT: I'm aware that I can catch a failure by adding an errback to a deferred. What I want to know is whether there is a way to intercept an unhandled failure/exception that has made its way up the chain to the reactor.
EDIT: Essentially, I'm wondering if the twisted reactor has a global error handler or something that can be accessed. I wonder because it prints the traceback and error from the failure.
Example:
Unhandled Error
Traceback (most recent call last):
File "/var/projects/python/server.py", line 359, in run_server
return server.run()
File "/var/projects/python/server.py", line 881, in run
reactor.run()
File "/usr/local/lib/python2.6/dist-packages/Twisted-11.0.0-py2.6-linux-x86_64.egg/twisted/internet/base.py", line 1162, in run
self.mainLoop()
File "/usr/local/lib/python2.6/dist-packages/Twisted-11.0.0-py2.6-linux-x86_64.egg/twisted/internet/base.py", line 1171, in mainLoop
self.runUntilCurrent()
--- <exception caught here> ---
File "/usr/local/lib/python2.6/dist-packages/Twisted-11.0.0-py2.6-linux-x86_64.egg/twisted/internet/base.py", line 793, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/var/projects/python/server.py", line 524, in monitor
elapsed = time.time() - info.last
exceptions.NameError: global name 'info' is not defined

Because these tracebacks are written using a call to twisted.python.log.deferr() (in Twisted 10.2 anyway), it is possible to redirect them using a log observer. This is the most common thing to do with these stack traces. I can't find any base class for log observers (surprisingly) but there are a couple built in:
twisted.python.log.PythonLoggingObserver - Anything logged goes to the standard Python logging module. (I use this in my application.)
twisted.python.log.FileLogObserver - Anything logged goes to a file.
Both of these will catch stack traces reported by the reactor. All you have to do is construct the log observer (no arguments) and then call the object's start() method.
(Side note: there's also a StdioOnnaStick class that you can construct and assign to sys.stdout or sys.stderr if you want. Then anything you print goes to the Twisted log.)
To really, truly intercept these calls, so the stack traces never get logged at all, you could either:
Subclass twisted.internet.SelectReactor and override its runUntilCurrent() method. That is what logs the stack traces. You would need to study the source of twisted.internet.base.ReactorBase before doing this.
After you have done all twisted.* imports, set twisted.python.log.deferr to a function of your choosing, that is compatible with the prototype def err(_stuff=None, _why=None, **kw).
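Option 2 is ordinary attribute rebinding. Here is a minimal sketch of the mechanics against a throwaway module object rather than Twisted itself, so the snippet carries no Twisted dependency (the names fake_log and my_err are invented for the sketch):

```python
import types

# Throwaway module object standing in for twisted.python.log.
log = types.ModuleType("fake_log")

def deferr(_stuff=None, _why=None, **kw):
    print("default handler:", _stuff)

log.deferr = deferr          # the module's original handler

captured = []

def my_err(_stuff=None, _why=None, **kw):
    # Keep the same prototype as the function being replaced.
    captured.append((_stuff, _why))

log.deferr = my_err          # the actual interception step
log.deferr(RuntimeError("boom"), _why="demo")
```

In real code the assignment would read twisted.python.log.deferr = my_err, performed after all twisted.* imports so no other module holds a reference to the original function.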

You can add an errback to the deferred; unhandled exceptions are automatically converted to twisted.python.failure.Failure.

Answering to your comment:
Essentially, I'm wondering if the twisted reactor has a global error handler or something that can be accessed. I wonder because it prints the traceback and error from the failure.
The answer is "not in a proper way".
First, the reactor has nothing to do with deferreds; actually, the whole deferred module should be placed in the twisted.python package, but this cannot be done yet because of some dependencies. Back to your question...
Digging into the Twisted code (more precisely, the twisted.internet.defer module), you can outline the following event flow:
When the callback method is called with a result, the deferred instance begins to run its callbacks through the _runCallbacks method;
If one of the callbacks throws an exception, it is wrapped into Failure (line 542);
If the callback chain is exhausted and the last result was a failure, the current result is assigned to the failResult property of a DebugInfo instance (line 575);
If the deferred instance, and thus its DebugInfo instance, are garbage collected and there is still an active failure as a result, the DebugInfo.__del__ method is called and the traceback is printed out.
Given these premises, one of the simplest solutions would be to monkey patch the DebugInfo class:
from twisted.internet.defer import DebugInfo
del DebugInfo.__del__ # Hides all errors

Related

Python package unloaded before __del__ is called

I am using pyvisa to communicate via USB with an instrument. I am able to control it properly. Since it is a high-voltage source, and it is dangerous to forget it with the high voltage turned on, I wanted to implement the __del__ method in order to turn off the output when the code execution finishes. So basically I wrote this:
import pyvisa as visa

class Instrument:
    def __init__(self, resource_str='USB0::1510::9328::04481179::0::INSTR'):
        self._resource_str = resource_str
        self._resource = visa.ResourceManager().open_resource(resource_str)

    def set_voltage(self, volts: float):
        self._resource.write(f':SOURCE:VOLT:LEV {volts}')

    def __del__(self):
        self.set_voltage(0)

instrument = Instrument()
instrument.set_voltage(555)
The problem is that it is not working and in the terminal I get
$ python3 comunication\ test.py
Exception ignored in: <function Instrument.__del__ at 0x7f4cca419820>
Traceback (most recent call last):
File "comunication test.py", line 12, in __del__
File "comunication test.py", line 9, in set_voltage
File "/home/superman/.local/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 197, in write
File "/home/superman/.local/lib/python3.8/site-packages/pyvisa/resources/messagebased.py", line 157, in write_raw
File "/home/superman/.local/lib/python3.8/site-packages/pyvisa/resources/resource.py", line 190, in session
pyvisa.errors.InvalidSession: Invalid session handle. The resource might be closed.
I guess that what is happening is that pyvisa is being "deleted" before the __del__ method of my object is being called. How can I prevent this? How can I tell Python that pyvisa is "important" for objects of the Instrument class so it is not unloaded until all of them have been destroyed?
In general, you cannot assume that __del__ will be called. If you're coming from an RAII (resource acquisition is initialization) language such as C++, be aware that Python makes no similar guarantee for destructors.
To ensure some action is reversed, you should consider an alternative such as context managers:
from contextlib import contextmanager

@contextmanager
def instrument(resource_str='USB0::1510::9328::04481179::0::INSTR'):
    ...
    try:
        ...  # yield something
    finally:
        ...  # set voltage of resource to 0 here
You would use it like
with instrument(<something>) as inst:
    ...
# guaranteed by here to set to 0.
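Filled in end to end, the same idea looks like this. A dummy FakeResource class (invented for the sketch) stands in for the pyvisa resource so the snippet runs without hardware; the finally clause resets the voltage even when the body raises:

```python
from contextlib import contextmanager

class FakeResource:
    """Stand-in for a pyvisa resource: it just records the commands written."""
    def __init__(self):
        self.writes = []

    def write(self, cmd):
        self.writes.append(cmd)

created = []   # keep a reference so the resource can be inspected afterwards

@contextmanager
def instrument(resource_str='USB0::1510::9328::04481179::0::INSTR'):
    # Real code would do: resource = visa.ResourceManager().open_resource(resource_str)
    resource = FakeResource()
    created.append(resource)
    try:
        yield resource
    finally:
        resource.write(':SOURCE:VOLT:LEV 0')   # runs on normal exit and on exceptions

try:
    with instrument() as inst:
        inst.write(':SOURCE:VOLT:LEV 555')
        raise RuntimeError('simulated crash mid-measurement')
except RuntimeError:
    pass
```

Even though the body raised, the last command recorded on the resource is the reset to 0, which is exactly the guarantee __del__ could not give.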
I believe Ami Tavory's answer is generally considered to be the recommended solution, though context managers aren't always suitable depending on how the application is structured.
The other option would be to explicitly call the cleanup functions when the application is exiting. You can make it safer by wrapping the whole application in a try/finally, with the finally clause doing the cleanup. Note that if you don't include a catch then the exception will be automatically re-raised after executing the finally, which may be what you want. Example:
app = Application()
try:
    app.run()
finally:
    app.cleanup()
Be aware, though, that you potentially just threw an exception. If the exception happened, for example, mid-communication then you may not be able to send the command to reset the output as the device could be expecting you to finish what you had already started.
Finally I found my answer here using the package atexit. This does exactly what I wanted to do (based on my tests up to now):
import pyvisa as visa
import atexit

class Instrument:
    def __init__(self, resource_str):
        self._resource = visa.ResourceManager().open_resource(resource_str)

        # Configure a safe shutdown for when the class instance is destroyed:
        def _atexit():
            self.set_voltage(0)
        atexit.register(_atexit)  # https://stackoverflow.com/a/41627098

    def set_voltage(self, volts: float):
        self._resource.write(f':SOURCE:VOLT:LEV {volts}')

instrument = Instrument(resource_str='USB0::1510::9328::04481179::0::INSTR')
instrument.set_voltage(555)
The advantage of this solution is that it is user-independent: no matter how the user instantiates the Instrument class, in the end the high voltage will be turned off.
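The atexit behaviour is easy to observe without an instrument. In this sketch a print stands in for set_voltage(0), and a child interpreter is used so we can watch the hook fire on its way out:

```python
import subprocess
import sys
import textwrap

# Child script: registers a shutdown hook at construction time and never
# calls it explicitly; the interpreter runs it when the process exits.
script = textwrap.dedent("""
    import atexit

    class Instrument:
        def __init__(self):
            atexit.register(self._shutdown)

        def _shutdown(self):
            print("voltage set to 0")

    inst = Instrument()
    print("doing work")
""")

out = subprocess.run([sys.executable, "-c", script],
                     capture_output=True, text=True).stdout
print(out)
```

The "voltage set to 0" line appears after "doing work" even though _shutdown is never called in the script body: the interpreter invoked the registered hook at exit.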
I faced the same kind of safety issue with another type of connected device. I could not predict safely the behavior of the __del__ method as discussed in questions like
I don't understand this python __del__ behaviour.
I ended with a context manager instead. It would look like this in your case:
def __enter__(self):
    """
    Nothing to do.
    """
    return self

def __exit__(self, type, value, traceback):
    """
    Set back to zero voltage.
    """
    self.set_voltage(0)

with Instrument() as instrument:
    instrument.set_voltage(555)

ZMQ socket graceful termination in Python

I have the following ZMQ script
#!/usr/bin/env python2.6
import signal
import sys
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)

def signal_term_handler(signal, fname):
    socket.close()
    sys.exit(0)

def main():
    signal.signal(signal.SIGTERM, signal_term_handler)
    socket.connect('tcp://16.160.163.27:8888')
    socket.setsockopt(zmq.SUBSCRIBE, '')
    print 'Waiting for a message'
    while True:
        (event, params) = socket.recv().split()
        # ... doing something with that data ...

if __name__ == '__main__':
    main()
When I Ctrl-C, I get the following errors:
Traceback (most recent call last):
File "./nag.py", line 28, in <module>
main()
File "./nag.py", line 24, in main
(event, params) = socket.recv().split()
File "socket.pyx", line 628, in zmq.backend.cython.socket.Socket.recv (zmq/backend/cython/socket.c:5616)
File "socket.pyx", line 662, in zmq.backend.cython.socket.Socket.recv (zmq/backend/cython/socket.c:5436)
File "socket.pyx", line 139, in zmq.backend.cython.socket._recv_copy (zmq/backend/cython/socket.c:1771)
File "checkrc.pxd", line 11, in zmq.backend.cython.checkrc._check_rc (zmq/backend/cython/socket.c:5863)
KeyboardInterrupt
Now, I thought I handled the closing of the socket upon receiving a termination signal from the user pretty well, so why do I get these ugly messages? What am I missing?
Note I have done some search on Google and StackOverflow but haven't found anything that fixes this problem.
Thanks.
EDIT To anyone that has gotten this far -- user3666197 has suggested a very good and robust way to handle termination, or any exception, during execution.
Event handling approach
While the demo code is small, real-world systems, all the more so multi-host / multi-process communicating systems, typically have to handle all adversely impacting events in their main control loop.
try:
    context = zmq.Context()              # setup central Context instance
    socket = ...                         # instantiate/configure all messaging archetypes
    # main control-loop ---------------- # ----------------------------------------
    #
    # your app goes here, incl. all nested event-handling & failure-resilience
    # ---------------------------------- # ----------------------------------------
except ...:
    pass                                 # handle IOErrors, context-raised exceptions
except KeyboardInterrupt:
    pass                                 # handle UI-SIG
except:
    pass                                 # handle other exceptions, "un-handled" above
finally:
    # GRACEFUL TERMINATION
    # .setsockopt( zmq.LINGER, 0 )       # to avoid hanging infinitely
    # .close()                           # .close() for all sockets & devices
    #
    context.term()                       # terminate the Context before exit
Cleaning up at exit
One may think of the code below, but it's not needed for closing the socket: the sockets get closed automatically. Still, this is the way to do it manually.
Below I'm also listing the different pieces of useful information for understanding the implications around destroying, closing and cleaning up.
try:
    context = zmq.Context()
    socket = context.socket(zmq.ROUTER)
    socket.bind(SOCKET_PATH)
    # ....
finally:
    context.destroy()  # or term() for a graceful destroy
Error at KeyboardInterrupt, and the fix
Before going further, why the error?
Traceback (most recent call last):
File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
...
msg = self.recv(flags)
File "zmq/backend/cython/socket.pyx", line 781, in zmq.backend.cython.socket.Socket.recv
File "zmq/backend/cython/socket.pyx", line 817, in zmq.backend.cython.socket.Socket.recv
File "zmq/backend/cython/socket.pyx", line 186, in zmq.backend.cython.socket._recv_copy
File "zmq/backend/cython/checkrc.pxd", line 13, in zmq.backend.cython.checkrc._check_rc
KeyboardInterrupt
It's simply the KeyboardInterrupt error. Just catching it will solve the problem.
For example:
try:
    context = zmq.Context()
    socket = context.socket(zmq.ROUTER)
    socket.bind(SOCKET_PATH)
    # ...
except KeyboardInterrupt:
    print('> User forced exit!')
Bingo, the error no longer shows.
Now there is no need to terminate the context; it will be done automatically.
Note too: if you don't catch KeyboardInterrupt and simply add a finally: block that runs context.term() alone, the process will hang forever:
finally:
    socket.close()  # assuming one socket within the context
    context.term()
or
finally:
    context.destroy()
will throw the same error, which proves that the error is the KeyboardInterrupt being raised: it is caught from within the library and thrown again. Only catching KeyboardInterrupt will do:
except KeyboardInterrupt:
    print('> User forced exit!')
finally:
    context.destroy()  # manual (not needed)
This will do, but adding the finally block and manually destroying (closing the socket + terminating) is completely unnecessary. Let me tell you why.
If in a hurry, go to the "In Python no need to clean at exit" section at the end.
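The shape of the fix can be exercised without pyzmq at all. In this sketch signal.raise_signal (Python 3.8+) stands in for a real Ctrl-C hitting the recv loop, and a flag stands in for the cleanup that would run in finally:

```python
import signal

# Restore the default SIGINT handler, which raises KeyboardInterrupt.
signal.signal(signal.SIGINT, signal.default_int_handler)

note = None
cleaned_up = False
try:
    signal.raise_signal(signal.SIGINT)   # simulate Ctrl-C inside the try block
except KeyboardInterrupt:
    note = '> User forced exit!'
finally:
    cleaned_up = True                    # context.term()/destroy() would go here
```

Catching KeyboardInterrupt turns the noisy traceback into an ordinary, handleable control path, and the finally clause still runs either way.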
How termination works and why
From the zguide: Making-a-Clean-Exit
It states that we need to release all messages and also close all sockets; only then does termination unblock and let the code exit.
In the C API this goes through zmq_ctx_destroy(), along with closing the sockets and destroying the messages.
There are a lot of things to know:
Memory leaks are one thing, but ZeroMQ is quite finicky about how you exit an application. The reasons are technical and painful, but the upshot is that if you leave any sockets open, the zmq_ctx_destroy() function will hang forever. And even if you close all sockets, zmq_ctx_destroy() will by default wait forever if there are pending connects or sends unless you set the LINGER to zero on those sockets before closing them.
The ZeroMQ objects we need to worry about are messages, sockets, and contexts. Luckily it’s quite simple, at least in simple programs:
Use zmq_send() and zmq_recv() when you can, as it avoids the need to work with zmq_msg_t objects.
If you do use zmq_msg_recv(), always release the received message as soon as you’re done with it, by calling zmq_msg_close().
If you are opening and closing a lot of sockets, that’s probably a sign that you need to redesign your application. In some cases socket handles won’t be freed until you destroy the context.
When you exit the program, close your sockets and then call zmq_ctx_destroy(). This destroys the context.
Python API for destroying the context and termination
In pyzmq, Context.term() makes the call to zmq_ctx_destroy().
The method Context.destroy(), on the other hand, is not only zmq_ctx_destroy(): it first closes all the sockets of the context, then calls Context.term(), which calls zmq_ctx_destroy().
From the Python docs:
destroy()
Note: destroy() is not zmq_ctx_destroy(); term() is.
destroy() = close() on the context's sockets + term()
destroy(linger=None)
Close all sockets associated with this context and then terminate the context.
Warning
destroy involves calling zmq_close(), which is NOT threadsafe. If there are active sockets in other threads, this must not be called.
Parameters
linger (int, optional) – If specified, set LINGER on sockets prior to closing them.
term()
term()
Close or terminate the context.
Context termination is performed in the following steps:
Any blocking operations currently in progress on sockets open within context shall raise zmq.ContextTerminated. With the exception of socket.close(), any further operations on sockets open within this context shall raise zmq.ContextTerminated.
After interrupting all blocking calls, term shall block until the following conditions are satisfied:
All sockets open within context have been closed.
For each socket within context, all messages sent on the socket have either been physically transferred to a network peer, or the socket’s linger period set with the zmq.LINGER socket option has expired.
For further details regarding socket linger behaviour refer to libzmq documentation for ZMQ_LINGER.
This can be called to close the context by hand. If this is not called, the context will automatically be closed when it is garbage collected.
This is useful if you want to close things by hand.
Which way to go depends on the behavior you want. term() will raise the zmq.ContextTerminated exception for operations on open sockets. To force your way out, one can simply call destroy(). For a graceful exit, one can use term(), then, in the block that catches zmq.ContextTerminated, close the sockets one by one with socket.close() and do any other handling. I wonder what happens if we call destroy() at that point: it may work, the sockets will get closed, but then a second call to context.term() follows, which may or may not be OK; I didn't try it.
LINGER
Check the "ZMQ_LINGER: Set linger period for socket shutdown" section (Ctrl+F):
http://api.zeromq.org/2-1:zmq-setsockopt
The ZMQ_LINGER option shall set the linger period for the specified socket. The linger period determines how long pending messages which have yet to be sent to a peer shall linger in memory after a socket is closed with zmq_close(3), and further affects the termination of the socket's context with zmq_term(3). The following outlines the different behaviours:
The default value of -1 specifies an infinite linger period. Pending messages shall not be discarded after a call to zmq_close(); attempting to terminate the socket's context with zmq_term() shall block until all pending messages have been sent to a peer.
The value of 0 specifies no linger period. Pending messages shall be discarded immediately when the socket is closed with zmq_close().
Positive values specify an upper bound for the linger period in milliseconds. Pending messages shall not be discarded after a call to zmq_close(); attempting to terminate the socket's context with zmq_term() shall block until either all pending messages have been sent to a peer, or the linger period expires, after which any pending messages shall be discarded.
Option value type: int
Option value unit: milliseconds
Default value: -1 (infinite)
Applicable socket types: all
In Python no need to clean at exit
You only use destroy(), or a combination of term() and destroy(), if you want to destroy a context manually: for instance, if you want to do some handling around the zmq.ContextTerminated exception, or when working with multiple contexts that you create and close while the code keeps running, even though generally we never do that.
Otherwise as stated in the zguide
This is at least the case for C development. In a language with automatic object destruction, sockets and contexts will be destroyed as you leave the scope. If you use exceptions you’ll have to do the clean-up in something like a “final” block, the same as for any resource.
And you can see it in the pyzmq doc at Context.term() above:
This can be called to close the context by hand. If this is not called, the context will automatically be closed when it is garbage collected.
When the variables run out of scope they get destroyed, and the destruction and exit are handled automatically. When the program exits, even after a finally block, all variables get destroyed, and so the cleanup happens there.
Again: if you are having problems, make sure they are not related to closing contexts, sockets and messages, and make sure to use the latest version of pyzmq.
Use SIGINT instead of SIGTERM; that should fix it.
http://www.quora.com/Linux/What-is-the-difference-between-the-SIGINT-and-SIGTERM-signals-in-Linux
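The point is that Ctrl-C delivers SIGINT, not SIGTERM, so the original script's SIGTERM handler never runs. A pyzmq-free sketch of registering the right signal (signal.raise_signal needs Python 3.8+; the closed list stands in for the real socket.close() / sys.exit(0) cleanup):

```python
import signal

closed = []

def signal_int_handler(signum, frame):
    # In the real script: socket.close(); sys.exit(0)
    closed.append('socket closed')

signal.signal(signal.SIGINT, signal_int_handler)   # SIGINT, not SIGTERM
signal.raise_signal(signal.SIGINT)                 # what Ctrl-C actually delivers
```

With the handler bound to SIGINT, the cleanup runs on Ctrl-C instead of the KeyboardInterrupt traceback.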

Does raising a manual exception in a python program terminate it?

Does invoking a raise statement in Python cause the program to exit with a traceback, or continue from the next statement? I want to raise an exception but continue with the remainder of the program.
Well, I need this because I am running the program in a third-party system and I want the exception to be thrown yet continue with the program. The code concerned is a threaded function which has to return.
Can't I spawn a new thread just for throwing the exception and letting the program continue?
I want to raise an exception but continue with the remainder program.
There's not much sense in that: the program control either continues through the code, or ripples up the call stack to the nearest try block.
Instead you can try some of:
the traceback module (for reading or examining the traceback info you see together with exceptions; you can easily get it as text)
the logging module (for saving diagnostics during program runtime)
Example:
def somewhere():
    print 'Oh no! Where am I?'
    import traceback
    print ''.join(traceback.format_stack())  # or traceback.print_stack(sys.stdout)
    print 'Oh, here I am.'

def someplace():
    somewhere()

someplace()
Output:
Oh no! Where am I?
File "/home/kos/exc.py", line 10, in <module>
someplace()
File "/home/kos/exc.py", line 8, in someplace
somewhere()
File "/home/kos/exc.py", line 4, in somewhere
print ''.join(traceback.format_stack())
Oh, here I am.
Only an uncaught exception will terminate a program. If you raise an exception that your 3rd-party software is not prepared to catch and handle, the program will terminate. Raising an exception is like a soft abort: you don't know how to handle the error, but you give anyone using your code the opportunity to do so rather than just calling sys.exit().
If you are not prepared for the program to exit, don't raise an exception. Just log the error instead.
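The "just log it" alternative in stdlib terms. The logger name and the in-memory stream are illustrative only; the key point is that logging the traceback, instead of re-raising, lets execution continue past the failure:

```python
import io
import logging

buf = io.StringIO()
log = logging.getLogger("demo")
log.addHandler(logging.StreamHandler(buf))

try:
    raise ValueError("bad input")
except ValueError:
    # Record the full traceback for later diagnosis, then carry on.
    log.exception("operation failed; continuing")

print("program continues")
```

log.exception() must be called from inside an except block: it picks up the active exception and appends the formatted traceback to the log record.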

Why can Python coroutines not be called recursively?

I have been using Python coroutines instead of threading with some success. It occurred to me that I might have a use for a coroutine that knows about itself, so it can send itself something. I found that this is not possible (in Python 3.3.3 anyway). To test, I wrote the following code:
def recursive_coroutine():
    rc = (yield)
    rc.send(rc)

reco = recursive_coroutine()
next(reco)
reco.send(reco)
This raises an exception:
Traceback (most recent call last):
File "rc.py", line 7, in <module>
reco.send(reco)
File "rc.py", line 3, in recursive_coroutine
rc.send(rc)
ValueError: generator already executing
Although the error is clear, it feels like this should be possible. I never got as far as to come up with a useful, realistic application of a recursive coroutine, so I'm not looking for an answer to a specific problem. Is there a reason, other than perhaps implementation difficulty, that this is not possible?
This isn't possible because for send to work, the coroutine has to be waiting for input. yield pauses a coroutine, and send and next unpause it. If the generator is calling send, it can't simultaneously be paused and waiting for input.
If you could send to an unpaused coroutine, the semantics would get really weird. Suppose that rc.send(rc) line worked. Then send would continue the execution of the coroutine from where it left off, which is the send call... but there is no value for send to return, because we didn't hit a yield.
Suppose we return some dummy value and continue. Then the coroutine would execute until the next yield. At that point, what happens? Does execution rewind so send can return the yielded value? Does the yield discard the value? Where does control flow go? There's no good answer.
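Both halves of the argument fit in a few lines: the self-send reliably raises ValueError, while the usual workaround is a driver pattern in which the coroutine yields the value out and the caller, which is not currently executing the generator, performs the send:

```python
def selfish():
    me = (yield)
    me.send(me)              # illegal: this generator is still running

g = selfish()
next(g)                      # advance to the yield
try:
    g.send(g)
except ValueError as e:
    error = str(e)           # "generator already executing"

def counter():
    n = 0
    while True:
        n = (yield n + 1)    # hand the value out; the driver sends it back in

c = counter()
first = next(c)              # runs to the first yield
second = c.send(first)       # the driver performs the send the coroutine cannot
```

The driver pattern sidesteps the paradox: at the moment send is called, the coroutine really is paused at a yield, so there is a well-defined place for the value to arrive.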

python - Selective handling of exception traceback

I'm trying to have an exception handling mechanism with several layers of information to display to the user for my application, using python's logging module.
In the application, the logging module has 2 handlers: a file handler for keeping DEBUG information and a stream handler for keeping INFO information. By default, the logging level is set to INFO. What I'm trying to achieve is a setup where if any exception occurs, the user gets shown a simple error message without any tracebacks by default. If the logging level is set to DEBUG, the user should still get the simple message only, but this time the exception traceback is logged into a log file through the file handler.
Is it possible to achieve this?
I tried using logger.exception(e), but it always prints the traceback onto the console.
The traceback module may help you. At the top level of your application, you should put a catch-all statement:
import traceback

setup_log_and_other_basic_services()
try:
    run_your_app()
except Exception as e:
    if is_debug():
        traceback.print_exc()
    else:
        traceback.print_exc(file=get_log_file())
    print e
The code outside the try/except block should not be allowed to crash.
Write your custom exception-handling function, and use it every time you write an except clause.
In this function you should check which mode is on (INFO or DEBUG), then extract the info about the exception and feed it to the logger manually as needed.
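A runnable sketch of that suggestion, with in-memory streams standing in for the real file and console handlers (handle_exception and DEBUG_MODE are invented names for the example): the simple message is logged at INFO so both handlers show it, while the traceback is logged at DEBUG so only the file handler records it.

```python
import io
import logging

DEBUG_MODE = True                    # would come from your app's config

file_log, console = io.StringIO(), io.StringIO()

logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)
fh = logging.StreamHandler(file_log)
fh.setLevel(logging.DEBUG)           # file handler: keeps DEBUG details
ch = logging.StreamHandler(console)
ch.setLevel(logging.INFO)            # console handler: INFO and above only
logger.addHandler(fh)
logger.addHandler(ch)

def handle_exception(exc):
    logger.info("An error occurred: %s", exc)      # simple message, both handlers
    if DEBUG_MODE:
        logger.debug("Details:", exc_info=True)    # traceback, file handler only

try:
    1 / 0
except ZeroDivisionError as e:
    handle_exception(e)
```

The handler levels do the selective routing: logger.exception(e) always emits at ERROR, which passes every handler, whereas demoting the traceback record to DEBUG keeps it out of the INFO-level console handler.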
