In this pseudo code, the destructor of o is never called.
class MyObj(object):
    def __del__(self):
        print "Destroyed!"
        do_upon_death()

def parse():
    o = MyObj()
    print "Ok so far..."
    raise Exception
run_in_QThread(parse)
do_other_things()
I see the exception printed on the terminal, but o does not seem to be garbage-collected. The interpreter does not exit upon the exception (the main thread keeps running).
The whole purpose behind this is to be able to do certain cleanup in the destructor when exceptions occur.
Another way to explain this is with a different pattern:
try:
    run_in_thread(parse)
except:
    print "Will I ever get printed?"
Exceptions happening in parse will not get caught.
Even further:
with MyContextManager() as manager:
    run_in_thread(lambda: do_something(manager))
Here manager.__exit__() will get called with all None arguments as if the code completed cleanly, even if there is an exception in do_something.
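A minimal self-contained sketch of that behavior, using threading.Thread directly in place of the hypothetical run_in_thread:

import threading

class MyContextManager(object):
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, tb):
        # Prints (None, None, None): the with block itself raised nothing
        print('__exit__ called with', exc_type, exc_value, tb)
        return False

def do_something(manager):
    raise Exception('fails in the worker thread')

with MyContextManager() as manager:
    t = threading.Thread(target=do_something, args=(manager,))
    t.start()
    t.join()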
The purpose behind this is that I'm running certain tasks which can fail for too many different reasons to handle individually, and these tasks are done partially in the main thread and partially in a worker thread. They can fail in either.
So my initial idea was to keep references to an object while the task was running, so that whether the task completed by exiting the context or by an exception, the references would be dropped and eventually the destructor would run.
I'm trying to keep track of these tasks and keep a visual indicator on a GUI of which tasks are still doing their job.
You can catch the exception after it is raised and do whatever you want with it:
try:
    o = MyObj()
    print "Ok so far..."
    raise Exception
except Exception:
    del o
or use this form if it makes no difference whether an exception was raised or not:
finally:
    del o
If you just need to keep the thread alive, perhaps you should use time.sleep().
As I understand it, only parse() runs in the thread, so you need to emit() a signal about the deletion and handle it in the main thread.
I believe that relying on the destructor is not the best approach; it would be better to catch a signal. Please add more info about why you are doing all this.
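For example, here is a minimal sketch of that signal-based approach, assuming PyQt5 and a running Qt event loop (TaskRunner and the signal names are just illustrative, and parse is the function from the question):

from PyQt5.QtCore import QObject, QThread, pyqtSignal

class TaskRunner(QObject):
    # Illustrative signals; connect these to slots in the main (GUI) thread
    task_finished = pyqtSignal()
    task_failed = pyqtSignal(str)

    def run(self):
        try:
            parse()  # the worker-side job; may raise
        except Exception as e:
            self.task_failed.emit(str(e))
        else:
            self.task_finished.emit()

# Typical wiring inside a running Qt application:
thread = QThread()
runner = TaskRunner()
runner.moveToThread(thread)
thread.started.connect(runner.run)
thread.start()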
Related
I'm wondering if anybody would have an idea how to catch all exceptions in a running thread. My program is started as follows, by a service:
def main():
    global RUNNING
    signal.signal(signal.SIGINT, stopHandler)
    signal.signal(signal.SIGTERM, stopHandler)
    projectAlice = ProjectAlice()
    try:
        while RUNNING:
            time.sleep(0.1)
    except KeyboardInterrupt:
        pass
    finally:
        projectAlice.onStop()
        _logger.info('Project Alice stopped, see you soon!')
So a CTRL-C or a signal can stop it. ProjectAlice runs forever and answers to mqtt topics that are sent by Snips. It uses paho-mqtt with loop_forever. As it's pretty large, errors can occur, even though they shouldn't. I cover as many as I can, but today, as an example, google-translate started to throw errors because it can't use the service anymore (free...). Unhandled errors... So the thread crashes and ProjectAlice is left as is. I would like, as is possible for example in Java, to catch all uncaught exceptions globally and carry on from there.
Here's a simple solution to override the python exception hook, thus enabling you to handle uncaught exceptions:
import sys

def my_custom_exception_hook(exctype, value, tb):
    print('Yo, do stuff here, handle specific exceptions and raise others or whatever')
and before your actual code starts do:
sys.excepthook = my_custom_exception_hook
A simple except Exception: will catch all exceptions except KeyboardInterrupt and SystemExit within the same thread.
You'll have to have the try: except ...: block within the code that is run in the thread to catch exceptions occurring in the thread.
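A minimal sketch of that (worker and risky_operation are just illustrative names):

import logging
import threading

def risky_operation():
    raise RuntimeError('boom')  # stand-in for the real work

def worker():
    try:
        risky_operation()
    except Exception:
        # Handle or log the failure from inside the thread itself
        logging.exception('Uncaught exception in worker thread')

threading.Thread(target=worker).start()

On Python 3.8 and later there is also threading.excepthook, which is invoked for exceptions left uncaught in threads started via the threading module.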
For any possible try-finally block in Python, is it guaranteed that the finally block will always be executed?
For example, let’s say I return while in an except block:
try:
    1/0
except ZeroDivisionError:
    return
finally:
    print("Does this code run?")
Or maybe I re-raise an Exception:
try:
    1/0
except ZeroDivisionError:
    raise
finally:
    print("What about this code?")
Testing shows that finally does get executed for the above examples, but I imagine there are other scenarios I haven't thought of.
Are there any scenarios in which a finally block can fail to execute in Python?
"Guaranteed" is a much stronger word than any implementation of finally deserves. What is guaranteed is that if execution flows out of the whole try-finally construct, it will pass through the finally to do so. What is not guaranteed is that execution will flow out of the try-finally.
A finally in a generator or async coroutine might never run, if the object never executes to conclusion. There are a lot of ways that could happen; here's one:
def gen(text):
    try:
        for line in text:
            try:
                yield int(line)
            except:
                # Ignore blank lines - but catch too much!
                pass
    finally:
        print('Doing important cleanup')

text = ['1', '', '2', '', '3']
if any(n > 1 for n in gen(text)):
    print('Found a number')
print('Oops, no cleanup.')
Note that this example is a bit tricky: when the generator is garbage collected, Python attempts to run the finally block by throwing in a GeneratorExit exception, but here we catch that exception and then yield again, at which point Python prints a warning ("generator ignored GeneratorExit") and gives up. See PEP 342 (Coroutines via Enhanced Generators) for details.
Other ways a generator or coroutine might not execute to conclusion include if the object is just never GC'ed (yes, that's possible, even in CPython), or if an async with awaits in __aexit__, or if the object awaits or yields in a finally block. This list is not intended to be exhaustive.
A finally in a daemon thread might never execute if all non-daemon threads exit first.
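For example, a minimal sketch of the daemon-thread case (worker is just an illustrative name):

import threading
import time

def worker():
    try:
        time.sleep(10)
    finally:
        print('cleanup')  # never runs: the process exits before the sleep finishes

threading.Thread(target=worker, daemon=True).start()
# The main (non-daemon) thread ends here, the interpreter shuts down,
# and the daemon thread is killed without executing its finally block.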
os._exit will halt the process immediately without executing finally blocks.
os.fork may cause finally blocks to execute twice. As well as just the normal problems you'd expect from things happening twice, this could cause concurrent access conflicts (crashes, stalls, ...) if access to shared resources is not correctly synchronized.
Since multiprocessing uses fork-without-exec to create worker processes when using the fork start method (the default on Unix), and then calls os._exit in the worker once the worker's job is done, finally and multiprocessing interaction can be problematic (example).
A C-level segmentation fault will prevent finally blocks from running.
kill -SIGKILL will prevent finally blocks from running. SIGTERM and SIGHUP will also prevent finally blocks from running unless you install a handler to control the shutdown yourself; by default, Python does not handle SIGTERM or SIGHUP.
An exception in finally can prevent cleanup from completing. One particularly noteworthy case is if the user hits control-C just as we're starting to execute the finally block. Python will raise a KeyboardInterrupt and skip every line of the finally block's contents. (KeyboardInterrupt-safe code is very hard to write).
If the computer loses power, or if it hibernates and doesn't wake up, finally blocks won't run.
The finally block is not a transaction system; it doesn't provide atomicity guarantees or anything of the sort. Some of these examples might seem obvious, but it's easy to forget such things can happen and rely on finally for too much.
Yes. Finally always wins.
The only way to defeat it is to halt execution before finally: gets a chance to execute (e.g. crash the interpreter, turn off your computer, suspend a generator forever).
I imagine there are other scenarios I haven't thought of.
Here are a couple more you may not have thought about:
def foo():
    # finally always wins
    try:
        return 1
    finally:
        return 2

def bar():
    # even if he has to eat an unhandled exception, finally wins
    try:
        raise Exception('boom')
    finally:
        return 'no boom'
Depending on how you quit the interpreter, sometimes you can "cancel" finally, but not like this:
>>> import sys
>>> try:
...     sys.exit()
... finally:
...     print('finally wins!')
...
finally wins!
$
Using the precarious os._exit (this falls under "crash the interpreter" in my opinion):
>>> import os
>>> try:
...     os._exit(1)
... finally:
...     print('finally!')
...
$
I'm currently running this code, to test if finally will still execute after the heat death of the universe:
from time import sleep

try:
    while True:
        sleep(1)
finally:
    print('done')
However, I'm still waiting on the result, so check back here later.
According to the Python documentation:
No matter what happened previously, the final-block is executed once the code block is complete and any raised exceptions handled. Even if there's an error in an exception handler or the else-block and a new exception is raised, the code in the final-block is still run.
It should also be noted that if there are multiple return statements, including one in the finally block, then the finally block return is the only one that will execute.
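A small sketch of that behavior:

def fun():
    try:
        return 'from try'
    finally:
        return 'from finally'

print(fun())  # prints 'from finally': the return in finally overrides the earlier one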
Well, yes and no.
What is guaranteed is that Python will always try to execute the finally block. In the case where you return from the block or raise an uncaught exception, the finally block is executed just before actually returning or raising the exception.
(which you could have verified yourself by simply running the code in your question)
The only case I can imagine where the finally block will not be executed is when the Python interpreter itself crashes, for example inside C code or because of a power outage.
I found this one without using a generator function:
import multiprocessing
import time

def fun(arg):
    try:
        print("tried " + str(arg))
        time.sleep(arg)
    finally:
        print("finally cleaned up " + str(arg))
    return foo  # foo is deliberately undefined; see the explanation below

list = [1, 2, 3]
multiprocessing.Pool().map(fun, list)
The sleep can be any code that might run for inconsistent amounts of time.
What appears to be happening here is that the first parallel process to finish leaves the try block successfully, but then attempts to return from the function a value (foo) that hasn't been defined anywhere, which causes an exception. That exception kills the map without allowing the other processes to reach their finally blocks.
Also, if you add the line bar = bazz just after the sleep() call in the try block, then the first process to reach that line throws an exception (because bazz isn't defined), which causes its own finally block to run, but then kills the map, so the other try blocks disappear without reaching their finally blocks, and the first process never reaches its return statement either.
What this means for Python multiprocessing is that you can't trust the exception-handling mechanism to clean up resources in all processes if even one of the processes can have an exception. Additional signal handling or managing the resources outside the multiprocessing map call would be necessary.
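For instance, one way to manage the cleanup outside the map call is a try/finally in the parent process (a sketch; fun here is just a stand-in for the real task):

import multiprocessing

def fun(arg):
    return arg * 2  # stand-in for the real work, which may raise

if __name__ == '__main__':
    pool = multiprocessing.Pool()
    try:
        results = pool.map(fun, [1, 2, 3])
    finally:
        # Cleanup runs once in the parent, regardless of what happened in the workers
        pool.close()
        pool.join()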
You can combine finally with an if statement. The example below checks for a network connection; the finally block always runs, and the if inside it decides whether to do the real work:
try:
    reader1, writer1 = loop.run_until_complete(self.init_socket(loop))
    x = 'connected'
except:
    print("cant connect server transfer")  # open popup
    x = 'failed'
finally:
    if x == 'connected':
        with open('text_file1.txt', "r") as f:
            file_lines = eval(str(f.read()))
    else:
        print("not connected")
All the docs tell us is,
Raised when the user hits the interrupt key (normally Control-C or Delete). During execution, a check for interrupts is made regularly.
But from the point of view of the code, when can I see this exception? Does it occur during statement execution? Only between statements? Can it happen in the middle of an expression?
For example:
file_ = open('foo')
# <-- can a KeyboardInterrupt be raised here, after the successful
#     completion of open but prior to the try? -->
try:
    ...  # try some things with file_
finally:
    ...  # cleanup
Will this code leak during a well-timed KeyboardInterrupt? Or is it raised during the execution of some statements or expressions?
According to a note in the unrelated PEP 343:
Even if you write bug-free code, a KeyboardInterrupt exception can still cause it to exit between any two virtual machine opcodes.
So it can occur essentially anywhere. It can indeed occur during evaluation of a single expression. (This shouldn't be surprising, since an expression can include function calls, and pretty much anything can happen inside a function call.)
Yes, a KeyboardInterrupt can occur in the place you marked.
To deal with this, you should use a with block:
with open('foo') as file_:
    # do some things
    raise KeyboardInterrupt
# file resource is closed no matter what, even if a KeyboardInterrupt is raised
However, the exception could occur even between the open() call and the assignment to file_. It's probably not worth worrying about this, because usually a ctrl-c will mean your program is about to end, so the "leaked" file handle will be cleaned up by the OS. But if you know that it is important, you can use a signal handler to catch the signal that raises KeyboardInterrupt (SIGINT).
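A minimal sketch of that signal-handler approach (what the handler does is up to you; here it only prints, but it could set a flag to be checked later):

import signal

def handle_sigint(signum, frame):
    # Called instead of raising KeyboardInterrupt; record or defer the interrupt here
    print('SIGINT received; finishing the critical section first')

signal.signal(signal.SIGINT, handle_sigint)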
I have a python thread that runs every 20 seconds. The code is:
import threading

def work():
    try:
        # code here
        pass
    except (SystemExit, KeyboardInterrupt):
        raise
    except Exception, e:
        logger.error('error somewhere', exc_info=True)
    threading.Timer(20, work).start()
It usually runs completely fine. Once in a while, it'll return an error that doesn't make much sense. The errors are the same two errors. The first one might be legitimate, but the errors after that definitely aren't. Then after that, it returns that same error every time it runs the thread. If I kill the process and start over, then it runs cleanly. I have absolutely no idea what's going on here. Help please.
As currently defined in your question, you are most likely exceeding your maximum recursion depth. I can't be certain because you have omitted any opportunities for flow control that may be evident in your try block. Furthermore, every time your code fails to execute, the general catch for exceptions will log the exception and then bump you into a new timer with a new logger (assuming you are declaring that in the try block). I think you probably meant to do the following:
import threading
import time

def work():
    try:
        # code here
        pass
    except (SystemExit, KeyboardInterrupt):
        raise
    except Exception, e:
        logger.error('error somewhere', exc_info=True)

t = threading.Timer(20, work)
t.start()

i = 0
while True:
    time.sleep(1)
    i += 1
    if i > 1000:
        break

t.cancel()
If this is in fact the case, the reason your code was not working is that when you call your work function the first time, it runs and then, right at the end, starts another work function in a new timer. This happens ad infinitum until the stack fills up, Python coughs, and gets angry that you have recursed (called a function from within itself) too many times.
My code fix pulls the timer outside of the function so we create a single timer, which calls the work function once every 20 seconds.
Because threading.Timer objects run in separate threads, we also need to wait around in the main thread. To do this, I added a simple while loop that will run for 1000 seconds and then close the timer and exit. If we didn't wait around in the main loop, it would start your timer and then close out immediately, causing Python to clean up the timer before it executed even once.
I'm writing a command line utility in Python which, since it is production code, ought to be able to shut down cleanly without dumping a bunch of stuff (error codes, stack traces, etc.) to the screen. This means I need to catch keyboard interrupts.
I've tried using both a try catch block like:
if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print 'Interrupted'
        sys.exit(0)
and catching the signal itself (as in this post):
import signal
import sys

def sigint_handler(signal, frame):
    print 'Interrupted'
    sys.exit(0)

signal.signal(signal.SIGINT, sigint_handler)
Both methods seem to work quite well during normal operation. However, if the interrupt comes during cleanup code at the end of the application, Python seems to always print something to the screen. Catching the interrupt gives
^CInterrupted
Exception KeyboardInterrupt in <bound method MyClass.__del__ of <path.to.MyClass object at 0x802852b90>> ignored
whereas handling the signal gives either
^CInterrupted
Exception SystemExit: 0 in <Finalize object, dead> ignored
or
^CInterrupted
Exception SystemExit: 0 in <bound method MyClass.__del__ of <path.to.MyClass object at 0x802854a90>> ignored
Not only are these errors ugly, they're not very helpful (especially to an end user with no source code)!
The cleanup code for this application is fairly big, so there's a decent chance that this issue will be hit by real users. Is there any way to catch or block this output, or is it just something I'll have to deal with?
Check out this thread; it has some useful information about exiting and tracebacks.
If you are more interested in just killing the program, try something like this (this will take the legs out from under the cleanup code as well):
if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print('Interrupted')
        try:
            sys.exit(130)
        except SystemExit:
            os._exit(130)
[Edited to change the exit code as suggested in comments. 130 is the code typically returned on Linux for a script terminated by Ctrl-C. We may not be on Linux, but the important thing is to return a non-zero value, and 130 is as good as any.]
You could ignore SIGINTs after shutdown starts by calling signal.signal(signal.SIGINT, signal.SIG_IGN) before you start your cleanup code.
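A minimal sketch of that approach (main and cleanup are hypothetical stand-ins for the real entry point and shutdown code):

import signal
import sys

def main():
    pass  # normal operation goes here

def cleanup():
    pass  # the lengthy shutdown code goes here

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print('Interrupted')
    # Ignore further Ctrl-C presses so the cleanup below cannot be interrupted
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    cleanup()
    sys.exit(0)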