What is the meaning of "filching" in the context of the logging module - Python

Going over the logging module and saw this:
# next bit filched from 1.5.2's inspect.py
def currentframe():
    """Return the frame object for the caller's stack frame."""
    try:
        raise Exception
    except:
        return sys.exc_info()[2].tb_frame.f_back

if hasattr(sys, '_getframe'): currentframe = lambda: sys._getframe(3)
# done filching
What is the meaning of the phrase "filching" in this context?

This is code you don't want to take as a good example! First of all, it's specific to the CPython implementation, and so won't work under PyPy, Jython, IronPython and so on.
I presume the author raises the exception in order to access the stack frame of the calling routine, or perhaps its caller's caller.
"Filching" is taking without permission, so this is just a guilt-ridden way of saying "I copied and pasted from an open source library."

As barny commented, the comment that reads:
# next bit filched from 1.5.2's inspect.py
uses the word to mean "copied from". That is, this particular code sequence has been around for a very long time, since Python version 1.5.2.
What's going on here (edit: this part of the question got edited away!) is simple yet subtle. Any exception causes the Python system to locate the innermost, currently-active except handler. In this case, that's the very next line—so:
try:
    raise Exception
except:
    ...
proceeds directly to the ... line. However, the raise has a side effect, which is the key to the whole thing. The side effect is that the raise makes the traceback stack contain, as the most recent entry,[1] the execution state pointing to the raise line itself.
The sys.exc_info() function returns a tuple with three elements: the exception's type, the exception's value—no value was passed here because the handler doesn't need one—and the (entire) traceback stack. The [2] extracts this traceback stack from the tuple, discarding the exception type and value.
The structure of the traceback stack is somewhat complicated, but there is a .tb_frame attribute in each traceback stack instance. This contains information about the stack frame that was active when the exception occurred. Since this is a stack of function activations, its predecessor is the state that was active at the call to currentframe, so this is the caller's frame.
This method of locating the caller's frame is not very efficient (and, as holdenweb points out, specific to the CPython interpreter), so if sys has a _getframe function, the file re-binds currentframe to invoke sys._getframe(3). (I'm not sure what the constant 3 is doing here since the other version effectively returns what sys._getframe(0) would return. Edit 2: on further inspection, the magic constant 3 takes care of the log handler calling _log which calls findCaller which calls currentframe. This is another efficiency hack since findCaller climbs up through each stack frame looking for one that occurs in some file other than the logging module code itself. This starts it at a better point.)
[1] Remember, a stack is any data structure that behaves in a last-in first-out (LIFO) fashion. The Python interpreter manages a bunch of different, but more or less simultaneous, stacks, including the exception handlers and the normal function-call mechanism.
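To see both routes side by side, here is a minimal, self-contained sketch (my own, not the logging module's code); the depth passed to sys._getframe is 1 here because there is no intermediate findCaller/_log machinery in between:

import sys

def frame_via_exception():
    # Same trick as the filched code: the raise records this frame in the
    # traceback, and .f_back steps out to the caller.
    try:
        raise Exception
    except Exception:
        return sys.exc_info()[2].tb_frame.f_back

def frame_via_getframe():
    # CPython-only shortcut: ask the interpreter for the caller's frame directly.
    return sys._getframe(1)

def caller():
    print(frame_via_exception().f_code.co_name)   # prints: caller
    print(frame_via_getframe().f_code.co_name)    # prints: caller

caller()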


Is it possible to catch an exception from outside code that is already catching it?

This is a hard question to phrase, but here's a stripped-down version of the situation. I'm using some library code that accepts a callback. It has its own error-handling, and raises an error if anything goes wrong while executing the callback.
class LibraryException(Exception):
    pass

def library_function(callback, string):
    try:
        # (does other stuff here)
        callback(string)
    except:
        raise LibraryException('The library code hit a problem.')
I'm using this code inside an input loop. I know of potential errors that could arise in my callback function, depending on values in the input. If that happens, I'd like to reprompt, after getting helpful feedback from its error message. I imagine it looking something like this:
class MyException(Exception):
    pass

def my_callback(string):
    raise MyException("Here's some specific info about my code hitting a problem.")

while True:
    something = input('Enter something: ')
    try:
        library_function(my_callback, something)
    except MyException as e:
        print(e)
        continue
Of course, this doesn't work, because MyException will be caught within library_function, which will raise its own (much less informative) Exception and halt the program.
The obvious thing to do would be to validate my input before calling library_function, but that's a circular problem, because parsing is what I'm using the library code for in the first place. (For the curious, it's Lark, but I don't think my question is specific enough to Lark to warrant cluttering it with all the specific details.)
One alternative would be to alter my code to catch any error (or at least the type of error the library generates), and directly print the inner error message:
def my_callback(string):
    error_str = "Here's some specific info about my code hitting a problem."
    print(error_str)
    raise MyException(error_str)

while True:
    something = input('Enter something: ')
    try:
        library_function(my_callback, something)
    except LibraryException:
        continue
But I see two issues with this. One is that I'm throwing a wide net, potentially catching and ignoring errors other than in the scope I'm aiming at. Beyond that, it just seems... inelegant, and unidiomatic, to print the error message, then throw the exception itself into the void. Plus the command line event loop is only for testing; eventually I plan to embed this in a GUI application, and without printed output, I'll still want to access and display the info about what went wrong.
What's the cleanest and most Pythonic way to achieve something like this?
There seem to be many ways to achieve what you want, though I can't say which one is the most robust. I'll try to explain all the methods that seemed apparent to me; perhaps you'll find one of them useful.
I'll be using the example code you provided to demonstrate these methods. Here's a refresher on how it looks-
class MyException(Exception):
    pass

def my_callback(string):
    raise MyException("Here's some specific info about my code hitting a problem.")

def library_function(callback, string):
    try:
        # (does other stuff here)
        callback(string)
    except:
        raise Exception('The library code hit a problem.')
The simplest approach - traceback.format_exc
import traceback

try:
    library_function(my_callback, 'boo!')
except:
    # NOTE: Remember to keep the `chain` parameter of `format_exc` set to `True` (default)
    tb_info = traceback.format_exc()
This does not require much know-how about exceptions and stack traces themselves, nor does it require you to pass any special frame/traceback/exception to the library function. But look at what this returns (as in, the value of tb_info)-
'''
Traceback (most recent call last):
  File "path/to/test.py", line 14, in library_function
    callback(string)
  File "path/to/test.py", line 9, in my_callback
    raise MyException("Here's some specific info about my code hitting a problem.")
MyException: Here's some specific info about my code hitting a problem.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "path/to/test.py", line 19, in <module>
    library_function(my_callback, 'boo!')
  File "path/to/test.py", line 16, in library_function
    raise Exception('The library code hit a problem.')
Exception: The library code hit a problem.
'''
That's a string, the same thing you'd see if you just let the exception happen without catching. Notice the exception chaining here, the exception at the top is the exception that happened prior to the exception at the bottom. You could parse out all the exception names-
import re
exception_list = re.findall(r'^(\w+): (.+)$', tb_info, flags=re.M)
With that, you'll get [('MyException', "Here's some specific info about my code hitting a problem."), ('Exception', 'The library code hit a problem.')] in exception_list
Although this is the easiest way out, it's not very context aware. I mean, all you get are exception names and messages as plain strings. Regardless, if that is what suits your needs - I don't particularly see a problem with this.
The "robust" approach - recursing through __context__/__cause__
Python itself keeps track of the exception trace history, the exception currently at hand, the exception that caused this exception and so on. You can read about the intricate details of this concept in PEP 3134
Whether or not you go through the entirety of the PEP, I urge you to at least familiarize yourself with implicitly chained exceptions and explicitly chained exceptions. Perhaps this SO thread will be useful for that.
As a small refresher, raise ... from is for explicitly chaining exceptions. The method you show in your example is implicit chaining.
Now, you need to make a mental note - TracebackException#__cause__ is for explicitly chained exceptions and TracebackException#__context__ is for implicitly chained exceptions. Since your example uses implicit chaining, you can simply follow __context__ backwards and you'll reach MyException. In fact, since this is only one level of nesting, you'll reach it instantly!
import sys
import traceback

try:
    library_function(my_callback, 'boo!')
except:
    previous_exc = traceback.TracebackException(*sys.exc_info()).__context__
This first constructs the TracebackException from sys.exc_info. sys.exc_info returns a tuple of (exc_type, exc_value, exc_traceback) for the exception at hand (if any). Notice that those 3 values, in that specific order, are exactly what you need to construct TracebackException - so you can simply destructure it using * and pass it to the class constructor.
This returns a TracebackException object about the current exception. The exception that it is implicitly chained from is in __context__, the exception that it is explicitly chained from is in __cause__.
Note that both __cause__ and __context__ will return either a TracebackException object, or None (if you're at the end of the chain). This means, you can call __cause__/__context__ again on the return value and basically keep going till you reach the end of the chain.
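For illustration, here's a small sketch of my own (the function names are mine, not from the question) showing where each kind of chaining lands:

import sys
import traceback

def implicit():
    try:
        raise ValueError("inner")
    except ValueError:
        raise RuntimeError("outer")           # implicit: recorded in __context__

def explicit():
    try:
        raise ValueError("inner")
    except ValueError as e:
        raise RuntimeError("outer") from e    # explicit: recorded in __cause__

for fn in (implicit, explicit):
    try:
        fn()
    except RuntimeError:
        tb = traceback.TracebackException(*sys.exc_info())
        # implicit: __context__ is the ValueError, __cause__ is None
        # explicit: both __context__ and __cause__ point at the ValueError
        print(fn.__name__, tb.__context__, tb.__cause__)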
Printing a TracebackException object just prints the message of the exception; if you want the class itself (the actual class, not a string), use .exc_type:
print(previous_exc)
# prints "Here's some specific info about my code hitting a problem."
print(previous_exc.exc_type)
# prints <class '__main__.MyException'>
Here's an example of recursing through .__context__ and printing the types of all exceptions in the implicit chain. (You can do the same for .__cause__)
def classes_from_excs(exc: traceback.TracebackException):
    print(exc.exc_type)
    if not exc.__context__:
        # chain exhausted
        return
    classes_from_excs(exc.__context__)
Let's use it!
try:
    library_function(my_callback, 'boo!')
except:
    classes_from_excs(traceback.TracebackException(*sys.exc_info()))
That will print-
<class 'Exception'>
<class '__main__.MyException'>
Once again, the point of this is to be context aware. Ideally, printing isn't the thing you'll want to do in a practical environment, you have the class objects themselves on your hands, with all the info!
NOTE: For implicitly chained exceptions, if an exception is explicitly suppressed, it'll be a bad day trying to recover the chain - regardless, you might give __suppress_context__ a shot.
The painful way - walking through traceback.walk_tb
This is probably the closest you can get to the low level stuff of exception handling. If you want to capture entire frames of information instead of just the exception classes and messages and such, you might find walk_tb useful....and a bit painful.
import sys
import traceback

try:
    library_function(my_callback, 'foo')
except:
    tb_gen = traceback.walk_tb(sys.exc_info()[2])
There is....entirely too much to discuss here. .walk_tb takes a traceback object; you may remember from the previous method that the 2nd index of the tuple returned by sys.exc_info is just that. It then returns a generator of tuples of frame object and int (Iterator[Tuple[FrameType, int]]).
These frame objects have all kinds of intricate information. Though, whether or not you'll actually find exactly what you're looking for, is another story. They may be complex, but they aren't exhaustive unless you play around with a lot of frame inspection. Regardless, this is what the frame objects represent.
What you do with the frames is up to you. They can be passed to many functions. You can pass the entire generator to StackSummary.extract to get FrameSummary objects, you can iterate through each frame to have a look at [0].f_locals (the [0] on Tuple[FrameType, int] returns the actual frame object), and so on.
for tb in tb_gen:
    print(tb[0].f_locals)
That will give you a dict of the locals for each frame. Within the first tb from tb_gen, you'll see MyException as part of the locals....among a load of other stuff.
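If you'd rather have FrameSummary objects than raw frames, here's a minimal sketch of the StackSummary.extract route mentioned above (self-contained, with a throwaway boom() function of my own standing in for the library call):

import sys
import traceback

def boom():
    raise ValueError("inner problem")

try:
    boom()
except Exception:
    # walk_tb yields (frame, lineno) pairs; StackSummary.extract turns them into
    # FrameSummary objects carrying filename, line number, function name and source line
    summary = traceback.StackSummary.extract(traceback.walk_tb(sys.exc_info()[2]))
    for fs in summary:
        print(fs.filename, fs.lineno, fs.name, fs.line)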
I have a creeping feeling I have overlooked some methods, most probably with inspect. But I hope the above methods will be good enough so that no one has to go through the jumble that is inspect :P
Chase's answer above is phenomenal. For completeness's sake, here's how I implemented their second approach in this situation. First, I made a function that can search the stack for the specified error type. Even though the chaining in my example is implicit, this should be able to follow implicit and/or explicit chaining:
import sys
import traceback

def find_exception_in_trace(exc_type):
    """Return latest exception of exc_type, or None if not present"""
    tb = traceback.TracebackException(*sys.exc_info())
    prev_exc = tb.__context__ or tb.__cause__
    while prev_exc:
        if prev_exc.exc_type == exc_type:
            return prev_exc
        prev_exc = prev_exc.__context__ or prev_exc.__cause__
    return None
With that, it's as simple as:
while True:
    something = input('Enter something: ')
    try:
        library_function(my_callback, something)
    except LibraryException as exc:
        if (my_exc := find_exception_in_trace(MyException)):
            print(my_exc)
            continue
        raise exc
That way I can access my inner exception (and print it for now, although eventually I may do other things with it) and continue. But if my exception wasn't in there, I simply reraise whatever the library raised. Perfect!

Catch all un-caught exceptions in python [duplicate]

Let's say I want to be able to log to file every time any exception is raised, anywhere in my program. I don't want to modify any existing code.
Of course, this could be generalized to being able to insert a hook every time an exception is raised.
Would the following code be considered safe for doing such a thing?
class MyException(Exception):
    def my_hook(self):
        print('---> my_hook() was called');

    def __init__(self, *args, **kwargs):
        global BackupException;
        self.my_hook();
        return BackupException.__init__(self, *args, **kwargs);

def main():
    global BackupException;
    global Exception;
    BackupException = Exception;
    Exception = MyException;
    raise Exception('Contrived Exception');

if __name__ == '__main__':
    main();
If you want to log uncaught exceptions, just use sys.excepthook.
I'm not sure I see the value of logging all raised exceptions, since lots of libraries will raise/catch exceptions internally for things you probably won't care about.
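For reference, a minimal sketch of the sys.excepthook approach (the log file name and logger setup are my own choices):

import sys
import logging

logging.basicConfig(filename='errors.log', level=logging.ERROR)

def log_uncaught(exc_type, exc_value, exc_tb):
    # Record the uncaught exception, then defer to the default handler
    logging.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_tb))
    sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = log_uncaught

# Any exception that now reaches the top level is logged before the usual traceback is printed:
raise RuntimeError("this will be logged and then printed as usual")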
Your code, as far as I can tell, would not work.
__init__ has to return None and you are trying to return an instance of BackupException. In general, if you would like to change what instance is returned when instantiating a class, you should override __new__.
Unfortunately you can't change any of the attributes on the Exception class. If that were an option you could have changed Exception.__new__ and placed your hook there.
The "global Exception" trick will only work for code in the current module. Exception is a builtin, and if you really want to change it globally you need to import __builtin__; __builtin__.Exception = MyException
Even if you changed __builtin__.Exception it will only affect future uses of Exception; subclasses that have already been defined will use the original Exception class and will be unaffected by your changes. You could loop over Exception.__subclasses__() and change the __bases__ for each one of them to insert your Exception subclass there.
There are subclasses of Exception that are also built-in types that you also cannot modify, although I'm not sure you would want to hook any of them (think StopIteration).
I think that the only decent way to do what you want is to patch the Python sources.
This code will not affect any exception classes that were created before the start of main, and most of the exceptions that happen will be of such kinds (KeyError, AttributeError, and so forth). And you can't really affect those "built-in exceptions" in the most important sense -- if anywhere in your code is e.g. a 1/0, the real ZeroDivisionError will be raised (by Python's own internals), not whatever else you may have bound to that exceptions' name.
So, I don't think your code can do what you want (despite all the semicolons, it's still supposed to be Python, right?) -- it could be done by patching the C sources for the Python runtime, essentially (e.g. by providing a hook potentially called on any exception, even one that's later caught) -- such a hook currently does not exist because the use cases for it would be pretty rare (for example, a StopIteration is always raised at the normal end of every for loop -- and caught, too; why on Earth would one want to trace that, and the many other routine uses of caught exceptions in the Python internals and standard library?!).
Download pypy and instrument it.

How to create a traceback object

I want to create a traceback like the one returned by sys.exc_info()[2]. I don't want a list of lines, I want an actual traceback object:
<traceback object at 0x7f6575c37e48>
How can I do this? My goal is to have it include the current stack minus one frame, so it looks like the caller is the most recent call.
Since Python 3.7 you can create traceback objects dynamically from Python.
To create traceback identical to one created by raise:
raise Exception()
use this:
import sys
import types

def exception_with_traceback(message):
    tb = None
    depth = 0
    while True:
        try:
            frame = sys._getframe(depth)
            depth += 1
        except ValueError as exc:
            break
        tb = types.TracebackType(tb, frame, frame.f_lasti, frame.f_lineno)
    return Exception(message).with_traceback(tb)
Relevant documentation is here:
https://docs.python.org/3/library/types.html#types.TracebackType
https://docs.python.org/3/reference/datamodel.html#traceback-objects
https://docs.python.org/3/library/sys.html#sys._getframe
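To try out the exception_with_traceback helper above, here's a quick usage sketch of my own (not part of the answer):

import traceback

def demo():
    exc = exception_with_traceback("synthesized for demonstration")
    # The attached traceback covers the whole current stack, with the helper itself
    # as the innermost ("most recent") entry
    traceback.print_exception(type(exc), exc, exc.__traceback__)

demo()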
There's no documented way to create traceback objects.
None of the functions in the traceback module create them. You can of course access the type as types.TracebackType, but if you call its constructor you just get a TypeError: cannot create 'traceback' instances.
The reason for this is that tracebacks contain references to internals that you can't actually access or generate from within Python.
However, you can access stack frames, and everything else you'd need to simulate a traceback is trivial. You can even write a class that has tb_frame, tb_lasti, tb_lineno, and tb_next attributes (using the info you can get from traceback.extract_stack and one of the inspect functions), which will look exactly like a traceback to any pure-Python code.
So there's a good chance that whatever you really want to do is doable, even though what you're asking for is not.
If you really need to fool another library—especially one written in C and using the non-public API—there are two potential ways to get a real traceback object. I haven't gotten either one to work reliably. Also, both are CPython-specific, require not just using the C API layer but using undocumented types and functions that could change at any moment, and offer the potential for new and exciting opportunities to segfault your interpreter. But if you want to try, they may be useful for a start.
The PyTraceBack type is not part of the public API. But (except for being defined in the Python directory instead of the Object directory) it's built as a C API type, just not documented. So, if you look at traceback.h and traceback.c for your Python version, you'll see that… well, there's no PyTraceBack_New, but there is a PyTraceBack_Here that constructs a new traceback and swaps it into the current exception info. I'm not sure it's valid to call this unless there's a current exception, and if there is a current exception you might be screwing it up by mutating it like this, but with a bit of trial&crash or reading the code, hopefully you can get this to work:
import ctypes
import sys

ctypes.pythonapi.PyTraceBack_Here.argtypes = (ctypes.py_object,)
ctypes.pythonapi.PyTraceBack_Here.restype = ctypes.c_int

def _fake_tb():
    try:
        1/0
    except:
        frame = sys._getframe(2)
        if ctypes.pythonapi.PyTraceBack_Here(frame):
            raise RuntimeError('Oops, probably hosed the interpreter')
        raise

def get_tb():
    try:
        _fake_tb()
    except ZeroDivisionError as e:
        return e.__traceback__
As a fun alternative, we can try to mutate a traceback object on the fly. To get a traceback object, just raise and catch an exception:
try: 1/0
except Exception as e: tb = e.__traceback__  # or sys.exc_info()[2]
The only problem is that it's pointing at your stack frame, not your caller's, right? If tracebacks were mutable, you could fix that easily:
tb.tb_lasti, tb.tb_lineno = tb.tb_frame.f_lasti, tb.tb_frame.f_lineno
tb.tb_frame = tb.tb_frame.f_back
And there are no methods for setting these things, either. Notice that it doesn't have a setattro, and its getattro works by building a __dict__ on the fly, so obviously the only way we're getting at this stuff is through the underlying struct. Which you should really build with ctypes.Structure, but as a quick hack:
p8 = ctypes.cast(id(tb), ctypes.POINTER(ctypes.c_ulong))
p4 = ctypes.cast(id(tb), ctypes.POINTER(ctypes.c_uint))
Now, for a normal 64-bit build of CPython, p8[:2] / p4[:4] are the normal object header, and after that come the traceback-specific fields, so p8[3] is the tb_frame, and p4[8] and p4[9] are the tb_lasti and tb_lineno, respectively. So:
p4[8], p4[9] = tb.tb_frame.f_lasti, tb.tb_frame.f_lineno
But the next part is a bit harder, because tb_frame isn't actually a PyObject *, it's just a raw struct _frame *, so off you go to frameobject.h, where you see that it really is a PyFrameObject * so you can just use the same trick again. Just remember to _ctypes.Py_INCREF the frame's next frame and Py_DECREF the frame itself after reassigning p8[3] to point at pf8[3], or as soon as you try to print the traceback you'll segfault and lose all the work you'd done writing this up. :)
"In order to better support dynamic creation of stack traces, types.TracebackType can now be instantiated from Python code, and the tb_next attribute on tracebacks is now writable."
This is explained (for Python 3.7+) in the documentation here: https://docs.python.org/3/library/types.html#types.TracebackType
As others have pointed out, it's not possible to create traceback objects. However, you can write your own class that has the same properties:
from collections import namedtuple
fake_tb = namedtuple('fake_tb', ('tb_frame', 'tb_lasti', 'tb_lineno', 'tb_next'))
You can still pass instances of this class to some Python functions. Most notably, traceback.print_exception(...), which produces the same output as Python's standard excepthook.
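As a rough illustration of that, here's a sketch of my own (it leans on a real frame object for the fields print_exception actually reads, so treat it as illustrative only):

import sys
import traceback
from collections import namedtuple

fake_tb = namedtuple('fake_tb', ('tb_frame', 'tb_lasti', 'tb_lineno', 'tb_next'))

def demo():
    frame = sys._getframe(0)  # this function's own frame
    tb = fake_tb(frame, frame.f_lasti, frame.f_lineno, None)
    # Renders the fake traceback the same way the default excepthook would
    traceback.print_exception(ValueError, ValueError('example'), tb)

demo()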
If you (like me) encountered this problem because you are working on a PyQt-based GUI app, you may also be interested in a more comprehensive solution laid out in this blog post.

How often should custom exceptions be defined in python?

In trying to eliminate a potential race condition in a Python module I wrote to monitor some specialized workflows, I learned about Python's "easier to ask forgiveness than permission" (EAFP) coding style, and I'm now raising lots of custom exceptions with try/except blocks where I used to use if/thens.
I'm new to Python, and this EAFP style makes sense logically and seems to make my code more robust, but something about it feels way overboard. Is it bad practice to define one or more exceptions per method?
These custom exceptions tend to be useful only to a single method and, while it feels like a functionally correct solution, it seems like a lot of code to maintain.
Here's a sample method as an example:
import os
import time

class UploadTimeoutFileMissing(Exception):
    def __init__(self, value):
        self.parameter = value
    def __str__(self):
        return repr(self.parameter)

class UploadTimeoutTooSlow(Exception):
    def __init__(self, value):
        self.parameter = value
    def __str__(self):
        return repr(self.parameter)

def check_upload(file, timeout_seconds, max_age_seconds, min_age_seconds):
    timeout = time.time() + timeout_seconds
    ## Check until file found or timeout
    while (time.time() < timeout):
        time.sleep(5)
        try:
            filetime = os.path.getmtime(file)
            filesize = os.path.getsize(file)
        except OSError:
            print "File not found %s" % file
            continue
        fileage = time.time() - filetime
        ## Make sure file isn't pre-existing
        if fileage > max_age_seconds:
            print "File too old %s" % file
            continue
        ## Make sure file isn't still uploading
        elif fileage <= min_age_seconds:
            print "File too new %s" % file
            continue
        return(filetime, filesize)
    ## Timeout
    try:
        filetime
        filesize
        raise UploadTimeoutTooSlow("File still uploading")
    except NameError:
        raise UploadTimeoutFileMissing("File not sent")
define one or more exceptions per method
If you mean that the exception is actually defined per method as in "within the method body", then yes. That is bad practice. This is true also if you define two exceptions that would relate to the same error but you create two because two different methods raise them.
If you ask whether it is bad practice to raise more than one exception per method, then no, that is good practice. And if the errors are not of the same category, it's perfectly ok to define several exceptions per module.
In general, for larger modules you will define more than one exception. If you were working on some arithmetic library and you defined a ZeroDivisionError and an OverflowError (if they weren't already defined in Python, because you can of course re-use those), that would be perfectly fine.
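A minimal sketch of what such a module-level layout might look like (the names here are hypothetical, not from the answer):

class MyMathError(Exception):
    """Base class for every error this hypothetical arithmetic module raises."""

class DivisionError(MyMathError):
    pass

class RangeError(MyMathError):
    pass

def divide(a, b):
    # Both errors belong to the module as a whole, not to any single method
    if b == 0:
        raise DivisionError("cannot divide %r by zero" % (a,))
    return a / b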
Is is bad practice to define one or more exceptions per method?
Yes.
One per module is more typical. It depends, of course, on the detailed semantics. The question boils down to this: "What will you really try to catch?"
If you're never going to use except ThisVeryDetailedException: in your code, then your very detailed exception isn't very helpful.
If you can do this: except Error as e: if e.some_special_case for the very few times it matters, then you can easily simplify to one exception per module and handle your special cases as attributes of the exception rather than different types of exceptions.
The common suggestion (one per module, named Error) means that your code will often look like this.
try:
    something
except some_module.Error as e:
    carry on
This gives you a nice naming convention: module.Error. This covers numerous sins.
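Sketched out, with a hypothetical module-level Error and the e.some_special_case attribute mentioned above:

class Error(Exception):
    """The module's single exception; details travel as attributes."""
    def __init__(self, message, some_special_case=False):
        super(Error, self).__init__(message)
        self.some_special_case = some_special_case

def something():
    raise Error("routine failure")   # or: raise Error("rare case", some_special_case=True)

try:
    something()
except Error as e:
    if e.some_special_case:
        print("handle the rare case: %s" % e)
    else:
        print("carry on: %s" % e)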
On an unrelated note, if you think you've got "potential race condition" you should probably redesign things correctly or stop trying to use threads or switch to multiprocessing. If you use multiprocessing, you'll find that it's very easy to avoid race conditions.
I'm going to weigh in on this because custom exceptions are dear to my heart. I'll explain my circumstances and the reader can weigh them against their own.
I'm the pipeline architect for a visual effects company - most of what I do involves developing what I call the "Facility API" - it's a system of a great many modules which handle everything from locating things on the filesystem, managing module/tool/project configuration, to handling datatypes from various CG applications to enable collaboration.
I go to great lengths to try to ensure that Python's built-in exceptions never bubble up. Since our developers will be relying on an ecosystem of existing modules to build their own tools on top of, having the API let a generic IOError escape is counterproductive - especially since the calling routine might not even be aware that it's reading the filesystem (abstraction is a beautiful thing). If the underlying module is unable to express something meaningful about that error, more work needs to be done.
My approach to solving this is to create a facility exception class from which all other facility exceptions are derived. There are subclasses of that for specific types of task or specific host applications - which allows me to customize error handling (for instance, exceptions raised in Maya will launch a UI to aid in troubleshooting since the usual exception would be raised in an inconspicuous console and would often be missed).
All sorts of reporting is built into the facility exception class - exceptions don't appear to a user without also being reported internally. For a range of exceptions, I get an IM any time one is raised. Others simply report quietly into a database that I can query for recent (daily or weekly) reports. Each links to EXTENSIVE data captured from the user session - typically including a screenshot, stack trace, system configuration, and a whole lot more. This means I can effectively troubleshoot problems before they're reported - and have more information at my fingertips than most users are likely able to provide.
Very fine gradations in purpose are discouraged - the exceptions accept passed values (sometimes even a dictionary instead of a string, if we want to provide plenty of data for troubleshooting) to include in their formatted output.
So no - I don't think defining an exception or two per module is unreasonable - but they need to be meaningful and add something to the project. If you're just wrapping an IOError to raise MyIOError("I got an IO error!"), then you may want to rethink that.
I don't think it's necessary to have an extremely specific exception for every possible scenario. A single UploadTimeoutError would probably be fine, and you can just customize the exception string - that's what the strings are for, after all. Note how Python doesn't have a separate exception for every possible type of syntax error, just a general SyntaxError.
Also - is it actually necessary to define the __init__ and __str__ methods for each of your custom exceptions? As far as I can tell, if you're not implementing any unusual behavior, you don't need to add any code:
>>> class MyException(Exception): pass
...
>>> raise MyException("oops!")
Traceback (most recent call last):
  File "<ipython console>", line 1, in <module>
MyException: oops!
>>> str(MyException("oops!"))
'oops!'

How can I call sys.exc_clear() for another part of a Python program?

I have a class (see this previous question if you are interested) which tracks, amongst other things, errors. The class is called in a variety of situations, one of them is during an exception. Even though my class calls sys.exc_clear() as part of the regular course of events, the next time the class is called (even when there is no error, such as when I am just throwing some statistical information in one of the class functions) the sys.exc_info() tuple is still full of the original non-None objects.
I can call sys.exc_clear() and sys.exc_info() is just a bunch of Nones while the thread is executing in that class, but as soon as execution returns to the main program, this ceases to be valid. My reading of the documentation suggests that this is because the execution stack has returned to another frame. This seems to be a situation tangentially mentioned in this question previously.
So, my only option appears to be tacking sys.exc_clear() after each except in my main program. I have tried it in a few places and it works. I can do this, but it seems tedious and ugly. Is there another way?
ADDITION:
Imagine the main program as
import tracking

def Important_Function():
    try:
        something that fails
    except:
        myTrack.track(level='warning', technical='Failure in Important_Function' ...)
    return

def Other_Function():
    myTrack.track(level='info', technical='Total=0' ...)
    return

myTrack = tracking.Tracking()
myTrack.track(level='debug', parties=['operator'], technical='Started the program.')
Important_Function()
Other_Function()
Then the Tracking code as:
import sys
import inspect
import traceback

... lots of initialization stuff

def track(self, level='info', technical=None, parties=None, clear=True ...):
    # What are our errors?
    errors = {}
    errortype, errorvalue, errortraceback = sys.exc_info()
    errortype, errorvalue = sys.exc_info()[:2]
    errors['type'] = None
    errors['class'] = errortype
    errors['value'] = errorvalue
    errors['arguments'] = None
    errors['traceback'] = None
    try:
        errors['type'] = str(errortype.__name__)
        try:
            errors['arguments'] = str(errorvalue.__dict__['args'])
        except KeyError:
            pass
        errors['traceback'] = traceback.format_tb(errortraceback, maxTBlevel)
    except:
        pass
    if clear == True:
        sys.exc_clear()
No multi-threading that I'm aware of. If I print sys.exc_info() right after calling sys.exc_clear(), everything has been cleared. But once I return from the track function and then re-enter it, even without errors, sys.exc_info() is back with a tuple full of the previous, old errors.
Please note that the last exception information is a per-thread construct. An excerpt from sys.exc_info:
This function returns a tuple of three values that give information about the exception that is currently being handled. The information returned is specific both to the current thread and to the current stack frame.
So, running sys.exc_clear in a thread does not affect other threads.
UPDATE:
Quoting from the documentation:
Warning
Assigning the traceback return value to a local variable in a function that is handling an exception will cause a circular reference. This will prevent anything referenced by a local variable in the same function or by the traceback from being garbage collected. Since most functions don’t need access to the traceback, the best solution is to use something like exctype, value = sys.exc_info()[:2] to extract only the exception type and value. If you do need the traceback, make sure to delete it after use (best done with a try ... finally statement) or to call exc_info() in a function that does not itself handle an exception.
You do assign the traceback to a local variable, and that is why I commented your question with the suggestion to remove the “offending” line.
I believe the error is caused by a misuse of sys.exc_clear() and exceptions. The actual problem is that you're calling it after handling another exception. This is the exception that actually gets cleared, not the one you recorded.
A solution to your problem would be to create a different method for tracking exceptions and call it in the except clause and call it only in the except clause -- that way the exception will always be the right one.
Problems I can see in the code above:
You're calling sys.exc_clear() after another exception has been handled, clearing that exception rather than the one you want.
You want to call sys.exc_clear() to clear someone else's exceptions - this is wrong and it can break the program (e.g. if a bare raise is called after the call to a fixed version of track, it will fail). Whoever is handling the exception may want to use those values; clearing them like this can do no good.
You're expecting that if there was no error, sys.exc_info() will not be set as long as you clear it each time. That is not true -- there might be data for a previous exception there in a completely unrelated call to track. Don't rely on that.
All of those are fixed by using separate methods and never using sys.exc_clear().
Oh, another thing: if those except: clauses without an exception type are not just examples, it's advisable to handle only exceptions that you know about, and not all of them like that.
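For what it's worth, here's a rough sketch of that split (the names are hypothetical, not from the question's code): track() never touches the exception state, and track_exception() is called only from inside an except clause, so sys.exc_info() is guaranteed to describe the exception actually being handled and nothing needs to be cleared afterwards:

import sys
import traceback

class Tracking(object):
    def track(self, level='info', technical=None):
        # Ordinary tracking: no exception machinery involved at all
        return {'level': level, 'technical': technical}

    def track_exception(self, level='warning', technical=None, maxTBlevel=5):
        # Only ever called from inside an except clause
        errortype, errorvalue, errortraceback = sys.exc_info()
        return {
            'level': level,
            'technical': technical,
            'class': errortype,
            'value': errorvalue,
            'traceback': traceback.format_tb(errortraceback, maxTBlevel),
        }

myTrack = Tracking()
try:
    1 / 0
except ZeroDivisionError:
    record = myTrack.track_exception(technical='Failure in Important_Function')
myTrack.track(technical='Total=0')   # never reads sys.exc_info(), so no stale data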
