I have developed a Python framework that is being used by others. In order to print any data to the output, the developer should use a Log class (Log.print(...)) and should not call the built-in print() function directly. Is there any way to enforce this rule throughout the code? For example, by raising an error when a developer calls print directly, like this:
Error: print method cannot be called directly. Please use Log.print().
Suppressing print (as discussed here) is not a good idea, as the developer might get confused.
Actually, the two lines of code below are equivalent:
sys.stdout.write('hello' + '\n')
print('hello')
so you can redirect sys.stdout to an object whose write method raises an exception when print is called.
import sys

class BlockPrint:
    call_print_exception = Exception('Error: print method cannot be called directly. Please use Log.print().')

    def write(self, text):
        raise self.call_print_exception

bp = BlockPrint()
sys.stdout = bp
print('aaa')
Output:
Traceback (most recent call last):
  File "p.py", line 12, in <module>
    print('aaa')
  File "p.py", line 7, in write
    raise self.call_print_exception
Exception: Error: print method cannot be called directly. Please use Log.print().
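Note that once sys.stdout is replaced, Log.print itself can no longer write through it. A minimal sketch of how the two pieces can coexist (the Log class shown here is hypothetical, since the question doesn't include it): keep a reference to the real stdout so Log.print still works while direct print() calls hit the blocking object.

import sys

class Log:
    _real_stdout = sys.stdout  # saved before sys.stdout is replaced

    @classmethod
    def print(cls, *args):
        cls._real_stdout.write(' '.join(str(a) for a in args) + '\n')

class BlockPrint:
    def write(self, text):
        raise Exception('Error: print method cannot be called directly. Please use Log.print().')

    def flush(self):
        pass  # some code flushes stdout; make that a no-op rather than an error

sys.stdout = BlockPrint()

Log.print('hello')  # works, writes to the real stdout
print('hello')      # raises the exception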
Can anyone advise on an effective way to hide the traceback from a Python class exception? We know sys.tracebacklimit = 0 can be useful for hiding the trace, but we are not sure how this can be implemented in a class.
For example, we have a test.py file with example code:
class FooError(Exception):
    pass

class Foo():
    def __init__(self, *args):
        if len(args) != 2:
            raise FooError('Input must be two parameters')
        x, y = args
        self.x = x
        self.y = y
When we import this file in the interactive interpreter and run it, we get
>>> from test import *
>>> Foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/test.py", line 8, in __init__
    raise FooError('Input must be two parameters')
test.FooError: Input must be two parameters
However, we expect only the error message to be displayed:
test.FooError: Input must be two parameters
What code changes should be included in the class to achieve this?
But why? Are you trying to make debugging your program harder?
Either way, this is really about how exceptions and tracebacks are printed by the default implementation. If you really don't want the console to print a traceback for your errors, you can set sys.excepthook to a custom implementation that doesn't print the traceback.
This won't prevent any other try/except block from being able to access the traceback though, of course.
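A minimal sketch of such a hook, using the FooError example above (traceback.format_exception_only produces just the final "module.ExceptionName: message" line):

import sys
import traceback

def excepthook(exc_type, exc_value, tb):
    # print only e.g. "test.FooError: Input must be two parameters", no traceback
    sys.stderr.write(''.join(traceback.format_exception_only(exc_type, exc_value)))

sys.excepthook = excepthook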
I write a lot of Python code that uses external libraries. Frequently I will write a bug, and when I run the code I get a big long traceback in the Python console. 99.999999% of the time it's due to a coding error in my code, not because of a bug in the package. But the traceback goes all the way to the line of error in the package code, and either it takes a lot of scrolling through the traceback to find the code I wrote, or the traceback is so deep into the package that my own code doesn't even appear in the traceback.
Is there a way to "black-box" the package code, or somehow only show traceback lines from my code? I'd like the ability to specify to the system which directories or files I want to see traceback from.
In order to print your own stack trace, you would need to handle all unhandled exceptions yourself; this is where sys.excepthook comes in handy.
The signature for this function is sys.excepthook(type, value, traceback) and its job is:
This function prints out a given traceback and exception to sys.stderr.
So as long as you can play with the traceback and extract only the portion you care about, you should be fine. Testing frameworks do this very frequently: they have custom assert functions which usually do not appear in the traceback; in other words, they skip the frames that belong to the test framework. Also, in those cases, the tests are usually started by the test framework as well.
You end up with a traceback that looks like this:
[ custom assert code ] + ... [ code under test ] ... + [ test runner code ]
How to identify your code.
You can add a global to your code:
__mycode = True
Then to identify the frames:
def is_mycode(tb):
    # dict.has_key was removed in Python 3; use the 'in' operator instead
    return '__mycode' in tb.tb_frame.f_globals
How to extract your frames.
skip the frames that don't matter to you (e.g. custom assert code)
identify how many frames are part of your code -> length
extract length frames
def mycode_traceback_levels(tb):
    length = 0
    while tb and is_mycode(tb):
        tb = tb.tb_next
        length += 1
    return length
Example handler.
import traceback

def handle_exception(type, value, tb):
    # 1. skip custom assert code, e.g.
    # while tb and is_custom_assert_code(tb):
    #     tb = tb.tb_next
    # 2. only display your code
    length = mycode_traceback_levels(tb)
    print(''.join(traceback.format_exception(type, value, tb, length)))
install the handler:
sys.excepthook = handle_exception
What next?
You could adjust length to add one or more levels if you still want some info about where the failure is outside of your own code.
see also https://gist.github.com/dnozay/b599a96dc2d8c69b84c6
As others suggested, you could use sys.excepthook:
This function prints out a given traceback and exception to sys.stderr.
When an exception is raised and uncaught, the interpreter calls sys.excepthook with three arguments, the exception class, exception instance, and a traceback object. In an interactive session this happens just before control is returned to the prompt; in a Python program this happens just before the program exits. The handling of such top-level exceptions can be customized by assigning another three-argument function to sys.excepthook.
(emphasis mine)
It's possible to filter a traceback extracted by extract_tb (or similar functions from the traceback module) based on specified directories.
Two functions that can help:
import sys
from os.path import join, abspath
from traceback import extract_tb, format_list, format_exception_only

def spotlight(*show):
    ''' Return a function to be set as new sys.excepthook.
        It will SHOW traceback entries for files from these directories. '''
    show = tuple(join(abspath(p), '') for p in show)

    def _check_file(name):
        return name and name.startswith(show)

    def _print(type, value, tb):
        show = (fs for fs in extract_tb(tb) if _check_file(fs.filename))
        fmt = format_list(show) + format_exception_only(type, value)
        print(''.join(fmt), end='', file=sys.stderr)

    return _print

def shadow(*hide):
    ''' Return a function to be set as new sys.excepthook.
        It will HIDE traceback entries for files from these directories. '''
    hide = tuple(join(abspath(p), '') for p in hide)

    def _check_file(name):
        return name and not name.startswith(hide)

    def _print(type, value, tb):
        show = (fs for fs in extract_tb(tb) if _check_file(fs.filename))
        fmt = format_list(show) + format_exception_only(type, value)
        print(''.join(fmt), end='', file=sys.stderr)

    return _print
They both use traceback.extract_tb. It returns "a list of "pre-processed" stack trace entries extracted from the traceback object"; all of them are instances of traceback.FrameSummary (a named tuple). Each traceback.FrameSummary object has a filename attribute which stores the absolute path of the corresponding file. We check whether it starts with any of the directory paths provided as separate function arguments to decide whether to exclude the entry (or keep it).
Here's an example:
The enum module from the standard library doesn't allow reusing keys,
import enum
enum.Enum('Faulty', 'a a', module=__name__)
yields
Traceback (most recent call last):
  File "/home/vaultah/so/shadows/main.py", line 23, in <module>
    enum.Enum('Faulty', 'a a', module=__name__)
  File "/home/vaultah/cpython/Lib/enum.py", line 243, in __call__
    return cls._create_(value, names, module=module, qualname=qualname, type=type, start=start)
  File "/home/vaultah/cpython/Lib/enum.py", line 342, in _create_
    classdict[member_name] = member_value
  File "/home/vaultah/cpython/Lib/enum.py", line 72, in __setitem__
    raise TypeError('Attempted to reuse key: %r' % key)
TypeError: Attempted to reuse key: 'a'
We can restrict stack trace entries to our code (in /home/vaultah/so/shadows/main.py).
import sys, enum
sys.excepthook = spotlight('/home/vaultah/so/shadows')
enum.Enum('Faulty', 'a a', module=__name__)
and
import sys, enum
sys.excepthook = shadow('/home/vaultah/cpython/Lib')
enum.Enum('Faulty', 'a a', module=__name__)
give the same result:
File "/home/vaultah/so/shadows/main.py", line 22, in <module>
enum.Enum('Faulty', 'a a', module=__name__)
TypeError: Attempted to reuse key: 'a'
There's a way to exclude all site directories (where third-party packages are installed; see site.getsitepackages):
import sys, site, jinja2
sys.excepthook = shadow(*site.getsitepackages())
jinja2.Template('{%}')
# jinja2.exceptions.TemplateSyntaxError: unexpected '}'
# Generates ~30 lines, but will only display 4
Note: Don't forget to restore sys.excepthook from sys.__excepthook__. Unfortunately, you won't be able to "patch-restore" it using a context manager.
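For example, to restore the default behaviour at any point:

import sys
sys.excepthook = sys.__excepthook__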
traceback.extract_tb(tb) returns a list of error-frame entries of the form (filename, line number, function name, text); you can play with that to format the traceback. Also refer to https://pymotw.com/2/sys/exceptions.html
import sys
import traceback

def handle_exception(ex_type, ex_info, tb):
    print(ex_type, ex_info, traceback.extract_tb(tb))

sys.excepthook = handle_exception
A package that I'm using in my Python program is throwing a warning that I'd like to understand the exact cause of. I've set logging.captureWarnings(True) and am capturing the warning in my logging, but still have no idea where it is coming from. How do I also log the stack trace so I can see where in my code the warning is coming from? Should I use traceback?
I've ended up going with the below:
import logging
import traceback
import warnings

_formatwarning = warnings.formatwarning

def formatwarning_tb(*args, **kwargs):
    s = _formatwarning(*args, **kwargs)
    tb = traceback.format_stack()
    s += ''.join(tb[:-1])
    return s

warnings.formatwarning = formatwarning_tb
logging.captureWarnings(True)
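A hedged usage sketch (the warning message is made up): with the patch above in place and a basic logging configuration, any warning is routed to the 'py.warnings' logger with the call stack appended.

import logging
import warnings

logging.basicConfig(level=logging.WARNING)
warnings.warn("something looks off")  # logged together with the stack that produced it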
It's a little hackish, but you can monkeypatch the warnings.warn function like this:
import traceback
import warnings

def g():
    warnings.warn("foo", Warning)

def f():
    g()
    warnings.warn("bar", Warning)

_old_warn = warnings.warn
def warn(*args, **kwargs):
    tb = traceback.extract_stack()
    _old_warn(*args, **kwargs)
    print("".join(traceback.format_list(tb)[:-1]))
warnings.warn = warn

f()
print("DONE")
This is the output:
/tmp/test.py:14: Warning: foo
  _old_warn(*args, **kwargs)
  File "/tmp/test.py", line 17, in <module>
    f()
  File "/tmp/test.py", line 8, in f
    g()
  File "/tmp/test.py", line 5, in g
    warnings.warn("foo", Warning)
/tmp/test.py:14: Warning: bar
  _old_warn(*args, **kwargs)
  File "/tmp/test.py", line 17, in <module>
    f()
  File "/tmp/test.py", line 9, in f
    warnings.warn("bar", Warning)
DONE
Note that calling the original warnings.warn function does not report the line you'd want, but the stack trace is indeed correct (you could print the warning message yourself).
If you do not know what data/instruction is causing the warning to be thrown, you can use tools like the standard Python Debugger (pdb).
The documentation is really good and detailed, but some quick examples that may help:
Without modifying source code: invoke the debugger as a script:
$ python -m pdb myscript.py
Modifying source code: you can make use of calls to pdb.set_trace(), which work like breakpoints. For example, consider the following example code:
def compute():
    x = 2
    x = x * 10 * 100
    y = x + 3
    return y
Suppose I would like to know what values x and y have before the return, or what the stack contains; I would add the following line between those statements:

import pdb; pdb.set_trace()
You will then be dropped into the (Pdb) prompt, which allows you to step through the code line by line. Useful commands at the (Pdb) prompt are:
n: executes the next statement.
q: quits the whole program.
c: continues execution until the next breakpoint, leaving the (Pdb) prompt.
p varname: prints the value of varname
As you do not provide more information, I do not know if this will be enough, but I think it is at least a good start.
BONUS EDIT
Based on this answer, I have found there is a nice and friendly GUI debugging tool that you can simply install with:
$ pip install pudb
And run the debugger with your script with:
$ python -m pudb.run myscript.py
EDIT: Adding postmortem debugging
If we do not even know whether the code is going to crash, we can enter postmortem debugging after a crash has happened. From the pdb documentation:
The typical usage to inspect a crashed program is:
>>> import pdb
>>> import mymodule
>>> mymodule.test()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "./mymodule.py", line 4, in test
    test2()
  File "./mymodule.py", line 3, in test2
    print spam
NameError: spam
>>> pdb.pm()
> ./mymodule.py(3)test2()
-> print spam
(Pdb)
Since postmortem looks at sys.last_traceback, you can guard the call so it only runs if there actually is a traceback (the attribute is unset if nothing has crashed):

import pdb
import sys

if getattr(sys, 'last_traceback', None):
    pdb.pm()
You can turn warnings into exceptions, which means you will get a stack trace automatically:
warnings.filterwarnings("error")
See https://docs.python.org/3.4/library/warnings.html#the-warnings-filter
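A minimal sketch of that approach (the warning message is made up):

import traceback
import warnings

warnings.filterwarnings("error")

try:
    warnings.warn("something deprecated", DeprecationWarning)
except DeprecationWarning:
    traceback.print_exc()  # full stack trace pointing at the warning's origin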
If it were me, I'd go with @Lluís Vilanova's quick & dirty hack, just to find something. But if that's not an option...
If you really want a "logging" solution, you could try something like this (fully working source).
Basic steps are:
Create a custom logging.Formatter subclass that includes the current stack where the logging record is formatted
Use that formatter on the handler attached to the logger that receives the captured warnings (see the usage sketch after the code below)
The meat of the code is the custom formatter:
import logging
import traceback

class Formatter(logging.Formatter):
    def format(self, record):
        record.stack_info = ''.join(traceback.format_stack())
        return super().format(record)
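A hedged usage sketch: logging.captureWarnings(True) routes warnings to the 'py.warnings' logger, and the base logging.Formatter automatically appends record.stack_info to the formatted message.

import logging
import warnings

logging.captureWarnings(True)
handler = logging.StreamHandler()
handler.setFormatter(Formatter('%(message)s'))
logging.getLogger('py.warnings').addHandler(handler)

warnings.warn("example warning")  # logged together with the full stack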
Per the docs:
New in version 3.2: The stack_info parameter was added.
For Python 3.2 and above, using the optional stack_info keyword argument is the easiest way to get stack trace info along with the log message.
In the example below, "Server.py" is using "lib2.py", which is in turn using "lib.py".
On enabling the stack_info argument, the complete traceback is logged along with every logging.log() call. This works the same with logging.info() and the other convenience methods as well.
Usage:
logging.log(DEBUG, "RWL [{}] : acquire_read()".format(self._ownerName), stack_info=True)
Output:
2018-10-06 10:59:55,726|DEBUG|MainThread|lib.py|acquire_read|RWL [Cache] : acquire_read()
Stack (most recent call last):
File "./Server.py", line 41, in <module>
logging.info("Found {} requests for simulation".format(simdata.count()))
File "<Path>\lib2.py", line 199, in count
with basics.ReadRWLock(self.cacheLock):
File "<Path>\lib.py", line 89, in __enter__
self.rwLock.acquire_read()
File "<Path>\lib.py", line 34, in acquire_read
logging.log(DEBUG, "RWL [{}] : acquire_read()".format(self._ownerName), stack_info=True)
Not sure how possible this is, but here goes:
I'm trying to write an object with some slightly more subtle behavior - which may or may not be a good idea, I haven't determined that yet.
I have this method:
def __getattr__(self, attr):
    try:
        return self.props[attr].value
    except KeyError:
        pass  # to hide the KeyError exception
    msg = "'{}' object has no attribute '{}'"
    raise AttributeError(msg.format(self.__dict__['type'], attr))
Now, when I create an instance of this like so:
t = Thing()
t.foo
I get a stacktrace containing my function:
Traceback (most recent call last):
  File "attrfun.py", line 23, in <module>
    t.foo
  File "attrfun.py", line 15, in __getattr__
    raise AttributeError(msg.format(self._type, attr))
AttributeError: 'Thing' object has no attribute 'foo'
I don't want that - I want the stack trace to read:
Traceback (most recent call last):
  File "attrfun.py", line 23, in <module>
    t.foo
AttributeError: 'Thing' object has no attribute 'foo'
Is this possible with a minimal amount of effort, or does it require quite a lot? I found this answer which indicates that something looks to be possible, though perhaps involved. If there's an easier way, I'd love to hear it! Otherwise I'll just put that idea on the shelf for now.
You cannot tamper with traceback objects (and that's a good thing). You can only control how you process one that you've already got.
The only exceptions are: you can
substitute an exception with another, or re-raise it with raise e (i.e. make the traceback point to the re-raise statement's location)
raise an exception with an explicit traceback object
remove outer frame(s) from a traceback object by accessing its tb_next property (this reflects a traceback object's onion-like structure)
For your purpose, the way to go appears to be the 1st option: re-raise an exception from a handler one level above your function.
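A minimal sketch of that option (assuming the Thing class from the question): catch the AttributeError at the call site and raise a fresh copy, so the printed traceback starts at the re-raise statement instead of inside __getattr__.

t = Thing()
try:
    value = t.foo
except AttributeError as e:
    # 'from None' suppresses the chained "During handling..." context
    raise AttributeError(*e.args) from None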
And, I'll say this again: this is harmful for yourself and whoever will be using your module, as it deletes valuable diagnostic information. If you're dead set on making your module proprietary, whatever the rationale, it's more productive for that goal to make it a C extension.
The traceback object is created during stack unwinding, not directly when you raise the exception, so you cannot alter it right in your function. What you could do instead (though it's probably a bad idea) is alter the top-level exception hook so that it hides your function from the traceback.
Suppose you have this code:
import sys

class MagicGetattr:
    def __getattr__(self, item):
        raise AttributeError(f"{item} not found")

orig_excepthook = sys.excepthook

def excepthook(type, value, traceback):
    # walk the traceback and cut it off at the frame belonging to __getattr__
    iter_tb = traceback
    while iter_tb.tb_next is not None:
        if iter_tb.tb_next.tb_frame.f_code is MagicGetattr.__getattr__.__code__:
            iter_tb.tb_next = None  # tb_next is writable since Python 3.7
            break
        iter_tb = iter_tb.tb_next
    orig_excepthook(type, value, traceback)

sys.excepthook = excepthook

# The next line will raise an error
MagicGetattr().foobar
You will get the following output:
Traceback (most recent call last):
  File "test.py", line 49, in <module>
    MagicGetattr().foobar
AttributeError: foobar not found
Note that this ignores the __cause__ and __context__ members of the exception, which you would probably want to visit too if you were to implement this in real life.
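A hedged sketch of how chained exceptions could be handled as well, building on the code above: each exception in the __cause__/__context__ chain carries its own traceback in its __traceback__ attribute, so the same truncation can be applied to each one.

def strip_getattr_frames(tb):
    # cut the given traceback chain off at the __getattr__ frame
    iter_tb = tb
    while iter_tb is not None and iter_tb.tb_next is not None:
        if iter_tb.tb_next.tb_frame.f_code is MagicGetattr.__getattr__.__code__:
            iter_tb.tb_next = None
            break
        iter_tb = iter_tb.tb_next

def excepthook(type, value, traceback):
    seen = set()  # guard against cycles in the exception chain
    exc = value
    while exc is not None and id(exc) not in seen:
        seen.add(id(exc))
        strip_getattr_frames(exc.__traceback__)
        exc = exc.__cause__ or exc.__context__
    orig_excepthook(type, value, traceback)

sys.excepthook = excepthook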
You can get the current frame and any other level using the inspect module's underlying machinery. For instance, here is what I use when I'd like to know where I am in my code:
import sys

def get_c_frame(level=0):
    """
    Return the frame 'level' steps up the call stack
    (0 is this function's own frame).
    """
    # inspect.currentframe() takes no depth argument, so use
    # sys._getframe, which it wraps, with an explicit depth.
    return sys._getframe(level)

...

def locate_error(level=0):
    """
    Return a string containing the filename, function name and line
    number where this function was called.
    Output is: ('file name' - 'function name' - 'line number')
    """
    fi = get_c_frame(level + 2)
    return '({} - {} - {})'.format(fi.f_code.co_filename,
                                   fi.f_code.co_name,
                                   fi.f_lineno)
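A quick usage sketch (the printed path and line number are illustrative):

def f():
    print(locate_error())

f()  # e.g. (/home/user/script.py - f - 2)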
Is there an easy way to get the message of the exception to be colored on the command line? For example
def g(): f()
def f(): 1/0
g()
Gives the error
Traceback (most recent call last):
  File "test.py", line 3, in <module>
    g()
  File "test.py", line 1, in g
    def g(): f()
  File "test.py", line 2, in f
    def f(): 1/0
ZeroDivisionError: integer division or modulo by zero
I would like "integer division or modulo by zero" to be colored or highlighted on the terminal so that I can quickly pick it out of a long traceback (Linux only). Ideally, I wouldn't want to write a custom class for each Exception, but somehow catch and format all kinds.
EDIT: The question linked in the comments gives examples on how to solve the problem with external software, but I'm interested in an internal Python solution.
You can assign a custom function to the sys.excepthook handler. The function is called whenever there is an unhandled exception (one that would exit the interpreter).
def set_highlighted_excepthook():
    import sys
    import traceback
    from pygments import highlight
    from pygments.lexers import get_lexer_by_name
    from pygments.formatters import TerminalFormatter

    lexer = get_lexer_by_name("pytb" if sys.version_info.major < 3 else "py3tb")
    formatter = TerminalFormatter()

    def myexcepthook(type, value, tb):
        tbtext = ''.join(traceback.format_exception(type, value, tb))
        sys.stderr.write(highlight(tbtext, lexer, formatter))

    sys.excepthook = myexcepthook

set_highlighted_excepthook()
This version uses the pygments library to convert the traceback text into one formatted with ANSI coloring, before writing it to stderr.
Someone turned this into a project that detects terminal support and lets you set the pygments style, see colored-traceback.py.
Found another way to do this using the IPython module, which is likely a dependency that many already have installed:

import sys
from IPython.core.ultratb import ColorTB

# run this inside an except block so sys.exc_info() is populated
c = ColorTB()
exc = sys.exc_info()
print(''.join(c.structured_traceback(*exc)))
This takes the solution @freakish shared and makes the colorization part of the exception instead of requiring the user to add color to each exception message. Obviously, it only works for custom exceptions, so it may not be exactly what the OP was looking for.
from colorama import Fore, init
init()

class Error(Exception):
    def __init__(self, message):
        super().__init__(Fore.RED + message)

class BadConfigFile(Error):
    pass

raise BadConfigFile("some error message")
This will print the traceback with "some error message" in red. Having 'Error' as a base class means you can create other exceptions that will all inherit the colorization of the message.
Have a look at the colorama module (or any other coloring module). Then you can wrap your entire app with:
import traceback
from colorama import Fore, init
init()

try:
    ...  # your app
except Exception:
    print(Fore.RED + traceback.format_exc() + Fore.RESET)
    # possibly raise again or log to db