I have developed a Python framework that is being used by others. To print any data to the output, developers should use a Log class (Log.print(...)) and should not call print() directly. Is there any way to enforce this rule throughout the code, for example by raising an error when a developer calls print() directly, like this:
Error: print method cannot be called directly. Please use Log.print().
Suppressing print (as discussed here) is not a good idea, as the developer might get confused.
Actually, the two lines of code below are equivalent:
sys.stdout.write('hello'+'\n')
print('hello')
So you can redirect sys.stdout to a class that raises an exception whenever print is called:
import sys

class BlockPrint:
    call_print_exception = Exception('Error: print method cannot be called directly. Please use Log.print().')

    def write(self, text):
        raise self.call_print_exception

bp = BlockPrint()
sys.stdout = bp
print('aaa')
Output:
Traceback (most recent call last):
File "p.py", line 12, in <module>
print('aaa')
File "p.py", line 7, in write
raise self.call_print_exception
Exception: Error: print method cannot be called directly. Please use Log.print().
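Note that the framework's own Log class then has to bypass the blocked stream, otherwise Log.print() would hit the same exception. A minimal sketch, assuming a hypothetical Log class that writes through the saved original stdout:

import sys

_real_stdout = sys.__stdout__  # keep a handle on the real stream before replacing sys.stdout

class Log:
    @staticmethod
    def print(*args):
        # hypothetical Log.print(): write through the original stream so it
        # is unaffected by the BlockPrint redirection
        _real_stdout.write(' '.join(str(a) for a in args) + '\n')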
There are several logging.warn('....') calls in the legacy code base I am working on today.
I want to understand the log output better. Up to now, logging.warn() emits a single line, but that single line is not enough to understand the context.
I would like to see the stacktrace of the interpreter.
Since there are a lot of logging.warn('....') lines in my code, I would like to leave them as they are and only modify the logging configuration.
How can I add the interpreter stacktrace to every warn() or error() call automatically?
I know that logging.exception("message") shows the stacktrace, but I would like to leave the logging.warn() lines untouched.
The answer I was looking for was given by @Martijn Pieters in the comments.
In Python 3.x,
logger.warning(f'{error_message}', stack_info=True)
does exactly what you need.
Thanks, @Martijn Pieters!
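A minimal sketch of that call in context (the file name and line numbers in the example output are illustrative, not from the original post):

import logging

logging.basicConfig()
error_message = "something unexpected happened"
logging.warning(error_message, stack_info=True)
# WARNING:root:something unexpected happened
# Stack (most recent call last):
#   File "example.py", line 5, in <module>
#     logging.warning(error_message, stack_info=True)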
It is trivial if you are willing to add a log handler:
import logging
import traceback

class WarnWithStackHandler(logging.StreamHandler):
    def emit(self, record):
        if record.levelno == logging.WARNING:
            stack = traceback.extract_stack()
            # skip logging internal stacks
            stack = stack[:-7]
            for line in traceback.format_list(stack):
                print(line, end='')
        super().emit(record)
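A hedged usage sketch: attach the handler above to the root logger so that existing logging.warn(...) calls pick it up unchanged.

# usage sketch: attach the handler (defined above) to the root logger
logging.getLogger().addHandler(WarnWithStackHandler())
logging.warning("legacy warning")  # prints the call stack, then emits the record as usual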
I don't believe a handler is your solution. Go for a filter:
import os.path
import traceback
import logging

_LOGGING_FILE = os.path.normcase(logging.addLevelName.__code__.co_filename)
_CURRENT_FILE = os.path.normcase(__file__)
_ELIMINATE_STACK = (_CURRENT_FILE, _LOGGING_FILE)

class AddStackFilter(logging.Filter):
    def __init__(self, levels=None):
        self.levels = levels or set()

    def get_stack(self):
        # Iterator over file names
        filenames = iter(_ELIMINATE_STACK)
        filename = next(filenames, "")
        frames = traceback.walk_stack(None)

        # Walk up the frames
        for frame, lineno in frames:
            # If frame is not from file, continue on to the next file
            while os.path.normcase(frame.f_code.co_filename) != filename:
                filename = next(filenames, None)
                if filename is None:
                    break
            else:
                # It's from the given file, go up a frame
                continue
            # Finished iterating over all files
            break
        # No frames left
        else:
            return None

        info = traceback.format_stack(frame)
        info.insert(0, 'Stack (most recent call last):\n')
        # Remove last newline
        info[-1] = info[-1].rstrip()
        return "".join(info)

    def filter(self, record):
        if record.levelno in self.levels:
            sinfo = self.get_stack()
            if sinfo is not None:
                record.stack_info = sinfo
        return True
This filter has numerous advantages:
Removes stack frames from the local file and logging's file.
Leaves stack frames in case we come back to the local file after passing through logging. Important if we wish to use the same module for other stuff.
You can attach it to any handler or logger, doesn't bind you to StreamHandler or any other handler.
You can affect multiple handlers using the same filter, or a single handler, your choice.
The levels are given as an __init__ argument, allowing you to add more levels as needed.
Allows you to add the stack trace to the log, and not just print.
Plays well with the logging module, putting the stack in the correct place, nothing unexpected.
Usage:
>>> import stackfilter
>>> import logging
>>> sfilter = stackfilter.AddStackFilter(levels={logging.WARNING})
>>> logging.basicConfig()
>>> logging.getLogger().addFilter(sfilter)
>>> def testy():
...     logging.warning("asdasd")
...
>>> testy()
WARNING:root:asdasd
Stack (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in testy
Not sure how possible this is, but here goes:
I'm trying to write an object with some slightly more subtle behavior - which may or may not be a good idea, I haven't determined that yet.
I have this method:
def __getattr__(self, attr):
    try:
        return self.props[attr].value
    except KeyError:
        pass  # to hide the KeyError exception
    msg = "'{}' object has no attribute '{}'"
    raise AttributeError(msg.format(self.__dict__['type'], attr))
Now, when I create an instance of this like so:
t = Thing()
t.foo
I get a stacktrace containing my function:
Traceback (most recent call last):
File "attrfun.py", line 23, in <module>
t.foo
File "attrfun.py", line 15, in __getattr__
raise AttributeError(msg.format(self._type, attr))
AttributeError: 'Thing' object has no attribute 'foo'
I don't want that - I want the stack trace to read:
Traceback (most recent call last):
File "attrfun.py", line 23, in <module>
t.foo
AttributeError: 'Thing' object has no attribute 'foo'
Is this possible with a minimal amount of effort, or is there kind of a lot required? I found this answer which indicates that something looks to be possible, though perhaps involved. If there's an easier way, I'd love to hear it! Otherwise I'll just put that idea on the shelf for now.
You cannot tamper with traceback objects (and that's a good thing). You can only control how you process one that you've already got.
The only exceptions are: you can
substitute an exception with another or re-raise it with raise e (i.e. make the traceback point to the re-raise statement's location)
raise an exception with an explicit traceback object
remove outer frame(s) from a traceback object by accessing its tb_next property (this reflects a traceback object's onion-like structure)
For your purpose, the way to go appears to be the 1st option: re-raise an exception from a handler one level above your function.
And, I'll say this again, this is harmful for yourself or whoever will be using your module as it deletes valuable diagnostic information. If you're dead set on making your module proprietary with whatever rationale, it's more productive for that goal to make it a C extension.
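A minimal sketch of option 1, assuming the Thing class from the question: catch the AttributeError one frame above and re-raise it, so the visible traceback ends at the re-raise site rather than inside __getattr__.

try:
    t = Thing()
    t.foo
except AttributeError as e:
    # 'from None' suppresses the chained "During handling..." context;
    # the traceback now points here instead of into __getattr__
    raise AttributeError(*e.args) from None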
The traceback object is created during stack unwinding, not directly when you raise the exception, so you can not alter it right in your function. What you could do instead (though it's probably a bad idea) is to alter the top level exception hook so that it hides your function from the traceback.
Suppose you have this code:
import sys

class MagicGetattr:
    def __getattr__(self, item):
        raise AttributeError(f"{item} not found")

orig_excepthook = sys.excepthook

def excepthook(type, value, traceback):
    iter_tb = traceback
    while iter_tb.tb_next is not None:
        if iter_tb.tb_next.tb_frame.f_code is MagicGetattr.__getattr__.__code__:
            iter_tb.tb_next = None
            break
        iter_tb = iter_tb.tb_next
    orig_excepthook(type, value, traceback)

sys.excepthook = excepthook

# The next line will raise an error
MagicGetattr().foobar
You will get the following output:
Traceback (most recent call last):
File "test.py", line 49, in <module>
MagicGetattr().foobar
AttributeError: foobar not found
Note that this ignores the __cause__ and __context__ members of the exception, which you would probably want to visit too if you were to implement this in real life.
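A hedged sketch of how the hook above could also visit chained exceptions: walk __cause__/__context__ and apply the same tb_next surgery to each exception's own traceback (trim_chained is a hypothetical helper, not part of the original answer).

def trim_chained(value):
    # hypothetical helper, to be called from excepthook before printing
    seen = set()
    while value is not None and id(value) not in seen:
        seen.add(id(value))
        tb = value.__traceback__
        while tb is not None and tb.tb_next is not None:
            if tb.tb_next.tb_frame.f_code is MagicGetattr.__getattr__.__code__:
                tb.tb_next = None
                break
            tb = tb.tb_next
        value = value.__cause__ or value.__context__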
You can get the current frame, and any other level, using the inspect module. For instance, here is what I use when I'd like to know where I am in my code:
from inspect import currentframe

def get_c_frame(level=0):
    """
    Return the frame `level` levels up the stack
    (0 = this function's own frame).
    """
    # In Python 3, currentframe() takes no depth argument,
    # so walk back through f_back manually.
    frame = currentframe()
    for _ in range(level):
        frame = frame.f_back
    return frame

...

def locate_error(level=0):
    """
    Return a string containing the file name, function name and line
    number where this function was called.
    Output is: ('file name' - 'function name' - 'line number')
    """
    fi = get_c_frame(level=level + 2)
    return '({} - {} - {})'.format(fi.f_code.co_filename,
                                   fi.f_code.co_name,
                                   fi.f_lineno)
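Hypothetical usage, assuming the two helpers above live in the same module:

def do_work():
    # tags the message with the file, function and line of this call site
    print('something went wrong ' + locate_error())

do_work()
# e.g. (example.py - do_work - 3)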
Is there an easy way to get the message of the exception to be colored on the command line? For example
def g(): f()
def f(): 1/0
g()
gives the error:
Traceback (most recent call last):
File "test.py", line 3, in <module>
g()
File "test.py", line 1, in g
def g(): f()
File "test.py", line 2, in f
def f(): 1/0
ZeroDivisionError: integer division or modulo by zero
I would like "integer division or modulo by zero" to be colored or highlighted on the terminal so that I can quickly pick it out of a long traceback (Linux only). Ideally, I wouldn't want to write a custom class for each Exception, but somehow catch and format all kinds.
EDIT: The question linked in the comments gives examples on how to solve the problem with external software, but I'm interested in an internal Python solution.
You can assign a custom function to the sys.excepthook handler. The function is called whenever there is an unhandled exception (i.e. one that exits the interpreter).
def set_highlighted_excepthook():
    import sys, traceback
    from pygments import highlight
    from pygments.lexers import get_lexer_by_name
    from pygments.formatters import TerminalFormatter

    lexer = get_lexer_by_name("pytb" if sys.version_info.major < 3 else "py3tb")
    formatter = TerminalFormatter()

    def myexcepthook(type, value, tb):
        tbtext = ''.join(traceback.format_exception(type, value, tb))
        sys.stderr.write(highlight(tbtext, lexer, formatter))

    sys.excepthook = myexcepthook

set_highlighted_excepthook()
This version uses the pygments library to convert the traceback text into one formatted with ANSI coloring, before writing it to stderr.
Someone turned this into a project that detects terminal support and lets you set the pygments style, see colored-traceback.py.
Found another way to do this using the IPython module, which is a dependency that many people already have installed:
import sys
from IPython.core.ultratb import ColorTB

# run this inside an except block so sys.exc_info() returns the active exception
c = ColorTB()
exc = sys.exc_info()
print(''.join(c.structured_traceback(*exc)))
This takes the solution @freakish shared and makes the colorization part of the exception instead of requiring the user to add color to each exception message. Obviously, it only works for custom exceptions, so it may not be exactly what the OP was looking for.
from colorama import Fore, init
init()

class Error(Exception):
    def __init__(self, message):
        super().__init__(Fore.RED + message)

class BadConfigFile(Error):
    pass

raise BadConfigFile("some error message")
This will print the traceback with "some error message" in red. Having 'Error' as a base class means you can create other exceptions that will all inherit the colorization of the message.
Have a look at the colorama module (or any other coloring module). Then you can wrap your entire app with:
import traceback
from colorama import Fore, init

init()

try:
    ...  # your app
except Exception:
    print(Fore.RED + traceback.format_exc() + Fore.RESET)
    # possibly raise again or log to db
I am printing Python exception messages to a log file with logging.error:
import logging

try:
    1/0
except ZeroDivisionError as e:
    logging.error(e)  # ERROR:root:division by zero
Is it possible to print more detailed information about the exception and the code that generated it than just the exception string? Things like line numbers or stack traces would be great.
logger.exception will output a stack trace alongside the error message.
For example:
import logging

try:
    1/0
except ZeroDivisionError:
    logging.exception("message")
Output:
ERROR:root:message
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
ZeroDivisionError: integer division or modulo by zero
@Paulo Cheque notes, "be aware that in Python 3 you must call the logging.exception method just inside the except part. If you call this method in an arbitrary place you may get a bizarre exception. The docs alert about that."
Using the exc_info option may be better, since it allows you to choose the log level (if you use exception, it will always be logged at the ERROR level):
import logging

try:
    ...  # do something here
except Exception as e:
    logging.critical(e, exc_info=True)  # log exception info at CRITICAL log level
One nice thing about logging.exception that SiggyF's answer doesn't show is that you can pass in an arbitrary message, and logging will still show the full traceback with all the exception details:
import logging

try:
    1/0
except ZeroDivisionError:
    logging.exception("Deliberate divide by zero traceback")
With the default (in recent versions) logging behaviour of just printing errors to sys.stderr, it looks like this:
>>> import logging
>>> try:
...     1/0
... except ZeroDivisionError:
...     logging.exception("Deliberate divide by zero traceback")
...
ERROR:root:Deliberate divide by zero traceback
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
ZeroDivisionError: integer division or modulo by zero
Quoting:
What if your application does logging some other way – not using the logging module?
Now, the traceback module can be used here:
import logging
import traceback

exception_logger = logging.getLogger(__name__)

def log_traceback(ex, ex_traceback=None):
    if ex_traceback is None:
        ex_traceback = ex.__traceback__
    tb_lines = [line.rstrip('\n') for line in
                traceback.format_exception(ex.__class__, ex, ex_traceback)]
    exception_logger.log(logging.ERROR, '\n'.join(tb_lines))
Use it in Python 2:
import sys

try:
    ...  # your function call is here
except Exception as ex:
    _, _, ex_traceback = sys.exc_info()
    log_traceback(ex, ex_traceback)
Use it in Python 3:
try:
    x = get_number()
except Exception as ex:
    log_traceback(ex)
You can log the stack trace without an exception.
https://docs.python.org/3/library/logging.html#logging.Logger.debug
The second optional keyword argument is stack_info, which defaults to False. If true, stack information is added to the logging message, including the actual logging call. Note that this is not the same stack information as that displayed through specifying exc_info: The former is stack frames from the bottom of the stack up to the logging call in the current thread, whereas the latter is information about stack frames which have been unwound, following an exception, while searching for exception handlers.
Example:
>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> logging.getLogger().info('This prints the stack', stack_info=True)
INFO:root:This prints the stack
Stack (most recent call last):
File "<stdin>", line 1, in <module>
>>>
If you use plain text logs, all your log records should follow this rule: one record = one line. Following this rule, you can use grep and other tools to process your log files.
But traceback information is multi-line, so my answer is an extended version of the solution proposed by zangw above in this thread. The problem is that traceback lines can have \n inside, so we need to do some extra work to get rid of these line endings:
import logging
import traceback

logger = logging.getLogger('your_logger_here')

def log_app_error(e: BaseException, level=logging.ERROR) -> None:
    e_traceback = traceback.format_exception(e.__class__, e, e.__traceback__)
    traceback_lines = []
    for line in [line.rstrip('\n') for line in e_traceback]:
        traceback_lines.extend(line.splitlines())
    logger.log(level, str(traceback_lines))
After that (when you are analyzing your logs), you can copy/paste the required traceback lines from your log file and do this:
ex_traceback = ['line 1', 'line 2', ...]
for line in ex_traceback:
    print(line)
Profit!
This answer builds on the excellent ones above.
In most applications, you won't be calling logging.exception(e) directly. Most likely you have defined a custom logger specific for your application or module like this:
import logging
import graypy

# Set the name of the app or module
my_logger = logging.getLogger('NEM Sequencer')
# Set the log level
my_logger.setLevel(logging.INFO)

# Let's say we want to be fancy and log to a graylog2 log server
graylog_handler = graypy.GELFHandler('some_server_ip', 12201)
graylog_handler.setLevel(logging.INFO)
my_logger.addHandler(graylog_handler)
In this case, just use the logger to call the exception(e) like this:
try:
    1/0
except ZeroDivisionError as e:
    my_logger.exception(e)
If "debugging information" means the values present when exception was raised, then logging.exception(...) won't help. So you'll need a tool that logs all variable values along with the traceback lines automatically.
Out of the box you'll get a log like this:
2020-03-30 18:24:31 main ERROR File "./temp.py", line 13, in get_ratio
2020-03-30 18:24:31 main ERROR return height / width
2020-03-30 18:24:31 main ERROR height = 300
2020-03-30 18:24:31 main ERROR width = 0
2020-03-30 18:24:31 main ERROR builtins.ZeroDivisionError: division by zero
Have a look at some PyPI tools; I'd name:
tbvaccine
traceback-with-variables
better-exceptions
Some of them give you pretty crash messages, but you might find more on PyPI.
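As an illustration only (the exact API here is an assumption; check the project's README), traceback-with-variables can reportedly be enabled with a single import:

# assumption: activate_by_import installs the enhanced excepthook on import
from traceback_with_variables import activate_by_import  # noqa: F401

def get_ratio(height, width):
    return height / width

get_ratio(300, 0)  # the crash report should include the height and width values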
A little bit of decorator treatment (very loosely inspired by the Maybe monad and lifting). You can safely remove Python 3.6 type annotations and use an older message formatting style.
fallible.py
from functools import wraps
from typing import Callable, TypeVar, Optional
import logging

A = TypeVar('A')

def fallible(*exceptions, logger=None) \
        -> Callable[[Callable[..., A]], Callable[..., Optional[A]]]:
    """
    :param exceptions: a list of exceptions to catch
    :param logger: pass a custom logger; None means the default logger,
                   False disables logging altogether.
    """
    def fwrap(f: Callable[..., A]) -> Callable[..., Optional[A]]:

        @wraps(f)
        def wrapped(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except exceptions:
                message = f'called {f} with *args={args} and **kwargs={kwargs}'
                if logger:
                    logger.exception(message)
                if logger is None:
                    logging.exception(message)
                return None

        return wrapped

    return fwrap
Demo:
In [1]: from fallible import fallible

In [2]: @fallible(ArithmeticError)
   ...: def div(a, b):
   ...:     return a / b
   ...:

In [3]: div(1, 2)
Out[3]: 0.5

In [4]: res = div(1, 0)
ERROR:root:called <function div at 0x10d3c6ae8> with *args=(1, 0) and **kwargs={}
Traceback (most recent call last):
  File "/Users/user/fallible.py", line 17, in wrapped
    return f(*args, **kwargs)
  File "<ipython-input-17-e056bd886b5c>", line 3, in div
    return a / b

In [5]: repr(res)
'None'
You can also modify this solution to return something a bit more meaningful than None from the except part (or even make the solution generic, by specifying this return value in fallible's arguments).
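For example, a simplified hypothetical variant (not the author's code) that lets the caller pick the fallback value instead of hard-coding None:

from functools import wraps
import logging

def fallible(*exceptions, logger=None, default=None):
    # simplified sketch: 'default' is returned when one of 'exceptions' is caught
    def fwrap(f):
        @wraps(f)
        def wrapped(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except exceptions:
                (logger or logging).exception('called %r with args=%r kwargs=%r', f, args, kwargs)
                return default
        return wrapped
    return fwrap

@fallible(ZeroDivisionError, default=float('inf'))
def div(a, b):
    return a / b

print(div(1, 0))  # logs the traceback, then prints inf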
In your logging module (if it is a custom module), just enable stack_info:
api_logger.exceptionLog("*Input your Custom error message*",stack_info=True)
If you look at this code example (which works for Python 2 and 3), you'll see the function definition below, which can extract
method
line number
code context
file path
for an entire stack trace, whether or not there has been an exception:
def sentry_friendly_trace(get_last_exception=True):
    try:
        current_call = list(map(frame_trans, traceback.extract_stack()))
        alert_frame = current_call[-4]
        before_call = current_call[:-4]

        err_type, err, tb = sys.exc_info() if get_last_exception else (None, None, None)
        after_call = [alert_frame] if err_type is None else extract_all_sentry_frames_from_exception(tb)

        return before_call + after_call, err, alert_frame
    except:
        return None, None, None
Of course, this function depends on the entire gist linked above, in particular extract_all_sentry_frames_from_exception() and frame_trans(), but the exception info extraction totals less than around 60 lines.
Hope that helps!
I wrap all functions with my custom-designed logger:
import json
import timeit
import traceback
import sys
import unidecode
def main_writer(f, argument):
    try:
        f.write(str(argument))
    except UnicodeEncodeError:
        f.write(unidecode.unidecode(argument))


def logger(*argv, logfile="log.txt", singleLine=False):
    """
    Writes logs to the log file
    """
    with open(logfile, 'a+') as f:
        for arg in argv:
            if arg == "{}":
                continue
            if type(arg) == dict and len(arg) != 0:
                json_object = json.dumps(arg, indent=4, default=str)
                f.write(str(json_object))
                f.flush()
                """
                for key, val in arg.items():
                    f.write(str(key) + " : " + str(val))
                    f.flush()
                """
            elif type(arg) == list and len(arg) != 0:
                for each in arg:
                    main_writer(f, each)
                    f.write("\n")
                    f.flush()
            else:
                main_writer(f, arg)
                f.flush()
            if singleLine == False:
                f.write("\n")
        if singleLine == True:
            f.write("\n")


def tryFunc(func, func_name=None, *args, **kwargs):
    """
    Time for successful runs
    Exception traceback for unsuccessful runs
    """
    stack = traceback.extract_stack()
    filename, codeline, funcName, text = stack[-2]
    func_name = func.__name__ if func_name is None else func_name  # sys._getframe().f_code.co_name # func.__name__
    start = timeit.default_timer()
    x = None
    try:
        x = func(*args, **kwargs)
        stop = timeit.default_timer()
        # logger("Time to Run {} : {}".format(func_name, stop - start))
    except Exception as e:
        logger("Exception Occurred for {} :".format(func_name))
        logger("Basic Error Info :", e)
        logger("Full Error TraceBack :")
        # logger(e.message, e.args)
        logger(traceback.format_exc())
    return x


def bad_func():
    return 'a' + 7


if __name__ == '__main__':
    logger(234)
    logger([1, 2, 3])
    logger(['a', 'b', 'c'])
    logger({'a': 7, 'b': 8, 'c': 9})
    tryFunc(bad_func)
My approach was to create a context manager, to log and raise Exceptions:
import logging
from contextlib import AbstractContextManager

class LogError(AbstractContextManager):
    def __init__(self, logger=None):
        self.logger = logger.name if isinstance(logger, logging.Logger) else logger

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_value is not None:
            logging.getLogger(self.logger).exception(exc_value)

with LogError():
    1/0
You can either pass a logger name or a logger instance to LogError(). By default it will use the base logger (by passing None to logging.getLogger).
One could also simply add a switch for raising the error or just logging it.
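A sketch of that switch (an assumption about how one might extend the class above, not part of the original answer): returning True from __exit__ suppresses the exception, so a reraise flag maps directly onto the return value.

import logging
from contextlib import AbstractContextManager

class LogOrSwallowError(AbstractContextManager):
    # hypothetical variant of LogError with a reraise switch
    def __init__(self, logger=None, reraise=True):
        self.logger = logger.name if isinstance(logger, logging.Logger) else logger
        self.reraise = reraise

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_value is not None:
            logging.getLogger(self.logger).exception(exc_value)
        return not self.reraise  # True -> swallow the exception, False -> re-raise

with LogOrSwallowError(reraise=False):
    1/0
print("still running")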
If you can cope with the extra dependency, use twisted.log; you don't have to explicitly log errors, and it also returns the entire traceback and time to the file or stream.
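A minimal sketch, assuming the twisted.python.log API: log.err() inside an except block records the active exception, with a timestamp, to whatever observer startLogging attached.

import sys
from twisted.python import log

log.startLogging(sys.stdout)  # or an open file object

try:
    1/0
except ZeroDivisionError:
    log.err()  # logs the full traceback of the active exception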
A clean way to do it is to use format_exc() and then parse the output to get the relevant part:
from traceback import format_exc

try:
    1/0
except Exception:
    print('the relevant part is: ' + format_exc().split('\n')[-2])