Python, logging pathname in decorator

When I log an error in a decorator, the logging pathname is not what I want.
logging.conf:
[loggers]
keys=root
[handlers]
keys=console
[formatters]
keys=console
[logger_root]
...
[handler_console]
...
[formatter_console]
format=%(levelname)s - File "%(pathname)s", line %(lineno)s, %(funcName)s: %(message)s
Normally, when logging an error in the file /home/lizs/test/app.py:
def app():
    try:
        a  # error, on line 12
    except Exception, err:
        logging.getLogger().error(str(err))
Debug message on console:
ERROR - File "/home/lizs/test/app.py", line 12, app: global name 'a' is not defined
The above logging pathname (/home/lizs/test/app.py) is what I want. But when I use a decorator:
/home/lizs/test/app.py:
from decorators import logging_decorator

@logging_decorator
def app():
    a
/home/lizs/test/decorators.py:
def logging_decorator(func):
    def error_log():
        try:
            func()  # on line 10
        except Exception, err:
            logging.getLogger().error(str(err))
    return error_log
The debug message:
ERROR - File "/home/lizs/test/decorators.py", line 10, error_log: global name 'a' is not defined
Now, the logging pathname is a pathname of the decorator (/home/lizs/test/decorators.py).
How can I set the logging pathname to /home/lizs/test/app.py when I use a decorator?

Solution:
Try this:
app.py:
from decorators import logging_decorator

@logging_decorator
def app():
    a

app()
decorators.py:
import logging
import inspect

# init logger
logger = logging.getLogger()
formatter = logging.Formatter('%(levelname)s - File %(real_pathname)s,'
                              ' line %(real_lineno)s, %(real_funcName)s: %(message)s')
console_handle = logging.StreamHandler()
console_handle.setFormatter(formatter)
logger.addHandler(console_handle)

def logging_decorator(func):
    def error_log():
        try:
            func()
        except Exception as err:
            logger.error(err, extra={'real_pathname': inspect.getsourcefile(func),  # path to source file
                                     'real_lineno': inspect.trace()[-1][2],         # line number from trace
                                     'real_funcName': func.__name__})               # function name
    return error_log
Explanation:
According to the docs (see the links below), you can pass a dictionary as the extra argument to populate the __dict__ of the LogRecord created for the logging event with user-defined attributes. These custom attributes can then be used as you like.
Thus, because we can't modify pathname directly, this approach using real_pathname is the most straightforward one possible.
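As a minimal, self-contained sketch of that extra mechanism (the logger name and the in-memory stream here are illustrative, not part of the answer above):

```python
import io
import logging

# Route log output to a string buffer so we can inspect it; a plain
# StreamHandler to stderr would behave the same way.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s - %(real_pathname)s: %(message)s"))
logger = logging.getLogger("extra_demo")
logger.addHandler(handler)

# Every key passed via `extra` becomes an attribute on the LogRecord,
# so the formatter can reference it as %(real_pathname)s.
logger.error("boom", extra={"real_pathname": "/home/lizs/test/app.py"})
print(buf.getvalue().strip())
```

If a format string references such an attribute, every record passing through that handler must supply it, which is why the answer installs its own formatter.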
Links:
inspect.getsourcefile
inspect.trace
logging.message

Your problem is that your exception handler is one level above where the exception was originally raised, so you will have to examine the stack trace and manually build a LogRecord with the correct file/line information:
import sys
import logging
import traceback

def logging_decorator(func):
    def error_log():
        try:
            func()  # on line 10
        except Exception, err:
            tb = sys.exc_info()[2]                  # extract the current exception info
            exc_tup = traceback.extract_tb(tb)[-1]  # extract the deepest stack frame
            logger = logging.getLogger()
            # manually build a LogRecord from that stack frame
            lr = logger.makeRecord(logger.name,
                                   logging.ERROR, exc_tup[0], exc_tup[1],
                                   str(err), {}, None, exc_tup[2])
            logger.handle(lr)  # and ask the logging system to process it
    return error_log
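Since except Exception, err is Python 2 syntax, here is a hedged Python 3 sketch of the same makeRecord idea (the logger name and the in-memory buffer are illustrative):

```python
import io
import logging
import sys
import traceback

# Capture output in a buffer so the result is easy to inspect.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter(
    '%(levelname)s - File "%(pathname)s", line %(lineno)s, %(funcName)s: %(message)s'))
logger = logging.getLogger("decorator_demo")
logger.addHandler(handler)

def logging_decorator(func):
    def error_log(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as err:
            # Deepest frame of the traceback: where the error really happened.
            frame = traceback.extract_tb(sys.exc_info()[2])[-1]
            record = logger.makeRecord(logger.name, logging.ERROR,
                                       frame.filename, frame.lineno,
                                       str(err), (), None, func=frame.name)
            logger.handle(record)
    return error_log

@logging_decorator
def app():
    a  # NameError is raised here, not in the decorator

app()
print(buf.getvalue().strip())
```

The logged file, line, and function name now describe app itself rather than the wrapper.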

Related

Store unhandled exceptions in log file

I am trying to build a logger logger_script.py for my python scripts that:
outputs a log file with customizable log level.
outputs a console output with customizable log level (not necessarily equal to the log file's one)
logs unhandled exceptions both to the log file and to the console
I achieved the first two points by following the answer to "https://stackoverflow.com/questions/29087297/is-there-a-way-to-change-the-filemode-for-a-logger-object-that-is-not-configured/29087645 ". I adapted it to my needs and it now looks like:
import sys
import logging

def create_logger(log_filename, logfile_level, console_level):
    # create logger and file handler
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler(log_filename, mode='w')
    fh.setLevel(logfile_level)
    # create console handler with independent log level
    ch = logging.StreamHandler(stream=sys.stdout)
    ch.setLevel(console_level)
    formatter = logging.Formatter('[%(asctime)s] %(levelname)8s: %(message)s' +
                                  ' (%(filename)s:%(lineno)s)',
                                  datefmt='%m-%d, %H:%M:%S')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    if logger.hasHandlers():  # clear pre-existing logs (?)
        logger.handlers.clear()
    logger.addHandler(ch)
    logger.addHandler(fh)
    return logger

# create the logger: file log + console output
logger = create_logger("LogFile.log", logging.INFO, logging.WARNING)
##### piece of code with handled exceptions: #####
beta = 3
while beta > -3:
    try:
        2 / beta
        logger.info("division successful".rjust(20))
    except ZeroDivisionError:
        logger.exception("ZeroDivisionError".rjust(20))
    beta -= 1

##### piece of code with unhandled exception #####
gamma = 1 / 0
However, when running code with an unhandled exception (see the last line), the exception does not get passed to the log file, only to the console.
I followed the advice from Logging uncaught exceptions in Python and added the following snippet:
def handle_exception(exc_type, exc_value, exc_traceback):
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return
    logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = handle_exception
either before or after the logger creation lines, but it does not work in my case.
How can I make unhandled exceptions appear in the log file, together with their traceback message?
I would like to avoid encapsulating the whole code into a try: except statement.
I'd recommend putting the whole code in a def main() then - it would already be useful if you decide to reuse parts of your code, so that no code gets executed when importing the file. :)
Then you can do a catch-all thing in if __name__ == "__main__"::
if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        logger.exception("Program stopped because of a critical error.")
        raise
That's literally how I do it in my own code. It doesn't need any tricks; I added it for unexpected exceptions so that I could debug them later.
raise re-raises the same error that was caught - so it's uncaught to the outside world. And .exception does the same thing as .error but includes the exception info on its own.

Python: Use logging module with configparser or argparser

What is the best way to use Python's logging module to log everything that your script is doing, while also using configparser to load a config file which contains the location where you'd like your log to be saved?
Here is my example code:
import sys
import os
import logging
import configparser
import argparse
### Create Functions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
def get_logger(LOG_DIR, FULL_LOG_PATH):
    """Create logger."""
    # Create LOG_DIR if it doesn't exist already
    try:
        os.makedirs(f"{LOG_DIR}")
    except:
        pass
    try:
        # Create logger and set level
        logger = logging.getLogger(__name__)
        logger.setLevel(level=logging.INFO)
        # Configure file handler
        formatter = logging.Formatter(
            fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
            datefmt="%Y-%m-%d_%H-%M-%S")
        fh = logging.FileHandler(f"{FULL_LOG_PATH}")
        fh.setFormatter(formatter)
        fh.setLevel(level=logging.INFO)
        # Add handlers to logger
        logger.addHandler(fh)
        return logger
    except:
        sys.exit(-1)
def parse_cl_args():
    """Set CLI Arguments."""
    try:
        # Initiate the parser
        parser = argparse.ArgumentParser(
            description="Script to scrape Twitter users account information."
        )
        # Add optional arguments
        parser.add_argument(
            "-c", "--config-file",
            metavar='Config-file',
            help="Full path to the global config file containing paths/file names for script.",
            required=True
        )
        # Read parsed arguments from the command line into "args"
        args = parser.parse_args()
        # Assign the file name to a variable and return it
        config_file_path = args.config_file
        return config_file_path
    except:
        sys.exit(-1)

def parse_config_file(config_file_path):
    try:
        config = configparser.ConfigParser()
        config.read(config_file_path)
        return config
    except:
        sys.exit(-1)
# A bunch of other functions

if __name__ == '__main__':
    # parse command line args
    config_file_path = parse_cl_args()
    # parse config file
    config = parse_config_file(config_file_path)
    # Set logging path
    LOG_DIR = os.path.join(config["PATHS"]["LOG_DIR"])
    # Set log file name
    FULL_LOG_PATH = os.path.join(config["PATHS"]["LOG_DIR"], "mylog.log")
    # Get logger
    logger = get_logger(
        LOG_DIR=LOG_DIR,
        FULL_LOG_PATH=FULL_LOG_PATH
    )
Everything above the get_logger() call can't be recorded by the logger, but the logger can't be created without first loading my command-line argument (config_file.ini) and then parsing that file (which contains the location where I'd like my log to be saved). Is there a better way to do this?
If you want to record logs before you know the location of the log-file but want those logs in the file too you can use a MemoryHandler, which is a special type of BufferingHandler. So the flow of your program would be:
1. Set up a logger
2. Add a MemoryHandler to this logger
3. Do stuff like reading config files, using the logger from step 1 to create logs
4. Set up a FileHandler with the value from the config
5. Call setTarget(file_handler) on the MemoryHandler, passing it the FileHandler
6. Call flush() on the MemoryHandler -> the logs from step 3 are written to the file
7. Optionally, you can now remove the MemoryHandler
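A minimal sketch of that flow (the logger name, the capacity, and the file name here are illustrative assumptions):

```python
import logging
import logging.handlers

logger = logging.getLogger("startup")
logger.setLevel(logging.DEBUG)

# Buffer early records in memory; no target is attached yet.
mem = logging.handlers.MemoryHandler(capacity=1000, flushLevel=logging.CRITICAL)
logger.addHandler(mem)

# Records emitted before the config is known are held in the buffer.
logger.info("reading config file...")

# Once the log path is known, attach a FileHandler and flush the buffer.
log_path = "LogFile.log"  # in real code this would come from the parsed config
fh = logging.FileHandler(log_path, mode="w")
mem.setTarget(fh)
mem.flush()

# Optionally swap the MemoryHandler out for direct file logging.
logger.removeHandler(mem)
logger.addHandler(fh)
```

After the flush, the file contains the records logged while the configuration was still being read.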

Having two different handlers for logging in python logging module

I am trying to have two different handlers where one handler will print the logs on the console and the other handler will write the logs to a file. The console handler is provided by the inbuilt python modbus-tk library and I have written my own file handler.
LOG = utils.create_logger(name="console", record_format="%(message)s")  # ---> This is from modbus-tk library
LOG = utils.create_logger("console", level=logging.INFO)

logging.basicConfig(filename="log", level=logging.DEBUG)
log = logging.getLogger("simulator")
handler = RotatingFileHandler("log", maxBytes=5000, backupCount=1)
log.addHandler(handler)
What I need:
LOG.info("This will print message on console")
log.info("This will print message in file")
But problem is both the logs are getting printed on the console and both are going in file. I want only LOG to be printed on the console and log to be printed in the file.
Edited: adding a snippet from utils.create_logger:
def create_logger(name="dummy", level=logging.DEBUG, record_format=None):
    """Create a logger according to the given settings"""
    if record_format is None:
        record_format = "%(asctime)s\t%(levelname)s\t%(module)s.%(funcName)s\t%(threadName)s\t%(message)s"

    logger = logging.getLogger("modbus_tk")
    logger.setLevel(level)
    formatter = logging.Formatter(record_format)
    if name == "udp":
        log_handler = LogitHandler(("127.0.0.1", 1975))
    elif name == "console":
        log_handler = ConsoleHandler()
    elif name == "dummy":
        log_handler = DummyHandler()
    else:
        raise Exception("Unknown handler %s" % name)
    log_handler.setFormatter(formatter)
    logger.addHandler(log_handler)
    return logger
I have my own customized logging module. I have modified it a little and I think it now fits your problem. It is fully configurable and can handle multiple different handlers.
If you want to combine console and file logging, you only need to remove the return statement (I use it this way).
I have added comments to the code to make it more understandable, and you can find a test section in the if __name__ == "__main__": ... statement.
Code:
import logging
import os

# Custom logger class with multiple destinations
class CustomLogger(logging.Logger):
    """
    Customized Logger class from the original logging.Logger class.
    """

    # Format for console log
    FORMAT = (
        "[%(name)-30s][%(levelname)-19s] | %(message)-100s "
        "| (%(filename)s:%(lineno)d)"
    )

    # Format for log file
    LOG_FILE_FORMAT = "[%(name)s][%(levelname)s] | %(message)s " "| %(filename)s:%(lineno)d)"

    def __init__(
        self,
        name,
        log_file_path=None,
        console_level=logging.INFO,
        log_file_level=logging.DEBUG,
        log_file_open_format="w",
    ):
        logging.Logger.__init__(self, name)

        consol_color_formatter = logging.Formatter(self.FORMAT)

        # If the "log_file_path" parameter is provided,
        # the logs will be visible only in the log file.
        if log_file_path:
            fh_formatter = logging.Formatter(self.LOG_FILE_FORMAT)
            file_handler = logging.FileHandler(log_file_path, mode=log_file_open_format)
            file_handler.setLevel(log_file_level)
            file_handler.setFormatter(fh_formatter)
            self.addHandler(file_handler)
            return

        # If the "log_file_path" parameter is not provided,
        # the logs will be visible only in the console.
        console = logging.StreamHandler()
        console.setLevel(console_level)
        console.setFormatter(consol_color_formatter)
        self.addHandler(console)


if __name__ == "__main__":  # pragma: no cover
    current_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_log.log")
    console_logger = CustomLogger(__file__, console_level=logging.INFO)
    file_logger = CustomLogger(__file__, log_file_path=current_dir, log_file_level=logging.DEBUG)

    console_logger.info("test_to_console")
    file_logger.info("test_to_file")
Console output:
>>> python3 test.py
[test.py][INFO ] | test_to_console | (test.py:55)
Content of test_log.log file:
[test.py][INFO] | test_to_file | test.py:56)
If something is not clear or you have a question/remark, let me know and I will try to help.
EDIT:
If you change getLogger to Logger in your implementation, it will work.
Code:
import logging

def create_logger(name="dummy", level=logging.DEBUG, record_format=None):
    """Create a logger according to the given settings"""
    if record_format is None:
        record_format = "%(asctime)s\t%(levelname)s\t%(module)s.%(funcName)s\t%(threadName)s\t%(message)s"

    logger = logging.Logger("modbus_tk")
    logger.setLevel(level)
    formatter = logging.Formatter(record_format)
    if name == "console":
        log_handler = logging.StreamHandler()
    else:
        raise Exception("Wrong type of handler")
    log_handler.setFormatter(formatter)
    logger.addHandler(log_handler)
    return logger

console_logger = create_logger(name="console")

# logging.basicConfig(filename="log", level=logging.DEBUG)
file_logger = logging.Logger("simulator")
handler = logging.FileHandler("log", "w")
file_logger.addHandler(handler)

console_logger.info("info to console")
file_logger.info("info to file")
Console output:
>>> python3 test.py
2019-12-16 13:10:45,963 INFO test.<module> MainThread info to console
Content of log file:
info to file
There are a few problems in your code, and without seeing the whole configuration it is hard to tell what exactly causes this, but most likely what is happening is that the logs are being propagated.
First of all, when you call basicConfig you are configuring the root logger and telling it to create a FileHandler with the filename log, but just two lines after that you are creating a RotatingFileHandler that uses the same file. Both handlers are now writing to the same file.
I find it always helps to understand the flow of how logging works in python: https://docs.python.org/3/howto/logging.html#logging-flow
And if you don't want all logs to be sent to the root logger too you should set LOG.propagate = False. That stops this logger from propagating their logs.

How to do debugging prints in Python?

I have a function like this....
def validate_phone(raw_number, debug=False):
I want the debug flag to control whether it outputs logging statements. For example:
if (debug):
    print('Before splitting numbers', file=sys.stderr)
split_version = raw_number.split('-')
if (debug):
    print('After splitting numbers', file=sys.stderr)
That code is very repetitive however. What is the cleanest (DRYest?) way to handle such if-flag-then-log logic?
I agree that using logging is the best solution for printing debugging information while running a python script. I wrote a DebugPrint module that helps facilitate using the logger more easily:
# DebugPrint.py
import logging
import os
import time

DEBUGMODE = True

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('=>')

# DbgPrint = logging.debug
def DbgPrint(*args, **kwargs):
    if DEBUGMODE:
        # get module, class, function, line number information
        import inspect
        className = None
        try:
            className = inspect.stack()[2][0].f_locals['self'].__class__.__name__
        except:
            pass
        modName = None
        try:
            modName = os.path.basename(inspect.stack()[2][1])
        except:
            pass
        lineNo = inspect.stack()[2][2]
        fnName = None
        try:
            fnName = inspect.stack()[2][3]
        except:
            pass
        DbgText = "line#{}:{}->{}->{}()".format(lineNo, modName, className, fnName)
        argCnt = len(args)
        kwargCnt = len(kwargs)
        # print("argCnt:{} kwargCnt:{}".format(argCnt, kwargCnt))
        fmt = ""
        fmt1 = DbgText + ":" + time.strftime("%H:%M:%S") + "->"
        if argCnt > 0:
            fmt1 += (argCnt - 1) * "%s,"
            fmt1 += "%s"
        fmt += fmt1
        if kwargCnt > 0:
            fmt2 = "%s"
            args += ("{}".format(kwargs),)
            if len(fmt) > 0:
                fmt += "," + fmt2
            else:
                fmt += fmt2
        # print("fmt:{}".format(fmt))
        log.debug(fmt, *args)
if __name__ == "__main__":
    def myTest():
        print("Running myTest()")
        DbgPrint("Hello", "World")

    myTest()
If the DEBUGMODE variable is false, nothing will be printed.
If it is true, the sample code above prints out:
DEBUG:=>:16:24:14:line#78:DebugPrint.py->None->myTest():->Hello,World
Now I'm going to test DebugPrint with a module that defines a class.
# testDebugPrint.py
from DebugPrint import DbgPrint

class myTestClass(object):
    def __init__(self):
        DbgPrint("Initializing the class")

    def doSomething(self, arg):
        DbgPrint("I'm doing something with {}".format(arg))

if __name__ == '__main__':
    test = myTestClass()
    test.doSomething("a friend!")
When this script is run the output is as follows:
DEBUG:=>:16:25:02:line#7:testDebugPrint.py->myTestClass->__init__():->Initializing the class
DEBUG:=>:16:25:02:line#10:testDebugPrint.py->myTestClass->doSomething():->I'm doing something with a friend!
Note that the module name, class name, function name and line number printed on the console are correct, as well as the time the statement was printed.
I hope that you will find this utility useful.
I would use the logging module for it. It's made for it.
> cat a.py
import logging

log = logging.getLogger(__name__)

def main():
    log.debug('This is debug')
    log.info('This is info')
    log.warn('This is warn')
    log.fatal('This is fatal')
    try:
        raise Exception("this is exception")
    except Exception:
        log.warn('Failed with exception', exc_info=True)
        raise

if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser(description='something')
    parser.add_argument(
        '-v', '--verbose', action='count', default=0, dest='verbosity')
    args = parser.parse_args()

    logging.basicConfig()
    logging.getLogger().setLevel(logging.WARN - 10 * args.verbosity)

    main()
> python a.py
WARNING:__main__:This is warn
CRITICAL:__main__:This is fatal
WARNING:__main__:Failed with exception
Traceback (most recent call last):
File "a.py", line 12, in main
raise Exception("this is exception")
Exception: this is exception
Traceback (most recent call last):
File "a.py", line 27, in <module>
main()
File "a.py", line 12, in main
raise Exception("this is exception")
Exception: this is exception
> python a.py -v
INFO:__main__:This is info
WARNING:__main__:This is warn
CRITICAL:__main__:This is fatal
WARNING:__main__:Failed with exception
Traceback (most recent call last):
File "a.py", line 12, in main
raise Exception("this is exception")
Exception: this is exception
Traceback (most recent call last):
File "a.py", line 27, in <module>
main()
File "a.py", line 12, in main
raise Exception("this is exception")
Exception: this is exception
> python a.py -vv
DEBUG:__main__:This is debug
INFO:__main__:This is info
WARNING:__main__:This is warn
CRITICAL:__main__:This is fatal
WARNING:__main__:Failed with exception
Traceback (most recent call last):
File "a.py", line 12, in main
raise Exception("this is exception")
Exception: this is exception
Traceback (most recent call last):
File "a.py", line 27, in <module>
main()
File "a.py", line 12, in main
raise Exception("this is exception")
Exception: this is exception
import logging

logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.debug('This message should go to the log file')
You can change the logging level to DEBUG/INFO/WARNING/ERROR/CRITICAL; the logging module can also record the timestamp for you, and it is configurable as well.
Check the link: python3 logging HowTo
You ought to use the logging module:
import logging
import sys

# Get the "root" level logger.
root = logging.getLogger()

# Set the log level to debug.
root.setLevel(logging.DEBUG)

# Add a handler to output log messages as a stream (to a file/handle)
# in this case, sys.stderr
ch = logging.StreamHandler(sys.stderr)

# This handler can log debug messages - multiple handlers could log
# different "levels" of log messages.
ch.setLevel(logging.DEBUG)

# Format the output to include the time, etc.
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)

# Add the handler to the root logger...
root.addHandler(ch)

# Get a logger object - this is what we use to emit log messages.
logger = logging.getLogger(__name__)

# Generate a bunch of log messages as a demonstration
for i in xrange(100):
    logger.debug('The value of i is: %s', i)

# Demonstrate a useful example of logging full exception tracebacks
# if the logger will output debug, but a warning in other modes.
try:
    a = 'a' + 12
except Exception as e:
    # Only log exceptions if debug is enabled..
    if logger.isEnabledFor(logging.DEBUG):
        # Log the traceback w/ the exception message.
        logger.exception('Something went wrong: %s', e)
    else:
        logger.warning('Something went wrong: %s', e)
Read more about it here: https://docs.python.org/2/library/logging.html
To disable logging just set the level (logging.DEBUG) to something else (like logging.INFO) . Note that it's also quite easy to redirect the messages elsewhere (like a file) or send some messages (debug) some place, and other messages (warning) to others.
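Routing different levels to different destinations just means attaching several handlers with different thresholds to the same logger; a small sketch (the logger name and in-memory buffers are illustrative):

```python
import io
import logging

logger = logging.getLogger("split_demo")
logger.setLevel(logging.DEBUG)
logger.propagate = False  # keep this demo self-contained

debug_buf, warn_buf = io.StringIO(), io.StringIO()

# This handler accepts everything from DEBUG upward.
debug_handler = logging.StreamHandler(debug_buf)
debug_handler.setLevel(logging.DEBUG)
logger.addHandler(debug_handler)

# This handler only accepts WARNING and above.
warn_handler = logging.StreamHandler(warn_buf)
warn_handler.setLevel(logging.WARNING)
logger.addHandler(warn_handler)

logger.debug("detail")       # reaches only debug_buf
logger.warning("important")  # reaches both buffers
```

Replacing one StreamHandler with a FileHandler gives the common pattern of verbose file logs plus a quieter console.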

python 2.7 - setting up 2 loggers

I'm trying to set up 2 loggers; unfortunately one of them doesn't write to the file. Here is a snippet of my code:
LOG_FILENAME = 'test.log'
LOG_FILENAME2 = 'test2.log'

error_counter = 0
error_logger = CustomLogger(LOG_FILENAME2, 'w', '%(asctime)s - %(levelname)s - %(message)s',
                            '%d/%m/%Y %H:%M:%S')
error_logger.set_level('info')
error_logger.basic_config()
print "This is the first logger: {0}".format(error_logger.get_file_path)
error_logger.log_message("This is a test message of the first instance")

warning_logger = CustomLogger(LOG_FILENAME, 'w', '%(asctime)s - %(levelname)s - %(message)s',
                              '%d/%m/%Y %H:%M:%S')
warning_logger.set_level('warning')
warning_logger.basic_config()
print "This is the the second logger: {0} ".format(warning_logger.get_file_path)
warning_logger.log_message("this is a test message of the second instance")
Here is the custom class that I've created:
class CustomLogger(object):
    LEVELS = {'debug': logging.DEBUG,
              'info': logging.INFO,
              'warning': logging.WARNING,
              'error': logging.ERROR,
              'critical': logging.CRITICAL}

    def __init__(self, i_file_path=None, i_filemode=None, i_format=None, i_date_format=None, i_log_level=None):
        self.__file_path = i_file_path
        self.__filemode = i_filemode
        self.__format = i_format
        self.__date_format = i_date_format
        self.__log_level = i_log_level

    def basic_config(self):
        logging.basicConfig(
            filename=self.__file_path,
            filemode=self.__filemode,
            format=self.__format,
            datefmt=self.__date_format,
            level=self.__log_level
        )

    def log_message(self, i_message):
        try:
            if None in (self.__file_path, self.__log_level, self.__filemode, self.__date_format, self.__format):
                raise ErrorLoggerPropertiesRequiredException()
        except ErrorLoggerPropertiesRequiredException as e:
            print "{0}".format(e.message)
        else:
            curr_logger = logging.getLogger(self.__file_path)
            print "writing to log {0}".format(i_message)
            curr_logger.log(self.__log_level, i_message)
It creates and writes only to the first logger. I've tried many things; I saw in the Python documentation that there is a parameter called disable_existing_loggers, which is True by default. I tried accessing it using logging.config.fileConfig(fname, defaults=None, disable_existing_loggers=False), but it seems that there is no such method.
I believe you are running into a problem I have run into many times:
logging.basicConfig()
is calling the module level configuration, and, according to the docs:
This function does nothing if the root logger already has handlers
configured for it.
In other words, it will only have an effect the first time it is called.
This function is only meant as a "last resort" config, if I understand it correctly.
You should instead configure each logger based on the "self" reference, rather than the global basic config...
A basic pattern I use for each module level logger is (note the import statement!):
import logging.config

try:
    logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
except Exception as e:
    # try to set up a default logger
    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s:%(name)s:%(lineno)d %(levelname)s : %(message)s")

main_logger = logging.getLogger(__name__)
I am sure I copied this from somewhere...
