I can create a named child logger, so that all the logs output by that logger are marked with its name. I can use that logger exclusively in my function/class/whatever.
However, if that code calls out to functions in another module that logs via the module-level logging functions (which proxy to the root logger), how can I ensure that those log messages go through the same logger (or are at least logged in the same way)?
For example:
main.py
import logging
import other
def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    do_stuff(logger)
other.py
import logging
def do_more_stuff():
    logging.info("doing other stuff")
Outputs:
$ python main.py
INFO:stuff:doing stuff
INFO:root:doing other stuff
I want both log lines to be marked with the name 'stuff', and I want to do this by changing main.py only.
How can I cause the logging calls in other.py to use a different logger without changing that module?
This is the solution I've come up with:
Using thread-local data to store the contextual information, and using a Filter on the root logger's handlers to add this information to LogRecords before they are emitted.
import contextlib
import functools
import logging
import threading

context = threading.local()
context.name = None

class ContextFilter(logging.Filter):
    def filter(self, record):
        if context.name is not None:
            record.name = "%s.%s" % (context.name, record.name)
        return True
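For the filter to take effect, it has to be attached to the root logger's handler(s), e.g. right after basicConfig() in main.py (a minimal sketch):

import logging

logging.basicConfig(level=logging.INFO)
for handler in logging.getLogger().handlers:
    handler.addFilter(ContextFilter())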
This is fine for me, because I'm using the logger name to indicate what task was being carried out when this message was logged.
I can then use context managers or decorators to make logging from a particular passage of code all appear as though it was logged from a particular child logger.
@contextlib.contextmanager
def logname(name):
    old_name = context.name
    if old_name is None:
        context.name = name
    else:
        context.name = "%s.%s" % (old_name, name)
    try:
        yield
    finally:
        context.name = old_name
def as_logname(name):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            with logname(name):
                return f(*args, **kwargs)
        return wrapper
    return decorator
So then, I can do:
with logname("stuff"):
    logging.info("I'm doing stuff!")
    do_more_stuff()
or:
@as_logname("things")
def do_things():
    logging.info("Starting to do things")
    do_more_stuff()
The key thing being that any logging that do_more_stuff() does will be logged as if it were logged with either a "stuff" or "things" child logger, without having to change do_more_stuff() at all.
This solution would have problems if you were going to have different handlers on different child loggers.
Use logging.setLoggerClass so that all loggers used by other modules use your logger subclass (emphasis mine):
Tells the logging system to use the class klass when instantiating a logger. The class should define __init__() such that only a name argument is required, and the __init__() should call Logger.__init__(). This function is typically called before any loggers are instantiated by applications which need to use custom logger behavior.
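A minimal sketch of how that could look (the StuffLogger name and the name-prefixing behaviour are purely illustrative, not from the docs); note that setLoggerClass() only affects loggers created after the call, and never the root logger, so module-level logging.info() calls that go to the root logger are not covered:

import logging

class StuffLogger(logging.Logger):
    # Hypothetical subclass that prefixes every record's logger name.
    def makeRecord(self, *args, **kwargs):
        record = super(StuffLogger, self).makeRecord(*args, **kwargs)
        record.name = "stuff.%s" % record.name
        return record

# Must be called before other modules call logging.getLogger()
logging.setLoggerClass(StuffLogger)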
This is what logging.handlers (or the handlers in the logging module) is for. In addition to creating your logger, you create one or more handlers to send the logging information to various places, and add them to the root logger. Most modules that do logging create a logger that they use for their own purposes but depend on the controlling script to create the handlers. Some frameworks decide to be super helpful and add handlers for you.
Read the logging docs, it's all there.
(edit)
logging.basicConfig() is a helper function that adds a single handler to the root logger. You can control the format string it uses with the 'format=' parameter. If all you want to do is have all modules display "stuff", then use logging.basicConfig(level=logging.INFO, format="%(levelname)s:stuff:%(message)s").
The logging.{info,warning,…} functions just call the respective methods on a Logger object called root (cf. the logging module source), so if you know the other module only calls the functions exported by the logging module, you can overwrite the logging module in other's namespace with your logger object:
import logging
import other
def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    # Overwrite other.logging with your just-generated logger object:
    other.logging = logger
    do_stuff(logger)
I want to use a memory logger in my project. It keeps track of the last n logging records. A minimal example main file looks like this:
import sys
import logging
from logging import StreamHandler
from test_module import do_stuff

logger = logging.getLogger(__name__)

class MemoryHandler(StreamHandler):
    def __init__(self, n_logs: int):
        StreamHandler.__init__(self)
        self.n_logs = n_logs
        self.my_records = []

    def emit(self, record):
        self.my_records.append(self.format(record))
        self.my_records = self.my_records[-self.n_logs:]

    def to_string(self):
        return '\n'.join(self.my_records)

if __name__ == '__main__':
    logging.basicConfig(stream=sys.stdout, level=logging.INFO)
    mem_handler = MemoryHandler(n_logs=10)
    logger.addHandler(mem_handler)
    logger.info('hello')
    do_stuff()
    print(mem_handler.to_string())
The test module I am importing do_stuff from looks like this:
import logging

logger = logging.getLogger(__name__)

def do_stuff():
    logger.info('doing stuff')
When I run the main file, two log statements appear: the one from main and the one from do_stuff. But the memory logger only receives "hello" and not "doing stuff":
INFO:__main__:hello
INFO:test_module:doing stuff
hello
I assume that this is because mem_handler is not added to the test_module logger. I can fix this by adding the mem_handler explicitly:
logging.getLogger('test_module').addHandler(mem_handler)
But in general I don't want to list all modules and add the mem_handler manually. How can I add the mem_handler to all loggers in my project?
The Python logging system is federated: there is a tree-like structure similar to the package structure. This structure works by logger name, and the levels of the hierarchy are separated by dots.
If you use the module's __name__ to get the logger, it will be equivalent to the dotted name of the package, for example:
package.subpackage.module
In this federated system a message is sent up the logger structure (unless one of the loggers is explicitly configured with propagate=False).
So, the best way to add a handler is to add it to the root logger at the top of the structure and make sure all loggers below propagate.
You can get the root logger with logging.getLogger() (without any name) and then add handlers or other configuration as you like.
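Applied to the memory-logger example above, that means attaching mem_handler to the root logger instead of to the __main__ logger; a sketch of the changed __main__ block:

if __name__ == '__main__':
    logging.basicConfig(stream=sys.stdout, level=logging.INFO)
    mem_handler = MemoryHandler(n_logs=10)
    # Attach to the root logger: records from test_module (and any other
    # propagating logger) travel up the hierarchy and reach this handler.
    logging.getLogger().addHandler(mem_handler)
    logger.info('hello')
    do_stuff()
    print(mem_handler.to_string())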
I want to change the log-level temporarily.
My current strategy is to use mocking.
with mock.patch(...):
    my_method_which_does_log()
All logging.info() calls inside the method should get ignored and not logged to the console.
How do I implement the ... so that logs of level INFO get ignored?
The code is single-process and single-thread and executed during testing only.
I want to change the log-level temporarily.
A way to do this without mocking is logging.disable:

import logging
import unittest

class TestSomething(unittest.TestCase):
    def setUp(self):
        logging.disable(logging.WARNING)

    def tearDown(self):
        logging.disable(logging.NOTSET)
Because logging.disable(logging.WARNING) suppresses all messages of level WARNING and below, each test in the TestSomething class will only show messages of level ERROR and above. (Since setUp() and tearDown() run before and after every test, the throttling is scoped to each test, which seems a bit cleaner than calling disable inside every test.)
To unset this temporary throttling, call logging.disable(logging.NOTSET):
If logging.disable(logging.NOTSET) is called, it effectively removes this overriding level, so that logging output again depends on the effective levels of individual loggers.
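If you want the throttling scoped to a with block instead of setUp()/tearDown(), the two calls pair naturally into a context manager; a small sketch (suppress_below is a made-up name):

import logging
from contextlib import contextmanager

@contextmanager
def suppress_below(level):
    logging.disable(level)  # ignore records of this severity and below
    try:
        yield
    finally:
        logging.disable(logging.NOTSET)  # remove the overriding level

with suppress_below(logging.INFO):
    my_method_which_does_log()  # INFO and DEBUG records are dropped here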
I don't think mocking is going to do what you want. The loggers are presumably already instantiated in this scenario, and level is an instance variable for each of the loggers (and also any of the handlers that each logger has).
You can create a custom context manager. That would look something like this:
Context Manager
import logging

class override_logging_level():
    "A context manager for temporarily setting the logging level"

    def __init__(self, level, process_handlers=True):
        self.saved_level = {}
        self.level = level
        self.process_handlers = process_handlers

    def __enter__(self):
        # Save the root logger
        self.save_logger('', logging.getLogger())
        # Iterate over the other loggers
        for name, logger in logging.Logger.manager.loggerDict.items():
            # Skip PlaceHolder objects, which have no level of their own
            if isinstance(logger, logging.Logger):
                self.save_logger(name, logger)

    def __exit__(self, exception_type, exception_value, traceback):
        # Restore the root logger
        self.restore_logger('', logging.getLogger())
        # Iterate over the loggers
        for name, logger in logging.Logger.manager.loggerDict.items():
            if isinstance(logger, logging.Logger):
                self.restore_logger(name, logger)

    def save_logger(self, name, logger):
        # Save off the level
        self.saved_level[name] = logger.level
        # Override the level
        logger.setLevel(self.level)
        if not self.process_handlers:
            return
        # Iterate over the handlers for this logger
        for handler in logger.handlers:
            # No reliable name. Just use the id of the object
            self.saved_level[id(handler)] = handler.level

    def restore_logger(self, name, logger):
        # It's possible that some intervening code added one or more loggers...
        if name not in self.saved_level:
            return
        # Restore the level for the logger
        logger.setLevel(self.saved_level[name])
        if not self.process_handlers:
            return
        # Iterate over the handlers for this logger
        for handler in logger.handlers:
            # Reconstruct the key for this handler
            key = id(handler)
            # Again, we could have possibly added more handlers
            if key not in self.saved_level:
                continue
            # Restore the level for the handler
            handler.setLevel(self.saved_level[key])
Test Code
# Setup for basic logging
logging.basicConfig(level=logging.ERROR)

# Create some loggers - the root logger and a couple others
lr = logging.getLogger()
l1 = logging.getLogger('L1')
l2 = logging.getLogger('L2')

# Won't see this message due to the level
lr.info("lr - msg 1")
l1.info("l1 - msg 1")
l2.info("l2 - msg 1")

# Temporarily override the level
with override_logging_level(logging.INFO):
    # Will see
    lr.info("lr - msg 2")
    l1.info("l1 - msg 2")
    l2.info("l2 - msg 2")

# Won't see, again...
lr.info("lr - msg 3")
l1.info("l1 - msg 3")
l2.info("l2 - msg 3")
Results
$ python ./main.py
INFO:root:lr - msg 2
INFO:L1:l1 - msg 2
INFO:L2:l2 - msg 2
Notes
The code would need to be enhanced to support multithreading; for example, logging.Logger.manager.loggerDict is a shared variable that's guarded by locks in the logging code.
Using @cryptoplex's approach of using context managers, here's the official version from the logging cookbook:
import logging
import sys

class LoggingContext(object):
    def __init__(self, logger, level=None, handler=None, close=True):
        self.logger = logger
        self.level = level
        self.handler = handler
        self.close = close

    def __enter__(self):
        if self.level is not None:
            self.old_level = self.logger.level
            self.logger.setLevel(self.level)
        if self.handler:
            self.logger.addHandler(self.handler)

    def __exit__(self, et, ev, tb):
        if self.level is not None:
            self.logger.setLevel(self.old_level)
        if self.handler:
            self.logger.removeHandler(self.handler)
        if self.handler and self.close:
            self.handler.close()
        # implicit return of None => don't swallow exceptions
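Usage looks something like this (the logger name 'foo' is arbitrary):

logger = logging.getLogger('foo')
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

logger.info('visible')
with LoggingContext(logger, level=logging.ERROR):
    logger.info('suppressed while the context is active')
logger.info('visible again')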
You could use dependency injection to pass the logger instance to the method you are testing. It is a bit more invasive, though, since you are changing your method a little; however, it gives you more flexibility.
Add the logger parameter to your method signature, something along the lines of:
def my_method(your_other_params, logger):
    pass
In your unit test file:
if __name__ == "__main__":
    # define the logger you want to use:
    logging.basicConfig(stream=sys.stderr)
    logging.getLogger("MyTests.test_my_method").setLevel(logging.DEBUG)

...

def test_my_method(self):
    test_logger = logging.getLogger("MyTests.test_my_method")
    # pass your logger to your method
    my_method(your_normal_parameters, test_logger)
python logger docs: https://docs.python.org/3/library/logging.html
I use this pattern to write all logs to a list. It ignores logs of level INFO and below.

import logging
from unittest import mock

logs = []

def my_log(logger_self, level, *args, **kwargs):
    if level > logging.INFO:
        logs.append((args, kwargs))

with mock.patch('logging.Logger._log', my_log):
    my_method_which_does_log()
How do I find out whether getLogger() returned a new or an existing logger object?
The motivation is that I don't want to addHandler repeatedly to the same logger.
There doesn't seem to be a particularly clean way to do this... However, if you must, the source-code is a pretty good place to start looking in order to figure this out. Note that logging.getLogger is mostly a wrapper around logging.Logger.manager.getLogger.
The Manager keeps a mapping of names -> Logger (or Placeholder). If it has a Logger in the slot designated by a given name, it will return it. Otherwise, it'll create and return a new Logger.
import logging

def has_logger(name):
    manager = logging.Logger.manager
    if name in manager.loggerDict:
        return isinstance(manager.loggerDict[name], logging.Logger)
    else:
        return False
Note that this only handles the case where you have named loggers. If you do logging.getLogger() (without passing a name), then it simply will return the root logger which is created at import time (and therefore, it is never new).
Another approach could be to get a logger and check that its handlers list is smaller than you'd expect (i.e. if it isn't an empty list, then handlers have been added):
def has_handlers(logger):
    """Return True if logger has handlers, False otherwise."""
    return bool(logger.handlers)
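Either check can then guard the addHandler() call, which was the original motivation; for example (the 'myapp' name is illustrative):

logger = logging.getLogger('myapp')
if not has_handlers(logger):
    logger.addHandler(logging.StreamHandler())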
getLogger() returns a singleton instance for a given logger name; to check that:
import logging

id_ = id(logging.getLogger())
for i in range(10):
    assert id_ == id(logging.getLogger())
For logging purposes I use a logger module that looks like this:
mylogger.py
import logging
import logging.config
from pathlib import Path

logging.config.fileConfig(str(Path(__file__).parent / "logging.conf"),
                          disable_existing_loggers=False)

def get(name="MYLOG", **kw):
    logger = logging.getLogger(name)
    logger.propagate = True
    if kw.get('level'):
        logger.setLevel(kw.get('level'))
    else:
        logger.setLevel(logging.ERROR)
    return logger
All handlers are defined in the logging.conf file.
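For reference, a minimal logging.conf in the fileConfig format might look like this (the handler and formatter names are illustrative):

[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=ERROR
handlers=console

[handler_console]
class=StreamHandler
formatter=simple
args=(sys.stderr,)

[formatter_simple]
format=%(asctime)s %(name)s %(levelname)s %(message)s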
As mentioned in https://stackoverflow.com/a/4150322/1526342, when logging to a child logger, it passes the message on to its parent, and its parent passes the message on to the root logger.
Now consider the following example:
import logging
import logging.handlers

child_logger = logging.getLogger(__name__)

f = logging.Formatter(
    fmt='%(asctime)s; %(name)s; %(filename)s:%(lineno)d:%(message)s',
    datefmt="%Y-%m-%d %H:%M:%S")

handler = logging.handlers.RotatingFileHandler('/tmp/info.log',
                                               encoding='utf8',
                                               maxBytes=500000000,
                                               backupCount=5)
handler.setFormatter(f)

child_logger.setLevel(logging.INFO)
child_logger.addHandler(handler)

child_logger.info('1 + 1 is %d', 1 + 1)
child_logger should have reported back to the root logger instead of printing the output to the child_logger's log file.
I'm confused.
As shown in this logging flow chart, loggers pass log records both to their own handlers and to their parent loggers. Try adding a handler to the parent logger, and you'll see the log record being processed there as well.
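A minimal sketch of that, reusing the question's setup:

import logging

root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(logging.StreamHandler())  # handles records propagated upward

child_logger = logging.getLogger(__name__)
child_logger.info('1 + 1 is %d', 1 + 1)
# The record now also appears on the root handler's stream, in addition
# to any handlers attached to child_logger itself.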
In this case, your 'child_logger' is your root logger. If you had initialized it like so:
logger = logging.getLogger('root')
child_logger = logging.getLogger('root.child')
child_logger is a child of logger as defined by:
The name is potentially a period-separated hierarchical value, like foo.bar.baz (though it could also be just plain foo, for example). Loggers that are further down in the hierarchical list are children of loggers higher up in the list. For example, given a logger with a name of foo, loggers with names of foo.bar, foo.bar.baz, and foo.bam are all descendants of foo. The logger name hierarchy is analogous to the Python package hierarchy, and identical to it if you organise your loggers on a per-module basis using the recommended construction logging.getLogger(__name__). That’s because in a module, __name__ is the module’s name in the Python package namespace.
If you do not want a child to propagate, you can set logger.propagate = False.
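For example, continuing the names from above ('child.log' is an illustrative path):

import logging

logging.basicConfig(level=logging.INFO)  # installs a handler on the root logger
child_logger = logging.getLogger('root.child')
child_logger.addHandler(logging.FileHandler('child.log'))
child_logger.propagate = False

# Goes only to child.log; the root handler never sees it.
child_logger.info('not propagated')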
Furthermore, if you would like only certain levels written to your child logger's file (i.e. only DEBUG) while higher levels still propagate, you can create a subclass of a handler, as in mine here:
from logging import DEBUG, INFO, WARN, ERROR, CRITICAL, handlers

class DebugRotatingFileHandler(handlers.RotatingFileHandler):
    def __init__(self, filename, mode, maxBytes, backupCount, encoding, delay):
        super(DebugRotatingFileHandler, self).__init__(
            filename, mode, maxBytes, backupCount, encoding, delay)

    def emit(self, record):
        if record.levelno != DEBUG:
            return
        super(DebugRotatingFileHandler, self).emit(record)
(Yes, I know there are some improvements that can be made, this is old code.)
For example, executing debug_logger.info("Info Message") would print nothing to debug_logger's specified file; however, if root_logger's level were set to INFO or DEBUG, the message would be printed to its file. I use this for debug logging, whilst still retaining the ability to have the logger make error message calls and print those to the root log.
I'm currently working on a project where we use a single root logger. I understand from reading about logging that this is a Bad Thing, but I'm struggling to find a nice alternative that keeps a benefit this approach gives us.
Something we do (partly to get around not having different loggers, but partly because it gives us a nice feature) is to have a log_prefix decorator.
e.g.
@log_prefix("Download")
def download_file():
    logging.info("Downloading file..")
    connection = get_connection("127.0.0.1")
    # Do other stuff
    return file

@log_prefix("GetConnection")
def get_connection(url):
    logging.info("Making connection")
    # Do other stuff
    logging.info("Finished making connection")
    return connection
This gives us some nicely formatted logs that might look like:
Download:Downloading file..
Download:GetConnection:Making Connection
Download:GetConnection:Other stuff
Download:GetConnection:Finished making connection
Download:Other stuff
This also means that if we have
@log_prefix("StartTelnetSession")
def start_telnet_session():
    logging.info("Starting telnet session..")
    connection = get_connection("127.0.0.1")
We get the same nesting under the new outer prefix:
StartTelnetSession:Starting telnet session..
StartTelnetSession:GetConnection:Making Connection
StartTelnetSession:GetConnection:Other stuff
StartTelnetSession:GetConnection:Finished making connection
This has proven to be quite useful for development and support.
I can see plenty of cases where just using a separate logger for the action would solve our problem, but I can also see cases where throwing away the nesting we have would make things worse.
Are there any patterns or common uses out there for nesting loggers? E.g.:
logging.getLogger("Download").getLogger("MakingConnection")
Or am I missing something here?
You could use a LoggerAdapter to add extra contextual information:
utils_logging.py:
import functools

def log_prefix(logger, label, prefix=list()):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            prefix.append(label)
            logger.extra['prefix'] = ':'.join(prefix)
            result = func(*args, **kwargs)
            prefix.pop()
            logger.extra['prefix'] = ':'.join(prefix)
            return result
        return wrapper
    return decorator
foo.py:
import logging
import utils_logging as UL
import bar

logger = logging.LoggerAdapter(logging.getLogger(__name__), {'prefix': ''})

@UL.log_prefix(logger, "Download")
def download_file():
    logger.info("Downloading file..")
    connection = bar.get_connection("127.0.0.1")

if __name__ == '__main__':
    logging.basicConfig(
        level=logging.INFO,
        format='%(prefix)s %(name)s %(levelname)s %(message)s')
    download_file()
    bar.get_connection('foo')
bar.py:
import logging
import utils_logging as UL

logger = logging.LoggerAdapter(logging.getLogger(__name__), {'prefix': ''})

@UL.log_prefix(logger, "GetConnection")
def get_connection(url):
    logger.info("Making connection")
    logger.info("Finished making connection")
yields
Download __main__ INFO Downloading file..
Download:GetConnection bar INFO Making connection
Download:GetConnection bar INFO Finished making connection
GetConnection bar INFO Making connection
GetConnection bar INFO Finished making connection
Note: I don't think it is a good idea to have a new Logger instance for each prefix, because these instances are not garbage collected. All you need is for some prefix variable to take on a different value depending on context. You don't need a new Logger instance for that -- one LoggerAdapter will do.
Logger names are hierarchical.
logger = logging.getLogger("Download.MakingConnection")
This logger would inherit any configuration from logging.getLogger("Download").
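A quick way to see that inheritance: getEffectiveLevel() walks up the hierarchy until it finds a logger with a configured level.

import logging

parent = logging.getLogger("Download")
parent.setLevel(logging.DEBUG)

child = logging.getLogger("Download.MakingConnection")
assert child.getEffectiveLevel() == logging.DEBUG  # inherited from "Download"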
Python 2.7 also added a convenience function for accessing descendants of an arbitrary logger.
logger = logging.getLogger("Download.MakingConnection")
parent_logger = logging.getLogger("Download")
child_logger = parent_logger.getChild("MakingConnection")
assert logger is child_logger
Here is an alternative which uses a logging.Filter to modify record.msg. By modifying the message instead of adding a %(prefix)s field, the format does not need to change. This will make it easier to mix loggers which make use of log_prefix and those that don't.
To get the prefix, the logger should be initialized with a call to add_prefix_filter:
logger = UL.add_prefix_filter(logging.getLogger(__name__))
To append labels to the prefix, the functions should be decorated with @log_prefix(label), as before.
utils_logging.py:
import functools
import logging

prefix = list()

def log_prefix(label):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            prefix.append(label)
            try:
                result = func(*args, **kwargs)
            finally:
                prefix.pop()
            return result
        return wrapper
    return decorator
class PrefixFilter(logging.Filter):
    def filter(self, record):
        if prefix:
            record.msg = '{}:{}'.format(':'.join(prefix), record.msg)
        return True

def add_prefix_filter(logger):
    logger.addFilter(PrefixFilter())
    return logger
main.py:
import logging
import bar
import utils_logging as UL

logger = UL.add_prefix_filter(logging.getLogger(__name__))

@UL.log_prefix("Download")
def download_file():
    logger.info("Downloading file..")
    connection = bar.get_connection("127.0.0.1")

if __name__ == '__main__':
    logging.basicConfig(
        level=logging.INFO,
        format='%(message)s')
    logger.info('Starting...')
    download_file()
    bar.get_connection('foo')
bar.py:
import logging
import utils_logging as UL

logger = UL.add_prefix_filter(logging.getLogger(__name__))

@UL.log_prefix("GetConnection")
def get_connection(url):
    logger.info("Making connection")
    logger.info("Finished making connection")
yields
Starting...
Download:Downloading file..
Download:GetConnection:Making connection
Download:GetConnection:Finished making connection
GetConnection:Making connection
GetConnection:Finished making connection