I want to change the log-level temporarily.
My current strategy is to use mocking.
with mock.patch(...):
    my_method_which_does_log()
All logging.info() calls inside the method should get ignored and not logged to the console.
How can I implement the ... so that messages of level INFO are ignored?
The code is single-process and single-thread and executed during testing only.
A way to do this without mocking is logging.disable
import logging
import unittest

class TestSomething(unittest.TestCase):
    def setUp(self):
        logging.disable(logging.WARNING)

    def tearDown(self):
        logging.disable(logging.NOTSET)

This example suppresses all messages of level WARNING and below for every test in the TestSomething class, so only ERROR and CRITICAL messages get through. Because setUp and tearDown run around each test, the throttling is applied and removed automatically, which seems a bit cleaner than calling disable inside individual tests.
To unset this temporary throttling, call logging.disable(logging.NOTSET):
If logging.disable(logging.NOTSET) is called, it effectively removes this overriding level, so that logging output again depends on the effective levels of individual loggers.
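If you prefer a with-block over setUp/tearDown, the same idea wraps up neatly in a context manager. A minimal sketch (suppress_logging is my own name here, not a stdlib helper):

import contextlib
import logging

@contextlib.contextmanager
def suppress_logging(level=logging.INFO):
    """Temporarily drop all messages of `level` and below."""
    logging.disable(level)
    try:
        yield
    finally:
        # NOTSET removes the global override entirely, so this assumes
        # no outer caller has its own logging.disable in effect
        logging.disable(logging.NOTSET)

with suppress_logging(logging.INFO):
    logging.info("this is dropped")
logging.warning("this still gets through")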
I don't think mocking is going to do what you want. The loggers are presumably already instantiated in this scenario, and level is an instance attribute on each logger (and also on any handlers that each logger has).
You can create a custom context manager. That would look something like this:
Context Manager
import logging

class override_logging_level():
    "A context manager for temporarily setting the logging level"

    def __init__(self, level, process_handlers=True):
        self.saved_level = {}
        self.level = level
        self.process_handlers = process_handlers

    def __enter__(self):
        # Save the root logger
        self.save_logger('', logging.getLogger())
        # Iterate over the other loggers
        for name, logger in logging.Logger.manager.loggerDict.items():
            self.save_logger(name, logger)

    def __exit__(self, exception_type, exception_value, traceback):
        # Restore the root logger
        self.restore_logger('', logging.getLogger())
        # Iterate over the loggers
        for name, logger in logging.Logger.manager.loggerDict.items():
            self.restore_logger(name, logger)

    def save_logger(self, name, logger):
        # loggerDict can also contain PlaceHolder objects, which have no level
        if not isinstance(logger, logging.Logger):
            return
        # Save off the level
        self.saved_level[name] = logger.level
        # Override the level
        logger.setLevel(self.level)
        if not self.process_handlers:
            return
        # Iterate over the handlers for this logger
        for handler in logger.handlers:
            # No reliable name. Just use the id of the object
            self.saved_level[id(handler)] = handler.level
            # Override the handler's level as well
            handler.setLevel(self.level)

    def restore_logger(self, name, logger):
        # It's possible that some intervening code added one or more loggers...
        if name not in self.saved_level:
            return
        # Restore the level for the logger
        logger.setLevel(self.saved_level[name])
        if not self.process_handlers:
            return
        # Iterate over the handlers for this logger
        for handler in logger.handlers:
            # Reconstruct the key for this handler
            key = id(handler)
            # Again, we could have possibly added more handlers
            if key not in self.saved_level:
                continue
            # Restore the level for the handler
            handler.setLevel(self.saved_level[key])
Test Code
# Setup for basic logging
logging.basicConfig(level=logging.ERROR)

# Create some loggers - the root logger and a couple of others
lr = logging.getLogger()
l1 = logging.getLogger('L1')
l2 = logging.getLogger('L2')

# Won't see these messages due to the level
lr.info("lr - msg 1")
l1.info("l1 - msg 1")
l2.info("l2 - msg 1")

# Temporarily override the level
with override_logging_level(logging.INFO):
    # Will see these
    lr.info("lr - msg 2")
    l1.info("l1 - msg 2")
    l2.info("l2 - msg 2")

# Won't see these, again...
lr.info("lr - msg 3")
l1.info("l1 - msg 3")
l2.info("l2 - msg 3")
Results
$ python ./main.py
INFO:root:lr - msg 2
INFO:L1:l1 - msg 2
INFO:L2:l2 - msg 2
Notes
The code would need to be enhanced to support multithreading; for example, logging.Logger.manager.loggerDict is a shared variable that's guarded by locks in the logging code.
Building on @cryptoplex's context-manager approach, here's the official version from the logging cookbook:
import logging
import sys

class LoggingContext(object):
    def __init__(self, logger, level=None, handler=None, close=True):
        self.logger = logger
        self.level = level
        self.handler = handler
        self.close = close

    def __enter__(self):
        if self.level is not None:
            self.old_level = self.logger.level
            self.logger.setLevel(self.level)
        if self.handler:
            self.logger.addHandler(self.handler)

    def __exit__(self, et, ev, tb):
        if self.level is not None:
            self.logger.setLevel(self.old_level)
        if self.handler:
            self.logger.removeHandler(self.handler)
        if self.handler and self.close:
            self.handler.close()
        # implicit return of None => don't swallow exceptions
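For instance, to temporarily raise the threshold on one logger for the duration of a block (a quick usage sketch with the class above):

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('foo')

logger.info('visible before the block')
with LoggingContext(logger, level=logging.ERROR):
    logger.info('suppressed inside the block')
logger.info('visible again afterwards')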
You could use dependency injection to pass the logger instance into the method you are testing. It is a bit more invasive, since you are changing your method's signature, but it gives you more flexibility.
Add the logger parameter to your method signature, something along the lines of:
def my_method(your_other_params, logger):
    pass
In your unit test file:
import logging
import sys

if __name__ == "__main__":
    # define the logger you want to use:
    logging.basicConfig(stream=sys.stderr)
    logging.getLogger("MyTests.test_my_method").setLevel(logging.DEBUG)
    ...

def test_my_method(self):
    test_logger = logging.getLogger("MyTests.test_my_method")
    # pass your logger to your method
    my_method(your_normal_parameters, test_logger)
python logger docs: https://docs.python.org/3/library/logging.html
I use this pattern to write all logs to a list. It ignores logs of level INFO and lower.

import logging
from unittest import mock

logs = []

def my_log(logger_self, level, *args, **kwargs):
    if level > logging.INFO:
        logs.append((args, kwargs))

with mock.patch('logging.Logger._log', my_log):
    my_method_which_does_log()
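A test can then assert on the captured records. A small sketch, assuming the logs list and patch from above (the tuple layout follows Logger._log's internal call signature, so treat it as an implementation detail):

import logging
from unittest import mock

with mock.patch('logging.Logger._log', my_log):
    logging.getLogger().warning("boom %s", 42)

# first positional element is the message template, second the format args
assert logs[0][0][0] == "boom %s"
assert logs[0][0][1] == (42,)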
Related
I'm setting the log level based on a configuration. Currently I call Settings() from the inside of Logger, but I'd like to pass it instead or set it globally - for all loggers.
I do not want to call getLogger(name, debug=Settings().isDebugMode()).
Any ideas? Thanks!
import logging

class Logger(logging.getLoggerClass()):
    def __init__(self, name):
        super().__init__(name)
        debug_mode = Settings().isDebugMode()
        if debug_mode:
            self.setLevel(level=logging.DEBUG)
        else:
            self.setLevel(level=logging.INFO)

def getLogger(name):
    logging.setLoggerClass(Logger)
    return logging.getLogger(name)
The usual way to achieve this would be to only set a level on the root logger and keep all other loggers as NOTSET. This will have the effect that every logger works as if they had the level that is set on root. You can read about the mechanics of how that works in the documentation of setLevel().
Here is what that would look like in code:
import logging
root = logging.getLogger()
root.setLevel(logging.DEBUG) # set this based on your Settings().isDebugMode()
logger = logging.getLogger('some_logger')
sh = logging.StreamHandler()
sh.setFormatter(logging.Formatter('%(name)s: %(message)s'))
logger.addHandler(sh)
logger.debug('this will print')
root.setLevel(logging.INFO) # change level of all loggers (global log level)
logger.debug('this will not print')
I have a logger in one of my files with a handler attached to it, and its level is set to DEBUG. Despite that, when running my program, the debug statement is not printed to the console. The root logger is still set to WARNING, but I understood that if I add a handler to a logger, the record is passed to that handler and logged before being passed on to the parent loggers (eventually reaching the root). That doesn't seem to be the case. For context, here is the code in the file:
logger = logging.getLogger(__name__)
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
logger.addHandler(console_handler)


class OpenBST:
    app_data_folder = Path(user_data_dir(appname=lib_info.lib_name,
                                          appauthor="HydrOffice"))

    def __init__(self,
                 progress: CliProgress = CliProgress(use_logger=True),
                 app_data_path: Path = app_data_folder) -> None:
        app_data_path.mkdir(exist_ok=True, parents=True)

        self.progress = progress
        self._prj = None
        self._app_info = OpenBSTInfo(app_data_path=app_data_path)
        self.current_project = None

        logging.debug("App instance started")
And below is where it's called in an example script:
import logging
import os
from pathlib import Path

from hyo2.openbst.lib.openbst import OpenBST

logging.basicConfig()
logger = logging.getLogger(__name__)

project_directory = Path(os.path.expanduser("~/Documents/openbst_projects"))
project_name = "test_project"

# Create App instance
obst = OpenBST()
Why doesn't logger.debug('App instance started') print to the console?
EDIT:
The code below includes the suggestion from @Jesse R.
__init__ was modified as such:
class OpenBST:
    app_data_folder = Path(user_data_dir(appname=lib_info.lib_name,
                                          appauthor="HydrOffice"))

    def __init__(self,
                 progress: CliProgress = CliProgress(use_logger=True),
                 app_data_path: Path = app_data_folder) -> None:
        app_data_path.mkdir(exist_ok=True, parents=True)

        logger = logging.getLogger(__name__)
        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.DEBUG)
        logger.addHandler(console_handler)

        self.progress = progress
        self._prj = None
        self._app_info = OpenBSTInfo(app_data_path=app_data_path)
        self.current_project = None

        logger.debug("App instance started")
No output is generated (exit code 0).
My understanding was that a handler attached to a logger would emit the record before passing it up the chain (where the root is still set to WARNING).
You call logging.debug("App instance started"), which goes through the root logger, not the logger you declared with getLogger. You can set the debug level universally for logging with
logging.basicConfig(level=logging.DEBUG)
Also, declaring logger = logging.getLogger(__name__) outside of the class doesn't affect that call, since it logs through the module-level logging functions rather than through your logger. You can create a new logger by moving that declaration inside the class. For example:
import logging

logging.basicConfig()  # install a root handler; the root level stays WARNING

class SampleClass:
    def __init__(self):
        logger = logging.getLogger(__name__)
        logger.setLevel(logging.DEBUG)
        logger.info('will log')
        logging.info('will not log')  # goes through the root logger, still at WARNING

SampleClass()
Running:
$ python logtest.py
INFO:__main__:will log
How do I find out whether getLogger() returned a new or an existing logger object?
The motivation is that I don't want to addHandler repeatedly to the same logger.
There doesn't seem to be a particularly clean way to do this... However, if you must, the source code is a pretty good place to start looking in order to figure it out. Note that logging.getLogger is mostly a wrapper around logging.Logger.manager.getLogger.
The Manager keeps a mapping of names -> Logger (or Placeholder). If it has a Logger in the slot designated by a given name, it will return it. Otherwise, it'll return a new Logger.
import logging

def has_logger(name):
    manager = logging.Logger.manager
    if name in manager.loggerDict:
        return isinstance(manager.loggerDict[name], logging.Logger)
    else:
        return False
Note that this only handles the case where you have named loggers. If you do logging.getLogger() (without passing a name), then it simply will return the root logger which is created at import time (and therefore, it is never new).
Another approach could be to get a logger and check that its handlers list is smaller than you'd expect (i.e. if it isn't an empty list, then handlers have been added).
def has_handlers(logger):
    """Return True if logger has handlers, False otherwise."""
    return bool(logger.handlers)
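If the goal is just to avoid attaching duplicate handlers, that check can make the setup idempotent. A sketch under that assumption (get_configured_logger is a hypothetical helper name, not part of the logging module):

import logging

def get_configured_logger(name):
    logger = logging.getLogger(name)
    if not has_handlers(logger):  # only attach a handler on first use
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter('%(name)s: %(message)s'))
        logger.addHandler(handler)
    return logger

# calling it twice returns the same logger with a single handler
a = get_configured_logger('app')
b = get_configured_logger('app')
assert a is b and len(a.handlers) == 1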
getLogger returns a singleton instance per logger name; to check that:
import logging

id_ = id(logging.getLogger())
for i in range(10):
    assert id_ == id(logging.getLogger())
For logging purposes I usually use a logger module looking like this:
mylogger.py
import logging
import logging.config
from pathlib import Path

logging.config.fileConfig(str(Path(__file__).parent / "logging.conf"),
                          disable_existing_loggers=False)

def get(name="MYLOG", **kw):
    logger = logging.getLogger(name)
    logger.propagate = True
    if kw.get('level'):
        logger.setLevel(kw.get('level'))
    else:
        logger.setLevel(logging.ERROR)
    return logger
All handlers are defined in the logging.conf
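The logging.conf itself isn't shown; for reference, a minimal file that fileConfig accepts and that defines everything on the root logger might look like the following (an illustrative sketch, not the author's actual config):

[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=ERROR
handlers=console

[handler_console]
class=StreamHandler
formatter=simple
args=(sys.stderr,)

[formatter_simple]
format=%(asctime)s %(name)s %(levelname)s %(message)s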
I'm currently working on a project where we use a single root logger. I understand from reading about logging that this is a Bad Thing, but I'm struggling to find a nice solution that keeps the benefit this approach gives us.
Something we do (partly to get around not having different loggers, but it also gives us a nice feature) is to have a log_prefix decorator.
e.g.
#log_prefix("Download")
def download_file():
logging.info("Downloading file..")
connection = get_connection("127.0.0.1")
//Do other stuff
return file
#log_prefix("GetConnection")
def get_connection(url):
logging.info("Making connection")
//Do other stuff
logging.info("Finished making connection")
return connection
This gives us some nicely formatting logs that might look like:
Download:Downloading file..
Download:GetConnection:Making Connection
Download:GetConnection:Other stuff
Download:GetConnection:Finished making connection
Download:Other stuff
This also means that if we have
#log_prefix("StartTelnetSession")
logging.info("Starting telnet session..")
connection = get_connection("127.0.0.1")
We get the new prefix on the nested calls as well:
StartTelnetSession:Starting telnet session..
StartTelnetSession:GetConnection:Making Connection
StartTelnetSession:GetConnection:Other stuff
StartTelnetSession:GetConnection:Finished making connection
This has proven to be quite useful for development and support.
I can see plenty of cases where actually just using a separate logger for the action would solve our problem but I can also see cases where throwing away the nesting we have will make things worse.
Are there any patterns or common uses out there for nesting loggers? i.e.
logging.getLogger("Download").getLogger("MakingConnection")
Or am I missing something here?
You could use a LoggerAdapter to add extra contextual information:
utils_logging.py:
import functools

def log_prefix(logger, label, prefix=list()):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            prefix.append(label)
            logger.extra['prefix'] = ':'.join(prefix)
            try:
                result = func(*args, **kwargs)
            finally:
                # pop even if func raises, so the prefix can't get stuck
                prefix.pop()
                logger.extra['prefix'] = ':'.join(prefix)
            return result
        return wrapper
    return decorator
foo.py:
import logging

import utils_logging as UL
import bar

logger = logging.LoggerAdapter(logging.getLogger(__name__), {'prefix': ''})

@UL.log_prefix(logger, "Download")
def download_file():
    logger.info("Downloading file..")
    connection = bar.get_connection("127.0.0.1")

if __name__ == '__main__':
    logging.basicConfig(
        level=logging.INFO,
        format='%(prefix)s %(name)s %(levelname)s %(message)s')
    download_file()
    bar.get_connection('foo')
bar.py:
import logging

import utils_logging as UL

logger = logging.LoggerAdapter(logging.getLogger(__name__), {'prefix': ''})

@UL.log_prefix(logger, "GetConnection")
def get_connection(url):
    logger.info("Making connection")
    logger.info("Finished making connection")
yields
Download __main__ INFO Downloading file..
Download:GetConnection bar INFO Making connection
Download:GetConnection bar INFO Finished making connection
GetConnection bar INFO Making connection
GetConnection bar INFO Finished making connection
Note: I don't think it is a good idea to have a new Logger instance for each prefix, because these instances are not garbage collected. All you need is for some prefix variable to take on a different value depending on context. You don't need a new Logger instance for that -- one LoggerAdapter will do.
Logger names are hierarchical.
logger = logging.getLogger("Download.MakingConnection")
This logger would inherit any configuration from logging.getLogger("Download").
Python 2.7 also added a convenience function for accessing descendants of an arbitrary logger.
logger = logging.getLogger("Download.MakingConnection")
parent_logger = logging.getLogger("Download")
child_logger = parent_logger.getChild("MakingConnection")
assert logger is child_logger
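For example, configuring only the parent is enough for the child to pick it up (a quick sketch):

import logging

parent = logging.getLogger("Download")
parent.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(name)s: %(message)s'))
parent.addHandler(handler)

child = parent.getChild("MakingConnection")
# the child has no level or handler of its own, but inherits both
child.info("connecting")  # prints: Download.MakingConnection: connecting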
Here is an alternative which uses a logging.Filter to modify record.msg. Because it modifies the message instead of adding a %(prefix)s field, the format string does not need to change. This makes it easier to mix loggers that use log_prefix with those that don't.
To get the prefix, the logger should be initialized with a call to add_prefix_filter:
logger = UL.add_prefix_filter(logging.getLogger(__name__))
To append labels to the prefix, the functions should be decorated with @log_prefix(label), as before.
utils_logging.py:
import functools
import logging

prefix = list()

def log_prefix(label):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            prefix.append(label)
            try:
                result = func(*args, **kwargs)
            finally:
                prefix.pop()
            return result
        return wrapper
    return decorator

class PrefixFilter(logging.Filter):
    def filter(self, record):
        if prefix:
            record.msg = '{}:{}'.format(':'.join(prefix), record.msg)
        return True

def add_prefix_filter(logger):
    logger.addFilter(PrefixFilter())
    return logger
main.py:
import logging

import bar
import utils_logging as UL

logger = UL.add_prefix_filter(logging.getLogger(__name__))

@UL.log_prefix("Download")
def download_file():
    logger.info("Downloading file..")
    connection = bar.get_connection("127.0.0.1")

if __name__ == '__main__':
    logging.basicConfig(
        level=logging.INFO,
        format='%(message)s')
    logger.info('Starting...')
    download_file()
    bar.get_connection('foo')
bar.py:
import logging

import utils_logging as UL

logger = UL.add_prefix_filter(logging.getLogger(__name__))

@UL.log_prefix("GetConnection")
def get_connection(url):
    logger.info("Making connection")
    logger.info("Finished making connection")
yields
Starting...
Download:Downloading file..
Download:GetConnection:Making connection
Download:GetConnection:Finished making connection
GetConnection:Making connection
GetConnection:Finished making connection
I can create a named child logger so that all the logs output by that logger are marked with its name, and use that logger exclusively in my function/class/whatever. However, if that code calls out to functions in another module that logs via the module-level logging functions (which proxy to the root logger), how can I ensure that those log messages go through the same logger (or are at least logged the same way)?
For example:
main.py
import logging

import other

def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    do_stuff(logger)
other.py
import logging

def do_more_stuff():
    logging.info("doing other stuff")
Outputs:
$ python main.py
INFO:stuff:doing stuff
INFO:root:doing other stuff
I want to be able to cause both log lines to be marked with the name 'stuff', and I want to be able to do this only changing main.py.
How can I cause the logging calls in other.py to use a different logger without changing that module?
This is the solution I've come up with:
Using thread local data to store the contextual information, and using a Filter on the root loggers handlers to add this information to LogRecords before they are emitted.
import logging
import threading

context = threading.local()
context.name = None

class ContextFilter(logging.Filter):
    def filter(self, record):
        if context.name is not None:
            record.name = "%s.%s" % (context.name, record.name)
        return True
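For the filter to run, it has to be attached to the root logger's handler(s), e.g. right after basicConfig. A sketch of that wiring, as the description above suggests:

import logging

logging.basicConfig(level=logging.INFO)
for handler in logging.getLogger().handlers:
    # rewrite record.name on every record the root handler emits
    handler.addFilter(ContextFilter())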
This is fine for me, because I'm using the logger name to indicate what task was being carried out when this message was logged.
I can then use context managers or decorators to make logging from a particular passage of code all appear as though it was logged from a particular child logger.
import contextlib
import functools

@contextlib.contextmanager
def logname(name):
    old_name = context.name
    if old_name is None:
        context.name = name
    else:
        context.name = "%s.%s" % (old_name, name)
    try:
        yield
    finally:
        context.name = old_name

def as_logname(name):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            with logname(name):
                return f(*args, **kwargs)
        return wrapper
    return decorator
So then, I can do:
with logname("stuff"):
    logging.info("I'm doing stuff!")
    do_more_stuff()
or:
@as_logname("things")
def do_things():
    logging.info("Starting to do things")
    do_more_stuff()
The key thing being that any logging that do_more_stuff() does will be logged as if it were logged with either a "stuff" or "things" child logger, without having to change do_more_stuff() at all.
This solution would have problems if you were going to have different handlers on different child loggers.
Use logging.setLoggerClass so that all loggers used by other modules use your logger subclass (emphasis mine):
Tells the logging system to use the class klass when instantiating a logger. The class should define __init__() such that only a name argument is required, and the __init__() should call Logger.__init__(). This function is typically called before any loggers are instantiated by applications which need to use custom logger behavior.
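A minimal sketch of that approach (AppLogger is a made-up example; any Logger subclass whose __init__ takes only a name works, such as the one from the Settings question above):

import logging

class AppLogger(logging.Logger):
    def __init__(self, name):
        super().__init__(name)
        self.setLevel(logging.DEBUG)  # e.g. driven by your Settings() here

logging.setLoggerClass(AppLogger)  # must run before loggers are created
logger = logging.getLogger("some.module")
assert isinstance(logger, AppLogger)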
This is what logging.handlers (or the handlers in the logging module) is for. In addition to creating your logger, you create one or more handlers to send the logging information to various places and add them to the root logger. Most modules that do logging create a logger that they use for their own purposes but depend on the controlling script to create the handlers. Some frameworks decide to be super helpful and add handlers for you.
Read the logging docs, it's all there.
(edit)
logging.basicConfig() is a helper function that adds a single handler to the root logger. You can control the format string it uses with the 'format=' parameter. If all you want to do is have all modules display "stuff", then use logging.basicConfig(level=logging.INFO, format="%(levelname)s:stuff:%(message)s").
The logging.{info,warning,…} functions just call the respective methods on a Logger object called root (cf. the logging module source), so if you know the other module only ever calls the functions exported by the logging module, you can overwrite the logging name in other's namespace with your logger object:
import logging

import other

def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    # Overwrite other.logging with your just-generated logger object:
    other.logging = logger
    do_stuff(logger)