Python logging: reverse effects of disable()

The logging docs say that calling the logging.disable(lvl) function can "temporarily throttle logging output down across the whole application," but I'm having trouble finding the "temporarily." Take, for example, the following script:
import logging
logging.disable(logging.CRITICAL)
logging.warning("test")
# Something here
logging.warning("test")
So far, I haven't been able to find the Something here that will re-enable the logging system as a whole and allow the second warning to get through. Is there a reverse to disable()?

logging.disable(logging.NOTSET)
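Filling that into the question's script (a minimal sketch):

import logging

logging.disable(logging.CRITICAL)
logging.warning("test")            # suppressed
logging.disable(logging.NOTSET)    # removes the overriding level
logging.warning("test")            # emitted: WARNING:root:test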

Based on the answer by @unutbu, I created a context manager:
import logging

log = logging.getLogger(__name__)

class SuppressLogging:
    """
    Context manager that suppresses logging for some controlled code.
    """
    def __init__(self, loglevel):
        self.loglevel = loglevel

    def __enter__(self):
        # Disabling here rather than in __init__ means suppression
        # starts on entry, not at construction time.
        logging.disable(self.loglevel)
        return self

    def __exit__(self, exctype, excval, exctraceback):
        logging.disable(logging.NOTSET)
        return False  # don't swallow exceptions

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    log.info("log this")
    with SuppressLogging(logging.WARNING):
        log.info("don't log this")
        log.warning("don't log this warning")
        log.error("log this error while up to WARNING suppressed")
    log.info("log this again")

Related

How to log different classes into different files?

Let's say I have different classes defined like this
class PS1:
    def __init__(self):
        pass

    def do_sth(self):
        logging.info('PS1 do sth')

class PS2:
    def __init__(self):
        pass

    def do_sth(self):
        logging.info('PS2 do sth')
How can I make PS1 log into PS1.log and PS2 log into PS2.log? I know I can set up loggers and file handlers and call something like ps1_logger.info instead of logging.info. However, my current code base already has hundreds of logging.info calls; do I have to change them all?
When you call logging methods at the module level you are creating logs directly on the root logger, so you can no longer separate them by logger. The only option left is to separate your logs by Handler, using a filter. Since you don't want to update your logging calls to add any information about where the call happens, you have to inspect the stack to find the calling class. The code below does this, but I strongly recommend not using this solution in production; refactor your code so that not everything goes to the root logger. Look into logging adapters and/or the extra keyword argument for an alternative that modifies the logging calls (a sketch of the extra-based approach follows the code below).
import logging
import inspect

root = logging.getLogger()
ps1_handler = logging.FileHandler('ps1.log')
ps2_handler = logging.FileHandler('ps2.log')

def make_class_filter(class_name):
    def filter(record):
        # Walk up the call stack looking for a frame whose 'self'
        # belongs to the class we are interested in.
        for fi in inspect.stack():
            if 'self' in fi[0].f_locals and class_name in str(fi[0].f_locals['self']):
                return True
        return False
    return filter

ps1_handler.addFilter(make_class_filter('PS1'))
ps2_handler.addFilter(make_class_filter('PS2'))
root.addHandler(ps1_handler)
root.addHandler(ps2_handler)

class PS1:
    def do_sth(self):
        logging.warning('PS1 do sth')

class PS2:
    def do_sth(self):
        logging.warning('PS2 do sth')

PS1().do_sth()
PS2().do_sth()
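For comparison, here is a minimal sketch of the extra-based alternative mentioned above. It does require touching each logging call, and the classname record attribute is just an illustrative choice:

import logging

# One handler per destination, each filtering on a custom record attribute.
ps1_handler = logging.FileHandler('ps1.log')
ps1_handler.addFilter(lambda record: getattr(record, 'classname', '') == 'PS1')
ps2_handler = logging.FileHandler('ps2.log')
ps2_handler.addFilter(lambda record: getattr(record, 'classname', '') == 'PS2')

root = logging.getLogger()
root.addHandler(ps1_handler)
root.addHandler(ps2_handler)

# Each call now tags its record explicitly via 'extra'.
logging.warning('PS1 do sth', extra={'classname': 'PS1'})
logging.warning('PS2 do sth', extra={'classname': 'PS2'})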

Change log-level via mocking

I want to change the log-level temporarily.
My current strategy is to use mocking.
with mock.patch(...):
    my_method_which_does_log()
All logging.info() calls inside the method should get ignored and not logged to the console.
How to implement the ... to make logs of level INFO get ignored?
The code is single-process and single-thread and executed during testing only.
I want to change the log-level temporarily.
A way to do this without mocking is logging.disable
import logging
import unittest

class TestSomething(unittest.TestCase):
    def setUp(self):
        logging.disable(logging.WARNING)

    def tearDown(self):
        logging.disable(logging.NOTSET)
Since logging.disable(lvl) suppresses all messages of severity lvl and below, this example only lets messages above WARNING (i.e., ERROR and CRITICAL) through for each test in the TestSomething class. Putting the calls in setUp and tearDown is a bit cleaner than calling disable at the start and end of every test.
To unset this temporary throttling, call logging.disable(logging.NOTSET):
If logging.disable(logging.NOTSET) is called, it effectively removes this overriding level, so that logging output again depends on the effective levels of individual loggers.
I don't think mocking is going to do what you want. The loggers are presumably already instantiated in this scenario, and level is an instance variable for each of the loggers (and also any of the handlers that each logger has).
You can create a custom context manager. That would look something like this:
Context Manager
import logging

class override_logging_level():
    "A context manager for temporarily setting the logging level"

    def __init__(self, level, process_handlers=True):
        self.saved_level = {}
        self.level = level
        self.process_handlers = process_handlers

    def __enter__(self):
        # Save the root logger
        self.save_logger('', logging.getLogger())
        # Iterate over the other loggers
        for name, logger in logging.Logger.manager.loggerDict.items():
            self.save_logger(name, logger)

    def __exit__(self, exception_type, exception_value, traceback):
        # Restore the root logger
        self.restore_logger('', logging.getLogger())
        # Iterate over the loggers
        for name, logger in logging.Logger.manager.loggerDict.items():
            self.restore_logger(name, logger)

    def save_logger(self, name, logger):
        # loggerDict may contain PlaceHolder objects, which have no level
        if not isinstance(logger, logging.Logger):
            return
        # Save off the level
        self.saved_level[name] = logger.level
        # Override the level
        logger.setLevel(self.level)
        if not self.process_handlers:
            return
        # Iterate over the handlers for this logger
        for handler in logger.handlers:
            # No reliable name; just use the id of the object
            self.saved_level[id(handler)] = handler.level
            # Override the handler's level as well
            handler.setLevel(self.level)

    def restore_logger(self, name, logger):
        # It's possible that some intervening code added one or more loggers...
        if name not in self.saved_level:
            return
        # Restore the level for the logger
        logger.setLevel(self.saved_level[name])
        if not self.process_handlers:
            return
        # Iterate over the handlers for this logger
        for handler in logger.handlers:
            # Reconstruct the key for this handler
            key = id(handler)
            # Again, we could have possibly added more handlers
            if key not in self.saved_level:
                continue
            # Restore the level for the handler
            handler.setLevel(self.saved_level[key])
Test Code
# Setup for basic logging
logging.basicConfig(level=logging.ERROR)

# Create some loggers - the root logger and a couple of others
lr = logging.getLogger()
l1 = logging.getLogger('L1')
l2 = logging.getLogger('L2')

# Won't see these messages due to the level
lr.info("lr - msg 1")
l1.info("l1 - msg 1")
l2.info("l2 - msg 1")

# Temporarily override the level
with override_logging_level(logging.INFO):
    # Will see these
    lr.info("lr - msg 2")
    l1.info("l1 - msg 2")
    l2.info("l2 - msg 2")

# Won't see these, again...
lr.info("lr - msg 3")
l1.info("l1 - msg 3")
l2.info("l2 - msg 3")
Results
$ python ./main.py
INFO:root:lr - msg 2
INFO:L1:l1 - msg 2
INFO:L2:l2 - msg 2
Notes
The code would need to be enhanced to support multithreading; for example, logging.Logger.manager.loggerDict is a shared variable that's guarded by locks in the logging code.
Using @cryptoplex's approach of using context managers, here's the official version from the logging cookbook:
import logging
import sys

class LoggingContext(object):
    def __init__(self, logger, level=None, handler=None, close=True):
        self.logger = logger
        self.level = level
        self.handler = handler
        self.close = close

    def __enter__(self):
        if self.level is not None:
            self.old_level = self.logger.level
            self.logger.setLevel(self.level)
        if self.handler:
            self.logger.addHandler(self.handler)

    def __exit__(self, et, ev, tb):
        if self.level is not None:
            self.logger.setLevel(self.old_level)
        if self.handler:
            self.logger.removeHandler(self.handler)
        if self.handler and self.close:
            self.handler.close()
        # implicit return of None => don't swallow exceptions
You could use dependency injection to pass the logger instance to the method you are testing. It is a bit more invasive, though, since you are changing your method slightly; however, it gives you more flexibility.
Add the logger parameter to your method signature, something along the lines of:
def my_method(your_other_params, logger):
    pass
In your unit test file:
import sys
import logging

if __name__ == "__main__":
    # define the logger you want to use:
    logging.basicConfig(stream=sys.stderr)
    logging.getLogger("MyTests.test_my_method").setLevel(logging.DEBUG)
...

def test_my_method(self):
    test_logger = logging.getLogger("MyTests.test_my_method")
    # pass your logger to your method
    my_method(your_normal_parameters, test_logger)
Python logging docs: https://docs.python.org/3/library/logging.html
I use this pattern to write all logs to a list. It ignores logs of level INFO and below.
import logging
from unittest import mock

logs = []

def my_log(logger_self, level, *args, **kwargs):
    if level > logging.INFO:
        logs.append((args, kwargs))

with mock.patch('logging.Logger._log', my_log):
    my_method_which_does_log()

Nested prefixes across loggers in Python

I'm currently working on a project where we use a single root logger. I understand from reading about logging that this is a Bad Thing, but I'm struggling to find a nice solution for a nice benefit it gives us.
Something we do (partly to get around not having different loggers, but partly because it gives us a nice feature) is to have a log_prefix decorator.
e.g.
@log_prefix("Download")
def download_file():
    logging.info("Downloading file..")
    connection = get_connection("127.0.0.1")
    # Do other stuff
    return file

@log_prefix("GetConnection")
def get_connection(url):
    logging.info("Making connection")
    # Do other stuff
    logging.info("Finished making connection")
    return connection
This gives us some nicely formatting logs that might look like:
Download:Downloading file..
Download:GetConnection:Making Connection
Download:GetConnection:Other stuff
Download:GetConnection:Finished making connection
Download:Other stuff
This also means that if we have
@log_prefix("StartTelnetSession")
def start_telnet_session():
    logging.info("Starting telnet session..")
    connection = get_connection("127.0.0.1")
We get the same prefix at the end:
StartTelnetSession:Starting telnet session..
StartTelnetSession:GetConnection:Making Connection
StartTelnetSession:GetConnection:Other stuff
StartTelnetSession:GetConnection:Finished making connection
This has proven to be quite useful for development and support.
I can see plenty of cases where just using a separate logger for the action would solve our problem, but I can also see cases where throwing away the nesting we have would make things worse.
Are there any patterns or common uses out there for nesting loggers? i.e.
logging.getLogger("Download").getLogger("MakingConnection")
Or am I missing something here?
You could use a LoggerAdapter to add extra contextual information:
utils_logging.py:
import functools

def log_prefix(logger, label, prefix=list()):
    # The shared mutable default is intentional: every decorated function
    # pushes onto and pops from the same prefix stack.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            prefix.append(label)
            logger.extra['prefix'] = ':'.join(prefix)
            result = func(*args, **kwargs)
            prefix.pop()
            logger.extra['prefix'] = ':'.join(prefix)
            return result
        return wrapper
    return decorator
foo.py:
import logging
import utils_logging as UL
import bar

logger = logging.LoggerAdapter(logging.getLogger(__name__), {'prefix': ''})

@UL.log_prefix(logger, "Download")
def download_file():
    logger.info("Downloading file..")
    connection = bar.get_connection("127.0.0.1")

if __name__ == '__main__':
    logging.basicConfig(
        level=logging.INFO,
        format='%(prefix)s %(name)s %(levelname)s %(message)s')
    download_file()
    bar.get_connection('foo')
bar.py:
import logging
import utils_logging as UL

logger = logging.LoggerAdapter(logging.getLogger(__name__), {'prefix': ''})

@UL.log_prefix(logger, "GetConnection")
def get_connection(url):
    logger.info("Making connection")
    logger.info("Finished making connection")
yields
Download __main__ INFO Downloading file..
Download:GetConnection bar INFO Making connection
Download:GetConnection bar INFO Finished making connection
GetConnection bar INFO Making connection
GetConnection bar INFO Finished making connection
Note: I don't think it is a good idea to have a new Logger instance for each prefix, because these instances are not garbage collected. All you need is for some prefix variable to take on a different value depending on context. You don't need a new Logger instance for that -- one LoggerAdapter will do.
Logger names are hierarchical.
logger = logging.getLogger("Download.MakingConnection")
This logger would inherit any configuration from logging.getLogger("Download").
Python 2.7 also added a convenience function for accessing descendants of an arbitrary logger.
logger = logging.getLogger("Download.MakingConnection")
parent_logger = logging.getLogger("Download")
child_logger = parent_logger.getChild("MakingConnection")
assert logger is child_logger
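As a minimal sketch of what that inheritance buys you (the handler and file name here are illustrative):

import logging

parent = logging.getLogger("Download")
parent.setLevel(logging.INFO)
parent.addHandler(logging.FileHandler("download.log"))

# The child needs no configuration of its own; its records propagate
# up to the "Download" logger's handler.
child = parent.getChild("MakingConnection")
child.info("Making connection")  # written to download.log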
Here is an alternative which uses a logging.Filter to modify record.msg. By modifying the message instead of adding a %(prefix)s field, the format does not need to change.
This will make it easier to mix loggers which make use of log_prefix and those that don't.
To get the prefix, the logger should be initialized with a call to add_prefix_filter:
logger = UL.add_prefix_filter(logging.getLogger(__name__))
To append labels to the prefix, the functions should be decorated with @log_prefix(label), as before.
utils_logging.py:
import functools
import logging

prefix = list()

def log_prefix(label):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            prefix.append(label)
            try:
                result = func(*args, **kwargs)
            finally:
                prefix.pop()
            return result
        return wrapper
    return decorator

class PrefixFilter(logging.Filter):
    def filter(self, record):
        if prefix:
            record.msg = '{}:{}'.format(':'.join(prefix), record.msg)
        return True

def add_prefix_filter(logger):
    logger.addFilter(PrefixFilter())
    return logger
main.py:
import logging
import bar
import utils_logging as UL

logger = UL.add_prefix_filter(logging.getLogger(__name__))

@UL.log_prefix("Download")
def download_file():
    logger.info("Downloading file..")
    connection = bar.get_connection("127.0.0.1")

if __name__ == '__main__':
    logging.basicConfig(
        level=logging.INFO,
        format='%(message)s')
    logger.info('Starting...')
    download_file()
    bar.get_connection('foo')
bar.py:
import logging
import utils_logging as UL

logger = UL.add_prefix_filter(logging.getLogger(__name__))

@UL.log_prefix("GetConnection")
def get_connection(url):
    logger.info("Making connection")
    logger.info("Finished making connection")
yields
Starting...
Download:Downloading file..
Download:GetConnection:Making connection
Download:GetConnection:Finished making connection
GetConnection:Making connection
GetConnection:Finished making connection

How can you make logging module functions use a different logger?

I can create a named child logger, so that all the logs output by that logger are marked with its name. I can use that logger exclusively in my function/class/whatever.
However, if that code calls out to functions in another module that makes use of logging using just the logging module functions (that proxy to the root logger), how can I ensure that those log messages go through the same logger (or are at least logged in the same way)?
For example:
main.py
import logging
import other

def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    do_stuff(logger)
other.py
import logging

def do_more_stuff():
    logging.info("doing other stuff")
Outputs:
$ python main.py
INFO:stuff:doing stuff
INFO:root:doing other stuff
I want to be able to cause both log lines to be marked with the name 'stuff', and I want to be able to do this only changing main.py.
How can I cause the logging calls in other.py to use a different logger without changing that module?
This is the solution I've come up with:
Using thread-local data to store the contextual information, and using a Filter on the root logger's handlers to add this information to LogRecords before they are emitted. (Attaching the filter to the handlers is shown after the snippet.)
import logging
import threading

context = threading.local()
context.name = None

class ContextFilter(logging.Filter):
    def filter(self, record):
        if context.name is not None:
            record.name = "%s.%s" % (context.name, record.name)
        return True
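For the filter to take effect it has to be attached to the root logger's handlers; a minimal sketch, assuming basicConfig has already installed one:

import logging

logging.basicConfig(level=logging.INFO)
for handler in logging.getLogger().handlers:
    handler.addFilter(ContextFilter())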
This is fine for me, because I'm using the logger name to indicate what task was being carried out when this message was logged.
I can then use context managers or decorators to make logging from a particular passage of code all appear as though it was logged from a particular child logger.
import contextlib
import functools

@contextlib.contextmanager
def logname(name):
    old_name = context.name
    if old_name is None:
        context.name = name
    else:
        context.name = "%s.%s" % (old_name, name)
    try:
        yield
    finally:
        context.name = old_name

def as_logname(name):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            with logname(name):
                return f(*args, **kwargs)
        return wrapper
    return decorator
So then, I can do:
with logname("stuff"):
    logging.info("I'm doing stuff!")
    do_more_stuff()
or:
@as_logname("things")
def do_things():
    logging.info("Starting to do things")
    do_more_stuff()
The key thing being that any logging that do_more_stuff() does will be logged as if it were logged with either a "stuff" or "things" child logger, without having to change do_more_stuff() at all.
This solution would have problems if you were going to have different handlers on different child loggers.
Use logging.setLoggerClass so that all loggers used by other modules use your logger subclass (emphasis mine):
Tells the logging system to use the class klass when instantiating a logger. The class should define __init__() such that only a name argument is required, and the __init__() should call Logger.__init__(). This function is typically called before any loggers are instantiated by applications which need to use custom logger behavior.
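A minimal sketch of that approach (the subclass and prefix are illustrative; note it only affects loggers created after the call, so it won't change the root logger that the module-level logging functions use):

import logging

class PrefixedLogger(logging.Logger):
    """Illustrative subclass that tags every record it produces."""
    def makeRecord(self, *args, **kwargs):
        record = super().makeRecord(*args, **kwargs)
        record.msg = "stuff: %s" % record.msg
        return record

logging.setLoggerClass(PrefixedLogger)

# Loggers created from now on use PrefixedLogger.
logging.basicConfig(level=logging.INFO)
logging.getLogger("other.module").info("hello")
# -> INFO:other.module:stuff: hello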
This is what handlers are for (logging.handlers, plus the basic handlers in the logging module itself). In addition to creating your logger, you create one or more handlers to send the logging information to various places and add them to the root logger. Most modules that do logging create a logger that they use for their own purposes but depend on the controlling script to create the handlers. Some frameworks decide to be super helpful and add handlers for you.
Read the logging docs; it's all there.
(edit)
logging.basicConfig() is a helper function that adds a single handler to the root logger. You can control the format string it uses with the format= parameter. If all you want to do is have all modules display "stuff", then use logging.basicConfig(level=logging.INFO, format="%(levelname)s:stuff:%(message)s").
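For instance, the explicit-handler equivalent of that basicConfig call looks roughly like this:

import logging

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s:stuff:%(message)s"))

root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(handler)

logging.info("doing other stuff")  # -> INFO:stuff:doing other stuff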
The logging.{info,warning,...} functions just call the corresponding methods on a Logger object called root (cf. the logging module source), so if you know the other module only calls the functions exported by the logging module, you can overwrite logging in the other module's namespace with your logger object:
import logging
import other

def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    # Overwrite other.logging with your just-generated logger object:
    other.logging = logger
    do_stuff(logger)

How should I verify a log message when testing Python code under nose?

I'm trying to write a simple unit test that will verify that, under a certain condition, a class in my application will log an error via the standard logging API. I can't work out what the cleanest way to test this situation is.
I know that nose already captures logging output through its logging plugin, but this seems to be intended as a reporting and debugging aid for failed tests.
The two ways to do this I can see are:
Mock out the logging module, either in a piecemeal way (mymodule.logging = mockloggingmodule) or with a proper mocking library.
Write or use an existing nose plugin to capture the output and verify it.
If I go for the former approach, I'd like to know the cleanest way to reset the global state to what it was before I mocked out the logging module.
Looking forward to your hints and tips on this one...
From Python 3.4 on, the standard unittest library offers a new test assertion context manager, assertLogs. From the docs:
with self.assertLogs('foo', level='INFO') as cm:
    logging.getLogger('foo').info('first message')
    logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
                             'ERROR:foo.bar:second message'])
UPDATE: No longer any need for the answer below. Use the built-in Python way instead!
This answer extends the work done in https://stackoverflow.com/a/1049375/1286628. The handler is largely the same (the constructor is more idiomatic, using super). Further, I add a demonstration of how to use the handler with the standard library's unittest.
import logging

class MockLoggingHandler(logging.Handler):
    """Mock logging handler to check for expected logs.

    Messages are available from an instance's ``messages`` dict, in order,
    indexed by a lowercase log level string (e.g., 'debug', 'info', etc.).
    """

    def __init__(self, *args, **kwargs):
        self.messages = {'debug': [], 'info': [], 'warning': [], 'error': [],
                         'critical': []}
        super(MockLoggingHandler, self).__init__(*args, **kwargs)

    def emit(self, record):
        "Store a message from ``record`` in the instance's ``messages`` dict."
        try:
            self.messages[record.levelname.lower()].append(record.getMessage())
        except Exception:
            self.handleError(record)

    def reset(self):
        self.acquire()
        try:
            for message_list in self.messages.values():
                message_list.clear()
        finally:
            self.release()
Then you can use the handler in a standard-library unittest.TestCase like so:
import unittest
import logging
import foo

class TestFoo(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        super(TestFoo, cls).setUpClass()
        # Assuming you follow the logging documentation's recommendation
        # and name your module's logger after the module's __name__,
        # the following getLogger call should fetch the same logger
        # you use in the foo module
        foo_log = logging.getLogger(foo.__name__)
        cls._foo_log_handler = MockLoggingHandler(level='DEBUG')
        foo_log.addHandler(cls._foo_log_handler)
        cls.foo_log_messages = cls._foo_log_handler.messages

    def setUp(self):
        super(TestFoo, self).setUp()
        self._foo_log_handler.reset()  # So each test is independent

    def test_foo_objects_fromble_nicely(self):
        # Do a bunch of frombling with foo objects
        # Now check that they've logged 5 frombling messages at the INFO level
        self.assertEqual(len(self.foo_log_messages['info']), 5)
        for info_message in self.foo_log_messages['info']:
            self.assertIn('fromble', info_message)
I used to mock loggers, but in this situation I found it best to use logging handlers, so I wrote this one based on the document suggested by jkp (now dead, but cached on the Internet Archive):
import logging

class MockLoggingHandler(logging.Handler):
    """Mock logging handler to check for expected logs."""

    def __init__(self, *args, **kwargs):
        self.reset()
        logging.Handler.__init__(self, *args, **kwargs)

    def emit(self, record):
        self.messages[record.levelname.lower()].append(record.getMessage())

    def reset(self):
        self.messages = {
            'debug': [],
            'info': [],
            'warning': [],
            'error': [],
            'critical': [],
        }
Simplest answer of all
Pytest has a built-in fixture called caplog. No setup needed.
def test_foo(foo, caplog, expected_msgs):
    foo.bar()
    assert [r.msg for r in caplog.records] == expected_msgs
I wish I'd known about caplog before I wasted 6 hours.
Warning, though - it resets, so you need to perform your SUT action in the same test where you make assertions about caplog.
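caplog can also change the level temporarily; caplog.at_level and caplog.set_level are part of pytest's documented API. A minimal sketch, assuming the code under test logs via the root logger (the foo fixture is the one defined below):

import logging

def test_foo_quietly(foo, caplog):
    # Only WARNING and above gets captured inside this block.
    with caplog.at_level(logging.WARNING):
        foo.bar()
    assert all(r.levelno >= logging.WARNING for r in caplog.records)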
Personally, I want my console output clean, so I like this to silence the log-to-stderr:
from logging import getLogger
from pytest import fixture

@fixture
def logger(caplog):
    logger = getLogger()
    _ = [logger.removeHandler(h) for h in logger.handlers if h != caplog.handler]  # type: ignore
    return logger

@fixture
def foo(logger):
    return Foo(logger=logger)

@fixture
def expected_msgs():
    ...  # return whatever it is you expect from the SUT

def test_foo(foo, caplog, expected_msgs):
    foo.bar()
    assert [r.msg for r in caplog.records] == expected_msgs
There is a lot to like about pytest fixtures if you're sick of horrible unittest code.
Brandon's answer:
pip install testfixtures
snippet:
import logging
from testfixtures import LogCapture

logger = logging.getLogger('')

with LogCapture() as logs:
    # my awesome code
    logger.error('My code logged an error')

assert 'My code logged an error' in str(logs)
Note: the above does not conflict with running nosetests and getting the output of the tool's logCapture plugin.
As a follow-up to Reef's answer, I took the liberty of coding up an example using pymox.
It introduces some extra helper functions that make it easier to stub functions and methods.
import logging

# Code under test:

class Server(object):
    def __init__(self):
        self._payload_count = 0

    def do_costly_work(self, payload):
        # resource intensive logic elided...
        pass

    def process(self, payload):
        self.do_costly_work(payload)
        self._payload_count += 1
        logging.info("processed payload: %s", payload)
        logging.debug("payloads served: %d", self._payload_count)

# Here are some helper functions that are useful
# if you do a lot of pymox-y work.

import mox
import inspect
import contextlib
import unittest

def stub_all(self, *targets):
    for target in targets:
        if inspect.isfunction(target):
            module = inspect.getmodule(target)
            self.StubOutWithMock(module, target.__name__)
        elif inspect.ismethod(target):
            self.StubOutWithMock(target.im_self or target.im_class, target.__name__)
        else:
            raise NotImplementedError("I don't know how to stub %s" % repr(target))

# Monkey-patch the Mox class with our helper 'StubAll' method.
# Yucky pymox naming convention observed.
setattr(mox.Mox, 'StubAll', stub_all)

@contextlib.contextmanager
def mocking():
    mocks = mox.Mox()
    try:
        yield mocks
    finally:
        mocks.UnsetStubs()  # Important!
        mocks.VerifyAll()

# The test case example:

class ServerTests(unittest.TestCase):
    def test_logging(self):
        s = Server()
        with mocking() as m:
            m.StubAll(s.do_costly_work, logging.info, logging.debug)
            # expectations
            s.do_costly_work(mox.IgnoreArg())  # don't care, we test logging here.
            logging.info("processed payload: %s", 'hello')
            logging.debug("payloads served: %d", 1)
            # verified execution
            m.ReplayAll()
            s.process('hello')

if __name__ == '__main__':
    unittest.main()
If you define a helper method like this:
import logging

def capture_logging():
    records = []

    class CaptureHandler(logging.Handler):
        def emit(self, record):
            records.append(record)

        def __enter__(self):
            logging.getLogger().addHandler(self)
            return records

        def __exit__(self, exc_type, exc_val, exc_tb):
            logging.getLogger().removeHandler(self)

    return CaptureHandler()
Then you can write test code like this:
with capture_logging() as log:
    ...  # trigger some logger warnings
assert len(log) == ...
assert log[0].getMessage() == ...
You should use mocking, as someday you might want to change your logger to, say, a database one. You won't be happy if it tries to connect to the database during nosetests.
Mocking will continue to work even if standard output is suppressed.
I have used pyMox's stubs. Remember to unset the stubs after the test.
The ExpectLog class implemented in tornado is a great utility:
with ExpectLog('channel', 'message regex'):
    do_it()
http://tornado.readthedocs.org/en/latest/_modules/tornado/testing.html#ExpectLog
Keying off @Reef's answer, I tried the code below. It works well for me both for Python 2.7 (if you install mock) and for Python 3.4.
"""
Demo using a mock to test logging output.
"""

import logging

try:
    import unittest
except ImportError:
    import unittest2 as unittest

try:
    # Python >= 3.3
    from unittest.mock import Mock, patch
except ImportError:
    from mock import Mock, patch

logging.basicConfig()
LOG = logging.getLogger("(logger under test)")

class TestLoggingOutput(unittest.TestCase):
    """Demo using Mock to test logging INPUT. That is, it tests what
    parameters were used to invoke the logging method, while still
    allowing the actual logger to execute normally.
    """

    def test_logger_log(self):
        """Check for Logger.log call."""
        original_logger = LOG
        patched_log = patch('__main__.LOG.log',
                            side_effect=original_logger.log).start()
        log_msg = 'My log msg.'
        level = logging.ERROR
        LOG.log(level, log_msg)

        # call_args is a tuple of positional and kwargs of the last call
        # to the mocked function.
        # Also consider using call_args_list
        # See: https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.call_args
        expected = (level, log_msg)
        self.assertEqual(expected, patched_log.call_args[0])

if __name__ == '__main__':
    unittest.main()
