Change logfile handle in decorator for unittest - python

I need some advice for a problem:
I am trying to run unit tests that are started by a campaign runner.
Runner: HTMLTestRunner
The tests are arranged in a package (Testcases). Each test case is a Python module with a class in it. So: Testmodule -> TestClass -> test_funct
Now, to extend logging (for different reasons, like only saving logs for specific errors and starting/stopping a screen recording), I wrote a decorator that is supposed to manage my logfiles. That means:
When a decorated test case starts, the logfile handler should be set to a temporary file.
While the test case runs, the logfile should be filled with ALL log messages from the project.
When the test case finishes, the logfile should be copied to another directory (with a timestamp appended to the name) and the old logfile deleted.
So far so good; all of this basically works, but the content of the logfile is not correct. I have tried so many things. Sometimes not all log messages are captured, sometimes the number of log messages is doubled, and sometimes the messages only start in the middle of the test case. Can you help me with this issue?
Initialization of logger in My Campaign:
def main():
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    formatter = logging.Formatter(my_format_string)
    console.setFormatter(formatter)
    logging.getLogger().addHandler(console)
    logger = logging.getLogger("campaign")
    logger.setLevel(logging.INFO)
Decorator in another module:
def extended_logging(func):
    @wraps(func)
    def wrapped_func(*args, **kwargs):
        root = logging.getLogger("campaign")
        for hndlr in list(root.handlers):  # copy the list before removing
            root.removeHandler(hndlr)
        console = logging.StreamHandler()
        console.setLevel(logging.INFO)
        formatter = logging.Formatter(my_format_string)
        console.setFormatter(formatter)
        root.addHandler(console)
        filehandler = logging.FileHandler(config.LOG_FILEPATH, mode='w+')
        formatter = logging.Formatter(my_format_string)
        filehandler.setFormatter(formatter)
        root.addHandler(filehandler)
        logger.info("This should actually work")
        try:
            time.sleep(5)
            func(*args, **kwargs)
        except Exception as error:  # *** This is just dummy error handling for the test
            logger.info("An exception occurred")
            logger.info("This testcase is false")
            raise
        finally:
            # *** Copies and renames the temp logfile and removes it after
            save_logs(None, None, "module", "1")
    return wrapped_func
The definition of a TC looks like this:
logger = logging.getLogger(f"campaign.{__name__}")
class TestClass(unittest.TestCase):
    @extended_logging
    def test_mylittlemoduletest(self):
        logger.info("Doing my testcase now")
        # here something will be done

Related

How to get python unittest to show log messages only on failed tests

Issue
I've been trying to use the unittest --buffer flag to suppress logs for successful tests and show them for failing tests. But it seems to show the log output regardless. Is this a quirk of the logging module? How can I get the log output only on failing tests? Is there a special config on the logger that is required? Other questions and answers I've found have taken a brute force approach to disable all logging during tests.
Sample Code
import logging
import unittest
import sys

logger = logging.getLogger('abc')
logging.basicConfig(
    format='%(asctime)s %(module)s %(levelname)s: %(message)s',
    level=logging.INFO,
    stream=sys.stdout)

class TestABC(unittest.TestCase):
    def test_abc_pass(self):
        logger.info('log abc in pass')
        print('print abc in pass')
        self.assertTrue(True)

    def test_abc_fail(self):
        logger.info('log abc in fail')
        print('print abc in fail')
        self.assertTrue(False)
Test Output
$ python -m unittest --buffer
2021-09-15 17:38:48,462 test INFO: log abc in fail
F
Stdout:
print abc in fail
2021-09-15 17:38:48,463 test INFO: log abc in pass
.
======================================================================
FAIL: test_abc_fail (test.TestABC)
----------------------------------------------------------------------
Traceback (most recent call last):
File ".../test.py", line 22, in test_abc_fail
self.assertTrue(False)
AssertionError: False is not true
Stdout:
print abc in fail
----------------------------------------------------------------------
Ran 2 tests in 3.401s
FAILED (failures=1)
So the buffer does successfully suppress the output from the print statement in the passing test. But it doesn't suppress the log output.
A Solution for the Sample Code
Just before the test runs we need to update the stream on the log handler to point to the buffer unittest has set up for capturing the test output.
import logging
import unittest
import sys

logger = logging.getLogger('abc')
logging.basicConfig(
    format='%(asctime)s %(module)s %(levelname)s: %(message)s',
    level=logging.INFO,
    stream=sys.stdout)

class LoggerRedirector:
    # Keep a reference to the real streams so we can revert
    _real_stdout = sys.stdout
    _real_stderr = sys.stderr

    @staticmethod
    def all_loggers():
        loggers = [logging.getLogger()]
        loggers += [logging.getLogger(name) for name in logging.root.manager.loggerDict]
        return loggers

    @classmethod
    def redirect_loggers(cls, fake_stdout=None, fake_stderr=None):
        if ((not fake_stdout or fake_stdout is cls._real_stdout)
                and (not fake_stderr or fake_stderr is cls._real_stderr)):
            return
        for logger in cls.all_loggers():
            for handler in logger.handlers:
                if hasattr(handler, 'stream'):
                    if handler.stream is cls._real_stdout:
                        handler.setStream(fake_stdout)
                    if handler.stream is cls._real_stderr:
                        handler.setStream(fake_stderr)

    @classmethod
    def reset_loggers(cls, fake_stdout=None, fake_stderr=None):
        if ((not fake_stdout or fake_stdout is cls._real_stdout)
                and (not fake_stderr or fake_stderr is cls._real_stderr)):
            return
        for logger in cls.all_loggers():
            for handler in logger.handlers:
                if hasattr(handler, 'stream'):
                    if handler.stream is fake_stdout:
                        handler.setStream(cls._real_stdout)
                    if handler.stream is fake_stderr:
                        handler.setStream(cls._real_stderr)

class TestABC(unittest.TestCase):
    def setUp(self):
        # unittest has reassigned sys.stdout and sys.stderr by this point
        LoggerRedirector.redirect_loggers(fake_stdout=sys.stdout, fake_stderr=sys.stderr)

    def tearDown(self):
        LoggerRedirector.reset_loggers(fake_stdout=sys.stdout, fake_stderr=sys.stderr)
        # unittest will revert sys.stdout and sys.stderr after this

    def test_abc_pass(self):
        logger.info('log abc in pass')
        print('print abc in pass')
        self.assertTrue(True)

    def test_abc_fail(self):
        logger.info('log abc in fail')
        print('print abc in fail')
        self.assertTrue(False)
The How and Why
The issue is a side effect from both how unittest is capturing the stdout and stderr for the test and how logging is usually set up. Usually logging is set up very early in the program execution and this means the log handlers will store a reference to sys.stdout and sys.stderr in their instances (code link). However, just before the test runs, unittest creates a io.StringIO() buffer for both streams and reassigns sys.stdout and sys.stderr to the new buffers (code link).
So right before the test runs, in order to get unittest to capture the log output, we need to tell the log handlers to point their streams to the buffer that unittest has set up. After the test has finished, the streams are reverted back to normal. However, unittest creates a new buffer for each test so we need to update the log handlers both before and after each test.
Since the log handlers are pointed to the buffer that unittest set up, if there was a failed test, then all the logs for that test will be displayed when using the --buffer option.
The LoggerRedirector class in the solution above just offers convenience methods to reassign any handlers that might be pointed at sys.stdout or sys.stderr to the new buffers that unittest has set up, and then an easy way to revert them. Since, by the time setUp() runs, unittest has already reassigned sys.stdout and sys.stderr, we use those names to reference the new buffers unittest has set up.
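The core mechanism can be reduced to a minimal, self-contained sketch. The StringIO objects stand in for the real stdout and unittest's capture buffer, and the logger name is invented for the example; `setStream()` requires Python 3.7+:

```python
import io
import logging

# One handler writing to what stands in for the "real" stdout
real_out = io.StringIO()
handler = logging.StreamHandler(real_out)
logger = logging.getLogger("setstream_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# unittest swaps sys.stdout for a fresh io.StringIO() before each test;
# setStream() re-points the handler at that buffer and returns the old stream
fake_out = io.StringIO()
old = handler.setStream(fake_out)
logger.info("captured by the buffer")

handler.setStream(old)              # revert after the test
logger.info("back on the real stream")
```

The returned previous stream makes the revert step in tearDown trivial, which is exactly what LoggerRedirector automates across all loggers.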

Python logger config is not impacting on the imported module

I have created a logger.py for formatting my Python logger, which has two different setups for logging error and info messages to two different files.
import logging
import os

class Logger(logging.Filter):
    def __init__(self):
        try:
            logpath = os.path.dirname(os.getcwd()) + '/'
            self.info_logger = self.setup_logger('Normal logger', 'Infologger111.log')
            self.error_logger = self.setup_logger('Error logger', 'ErrorLogs2222.log')
        except Exception as e:
            print('[Logger __init__] Exception: ', str(e))
            raise

    def setup_logger(self, name, log_file, level=logging.DEBUG):
        try:
            handler = logging.FileHandler(log_file)
            handler.setFormatter(logging.Formatter(
                '%(asctime)s|%(levelname)s|p%(process)s|[%(pathname)s|%(name)s|%(funcName)s|%(lineno)d]|%(message)s',
                datefmt="%Y-%m-%d %H:%M:%S"))
            logger = logging.getLogger(name)
            logger.setLevel(level)
            logger.addHandler(handler)
            return logger
        except Exception as e:
            print('[setup_logger] Exception: ', str(e))
            raise
And I have an API_connector class in an api_function.py file:
import logging

logger = logging.Logger(__name__)

class API_connector(object):
    def __init__(self, data={}):
        logger.info("**********test log in API_connector")
Now I import both the Logger class and the API_connector class into my Python code, as below:
from logger import Logger
import api_function

class LogTest(Logger):
    def __init__(self):
        super().__init__()

    def test_func(self):
        print('inside test func')
        self.info_logger.info('test_func log message')
        self.error_logger.error('test_func error message')
        api_function.API_connector()

if __name__ == '__main__':
    ob = LogTest()
    ob.info_logger.info('main start')
    ob.test_func()
So the issue: the two files get created and my main script's messages are logged to them, but the logging call I put in the api_function file is not written the way my main script's calls are.
I want the same logger configuration to take effect in the api_function file simply by doing:
logger = logging.Logger(__name__)
How can I make its messages appear in those two configured files, with all the formatting I have configured?
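One likely cause, sketched below: `logging.Logger(__name__)` constructs a standalone logger that is never registered in the logging hierarchy, so handlers configured elsewhere never see its records, whereas `logging.getLogger(__name__)` looks the logger up by its dotted name and wires it into the hierarchy. A minimal illustration (the `myapp` names are invented for the example; the imported module's logger name must be a descendant of the configured one):

```python
import io
import logging

buf = io.StringIO()                              # stands in for a FileHandler's file
handler = logging.StreamHandler(buf)

parent = logging.getLogger("myapp")              # configure a common ancestor once
parent.addHandler(handler)
parent.setLevel(logging.INFO)

# What the imported module should do instead of logging.Logger(__name__):
child = logging.getLogger("myapp.api_function")
child.info("hello from the module")              # propagates up to parent's handler
```

With `logging.Logger("myapp.api_function")` instead of `getLogger`, the record would never reach `parent`'s handler, because a directly constructed Logger has no place in the hierarchy.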

unit test python code that has configparser reading from config file

I'm new to Python unit testing. I have learned and written sample unit tests where a method accepts input and returns output.
But for the code below, I have some questions.
How do I mock the configparser in the __init__ method in a unit test? The path /config/program.cfg exists on the production server but not in my dev directory; the program.cfg file lives at a different location in the code directory. Is there a way to handle that in a unit test?
How do I pass or skip a hardcoded path in a unit test, e.g. /var/log/info_server.log?
If possible, can you please show how you would write a unit test for the code below using the pytest module? That will help me understand the flow so I can do the same for the rest of the code.
def __init__(self):
    self.setup_logger()
    # Read config parameters
    config = configparser.ConfigParser()
    config.read("/config/program.cfg")
    self.host_ip = config.get('default', 'HostIP')
    self.redis_ip = config.get('default', 'RedisIP')
    self.redis_port = config.get('default', 'RedisPort')
    self.info_port = config.get('default', 'InfoPort')
    self.sqlite_db_file = config.get('default', 'SQLiteDbFile')
    self.connect_redis()

def setup_logger(self):
    # Initialize logger
    self.logger = logging.getLogger(__name__)
    self.logger.setLevel(logging.INFO)
    # Create a file handler
    handler = logging.FileHandler('/var/log/info_server.log')
    handler.setLevel(logging.INFO)
    # Create a logging format
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    # Add the handlers to the logger
    self.logger.addHandler(handler)

def connect_redis(self):
    # Start up a Redis instance
    self.logger.info("Start Redis instance")
    self.ad_info = redis.StrictRedis(host=self.redis_ip, port=self.redis_port, db=0)
    if self.ad_info is None:
        self.logger.error("Failed to start Redis instance")
    else:
        self.logger.info("Started Redis instance")
Sure, we could mock many things, but simply refactoring the class under test to take the config and log paths as parameters will make life much easier:
def __init__(self, cfg='/config/program.cfg', log='/var/log/info_server.log'):
On a development server where /config/program.cfg is not present, you could then just call:
TheClass(cfg='~/dev.cfg', log='/tmp/dev.log')
self.server_obj.connect_redis() log_mock.logger.info.assert_called_with('Started Redis instance') not working
import logging
logging.getLogger
# <function getLogger at 0x7f3d2ed427b8>
logger = logging.getLogger()
type(logger)
# <class 'logging.RootLogger'>
logger.info
# <bound method Logger.info of <RootLogger root (WARNING)>>
The patch should be something like this:
with mock.patch.object(logging.RootLogger, 'info') as mock_info:
    server_obj = Server(cfg='sth', log='sth')
    mock_info.assert_called_with('xxx')
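A runnable variant of that pattern; the `Server` class here is a hypothetical stand-in for the class under test, and patching `logging.Logger` (rather than `logging.RootLogger`) also covers named loggers created via `getLogger(__name__)`:

```python
import logging
from unittest import mock

class Server:
    """Hypothetical stand-in for the class under test."""
    def __init__(self, cfg=None, log=None):
        self.logger = logging.getLogger(__name__)

    def connect_redis(self):
        self.logger.info("Started Redis instance")

# Patching the method on the Logger class covers EVERY logger instance --
# this is why asserting on a separately created mock logger never matches.
with mock.patch.object(logging.Logger, 'info') as mock_info:
    Server().connect_redis()

mock_info.assert_called_with("Started Redis instance")
```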

Change log-level via mocking

I want to change the log-level temporarily.
My current strategy is to use mocking.
with mock.patch(...):
    my_method_which_does_log()
All logging.info() calls inside the method should get ignored and not logged to the console.
How to implement the ... to make logs of level INFO get ignored?
The code is single-process and single-thread and executed during testing only.
I want to change the log-level temporarily.
A way to do this without mocking is logging.disable
class TestSomething(unittest.TestCase):
    def setUp(self):
        logging.disable(logging.WARNING)

    def tearDown(self):
        logging.disable(logging.NOTSET)
This example would only show messages above level WARNING (i.e. ERROR and CRITICAL) for each test in the TestSomething class, since logging.disable suppresses calls of the given severity and below. (You can call disable at the start and end of each test as needed; this seems a bit cleaner.)
To unset this temporary throttling, call logging.disable(logging.NOTSET):
If logging.disable(logging.NOTSET) is called, it effectively removes this overriding level, so that logging output again depends on the effective levels of individual loggers.
I don't think mocking is going to do what you want. The loggers are presumably already instantiated in this scenario, and level is an instance variable for each of the loggers (and also any of the handlers that each logger has).
You can create a custom context manager. That would look something like this:
Context Manager
import logging

class override_logging_level():
    """A context manager for temporarily setting the logging level"""
    def __init__(self, level, process_handlers=True):
        self.saved_level = {}
        self.level = level
        self.process_handlers = process_handlers

    def __enter__(self):
        # Save the root logger
        self.save_logger('', logging.getLogger())
        # Iterate over the other loggers (getLogger() materializes any
        # PlaceHolder entries in loggerDict, which lack setLevel)
        for name in logging.Logger.manager.loggerDict:
            self.save_logger(name, logging.getLogger(name))

    def __exit__(self, exception_type, exception_value, traceback):
        # Restore the root logger
        self.restore_logger('', logging.getLogger())
        # Iterate over the loggers
        for name in logging.Logger.manager.loggerDict:
            self.restore_logger(name, logging.getLogger(name))

    def save_logger(self, name, logger):
        # Save off the level
        self.saved_level[name] = logger.level
        # Override the level
        logger.setLevel(self.level)
        if not self.process_handlers:
            return
        # Iterate over the handlers for this logger
        for handler in logger.handlers:
            # No reliable name. Just use the id of the object
            self.saved_level[id(handler)] = handler.level

    def restore_logger(self, name, logger):
        # It's possible that some intervening code added one or more loggers...
        if name not in self.saved_level:
            return
        # Restore the level for the logger
        logger.setLevel(self.saved_level[name])
        if not self.process_handlers:
            return
        # Iterate over the handlers for this logger
        for handler in logger.handlers:
            # Reconstruct the key for this handler
            key = id(handler)
            # Again, we could have possibly added more handlers
            if key not in self.saved_level:
                continue
            # Restore the level for the handler
            handler.setLevel(self.saved_level[key])
Test Code
# Setup for basic logging
logging.basicConfig(level=logging.ERROR)
# Create some loggers - the root logger and a couple others
lr = logging.getLogger()
l1 = logging.getLogger('L1')
l2 = logging.getLogger('L2')
# Won't see this message due to the level
lr.info("lr - msg 1")
l1.info("l1 - msg 1")
l2.info("l2 - msg 1")
# Temporarily override the level
with override_logging_level(logging.INFO):
# Will see
lr.info("lr - msg 2")
l1.info("l1 - msg 2")
l2.info("l2 - msg 2")
# Won't see, again...
lr.info("lr - msg 3")
l1.info("l1 - msg 3")
l2.info("l2 - msg 3")
Results
$ python ./main.py
INFO:root:lr - msg 2
INFO:L1:l1 - msg 2
INFO:L2:l2 - msg 2
Notes
The code would need to be enhanced to support multithreading; for example, logging.Logger.manager.loggerDict is a shared variable that's guarded by locks in the logging code.
Using @cryptoplex's approach of using context managers, here's the official version from the logging cookbook:
import logging
import sys

class LoggingContext(object):
    def __init__(self, logger, level=None, handler=None, close=True):
        self.logger = logger
        self.level = level
        self.handler = handler
        self.close = close

    def __enter__(self):
        if self.level is not None:
            self.old_level = self.logger.level
            self.logger.setLevel(self.level)
        if self.handler:
            self.logger.addHandler(self.handler)

    def __exit__(self, et, ev, tb):
        if self.level is not None:
            self.logger.setLevel(self.old_level)
        if self.handler:
            self.logger.removeHandler(self.handler)
        if self.handler and self.close:
            self.handler.close()
        # implicit return of None => don't swallow exceptions
You could use dependency injection to pass the logger instance to the method you are testing. It is a bit more invasive though since you are changing your method a little, however it gives you more flexibility.
Add the logger parameter to your method signature, something along the lines of:
def my_method(your_other_params, logger):
    pass
In your unit test file:
if __name__ == "__main__":
    # define the logger you want to use:
    logging.basicConfig(stream=sys.stderr)
    logging.getLogger("MyTests.test_my_method").setLevel(logging.DEBUG)
    ...

def test_my_method(self):
    test_logger = logging.getLogger("MyTests.test_my_method")
    # pass your logger to your method
    my_method(your_normal_parameters, test_logger)
python logger docs: https://docs.python.org/3/library/logging.html
I use this pattern to write all logs to a list. It ignores logs of level INFO and below.
import logging
from unittest import mock

logs = []

def my_log(logger_self, level, *args, **kwargs):
    if level > logging.INFO:
        logs.append((args, kwargs))

with mock.patch('logging.Logger._log', my_log):
    my_method_which_does_log()

Python Logging writing to terminal and log file

I have two files: script.py and functions.py. In functions.py, I have logger setup, and a set of functions (made up one below):
class ecosystem():
    def __init__(self, environment, mode):
        self.logger = logging.getLogger(__name__)
        if os.path.exists('log.log'):
            os.remove('log.log')
        handler = logging.FileHandler('log.log')
        if mode.lower() == 'info':
            handler.setLevel(logging.INFO)
            self.logger.setLevel(logging.INFO)
        elif mode.lower() == 'warning':
            handler.setLevel(logging.WARNING)
            self.logger.setLevel(logging.WARNING)
        elif mode.lower() == 'error':
            handler.setLevel(logging.ERROR)
            self.logger.setLevel(logging.ERROR)
        elif mode.lower() == 'critical':
            handler.setLevel(logging.CRITICAL)
            self.logger.setLevel(logging.CRITICAL)
        else:
            handler.setLevel(logging.DEBUG)
            self.logger.setLevel(logging.DEBUG)
        # Logging file format
        formatter = logging.Formatter(' %(levelname)s | %(asctime)s | %(message)s \n')
        handler.setFormatter(formatter)
        # Add the handler to the logger
        self.logger.addHandler(handler)
        self.logger.info('Logging starts here')

    def my_function(self):
        self.logger.debug('test log')
        return True
I'm trying to call ecosystem.my_function from script.py, but when I do, the logger.debug message shows up in both the terminal window AND log.log. Any ideas why this might be happening?
If it helps, I also import other modules into functions.py, if those modules import logging as well, could that cause issues?
It looks like you're initializing the logger with the log.log file inside the __init__ method of the ecosystem class. This means that any code that creates an ecosystem object will initialize the logger. Somewhere in your code, in one of the files, you are creating that object, and hence the logger is initialized and writes to the file.
Note that you do not need to call __init__ yourself, as it is called on object creation, i.e. after this call
my_obj = ecosystem()
log files will be written.
You're asking why both stderr and the file are used after your new file handler is attached. This is because of the propagate attribute. By default propagate is True, which means your log record bubbles up the hierarchy of loggers and each one continues handling it. Since the default root logger is at the top of the hierarchy, it handles your record too. Setting propagate to False fixes this:
self.logger.propagate = False
You might want to read up a bit on logging. Also, if you want to keep your sanity regarding logging, check how you can use a dict to configure it.
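As a sketch of that dict-based configuration (the handler, formatter, and logger names here are illustrative, not from the question; `propagate: False` is what keeps the records out of the terminal):

```python
import logging
import logging.config

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "plain": {"format": "%(levelname)s | %(asctime)s | %(message)s"},
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "filename": "log.log",
            "formatter": "plain",
            "level": "DEBUG",
        },
    },
    "loggers": {
        "functions": {                 # the module's __name__ in the question
            "handlers": ["file"],
            "level": "DEBUG",
            "propagate": False,        # don't bubble up to the root/terminal
        },
    },
}

logging.config.dictConfig(LOGGING)
logging.getLogger("functions").debug("test log")
```

This keeps all logging wiring in one declarative place instead of inside a class's __init__, so importing or instantiating application code never reconfigures logging as a side effect.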
