I'd like to test logger messages without printing them to the screen in my unittests. Given this code:
import logging
logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger('test')
logger.warning('this is a warning')
# How do I see that there was a warning?
How do I look at the log records in the logger to see that there was a warning? I cannot find an iterator in Logger that would do the job.
You can use the TestCase.assertLogs() context manager for this. The documentation provides a pretty good example of what can be done with it:
with self.assertLogs('foo', level='INFO') as cm:
    logging.getLogger('foo').info('first message')
    logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
                             'ERROR:foo.bar:second message'])
Inside the context manager, you can access cm.records for a list of LogRecord instances, or cm.output for a list of formatted messages.
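Applied to the snippet from the question, a minimal self-contained test could look like this (the class and method names are just placeholders). assertLogs temporarily swaps in its own handler and turns off propagation for that logger, so the captured message should not also be printed to the console while the block runs:
import logging
import unittest

class TestLogWarning(unittest.TestCase):
    def test_warning_is_logged(self):
        logger = logging.getLogger('test')
        with self.assertLogs('test', level='WARNING') as cm:
            logger.warning('this is a warning')
        # One record was captured; inspect it via cm.records or cm.output.
        self.assertEqual(len(cm.records), 1)
        self.assertEqual(cm.records[0].levelname, 'WARNING')
        self.assertEqual(cm.output, ['WARNING:test:this is a warning'])

if __name__ == '__main__':
    unittest.main()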
The FastAPI library that I use for an API I have written writes many logging.INFO-level messages to the console, which I would like either to redirect to a file-based log or, ideally, to send to both console and file.
So I've tried to implement this Stack Overflow answer ("Easy-peasy with Python 3.3 and above"), but the log file it creates ("api_screen.log") is always empty:
# -------------------------- logging ----------------------------
import logging

logging_file = "api_screen.log"
logging_level = logging.INFO
logging_format = ' %(message)s'
logging_handlers = [logging.FileHandler(logging_file), logging.StreamHandler()]
logging.basicConfig(level = logging_level, format = logging_format, handlers = logging_handlers)
logging.info("------logging test------")
My own "------logging test------" message does appear on the console among the other FastAPI log lines, and the file api_screen.log is created, but it stays at size zero.
So what do I need to do to also get the file logging working?
There are multiple issues here. First and most importantly: basicConfig() does nothing if the root logger already has handlers configured, and FastAPI has already configured it, so the handlers you are creating are never attached. When you call logging.info() you are sending a record to the root logger, and it is printed only because FastAPI has added a handler to that logger. You are also not setting the level on your handlers. Try this code instead of what you currently have:
import logging

logging_file = "api_screen.log"
logging_level = logging.INFO

# Create the handlers explicitly and set their levels.
logging_fh = logging.FileHandler(logging_file)
logging_sh = logging.StreamHandler()
logging_fh.setLevel(logging_level)
logging_sh.setLevel(logging_level)

# Attach them to the root logger directly instead of going through basicConfig().
root_logger = logging.getLogger()
root_logger.addHandler(logging_fh)
root_logger.addHandler(logging_sh)

logging.info('--test--')
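Alternatively, if you are on Python 3.8 or newer, basicConfig() accepts force=True, which removes any handlers already attached to the root logger before applying your own configuration. A minimal sketch, reusing the names from the question:
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(message)s',
    handlers=[logging.FileHandler("api_screen.log"), logging.StreamHandler()],
    force=True,  # Python 3.8+: remove existing root handlers first
)
logging.info("------logging test------")
Depending on how FastAPI/uvicorn attached their handlers, this may also change how their own messages are formatted.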
I am having trouble finding the bug in my code when I try to log using the logging library. Even though I set the handler's minimum log level to DEBUG, the log file the logger creates only starts at WARNING.
import logging

my_logger = logging.getLogger("our logger")

# Remove all handlers associated with my_logger object.
# This is only done so that we can run this block mult. times.
for handler in my_logger.handlers[:]:
    my_logger.removeHandler(handler)

# let's create a handler and set the logging level
my_handler_for_file = logging.FileHandler("custom logging.log", mode='w')
my_handler_for_file.setLevel(logging.DEBUG)
# try to set it to logging.CRITICAL and it will only log CRITICAL,
# so it does work but neither for DEBUG nor INFO!

# add the handler to our custom logger
my_logger.addHandler(my_handler_for_file)

# let's create some logs
my_logger.debug('This is a debug message')
my_logger.info('This is an info message')
my_logger.warning('This is a warning message')
my_logger.error('This is an error message')
my_logger.critical('This is a critical message')
This is the output in the log file:
This is a warning message
This is an error message
This is a critical message
And this is what I expect it to be:
This is a debug message
This is an info message
This is a warning message
This is an error message
This is a critical message
Does anyone have an idea what is wrong here?
You need to set the level on the logger too:
my_logger.setLevel(logging.DEBUG)
This is because both loggers and handlers filter messages based on their level. Your handler is set to DEBUG, but the logger itself still has the default effective level of WARNING (inherited from the root logger), so DEBUG and INFO records are dropped before they ever reach the handler.
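For completeness, here is the relevant part of the snippet with the logger level set (file name kept from the question):
import logging

my_logger = logging.getLogger("our logger")
my_logger.setLevel(logging.DEBUG)  # let DEBUG and INFO records pass the logger's own filter

my_handler_for_file = logging.FileHandler("custom logging.log", mode='w')
my_handler_for_file.setLevel(logging.DEBUG)
my_logger.addHandler(my_handler_for_file)

my_logger.debug('This is a debug message')  # now reaches the file handler
my_logger.info('This is an info message')   # now reaches the file handler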
I'm using Python's logging module. I've initialized it as:
import logging
logger = logging.getLogger(__name__)
in every one of my modules. Then, in the main file:
logging.basicConfig(level=logging.INFO,filename="log.txt")
Now, in the app I'm also using WSGIServer from gevent. Its initializer takes a log argument where I can pass a logger instance. Since this is an HTTP server, it's very verbose.
I would like to log all of my app's regular logs to "log.txt" and WSGIServer's logs to "http-log.txt".
I tried this:
logging.basicConfig(level=logging.INFO,filename="log.txt")
logger = logging.getLogger(__name__)
httpLogger = logging.getLogger("HTTP")
httpLogger.addHandler(logging.FileHandler("http-log.txt"))
httpLogger.addFilter(logging.Filter("HTTP"))
http_server = WSGIServer(('0.0.0.0', int(config['ApiPort'])), app, log=httpLogger)
This logs all HTTP messages into http-log.txt, but also to the main logger.
How can I send all but HTTP messages to the default logger (log.txt), and HTTP messages only to http-log.txt?
EDIT: Since people are quickly jumping to point out that Logging to two files with different settings already has an answer: please read the linked answer and you'll see they don't use basicConfig but rather initialize each logger separately. That is not how I'm using the logging module.
Add the following line to disable propagation:
httpLogger.propagate = False
Then it will no longer propagate messages to the handlers of its ancestor loggers, which include the root logger for which you have set up the general log file (log.txt).
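Applied to the snippet from the question, the HTTP logger setup would then look roughly like this (app and config come from the rest of the application, as in the question; the import path assumes gevent.pywsgi):
import logging
from gevent.pywsgi import WSGIServer

logging.basicConfig(level=logging.INFO, filename="log.txt")
logger = logging.getLogger(__name__)

httpLogger = logging.getLogger("HTTP")
httpLogger.addHandler(logging.FileHandler("http-log.txt"))
httpLogger.propagate = False  # keep HTTP records out of the root logger's log.txt

# `app` and `config` come from the rest of the application, as in the question.
http_server = WSGIServer(('0.0.0.0', int(config['ApiPort'])), app, log=httpLogger)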
I'm trying to replace an old method of logging information with standard Python logging to a file. The application currently has a main log file that captures INFO and DEBUG messages, so I would like the legacy messages at a lower level that isn't captured by the main log.
App structure:
- mysite
- legacy
-- items
--- item1.py
-- __init__.py
-- engine.py
Within item1.py and engine.py are calls to an old debug() function, which I'd like to have logged to legacy.log but not appear in the mysite.log file.
Ideally the way this works is to create a wrapper with a debug function that does the logging at the new level, and I've read that this requires extending logging.Logger.
So in legacy/__init__.py I've written:
import logging

LEGACY_DEBUG_LVL = 5

class LegacyLogger(logging.Logger):
    """
    Extend the Logger to introduce the new legacy logging.
    """
    def legacydebug(self, msg, *args, **kwargs):
        """
        Log messages from legacy provided they are strings.
        @param msg: message to log
        @type msg:
        """
        if isinstance(msg, str):
            self._log(LEGACY_DEBUG_LVL, msg, args)

logging.Logger.legacydebug = LegacyLogger.legacydebug

logger = logging.getLogger('legacy')
logger.setLevel(LEGACY_DEBUG_LVL)
logger.addHandler(logging.FileHandler('legacy.log'))
logging.addLevelName(LEGACY_DEBUG_LVL, "legacydebug")
And from engine.py and item1.py I can just do:
from . import logger
debug = logger.legacydebug
At the moment I'm seeing messages logged to both logs. Is this the correct approach for what I want to achieve? I've got a talent for over-complicating things and missing the simple stuff!
Edit:
Logging in the main application settings is set up as follows:
# settings.py
import logging

logging.captureWarnings(True)

logger = logging.getLogger()
logger.addHandler(logging.NullHandler())
logger.addHandler(logging.FileHandler('mysite.log'))

if DEBUG:  # DEBUG is defined elsewhere in settings.py
    # If we're running in debug mode, write logs to stdout as well:
    logger.addHandler(logging.StreamHandler())
    logger.setLevel(logging.DEBUG)
else:
    logger.setLevel(logging.INFO)
When you use multiple loggers, the logging module implicitly arranges them in a tree structure. The structure is defined by the logger names: a logger named 'animal' will be the parent of loggers named 'animal.cat' and 'animal.dog', and the unnamed root logger is an ancestor of every named logger.
In your case, the unnamed logger configured in settings.py is the root logger and therefore an ancestor of the logger named 'legacy'. Because records propagate upwards, the unnamed logger receives the messages sent through the 'legacy' logger and writes them into mysite.log.
Try giving the unnamed logger a name, such as 'mysite', to break the tree structure.
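A minimal sketch of that suggestion, keeping the handler setup from the question's settings.py (the name 'mysite' is just an example):
# settings.py
import logging

logging.captureWarnings(True)

# A named application logger instead of the root logger, so it is no
# longer an ancestor of the 'legacy' logger.
logger = logging.getLogger('mysite')
logger.addHandler(logging.NullHandler())
logger.addHandler(logging.FileHandler('mysite.log'))
logger.setLevel(logging.INFO)
Alternatively, leaving settings.py as it is and setting propagate = False on the 'legacy' logger would also stop its records from reaching the root logger's handlers.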
How do I silence a class's logging without knowing the name of its logger? The class in question is qualysconnect.
import logging
import qualysconnect.util
# Set log options. This is my attempt to silence it.
logger_qc = logging.getLogger('qualysconnect')
logger_qc.setLevel(logging.ERROR)
# Define a Handler which writes WARNING messages or higher to the sys.stderr
logger_console = logging.StreamHandler()
logger_console.setLevel(logging.ERROR)
# Set a format which is simpler for console use.
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# Tell the handler to use this format.
logger_console.setFormatter(formatter)
# Add the handler to the root logger
logging.getLogger('').addHandler(logger_console)
# 'application' code
logging.debug('debug message')
logging.info('info message')
logging.warn('warn message')
logging.error('error message')
logging.critical('critical message')
Output when import qualysconnect.util is commented out:
root : ERROR error message
root : CRITICAL critical message
Output when import qualysconnect.util is kept in:
WARNING:root:warn message
ERROR:root:error message
root : ERROR error message
CRITICAL:root:critical message
root : CRITICAL critical message
Sadly, since they did not define a name for their logger (in qualysconnect.util there is not even a getLogger() or getChild() call, just module-level logging calls), you can't do anything to it that won't affect the whole program's logging behaviour, without getting dirty.
The only clean option I can think of is to report the way they handle logging as a bug, and submit a patch where you modify the qualysconnect.util logging statements with something like:
import logging
logger = logging.getLogger('qualysconnect').getChild('util')
and replace all the logging.info(), logging.debug(), ... calls with logger.info(), logger.debug(), ...
The dirty option: you can monkey-patch the qualysconnect.util module so that its logging module reference is replaced with a logger object:
import logging
import qualysconnect.util

logger_qc = logging.getLogger('qualysconnect')
logger_qc.setLevel(logging.ERROR)
# Replace the module's `logging` reference with a child logger so that its
# logging.info()/logging.error() calls go through this logger instead of the root.
qualysconnect.util.logging = logger_qc.getChild('util')
qualysconnect.util.logging.setLevel(logging.CRITICAL + 1)  # drop CRITICAL and below from that module
That can be a working solution while you send a patch request to the upstream project, but it is certainly not a long-term one.
Or you can simply shut all logging off for the whole qualysconnect module, but I don't think that's what you want.
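For reference, the bluntest version of that last option is logging.disable(), which suppresses records at the given severity and below for every logger in the process, not just qualysconnect:
import logging

# Globally drop every record at CRITICAL severity or below, from every logger.
logging.disable(logging.CRITICAL)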