python: modify level of logs from a module

TLDR
If a module uses
log.error("something happened")
we would like to see these logs, but as warnings, so that the net effect for us would be the same as if that module had used
log.warning("something happened")
More details
We use the aiokafka module, which logs errors when the connection with confluent.cloud has trouble. However, these are transient problems and the connection is re-established after a while, so we would have preferred these logs to be warnings instead of errors; still, we don't want to lose them.
Is there a way to modify these log records "on the fly", to change their log level? I know I could
logger = logging.getLogger("aiokafka")
logger.setLevel(logging.CRITICAL)
but then all logs would get lost.

You can attach a filter function to the logger which downgrades the level. Here's a working example you can build on:
import logging

def downgrade_filter(record):
    # Rewrite ERROR records as WARNING before they are emitted.
    if record.levelno == logging.ERROR:
        record.levelno = logging.WARNING
        record.levelname = logging.getLevelName(logging.WARNING)
    return True  # keep the record

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG, format='%(levelname)-8s|%(name)-8s|%(message)s')
    logger = logging.getLogger('aiokafka')
    logger.setLevel(logging.WARNING)
    logger.addFilter(downgrade_filter)
    logger.debug('This should not appear')
    logger.info('This should not appear')
    logger.warning('This should appear as a warning')
    logger.error('This should appear as a warning, though logged as an error')
    logger.critical('This should appear as a critical error')
When run, it should print
WARNING |aiokafka|This should appear as a warning
WARNING |aiokafka|This should appear as a warning, though logged as an error
CRITICAL|aiokafka|This should appear as a critical error
(This is on a recent version of Python 3.x)
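One caveat worth adding (this is standard logging behavior, not something specific to aiokafka): a filter attached to a Logger only runs for records logged directly on that logger, not for records propagated up from child loggers such as aiokafka.conn. If the downgrade should also apply to children, attach the filter to the handler instead, since handler filters run for every record the handler emits. A minimal sketch:

```python
import logging
import sys

def downgrade_filter(record):
    # Rewrite ERROR records as WARNING before they are emitted.
    if record.levelno == logging.ERROR:
        record.levelno = logging.WARNING
        record.levelname = logging.getLevelName(logging.WARNING)
    return True  # keep the record

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('%(levelname)-8s|%(name)s|%(message)s'))
handler.addFilter(downgrade_filter)  # handler filters see propagated records too

root = logging.getLogger()
root.setLevel(logging.WARNING)
root.addHandler(handler)

# A child logger such as aiokafka.conn propagates up to the root handler,
# so its ERROR records are downgraded as well.
logging.getLogger('aiokafka.conn').error('connection hiccup')
```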

Related

How to redirect another library's console logging messages to a file, in Python

The fastAPI library that I import for an API I have written writes many logging.INFO-level messages to the console, which I would like either to redirect to a file-based log or, ideally, to send to both console and file.
So I've tried to implement this Stack Overflow answer ("Easy-peasy with Python 3.3 and above"), but the log file it creates ("api_screen.log") is always empty.
# -------------------------- logging ----------------------------
logging_file = "api_screen.log"
logging_level = logging.INFO
logging_format = ' %(message)s'
logging_handlers = [logging.FileHandler(logging_file), logging.StreamHandler()]
logging.basicConfig(level = logging_level, format = logging_format, handlers = logging_handlers)
logging.info("------logging test------")
Even though my own "------logging test------" message does appear on the console among the other fastAPI logs. The log file is created, but its size is zero.
So what do I need to do also to get the file logging working?
There are multiple issues here. First and most importantly: basicConfig() does nothing if the root logger already has handlers configured, which fastAPI does by the time your code runs. So the handlers you are creating are never used. When you call logging.info() you are sending a record to the root logger, and it is printed only because fastAPI has added a handler to it. You are also not setting the level on your handlers. Try this code instead of what you currently have:
logging_file = "api_screen.log"
logging_level = logging.INFO
logging_fh = logging.FileHandler(logging_file)
logging_sh = logging.StreamHandler()
logging_fh.setLevel(logging_level)
logging_sh.setLevel(logging_level)
root_logger = logging.getLogger()
root_logger.setLevel(logging_level)  # the root logger's own level must also allow INFO
root_logger.addHandler(logging_fh)
root_logger.addHandler(logging_sh)
logging.info('--test--')

How to set the logging level for logger objects?

I am having trouble finding the bug in my code when I try to log using the logging library. Even though I set the minimum log-level to DEBUG, the logger creates a log-file that starts at WARNING.
import logging
my_logger = logging.getLogger("our logger")
# Remove all handlers associated with my_logger object.
# This is only done so that we can run this block mult. times.
for handler in my_logger.handlers[:]:
    my_logger.removeHandler(handler)
# let's create a handler and set the logging level
my_handler_for_file = logging.FileHandler("custom logging.log", mode='w')
my_handler_for_file.setLevel(logging.DEBUG)
# try to set it to logging.CRITICAL and it will only log CRITICAL,
# so it does work but neither for DEBUG nor INFO!
# add the handlers to our custom logger
my_logger.addHandler(my_handler_for_file)
# let's create some logs
my_logger.debug('This is a debug message')
my_logger.info('This is an info message')
my_logger.warning('This is a warning message')
my_logger.error('This is an error message')
my_logger.critical('This is a critical message')
This is the output in the log-file:
This is a warning message
This is an error message
This is a critical message
And this is what I expect it to be:
This is a debug message
This is an info message
This is a warning message
This is an error message
This is a critical message
Does anyone have an idea what is wrong here?
You need to set the level on the logger too.
my_logger.setLevel(logging.DEBUG)
This is because loggers and handlers both filter messages by their level. Your my_logger never has its level set, so it inherits the root logger's default of WARNING and drops DEBUG and INFO records before they ever reach the file handler.
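To make the two gates visible, here is a small self-contained sketch (the logger and handler names are made up): the handler is set to DEBUG, but the record is still dropped until the logger's own level allows it.

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger('demo.levels')
logger.propagate = False            # keep the example self-contained

handler = logging.StreamHandler(stream)
handler.setLevel(logging.DEBUG)     # the handler would accept DEBUG...
logger.addHandler(handler)

logger.debug('dropped')             # ...but the logger (effective level WARNING) rejects it
logger.setLevel(logging.DEBUG)
logger.debug('kept')                # now both gates are open

print(stream.getvalue())            # prints: kept
```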

Testing logger Messages with unittest

I'd like to test logger messages without printing them to the screen in my unittests. Given this code:
import logging
logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger('test')
logger.warning('this is a warning')
# How do I see that there was a warning?
How do I look at the log records in the logger to see that there was a warning? I cannot find an iterator in Logger that would do the job.
You can use the TestCase.assertLogs() context manager (available since Python 3.4) for this. The documentation provides a pretty good example of what can be done with it:
with self.assertLogs('foo', level='INFO') as cm:
    logging.getLogger('foo').info('first message')
    logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
                             'ERROR:foo.bar:second message'])
After the with block exits, cm.records gives you the list of captured LogRecord instances, and cm.output the list of formatted messages.
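Putting that into a runnable test for the question's exact scenario (the test class name is made up):

```python
import logging
import unittest

class LogInspectionTest(unittest.TestCase):
    def test_warning_was_logged(self):
        with self.assertLogs('test', level='DEBUG') as cm:
            logging.getLogger('test').warning('this is a warning')
        # cm.records holds the raw LogRecord objects...
        record = cm.records[0]
        self.assertEqual(record.levelname, 'WARNING')
        self.assertEqual(record.getMessage(), 'this is a warning')
        # ...and cm.output the formatted "LEVEL:name:message" strings.
        self.assertEqual(cm.output, ['WARNING:test:this is a warning'])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(LogInspectionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```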

Python: setLevel of specific class's logger

How do I silence a class's logging without knowing the name of its logger? Class in question is qualysconnect.
import logging
import qualysconnect.util
# Set log options. This is my attempt to silence it.
logger_qc = logging.getLogger('qualysconnect')
logger_qc.setLevel(logging.ERROR)
# Define a Handler which writes WARNING messages or higher to the sys.stderr
logger_console = logging.StreamHandler()
logger_console.setLevel(logging.ERROR)
# Set a format which is simpler for console use.
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# Tell the handler to use this format.
logger_console.setFormatter(formatter)
# Add the handler to the root logger
logging.getLogger('').addHandler(logger_console)
# 'application' code
logging.debug('debug message')
logging.info('info message')
logging.warn('warn message')
logging.error('error message')
logging.critical('critical message')
Output when import qualysconnect.util is commented out:
root : ERROR error message
root : CRITICAL critical message
Output when import qualysconnect.util is kept in:
WARNING:root:warn message
ERROR:root:error message
root : ERROR error message
CRITICAL:root:critical message
root : CRITICAL critical message
Sadly, since qualysconnect.util never defines a named logger (there is no getLogger() or getChild() call; it logs through the module-level logging functions, i.e. the root logger), you can't target just that module without affecting the whole application's logging, short of getting dirty.
The only clean option I can think of is to report the way they handle logging as a bug, and submit a patch request where you modify qualysconnect.util logging statement with something like:
import logging
logger = logging.getLogger('qualysconnect').getChild('util')
and replace all logging.info(), logging.debug()... into logger.info(), logger.debug()...
The dirty option: you can monkey-patch the qualysconnect.util module, replacing the logging module it references with a Logger object (a Logger exposes the same debug()/info()/error()... methods as the module-level functions):
import logging
import qualysconnect.util

logger_qc = logging.getLogger('qualysconnect').getChild('util')
logger_qc.setLevel(logging.ERROR)  # drop everything below ERROR from this module
qualysconnect.util.logging = logger_qc
That can be a working solution for the time you're sending a patch request to the upstream project, but certainly not a long-term working solution.
Or you can simply shut all logging off from the whole qualysconnect module, but I don't think that's what you want.
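A third option, staying just on the clean side of monkey-patching: since the module logs through the root logger, every record it produces still carries the caller's source file in record.pathname, so you can attach a filter to your own handler that drops records by origin. This is a sketch; the 'qualysconnect' substring check is an assumption about the package's install path, and it only silences output through this handler, not through any others:

```python
import logging

def drop_qualysconnect(record):
    # Records created via logging.debug()/info()/... are named 'root',
    # but record.pathname still points at the caller's source file,
    # so we can filter by origin. The path check is an assumption.
    return 'qualysconnect' not in record.pathname

console = logging.StreamHandler()
console.setLevel(logging.ERROR)
console.addFilter(drop_qualysconnect)
logging.getLogger('').addHandler(console)
```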

trouble setting up python logging

I am using python's standard logging system to log my application. I want to print all types of messages (debug through critical) to the console, but I also want to send an email if the message level is error or higher. I've been reading about the logging documentation, but it was a bit confusing. I set up the following test, but it doesn't seem to work correctly:
import logging
log = logging.getLogger('my_test_log')
sublog = logging.getLogger('my_test_log.sublog')
log.setLevel(logging.ERROR)
log.addHandler(logging.StreamHandler())
sublog.addHandler(logging.StreamHandler())
sublog.setLevel(logging.DEBUG)
sublog.debug('This is a debug message')
sublog.info('This is an info message')
sublog.warn('This is a warn message')
sublog.error('This is an error message')
sublog.critical('This is a critical message')
NOTE: I attached a StreamHandler to both loggers for now because I don't want to spam email yet; with this setup the error and critical messages should simply print twice instead of being emailed. I will switch the second handler to SMTPHandler once this works.
This is my output when I run this code:
This is a debug message
This is a debug message
This is an info message
This is an info message
This is a warn message
This is a warn message
This is an error message
This is an error message
This is a critical message
This is a critical message
Basically everything gets printed twice rather than just the error and critical messages. What am I doing wrong here? Thanks!
After some quick research, it seems that Handler objects don't inherit their parent Logger's level; a handler's level defaults to NOTSET (0), so it processes every record the logger hands it. You have to set the handler's level yourself.
import logging
log = logging.getLogger('my_test_log')
sublog = logging.getLogger('my_test_log.sublog')
log.setLevel(logging.ERROR)
handler = logging.StreamHandler()
handler.setLevel(logging.ERROR)
log.addHandler(handler)
...
Your problem is propagation: every record accepted by sublog is emitted by sublog's own handler and then propagated up to my_test_log's handler as well. Ancestor logger levels are not re-checked during propagation (only handler levels are), which is why log.setLevel(logging.ERROR) doesn't filter the propagated records.
This should fix it:
sublog.propagate = False
This will stop the duplicate messages.
Review the Python logging documentation on propagation for the details.
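For the original goal (everything to the console, ERROR and above to email), the usual shape is one logger with two handlers at different levels. A sketch with a StringIO standing in for logging.handlers.SMTPHandler:

```python
import io
import logging
import sys

log = logging.getLogger('my_test_log')
log.setLevel(logging.DEBUG)            # the logger lets everything through...

console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.DEBUG)        # ...and the console handler prints it all
log.addHandler(console)

# Stand-in for an SMTPHandler: only ERROR and above reach it.
email_buffer = io.StringIO()
email_handler = logging.StreamHandler(email_buffer)
email_handler.setLevel(logging.ERROR)
log.addHandler(email_handler)

log.debug('console only')
log.error('console and "email"')
```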
