I am having trouble finding the bug in my code when I try to log using Python's logging library. Even though I set the minimum log level to DEBUG, the logger creates a log file that only starts at WARNING.
import logging
my_logger = logging.getLogger("our logger")
# Remove all handlers associated with my_logger object.
# This is only done so that we can run this block mult. times.
for handler in my_logger.handlers[:]:
    my_logger.removeHandler(handler)
# let's create a handler and set the logging level
my_handler_for_file = logging.FileHandler("custom logging.log", mode='w')
my_handler_for_file.setLevel(logging.DEBUG)
# Try setting it to logging.CRITICAL and it will log only CRITICAL,
# so the handler level does have an effect, just not for DEBUG or INFO!
# add the handlers to our custom logger
my_logger.addHandler(my_handler_for_file)
# let's create some logs
my_logger.debug('This is a debug message')
my_logger.info('This is an info message')
my_logger.warning('This is a warning message')
my_logger.error('This is an error message')
my_logger.critical('This is a critical message')
This is the output in the log file:
This is a warning message
This is an error message
This is a critical message
And this is what I expect it to be:
This is a debug message
This is an info message
This is a warning message
This is an error message
This is a critical message
Does anyone have an idea what is wrong here?
You need to set the level on the logger too.
my_logger.setLevel(logging.DEBUG)
This is because both loggers and handlers filter messages based on their own levels: the logger's level is checked first, and only records that pass it are handed to the handlers, so a permissive handler level cannot compensate for a restrictive logger level.
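For reference, here is the question's snippet with that one line added, as a minimal sketch (the file name and logger name are taken from the question):

import logging

my_logger = logging.getLogger("our logger")
my_logger.setLevel(logging.DEBUG)  # the missing line: let DEBUG records reach the handlers

my_handler_for_file = logging.FileHandler("custom logging.log", mode='w')
my_handler_for_file.setLevel(logging.DEBUG)
my_logger.addHandler(my_handler_for_file)

my_logger.debug('This is a debug message')  # now written to the file
my_logger.info('This is an info message')   # now written to the file

With both the logger and the handler set to DEBUG, all of the question's messages, DEBUG included, end up in the file.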
TL;DR
if a module uses
log.error("something happened")
we would like to see these logs, but as warnings, so that the net effect for us would be the same as if that module had used
log.warning("something happened")
More details
We use the aiokafka module, which logs errors when the connection to confluent.cloud has trouble. However, these are transient problems and the connection is re-established after a while, so we would have preferred these logs to be warnings instead of errors; still, we don't want to lose them.
Is there a way to modify these log records "on the fly" to change their log level? I know I could do
logger = logging.getLogger("aiokafka")
logger.setLevel(logging.CRITICAL)
but then all logs would get lost.
You can attach a filter function to the logger which downgrades the level. Here's a working example you can build from:
import logging
def downgrade_filter(record):
    if record.levelno == logging.ERROR:
        record.levelno = logging.WARNING
        record.levelname = logging.getLevelName(logging.WARNING)
    return True
if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG, format='%(levelname)-8s|%(name)-8s|%(message)s')
    logger = logging.getLogger('aiokafka')
    logger.setLevel(logging.WARNING)
    logger.addFilter(downgrade_filter)
    logger.debug('This should not appear')
    logger.info('This should not appear')
    logger.warning('This should appear as a warning')
    logger.error('This should appear as a warning, though logged as an error')
    logger.critical('This should appear as a critical error')
When run, it should print:
WARNING |aiokafka|This should appear as a warning
WARNING |aiokafka|This should appear as a warning, though logged as an error
CRITICAL|aiokafka|This should appear as a critical error
(This is on a recent version of Python 3; passing a plain function to addFilter() works on Python 3.2+.)
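If you prefer a reusable object over a bare function, the same idea fits into a logging.Filter subclass; a sketch under the same assumptions (LevelDowngradeFilter is a made-up name):

import logging

class LevelDowngradeFilter(logging.Filter):
    # Relabels records of one level as another before they are handled.
    def __init__(self, from_level=logging.ERROR, to_level=logging.WARNING):
        super().__init__()
        self.from_level = from_level
        self.to_level = to_level

    def filter(self, record):
        if record.levelno == self.from_level:
            record.levelno = self.to_level
            record.levelname = logging.getLevelName(self.to_level)
        return True  # never drop the record, only relabel it

logging.getLogger('aiokafka').addFilter(LevelDowngradeFilter())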
I'd like to test logger messages in my unit tests without printing them to the screen. Given this code:
import logging
logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger('test')
logger.warning('this is a warning')
# How do I see that there was a warning?
How do I look at the log records in the logger to see that there was a warning? I cannot find an iterator in Logger that would do the job.
You can use the TestCase.assertLogs() context manager in this case. The documentation provides a pretty good example of what can be done with it:
with self.assertLogs('foo', level='INFO') as cm:
    logging.getLogger('foo').info('first message')
    logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
                             'ERROR:foo.bar:second message'])
Via the context manager's cm object you can access cm.records, a list of LogRecord instances, or cm.output, a list of formatted messages.
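For instance, a self-contained test that inspects cm.records directly might look like this sketch (the logger name 'test' is taken from the question):

import logging
import unittest

class TestLogging(unittest.TestCase):
    def test_warning_is_logged(self):
        logger = logging.getLogger('test')
        with self.assertLogs('test', level='WARNING') as cm:
            logger.warning('this is a warning')
        # cm.records holds the raw LogRecord objects
        self.assertEqual(cm.records[0].levelname, 'WARNING')
        self.assertEqual(cm.records[0].getMessage(), 'this is a warning')

if __name__ == '__main__':
    unittest.main()

Nothing is printed to the screen: assertLogs temporarily replaces the logger's handlers while the block runs.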
I'm trying to use logging in Python to write some logs, but strangely only errors get logged; info is ignored no matter which level I set.
code:
import logging
import logging.handlers
if __name__ == "__main__":
    logger = logging.getLogger()
    fh = logging.handlers.RotatingFileHandler('./logtest.log', maxBytes=10240, backupCount=5)
    fh.setLevel(logging.DEBUG)  # no matter what level I set here
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    logger.info('INFO')
    logger.error('ERROR')
The result is:
2014-01-14 11:47:38,990 - root - ERROR - ERROR
According to http://docs.python.org/2/library/logging.html#logging-levels, the INFO message should be logged too.
The problem is that the logger's level is still set to the default. So the logger discards the message before it even gets to the handlers. The fact that the handler would have accepted the message if it received it doesn't matter, because it never receives it.
So, just add this:
logger.setLevel(logging.INFO)
As the docs explain, the logger's default level is NOTSET, which means it checks with its parent, which is the root, which has a default of WARNING.
And you can probably leave the handler at its default of NOTSET, which means it defers to the logger's filtering.
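Putting it together, the question's script with that single line added (a minimal sketch):

import logging
import logging.handlers

if __name__ == "__main__":
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)  # the missing line: lower the root logger's threshold
    fh = logging.handlers.RotatingFileHandler('./logtest.log', maxBytes=10240, backupCount=5)
    fh.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    logger.info('INFO')   # now reaches the handler and is written
    logger.error('ERROR')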
I think you might have to set the correct threshold:
logger.setLevel(logging.INFO)
I got a logger using logging.getLogger(__name__). I tried setting the log level to logging.INFO as mentioned in other answers, but that didn't work.
A quick check on both the created logger and its parent (root) logger showed that neither had any handlers (using hasHandlers()). The documentation states that a handler should have been created upon the first call to any of the logging functions debug(), info(), etc.:
The functions debug(), info(), warning(), error() and critical() will
call basicConfig() automatically if no handlers are defined for the
root logger.
But it did not, because that paragraph applies only to the module-level functions (logging.info() and friends), not to methods called on a logger object. All I had to do was call basicConfig() manually.
Solution:
import logging

logging.basicConfig()  # add a level here (e.g. level=logging.INFO) if you plan on using logging.info() directly instead of my_logger as below
my_logger = logging.getLogger(__name__)
my_logger.setLevel(logging.INFO)
my_logger.info("Hi")
which prints:
INFO:__main__:Hi
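Alternatively, the level can be passed straight to basicConfig(), which configures the root logger and a default handler in one call; a minimal sketch:

import logging

logging.basicConfig(level=logging.INFO)  # root logger now lets INFO and above through
my_logger = logging.getLogger(__name__)
my_logger.info("Hi")  # prints INFO:__main__:Hi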
How do I silence a class's logging without knowing the name of its logger? The class in question is in qualysconnect.
import logging
import qualysconnect.util
# Set log options. This is my attempt to silence it.
logger_qc = logging.getLogger('qualysconnect')
logger_qc.setLevel(logging.ERROR)
# Define a Handler which writes ERROR messages or higher to sys.stderr
logger_console = logging.StreamHandler()
logger_console.setLevel(logging.ERROR)
# Set a format which is simpler for console use.
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# Tell the handler to use this format.
logger_console.setFormatter(formatter)
# Add the handler to the root logger
logging.getLogger('').addHandler(logger_console)
# 'application' code
logging.debug('debug message')
logging.info('info message')
logging.warning('warn message')
logging.error('error message')
logging.critical('critical message')
Output when import qualysconnect.util is commented out:
root : ERROR error message
root : CRITICAL critical message
Output when import qualysconnect.util is kept in:
WARNING:root:warn message
ERROR:root:error message
root : ERROR error message
CRITICAL:root:critical message
root : CRITICAL critical message
Sadly, since they did not define a name for their logger (in qualysconnect.util there is no getLogger() or getChild() call at all, just the module-level logging functions), you can't target it without affecting logging behavior globally, short of getting dirty.
The only clean option I can think of is to report the way they handle logging as a bug, and to submit a patch request where you modify qualysconnect.util's logging statements with something like:
import logging
logger = logging.getLogger('qualysconnect').getChild('util')
and replace all the logging.info(), logging.debug(), etc. calls with logger.info(), logger.debug(), and so on.
The dirty option: you can monkey-patch the qualysconnect.util module, replacing its reference to the logging module with a logger object:
import logging
import qualysconnect.util

logger_qc = logging.getLogger('qualysconnect')
logger_qc.setLevel(logging.ERROR)

# Every module-level logging.info()/logging.error() call inside
# qualysconnect.util now goes through this named logger instead.
qualysconnect.util.logging = logger_qc.getChild('util')
qualysconnect.util.logging.setLevel(logging.CRITICAL)  # silences everything below CRITICAL
That can work while your patch request makes its way into the upstream project, but it is certainly not a long-term solution.
Or you can simply shut off all logging from the whole qualysconnect module, but I don't think that's what you want.
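One more option in between: LogRecord objects carry the path of the file that emitted them, so you can attach a filter to the root logger that drops anything originating in the package's source tree. A sketch, assuming (as the question's output suggests) that qualysconnect.util logs through the root logger:

import logging

def drop_qualysconnect(record):
    # record.pathname is the source file the logging call came from;
    # drop records emitted from anywhere inside the qualysconnect package.
    return 'qualysconnect' not in record.pathname

logging.getLogger().addFilter(drop_qualysconnect)

Note that logger filters only apply to records logged directly through that logger, so this catches the module's bare logging.warning()/logging.error() calls but not records propagated from named child loggers.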
I am using Python's standard logging system to log my application. I want to print all messages (debug through critical) to the console, but I also want to send an email if the message level is error or higher. I've been reading the logging documentation, but it's a bit confusing. I set up the following test, but it doesn't seem to work correctly:
import logging
log = logging.getLogger('my_test_log')
sublog = logging.getLogger('my_test_log.sublog')
log.setLevel(logging.ERROR)
log.addHandler(logging.StreamHandler())
sublog.addHandler(logging.StreamHandler())
sublog.setLevel(logging.DEBUG)
sublog.debug('This is a debug message')
sublog.info('This is an info message')
sublog.warning('This is a warn message')
sublog.error('This is an error message')
sublog.critical('This is a critical message')
NOTE: I set up both loggers with a StreamHandler for now because I don't want to spam email yet; in this setup it should just print the error and critical messages twice instead of emailing them. I will switch the second handler to an SMTPHandler once this works.
This is my output when I run this code:
This is a debug message
This is a debug message
This is an info message
This is an info message
This is a warn message
This is a warn message
This is an error message
This is an error message
This is a critical message
This is a critical message
Basically everything gets printed twice rather than just the error and critical messages. What am I doing wrong here? Thanks!
After some quick research, it seems that Handler objects don't automatically pick up the level of the logger they are attached to; a handler's default level is NOTSET, which lets everything through. Note also that when a record propagates from a child logger, only the handler levels on the ancestors are checked, not the ancestor loggers' own levels, which is why log.setLevel(logging.ERROR) alone changes nothing here. You'll have to set the level on the handler yourself:
import logging
log = logging.getLogger('my_test_log')
sublog = logging.getLogger('my_test_log.sublog')
log.setLevel(logging.ERROR)
handler = logging.StreamHandler()
handler.setLevel(logging.ERROR)
log.addHandler(handler)
...
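So the full shape of the setup would be something like the sketch below, with a second StreamHandler standing in for the eventual SMTPHandler (email_standin is a made-up name):

import logging

log = logging.getLogger('my_test_log')
sublog = logging.getLogger('my_test_log.sublog')

# Console handler on the child: prints every message once.
console = logging.StreamHandler()
sublog.addHandler(console)
sublog.setLevel(logging.DEBUG)

# Stand-in for the future SMTPHandler on the parent: ERROR and above only.
email_standin = logging.StreamHandler()
email_standin.setLevel(logging.ERROR)
log.addHandler(email_standin)

sublog.debug('console only')                           # printed once
sublog.error('console, plus the ERROR-level handler')  # printed twice

The error and critical messages still appear twice here, but that is exactly the behavior you want once the second handler really is an SMTPHandler: once on the console, once by email.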
Your problem is that you have the level set to DEBUG on the sublog, so you get all of the messages (change it to ERROR if you only want errors and above). The duplication itself comes from logger.propagate being True: every record handled by sublog is also passed to my_test_log's handler.
This should fix it:
sublog.propagate = False
This will stop the duplicate messages.
Review the Python logging documentation on propagation for more details.
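A minimal sketch of what propagate = False changes, using the question's two loggers:

import logging

log = logging.getLogger('my_test_log')
sublog = logging.getLogger('my_test_log.sublog')
log.addHandler(logging.StreamHandler())
sublog.addHandler(logging.StreamHandler())
sublog.setLevel(logging.DEBUG)

sublog.propagate = False  # records now stop at sublog's own handlers
sublog.error('printed once, by the handler on sublog only')

Keep in mind that with propagation off, handlers attached to my_test_log never see these records, so this trades the duplicates for losing the parent's (eventual email) handler.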