Why does logging.setLevel() have no effect here in Python?

I'm trying to understand how the logging module really works. The following code doesn't behave as I expected.
#!/usr/bin/env python3
import logging
l = logging.getLogger()
l.setLevel(logging.DEBUG)
print('enabled for DEBUG: {}'.format(l.isEnabledFor(logging.DEBUG)))
l.debug('debug')
l.info('info')
l.warning('warning')
l.error('error')
l.critical('critical')
It just prints this to the console:
warning
error
critical
But why? Shouldn't there be info and debug, too? Why not?
The question is not how to fix this. I know about handlers and things like that. I just want to understand how this code works and why it doesn't behave as I expect.

When no handler is set, the lastResort handler is used, and by default the lastResort level is set to WARNING.
This is implemented by this bit of code:
_defaultLastResort = _StderrHandler(WARNING)
lastResort = _defaultLastResort

def callHandlers(self, record):
    ...
    found = 0
    ...
    if (found == 0):
        if lastResort:
            if record.levelno >= lastResort.level:
                lastResort.handle(record)
Remember also that both loggers and handlers have levels. A record can be
filtered by the logger for having too low a level, and it can also be filtered
by a handler for having too low a level. Setting the logger level to DEBUG allows the subsequent logging calls to pass the logger's level filter, but they can still be filtered by the handler's level filter, which is set to lastResort.level, i.e. WARNING, by default.
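To see DEBUG and INFO as well, you need to give the record a handler whose own level is low enough, so that lastResort never comes into play. A minimal sketch (the explicit handler level is shown for clarity; leaving it at NOTSET would also work):
import logging

l = logging.getLogger()
l.setLevel(logging.DEBUG)          # logger-level filter: allow everything

handler = logging.StreamHandler()  # a real handler, so lastResort is not used
handler.setLevel(logging.DEBUG)    # handler-level filter: allow everything too
l.addHandler(handler)

l.debug('debug')      # now printed
l.info('info')        # now printed
l.warning('warning')  # printed as before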

Related

How to avoid root handler being called from the custom logger in Python?

I have a basic config for the logging module with debug level - now I want to create another logger with error level only. How can I do that?
The problem is that the root handler is called in addition to the error-handler - this is something I want to avoid.
import logging
fmt = '%(asctime)s:%(funcName)s:%(lineno)d:%(levelname)s:%(name)s:%(message)s'
logging.basicConfig(level=logging.DEBUG, format=fmt)
logger = logging.getLogger('Temp')
logger.setLevel(logging.ERROR)
handler = logging.StreamHandler()
handler.setLevel(logging.ERROR)
logger.addHandler(handler)
logger.error('boo')
The above code prints boo twice, while I expect it only once, and I have no idea what to do about this annoying issue...
In [4]: logger.error('boo')
boo
2021-04-26 18:54:24,329:<module>:1:ERROR:Temp:boo
In [5]: logger.handlers
Out[5]: [<StreamHandler stderr (ERROR)>]
Some basics about the logging module
logger: a person who receives the log string, sorts it by a predefined level, then uses his own handlers (if any) to process it and, by default, passes it on to his superior.
root logger: the superior of superiors; he does everything a normal logger does but doesn't pass the received log on to anyone else.
handler: a private contractor of a logger, who actually does the work with the log, e.g. formats it, writes it to a file or to the console, or sends it over TCP/UDP.
formatter: a theme, a design that the handler applies to the log.
basicConfig: a shortcut for configuring the root logger. It is useful when you want him to do all the work and all his lower-rank loggers to simply pass their logs up to him.
With no arguments, basicConfig leaves the root logger's level at its default of WARNING and adds a StreamHandler that outputs to stderr, as in the short sketch below.
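For example, a bare call behaves like this minimal sketch:
import logging

logging.basicConfig()   # no arguments: root level stays at WARNING, a stderr StreamHandler is added

logging.info('dropped: INFO is below the root WARNING level')
logging.warning('printed to stderr as "WARNING:root:..."')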
What you did
1. You created a format string and used the basicConfig shortcut to configure the root logger. You want the root logger to do all the actual logging.
2. You created a new low-rank logger, Temp.
3. You want it to accept only logs at ERROR level and above.
4. You created another StreamHandler, which outputs to stderr by default.
5. You want it to handle only ERROR level and above.
6. Oh, you attached it to the Temp logger, which makes 5 redundant since the level is already set in 3.
7. Oh wait, in 1 you said you just wanted the root logger to do the job!
8. You logged an ERROR with your logger.
What happened
Your Temp logger accepted the string boo at ERROR level, then told its handler to process it. Since this handler had no formatter assigned, it output the string as-is to stderr: boo
After that, the Temp logger passed the string boo up to its superior, the root logger.
The root logger accepted the log, since ERROR > WARNING.
The root logger then told its handler to process the string boo.
That handler applied the format string to boo: it added the timestamp, the location, the name of the logger that passed the log, and so on.
Finally it output the result to stderr: 2021-04-26 18:54:24,329:<module>:1:ERROR:Temp:boo
Make it right
Since your code does exactly what you tell it to do, you have to tell it as much detail as possible.
Only use basicConfig for quick-and-dirty setups. Removing the basicConfig line solves your problem.
Use logger = logging.getLogger(__name__) (without quotes) so the logger carries the module's name; looking at the log, you then know exactly which import path it came from.
Decide whether a logger should keep the log to itself or pass it up the chain, via the propagate property. In your case, logger.propagate = False also solves the problem (see the sketch after this list).
Use dictConfig so the configuration doesn't get tangled up in your code.
In practice, you should not add handlers to your own loggers; just let them pass the log all the way up to the root and let the root do the actual logging. Why?
Anyone who uses your code as a module can then control the logging, for example sending output over TCP/UDP instead of to stdout, or using a different format.
You can turn off the logging from a specific logger entirely by setting propagate = False on it.
You know exactly all the handlers and formatters in the code if you only added them to the root logger. You have centralized control over the logging.
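For instance, a minimal sketch of the propagate fix, keeping the rest of the code from the question:
import logging

fmt = '%(asctime)s:%(funcName)s:%(lineno)d:%(levelname)s:%(name)s:%(message)s'
logging.basicConfig(level=logging.DEBUG, format=fmt)

logger = logging.getLogger('Temp')
logger.setLevel(logging.ERROR)

handler = logging.StreamHandler()
handler.setLevel(logging.ERROR)
logger.addHandler(handler)

logger.propagate = False   # stop the record from also reaching the root handler
logger.error('boo')        # printed once, unformatted, by the Temp handler only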

Setting the Level of Python's Logger inside a Try/Except

I seem to be having a spot of trouble getting the logger set up in Python. I'm trying to read in values for the control of the program (oceanographic simulations, with a lot of little tuning factors which are best kept isolated in a file somewhere, as we tend to play around with them a fair bit) from a control file (JSON-based). Among these is the level to which the logging should be set. As this is something I expect people may misspell and Python is quite strict on, I'm trying to wrap the logger setup in a try-except which will default it back to INFO if there's a mistake (and give a warning), as shown:
with open(control['outFileStem'] + "_log.info", 'w'):
    pass  # Clear an existing logfile

# Then set up the logging
try:
    logging.basicConfig(filename=control['outFileStem'] + "_log.info", level=control['logLevel'])
except ValueError:
    control['logLevel'] = "INFO"
    logging.basicConfig(filename=control['outFileStem'] + "_log.info", level=logging.INFO)
    logging.warning('WARNING: Logging Level set to INFO')
logging.info('Control structure created successfully')
However, when I experiment with it by mangling the level in the control file, it exclusively sets the level to WARNING. This happens no matter which variant I use for the level (i.e. logging.INFO, the numerical code, calling the value in from the control object as opposed to putting it in myself, etc.) I can't figure out what's going on here.
In case it's relevant, control is a dict subclass whose __init__ method I've altered to handle reading in from the JSON file nice and cleanly.
Thanks in advance.
Note that basicConfig() (as documented) does nothing if handlers are already present on the root logger. The basicConfig() implementation adds handlers first and sets the level at the end, so if it fails at that point, handlers have already been added and the basicConfig() call in the except clause does nothing. The simplest solution is, in the except clause, to set the level using
logging.getLogger().setLevel(logging.INFO)
instead of calling basicConfig() again to do it.
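In other words, something along these lines (the control dict and the misspelled level here are only illustrative):
import logging

control = {'outFileStem': 'run1', 'logLevel': 'IFNO'}   # e.g. a misspelled level from the JSON file

try:
    logging.basicConfig(filename=control['outFileStem'] + "_log.info",
                        level=control['logLevel'])
except ValueError:
    # The FileHandler was already added before setting the level failed,
    # so just fix the root logger's level directly.
    logging.getLogger().setLevel(logging.INFO)
    logging.warning('WARNING: Logging Level set to INFO')
logging.info('Control structure created successfully')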
How do you define logLevel in control (what is its type)?
What do you think of this?
#!/usr/bin/env python
import logging

control = {"logLevel": "FAKE"}

# Get the log level object, which is just a number
try:
    log_level = getattr(logging, control["logLevel"])
except AttributeError:
    log_level = logging.INFO

logging.basicConfig(filename="logfile", level=log_level)
logging.debug("debug test")
logging.info("info test")
logging.warning("warning test")
logging.error("error test")

Python: how to suppress logging statements from third party libraries? [duplicate]

This question already has answers here: How do I disable log messages from the Requests library?
My logging setup looks like
import requests
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('BBProposalGenerator')
When I run this, I get logs as
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): localhost
INFO:BBProposalGenerator:created proposal: 763099d5-3c8a-47bc-8be8-b71a593c36ac
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): localhost
INFO:BBProposalGenerator:added offer with cubeValueId: f23f801f-7066-49a2-9f1b-1f8c64576a03
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): localhost
INFO:BBProposalGenerator:evaluated proposal: 763099d5-3c8a-47bc-8be8-b71a593c36ac
How can I suppress the log statements from the requests package, so that I only see logs from my own code?
INFO:BBProposalGenerator:created proposal: 763099d5-3c8a-47bc-8be8-b71a593c36ac
INFO:BBProposalGenerator:added offer with cubeValueId: f23f801f-7066-49a2-9f1b-1f8c64576a03
INFO:BBProposalGenerator:evaluated proposal: 763099d5-3c8a-47bc-8be8-b71a593c36ac
Thanks
You called basicConfig() with a level of logging.INFO, which sets the effective level to INFO for the root logger, and all descendant loggers which don't have explicitly set levels. This includes the requests loggers, and explains why you're getting these results.
Instead, you can do
logging.basicConfig()
which will leave the level at its default value of WARNING, but add a handler which outputs log messages to the console. Then, set the level on your logger to INFO:
logger = logging.getLogger('BBProposalGenerator')
logger.setLevel(logging.INFO)
Now, INFO and higher severity events logged to logger BBProposalGenerator or any of its descendants will be printed, but the root logger (and other descendants of it, such as requests.XXX) will remain at WARNING level and only show WARNING or higher messages.
Of course, you can specify a higher level in the basicConfig() call - if you specified ERROR, for example, you would only see ERROR or higher from all the other loggers, but INFO or higher from BBProposalGenerator and its descendants.
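Put together, a minimal sketch of that approach (the third-party logger name is only there to illustrate the effect):
import logging

logging.basicConfig()            # handler on the root, root level left at WARNING

logger = logging.getLogger('BBProposalGenerator')
logger.setLevel(logging.INFO)    # this logger (and its descendants) now pass INFO

logger.info('visible: BBProposalGenerator is explicitly set to INFO')
logging.getLogger('requests.packages.urllib3.connectionpool').info(
    'hidden: this logger inherits WARNING from the root')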
What you want to do is apply a filter to all of the loggers so that you can control what is emitted. The only way I could find to apply a filter to all of the loggers is by registering a class derived from logging.Logger as the default logger class and applying the filter there, like so:
class MyFilter(logging.Filter):
    def filter(self, record):
        if record.name != 'BBProposalGenerator':
            return False
        return True

class MyLogger(logging.Logger):
    def __init__(self, name):
        logging.Logger.__init__(self, name)
        self.addFilter(MyFilter())
Then all you have to do, before any loggers are instantiated, is set the default logger class to your derived class, like so:
logging.setLoggerClass(MyLogger)
logging.basicConfig(level=logging.INFO)
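As a rough usage sketch of that idea (self-contained; the third-party logger name is just for illustration, and setLoggerClass() has to run before any library creates its loggers):
import logging

class MyFilter(logging.Filter):
    def filter(self, record):
        return record.name == 'BBProposalGenerator'

class MyLogger(logging.Logger):
    def __init__(self, name):
        logging.Logger.__init__(self, name)
        self.addFilter(MyFilter())

logging.setLoggerClass(MyLogger)    # must happen before other loggers are created
logging.basicConfig(level=logging.INFO)

logging.getLogger('BBProposalGenerator').info('kept by the filter')   # printed
logging.getLogger('some.third.party').info('dropped by the filter')   # suppressed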
Hope this helps!

python logging specific level only

I'm logging events in my Python code using the python logging module. I have two log files I wish to log to, one to contain user information and the other a more detailed log file for devs. I've set the two log files to the levels I want (usr.log = INFO and dev.log = ERROR) but can't work out how to restrict the logging to the usr.log file so only the INFO level logs are written to it, as opposed to INFO plus everything above it, e.g. INFO, WARNING, ERROR and CRITICAL.
This is basically my code:
import logging
logger1 = logging.getLogger('')
logger1.addHandler(logging.FileHandler('/home/tmp/usr.log'))
logger1.setLevel(logging.INFO)
logger2 = logging.getLogger('')
logger2.addHandler(logging.FileHandler('/home/tmp/dev.log'))
logger2.setLevel(logging.ERROR)
logging.critical('this to be logged in dev.log only')
logging.info('this to be logged to usr.log and dev.log')
logging.warning('this to be logged to dev.log only')
Any help would be great thank you.
I am in general agreement with David, but I think more needs to be said. To paraphrase The Princess Bride - I do not think this code means what you think it means. Your code has:
logger1 = logging.getLogger('')
...
logger2 = logging.getLogger('')
which means that logger1 and logger2 are the same logger, so when you set the level of logger2 to ERROR you actually end up setting the level of logger1 at the same time. In order to get two different loggers, you would need to supply two different logger names. For example:
logger1 = logging.getLogger('user')
...
logger2 = logging.getLogger('dev')
Worse still, you are calling the logging module's critical(), info() and warning() methods and expecting that both loggers will get the messages. This only works because you used the empty string as the name for both logger1 and logger2 and thus they are not only the same logger, they are also the root logger. If you use different names for the two loggers as I have suggested, then you'll need to call the critical(), info() and warning() methods on each logger individually (i.e. you'll need two calls rather than just one).
What I think you really want is to have two different handlers on a single logger. For example:
import logging
mylogger = logging.getLogger('mylogger')
handler1 = logging.FileHandler('usr.log')
handler1.setLevel(logging.INFO)
mylogger.addHandler(handler1)
handler2 = logging.FileHandler('dev.log')
handler2.setLevel(logging.ERROR)
mylogger.addHandler(handler2)
mylogger.setLevel(logging.INFO)
mylogger.critical('A critical message')
mylogger.info('An info message')
Once you've made this change, then you can use filters as David has already mentioned. Here's a quick sample filter:
class MyFilter(object):
    def __init__(self, level):
        self.__level = level

    def filter(self, logRecord):
        return logRecord.levelno <= self.__level
You can apply the filter to each of the two handlers like this:
handler1.addFilter(MyFilter(logging.INFO))
...
handler2.addFilter(MyFilter(logging.ERROR))
This will restrict each handler to only write out log messages at the level specified.
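Putting the pieces together, a runnable sketch might look like the following (file names as above; note that with these caps a CRITICAL record is written to neither file, and propagation is switched off so nothing leaks to the console):
import logging

class MyFilter(object):
    def __init__(self, level):
        self.__level = level
    def filter(self, logRecord):
        return logRecord.levelno <= self.__level

mylogger = logging.getLogger('mylogger')
mylogger.setLevel(logging.INFO)
mylogger.propagate = False                    # keep records away from the root logger

handler1 = logging.FileHandler('usr.log')
handler1.setLevel(logging.INFO)               # at least INFO ...
handler1.addFilter(MyFilter(logging.INFO))    # ... and at most INFO
mylogger.addHandler(handler1)

handler2 = logging.FileHandler('dev.log')
handler2.setLevel(logging.ERROR)              # at least ERROR ...
handler2.addFilter(MyFilter(logging.ERROR))   # ... and at most ERROR
mylogger.addHandler(handler2)

mylogger.info('ends up in usr.log only')
mylogger.error('ends up in dev.log only')
mylogger.critical('capped out of both files')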
First: this is a rather odd thing to want to do, and strikes me as a slight misuse of the logging system. I can't imagine any situation in which it makes sense to notify the user about the normal operation of the program but not about things that are more important. The logging levels should be used to indicate importance; if you have messages that are only of interest to developers, you should be using some other mechanism to distinguish them (such as which logger you send them to).
That being said, you can implement arbitrary filtering of log records by creating a Filter subclass whose filter method implements your desired criteria and install it on the appropriate handler.

Python: Warnings and logging verbose limit

I want to unify the whole logging facility of my app. Any warning raises an exception; I then catch it and pass it to the logger. But the question: does logging have any mute facility? Sometimes the logger becomes too verbose. And since warnings can get too noisy, is there any verbosity limit for warnings?
http://docs.python.org/library/logging.html
http://docs.python.org/library/warnings.html
Not only are there log levels, but there is a really flexible way of configuring them. If you are using named logger objects (e.g., logger = logging.getLogger(...)) then you can configure them appropriately. That will let you configure verbosity on a subsystem-by-subsystem basis where a subsystem is defined by the logging hierarchy.
The other option is to use logging.Filter and Warning filters to limit the output. I haven't used this method before but it looks like it might be a better fit for your needs.
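For the warnings side, here is one hedged sketch: the standard warnings filters can cap repeats, and logging.captureWarnings() (added in Python 2.7/3.2, so it may or may not fit your setup) routes warnings into the logging system instead of your hand-rolled catch-and-log:
import logging
import warnings

logging.basicConfig(level=logging.INFO)
logging.captureWarnings(True)       # warnings are logged via the 'py.warnings' logger

warnings.filterwarnings('once')     # each distinct warning message is shown only once

warnings.warn('sensor reading looks flaky')   # logged once
warnings.warn('sensor reading looks flaky')   # suppressed by the 'once' filter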
Give PEP-282 a read for a good prose description of the Python logging package. I think that it describes the functionality much better than the module documentation does.
Edit after Clarification
You might be able to handle the logging portion of this using a custom class based on logging.Logger and registered with logging.setLoggerClass(). It really sounds like you want something similar to syslog's "Last message repeated 9 times". Unfortunately I don't know of an implementation of this anywhere. You might want to see if twisted.python.log supports this functionality.
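I'm not aware of a ready-made implementation either; a rough, purely hypothetical sketch of a repeat-suppressing filter (it drops consecutive duplicates rather than counting them the way syslog does) could look like this:
import logging

class DuplicateFilter(logging.Filter):
    # Hypothetical sketch: drop a record if it repeats the previous one verbatim.
    def __init__(self):
        logging.Filter.__init__(self)
        self._last = None

    def filter(self, record):
        current = (record.name, record.levelno, record.getMessage())
        if current == self._last:
            return False       # suppress the repeat
        self._last = current
        return True

logging.basicConfig(level=logging.INFO)
logging.getLogger().addFilter(DuplicateFilter())

for _ in range(3):
    logging.info('polling sensor...')    # emitted only the first time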
From the very source you mentioned: there are the log levels, use them wisely ;-)
LEVELS = {'debug': logging.DEBUG,
          'info': logging.INFO,
          'warning': logging.WARNING,
          'error': logging.ERROR,
          'critical': logging.CRITICAL}
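A typical use of such a mapping, sketched (the command-line handling is just one way to feed it):
import logging
import sys

LEVELS = {'debug': logging.DEBUG,
          'info': logging.INFO,
          'warning': logging.WARNING,
          'error': logging.ERROR,
          'critical': logging.CRITICAL}

# e.g. "python app.py debug"; unknown names fall back to WARNING
name = sys.argv[1] if len(sys.argv) > 1 else 'warning'
logging.basicConfig(level=LEVELS.get(name.lower(), logging.WARNING))

logging.debug('shown only when the chosen level is debug')
logging.error('shown for every choice except critical')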
This will be a problem if you plan to make all logging calls from some blind error handler that doesn't know anything about the code that raised the error, which is what your question sounds like. How will you decide which logging calls get made and which don't?
The more standard practice is to use such blocks to recover if possible, and log an error (really, if it is an error that you weren't specifically prepared for, you want to know about it; use a high level). But don't rely on these blocks for all your state/debug information. Better to sprinkle your code with logging calls before it gets to the error-handler. That way, you can observe useful run-time information about a system when it is NOT failing and you can make logging calls of different severity. For example:
import logging
from traceback import format_exc

logger = logging.getLogger()  # Gives the root logger. Change this for better organization
# Add your handlers or what have you

def handle_error(e):
    logger.error("Unexpected error found")
    logger.warning(format_exc())  # put the traceback in the log at a lower level
    ...  # Your recovery code

def do_stuff():
    logger.info("Program started")
    ...  # Your main code
    logger.info("Stuff done")

if __name__ == "__main__":
    try:
        do_stuff()
    except Exception as e:
        handle_error(e)
