I was trying to create a custom attribute for logging (caller's class name, module name, etc.) and got stuck with a strange exception telling me that the LogRecord instance created in the process did not have the necessary attributes. After a bit of testing I ended up with this:
import logging

class MyLogger(logging.getLoggerClass()):
    value = None

logging.setLoggerClass(MyLogger)

loggers = [
    logging.getLogger(),
    logging.getLogger(""),
    logging.getLogger("Name"),
]

for logger in loggers:
    print(isinstance(logger, MyLogger), hasattr(logger, "value"))
This seemingly correct piece of code yields:
False False
False False
True True
Bug or feature?
Looking at the source code we can see the following:
root = RootLogger(WARNING)

def getLogger(name=None):
    if name:
        return Logger.manager.getLogger(name)
    else:
        return root
That is, a root logger is created by default when the module is imported. Hence, every time you look up the root logger (by passing a falsy value such as the empty string or None), you get that pre-existing logging.RootLogger object, regardless of any call to logging.setLoggerClass.
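A quick way to confirm this in the interpreter (a minimal check; nothing here depends on the custom subclass):

import logging

# Falsy names ("" or None) all resolve to the same pre-created root logger:
assert logging.getLogger() is logging.getLogger("")
assert type(logging.getLogger()) is logging.RootLogger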
Regarding the logger class being used we can see:
_loggerClass = None

def setLoggerClass(klass):
    ...
    _loggerClass = klass
This means that a global variable holds the logger class to be used in the future.
In addition to this, looking at logging.Manager (used by logging.getLogger), we can see this:
def getLogger(self, name):
    ...
    rv = (self.loggerClass or _loggerClass)(name)
That is, if self.loggerClass isn't set (which won't be unless you've explicitly set it), the class from the global variable is used.
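Since the manager consults self.loggerClass first, you can also set the class on the manager itself; the stdlib provides Manager.setLoggerClass for exactly this. A minimal sketch:

import logging

class MyLogger(logging.Logger):
    value = None

# Set the class on the manager rather than via the module-level function;
# Manager.setLoggerClass validates that klass derives from logging.Logger.
logging.Logger.manager.setLoggerClass(MyLogger)

print(isinstance(logging.getLogger("Name"), MyLogger))  # True
print(isinstance(logging.getLogger(), MyLogger))        # False: root already exists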
Hence, it's a feature. The root logger is always a logging.RootLogger object and the other logger objects are created based on the configuration at that time.
logging.getLogger() and logging.getLogger("") don't return a MyLogger because they return the root logger of the logging hierarchy, as described in the logging documentation:
logging.getLogger([name])
Return a logger with the specified name or, if no name is specified, return a logger which is the root logger of the hierarchy.
Thus, with the logger set up as you have it:
>>> logging.getLogger()
<logging.RootLogger object at 0x7d9450>
>>> logging.getLogger("foo")
<test3.MyLogger object at 0x76d9f0>
I don't think this is related to the KeyError you started your post with. You should post the code that caused that exception to be thrown (test.py).
Related
There was a weird issue happening yesterday. I got it fixed but still don't understand why things were happening the way they did.
So I have a class with an instance of a logger getting initialised in __init__:
class Foo:
    def __init__(self, some_value):
        self.logger = self._initialise_logger()
        self.logger.info(f'Initialising an instance of {self.__class__.__name__}')
        self.value = some_value

    def _initialise_logger(self):
        # Setting up logger
        return logger
The class is meant to perform some calculations and at one stage I had to run it in a loop:
my_list = []
for m in my_list:
    f = Foo(m)
    f.calculate()
When this loop was running I started getting strange messages in the output. On the first iteration the messages were normal, but on the second they were duplicated, on the next every logging message appeared three times, and so on.
I figured the instance of the class that spawned the logger might somehow be persisting and the logger kept printing messages, so I decided to manually delete the instance once the calculations were complete, expecting the issue to go away:
my_list = []
for m in my_list:
    f = Foo(m)
    f.calculate()
    del f
That didn't work. In the end I fixed it by initialising an instance only once and then changing the value of the instance variable inside the loop:
my_list = []
f = Foo()
for m in my_list:
    f.value = m
    f.calculate()
This fixed the problem, but I still don't understand: how can a logger persist even when the instance that spawned it has been deleted?
EDIT:
def _initialise_logger(self):
    log_file = self._get_logging_filename()
    logger = logging.getLogger(__name__ + "." + self.__class__.__name__)
    logger.propagate = False
    logger.setLevel(logging.DEBUG)
    file_handler = logging.FileHandler(log_file)
    file_handler.setLevel(logging.DEBUG)
    file_formatter = logging.Formatter(fmt='%(asctime)s.%(msecs)04d,%(name)s,%(levelname)s,%(message)s',
                                       datefmt='%Y-%m-%d %H:%M:%S')
    screen_handler = logging.StreamHandler()
    screen_handler.setLevel(logging.DEBUG)
    screen_formatter = logging.Formatter(fmt='%(asctime)s.%(msecs)02d,%(levelname)s: -> %(message)s',
                                         datefmt='%Y-%m-%d %H:%M:%S')
    screen_handler.setFormatter(screen_formatter)
    file_handler.setFormatter(file_formatter)
    logger.addHandler(file_handler)
    logger.addHandler(screen_handler)
    logger.info(f'\n\n\n\n\nInitiating a {self.__class__.__name__} instance.')
    return logger
There are a few things wrong with your code, but for starters, let's look at the example below to get a better understanding of the logging module:
import logging

def make_logger(name):
    logger = logging.getLogger(name)
    sh = logging.StreamHandler()
    sh.setFormatter(logging.Formatter(fmt="%(message)s"))
    logger.addHandler(sh)
    return logger

logger_A = make_logger("A")
logger_A.warning("hi")
>>> hi

logger_B = make_logger("B")
logger_B.warning("bye")
>>> bye

logger_C1 = make_logger("C")
logger_C2 = make_logger("C")
logger_C1.warning("bonjour")
>>> bonjour
>>> bonjour
Notice we only get repeats when we use the same name for multiple loggers. This is because there can only be one logger instance with a given name, so if you call getLogger(name) with a name that already exists, it will just return the existing logger object with that name. So, when we call our function make_logger twice with the same name, we're adding two different StreamHandler objects to the same logger, which is why we see the double logging.
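One common guard against this (a sketch, not the only possible fix) is to attach the handler only when the named logger doesn't have one yet:

import logging

def make_logger(name):
    logger = logging.getLogger(name)
    # getLogger returns the same object for the same name, so only
    # attach a handler the first time this logger is handed out.
    if not logger.handlers:
        sh = logging.StreamHandler()
        sh.setFormatter(logging.Formatter(fmt="%(message)s"))
        logger.addHandler(sh)
    return logger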
In your code, you construct the logger name using __name__ + "." + self.__class__.__name__. This produces the exact same string for every instance of your class. You could change that line to produce a unique string for every instance, but this isn't really how you should be using the logging module.
I highly recommend reviewing this article to learn more about logging in Python.
Why not just use one logger declared globally in your module/file? If you need to include identifying information for a specific instance, you can always include it in the log message itself:
import logging

# Set up your logger here, after your imports, before your code.
# A minimal sketch: basicConfig plus one module-level logger.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class Foo:
    def __init__(self, some_value):
        self.value = some_value
        logger.info(f'Initiating a {self.__class__.__name__} instance.')
I have the following code in my unittest TestCase:
class StatsdLogHandler(logging.Handler):
    def emit(self, record):
        log_entry = self.format(record)
        statsd_client.incr(log_entry)

def setup_statsd_logging(logger, level=logging.WARNING):
    # set up statsd counts for logging
    statsd_format = "pegasus.logger.%(levelname)s.%(name)s"
    formatter = logging.Formatter(statsd_format)
    statsd_handler = StatsdLogHandler()
    statsd_handler.setFormatter(formatter)
    statsd_handler.setLevel(level)
    logger.addHandler(statsd_handler)

class TestLogging(unittest.TestCase):
    @mock.patch('.statsd_client')
    def test_logging_to_statsd(self, statsd_mock):
        """Make sure log calls go to statsd"""
        root_logger = logging.getLogger()
        setup_statsd_logging(root_logger)
        root_logger.setLevel(logging.DEBUG)
        root_logger.warning('warning')
        statsd_mock.incr.assert_called_with('pegasus.logger.WARNING.root')
However, when I run the TestLogging test case, I get
AssertionError: Expected call: incr(u'pegasus.logger.WARNING.root')
Not called
I would expect this call to be successful. Why is this happening?
When I debugged this behavior, I found that root_logger.manager.disable was set to the value of 30, meaning that all logging with a level of WARNING or lower was not being printed.
The logging Manager object isn't documented anywhere that I could easily find, but it is the object that describes and stores all the loggers, including the root logger. It is defined in code here. What I found was that there were calls elsewhere in the test cases to logging.disable(logging.WARNING). This sets the value of root_logger.manager.disable to 30. The manager disable level trumps the effective level of the logger (see this code).
By adding the line logging.disable(logging.NOTSET) at the beginning of the test case, I ensured that the logging manager's disable property was reset to its default value of 0, which made my test cases pass.
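A minimal sketch of that fix, assuming the test case from above:

import logging
import unittest

class TestLogging(unittest.TestCase):
    def setUp(self):
        # Undo any logging.disable(...) left over from other test cases;
        # NOTSET (0) means no level is globally disabled.
        logging.disable(logging.NOTSET)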
How do I find out whether getLogger() returned a new or an existing logger object?
The motivation is that I don't want to addHandler repeatedly to the same logger.
There doesn't seem to be a particularly clean way to do this... However, if you must, the source code is a pretty good place to start looking in order to figure this out. Note that logging.getLogger is mostly a wrapper around logging.Logger.manager.getLogger.
The Manager keeps a mapping of names -> Logger (or PlaceHolder). If it has a Logger in the slot designated by a given name, it will return it. Otherwise, it returns a new Logger.
import logging

def has_logger(name):
    manager = logging.Logger.manager
    if name in manager.loggerDict:
        return isinstance(manager.loggerDict[name], logging.Logger)
    else:
        return False
Note that this only handles the case where you have named loggers. If you do logging.getLogger() (without passing a name), then it simply will return the root logger which is created at import time (and therefore, it is never new).
Another approach could be to get a logger and check whether its handlers list is empty (if it isn't, handlers have already been added).
def has_handlers(logger):
    """Return True if logger has handlers, False otherwise."""
    return bool(logger.handlers)
getLogger returns a singleton instance for a given name. To check that:
import logging

id_ = id(logging.getLogger())
for i in range(10):
    assert id_ == id(logging.getLogger())
For logging purposes I use a logger module that looks like this:
mylogger.py
import logging
import logging.config
from pathlib import Path

logging.config.fileConfig(str(Path(__file__).parent / "logging.conf"),
                          disable_existing_loggers=False)

def get(name="MYLOG", **kw):
    logger = logging.getLogger(name)
    logger.propagate = True
    if kw.get('level'):
        logger.setLevel(kw.get('level'))
    else:
        logger.setLevel(logging.ERROR)
    return logger
All handlers are defined in logging.conf.
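Callers then just import the module and ask for a logger, e.g. (a sketch, assuming the file above is saved as mylogger.py next to logging.conf):

import logging

import mylogger

# The level keyword is optional; without it the logger defaults to ERROR.
log = mylogger.get("MYLOG", level=logging.DEBUG)
log.error("something went wrong")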
For some reason, in a Python application I am trying to modify, the logger is not logging anything. I traced the error to logging/__init__.py
def handle(self, record):
    """
    Call the handlers for the specified record.

    This method is used for unpickled records received from a socket, as
    well as those created locally. Logger-level filtering is applied.
    """
    if (not self.disabled) and self.filter(record):
        self.callHandlers(record)
I am not sure why, but self.disabled is True. Nowhere in the application is this value set, and I don't think any of the packages changes it. The logger is instantiated as usual: logger = logging.getLogger(__name__). When I set logger.disabled = False before actually logging anything (before calling logger.info()), the logger prints the expected log text. But if I don't, handle() returns without logging anything.
Is there any way I can debug this? Perhaps one can change the Logger class so that some function is called whenever disabled gets written to...
If you need to trace what code might set Logger.disabled to True (it is 0, so false, by default), you can replace the attribute with a property:
import logging
import sys

@property
def disabled(self):
    try:
        return self._disabled
    except AttributeError:
        return False

@disabled.setter
def disabled(self, disabled):
    if disabled:
        frame = sys._getframe(1)
        print(
            f"{frame.f_code.co_filename}:{frame.f_lineno} "
            f"disabled the {self.name} logger"
        )
    self._disabled = disabled

logging.Logger.disabled = disabled
Demo from the interactive interpreter:
>>> import logging
>>> logging.getLogger('foo.bar').disabled = True
<stdin>:1 disabled the foo.bar logger
If you want to see the full stack, add from traceback import print_stack and call print_stack(frame) inside the if disabled: block.
I often found this problem when a configuration schema is used: by default disable_existing_loggers is True, so all loggers not included in that schema are disabled.
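A minimal sketch of that fix with dictConfig (the schema here is illustrative):

import logging.config

logging.config.dictConfig({
    "version": 1,
    # Keep loggers that were created before this call alive instead of
    # silently setting their .disabled flag to True (the default).
    "disable_existing_loggers": False,
    "root": {"level": "INFO"},
})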
BTW Martijn Pieters' answer is excellent and works in any situation where you're stuck.
I can create a named child logger, so that all the logs output by that logger are marked with its name. I can use that logger exclusively in my function/class/whatever.
However, if that code calls out to functions in another module that logs using just the module-level logging functions (which proxy to the root logger), how can I ensure that those log messages go through the same logger (or are at least logged in the same way)?
For example:
main.py
import logging
import other

def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    do_stuff(logger)
other.py
import logging

def do_more_stuff():
    logging.info("doing other stuff")
Outputs:
$ python main.py
INFO:stuff:doing stuff
INFO:root:doing other stuff
I want to be able to cause both log lines to be marked with the name 'stuff', and I want to be able to do this only changing main.py.
How can I cause the logging calls in other.py to use a different logger without changing that module?
This is the solution I've come up with:
Using thread-local data to store the contextual information, and using a Filter on the root logger's handlers to add this information to LogRecords before they are emitted.
import logging
import threading

context = threading.local()
context.name = None

class ContextFilter(logging.Filter):
    def filter(self, record):
        if context.name is not None:
            record.name = "%s.%s" % (context.name, record.name)
        return True
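For this to have any effect the filter must be attached to the root logger's handlers, e.g. (a sketch, assuming basicConfig has already installed a handler on the root logger):

import logging

logging.basicConfig(level=logging.INFO)

# Attach the filter to every handler on the root logger so that all
# records passing through them get the contextual name prefix.
for handler in logging.getLogger().handlers:
    handler.addFilter(ContextFilter())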
This is fine for me, because I'm using the logger name to indicate what task was being carried out when this message was logged.
I can then use context managers or decorators to make logging from a particular passage of code all appear as though it was logged from a particular child logger.
import contextlib
import functools

@contextlib.contextmanager
def logname(name):
    old_name = context.name
    if old_name is None:
        context.name = name
    else:
        context.name = "%s.%s" % (old_name, name)
    try:
        yield
    finally:
        context.name = old_name

def as_logname(name):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            with logname(name):
                return f(*args, **kwargs)
        return wrapper
    return decorator
So then, I can do:
with logname("stuff"):
logging.info("I'm doing stuff!")
do_more_stuff()
or:
#as_logname("things")
def do_things():
logging.info("Starting to do things")
do_more_stuff()
The key thing is that any logging done by do_more_stuff() appears as if it were logged by either a "stuff" or "things" child logger, without having to change do_more_stuff() at all.
This solution would have problems if you were going to have different handlers on different child loggers.
Use logging.setLoggerClass so that all loggers used by other modules use your logger subclass (emphasis mine):
Tells the logging system to use the class klass when instantiating a logger. The class should define __init__() such that only a name argument is required, and the __init__() should call Logger.__init__(). This function is typically called before any loggers are instantiated by applications which need to use custom logger behavior.
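A minimal sketch of that approach (the PrefixedLogger name and the message prefix are illustrative assumptions, not part of the question):

import logging

class PrefixedLogger(logging.Logger):
    """Illustrative subclass: tags every record with a fixed prefix."""

    def makeRecord(self, *args, **kwargs):
        record = super().makeRecord(*args, **kwargs)
        record.msg = f"[stuff] {record.msg}"
        return record

# Must run before any other module creates its loggers.
logging.setLoggerClass(PrefixedLogger)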
This is what logging.handlers (and the handlers in the logging module) are for. In addition to creating your logger, you create one or more handlers to send the logging information to various places and add them to the root logger. Most modules that do logging create a logger that they use for their own purposes but depend on the controlling script to create the handlers. Some frameworks decide to be super helpful and add handlers for you.
Read the logging docs, it's all there.
(edit)
logging.basicConfig() is a helper function that adds a single handler to the root logger. You can control the format string it uses with the 'format=' parameter. If all you want to do is have all modules display "stuff", then use logging.basicConfig(level=logging.INFO, format="%(levelname)s:stuff:%(message)s").
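For instance, a minimal sketch of that in main.py (reusing the other module from the question):

import logging

import other

# Hard-code the "stuff" tag into the root handler's format string so
# every module's output looks the same, including other.py's.
logging.basicConfig(level=logging.INFO, format="%(levelname)s:stuff:%(message)s")

logging.getLogger("stuff").info("doing stuff")  # INFO:stuff:doing stuff
other.do_more_stuff()                           # INFO:stuff:doing other stuff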
The logging.{info,warning,…} functions just call the respective methods on a Logger object called root (cf. the logging module source), so if you know the other module only calls the functions exported by the logging module, you can overwrite the logging module in other's namespace with your logger object:
import logging
import other

def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    # Overwrite other.logging with your just-generated logger object:
    other.logging = logger
    do_stuff(logger)