Python default logger disabled

For some reason, in a Python application I am trying to modify, the logger is not logging anything. I traced the error to logging/__init__.py
def handle(self, record):
    """
    Call the handlers for the specified record.
    This method is used for unpickled records received from a socket, as
    well as those created locally. Logger-level filtering is applied.
    """
    if (not self.disabled) and self.filter(record):
        self.callHandlers(record)
I am not sure why, but self.disabled is True. Nowhere in the application is this value set, and I don't think any of the packages are changing it. The logger is instantiated as usual: logger = logging.getLogger(__name__). When I set logger.disabled = False before actually logging anything (before calling logger.info()), the logger prints the expected log text. But if I don't, handle() returns without logging anything.
Is there any way I can debug this? Perhaps one can change the Logger class so that some function is called whenever disabled gets written to...

If you need to trace what code might set logger.disabled to True (it is 0, so false, by default), you can replace the attribute with a property:
import logging
import sys

@property
def disabled(self):
    try:
        return self._disabled
    except AttributeError:
        return False

@disabled.setter
def disabled(self, disabled):
    if disabled:
        frame = sys._getframe(1)
        print(
            f"{frame.f_code.co_filename}:{frame.f_lineno} "
            f"disabled the {self.name} logger"
        )
    self._disabled = disabled

logging.Logger.disabled = disabled
Demo from the interactive interpreter:
>>> import logging
>>> logging.getLogger('foo.bar').disabled = True
<stdin>:1 disabled the foo.bar logger
If you want to see the full stack, add from traceback import print_stack, and inside the if disabled: block, print_stack(frame).
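For instance, with that import in place, the setter body becomes (only the if block changes from the property above):
from traceback import print_stack

@disabled.setter
def disabled(self, disabled):
    if disabled:
        frame = sys._getframe(1)
        # print the full call stack that led to the assignment
        print_stack(frame)
    self._disabled = disabled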

I have often found this problem when a configuration schema is used: by default disable_existing_loggers is True, so all loggers not included in that schema will be disabled.
BTW, Martijn Pieters' answer is supreme and works in any situation where you're stuck.
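As a minimal sketch of the fix (the console handler layout here is made up), pass disable_existing_loggers=False in the schema so that loggers created before the call stay enabled:
import logging.config

logging.config.dictConfig({
    "version": 1,
    # keep loggers created before this call (e.g. module-level
    # logging.getLogger(__name__) loggers) enabled
    "disable_existing_loggers": False,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "root": {"handlers": ["console"], "level": "INFO"},
})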

Related

Why is the Python logger in unit testing not working after setLevel is called?

I have the following code in my unittest TestCase:
import logging
import unittest
from unittest import mock

class StatsdLogHandler(logging.Handler):
    def emit(self, record):
        log_entry = self.format(record)
        statsd_client.incr(log_entry)

def setup_statsd_logging(logger, level=logging.WARNING):
    # set up statsd counts for logging
    statsd_format = "pegasus.logger.%(levelname)s.%(name)s"
    formatter = logging.Formatter(statsd_format)
    statsd_handler = StatsdLogHandler()
    statsd_handler.setFormatter(formatter)
    statsd_handler.setLevel(level)
    logger.addHandler(statsd_handler)

class TestLogging(unittest.TestCase):
    @mock.patch('.statsd_client')
    def test_logging_to_statsd(self, statsd_mock):
        """Make sure log calls go to statsd"""
        root_logger = logging.getLogger()
        setup_statsd_logging(root_logger)
        root_logger.setLevel(logging.DEBUG)
        root_logger.warning('warning')
        statsd_mock.incr.assert_called_with('pegasus.logger.WARNING.root')
However, when I run the TestLogging test case, I get
AssertionError: Expected call: incr(u'pegasus.logger.WARNING.root')
Not called
I would expect this call to be successful. Why is this happening?
When I debugged this behavior, I found that root_logger.manager.disable was set to the value of 30, meaning that all logging with a level of WARNING or lower was not being printed.
The logging Manager object isn't documented anywhere that I could easily find, but it is the object that describes and stores all the loggers, including the root logger, and it is defined in the logging source. What I found was that there were calls elsewhere in the test cases to logging.disable(logging.WARNING). This sets the value of root_logger.manager.disable to 30. The manager's disable level trumps the effective level of the logger (see Logger.isEnabledFor in the source).
By adding the line logging.disable(logging.NOTSET) at the beginning of my test case, I ensured that the logging manager's disable attribute was reset to its default value of 0, which caused my test cases to pass.
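A quick sketch of that interaction (nothing here is specific to the test above):
import logging

logging.disable(logging.WARNING)   # manager.disable becomes 30
root = logging.getLogger()
print(root.manager.disable)        # 30
root.warning('suppressed')         # not handled: 30 >= WARNING
logging.disable(logging.NOTSET)    # reset manager.disable to 0
root.warning('emitted')            # handled normally again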

Python Logging writing to terminal and log file

I have two files: script.py and functions.py. In functions.py I have the logger set up, along with a set of functions (a made-up example below):
import logging
import os

class ecosystem():
    def __init__(self, environment, mode):
        self.logger = logging.getLogger(__name__)
        if os.path.exists('log.log'):
            os.remove('log.log')
        handler = logging.FileHandler('log.log')
        if mode.lower() == 'info':
            handler.setLevel(logging.INFO)
            self.logger.setLevel(logging.INFO)
        elif mode.lower() == 'warning':
            handler.setLevel(logging.WARNING)
            self.logger.setLevel(logging.WARNING)
        elif mode.lower() == 'error':
            handler.setLevel(logging.ERROR)
            self.logger.setLevel(logging.ERROR)
        elif mode.lower() == 'critical':
            handler.setLevel(logging.CRITICAL)
            self.logger.setLevel(logging.CRITICAL)
        else:
            handler.setLevel(logging.DEBUG)
            self.logger.setLevel(logging.DEBUG)
        # Logging file format
        formatter = logging.Formatter(' %(levelname)s | %(asctime)s | %(message)s \n')
        handler.setFormatter(formatter)
        # Add the handler to the logger
        self.logger.addHandler(handler)
        self.logger.info('Logging starts here')

    def my_function(self):
        self.logger.debug('test log')
        return True
I'm trying to call ecosystem.my_function from script.py, but when I do, the logger.debug message shows up in both the terminal window AND log.log. Any ideas why this might be happening?
If it helps, I also import other modules into functions.py; if those modules import logging as well, could that cause issues?
It looks like you're initializing the logger with the log.log file inside the __init__ method of the ecosystem class. This means that any code creating an ecosystem object will initialize the logger. Somewhere in your code, in one of the files, you are creating that object, and hence the logger is initialized and writes to the file.
Note that you do not need to call __init__ yourself, as it is called on object creation, i.e. after this call
my_obj = ecosystem()
log files will be written.
You're asking why both stderr and the file are used after your new file handler is attached. This is because of the propagate attribute. By default propagate is True, which means your log record will bubble up the hierarchy of loggers, and each one will continue handling it. Since the default root logger is at the top of the hierarchy, it will handle your log too. Setting propagate to False fixes this:
self.logger.propagate = False
You might want to read up a bit on logging. Also, if you want to keep your sanity regarding logging, check how you can use a dict to configure logging.
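For instance, a minimal dictConfig sketch along those lines (the handler and formatter names are made up), configuring the file and the console once instead of inside every object:
import logging
import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'default': {'format': ' %(levelname)s | %(asctime)s | %(message)s '},
    },
    'handlers': {
        'file': {'class': 'logging.FileHandler', 'filename': 'log.log',
                 'formatter': 'default'},
        'console': {'class': 'logging.StreamHandler', 'formatter': 'default'},
    },
    'root': {'handlers': ['file', 'console'], 'level': 'DEBUG'},
})

logging.getLogger(__name__).debug('goes to both log.log and the terminal')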

How to find out whether getLogger created a new object?

How do I find out whether getLogger() returned a new or an existing logger object?
The motivation is that I don't want to addHandler repeatedly to the same logger.
There doesn't seem to be a particularly clean way to do this... However, if you must, the source code is a pretty good place to start looking in order to figure it out. Note that logging.getLogger is mostly a wrapper around logging.Logger.manager.getLogger.
The Manager keeps a mapping of names to Logger (or PlaceHolder) objects. If it has a Logger in the slot designated by a given name, it will return it; otherwise it will create and return a new Logger.
import logging

def has_logger(name):
    manager = logging.Logger.manager
    if name in manager.loggerDict:
        return isinstance(manager.loggerDict[name], logging.Logger)
    else:
        return False
Note that this only handles the case where you have named loggers. If you do logging.getLogger() (without passing a name), then it will simply return the root logger, which is created at import time (and is therefore never new).
Another approach could be to get a logger and check that its handlers list is smaller than you'd expect (i.e. if it isn't an empty list, then handlers have been added):
def has_handlers(logger):
    """Return True if logger has handlers, False otherwise."""
    return bool(logger.handlers)
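That gives a simple guard against attaching the same handler twice (the logger name here is just an example):
import logging

logger = logging.getLogger('myapp')  # hypothetical name
if not has_handlers(logger):
    logger.addHandler(logging.StreamHandler())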
getLogger will return a singleton instance for a given named logger; to check that:
import logging

id_ = id(logging.getLogger())
for i in range(10):
    assert id_ == id(logging.getLogger())
For logging purposes I use a logger module that looks like this:
mylogger.py
import logging
import logging.config
from pathlib import Path

logging.config.fileConfig(str(Path(__file__).parent / "logging.conf"),
                          disable_existing_loggers=False)

def get(name="MYLOG", **kw):
    logger = logging.getLogger(name)
    logger.propagate = True
    if kw.get('level'):
        logger.setLevel(kw.get('level'))
    else:
        logger.setLevel(logging.ERROR)
    return logger
All handlers are defined in logging.conf.
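For completeness, a minimal logging.conf sketch that could back the module above (every section here is an assumption except the MYLOG name, which comes from the code):
[loggers]
keys=root,MYLOG

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=WARNING
handlers=console

[logger_MYLOG]
level=ERROR
handlers=console
qualname=MYLOG
propagate=1

[handler_console]
class=StreamHandler
level=DEBUG
formatter=simple
args=(sys.stderr,)

[formatter_simple]
format=%(levelname)s:%(name)s:%(message)s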

Python logging in multiple modules

I want to write a logger which I can use in multiple modules. I must be able to enable and disable it from one place. And it must be reusable.
Following is the scenario.
switch_module.py
class Brocade(object):
    def __init__(self, ip, username, password):
        ...

    def connect(self):
        ...

    def disconnect(self):
        ...

    def switch_show(self):
        ...
switch_module_library.py
import switch_module
class Keyword_Mapper(object):
    def __init__(self, keyword_to_execute):
        self._brocade_object = switch_module.Brocade(ip, username, password)
        ...

    def map_keyword_to_command(self):
        ...
application_gui.py
class GUI:
    # I can open a file containing keywords for the brocade switch
    # in this GUI in a tab and tree widget (it uses PyQt, which I don't know).
    # Each tab is a QThread and there could be multiple tabs.
    # Each tab is accompanied by an execute button.
    # On pressing execute it will read the string/keywords from the file,
    # create an object of the Keyword_Mapper class and call its
    # map_keyword_to_command method, execute the command on the brocade
    # switch and log the results. Currently I am logging the result
    # only from the Keyword_Mapper class.
The problem I have is how to write a logger which can be enabled and disabled at will, and which logs to one file as well as the console from all three modules.
I tried writing a global logger in __init__.py and then importing it in all three modules, giving it a common name so that they log to the same file, but then I ran into trouble since there is multi-threading; I later created a logger that logs to a file with the thread id in its name, so that I can have one log per thread.
What if I am required to log to different file rather than the same file?
I have gone through the Python logging documentation but am unable to get a clear picture of how to write a proper logging system that can be reused.
I have gone through this link
Is it better to use root logger or named logger in Python
but because the GUI was created by someone else using PyQt, and because of the multi-threading, I am unable to get my head around logging here.
In my project I only use the root logger (I don't have the time to create named loggers, even if it would be nice). So if you don't want to use a named logger, here is a quick solution:
I created a function to set up the logger quickly:
import logging

def initLogger(level=logging.DEBUG):
    if level == logging.DEBUG:
        # Display more stuff when in a debug mode
        logging.basicConfig(
            format='%(levelname)s-%(module)s:%(lineno)d-%(funcName)s: %(message)s',
            level=level)
    else:
        # Display less stuff for info mode
        logging.basicConfig(format='%(levelname)s: %(message)s', level=level)
I created a package for it so that I can import it anywhere.
Then, in my top level I have:
import logging
import LoggingTools

if __name__ == '__main__':
    # Configure logger
    LoggingTools.initLogger(logging.DEBUG)
    #LoggingTools.initLogger(logging.INFO)
Depending on whether I am debugging or not, I use the corresponding statement.
Then in every other file, I just use logging directly:
import logging

class MyClass():
    def __init__(self):
        logging.debug("Debug message")
        logging.info("Info message")
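If you also need the file output the question asks for, basicConfig has accepted a handlers list since Python 3.3, so one call can set up both destinations (the filename here is just an example):
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(levelname)s-%(module)s:%(lineno)d-%(funcName)s: %(message)s',
    handlers=[
        logging.StreamHandler(),         # console
        logging.FileHandler('app.log'),  # hypothetical log file
    ])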

Python Logging module: custom loggers

I was trying to create a custom attribute for logging (caller's class name, module name, etc.) and got stuck with a strange exception telling me that the LogRecord instance created in the process did not have the necessary attributes. After a bit of testing I ended up with this:
import logging

class MyLogger(logging.getLoggerClass()):
    value = None

logging.setLoggerClass(MyLogger)

loggers = [
    logging.getLogger(),
    logging.getLogger(""),
    logging.getLogger("Name")
]

for logger in loggers:
    print(isinstance(logger, MyLogger), hasattr(logger, "value"))
This seemingly correct piece of code yields:
False False
False False
True True
Bug or feature?
Looking at the source code we can see the following:
root = RootLogger(WARNING)

def getLogger(name=None):
    if name:
        return Logger.manager.getLogger(name)
    else:
        return root
That is, a root logger is created by default when the module is imported. Hence, every time you look up the root logger (by passing a falsy value such as the empty string), you're going to get a logging.RootLogger object, regardless of any call to logging.setLoggerClass.
Regarding the logger class being used we can see:
_loggerClass = None

def setLoggerClass(klass):
    ...
    _loggerClass = klass
This means that a global variable holds the logger class to be used in the future.
In addition to this, looking at logging.Manager (used by logging.getLogger), we can see this:
def getLogger(self, name):
    ...
    rv = (self.loggerClass or _loggerClass)(name)
That is, unless self.loggerClass is set (which it won't be unless you've explicitly set it), the class from the global variable is used.
Hence, it's a feature. The root logger is always a logging.RootLogger object and the other logger objects are created based on the configuration at that time.
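For illustration (a minimal sketch; the class and logger names are made up), the manager-level hook can also be set directly with Manager.setLoggerClass, and the root logger is unaffected either way:
import logging

class MyLogger(logging.Logger):
    value = None

# Manager.setLoggerClass is consulted before the module-level global
logging.Logger.manager.setLoggerClass(MyLogger)

print(type(logging.getLogger('child')))  # <class '__main__.MyLogger'>
print(type(logging.getLogger()))         # <class 'logging.RootLogger'>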
logging.getLogger() and logging.getLogger("") don't return a MyLogger because they return the root logger of the logging hierarchy, as described in the logging documentation:
logging.getLogger([name])
Return a logger with the specified name or, if no name is specified, return a logger which is the root logger of the hierarchy.
Thus, with your logger set up this way:
>>> logging.getLogger()
<logging.RootLogger object at 0x7d9450>
>>> logging.getLogger("foo")
<test3.MyLogger object at 0x76d9f0>
I don't think this is related to the KeyError you started your post with. You should post the code that caused that exception to be thrown (test.py).
