Subclassing logging.Logger to add own functionality - python

I'm writing code for a robotic system that needs to log to different places depending on the type of deployment, the point in time during startup, etc.
I'd like to have the option to create a basic logger, then add handlers when appropriate.
I have a basic function in place to create a StreamHandler:
def setup_logger() -> logging.Logger:
    """Setup logging.

    Returns logger object with (at least) 1 StreamHandler to stdout.

    Returns:
        logging.Logger: configured logger object
    """
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    stream_handler = logging.StreamHandler()  # handler to stdout
    stream_handler.setLevel(logging.ERROR)
    stream_handler.setFormatter(MilliSecondsFormatter(LOG_FMT))
    logger.addHandler(stream_handler)
    return logger
When the system has internet access, I'd like to add a mail handler (separate class, subclassed from logging.handlers.BufferingHandler).
(Example below with a simple rotating file handler to simplify)
def add_rotating_file(logger: logging.Logger) -> logging.Logger:
    rot_fil_handler = logging.handlers.RotatingFileHandler(LOGFILE,
                                                           maxBytes=LOGMAXBYTES,
                                                           backupCount=3)
    rot_fil_handler.setLevel(logging.DEBUG)
    rot_fil_handler.setFormatter(MilliSecondsFormatter(LOG_FMT))
    logger.addHandler(rot_fil_handler)
    return logger
Usage would be:
logger = setup_logger()
logger = add_rotating_file(logger)
This looks "wrong" to me. Giving the logger to the function as an argument and then returning it seems weird and I would think I would better create a class, subclassing logging.Logger.
So something like this:
class pLogger(logging.Logger):
    def __init__(self):
        super().__init__()
        self._basic_configuration()

    def _basic_configuration(self):
        self.setLevel(logging.DEBUG)
        stream_handler = logging.StreamHandler()  # handler to stdout
        stream_handler.setLevel(logging.ERROR)
        stream_handler.setFormatter(MilliSecondsFormatter(LOG_FMT))
        self.addHandler(stream_handler)

    def add_rotating_handler(self):
        rot_file_handler = logging.handlers.RotatingFileHandler(LOGFILE,
                                                                maxBytes=LOGMAXBYTES,
                                                                backupCount=3)
        self.addHandler(rot_file_handler)
However, super().__init__() needs the logger name as an argument and, as far as I know, the root logger should be obtained via logging.getLogger(), so without a name.
Another way would be to not subclass anything but create a self.logger attribute in my class, which seems wrong as well.
I found this Stack Exchange question which seems related, but I can't figure out how to interpret the answer.
What's the "correct" way to do this?

There's no particular reason I can see for returning the logger from add_rotating_file(), if that's what seems odd to you. And this (having handlers added based on conditions) doesn't seem like a reason to create a logger subclass. There are numerous ways you could arrange some basic handlers and some additional handlers based on other conditions, but it seems simplest to do something like this:
def setup_logger() -> logging.Logger:
    formatter = MilliSecondsFormatter(LOG_FMT)
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    handler = logging.StreamHandler(sys.stdout)  # default is stderr
    handler.setLevel(logging.ERROR)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    if internet_is_available:
        handler = MyCustomEmailHandler(...)  # with whatever params you need
        handler.setLevel(...)
        handler.setFormatter(...)  # a suitable formatter instance
        logger.addHandler(handler)
    if rotating_file_wanted:
        handler = RotatingFileHandler(LOGFILE,
                                      maxBytes=LOGMAXBYTES,
                                      backupCount=3)
        handler.setLevel(...)
        handler.setFormatter(...)  # a suitable formatter instance
        logger.addHandler(handler)
    # and so on for other handlers
    return logger  # and you don't even need to do this - you could pass the logger in instead
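As an aside, MilliSecondsFormatter isn't shown in the question, so here is a minimal stand-in (an assumption about its intent, not the asker's actual class): a Formatter whose timestamps use a '.' before the milliseconds.
import logging

class MilliSecondsFormatter(logging.Formatter):
    # Hypothetical stand-in: logging.Formatter already appends milliseconds
    # when datefmt is None; this just swaps the default ',' separator for '.'.
    default_msec_format = "%s.%03d"

LOG_FMT = "%(asctime)s %(levelname)-8s %(name)s: %(message)s"  # assumed format string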

Python: how to set the global log level properly?

I'm setting the log level based on a configuration. Currently I call Settings() from inside Logger, but I'd like to pass it in instead, or set it globally for all loggers.
I do not want to call getLogger(name, debug=Settings().isDebugMode()).
Any ideas? Thanks!
class Logger(logging.getLoggerClass()):
    def __init__(self, name):
        super().__init__(name)
        debug_mode = Settings().isDebugMode()
        if debug_mode:
            self.setLevel(level=logging.DEBUG)
        else:
            self.setLevel(level=logging.INFO)

def getLogger(name):
    logging.setLoggerClass(Logger)
    return logging.getLogger(name)
The usual way to achieve this is to set a level only on the root logger and keep all other loggers at NOTSET. This has the effect that every logger behaves as if it had the level set on root. You can read about the mechanics of how that works in the documentation of setLevel().
Here is what that would look like in code:
import logging
root = logging.getLogger()
root.setLevel(logging.DEBUG) # set this based on your Settings().isDebugMode()
logger = logging.getLogger('some_logger')
sh = logging.StreamHandler()
sh.setFormatter(logging.Formatter('%(name)s: %(message)s'))
logger.addHandler(sh)
logger.debug('this will print')
root.setLevel(logging.INFO) # change level of all loggers (global log level)
logger.debug('this will not print')
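Tying that to the question's Settings class (its API is assumed from the question) then only touches the root logger at startup:
root = logging.getLogger()
root.setLevel(logging.DEBUG if Settings().isDebugMode() else logging.INFO)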

Dynamically change level of python logging

In my project, I have set up multiple pipelines (~20). I want to implement logging for each of these pipelines and redirect each pipeline's logs to its own file.
I have created a class GenericLogger as below:
class GenericLogger(object):
    def __init__(self, pipeline):
        self.name = pipeline

    def get_logger(self):
        logger = logging.getLogger(self.name)
        log_file = "{0}.log".format(self.name)
        console_handler = logging.StreamHandler()
        file_handler = logging.handlers.RotatingFileHandler(log_file, maxBytes=LOGS_FILE_SIZE, backupCount=3)
        file_format = logging.Formatter('%(asctime)s: %(levelname)s: %(name)s: %(message)s', datefmt="%Y-%m-%d %H:%M:%S")
        console_format = logging.Formatter('%(asctime)s: %(levelname)s: %(name)s: %(message)s', datefmt="%Y-%m-%d %H:%M:%S")
        console_handler.setFormatter(console_format)
        file_handler.setFormatter(file_format)
        logger.addHandler(file_handler)
        logger.addHandler(console_handler)
        logger.setLevel(logging.INFO)
        return logger
I am importing this class in my pipeline, getting the logger, and using it as below:
logger_helper = GenericLogger('pipeline_name')
logger = logger_helper.get_logger()
logger.warning("Something happened")
Flow of pipeline:
Once triggered, the pipelines run continuously at an interval of T minutes. Currently, to avoid logger objects piling up after each complete execution, I am using logger.handlers = [] and then creating a new logger instance on the next iteration.
Questions:
1) How can I dynamically change the log level for each pipeline separately? If I am using logging.ini, is creating static handlers/formatters for each pipeline necessary, or is there something I can do dynamically? I don't know much about this.
2) Is the above implementation of the logger correct, or is creating a class for the logger something that should not be done?
To answer your two points:
You could have a mapping between pipeline name and level, and after creating the logger for a pipeline, you can set its level appropriately.
You don't need multiple console handlers, do you? I'd just create one console handler and attach it to the root logger. Likewise, you don't need multiple identical file and console formatters; since they all apparently use the same format string, one formatter instance is enough. And avoid creating a class like GenericLogger.
Thus, something like:
formatter = logging.Formatter(...)
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
logging.getLogger().addHandler(console_handler)

for name, level in name_to_level.items():
    logger = logging.getLogger(name)
    logger.setLevel(level)
    file_handler = logging.handlers.RotatingFileHandler(...)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
should do what you need.
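Filled in with concrete values (the pipeline names, file size, and levels below are invented for illustration), plus the runtime level change from point 1:
import logging
import logging.handlers

name_to_level = {"ingest": logging.INFO, "transform": logging.DEBUG}  # hypothetical mapping

formatter = logging.Formatter('%(asctime)s: %(levelname)s: %(name)s: %(message)s',
                              datefmt="%Y-%m-%d %H:%M:%S")
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
logging.getLogger().addHandler(console_handler)  # one console handler, on root

for name, level in name_to_level.items():
    logger = logging.getLogger(name)
    logger.setLevel(level)
    file_handler = logging.handlers.RotatingFileHandler(
        "{0}.log".format(name), maxBytes=1000000, backupCount=3)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)

# Changing one pipeline's level at runtime is then a single call - no handler rebuild needed:
logging.getLogger("ingest").setLevel(logging.WARNING)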

Two loggers for two separate python files

I have two files, entrypoint.py and op_helper.py, and I am trying to send each script's logs to a different log file (webhook.log & op.log). I set up my logger.py file with two logger classes.
import logging
from datetime import datetime
from logging.handlers import TimedRotatingFileHandler

class Logger:
    def create_timed_rotating_log(self, path):
        logger = logging.getLogger("Rotating Log")
        logger.setLevel(logging.INFO)
        handler = TimedRotatingFileHandler(path,
                                           when="d",
                                           interval=1,
                                           backupCount=7)
        formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        return logger

class WebhookLogger:
    def create_timed_rotating_log(self, path):
        logger = logging.getLogger("Rotating Log")
        logger.setLevel(logging.INFO)
        handler = TimedRotatingFileHandler(path,
                                           when="d",
                                           interval=1,
                                           backupCount=7)
        formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        return logger

today = datetime.today()
month = today.strftime("%B")

logger = Logger().create_timed_rotating_log(f'./{month + str(today.year)}Logger.log')
webhook_logger = WebhookLogger().create_timed_rotating_log(f'./{month + str(today.year)}WebhookLogger.log')
In my entrypoint.py script:
from logger import webhook_logger
webhook_logger.info("Something to log")
And in my op_helper.py script:
from logger import logger
logger.info("Something else to log")
But when I run the script, both log statements are logged to both log files.
2021-10-15 14:17:51 INFO Something to log
2021-10-15 14:17:51 INFO Something else to log
Can anyone explain to me what's going on here, and possibly, what I'm doing incorrectly?
Thank you in advance!
Here is an excerpt from the documentation for logging (the bold is mine):
logging.getLogger(name=None)
Return a logger with the specified name or, if name is None, return a logger which is the root logger of the hierarchy. If specified, the name is typically a dot-separated hierarchical name like ‘a’, ‘a.b’ or ‘a.b.c.d’. Choice of these names is entirely up to the developer who is using logging.
All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
...
The solution, therefore, is to assign a different name to your second logger.
EDIT:
Keep in mind, however, that calling getLogger either creates a new instance, if one under the given name doesn't exist, or returns the already existing instance. Every subsequent call therefore only modifies the existing logger. If your intention is to use your classes to create multiple instances of one logger type, that approach will not work. Right now both classes do exactly the same thing, so there's not really a need for two separate classes either. As you can see, logging doesn't lend itself well to an object-oriented approach, because the objects are instantiated elsewhere and accessed as "global" objects. But this is all just a side note.
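A minimal sketch of that fix (one helper, two distinct names; the names and paths here are illustrative):
import logging
from logging.handlers import TimedRotatingFileHandler

def make_timed_rotating_logger(name, path):
    logger = logging.getLogger(name)  # distinct name -> distinct logger instance
    logger.setLevel(logging.INFO)
    handler = TimedRotatingFileHandler(path, when="d", interval=1, backupCount=7)
    handler.setFormatter(logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                           datefmt='%Y-%m-%d %H:%M:%S'))
    logger.addHandler(handler)
    return logger

logger = make_timed_rotating_logger("op", "op.log")
webhook_logger = make_timed_rotating_logger("webhook", "webhook.log")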

How to have one logger in every test case?

I'd like to add additional logging to pytest framework test cases. Currently my idea is like this:
A Logger class with the following configuration (let's say it's the default):
import logging

class Logger:
    logger = logging.getLogger()
    handler = logging.StreamHandler()
    formatter = logging.Formatter(
        '%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
In conftest.py I am creating a fixture which is actually an instance of the Logger:
# conftest.py
import pytest

@pytest.fixture
def trace():
    trace = Logger()
    return trace
Then I am passing this trace fixture to every test where logging is needed.
trace.logger.info("Processing data")
value = input_data.data["value1"]
It does work, but I am not sure whether there is a better way to have one common logger for every test case. Currently I also need to pass this fixture to every test where I want to add traces.
If you want to check what's logged by your various tests, pytest comes with a batteries-included fixture named caplog. You don't need to build a custom handler as I did when testing with unittest.
https://docs.pytest.org/en/latest/logging.html
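A sketch of how caplog reads (the function under test is invented for the example):
import logging

def process_data():  # hypothetical code under test
    logging.getLogger(__name__).info("Processing data")

def test_process_data_logs(caplog):
    with caplog.at_level(logging.INFO):
        process_data()
    assert "Processing data" in caplog.text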
You are misunderstanding how logging.getLogger() works. Loggers are a kind of singleton...
logging.getLogger() returns the logger object if it has already been instantiated, and creates and returns a new one otherwise. If you want to get different loggers, you will need to give them names.
Ex:
logger = logging.getLogger("logger1")
You should take a look here; the documentation is really complete: https://docs.python.org/3/howto/logging-cookbook.html

Two writes using python logging

I have two files of classes with essentially the same logging setup:
"""code - of 1 class the parent with mods from Reut's answer"""
logger = None
def __init__(self, verboseLevel=4):
'''
Constructor
'''
loggingLevels={1: logging.DEBUG,
2: logging.INFO,
3: logging.WARNING,
4: logging.ERROR,
5: logging.CRITICAL}
#debug(), info(), warning(), error(), critical()
if not tdoa.logger:
tdoa.logger=logging.getLogger('TDOA')
if (verboseLevel in range(1,6)):
logging.basicConfig(format='%(message)s',level=loggingLevels[verboseLevel])
else:
logging.basicConfig(format='%(levelname)s:%(message)s',level=logging.DEBUG)
tdoa.logger.critical("Incorrect logging level specified!")
self.logger = tdoa.logger
self.logger.debug("TDOA calculator using Newton's method.")
self.verboseLevel = verboseLevel
"""code of second "subclass" (with Reut's changes) (who's function is printing twice):"""
def __init__(self, verboseLevel=1, numberOfBytes=2, filename='myfile.log', ipaddr='127.0.0.1',getelset= True):
#debug(), info(), warning(), error(), critical()
# go through all this to know that only one logger is instantiated per class
# Set debug level
# set up various handlers (remove Std_err one for deployment unless you want them going to screen
# create console handler with a higher log level
if not capture.logger:
capture.logger=logging.getLogger('SatGeo')
console = logging.StreamHandler()
if (verboseLevel in range(1,6)):
console.setLevel(self.loggingLevels[verboseLevel])
logging.basicConfig(format='%(message)s',level=self.loggingLevels[verboseLevel],
filename=filename,filemode='a') #format='%(levelname)s:%(message)s'
else:
logging.basicConfig(format='%(message)s',level=logging.DEBUG,
filename=filename,filemod='a')
console.setLevel(logging.DEBUG)
capture.logger.critical("Incorrect logging level specified!")
# create formatter and add it to the handlers
#formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
#console.setFormatter(formatter)
# add the handlers to logger
handlers=capture.logger.handlers
if (console not in handlers):
capture.logger.addHandler(console)
else:
capture.logger.critical("not adding handler")
self.logger=capture.logger
I have a function in the "called" class (satgeo) that 'writes' to the logger:
def printMyself(self, rowDict):
    ii = 1
    for res in rowDict:
        self.logger.critical('{0}************************************'.format(ii))
        ii += 1
        for key, value in res.items():
            self.logger.critical(' Name: {0}\t\t Value:{1}'.format(key, value))
When I call it by itself, I get one output per self.logger call; but when I call it from the tdoa class, it writes TWICE:
for example:
Name: actualLat Value:36.455444
Name: actualLat Value:36.455444
Any idea of how to fix this?
You are adding a handler to the parent class each time you construct a class instance with this line:
self.logger.addHandler(console)
So if you do something like:
for _ in range(x):
    SubClass1()
some_operation_with_logging()
you should see x messages, since you just added x handlers to the logger via x calls to the parent's __init__.
You don't want to do that; make sure you add a handler only once!
You can access a logger's list of handlers using: logger.handlers.
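For example, a simple guard (a sketch, not the asker's code) adds handlers only when the logger has none yet:
logger = logging.getLogger('TDOA')
if not logger.handlers:  # only configure on first construction
    logger.addHandler(logging.StreamHandler())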
Also, if you're using the same logger in both classes (named "TDOA") by using this line in both:
self.logger = logging.getLogger('TDOA')
Make sure you either synchronize the logger instantiation, or use separate loggers.
What I use:
Instead of having a private logger for each instance, you probably want a logger for all of them - or, to be more precise, for the class itself:
class ClassWithLogger(object):
    logger = None

    def __init__(self):
        if not ClassWithLogger.logger:
            ClassWithLogger.logger = logging.getLogger("ClassWithLogger")
            ClassWithLogger.logger.addHandler(logging.StreamHandler())
        # log using ClassWithLogger.logger ...

        # convenience:
        self.logger = ClassWithLogger.logger
And now you know logger is instantiated once per class (instead of once per instance), and all instances of a certain class use the same logger.
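A quick check with the class above:
a = ClassWithLogger()
b = ClassWithLogger()
print(len(ClassWithLogger.logger.handlers))  # 1 - repeated construction adds no extra handlers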
I have since found a few links suggesting that submodules should take a logger as an input during their __init__:
def __init__(self, pattern=None, action=None, logger=None):
    # Set up logging for the class
    self.log = logger or logging.getLogger(__name__)
    self.log.addHandler(logging.NullHandler())
Note: the NullHandler is added to avoid a warning if the user decides not to provide a logger.
Then, if you want to debug your submodule:
if __name__ == "__main__":
    log_level = logging.INFO
    log = logging.getLogger('cmdparser')
    log.setLevel(log_level)
    fh = logging.FileHandler('cmdparser.log')
    fh.setLevel(log_level)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(log_level)
    # create formatter and add it to the handlers
    # formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    log.addHandler(fh)
    log.addHandler(ch)

    <myfunction>(pattern, action, log)
Then provide the log to the module at instantiation.
I hope this helps.
