I am struggling with the Python logging library.
I want a custom logger with two handlers (one StreamHandler and one FileHandler), one of them with a customizable log level. I get the behaviour I want if I create everything in my main script, but everything fails if I define the logger as a class in a separate module.
I have the logger class in a custom module, let's say logUtils.py, as follows:
#imports...

class ColoredFormatter(logging.Formatter):
    #formatter definition

class MyLogger(logging.Logger):
    def __init__(self, name):
        logging.Logger.__init__(self, name, logging.DEBUG)
        color_formatter = ColoredFormatter()
        # this handler should always be at the lowest level
        fh = logging.FileHandler('test.log')
        fh.setLevel(logging.DEBUG)
        # this handler is at INFO level by default but can change (see next)
        ch = logging.StreamHandler()
        ch.setLevel(logging.INFO)
        # add the formatter to the handlers
        fh.setFormatter(color_formatter)
        ch.setFormatter(color_formatter)
        # add the handlers to the logger
        self.addHandler(fh)
        self.addHandler(ch)
and in my main script I have something like:

import logging
import argparse
import logUtils

parser = argparse.ArgumentParser(description="bla bla")
parser.add_argument("-v", dest='v', action='store_true', default=False, help="Verbose mode for debugging")
args = parser.parse_args()

logging.setLoggerClass(logUtils.MyLogger)
logger = logging.getLogger('fancy_name')
if args.v: logger.setLevel(logging.DEBUG)  # this is the line! Here I want to change the level of just the StreamHandler
The last line is the problematic one: it changes the logger's overall level, not just the StreamHandler's!
Suggestions?
Iterate over the logger's handlers to find the StreamHandler (assuming you will only have one of those) and change its level:

if args.v:
    for handler in logger.handlers:
        if isinstance(handler, logging.StreamHandler):
            handler.setLevel(logging.DEBUG)
            break
Related
I'm setting the log level based on a configuration. Currently I call Settings() from the inside of Logger, but I'd like to pass it instead or set it globally - for all loggers.
I do not want to call getLogger(name, debug=Settings().isDebugMode()).
Any ideas? Thanks!
class Logger(logging.getLoggerClass()):
    def __init__(self, name):
        super().__init__(name)
        debug_mode = Settings().isDebugMode()
        if debug_mode:
            self.setLevel(level=logging.DEBUG)
        else:
            self.setLevel(level=logging.INFO)

def getLogger(name):
    logging.setLoggerClass(Logger)
    return logging.getLogger(name)
The usual way to achieve this would be to only set a level on the root logger and keep all other loggers as NOTSET. This will have the effect that every logger works as if they had the level that is set on root. You can read about the mechanics of how that works in the documentation of setLevel().
Here is what that would look like in code:
import logging
root = logging.getLogger()
root.setLevel(logging.DEBUG) # set this based on your Settings().isDebugMode()
logger = logging.getLogger('some_logger')
sh = logging.StreamHandler()
sh.setFormatter(logging.Formatter('%(name)s: %(message)s'))
logger.addHandler(sh)
logger.debug('this will print')
root.setLevel(logging.INFO) # change level of all loggers (global log level)
logger.debug('this will not print')
In my project, I have set up multiple pipelines (~20). I want to implement logging for each of these pipelines and redirect each one to a different file.
I have created a class GenericLogger as below:
class GenericLogger(object):
    def __init__(self, pipeline):
        self.name = pipeline

    def get_logger(self):
        logger = logging.getLogger(self.name)
        log_file = "{0}.log".format(self.name)
        console_handler = logging.StreamHandler()
        file_handler = logging.handlers.RotatingFileHandler(log_file, maxBytes=LOGS_FILE_SIZE, backupCount=3)
        file_format = logging.Formatter('%(asctime)s: %(levelname)s: %(name)s: %(message)s', datefmt="%Y-%m-%d %H:%M:%S")
        console_format = logging.Formatter('%(asctime)s: %(levelname)s: %(name)s: %(message)s', datefmt="%Y-%m-%d %H:%M:%S")
        console_handler.setFormatter(console_format)
        file_handler.setFormatter(file_format)
        logger.addHandler(file_handler)
        logger.addHandler(console_handler)
        logger.setLevel(logging.INFO)
        return logger
I import this class in my pipeline, get the logger, and use it as below:

logger_helper = GenericLogger('pipeline_name')
logger = logger_helper.get_logger()
logger.warning("Something happened")
Flow of the pipelines:
Once triggered, they run continuously at an interval of T minutes. Currently, to avoid piling up handlers after each complete execution, I reset logger.handlers = [] and then create a new logger instance on the next iteration.
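One way to avoid both the pile-up and the reset is to make the getter idempotent: configure handlers only the first time a given name is seen. A minimal sketch (the function name and the console-only handler are my simplification, not the exact setup from the question):

```python
import logging

def get_pipeline_logger(name):
    """Return the logger for a pipeline, configuring it only once.

    Because logging.getLogger() returns the same object for the same
    name, checking logger.handlers prevents duplicate handlers when
    this is called again on the next iteration.
    """
    logger = logging.getLogger(name)
    if not logger.handlers:  # skip if already configured in this process
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            '%(asctime)s: %(levelname)s: %(name)s: %(message)s'))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

first = get_pipeline_logger('pipeline_x')
second = get_pipeline_logger('pipeline_x')  # same object, no extra handler
```

With this pattern there is nothing to clean up between iterations.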
Questions:
1) How can I dynamically change the log level for each pipeline separately? If I use logging.ini, is creating static handlers/formatters for each pipeline necessary, or is there something I can do dynamically? I don't know much about this.
2) Is the above implementation of the logger correct, or is creating a class for the logger something that should not be done?
To answer your two points:
You could have a mapping between pipeline name and level, and after creating the logger for a pipeline, you can set its level appropriately.
You don't need multiple console handlers, do you? I'd just create one console handler and attach it to the root logger. Likewise, you don't need to create multiple identical file and console formatters - just make one of each. In fact, since they are all apparently identical format strings, you just need one formatter instance. Avoid creating a class like GenericLogger.
Thus, something like:
formatter = logging.Formatter(...)
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
logging.getLogger().addHandler(console_handler)

for name, level in name_to_level.items():
    logger = logging.getLogger(name)
    logger.setLevel(level)
    file_handler = logging.handlers.RotatingFileHandler(...)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
should do what you need.
I'm writing code for a robotic system that needs to log to different places, depending on the type of deployment, the time during startup, etc.
I'd like to have an option to create a basic logger, then add handlers when appropriate.
I have a basic function in place to create a StreamHandler:
def setup_logger() -> logging.Logger:
    """Set up logging.

    Returns a logger object with (at least) one StreamHandler to stdout.

    Returns:
        logging.Logger: configured logger object
    """
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    stream_handler = logging.StreamHandler()  # handler to stdout
    stream_handler.setLevel(logging.ERROR)
    stream_handler.setFormatter(MilliSecondsFormatter(LOG_FMT))
    logger.addHandler(stream_handler)
    return logger
When the system has internet access, I'd like to add a mail handler (separate class, subclassed from logging.handlers.BufferingHandler).
(Example below with a simple rotating file handler to simplify)
def add_rotating_file(logger: logging.Logger) -> logging.Logger:
    rot_fil_handler = logging.handlers.RotatingFileHandler(LOGFILE,
                                                           maxBytes=LOGMAXBYTES,
                                                           backupCount=3)
    rot_fil_handler.setLevel(logging.DEBUG)
    rot_fil_handler.setFormatter(MilliSecondsFormatter(LOG_FMT))
    logger.addHandler(rot_fil_handler)
    return logger
Usage would be:
logger = setup_logger()
logger = add_rotating_file(logger)
This looks "wrong" to me. Passing the logger to the function as an argument and then returning it seems weird, and I would think it better to create a class subclassing logging.Logger.
So something like this:
class pLogger(logging.Logger):
    def __init__(self):
        super().__init__()
        self._basic_configuration()

    def _basic_configuration(self):
        self.setLevel(logging.DEBUG)
        stream_handler = logging.StreamHandler()  # handler to stdout
        stream_handler.setLevel(logging.ERROR)
        stream_handler.setFormatter(MilliSecondsFormatter(LOG_FMT))
        self.addHandler(stream_handler)

    def add_rotating_handler(self):
        rot_file_handler = logging.handlers.RotatingFileHandler(LOGFILE,
                                                                maxBytes=LOGMAXBYTES,
                                                                backupCount=3)
        self.addHandler(rot_file_handler)
However, the super().__init__() call needs the logger name as an argument and, as far as I know, the root logger should be created using logging.getLogger(), so without a name.
Another way would be to not subclass anything, but create a self.logger in my class, which seems wrong as well.
I found this stackexchange question which seems related but I can't figure out how to interpret the answer.
What's the "correct" way to do this?
There's no particular reason I can see for returning the logger from add_rotating_file(), if that's what seems odd to you. And this (having handlers added based on conditions) doesn't seem like a reason to create a logger subclass. There are numerous ways you could arrange some basic handlers and some additional handlers based on other conditions, but it seems simplest to do something like this:
def setup_logger() -> logging.Logger:
    formatter = MilliSecondsFormatter(LOG_FMT)
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    handler = logging.StreamHandler(sys.stdout)  # default is stderr
    handler.setLevel(logging.ERROR)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    if internet_is_available:
        handler = MyCustomEmailHandler(...)  # with whatever params you need
        handler.setLevel(...)
        handler.setFormatter(...)  # a suitable formatter instance
        logger.addHandler(handler)
    if rotating_file_wanted:
        handler = RotatingFileHandler(LOGFILE,
                                      maxBytes=LOGMAXBYTES,
                                      backupCount=3)
        handler.setLevel(...)
        handler.setFormatter(...)  # a suitable formatter instance
        logger.addHandler(handler)
    # and so on for other handlers
    return logger  # and you don't even need to do this - you could pass the logger in instead
I want to log to a single log file from the main module and all submodules.
The log messages sent from the main file, where I define the logger, work as expected. But the ones sent from a call to an imported function are missing.
It is working if I use logging.basicConfig as in Example 1 below.
But the second example which allows for more custom settings does not work.
Any ideas why?
# in the submodule I have this code
import logging
logger = logging.getLogger(__name__)
EXAMPLE 1 - Working
Here I create two handlers and just pass them to basicConfig:
# definition of the root logger in the main module
formatter = logging.Formatter(fmt="%(asctime)s %(name)s.%(levelname)s: %(message)s", datefmt="%Y.%m.%d %H:%M:%S")
handler = logging.FileHandler('logger.log')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
handler2 = logging.StreamHandler(stream=None)
handler2.setFormatter(formatter)
handler2.setLevel(logging.DEBUG)
logging.basicConfig(handlers=[handler, handler2], level=logging.DEBUG)
logger = logging.getLogger(__name__)
EXAMPLE 2 - Not working
Here I create two handlers and addHandler() them to the root logger:
# definition of the root logger in the main module
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
handler = logging.FileHandler('logger.log')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
#handler.setLevel(logging.ERROR)
logger.addHandler(handler)
handler = logging.StreamHandler(stream=None)
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
You need to configure the (one and only) root logger in the main module of your software. This is done by calling
logger = logging.getLogger() #without arguments
instead of
logger = logging.getLogger(__name__)
(Python doc on logging)
The second example creates a separate child logger with the name of your script.
If there are no handlers defined in the submodules, the log record is passed up to the root logger, which handles it.
A related question can be found here:
Python Logging - How to inherit root logger level & handler
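The propagation mechanics can be seen in a small standalone sketch (the module names are made up): a named logger with no handlers and no explicit level still delivers records through the root logger's handlers.

```python
import logging

# configure the root logger once, in the main module
root = logging.getLogger()  # no name -> the root logger
root.setLevel(logging.DEBUG)
root.addHandler(logging.StreamHandler())

# a submodule-style logger: no handlers, no explicit level
sub = logging.getLogger('mypackage.submodule')

print(sub.handlers)             # [] - nothing attached here
print(sub.getEffectiveLevel())  # 10 - DEBUG, inherited from root
sub.debug('handled by the root handler via propagation')
```

This is why Example 2 fails when the "root" logger is created with `logging.getLogger(__name__)`: the submodule loggers are not descendants of that named logger, so their records never reach its handlers.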
I have two files of classes with essentially the same logging setup:
"""code of the first class, the parent, with mods from Reut's answer"""
logger = None

def __init__(self, verboseLevel=4):
    '''
    Constructor
    '''
    loggingLevels = {1: logging.DEBUG,
                     2: logging.INFO,
                     3: logging.WARNING,
                     4: logging.ERROR,
                     5: logging.CRITICAL}
    # debug(), info(), warning(), error(), critical()
    if not tdoa.logger:
        tdoa.logger = logging.getLogger('TDOA')
    if verboseLevel in range(1, 6):
        logging.basicConfig(format='%(message)s', level=loggingLevels[verboseLevel])
    else:
        logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)
        tdoa.logger.critical("Incorrect logging level specified!")
    self.logger = tdoa.logger
    self.logger.debug("TDOA calculator using Newton's method.")
    self.verboseLevel = verboseLevel
"""code of the second "subclass" (with Reut's changes) (whose function is printing twice):"""

def __init__(self, verboseLevel=1, numberOfBytes=2, filename='myfile.log', ipaddr='127.0.0.1', getelset=True):
    # debug(), info(), warning(), error(), critical()
    # go through all this so that only one logger is instantiated per class
    # set debug level
    # set up various handlers (remove the std_err one for deployment unless you want output on screen)
    # create console handler with a higher log level
    if not capture.logger:
        capture.logger = logging.getLogger('SatGeo')
    console = logging.StreamHandler()
    if verboseLevel in range(1, 6):
        console.setLevel(self.loggingLevels[verboseLevel])
        logging.basicConfig(format='%(message)s', level=self.loggingLevels[verboseLevel],
                            filename=filename, filemode='a')  # format='%(levelname)s:%(message)s'
    else:
        logging.basicConfig(format='%(message)s', level=logging.DEBUG,
                            filename=filename, filemode='a')
        console.setLevel(logging.DEBUG)
        capture.logger.critical("Incorrect logging level specified!")
    # create formatter and add it to the handlers
    #formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    #console.setFormatter(formatter)
    # add the handlers to the logger
    handlers = capture.logger.handlers
    if console not in handlers:
        capture.logger.addHandler(console)
    else:
        capture.logger.critical("not adding handler")
    self.logger = capture.logger
I have a function in the called class (satgeo) that writes to the logger:
def printMyself(self, rowDict):
    ii = 1
    for res in rowDict:
        self.logger.critical('{0}************************************'.format(ii))
        ii += 1
        for key, value in res.items():
            self.logger.critical(' Name: {0}\t\t Value:{1}'.format(key, value))
When I call it by itself I get one output per self.logger call, but when I call it from the tdoa class it writes TWICE:
for example:
Name: actualLat Value:36.455444
Name: actualLat Value:36.455444
Any idea of how to fix this?
You are adding a handler to the parent class each time you construct a class instance using this line:
self.logger.addHandler(console)
So if you do something like:
for _ in range(x):
    SubClass1()
    some_operation_with_logging()
You should be seeing x messages, since you just added x handlers to the logger by doing x calls to parent's __init__.
You don't want to be doing that, make sure you add a handler only once!
You can access a logger's list of handlers using: logger.handlers.
Also, if you're using the same logger in both classes (named "TDOA") by using this line in both:
self.logger=logging.getLogger('TDOA')
Make sure you either synchronize the logger instantiation, or use separate loggers.
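The registry behaviour behind that advice can be checked in a couple of lines: getLogger() hands out one object per name, so two classes share handlers and level exactly when they share a name.

```python
import logging

# same name -> the very same logger object (shared handlers and level)
assert logging.getLogger('TDOA') is logging.getLogger('TDOA')

# different names -> independent loggers
assert logging.getLogger('TDOA') is not logging.getLogger('SatGeo')
```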
What I use:
Instead of having a private logger for each instance, you probably want a logger for all of them, or to be more precise - for the class itself:
class ClassWithLogger(object):
    logger = None

    def __init__(self):
        if not ClassWithLogger.logger:
            ClassWithLogger.logger = logging.getLogger("ClassWithLogger")
            ClassWithLogger.logger.addHandler(logging.StreamHandler())
        # log using ClassWithLogger.logger ...
        # convenience:
        self.logger = ClassWithLogger.logger
And now you know logger is instantiated once per class (instead of once per instance), and all instances of a certain class use the same logger.
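A quick standalone check of that once-per-class behaviour (restating the class so the snippet runs on its own):

```python
import logging

class ClassWithLogger(object):
    logger = None  # shared by all instances of this class

    def __init__(self):
        if not ClassWithLogger.logger:
            ClassWithLogger.logger = logging.getLogger("ClassWithLogger")
            ClassWithLogger.logger.addHandler(logging.StreamHandler())
        self.logger = ClassWithLogger.logger

a = ClassWithLogger()
b = ClassWithLogger()
# both instances share one logger, and the handler was added only once,
# so a log call from either instance prints a single line
```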
I have since found a few links suggesting that submodules should take a logger as an input in their __init__:
def __init__(self, pattern=None, action=None, logger=None):
    # Set up logging for the class
    self.log = logger or logging.getLogger(__name__)
    self.log.addHandler(logging.NullHandler())
Note: the NullHandler is added to avoid a warning if the user decides not to provide a logger.
Then, if you want to debug your submodule:
if __name__ == "__main__":
    log_level = logging.INFO
    log = logging.getLogger('cmdparser')
    log.setLevel(log_level)
    fh = logging.FileHandler('cmdparser.log')
    fh.setLevel(log_level)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(log_level)
    # create formatter and add it to the handlers
    # formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    log.addHandler(fh)
    log.addHandler(ch)

    <myfunction>(pattern, action, log)
Then provide the log to the module at instantiation.
I hope this helps.