I have two files, entrypoint.py and op_helper.py, and I am trying to send each script's logs to a different log file (webhook.log & op.log). I set up my logger.py file with two different logger classes.
import logging
from datetime import datetime
from logging.handlers import TimedRotatingFileHandler

class Logger:
    def create_timed_rotating_log(self, path):
        logger = logging.getLogger("Rotating Log")
        logger.setLevel(logging.INFO)
        handler = TimedRotatingFileHandler(path,
                                           when="d",
                                           interval=1,
                                           backupCount=7)
        formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        return logger

class WebhookLogger:
    def create_timed_rotating_log(self, path):
        logger = logging.getLogger("Rotating Log")
        logger.setLevel(logging.INFO)
        handler = TimedRotatingFileHandler(path,
                                           when="d",
                                           interval=1,
                                           backupCount=7)
        formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        return logger

today = datetime.today()
month = today.strftime("%B")
logger = Logger().create_timed_rotating_log(f'./{month + str(today.year)}Logger.log')
webhook_logger = WebhookLogger().create_timed_rotating_log(f'./{month + str(today.year)}WebhookLogger.log')
In my entrypoint.py script:
from logger import webhook_logger
webhook_logger.info("Something to log")
And in my op_helper.py script:
from logger import logger
logger.info("Something else to log")
But when I run the script, both log statements are logged to both log files.
2021-10-15 14:17:51 INFO Something to log
2021-10-15 14:17:51 INFO Something else to log
Can anyone explain to me what's going on here, and possibly, what I'm doing incorrectly?
Thank you in advance!
Here is an excerpt from the documentation for logging (emphasis mine):
logging.getLogger(name=None)
Return a logger with the specified name or, if name is None, return a logger which is the root logger of the hierarchy. If specified, the name is typically a dot-separated hierarchical name like ‘a’, ‘a.b’ or ‘a.b.c.d’. Choice of these names is entirely up to the developer who is using logging.
All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
...
The solution, therefore, is to assign a different name to your second logger.
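For example, a minimal sketch of the fix (the names "OpLog" and "WebhookLog", and collapsing the two identical classes into one helper, are illustrative choices, not the only ones):

import logging
from logging.handlers import TimedRotatingFileHandler

def create_timed_rotating_log(name, path):
    # each distinct name yields a distinct logger with its own handlers
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = TimedRotatingFileHandler(path, when="d", interval=1, backupCount=7)
    handler.setFormatter(logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                           datefmt='%Y-%m-%d %H:%M:%S'))
    logger.addHandler(handler)
    return logger

logger = create_timed_rotating_log("OpLog", './op.log')
webhook_logger = create_timed_rotating_log("WebhookLog", './webhook.log')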
EDIT:
Keep in mind, however, that calling getLogger either creates a new instance (if none exists under the given name) or returns the already existing one, so every subsequent call only modifies that existing logger. If your intention is to use your classes to create multiple instances of one logger type, that approach will not work. Right now both classes do exactly the same thing, so there's not really a need for two separate classes either. As you can see, logging doesn't lend itself well to an object-oriented approach, because the logger objects are already instantiated elsewhere and accessed as "global" objects. But this is all just a side note.
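You can verify this same-instance behaviour directly:

import logging

a = logging.getLogger("Rotating Log")
b = logging.getLogger("Rotating Log")
print(a is b)  # True -- both of your classes were configuring this one object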
Related
In my project, I have set up multiple pipelines (~20). I want to implement logging for each of these pipelines and redirect each pipeline's logs to its own file.
I have created a class GenericLogger as below:
import logging
import logging.handlers

class GenericLogger(object):
    def __init__(self, pipeline):
        self.name = pipeline

    def get_logger(self):
        logger = logging.getLogger(self.name)
        log_file = "{0}.log".format(self.name)
        console_handler = logging.StreamHandler()
        file_handler = logging.handlers.RotatingFileHandler(log_file, maxBytes=LOGS_FILE_SIZE, backupCount=3)
        file_format = logging.Formatter('%(asctime)s: %(levelname)s: %(name)s: %(message)s', datefmt="%Y-%m-%d %H:%M:%S")
        console_format = logging.Formatter('%(asctime)s: %(levelname)s: %(name)s: %(message)s', datefmt="%Y-%m-%d %H:%M:%S")
        console_handler.setFormatter(console_format)
        file_handler.setFormatter(file_format)
        logger.addHandler(file_handler)
        logger.addHandler(console_handler)
        logger.setLevel(logging.INFO)
        return logger
I am importing this class in my pipeline, getting the logger, and using it as below:

logger_helper = GenericLogger('pipeline_name')
logger = logger_helper.get_logger()
logger.warning("Something happened")
Flow of the pipelines: once triggered, they run continuously at an interval of T minutes. Currently, to avoid handlers piling up on the logger after each complete execution, I am using logger.handlers = [] and then creating the logger again on the next iteration.
Questions:
1) How can I dynamically change the log level for each pipeline separately? If I use logging.ini, is it necessary to create static handlers/formatters for each pipeline, or is there something I can do dynamically? I don't know much about this.
2) Is the above implementation of the logger correct, or is creating a class for the logger something that should not be done?
To answer your two points:
You could have a mapping between pipeline name and level, and after creating the logger for a pipeline, you can set its level appropriately.
You don't need multiple console handlers, do you? I'd just create one console handler and attach it to the root logger. Likewise, you don't need to create multiple identical file and console formatters - just make one of each. In fact, since they are all apparently identical format strings, you just need one formatter instance. Avoid creating a class like GenericLogger.
Thus, something like:
formatter = logging.Formatter(...)
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
logging.getLogger().addHandler(console_handler)

for name, level in name_to_level.items():
    logger = logging.getLogger(name)
    logger.setLevel(level)
    file_handler = logging.handlers.RotatingFileHandler(...)
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
should do what you need.
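(Here name_to_level stands for whatever pipeline-name-to-level mapping you maintain; the names below are hypothetical:)

name_to_level = {
    'pipeline_01': logging.DEBUG,  # hypothetical pipeline names
    'pipeline_02': logging.INFO,
}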
I'm writing code for a robotic system that needs to log to different places, depending on type of deployment/time during startup/...
I'd like to have an option to create a basic logger, then add handlers when appropriate.
I have a basic function in place to create a streamhandler:
def setup_logger() -> logging.Logger:
    """Setup logging.

    Returns logger object with (at least) 1 streamhandler to stdout.

    Returns:
        logging.Logger: configured logger object
    """
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    stream_handler = logging.StreamHandler()  # handler to stdout
    stream_handler.setLevel(logging.ERROR)
    stream_handler.setFormatter(MilliSecondsFormatter(LOG_FMT))
    logger.addHandler(stream_handler)
    return logger
When the system has internet access, I'd like to add a mail handler (separate class, subclassed from logging.handlers.BufferingHandler).
(Example below with a simple rotating file handler to simplify)
def add_rotating_file(logger: logging.Logger) -> logging.Logger:
    rot_fil_handler = logging.handlers.RotatingFileHandler(LOGFILE,
                                                           maxBytes=LOGMAXBYTES,
                                                           backupCount=3)
    rot_fil_handler.setLevel(logging.DEBUG)
    rot_fil_handler.setFormatter(MilliSecondsFormatter(LOG_FMT))
    logger.addHandler(rot_fil_handler)
    return logger
Usage would be:
logger = setup_logger()
logger = add_rotating_file(logger)
This looks "wrong" to me. Passing the logger to the function as an argument and then returning it seems weird, and I would think I'd be better off creating a class, subclassing logging.Logger.
So something like this:
class pLogger(logging.Logger):
    def __init__(self):
        super().__init__()
        self._basic_configuration()

    def _basic_configuration(self):
        self.setLevel(logging.DEBUG)
        stream_handler = logging.StreamHandler()  # handler to stdout
        stream_handler.setLevel(logging.ERROR)
        stream_handler.setFormatter(MilliSecondsFormatter(LOG_FMT))
        self.addHandler(stream_handler)

    def add_rotating_handler(self):
        rot_file_handler = logging.handlers.RotatingFileHandler(LOGFILE,
                                                                maxBytes=LOGMAXBYTES,
                                                                backupCount=3)
        self.addHandler(rot_file_handler)
However, super().__init__() needs the logger name as an argument, and, as far as I know, the root logger should be obtained via logging.getLogger(), i.e. without a name.
Another way would be to not subclass anything, but create a self.logger in my class, which seems wrong as well.
I found this stackexchange question which seems related but I can't figure out how to interpret the answer.
What's the "correct" way to do this?
There's no particular reason I can see for returning the logger from add_rotating_file(), if that's what seems odd to you. And this (having handlers added based on conditions) doesn't seem like a reason to create a logger subclass. There are numerous ways you could arrange some basic handlers and some additional handlers based on other conditions, but it seems simplest to do something like this:
def setup_logger() -> logging.Logger:
    formatter = MilliSecondsFormatter(LOG_FMT)
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    handler = logging.StreamHandler(sys.stdout)  # default is stderr
    handler.setLevel(logging.ERROR)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    if internet_is_available:
        handler = MyCustomEmailHandler(...)  # with whatever params you need
        handler.setLevel(...)
        handler.setFormatter(...)  # a suitable formatter instance
        logger.addHandler(handler)
    if rotating_file_wanted:
        handler = RotatingFileHandler(LOGFILE,
                                      maxBytes=LOGMAXBYTES,
                                      backupCount=3)
        handler.setLevel(...)
        handler.setFormatter(...)  # a suitable formatter instance
        logger.addHandler(handler)
    # and so on for other handlers
    return logger  # and you don't even need to do this - you could pass the logger in instead
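For completeness, a buffering mail handler of the kind the question mentions might be sketched like this (the SMTP and address details are assumptions, not part of the question):

import logging.handlers
import smtplib
from email.message import EmailMessage

class BufferingEmailHandler(logging.handlers.BufferingHandler):
    """Collect records and send them as one mail when the buffer fills."""
    def __init__(self, capacity, mailhost, fromaddr, toaddr, subject):
        super().__init__(capacity)
        self.mailhost = mailhost
        self.fromaddr = fromaddr
        self.toaddr = toaddr
        self.subject = subject

    def flush(self):
        # called when the buffer reaches capacity (see BufferingHandler.shouldFlush)
        if self.buffer:
            msg = EmailMessage()
            msg['From'] = self.fromaddr
            msg['To'] = self.toaddr
            msg['Subject'] = self.subject
            msg.set_content('\n'.join(self.format(record) for record in self.buffer))
            with smtplib.SMTP(self.mailhost) as smtp:
                smtp.send_message(msg)
            self.buffer = []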
I am having issues getting access, from another module, to the logger created in the main program.
For example:
In package "common" I have a module "util01.py" with a function get_logger:
util01.py

import logging

def get_logger(file_name, logger_level):
    # get logger
    logger = logging.getLogger(__name__)
    # set desired level
    logger.setLevel(logger_level)
    # get needed formatter
    formatter = logging.Formatter('%(asctime)s %(module)s %(lineno)d %(levelname)s %(message)s')
    # get the log file handler
    fh = logging.FileHandler(file_name, mode='w')
    # apply formatter to log file handler
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    return logger
In main.py, I create that logger:

import logging
import OSLCHelper
from common import util01

my_logger = util01.get_logger('c:\\temp\\test1.log', logging.INFO)
In main.py, my_logger has proper visibility. From main.py, I want to execute a function from another module, e.g. a function from OSLCHelper.py:

return OSLCHelper.get_something(var1)
Now, I have another module, OSLCHelper.py, with the following code:

import logging
from common import util01

def get_something(var1):
    my_logging.info("i am in getsomething method")  # my_logging does not exist
Unfortunately, I don't have access to the "my_logger" variable, and it does not log any statements to the test1.log file.
How do I get access to "my_logger" from different modules? Are there any best practices?
Please help; I tried the above and it did not work.
From logging.getLogger():
All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
So one solution would be to replace
logger = logging.getLogger(__name__)
with
logger = logging.getLogger("OLSC")
or any other string that makes sense, I'm guessing.
Then you can always "ask" the logging module for the logger associated with that name, from any module:
import logging
logger = logging.getLogger("OLSC")
I am having some difficulties using Python's logging module. I have two files, main.py and mymodule.py. Generally main.py is run, and it will import mymodule.py and use some functions from there. But sometimes I will run mymodule.py directly.
I tried to make it so that logging is configured in only 1 location, but something seems wrong.
Here is the code.
# main.py
import logging
import mymodule

logger = logging.getLogger(__name__)

def setup_logging():
    # only configure logger if script is main module
    # configuring logger in multiple places is bad
    # only top-level module should configure logger
    if not len(logger.handlers):
        logger.setLevel(logging.DEBUG)
        # create console handler with a higher log level
        ch = logging.StreamHandler()
        ch.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(levelname)s: %(asctime)s %(funcName)s(%(lineno)d) -- %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
        ch.setFormatter(formatter)
        logger.addHandler(ch)

if __name__ == '__main__':
    setup_logging()
    logger.info('calling mymodule.myfunc()')
    mymodule.myfunc()
and the imported module:
# mymodule.py
import logging

logger = logging.getLogger(__name__)

def myfunc():
    msg = 'myfunc is being called'
    logger.info(msg)
    print('done with myfunc')

if __name__ == '__main__':
    # only configure logger if script is main module
    # configuring logger in multiple places is bad
    # only top-level module should configure logger
    if not len(logger.handlers):
        logger.setLevel(logging.DEBUG)
        # create console handler with a higher log level
        ch = logging.StreamHandler()
        ch.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(levelname)s: %(asctime)s %(funcName)s(%(lineno)d) -- %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
        ch.setFormatter(formatter)
        logger.addHandler(ch)
    logger.info('myfunc was executed directly')
    myfunc()
When I run the code, I see this output:
$>python main.py
INFO: 2016-07-14 18:13:04 <module>(22) -- calling mymodule.myfunc()
done with myfunc
But I expect to see this:
$>python main.py
INFO: 2016-07-14 18:13:04 <module>(22) -- calling mymodule.myfunc()
INFO: 2016-07-14 18:15:09 myfunc(8) -- myfunc is being called
done with myfunc
Anybody have any idea why the second logger.info call doesn't print to the screen? Thanks in advance!
Loggers exist in a hierarchy, with a root logger (retrieved with logging.getLogger(), no arguments) at the top. Each logger inherits configuration from its parent, with any configuration on the logger itself overriding the inherited configuration. In this case, you are never configuring the root logger, only the module-specific logger in main.py. As a result, the module-specific logger in mymodule.py is never configured.
The simplest fix is probably to use logging.basicConfig in main.py to set options you want shared by all loggers.
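For example, a minimal sketch reusing the question's format string:

# main.py
import logging
import mymodule

logger = logging.getLogger(__name__)

if __name__ == '__main__':
    # configures the root logger; both __main__ and mymodule inherit from it
    logging.basicConfig(
        level=logging.DEBUG,
        format='%(levelname)s: %(asctime)s %(funcName)s(%(lineno)d) -- %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S')
    logger.info('calling mymodule.myfunc()')
    mymodule.myfunc()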
Chepner is correct. I got absorbed into this problem. The problem is simply in your main script:

16 log = logging.getLogger()  # use this form to initialize the root logger
17 #log = logging.getLogger(__name__)  # never use this one

If you use line 17, then your imported Python modules will not log any messages.

In your submodule.py:
import logging
logger = logging.getLogger()
logger.debug("You will not see this message if you use line 17 in main")
Hope this posting can help someone who got stuck on this problem.
While the logging package is conceptually arranged in a namespace hierarchy using dots as separators, all loggers implicitly inherit from the root logger (like every class in Python 3 silently inherits from object). Each logger passes log messages on to its parent.
In your case, your loggers are incorrectly chained. Try adding print(logger.name) in both modules and you'll realize that your instantiation of logger in main.py is equivalent to
logger = logging.getLogger('__main__')
while in mymodule.py, you effectively produce
logger = logging.getLogger('mymodule')
The call to log an INFO message from myfunc() goes directly to the root logger (as the logger in main.py is not among its ancestors), which has no handler set up, so only the default last-resort dispatch could be triggered (see the logging documentation on logging.lastResort).
I want to use the logging module instead of printing for debug information and documentation.
The goal is to print on the console with DEBUG level and log to a file with INFO level.
I read through a lot of documentation, the cookbook, and other tutorials on the logging module, but couldn't figure out how to use it the way I want. (I'm on Python 2.5.)
I want the names of the modules in which the log calls occur to appear in my logfile.
The documentation says I should use logger = logging.getLogger(__name__), but how do I declare the loggers used in classes in other modules/packages so that they use the same handlers as the main logger? To recognize the 'parent' I can use logger = logging.getLogger(parent.child), but how do I know who called the class/method?
The example below shows my problem: if I run this, the output will only contain the __main__ logs and ignore the logs in Class.
This is my Mainfile:
# main.py
import logging
from module import Class

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# create file handler which logs info messages
fh = logging.FileHandler('foo.log', 'w', 'utf-8')
fh.setLevel(logging.INFO)
# create console handler with a debug log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# creating a formatter
formatter = logging.Formatter('- %(name)s - %(levelname)-8s: %(message)s')
# setting handler format
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)

if __name__ == '__main__':
    logger.info('Script starts')
    logger.info('calling class Class')
    c = Class()
    logger.info('calling c.do_something()')
    c.do_something()
    logger.info('calling c.try_something()')
    c.try_something()
Module:
# module.py
import logging

class Class:
    def __init__(self):
        self.logger = logging.getLogger(__name__)  # What do I have to enter here?
        self.logger.info('creating an instance of Class')
        self.dict = {'a': 'A'}

    def do_something(self):
        self.logger.debug('doing something')
        a = 1 + 1
        self.logger.debug('done doing something')

    def try_something(self):
        try:
            logging.debug(self.dict['b'])
        except KeyError, e:
            logging.exception(e)
Output in console:
- __main__ - INFO : Script starts
- __main__ - INFO : calling class Class
- __main__ - INFO : calling c.do_something()
- __main__ - INFO : calling c.try_something()
No handlers could be found for logger "module"
Besides: is there a way to get the names of the modules where the logs occurred into my logfile, without declaring a new logger in each class like above? Also, this way I have to call self.logger.info() each time I want to log something; I would prefer to use logging.info() or logger.info() throughout my code.
Is a global logger perhaps the right answer for this? But then I won't get the modules where the errors occur in the logs...
And my last question: Is this pythonic? Or is there a better recommendation to do such things right.
In your main module, you're configuring the logger of name '__main__' (or whatever __name__ equates to in your case) while in module.py you're using a different logger. You either need to configure loggers per module, or you can configure the root logger (by configuring logging.getLogger()) in your main module which will apply by default to all loggers in your project.
I recommend using configuration files for configuring loggers. This link should give you a good idea of good practices: http://victorlin.me/posts/2012/08/26/good-logging-practice-in-python
EDIT: use %(module)s in your formatter to include the module name in the log message.
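For example, a minimal sketch of that root-logger variant, reusing the question's handlers and adding %(module)s to the format:

# main.py
import logging
from module import Class

logger = logging.getLogger()  # root logger: all module loggers inherit its handlers
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('foo.log', 'w', 'utf-8')
fh.setLevel(logging.INFO)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('- %(name)s - %(module)s - %(levelname)-8s: %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
logger.addHandler(fh)
logger.addHandler(ch)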
The generally recommended logging setup is having at most 1 logger per module.
If your project is properly packaged, __name__ will have the value of "mypackage.mymodule", except in your main file, where it has the value "__main__"
If you want more context about the code that is logging messages, note that you can set your formatter with a formatter string like %(funcName)s, which will add the function name to all messages.
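For instance:

formatter = logging.Formatter('%(name)s %(funcName)s(%(lineno)d): %(message)s')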
If you really want per-class loggers, you can do something like:
class MyClass:
    def __init__(self):
        self.logger = logging.getLogger(__name__ + "." + self.__class__.__name__)
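Used like that, each instance logs under a name such as mypackage.mymodule.MyClass:

c = MyClass()
print(c.logger.name)  # e.g. "mypackage.mymodule.MyClass" when imported from a package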