I am having issues accessing a logger created in the main program from another module.
For example:
In the package "common" I have a module "util01.py" with a function get_logger:
util01.py
import logging

def get_logger(file_name, logger_level):
    # get logger
    logger = logging.getLogger(__name__)
    # set desired level
    logger.setLevel(logger_level)
    # get the needed formatter
    formatter = logging.Formatter('%(asctime)s %(module)s %(lineno)d %(levelname)s %(message)s')
    # get the log file handler
    fh = logging.FileHandler(file_name, mode='w')
    # apply formatter to the log file handler and attach it
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    return logger
In main.py, I create that logger:
import logging
import OSLCHelper
from common import util01

my_logger = util01.get_logger('c:\\temp\\test1.log', logging.INFO)
In main.py, my_logger is visible and logging works as expected.
From main, I want to execute a function from another module, e.g. a function from OSLCHelper.py:
return OSLCHelper.get_something(var1)
Now, I have another module, OSLCHelper.py, with the following code:
import logging
from common import util01

def get_something(var1):
    my_logger.info("i am in get_something method")  # my_logger does not exist in this module
Unfortunately, I don't have access to the "my_logger" variable there, so nothing gets logged to the test1.log file.
How do I get access to "my_logger" from different modules? Are there any best practices? I tried the above and it did not work.
From logging.getLogger():
All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
So one solution would be to replace
logger = logging.getLogger(__name__)
with
logger = logging.getLogger("OSLC")
or any other string that makes sense, I'm guessing.
Then you can always "ask" the logging module for the logger associated with that name, from any module:
import logging
logger = logging.getLogger("OSLC")
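Applied to the question's modules, a minimal sketch could look like this (assuming the module is spelled OSLCHelper.py and the shared logger name is "OSLC"; both are just example choices):

# util01.py
import logging

def get_logger(file_name, logger_level):
    # a fixed, well-known name instead of __name__
    logger = logging.getLogger("OSLC")
    logger.setLevel(logger_level)
    formatter = logging.Formatter('%(asctime)s %(module)s %(lineno)d %(levelname)s %(message)s')
    fh = logging.FileHandler(file_name, mode='w')
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    return logger

# OSLCHelper.py
import logging

# same name, so this is the same logger instance configured by get_logger()
my_logger = logging.getLogger("OSLC")

def get_something(var1):
    my_logger.info("i am in get_something method")
    return var1

main.py stays as it is; because get_logger() and OSLCHelper.py ask for the same name, the file handler added from main also receives the log calls made in OSLCHelper.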
Related
I have two files, entrypoint.py and op_helper.py, and I am trying to send each script's logs to a different log file (webhook.log & op.log). I set up my logger.py file with two different logger classes.
import logging
from logging.handlers import TimedRotatingFileHandler
from datetime import datetime

class Logger:
    def create_timed_rotating_log(self, path):
        logger = logging.getLogger("Rotating Log")
        logger.setLevel(logging.INFO)
        handler = TimedRotatingFileHandler(path,
                                           when="d",
                                           interval=1,
                                           backupCount=7)
        formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        return logger

class WebhookLogger:
    def create_timed_rotating_log(self, path):
        logger = logging.getLogger("Rotating Log")
        logger.setLevel(logging.INFO)
        handler = TimedRotatingFileHandler(path,
                                           when="d",
                                           interval=1,
                                           backupCount=7)
        formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        return logger

today = datetime.today()
month = today.strftime("%B")
logger = Logger().create_timed_rotating_log(f'./{month + str(today.year)}Logger.log')
webhook_logger = WebhookLogger().create_timed_rotating_log(f'./{month + str(today.year)}WebhookLogger.log')
In my entrypoint.py script:
from logger import webhook_logger
webhook_logger.info("Something to log")
And in my op_helper.py script:
from logger import logger
logger.info("Something else to log")
But when I run the script, both log statements are logged to both log files.
2021-10-15 14:17:51 INFO Something to log
2021-10-15 14:17:51 INFO Something else to log
Can anyone explain to me what's going on here, and possibly, what I'm doing incorrectly?
Thank you in advance!
Here is an excerpt from the documentation for logging (the bold is mine):
logging.getLogger(name=None)
Return a logger with the specified name or, if name is None, return a logger which is the root logger of the hierarchy. If specified, the name is typically a dot-separated hierarchical name like ‘a’, ‘a.b’ or ‘a.b.c.d’. Choice of these names is entirely up to the developer who is using logging.
All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
...
The solution, therefore, is to assign a different name to your second logger.
EDIT:
Keep in mind, however, that, as you can see, calling getLogger either creates a new instance, if one under the given name doesn't exist, or returns an already existing instance. Therefore every following instruction will only modify an existing logger. If your intention is to use your classes to create multiple instances of one logger type, that approach will not work. Right now, they both do exactly the same thing, so there's not really a need for two separate classes either. As you can see, logging doesn't lend itself well to being used with an object-oriented approach, because the objects are already instanced elsewhere and can be accessed as "global" objects. But this is all just a side note.
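For illustration, one way the logger.py above could be fixed is to give each logger its own name; since the two classes do exactly the same thing, a single helper with a name parameter is enough. A sketch (the names "Op Log" and "Webhook Log" are just example choices):

# logger.py (sketch)
import logging
from logging.handlers import TimedRotatingFileHandler
from datetime import datetime

def create_timed_rotating_log(name, path):
    # a distinct name returns a distinct logger instance
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = TimedRotatingFileHandler(path, when="d", interval=1, backupCount=7)
    formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                  datefmt='%Y-%m-%d %H:%M:%S')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    return logger

today = datetime.today()
month = today.strftime("%B")
logger = create_timed_rotating_log("Op Log", f'./{month + str(today.year)}Logger.log')
webhook_logger = create_timed_rotating_log("Webhook Log", f'./{month + str(today.year)}WebhookLogger.log')

With two different names, each handler is attached to its own logger, so the two files no longer receive each other's records.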
I have put the following in my config.py:
import time
import logging
#logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S', level=logging.INFO)
logFormatter = logging.Formatter('%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
rootLogger = logging.getLogger()
rootLogger.setLevel(logging.INFO)
fileHandler = logging.FileHandler("{0}.log".format(time.strftime('%Y%m%d%H%M%S')))
fileHandler.setFormatter(logFormatter)
rootLogger.addHandler(fileHandler)
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
rootLogger.addHandler(consoleHandler)
and then I am doing
from config import *
in all of my scripts and imported files.
Unfortunately, this causes multiple log files to be created.
How can I fix this? I want a centralized config.py with logging configured for both console and file.
Case 1: Independent Scripts / Programs
In case we are talking about multiple, independent scripts that should have logging set up in the same way: I would say each independent application should have its own log. If you definitely do not want this, you would have to (see the sketch after this list):
make sure that all applications use the same log file name (e.g. create a function in config.py with a "timestamp" parameter, which is provided by your script(s)),
specify the append filemode for the fileHandler,
make sure that config.py is not called twice somewhere, as you would add the log handlers twice, which would result in each log message being printed twice.
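A sketch of that idea (the guard variable and file-name scheme are only examples):

# config.py (sketch for Case 1)
import logging

_configured = False  # guard so the handlers are only added once per process

def set_up_logging(timestamp):
    global _configured
    if _configured:
        return
    formatter = logging.Formatter('%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
    # every script passes the same timestamp, so they share one file; append instead of overwrite
    file_handler = logging.FileHandler("{0}.log".format(timestamp), mode='a')
    file_handler.setFormatter(formatter)
    root_logger = logging.getLogger()
    root_logger.setLevel(logging.INFO)
    root_logger.addHandler(file_handler)
    _configured = True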
Case 2: One big application consisting of modules
In case we are talking about one big application, consisting of modules, you could adopt a structure like the following:
config.py:
def set_up_logging():
# your logging setup code
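For illustration, set_up_logging() could simply wrap the question's own configuration; a sketch:

import logging
import time

def set_up_logging():
    log_formatter = logging.Formatter('%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
    root_logger = logging.getLogger()
    root_logger.setLevel(logging.INFO)
    # log to a file...
    file_handler = logging.FileHandler("{0}.log".format(time.strftime('%Y%m%d%H%M%S')))
    file_handler.setFormatter(log_formatter)
    root_logger.addHandler(file_handler)
    # ...and to the console
    console_handler = logging.StreamHandler()
    console_handler.setFormatter(log_formatter)
    root_logger.addHandler(console_handler)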
module example (some_module.py):
import logging

def some_function():
    logger = logging.getLogger(__name__)
    [...]
    logger.info('sample log')
    [...]
main example (main.py):
import logging
from config import set_up_logging
from some_module import some_function

def main():
    set_up_logging()
    logger = logging.getLogger(__name__)
    logger.info('Executing some function')
    some_function()
    logger.info('Finished')

if __name__ == '__main__':
    main()
Explanation:
With the call to set_up_logging() in main() you configure your application's root logger.
Each module is called from main() and gets its logger via logger = logging.getLogger(__name__). Since the module loggers sit below the root logger in the hierarchy, their records are propagated up to the root logger and handled by the root logger's handlers.
For more information see Python's logging module documentation and/or the logging cookbook.
I have read in the Python docs (https://docs.python.org/3/howto/logging.html#advanced-logging-tutorial) that the best way to configure module-level logging is to name the logger:
logger = logging.getLogger(__name__)
In the main application, logging works fine:
if __name__ == '__main__':
    logging.config.fileConfig('logging.conf')
    # create logger
    logger = logging.getLogger(__name__)
However in another module when I set the logger in the module scope:
logger = logging.getLogger(__name__)
the logger does not log anything. When I create the logger within a method logging works fine:
class TestDialog(QDialog, Ui_TestDialog):
    def __init__(self, fileName, parent=None):
        super(TestDialog, self).__init__(parent)
        self.logger = logging.getLogger(__name__)
        self.logger.debug("--__init__() " + str(fileName))
Then I would need to use the self.logger... prefix to get a logger in all methods of the class, which I have never seen before. I tried to set up logging.conf to log where the call is coming from:
[loggers]
keys=root,fileLogger
...
[formatter_consoleFormatter]
format=%(asctime)s - %(module)s - %(lineno)d - %(name)s - %(levelname)s - %(message)s
datefmt=
However, when the logger is set in the module scope, logging still doesn't work, even with this configuration.
I also tried:
logger = logging.getLogger('root')
at the start of a module, again no logger. However If I use:
logger = logging.getLogger('fileLogger')
at the start of a module, logging works fine, and with the config file above I can see which module the call is coming from.
Why does a logger configured using __name__ not inherit its configuration from root?
Why does configuring via root not work, while using fileLogger does, when both root and fileLogger are configured in the logging.conf file?
To avoid surprises, use disable_existing_loggers=False in your fileConfig() call, as documented in this section of the docs.
You should never need to use a self.logger instance variable - module level loggers should be fine.
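A minimal sketch of what that looks like (assuming the config file is called logging.conf, as in the question):

import logging
import logging.config

# keep module-level loggers created before this call from being disabled
logging.config.fileConfig('logging.conf', disable_existing_loggers=False)

logger = logging.getLogger(__name__)
logger.debug('module-scope loggers now work as configured')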
I want to use the logging module instead of printing for debug information and documentation.
The goal is to print on the console with DEBUG level and log to a file with INFO level.
I read through a lot of documentation, the cookbook and other tutorials on the logging module but couldn't figure out how to use it the way I want. (I'm on Python 2.5.)
I want to have the names of the modules in which the logs are written in my logfile.
The documentation says I should use logger = logging.getLogger(__name__), but how do I declare the loggers used in classes in other modules/packages so that they use the same handlers as the main logger? To reflect the 'parent' I can use logger = logging.getLogger('parent.child'), but how do I know who has called the class/method?
The example below shows my problem: if I run this, the output will only contain the __main__ logs and ignore the logs from Class.
This is my main file:
# main.py
import logging
from module import Class
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# create file handler which logs info messages
fh = logging.FileHandler('foo.log', 'w', 'utf-8')
fh.setLevel(logging.INFO)
# create console handler with a debug log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# creating a formatter
formatter = logging.Formatter('- %(name)s - %(levelname)-8s: %(message)s')
# setting handler format
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)
if __name__ == '__main__':
    logger.info('Script starts')
    logger.info('calling class Class')
    c = Class()
    logger.info('calling c.do_something()')
    c.do_something()
    logger.info('calling c.try_something()')
    c.try_something()
Module:
# module.py
import logging

class Class:
    def __init__(self):
        self.logger = logging.getLogger(__name__)  # What do I have to enter here?
        self.logger.info('creating an instance of Class')
        self.dict = {'a': 'A'}

    def do_something(self):
        self.logger.debug('doing something')
        a = 1 + 1
        self.logger.debug('done doing something')

    def try_something(self):
        try:
            logging.debug(self.dict['b'])
        except KeyError, e:
            logging.exception(e)
Output in console:
- __main__ - INFO : Script starts
- __main__ - INFO : calling class Class
- __main__ - INFO : calling c.do_something()
- __main__ - INFO : calling c.try_something()
No handlers could be found for logger "module"
Besides: Is there a way to get the module names where the logs occurred into my logfile, without declaring a new logger in each class like above? Also, this way I have to call self.logger.info() each time I want to log something; I would prefer to use logging.info() or logger.info() throughout my code.
Is a global logger perhaps the right answer for this? But then I won't get the modules where the errors occur in the logs...
And my last question: Is this pythonic? Or is there a better recommended way to do such things?
In your main module, you're configuring the logger of name '__main__' (or whatever __name__ equates to in your case) while in module.py you're using a different logger. You either need to configure loggers per module, or you can configure the root logger (by configuring logging.getLogger()) in your main module which will apply by default to all loggers in your project.
I recommend using configuration files for configuring loggers. This link should give you a good idea of good practices: http://victorlin.me/posts/2012/08/26/good-logging-practice-in-python
EDIT: use %(module)s in your formatter to include the module name in the log message.
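Put together, a sketch of the root-logger approach for the example above (only main.py changes; module.py keeps logging.getLogger(__name__)):

# main.py (sketch)
import logging
from module import Class

# configure the root logger; the 'module' logger propagates to it by default
root = logging.getLogger()
root.setLevel(logging.DEBUG)

fh = logging.FileHandler('foo.log', 'w', 'utf-8')
fh.setLevel(logging.INFO)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)

formatter = logging.Formatter('- %(name)s - %(module)s - %(levelname)-8s: %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
root.addHandler(fh)
root.addHandler(ch)

logger = logging.getLogger(__name__)

if __name__ == '__main__':
    logger.info('Script starts')
    c = Class()
    c.do_something()
    c.try_something()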
The generally recommended logging setup is having at most 1 logger per module.
If your project is properly packaged, __name__ will have the value of "mypackage.mymodule", except in your main file, where it has the value "__main__"
If you want more context about the code that is logging messages, note that you can set your formatter with a formatter string like %(funcName)s, which will add the function name to all messages.
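For example, a formatter like this (just a sketch) shows both the module and the function for every record:

formatter = logging.Formatter('- %(name)s - %(module)s.%(funcName)s - %(levelname)-8s: %(message)s')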
If you really want per-class loggers, you can do something like:
class MyClass:
    def __init__(self):
        self.logger = logging.getLogger(__name__ + "." + self.__class__.__name__)
I have 3 python modules.
LogManager.py
Runner.py
Other.py
Runner.py is the first main module in the chain of events, and from that module functions inside Other.py are called.
So, inside Runner.py I have a call into LogManager.py:
logger = LogManager.get_log()
and from there, I can make simple logs, e.g. logger.critical("OHNOES")
What I WANT the get_log function to do, is something similar to a singleton pattern, where if the logger has not been set up, it will set up the logger and return it. Else, it will just return the logger.
Contents of LogManager.py:
import logging

def get_log():
    logger = logging.getLogger('PyPro')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('pypro.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.WARNING)
    # create formatter and add it to the handlers
    fhFormatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    chFormatter = logging.Formatter('%(levelname)s - %(filename)s - Line: %(lineno)d - %(message)s')
    fh.setFormatter(fhFormatter)
    ch.setFormatter(chFormatter)
    # add the handlers to logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    logger.info("-----------------------------------")
    logger.info("Log system successfully initialised")
    logger.info("-----------------------------------")

    return logger
As you can see, LogManager.get_log() will attempt to set up a log each time it is called. Really, I am a bit confused as to exactly what is happening...
Runner.py calls the get_log function in its main method.
Other.py calls the get_log in the global scope (right after imports, not in any function)
The result is that all of the logs I make are logged twice, as handlers are made twice for the logger.
What is the simplest way, that I am missing, to make the get_log function return the same logger instance otherwise?
The logging module already implements a singleton pattern for you - when you call logging.getLogger(name), it will create the logger if it hasn't done so already and return it. Although it's not exactly what you're asking for, I would suggest just renaming get_log() to setup_log(), since that's what it does. Then you can just call setup_log() once, at the beginning of your code. Afterwards, when you actually need the logger, just use logging.getLogger() and it will return the already-configured logger.
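A sketch of that arrangement (keeping the question's handler setup inside the renamed function):

# LogManager.py (sketch)
import logging

def setup_log():
    logger = logging.getLogger('PyPro')
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler('pypro.log')
    fh.setLevel(logging.DEBUG)
    fh.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
    ch = logging.StreamHandler()
    ch.setLevel(logging.WARNING)
    ch.setFormatter(logging.Formatter('%(levelname)s - %(filename)s - Line: %(lineno)d - %(message)s'))
    logger.addHandler(fh)
    logger.addHandler(ch)

# Runner.py
import logging
import LogManager

LogManager.setup_log()                 # configure exactly once, at startup
logger = logging.getLogger('PyPro')    # already configured, no new handlers added
logger.critical("OHNOES")

# Other.py
import logging

logger = logging.getLogger('PyPro')    # same instance as in Runner.py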