I am having some difficulty using Python's logging module. I have two files, main.py and mymodule.py. Generally main.py is run, and it imports mymodule.py and uses some functions from there. But sometimes I run mymodule.py directly.
I tried to make it so that logging is configured in only one location, but something seems wrong.
Here is the code.
# main.py
import logging
import mymodule
logger = logging.getLogger(__name__)
def setup_logging():
    # only configure logger if script is main module
    # configuring logger in multiple places is bad
    # only top-level module should configure logger
    if not len(logger.handlers):
        logger.setLevel(logging.DEBUG)
        # create console handler
        ch = logging.StreamHandler()
        ch.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(levelname)s: %(asctime)s %(funcName)s(%(lineno)d) -- %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
        ch.setFormatter(formatter)
        logger.addHandler(ch)

if __name__ == '__main__':
    setup_logging()
    logger.info('calling mymodule.myfunc()')
    mymodule.myfunc()
and the imported module:
# mymodule.py
import logging
logger = logging.getLogger(__name__)
def myfunc():
    msg = 'myfunc is being called'
    logger.info(msg)
    print('done with myfunc')

if __name__ == '__main__':
    # only configure logger if script is main module
    # configuring logger in multiple places is bad
    # only top-level module should configure logger
    if not len(logger.handlers):
        logger.setLevel(logging.DEBUG)
        # create console handler
        ch = logging.StreamHandler()
        ch.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(levelname)s: %(asctime)s %(funcName)s(%(lineno)d) -- %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
        ch.setFormatter(formatter)
        logger.addHandler(ch)
    logger.info('myfunc was executed directly')
    myfunc()
When I run the code, I see this output:
$>python main.py
INFO: 2016-07-14 18:13:04 <module>(22) -- calling mymodule.myfunc()
done with myfunc
But I expect to see this:
$>python main.py
INFO: 2016-07-14 18:13:04 <module>(22) -- calling mymodule.myfunc()
INFO: 2016-07-14 18:15:09 myfunc(8) -- myfunc is being called
done with myfunc
Anybody have any idea why the second logger.info call doesn't print to the screen? Thanks in advance!
Loggers exist in a hierarchy, with a root logger (retrieved with logging.getLogger(), no arguments) at the top. Each logger inherits configuration from its parent, with any configuration on the logger itself overriding the inherited configuration. In this case, you are never configuring the root logger, only the module-specific logger in main.py. As a result, the module-specific logger in mymodule.py is never configured.
The simplest fix is probably to use logging.basicConfig in main.py to set options you want shared by all loggers.
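For illustration, a minimal sketch of what main.py could look like with that approach (reusing the format string from the question):

# main.py -- sketch: configure the root logger once with basicConfig
import logging
import mymodule

logger = logging.getLogger(__name__)

if __name__ == '__main__':
    # basicConfig attaches a handler to the root logger; records from
    # mymodule propagate up to the root logger and are printed by it
    logging.basicConfig(
        level=logging.DEBUG,
        format='%(levelname)s: %(asctime)s %(funcName)s(%(lineno)d) -- %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S')
    logger.info('calling mymodule.myfunc()')
    mymodule.myfunc()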
Chepner is correct. I got absorbed in this problem for a while. The problem is simply in your main script:
16 log = logging.getLogger() # use this form to initialize the root logger
17 #log = logging.getLogger(__name__) # never use this one
If you use line 17, your imported Python modules will not log any messages.
In your mymodule.py:
import logging
logger = logging.getLogger()
logger.debug("You will not see this message if you use line 17 in main")
Hope this posting can help someone who got stuck on this problem.
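To make it concrete, a sketch of a main script that configures the root logger by hand (the handler and format are just examples):

# main.py -- sketch: configure the root logger, not the '__main__' logger
import logging
import mymodule

if __name__ == '__main__':
    log = logging.getLogger()      # no name -> root logger
    log.setLevel(logging.DEBUG)
    ch = logging.StreamHandler()
    ch.setFormatter(logging.Formatter('%(levelname)s: %(name)s -- %(message)s'))
    log.addHandler(ch)
    mymodule.myfunc()              # mymodule's logger.info() is now handled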
The logging package arranges loggers in a namespace hierarchy using dots as separators, and all loggers implicitly inherit from the root logger (much like every class in Python 3 silently inherits from object). Each logger passes its log records on to its parent.
In your case, your loggers are incorrectly chained. Try adding print(logger.name) in both your modules and you'll see that your instantiation of logger in main.py is equivalent to
logger = logging.getLogger('__main__')
while in mymodule.py, you effectively produce
logger = logging.getLogger('mymodule')
The call to log the INFO message from myfunc() passes directly to the root logger (the logger in main.py is not among its ancestors), which has no handler set up; in this case the default last-resort message dispatch is triggered (see here).
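A quick way to see the broken chain is to inspect the logger names and parents; a small sketch:

# sketch: inspect the logger hierarchy
import logging

main_logger = logging.getLogger('__main__')     # what main.py creates
module_logger = logging.getLogger('mymodule')   # what mymodule.py creates

print(module_logger.parent.name)   # 'root' -- not '__main__'
print(main_logger.parent.name)     # 'root'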
Related
I want to log to a single log file from the main module and all submodules.
The log messages sent from the main file, where I define the logger, work as expected, but the ones sent from a call to an imported function are missing.
It is working if I use logging.basicConfig as in Example 1 below.
But the second example which allows for more custom settings does not work.
Any ideas why?
# in the submodule I have this code
import logging
logger = logging.getLogger(__name__)
EXAMPLE 1 - Working
Here I create two handlers and just pass them to basicConfig:
# definition of the root logger in the main module
formatter = logging.Formatter(fmt="%(asctime)s %(name)s.%(levelname)s: %(message)s", datefmt="%Y.%m.%d %H:%M:%S")
handler = logging.FileHandler('logger.log')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
handler2 = logging.StreamHandler(stream=None)
handler2.setFormatter(formatter)
handler2.setLevel(logging.DEBUG)
logging.basicConfig(handlers=[handler, handler2], level=logging.DEBUG)
logger = logging.getLogger(__name__)
EXAMPLE 2 - Not working
Here I create two handlers and addHandler() them to the root logger:
# definition of the root logger in the main module
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
handler = logging.FileHandler('logger.log')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
#handler.setLevel(logging.ERROR)
logger.addHandler(handler)
handler = logging.StreamHandler(stream=None)
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
You need to configure the (one and only) root logger in the main module of your software. This is done by calling
logger = logging.getLogger() #without arguments
instead of
logger = logging.getLogger(__name__)
(Python doc on logging)
The second example creates a separate, child logger with the name of your script.
If no handlers are defined in the submodules, the log record is passed up to the root logger to handle it.
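For illustration, a sketch of Example 2 with only that change applied (everything else as in the question):

# definition of the root logger in the main module -- sketch of the fix
import logging

formatter = logging.Formatter(fmt="%(asctime)s %(name)s.%(levelname)s: %(message)s",
                              datefmt="%Y.%m.%d %H:%M:%S")

root = logging.getLogger()          # no name -> root logger
root.setLevel(logging.DEBUG)

file_handler = logging.FileHandler('logger.log')
file_handler.setFormatter(formatter)
file_handler.setLevel(logging.DEBUG)
root.addHandler(file_handler)

console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
console_handler.setLevel(logging.DEBUG)
root.addHandler(console_handler)

# submodule loggers created with logging.getLogger(__name__) now propagate here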
A related question can be found here:
Python Logging - How to inherit root logger level & handler
I have put the following in my config.py:
import time
import logging
#logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S', level=logging.INFO)
logFormatter = logging.Formatter('%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
rootLogger = logging.getLogger()
rootLogger.setLevel(logging.INFO)
fileHandler = logging.FileHandler("{0}.log".format(time.strftime('%Y%m%d%H%M%S')))
fileHandler.setFormatter(logFormatter)
rootLogger.addHandler(fileHandler)
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
rootLogger.addHandler(consoleHandler)
and then I am doing
from config import *
in all of my scripts and imported files.
Unfortunately, this causes multiple log files to be created.
How do I fix this? I want a centralized config.py with logging configured to write both to the console and to a file.
Case 1: Independent Scripts / Programs
In case we are talking about multiple independent scripts that should have logging set up in the same way: I would say each independent application should have its own log. If you definitely do not want this, you would have to
make sure that all applications use the same log file name (e.g. create a function in config.py with a "timestamp" parameter, which is provided by your script(s)),
specify the append filemode for the fileHandler,
make sure that config.py is not called twice somewhere, as you would add the log handlers twice, which would result in each log message being printed twice (a sketch covering these points follows this list).
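A sketch of what such a config.py helper could look like (the function name setup_logging and the timestamp parameter are illustrative, not from the question):

# config.py -- sketch: shared setup that is safe to call more than once
import logging

def setup_logging(timestamp):
    root = logging.getLogger()
    if root.handlers:                      # already configured -> do nothing
        return
    root.setLevel(logging.INFO)
    formatter = logging.Formatter('%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
    file_handler = logging.FileHandler('{0}.log'.format(timestamp), mode='a')
    file_handler.setFormatter(formatter)
    root.addHandler(file_handler)
    console_handler = logging.StreamHandler()
    console_handler.setFormatter(formatter)
    root.addHandler(console_handler)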
Case 2: One big application consisting of modules
In case we are talking about one big application, consisting of modules, you could adopt a structure like the following:
config.py:
def set_up_logging():
    # your logging setup code
module example (some_module.py):
import logging

def some_function():
    logger = logging.getLogger(__name__)
    [...]
    logger.info('sample log')
    [...]
main example (main.py)
import logging
from config import set_up_logging
from some_module import some_function

def main():
    set_up_logging()
    logger = logging.getLogger(__name__)
    logger.info('Executing some function')
    some_function()
    logger.info('Finished')

if __name__ == '__main__':
    main()
Explanation:
With the call to set_up_logging() in main() you configure your application's root logger.
Each module is called from main() and gets its logger via logger = logging.getLogger(__name__). As the module loggers sit below the root logger in the hierarchy, their log records get propagated up to the root logger and are handled by the root logger's handlers.
For more information see Python's logging module docs and/or the logging cookbook.
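For completeness, one possible body for set_up_logging(); the handler and format choices here are assumptions, not part of the answer:

# config.py -- sketch of one possible set_up_logging() body
import logging

def set_up_logging():
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(levelname)s: %(message)s'))
    root.addHandler(handler)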
I'm having trouble getting child loggers named properly in python (2.7). I have the following file structure:
-mypackage
-__init__.py
-main.py
-log
-__init__.py
-logfile.log
-src
-__init__.py
-logger.py
-otherfile.py
The contents of main.py are:
import logging
import src.logger
from src.otherfile import Foo
logger = logging.getLogger(__name__)
logger.info('Logging from main')
foo = Foo()
The contents of otherfile.py are:
import logging

class Foo():
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        self.logger.info('Logging from class in otherfile')
The contents of logger.py are:
import os
import logging
logdir = os.path.dirname(__file__)
logfile = os.path.join(logdir, '../log/controller.log')
logger = logging.getLogger('__main__')
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler(logfile)
fh.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s: %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)
logger.info('logging from logger.py')
I used logging.getLogger(__name__) in each file, based on the docs. The exception is logger.py, where I name the logger '__main__' so that it would work from the top down and not try to derive everything from a buried file.
When I run main.py, it logs correctly from logger.py and main.py, but the logger from otherfile.py isn't derived correctly from the main logger.
How do I get the logger in otherfile.py to derive from my main logger?
In logger.py you are configuring the "__main__" logger. I was tricked by the fact that in main.py you use __name__. Since you are invoking python main.py, __name__ evaluates to "__main__". Right.
This can become a problem because, when main.py is imported (instead of executed), its logger won't be "__main__" but "main". It can be fixed by making your package executable: rename main.py to __main__.py and run your package like this:
python -m mypackage
This way, logger names (actually module __name__'s) will remain consistent.
That said, the logger you configure in logger.py is in no way a parent of the logger in otherfile.py. The real parent of that logger is called "mypackage", but you haven't configured it, so its logs are invisible.
You have several choices, you can configure (set log level, handler and formatter):
the root logger : logger = logging.getLogger()
mypackage's logger : logger = logging.getLogger(__name__) in mypackage/__init__.py
... or go down to the level of granularity you wish.
You can easily create hierarchies of loggers by separating the levels of a logger's name with a dot ("."). For example, the logger returned by calling logging.getLogger('__main__.' + __name__) inherits all properties from the logger returned by logging.getLogger('__main__'). This behaviour is described here: https://docs.python.org/2/library/logging.html#module-level-functions
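A tiny sketch of that inheritance (the names 'app' and 'app.db' are arbitrary):

# sketch: a child logger inherits level and handlers from its dotted parent
import logging

parent = logging.getLogger('app')
parent.setLevel(logging.DEBUG)
parent.addHandler(logging.StreamHandler())

child = logging.getLogger('app.db')     # 'app' is its parent by name
child.debug('handled by the handler attached to "app"')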
I want to use the logging module instead of printing for debug information and documentation.
The goal is to print on the console with DEBUG level and log to a file with INFO level.
I read through a lot of documentation, the cookbook and other tutorials on the logging module but couldn't figure out, how I can use it the way I want it. (I'm on python25)
I want to have the names of the modules in which the logs are written in my logfile.
The documentation says I should use logger = logging.getLogger(__name__), but how do I declare the loggers used in classes in other modules/packages so that they use the same handlers as the main logger? To recognize the 'parent' I can use logger = logging.getLogger('parent.child'), but how do I know who has called the class/method?
The example below shows my problem: if I run this, the output only contains the __main__ logs and ignores the logs from Class.
This is my Mainfile:
# main.py
import logging
from module import Class
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# create file handler which logs info messages
fh = logging.FileHandler('foo.log', 'w', 'utf-8')
fh.setLevel(logging.INFO)
# create console handler with a debug log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# creating a formatter
formatter = logging.Formatter('- %(name)s - %(levelname)-8s: %(message)s')
# setting handler format
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)
if __name__ == '__main__':
    logger.info('Script starts')
    logger.info('calling class Class')
    c = Class()
    logger.info('calling c.do_something()')
    c.do_something()
    logger.info('calling c.try_something()')
    c.try_something()
Module:
# module.py
import logging

class Class:
    def __init__(self):
        self.logger = logging.getLogger(__name__)  # What do I have to enter here?
        self.logger.info('creating an instance of Class')
        self.dict = {'a': 'A'}

    def do_something(self):
        self.logger.debug('doing something')
        a = 1 + 1
        self.logger.debug('done doing something')

    def try_something(self):
        try:
            logging.debug(self.dict['b'])
        except KeyError, e:
            logging.exception(e)
Output in console:
- __main__ - INFO : Script starts
- __main__ - INFO : calling class Class
- __main__ - INFO : calling c.do_something()
- __main__ - INFO : calling c.try_something()
No handlers could be found for logger "module"
Besides: Is there a way to get the module names where the logs occurred into my logfile, without declaring a new logger in each class like above? Also, this way I have to type self.logger.info() each time I want to log something; I would prefer to use logging.info() or logger.info() throughout my code.
Is a global logger perhaps the right answer for this? But then I won't get the modules where the errors occur in the logs...
And my last question: Is this pythonic? Or is there a better, recommended way to do such things?
In your main module, you're configuring the logger of name '__main__' (or whatever __name__ equates to in your case) while in module.py you're using a different logger. You either need to configure loggers per module, or you can configure the root logger (by configuring logging.getLogger()) in your main module which will apply by default to all loggers in your project.
I recommend using configuration files for configuring loggers. This link should give you a good idea of good practices: http://victorlin.me/posts/2012/08/26/good-logging-practice-in-python
EDIT: use %(module)s in your formatter to include the module name in the log message.
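For illustration, a sketch of a root-logger configuration for the console-DEBUG / file-INFO goal; the handler levels and format string are just one option:

# main.py -- sketch: configure the root logger once; modules just call getLogger(__name__)
import logging

root = logging.getLogger()              # root logger
root.setLevel(logging.DEBUG)

formatter = logging.Formatter('- %(name)s - %(module)s - %(levelname)-8s: %(message)s')

fh = logging.FileHandler('foo.log', 'w', 'utf-8')
fh.setLevel(logging.INFO)               # file gets INFO and above
fh.setFormatter(formatter)
root.addHandler(fh)

ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)              # console gets everything
ch.setFormatter(formatter)
root.addHandler(ch)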
The generally recommended logging setup is to have at most one logger per module.
If your project is properly packaged, __name__ will have the value "mypackage.mymodule", except in your main file, where it has the value "__main__".
If you want more context about the code that is logging messages, note that you can set your formatter with a formatter string like %(funcName)s, which will add the function name to all messages.
If you really want per-class loggers, you can do something like:
import logging

class MyClass:
    def __init__(self):
        self.logger = logging.getLogger(__name__ + "." + self.__class__.__name__)
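For instance, inside a hypothetical mypackage/mymodule.py the pattern above yields a logger named like this:

# sketch: the per-class logger name that results from the pattern above
obj = MyClass()
print(obj.logger.name)   # 'mypackage.mymodule.MyClass' (or '__main__.MyClass' when run directly)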
I have 3 python modules.
LogManager.py
Runner.py
Other.py
Runner.py is the first main module in the chain of events, and from that module functions inside Other.py are called.
So, inside Runner.py I have a function call to the LogManager.py
logger = LogManager.get_log()
and from there, I can make simple logs, e.g. logger.critical("OHNOES")
What I WANT the get_log function to do is something similar to a singleton pattern: if the logger has not been set up, it sets up the logger and returns it; otherwise, it just returns the logger.
Contents of LogManager.py:
import logging
def get_log():
    logger = logging.getLogger('PyPro')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('pypro.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.WARNING)
    # create formatters and add them to the handlers
    fhFormatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    chFormatter = logging.Formatter('%(levelname)s - %(filename)s - Line: %(lineno)d - %(message)s')
    fh.setFormatter(fhFormatter)
    ch.setFormatter(chFormatter)
    # add the handlers to the logger
    logger.addHandler(ch)
    logger.addHandler(fh)
    logger.info("-----------------------------------")
    logger.info("Log system successfully initialised")
    logger.info("-----------------------------------")
    return logger
As you can see, LogManager.get_log() will attempt to set up a log each time it is called. Really, I am a bit confused as to exactly what is happening...
Runner.py calls the get_log function in its main method.
Other.py calls the get_log in the global scope (right after imports, not in any function)
The result is that all of the logs I make are logged twice, as handlers are made twice for the logger.
What is the simplest way (that I am missing) to make the get_log function return the same logger instance instead of configuring it again?
The logging module already implements a singleton pattern for you: when you call logging.getLogger(name), it will create the logger if it hasn't done so already and return it. Although it's not exactly what you're asking for, I would suggest just renaming get_log() to setup_log(), since that's what it does. Then you can just call setup_log() once, at the beginning of your code. Afterwards, when you actually need the logger, just use logging.getLogger('PyPro') and it will return the already-configured logger.
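A sketch of that rearrangement; the setup_log name and the handler guard are illustrative, not part of the original code:

# LogManager.py -- sketch: configure once, fetch with logging.getLogger() afterwards
import logging

def setup_log():
    logger = logging.getLogger('PyPro')
    if logger.handlers:            # already configured, e.g. module imported again
        return
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler('pypro.log')
    fh.setLevel(logging.DEBUG)
    fh.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
    ch = logging.StreamHandler()
    ch.setLevel(logging.WARNING)
    ch.setFormatter(logging.Formatter('%(levelname)s - %(filename)s - Line: %(lineno)d - %(message)s'))
    logger.addHandler(fh)
    logger.addHandler(ch)

# Runner.py would call setup_log() once at startup; Other.py then just does:
# logger = logging.getLogger('PyPro')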