I have the following logger module (logger.py):
import logging, logging.handlers
import config

log = logging.getLogger('myLog')

def start():
    "Function sets up the logging environment."
    log.setLevel(logging.DEBUG)
    formatter = logging.Formatter(fmt='%(asctime)s [%(levelname)s] %(message)s', datefmt='%d-%m-%y %H:%M:%S')
    if config.logfile_enable:
        filehandler = logging.handlers.RotatingFileHandler(config.logfile_name, maxBytes=config.logfile_maxsize, backupCount=config.logfile_backupCount)
        filehandler.setLevel(logging.DEBUG)
        filehandler.setFormatter(formatter)
        log.addHandler(filehandler)
    console = logging.StreamHandler()
    console.setLevel(logging.DEBUG)
    console.setFormatter(logging.Formatter('[%(levelname)s] %(message)s'))  # nicer format for console
    log.addHandler(console)
    # Levels are: debug, info, warning, error, critical.
    log.debug("Started logging to %s [maxBytes: %d, backupCount: %d]" % (config.logfile_name, config.logfile_maxsize, config.logfile_backupCount))

def stop():
    "Function closes and cleans up the logging environment."
    logging.shutdown()
For logging, I call logger.start() once, and then do from logger import log in any project file. After that I just use log.debug() and log.error() where needed.
It works fine everywhere in the script (different classes, functions and files), but it won't work in processes launched through the multiprocessing module.
I get the following error: No handlers could be found for logger "myLog".
What can I do?
From the Python docs: logging to a single file from multiple processes is not supported, because there is no standard way to serialize access to a single file across multiple processes in Python.
See: http://docs.python.org/howto/logging-cookbook.html#logging-to-a-single-file-from-multiple-processes
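For reference, a minimal sketch of the queue-based approach from that cookbook section (this assumes Python 3.2+ for QueueHandler/QueueListener; the file name and process count are illustrative): each worker process pushes records onto a shared queue, and a single listener in the parent process owns the only file handler.

import logging
import logging.handlers
import multiprocessing

def worker(queue):
    # each worker process sends its records to the queue instead of the file
    log = logging.getLogger('myLog')
    log.setLevel(logging.DEBUG)
    log.addHandler(logging.handlers.QueueHandler(queue))
    log.debug('hello from %s', multiprocessing.current_process().name)

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    # the listener (a thread in the main process) owns the file handler
    filehandler = logging.FileHandler('app.log')
    filehandler.setFormatter(logging.Formatter('%(asctime)s [%(levelname)s] %(message)s'))
    listener = logging.handlers.QueueListener(queue, filehandler)
    listener.start()
    procs = [multiprocessing.Process(target=worker, args=(queue,)) for _ in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    listener.stop()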
BTW: what I do in this situation is use Scribe, which is a distributed logging aggregator that I log to via TCP. This lets me log all the servers I have to the same place, not just all processes.
See this project: http://pypi.python.org/pypi/ScribeHandler
Related
I am having issues getting access to the logger created in the main program from another module.
For example:
In package "common" I have a module "util01.py" with a function get_logger:
util01.py
import logging

def get_logger(file_name, logger_level):
    # get the logger
    logger = logging.getLogger(__name__)
    # set the desired level
    logger.setLevel(logger_level)
    # get the needed formatter
    formatter = logging.Formatter('%(asctime)s %(module)s %(lineno)d %(levelname)s %(message)s')
    # get the log file handler
    fh = logging.FileHandler(file_name, mode='w')
    # apply the formatter to the file handler and attach it
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    return logger
In main.py, I create that logger:
import logging
import OSLCHelper
from common import util01

my_logger = util01.get_logger('c:\\temp\\test1.log', logging.INFO)
In main.py, my_logger has proper visibility.
From main, I want to execute a function from another module, e.g. a function from OSLCHelper.py:
return OSLCHelper.get_something(var1)
Now, I have another module, OSLCHelper.py, with the following code:
import logging
from common import util01

def get_something(var1):
    my_logger.info("i am in get_something method")  # my_logger does not exist here
Unfortunately, I don't have access to the "my_logger" variable, so it does not log any statement to the test1.log file.
How do I get access to "my_logger" from different modules? Any best practices?
Please help.
I tried the above and it did not work
From logging.getLogger():
All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
So one solution would be to replace
logger = logging.getLogger(__name__)
with
logger = logging.getLogger("OLSC")
or any other string that makes sense, I'm guessing.
Then you can always "ask" the logging module for the logger associated with that name, from any module:
import logging
logger = logging.getLogger("OLSC")
I'm seeing a behavior that I have no way of explaining... Here's my simplified setup:
module x:
import logging

logger = logging.getLogger('x')

def test_debugging():
    logger.debug('Debugging')
test for module x:
import logging
import unittest

from x import test_debugging

class TestX(unittest.TestCase):
    def test_test_debugging(self):
        test_debugging()

if __name__ == '__main__':
    logger = logging.getLogger('x')
    logger.setLevel(logging.DEBUG)
    # logging.debug('another test')
    unittest.main()
If I uncomment the logging.debug('another test') line, I can also see the log from x. Note it is not a typo: I'm calling debug on logging, not on the logger from module x. And if I call debug on logger instead, I don't see the logs.
What is this, I can't even?..
In your setup, you didn't actually configure logging. Although the configuration can be pretty complex, it would suffice to set the log level in your example:
if __name__ == '__main__':
    # note I configured logging, setting e.g. the level globally
    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger('x')
    logger.setLevel(logging.DEBUG)
    unittest.main()
This will create a simple StreamHandler with a predefined output format that prints all the log records to stderr. I suggest you quickly look over the relevant docs for more info.
Why did it work with the logging.debug call? Because the logging.{info,debug,warning,error} functions all call logging.basicConfig internally if the root logger has no handlers yet, so once you had called logging.debug, you had configured logging implicitly.
Edit: let's take a quick look under the hood at what the logging.{info,debug,error,warning} functions actually do. Take the following snippet:
import logging
logger = logging.getLogger('mylogger')
logger.warning('hello world')
If you run the snippet, hello world will not be printed (and rightly so!). Why not? Because you didn't actually specify how the log records should be treated: should they be printed to the console, written to a file, or maybe sent to some server that will email them to the recipients? The logger mylogger will receive the log record hello world, but it doesn't know yet what to do with it. So, to actually print the record, let's do some configuration for the logger:
import logging
logger = logging.getLogger('mylogger')
formatter = logging.Formatter('Logger received message %(message)s at time %(asctime)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.warning('hello world')
We now attached a handler that handles the record by printing it to stderr in the format specified by formatter. Now the record hello world will be printed. We could attach more handlers, and the record would be handled by each of the handlers. Example: try to attach another StreamHandler and you will notice that the record is now printed twice.
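For instance, continuing the snippet above (a quick way to see the double handling):

# attach a second handler to the same logger
handler2 = logging.StreamHandler()
handler2.setFormatter(formatter)
logger.addHandler(handler2)

logger.warning('hello again')   # printed twice, once per attached handler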
So, what about the logging functions then? If you have some simple program with only one logger that should print the messages and that's all, you can replace the manual configuration with the convenience logging functions:
import logging
logging.warning('hello world')
This will configure the root logger to print the messages to stderr by adding a StreamHandler with a default formatter to it, so you don't have to configure it yourself. After that, it will tell the root logger to process the record hello world. Merely a convenience, nothing more. If you want to trigger this basic configuration of the root logger explicitly, issue
logging.basicConfig()
with or without the additional configuration parameters.
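For instance, a sketch with two of the common parameters (the format string is just an example):

import logging

# one call configures the root logger: level plus output format
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(name)s %(levelname)s: %(message)s',
)

logging.debug('handled by the root logger')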
Now, let's go through my first code snippet once again:
logging.basicConfig(level=logging.DEBUG)
After this line, the root logger will print all log records with level DEBUG and higher to the command line.
logger = logging.getLogger('x')
logger.setLevel(logging.DEBUG)
We did not configure this logger explicitly, so why are the records still being printed? This is because, by default, any logger propagates its log records up to the root logger. So the logger x does not print the records itself (it has not been configured for that), but it passes each record further up to the root logger, which knows how to print them.
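A minimal sketch of that propagation (the logger name 'x' matches the example above):

import logging

logging.basicConfig(level=logging.DEBUG)   # handlers live on the root logger
child = logging.getLogger('x')             # has no handlers of its own

child.debug('printed: propagated up to the root logger')

child.propagate = False
child.debug('not printed: no handlers here and no propagation')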
I have put the following in my config.py:
import time
import logging
#logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S', level=logging.INFO)
logFormatter = logging.Formatter('%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
rootLogger = logging.getLogger()
rootLogger.setLevel(logging.INFO)
fileHandler = logging.FileHandler("{0}.log".format(time.strftime('%Y%m%d%H%M%S')))
fileHandler.setFormatter(logFormatter)
rootLogger.addHandler(fileHandler)
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
rootLogger.addHandler(consoleHandler)
and then I am doing
from config import *
in all of my scripts and imported files.
Unfortunately, this causes multiple log files to be created.
How can I fix this? I want a centralized config.py with logging configured both to console and to a file.
Case 1: Independent Scripts / Programs
In case we are talking about multiple, independent scripts that should have logging set up in the same way: I would say each independent application should have its own log. If you definitely do not want that, you would have to (see the sketch after this list)
make sure that all applications use the same log file name (e.g. create a function in config.py with a "timestamp" parameter, provided by your script(s))
specify the append filemode for the fileHandler
make sure that config.py is not executed twice anywhere, as you would add the log handlers twice, which would result in each log message being printed twice.
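A minimal sketch combining those three points (the function and parameter names are illustrative):

# config.py -- shared setup for several independent scripts (a sketch)
import logging

def set_up_logging(timestamp):
    root = logging.getLogger()
    # guard: running the setup twice would duplicate every message
    if root.handlers:
        return
    root.setLevel(logging.INFO)
    formatter = logging.Formatter('%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
    # every script passes the same timestamp, so they share one file name,
    # and append mode keeps one script from truncating another's output
    fileHandler = logging.FileHandler('{0}.log'.format(timestamp), mode='a')
    fileHandler.setFormatter(formatter)
    root.addHandler(fileHandler)
    consoleHandler = logging.StreamHandler()
    consoleHandler.setFormatter(formatter)
    root.addHandler(consoleHandler)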
Case 2: One big application consisting of modules
In case we are talking about one big application, consisting of modules, you could adopt a structure like the following:
config.py:
def set_up_logging():
    # your logging setup code
module example (some_module.py):
import logging

def some_function():
    logger = logging.getLogger(__name__)
    [...]
    logger.info('sample log')
    [...]
main example (main.py):
import logging

from config import set_up_logging
from some_module import some_function

def main():
    set_up_logging()
    logger = logging.getLogger(__name__)
    logger.info('Executing some function')
    some_function()
    logger.info('Finished')

if __name__ == '__main__':
    main()
Explanation:
With the call to set_up_logging() in main() you configure your application's root logger.
Each module is called from main() and gets its logger via logger = logging.getLogger(__name__). As the module loggers sit below the root logger in the hierarchy, their log records get "propagated up" to the root logger and handled by the root logger's handlers.
For more information see Python's logging module docs and/or the logging cookbook.
I have been learning the Python logging module but have been having problems getting logging to shut down after it's done. Here is an example:
import logging
log = logging.getLogger()
log.setLevel(logging.INFO)
handler = logging.FileHandler('test.log')
handler.setLevel(logging.INFO)
formatter = logging.Formatter(
    fmt='%(asctime)s %(levelname)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)
handler.setFormatter(formatter)
log.addHandler(handler)
log.info('log file is open')
logging.shutdown()
log.info('log file should be closed')
But the module is still logging after logging.shutdown(), as the log file shows:
# cat test.log
2014-07-17 19:39:35 INFO: log file is open
2014-07-17 19:39:35 INFO: log file should be closed
According to the documentation, this call should "perform an orderly shutdown by flushing and closing all handlers". Should I be doing something else to close the log file?
So I found that the file handler for logging does not completely go away with shutdown(). The best way seems to be to remove the file handler manually:
import logging

def main():
    log = logging.getLogger()
    log.setLevel(logging.INFO)
    fh = logging.FileHandler(filename='test.log')
    fh.setLevel(logging.INFO)
    formatter = logging.Formatter(
        fmt='%(asctime)s %(levelname)s: %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )
    fh.setFormatter(formatter)
    log.addHandler(fh)
    log.info('-------Start--------')
    log.info('this function is doing something')
    log.info('this function is finished')
    log.removeHandler(fh)   # <-- add this line
    del log, fh
The Python Doc for shutdown specifies that:
This should be called at application exit and no further use of the
logging system should be made after this call.
It does not mean that the logger object cannot be used afterwards. After calling shutdown, simply don't try to log anything else using that logger object and that should be fine.
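To see why the record from the question still made it into the file, here is a small demonstration based on the question's snippet (the print calls are just for inspection; this reflects my reading of CPython's FileHandler, so treat it as a sketch): shutdown() closes the handler's stream, but the handler stays attached to the logger, and FileHandler silently reopens the file on the next emit.

import logging

log = logging.getLogger()
log.setLevel(logging.INFO)
handler = logging.FileHandler('test.log')
log.addHandler(handler)

log.info('log file is open')
logging.shutdown()            # flushes and closes the handler's stream
print(handler.stream)         # None: the underlying file object is closed

log.info('log file should be closed')  # FileHandler reopens the file to emit this
print(handler.stream)         # a fresh file object, hence the second line in the log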
I had the same problem, with a new logger being created on every call of the function. A solution that I found is to clear all handlers after executing the code:
log.handlers.clear()
Don't know if it is the correct way, but it works for me.
I had this problem while writing a Tkinter application. I had extensive logging as various controls were being used, and every time I (for instance) pressed a certain button multiple times consecutively, the number of times a line was logged was equal to the number of consecutive presses.
I added: log.handlers.clear() at the end of main() and it cleared the issue up.
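A related pattern that avoids the build-up in the first place is to guard against re-adding handlers; a minimal sketch (the logger name 'app' is illustrative):

import logging

def get_log():
    log = logging.getLogger('app')
    if not log.handlers:                  # only configure on the first call
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter('[%(levelname)s] %(message)s'))
        log.addHandler(handler)
        log.setLevel(logging.INFO)
    return log

get_log().info('logged once')
get_log().info('still logged once, not twice')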
I want to use the logging module instead of printing for debug information and documentation.
The goal is to print on the console with DEBUG level and log to a file with INFO level.
I read through a lot of documentation, the cookbook and other tutorials on the logging module, but couldn't figure out how I can use it the way I want. (I'm on Python 2.5.)
I want the names of the modules in which the log calls occur to appear in my logfile.
The documentation says I should use logger = logging.getLogger(__name__), but how do I declare the loggers used in classes in other modules/packages so that they use the same handlers as the main logger? To recognize the 'parent' I can use logger = logging.getLogger(parent.child), but how do I know who has called the class/method?
The example below shows my problem: if I run it, the output will only contain the __main__ logs and ignore the logs in Class.
This is my main file:
# main.py
import logging
from module import Class
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# create file handler which logs info messages
fh = logging.FileHandler('foo.log', 'w', 'utf-8')
fh.setLevel(logging.INFO)
# create console handler with a debug log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# creating a formatter
formatter = logging.Formatter('- %(name)s - %(levelname)-8s: %(message)s')
# setting handler format
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)
if __name__ == '__main__':
    logger.info('Script starts')
    logger.info('calling class Class')
    c = Class()
    logger.info('calling c.do_something()')
    c.do_something()
    logger.info('calling c.try_something()')
    c.try_something()
Module:
# module.py
import logging
class Class:
    def __init__(self):
        self.logger = logging.getLogger(__name__)  # What do I have to enter here?
        self.logger.info('creating an instance of Class')
        self.dict = {'a': 'A'}

    def do_something(self):
        self.logger.debug('doing something')
        a = 1 + 1
        self.logger.debug('done doing something')

    def try_something(self):
        try:
            logging.debug(self.dict['b'])
        except KeyError, e:
            logging.exception(e)
Output in console:
- __main__ - INFO : Script starts
- __main__ - INFO : calling class Class
- __main__ - INFO : calling c.do_something()
- __main__ - INFO : calling c.try_something()
No handlers could be found for logger "module"
Besides: is there a way to get the module names where the logs occurred into my logfile without declaring a new logger in each class like above? Also, this way I have to write self.logger.info() each time I want to log something; I would prefer to use logging.info() or logger.info() throughout my code.
Is a global logger perhaps the right answer for this? But then I won't get the modules where the errors occur in the logs...
And my last question: is this pythonic? Or is there a better recommendation to do such things right?
In your main module, you're configuring the logger named '__main__' (or whatever __name__ equates to in your case), while in module.py you're using a different logger. You either need to configure loggers per module, or you can configure the root logger (by configuring logging.getLogger()) in your main module, which will apply by default to all loggers in your project.
I recommend using configuration files for configuring loggers. This link should give you a good idea of good practices: http://victorlin.me/posts/2012/08/26/good-logging-practice-in-python
EDIT: use %(module)s in your formatter to include the module name in the log message.
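Applied to the question's main.py, a sketch of the root-logger option (handlers and format copied from the question; module.py stays unchanged):

# main.py -- configure the root logger; module loggers propagate up to it
import logging
from module import Class

root = logging.getLogger()   # note: no name, so this is the root logger
root.setLevel(logging.DEBUG)

fh = logging.FileHandler('foo.log', 'w', 'utf-8')
fh.setLevel(logging.INFO)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('- %(name)s - %(levelname)-8s: %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
root.addHandler(fh)
root.addHandler(ch)

if __name__ == '__main__':
    logger = logging.getLogger(__name__)
    logger.info('Script starts')
    c = Class()   # Class's self.logger records now reach the root handlers
    c.do_something()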
The generally recommended logging setup is at most one logger per module.
If your project is properly packaged, __name__ will have the value "mypackage.mymodule", except in your main file, where it has the value "__main__".
If you want more context about the code that is logging messages, note that you can set your formatter with a formatter string like %(funcName)s, which will add the function name to all messages.
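For instance, a minimal sketch (the format string is illustrative):

import logging

# %(funcName)s in the format string records which function logged the message
logging.basicConfig(format='- %(name)s - %(funcName)s - %(levelname)-8s: %(message)s',
                    level=logging.DEBUG)

def do_something():
    logging.getLogger(__name__).debug('doing something')

do_something()   # the output line includes 'do_something' via %(funcName)s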
If you really want per-class loggers, you can do something like:
class MyClass:
    def __init__(self):
        self.logger = logging.getLogger(__name__ + "." + self.__class__.__name__)