Please consider the following example:
import logging
import sys

# create a logger object:
logger = logging.getLogger("MyLogger")
logger.setLevel(logging.INFO)  # without this, INFO events would be dropped
# define a logging handler for the standard output:
stdoutHandler = logging.StreamHandler(sys.stdout)
logger.addHandler(stdoutHandler)
#...
#initialization code with several logging events (for example, loading a configuration file to a 'conf' object)
#...
logger.info("Log event 1")
#after configuration is loaded, a new logging handler is defined for a log file:
fileHandler = logging.FileHandler(conf.get("main", "log_file"), 'w')
logger.addHandler(fileHandler)
logger.info("Log event 2")
With this example, "Log event 1" does not appear in the log file (only in stdout).
The log file is inevitably initialized after "Log event 1" (because it depends on the configuration).
My question is:
How do I include previously logged events (such as "Log event 1") in a new logging handler (such as the file handler in the example)?
My solution for the question:
Define a MemoryHandler to handle all the events prior to the definition of the FileHandler.
When the FileHandler is defined, set it as the flush target of the MemoryHandler and flush it.
import logging
import logging.handlers

# create a logger object:
logger = logging.getLogger("MyLogger")
logger.setLevel(logging.INFO)
# define a memory handler that buffers events until a target is set:
memHandler = logging.handlers.MemoryHandler(capacity=1024 * 10)
logger.addHandler(memHandler)
#...
#initialization code with several logging events (for example, loading a configuration file to a 'conf' object)
#everything is logged by the memory handler
#...
#after configuration is loaded, a new logging handler is defined for a log file:
fileHandler = logging.FileHandler(conf.get("main", "log_file"), 'w')
#flush the memory handler into the new file handler:
memHandler.setTarget(fileHandler)
memHandler.flush()
memHandler.close()
logger.removeHandler(memHandler)
logger.addHandler(fileHandler)
This works for me, so I'm accepting this as the correct answer, until a more elegant answer comes along.
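For reference, a slightly tightened variant of the same handover (a sketch; "app.log" stands in for the conf.get("main", "log_file") lookup): removing the memory handler from the logger before closing it avoids a record from another thread slipping in mid-swap, and MemoryHandler.close() flushes the buffer into its target by itself, so the explicit flush() can be dropped:
import logging
import logging.handlers

logger = logging.getLogger("MyLogger")
logger.setLevel(logging.INFO)

# buffer records in memory until a real target exists:
memHandler = logging.handlers.MemoryHandler(capacity=1024 * 10)
logger.addHandler(memHandler)
logger.info("Log event 1")  # buffered

# ... configuration becomes available ...
fileHandler = logging.FileHandler("app.log", 'w')  # stand-in path
memHandler.setTarget(fileHandler)
logger.removeHandler(memHandler)
memHandler.close()  # close() flushes the buffered records into the target
logger.addHandler(fileHandler)
logger.info("Log event 2")  # written directly to app.log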
I'm seeing a behavior that I have no way of explaining... Here's my simplified setup:
module x:
import logging
logger = logging.getLogger('x')
def test_debugging():
    logger.debug('Debugging')
test for module x:
import logging
import unittest
from x import test_debugging
class TestX(unittest.TestCase):
    def test_test_debugging(self):
        test_debugging()

if __name__ == '__main__':
    logger = logging.getLogger('x')
    logger.setLevel(logging.DEBUG)
    # logging.debug('another test')
    unittest.main()
If I uncomment the logging.debug('another test') line, I can also see the log from x. Note that it is not a typo: I'm calling debug on logging, not on the logger from module x. And if I call debug on logger, I don't see any logs.
What is this, I can't even?..
In your setup, you didn't actually configure logging. Although the configuration can be pretty complex, it would suffice to set the log level in your example:
if __name__ == '__main__':
    # note I configured logging, setting e.g. the level globally
    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger('x')
    logger.setLevel(logging.DEBUG)
    unittest.main()
This will create a simple StreamHandler with a predefined output format that prints all the log records to stderr. I suggest you quickly look over the relevant docs for more info.
Why did it work with the logging.debug call? Because the logging.{info,debug,warning,error} module-level functions all call logging.basicConfig() internally if the root logger has no handlers, so once you had called logging.debug, you had configured logging implicitly.
Edit: let's take a quick look under the hood at what the logging.{info,debug,error,warning} functions actually do. Take the following snippet:
import logging
logger = logging.getLogger('mylogger')
logger.warning('hello world')
If you run the snippet, hello world is not handled by any handler of yours (and rightly so; note that Python 3.2+ ships a last-resort handler that, purely as a fallback, still writes records of level WARNING and above to stderr). Why not? It's because you didn't actually specify how the log records should be treated: should they be printed to stdout, or to a file, or maybe sent to some server that will email them to the recipients? The logger mylogger will receive the log record hello world, but it doesn't know yet what to do with it. So, to actually print the record, let's do some configuration for the logger:
import logging
logger = logging.getLogger('mylogger')
formatter = logging.Formatter('Logger received message %(message)s at time %(asctime)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.warning('hello world')
We have now attached a handler that handles the record by printing it to stderr in the format specified by formatter, so the record hello world will now be printed there. We could attach more handlers, and the record would be handled by each of them. Example: try to attach another StreamHandler and you will notice that the record is now printed twice.
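That experiment in a few lines (a quick sketch):
import logging

logger = logging.getLogger('mylogger')
logger.addHandler(logging.StreamHandler())  # first handler
logger.addHandler(logging.StreamHandler())  # second handler

logger.warning('hello world')  # printed twice to stderr, once per handler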
So, what's with the logging functions now? If you have some simple program with only one logger that should just print the messages, you can replace the manual configuration with the convenience logging functions:
import logging
logging.warning('hello world')
This will configure the root logger to print the messages to stderr by adding a StreamHandler to it with some default formatter, so you don't have to configure it yourself. After that, it will tell the root logger to process the record hello world. Merely a convenience, nothing more. If you want to explicitly trigger this basic configuration of the root logger, issue
logging.basicConfig()
with or without the additional configuration parameters.
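For example, with parameters (a sketch; the file name and format string are purely illustrative):
import logging

# configure the root logger once, at startup: level, destination and format
logging.basicConfig(
    filename='app.log',  # omit this to log to stderr instead
    level=logging.DEBUG,
    format='%(asctime)s %(name)s %(levelname)s: %(message)s',
)

logging.info('configured and ready')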
Now, let's go through my first code snippet once again:
logging.basicConfig(level=logging.DEBUG)
After this line, the root logger will print all log records with level DEBUG and higher to stderr.
logger = logging.getLogger('x')
logger.setLevel(logging.DEBUG)
We did not configure this logger explicitly, so why are the records still being printed? This is because by default, any logger will propagate the log records to the root logger. So the logger x does not print the records - it has not been configured for that, but it will pass the record further up to the root logger that knows how to print the records.
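To see the propagation in isolation (a small sketch; the logger name is made up):
import logging

logging.basicConfig(level=logging.DEBUG)  # the root logger gets a handler

child = logging.getLogger('x.y')  # no handlers of its own
child.debug('bubbles up')         # printed anyway: the record propagates
                                  # up to the root logger's handler
print(child.handlers)             # [] - the child has no handlers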
I have two files: script.py and functions.py. In functions.py, I have a logger set up, and a set of functions (a made-up one below):
import logging
import os

class ecosystem():
    def __init__(self, environment, mode):
        self.logger = logging.getLogger(__name__)
        if os.path.exists('log.log'):
            os.remove('log.log')
        handler = logging.FileHandler('log.log')
        if mode.lower() == 'info':
            handler.setLevel(logging.INFO)
            self.logger.setLevel(logging.INFO)
        elif mode.lower() == 'warning':
            handler.setLevel(logging.WARNING)
            self.logger.setLevel(logging.WARNING)
        elif mode.lower() == 'error':
            handler.setLevel(logging.ERROR)
            self.logger.setLevel(logging.ERROR)
        elif mode.lower() == 'critical':
            handler.setLevel(logging.CRITICAL)
            self.logger.setLevel(logging.CRITICAL)
        else:
            handler.setLevel(logging.DEBUG)
            self.logger.setLevel(logging.DEBUG)
        # Logging file format
        formatter = logging.Formatter(' %(levelname)s | %(asctime)s | %(message)s \n')
        handler.setFormatter(formatter)
        # Add the handler to the logger
        self.logger.addHandler(handler)
        self.logger.info('Logging starts here')

    def my_function(self):
        self.logger.debug('test log')
        return True
I'm trying to call ecosystem.my_function from script.py, but when I do, the logger.debug message shows up in both the terminal window AND log.log. Any ideas why this might be happening?
If it helps, I also import other modules into functions.py; if those modules import logging as well, could that cause issues?
It looks like you're initializing the logger with the log.log file inside the __init__ method of the ecosystem class. This means that any code that creates an ecosystem object will initialize the logger. Somewhere in your code, in one of the files, you are creating that object, and hence the logger is initialized and writes to the file.
Note that you do not need to call __init__ yourself, as it is called on object creation, i.e. after this call:
my_obj = ecosystem(environment, mode)
the log file will be written.
You're asking why both stderr and the file are used after your new file handler is attached. This is because of the propagate attribute. By default propagate is True, which means your log will bubble up the hierarchy of loggers, and each one will continue handling it. Since the default root logger is at the top of the hierarchy, it will handle your log too. Setting propagate to False fixes this:
self.logger.propagate = False
You might want to read up a bit on logging. Also, if you want to keep your sanity regarding logging, check how you can use a dict to configure logging (logging.config.dictConfig).
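For example, a minimal dictConfig sketch of a file-logger setup like yours (the logger name, file name and levels are illustrative; note the explicit propagate flag):
import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'file': {'format': ' %(levelname)s | %(asctime)s | %(message)s '},
    },
    'handlers': {
        'logfile': {
            'class': 'logging.FileHandler',
            'filename': 'log.log',
            'mode': 'w',
            'formatter': 'file',
        },
    },
    'loggers': {
        'functions': {  # the __name__ of the module doing the logging
            'level': 'DEBUG',
            'handlers': ['logfile'],
            'propagate': False,  # don't bubble up to the root logger
        },
    },
})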
I need a logger that creates a new log file every day, so I am using a TimedRotatingFileHandler and letting it rotate at midnight. But every time it rotates, only the first logging message after midnight is stored in the backup file. The old log is deleted and the "main" log file is empty. This is how I create my logger:
import logging
from logging.handlers import TimedRotatingFileHandler

def get_logger(name):
    # Create the Logger
    logger = logging.getLogger(name)
    logger.setLevel(logging_lvl)
    # Create the Handler for logging data to a file
    logger_handler = TimedRotatingFileHandler(logging_filename, when='midnight', backupCount=7)
    logger_handler.setLevel(logging_lvl)
    # Create a Formatter for formatting the log messages
    logger_formatter = logging.Formatter(logging_format)
    # Add the Formatter to the Handler
    logger_handler.setFormatter(logger_formatter)
    # Add the Handler to the Logger
    logger.addHandler(logger_handler)
    all_logger[name] = logger
    return logger
Could the problem be that I close my application just by pressing Ctrl+C? Do I have to shut down the FileHandler manually?
I am using Python 3.4 on a Linux machine.
EDIT: logging_lvl, logging_filename and logging_format are variables defined above.
I have been working on this for almost all day and couldn't figure out what I am missing. I am trying to add a custom handler to emit all log data into a GUI session. It works, but the handler doesn't extend to the submodules and just emits records from the main module. Here is a small snippet I tried.
I have two files:
# main.py
import logging
import logging_two

def myapp():
    logger = logging.getLogger('myapp')
    logging.basicConfig()
    logger.info('Using myapp')
    ch = logging.StreamHandler()
    logger.addHandler(ch)
    logging_two.testme()
    print(logger.handlers)

myapp()
Second module
# logging_two.py
import logging

def testme():
    logger = logging.getLogger('testme')
    logger.info('IN test me')
    print(logger.handlers)
I would expect the logger in logging_two.testme to have the handler I added in the main module. I looked at the docs, and it seems to me this should work, but I am not sure if I got it wrong?
The result I get is:
[]
[<logging.StreamHandler object at 0x00000000024ED240>]
In myapp() you are adding the handler to the logger named 'myapp'. Since testme() is getting the logger named 'testme', it does not have that handler: it is in a different part of the logging hierarchy.
If you just have logger = logging.getLogger() in myapp(), then it would work, since you would be adding the handler to the root of the hierarchy.
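To illustrate (a sketch; the logger names are made up): giving loggers dotted, hierarchical names lets records from a child reach a parent's handler via propagation:
import logging

parent = logging.getLogger('myapp')
parent.setLevel(logging.INFO)
parent.addHandler(logging.StreamHandler())

child = logging.getLogger('myapp.testme')  # dotted name => child of 'myapp'
child.info('IN test me')  # handled by the parent's StreamHandler
print(child.handlers)     # [] - propagation did the work, not a local handler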
Check out the Python logging docs.
I have an application which has to perform a number of simulation runs. I want to set up a logging mechanism where all log records are logged in a general.log, and all logs for a simulation run go to run00001.log, .... For this I have defined a class Run. In its __init__(), a new file handler is added for the run log.
The problem is that the log files for the runs never get released, so after a number of runs the available handles are exhausted and the run crashes.
I've set up some routines to test this as follows:
main routine:
import sys
import traceback

import Model

try:
    myrun = Model.Run('20130315150340_run_49295')
    ha = input('enter')
    myrun.log.info("some info")
except:
    traceback.print_exc(file=sys.stdout)
ha = input('enter3')
The class Run is defined in module Model as follows:
import logging
import os

class Run(object):
    """ Implements the functionality of a single run. """
    def __init__(self, runid):
        self.logdir = "."
        self.runid = runid
        self.logFile = os.path.join(self.logdir, self.runid + '.log')
        self.log = logging.getLogger('Run' + self.runid)
        myformatter = logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
        myhandler = logging.FileHandler(self.logFile)
        myhandler.setLevel(logging.INFO)
        myhandler.setFormatter(myformatter)
        self.log.addHandler(myhandler)
Then I use the program Process Explorer to follow the file handles, and I see the run logs appear but never disappear.
Is there a way I can force this?
You need to call .close() on the file handler.
When your Run class completes, call:
handlers = self.log.handlers[:]
for handler in handlers:
    self.log.removeHandler(handler)
    handler.close()
A file handler will automatically re-open the configured file every time a new log message arrives, so calling handler.close() on its own can appear futile. Removing the handler from the logger stops future log records from being sent to it; in the code above we do this first, to avoid an untimely log message from another thread reopening the handler.
Another answer here suggests you use logging.shutdown(). However, all that logging.shutdown() does is call handler.flush() and handler.close(), and I'd not recommend using it here: it is meant to be called at application exit, and it leaves the logging module in a state where you can't reliably use it again.
You can also shut down logging completely. In that case, the file handles are released:
logging.shutdown()
It closes the open handles of all configured logging handlers.
I needed to be able to delete a log file after a unit test finished, and I was able to delete it right after calling logging.shutdown().
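For instance, a teardown along these lines (a sketch; the file name is illustrative):
import logging
import os
import unittest

class TestWithLogFile(unittest.TestCase):
    def test_something(self):
        logging.basicConfig(filename='test.log', level=logging.DEBUG)
        logging.debug('exercise the code under test')

    def tearDown(self):
        logging.shutdown()  # flushes and closes every configured handler
        if os.path.exists('test.log'):
            os.remove('test.log')  # the handler no longer holds the file open

if __name__ == '__main__':
    unittest.main()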
Probably we'll just want to .close() the FileHandlers and not the others, so the accepted answer could be slightly modified like:
for handler in self.log.handlers:
    if isinstance(handler, logging.FileHandler):
        handler.close()
Also, for the simpler cases where we just have a logger configured with logging.basicConfig(), the handlers can be retrieved by calling logging.getLogger().handlers.
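In that case, the cleanup could look like this (a sketch along the same lines):
import logging

logging.basicConfig(filename='app.log')

root = logging.getLogger()  # no name argument: this is the root logger
for handler in root.handlers[:]:  # iterate over a copy of the list
    if isinstance(handler, logging.FileHandler):
        handler.close()
        root.removeHandler(handler)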