I have been learning the Python logging module, but I am having trouble getting logging to shut down when it's done. Here is an example -
import logging
log = logging.getLogger()
log.setLevel(logging.INFO)
handler = logging.FileHandler('test.log')
handler.setLevel(logging.INFO)
formatter = logging.Formatter(
    fmt='%(asctime)s %(levelname)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)
handler.setFormatter(formatter)
log.addHandler(handler)
log.info('log file is open')
logging.shutdown()
log.info('log file should be closed')
But the module is still logging after the logging.shutdown() call, as the log file shows -
# cat test.log
2014-07-17 19:39:35 INFO: log file is open
2014-07-17 19:39:35 INFO: log file should be closed
According to the documentation, this command should "perform an orderly shutdown by flushing and closing all handlers". Should I be doing something else to close the log file?
So I found that the file handler is not completely removed by shutdown(). The most reliable fix seems to be to remove the file handler manually -
import logging

def main():
    log = logging.getLogger()
    log.setLevel(logging.INFO)
    fh = logging.FileHandler(filename='test.log')
    fh.setLevel(logging.INFO)
    formatter = logging.Formatter(
        fmt='%(asctime)s %(levelname)s: %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )
    fh.setFormatter(formatter)
    log.addHandler(fh)
    log.info('-------Start--------')
    log.info('this function is doing something')
    log.info('this function is finished')
    log.removeHandler(fh)  # <-- add this line
    del log, fh
The Python docs for shutdown() specify that:
This should be called at application exit and no further use of the
logging system should be made after this call.
It does not mean that the logger object cannot be used afterwards. After calling shutdown, simply don't try to log anything else using that logger object and that should be fine.
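A minimal sketch of that discipline (the file name is just illustrative): shutdown() is the last logging-related call in the program, and nothing logs after it:

import logging

logging.basicConfig(filename='test.log', level=logging.INFO)
log = logging.getLogger()

def main():
    log.info('doing work')

if __name__ == '__main__':
    main()
    logging.shutdown()  # flushes and closes all handlers
    # do not call log.info() etc. past this point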
I had the same problem, with the logger being set up anew (and gaining another handler) on every call of the function. A solution I found is to clear all the handlers after the code runs:
log.handlers.clear()
I don't know if it is the correct way, but it works for me.
I had this problem while writing a tkinter application. I had extensive logging as various controls were used, and every time I (for instance) pressed a certain button several times in a row, each line was logged as many times as I had pressed the button.
I added log.handlers.clear() at the end of main() and it cleared the issue up.
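An alternative to clearing handlers after the fact is to guard against adding them twice in the first place. A sketch of that pattern (the logger name and handler details here are assumptions):

import logging

def get_logger():
    log = logging.getLogger('myapp')
    if not log.handlers:  # configure only on the first call
        log.setLevel(logging.INFO)
        handler = logging.FileHandler('test.log')
        handler.setFormatter(
            logging.Formatter('%(asctime)s %(levelname)s: %(message)s'))
        log.addHandler(handler)
    return log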
Related
I have the following code to create two logging handlers. The goal is to have the stream_handler write only to stderr and the file_handler write only to the file.
In the code below, stream_handler writes to the file as well, so I have duplicate log entries whenever I log a message. How can I modify it to get the stream_handler not to write to the file?
import logging
from logging.handlers import TimedRotatingFileHandler

def create_timed_rotating_log(
        name='log',
        path='logs/',
        when='D',
        interval=1,
        backupCount=30,
        form='%(asctime)s | %(name)s | %(levelname)s: %(message)s',
        sterr=False,
        verbose=False):
    logger = logging.getLogger(name)
    formatter = logging.Formatter(form)
    logger.setLevel(logging.DEBUG)
    if sterr:
        stream_handler = logging.StreamHandler()
        stream_handler.setLevel(logging.DEBUG if verbose else logging.ERROR)
        stream_handler.setFormatter(formatter)
        logger.addHandler(stream_handler)
    file_handler = TimedRotatingFileHandler(filename=path + name,
                                            when=when,
                                            interval=interval,
                                            backupCount=backupCount)
    file_handler.setFormatter(formatter)
    file_handler.setLevel(logging.DEBUG if verbose else logging.ERROR)
    logger.addHandler(file_handler)
    return logger
It looks like you consider the messages going to both handlers as duplication. If you want the file not to receive messages when sterr=True - so that they go only to stderr - you need to put the code that adds the file handler in an else, so that it is only added when sterr=False.
Specifically:
if sterr:
    stream_handler = logging.StreamHandler()
    stream_handler.setLevel(logging.DEBUG if verbose else logging.ERROR)
    stream_handler.setFormatter(formatter)
    logger.addHandler(stream_handler)
else:
    file_handler = TimedRotatingFileHandler(filename=path + name,
                                            when=when,
                                            interval=interval,
                                            backupCount=backupCount)
    file_handler.setFormatter(formatter)
    file_handler.setLevel(logging.DEBUG if verbose else logging.ERROR)
    logger.addHandler(file_handler)
I had considered that possible, but less likely than the more common issue of setting up handlers multiple times. The intended way to use logging is to add the handlers to the root logger in the main module, and to fetch a logger in every other module with logger = logging.getLogger(__name__). That line, plus import logging, is the only logging code that should appear anywhere except the module containing if __name__ == "__main__":, which should set up logging right after that check.
The only way you will have duplicates is if you call that function in your program twice. The issue isn't these two handlers but the other two handlers from the second call to this setup function.
You only need to add the handlers once, to the root logger - not once for every logger in each module.
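A sketch of that layout (the file and function names are made up for illustration):

# mymodule.py - every module other than the entry point only does this
import logging

logger = logging.getLogger(__name__)

def do_work():
    logger.debug('records propagate up to the root handlers')

# main.py - the only place where handlers are configured
import logging
import mymodule

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)  # handlers go on the root logger, once
    mymodule.do_work()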
I'm not exactly sure why, but adding the StreamHandler first caused it to also write to the file. I moved the StreamHandler to after adding the TimedRotatingFileHandler and this resolved the problem.
I'm seeing a behavior that I have no way of explaining... Here's my simplified setup:
module x:
import logging

logger = logging.getLogger('x')

def test_debugging():
    logger.debug('Debugging')
test for module x:
import logging
import unittest

from x import test_debugging

class TestX(unittest.TestCase):
    def test_test_debugging(self):
        test_debugging()

if __name__ == '__main__':
    logger = logging.getLogger('x')
    logger.setLevel(logging.DEBUG)
    # logging.debug('another test')
    unittest.main()
If I uncomment the logging.debug('another test') line I can also see the log from x. Note, it is not a typo, I'm calling debug on logging, not on the logger from module x. And if I call debug on logger, I don't see logs.
What is this, I can't even?..
In your setup, you didn't actually configure logging. Although the configuration can be pretty complex, it would suffice to set the log level in your example:
if __name__ == '__main__':
    # note I configured logging, setting e.g. the level globally
    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger('x')
    logger.setLevel(logging.DEBUG)
    unittest.main()
This will create a simple StreamHandler with a predefined output format that prints all the log records to stderr. I suggest you quickly look over the relevant docs for more info.
Why did it work with the logging.debug call? Because the module-level logging.{info,debug,warning,error} functions all call logging.basicConfig internally if the root logger has no handlers yet, so once you have called logging.debug, you have configured logging implicitly.
Edit: let's take a quick look under the hood what is the actual meaning of the logging.{info,debug,error,warning} functions. Let's take the following snippet:
import logging
logger = logging.getLogger('mylogger')
logger.warning('hello world')
If you run the snippet, hello world will not be printed the way you might expect (on Python 3.2+ the last-resort handler emits a bare message to stderr; on older versions nothing is printed at all) - and this is correct behavior! Why? Because you didn't actually specify how the log records should be treated - should they be printed to stdout, or maybe printed to a file, or maybe sent to some server that will email them to the recipients? The logger mylogger will receive the log record hello world, but it doesn't know yet what to do with it. So, to actually print the record, let's do some configuration for the logger:
import logging
logger = logging.getLogger('mylogger')
formatter = logging.Formatter('Logger received message %(message)s at time %(asctime)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.warning('hello world')
We have now attached a handler that handles the record by printing it (to stderr, the default stream of StreamHandler) in the format specified by formatter. Now the record hello world will be printed. We could attach more handlers, and the record would be handled by each of them. Example: try to attach another StreamHandler and you will notice that the record is now printed twice.
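For instance, a minimal sketch of that duplicate-handler effect:

import logging

logger = logging.getLogger('mylogger')
logger.addHandler(logging.StreamHandler())
logger.addHandler(logging.StreamHandler())  # a second handler on the same logger
logger.warning('hello world')  # emitted twice, once per handler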
So, what's with the logging functions now? If you have some simple program that has only one logger that should print the messages and that's all, you can replace the manual configuration by using convenience logging functions:
import logging
logging.warning('hello world')
This will configure the root logger to print messages to stderr by adding a StreamHandler to it with a default formatter, so you don't have to configure it yourself. After that, it will tell the root logger to process the record hello world. Merely a convenience, nothing more. If you want to explicitly trigger this basic configuration of the root logger, issue
logging.basicConfig()
with or without the additional configuration parameters.
Now, let's go through my first code snippet once again:
logging.basicConfig(level=logging.DEBUG)
After this line, the root logger will print all log records with level DEBUG and higher to the command line.
logger = logging.getLogger('x')
logger.setLevel(logging.DEBUG)
We did not configure this logger explicitly, so why are the records still being printed? This is because by default, any logger propagates its log records up to the root logger. So the logger x does not print the records itself - it has not been configured for that - but it passes each record further up to the root logger, which knows how to print them.
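A small sketch of that propagation, including how to switch it off:

import logging

logging.basicConfig(level=logging.DEBUG)  # handler lives on the root logger
logger = logging.getLogger('x')

logger.debug('printed: the record propagates up to the root handler')

logger.propagate = False  # stop passing records up the hierarchy
logger.debug('dropped: no handler here and no propagation')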
I have a logging configuration where I log to a file and to the console:
logging.basicConfig(filename=logfile, filemode='w',
                    level=numlevel,
                    format='%(asctime)s - %(levelname)s - %(name)s:%(funcName)s - %(message)s')
# add console messages
console = logging.StreamHandler()
console.setLevel(logging.INFO)
consoleformatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console.setFormatter(consoleformatter)
logging.getLogger('').addHandler(console)
At some point in my script, I need to interact with the user by printing a summary and asking for confirmation. The summary is currently produced by prints in a loop. I would like to suspend the current format of the console logs, so that I can print out one big block of text with a question at the end and wait for user input. But I still want all of this to be logged to file!
The function that does this is in a module, where I tried the following:
logger = logging.getLogger(__name__)

def summaryfunc():
    logger.info('normal logging business')
    clearformatter = logging.Formatter('%(message)s')
    logger.setFormatter(clearformatter)
    logger.info('\n##########################################')
    logger.info('Summary starts here')
Which yields the error: AttributeError: 'Logger' object has no attribute 'setFormatter'
I understand that a logger is a logger, not a handler, but i'm not sure on how to get things to work...
EDIT:
Following the answers, my problem turned into: how can I suspend logging to the console while interacting with the user, yet still log to the file? I.e., suspend only the StreamHandler. Since this happens in a module, the specifics of the handlers are defined elsewhere, so here is how I did it:
logger.debug('Normal logging to file and console')
root_logger = logging.getLogger()
stream_handler = root_logger.handlers[1]
root_logger.removeHandler(stream_handler)
print('User interaction')
logger.info('Logging to file only')
root_logger.addHandler(stream_handler)
logger.info('Back to logging to both file and console')
This relies on the StreamHandler always being second in the handlers list, but I believe that holds because it matches the order in which I added the handlers to the root logger...
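If you'd rather not depend on handler order, here is a sketch that picks the console handler by type instead of by index (it assumes exactly one non-file StreamHandler on the root logger):

import logging

root_logger = logging.getLogger()
stream_handler = next(
    h for h in root_logger.handlers
    if isinstance(h, logging.StreamHandler)
    and not isinstance(h, logging.FileHandler)  # FileHandler subclasses StreamHandler
)
root_logger.removeHandler(stream_handler)
print('User interaction')
root_logger.addHandler(stream_handler)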
I agree with Vinay that you should use print for normal program output and only use logging for logging purpose. However, if you still want to switch formatting in the middle, then switch back, here is how to do it:
import logging
def summarize():
    console_handler.setFormatter(logging.Formatter('%(message)s'))
    logger.info('Here is my report')
    console_handler.setFormatter(console_formatter)
numlevel = logging.DEBUG
logfile = 's2.log'
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
console_handler = logging.StreamHandler()
console_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console_handler.setFormatter(console_formatter)
logger.addHandler(console_handler)
file_handler = logging.FileHandler(filename=logfile, mode='w')
file_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s:%(funcName)s - %(message)s')
file_handler.setFormatter(file_formatter)
logger.addHandler(file_handler)
logger.info('Before summary')
summarize()
logger.info('After summary')
Discussion
The script creates a logger object and assigns two handlers to it: one for the console and one for the file.
In the function summarize(), I switch in a new formatter for the console handler, do some logging, then switch back.
Again, let me remind you that you should not use logging to display normal program output.
Update
If you want to suppress console logging and then turn it back on, here is a suggestion:
def interact():
    # Remove the console handler
    for handler in logger.handlers:
        if not isinstance(handler, logging.FileHandler):
            saved_handler = handler
            logger.removeHandler(handler)
            break

    # Interact
    logger.info('to file only')

    # Add the console handler back
    logger.addHandler(saved_handler)
Note that I did not test the handlers against logging.StreamHandler since a logging.FileHandler is derived from logging.StreamHandler. Therefore, I removed those handlers that are not FileHandler. Before removing, I saved that handler for later restoration.
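You can check that inheritance directly:

import logging

print(issubclass(logging.FileHandler, logging.StreamHandler))  # True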
Update 2: logger names
In the main script, if you have:
logger = logging.getLogger(__name__) # __name__ == '__main__'
Then in a module, you do:
logger = logging.getLogger(__name__) # __name__ == module's name, not '__main__'
The problem is that in the script __name__ == '__main__', while in the module __name__ == <the module's name>, not '__main__'. In order to achieve consistency, you will need to make up a name and use it in both places:
logger = logging.getLogger('MyScript')
Logging shouldn't be used to provide the actual output of your program - a program is supposed to run the same way if logging is turned off completely. So I would suggest that it's better to do what you were doing before, i.e. prints in a loop.
I have the following logger module (logger.py):
import logging, logging.handlers

import config

log = logging.getLogger('myLog')

def start():
    "Function sets up the logging environment."
    log.setLevel(logging.DEBUG)
    formatter = logging.Formatter(fmt='%(asctime)s [%(levelname)s] %(message)s',
                                  datefmt='%d-%m-%y %H:%M:%S')
    if config.logfile_enable:
        filehandler = logging.handlers.RotatingFileHandler(
            config.logfile_name,
            maxBytes=config.logfile_maxsize,
            backupCount=config.logfile_backupCount)
        filehandler.setLevel(logging.DEBUG)
        filehandler.setFormatter(formatter)
        log.addHandler(filehandler)
    console = logging.StreamHandler()
    console.setLevel(logging.DEBUG)
    console.setFormatter(logging.Formatter('[%(levelname)s] %(message)s'))  # nicer format for console
    log.addHandler(console)
    # Levels are: debug, info, warning, error, critical.
    log.debug("Started logging to %s [maxBytes: %d, backupCount: %d]"
              % (config.logfile_name, config.logfile_maxsize, config.logfile_backupCount))

def stop():
    "Function closes and cleans up the logging environment."
    logging.shutdown()
For logging, I call logger.start() once, and then do from logger import log in any project file. Then I just use log.debug() and log.error() where needed.
It works fine from everywhere in the script (different classes, functions and files), but it won't work in different processes launched through the multiprocessing module.
I get the following error: No handlers could be found for logger "myLog".
What can I do?
From the Python docs: logging to a single file from multiple processes is not supported, because there is no standard way to serialize access to a single file across multiple processes in Python.
See: http://docs.python.org/howto/logging-cookbook.html#logging-to-a-single-file-from-multiple-processes
BTW: what I do in this situation is use Scribe, a distributed logging aggregator that I log to via TCP. This allows me to log all the servers I have to the same place, not just all processes.
See this project: http://pypi.python.org/pypi/ScribeHandler
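If an external aggregator is overkill, the cookbook's standard-library approach (Python 3.2+) routes records from worker processes through a queue to a single process that owns the file. A sketch, with names chosen for illustration:

import logging
import logging.handlers
import multiprocessing

def worker(queue):
    # In each child process, log records go to the queue, not to a file.
    log = logging.getLogger('myLog')
    log.setLevel(logging.DEBUG)
    log.addHandler(logging.handlers.QueueHandler(queue))
    log.debug('hello from %s', multiprocessing.current_process().name)

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    # The parent process owns the file and drains the queue.
    listener = logging.handlers.QueueListener(
        queue, logging.FileHandler('app.log'))
    listener.start()
    procs = [multiprocessing.Process(target=worker, args=(queue,))
             for _ in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    listener.stop()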
Is there an easy way with python's logging module to send messages with a DEBUG or INFO level and the one with a higher level to different streams?
Is it a good idea anyway?
import logging
import sys

class LessThanFilter(logging.Filter):
    def __init__(self, exclusive_maximum, name=""):
        super(LessThanFilter, self).__init__(name)
        self.max_level = exclusive_maximum

    def filter(self, record):
        # non-zero return means we log this message
        return 1 if record.levelno < self.max_level else 0

# Get the root logger
logger = logging.getLogger()
# Have to set the root logger level, it defaults to logging.WARNING
logger.setLevel(logging.NOTSET)

logging_handler_out = logging.StreamHandler(sys.stdout)
logging_handler_out.setLevel(logging.DEBUG)
logging_handler_out.addFilter(LessThanFilter(logging.WARNING))
logger.addHandler(logging_handler_out)

logging_handler_err = logging.StreamHandler(sys.stderr)
logging_handler_err.setLevel(logging.WARNING)
logger.addHandler(logging_handler_err)

# demonstrate the logging levels
logger.debug('DEBUG')
logger.info('INFO')
logger.warning('WARNING')
logger.error('ERROR')
logger.critical('CRITICAL')
Implementation aside, I do think it is a good idea to use Python's logging facilities to output to the terminal, in particular because you can add another handler to additionally log to a file. If you set the stdout handler to INFO instead of DEBUG, the log file can then include additional DEBUG information that the user wouldn't normally see on the terminal.
Yes. You must define multiple handlers for your logging.
http://docs.python.org/library/logging.html#logging-to-multiple-destinations
http://docs.python.org/library/logging.handlers.html#module-logging.handlers
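A minimal sketch of the idea, one handler per destination (callable filters require Python 3.2+):

import logging
import sys

log = logging.getLogger('app')
log.setLevel(logging.DEBUG)

out = logging.StreamHandler(sys.stdout)  # DEBUG and INFO end up here
out.setLevel(logging.DEBUG)
out.addFilter(lambda record: record.levelno < logging.WARNING)
log.addHandler(out)

err = logging.StreamHandler(sys.stderr)  # WARNING and above end up here
err.setLevel(logging.WARNING)
log.addHandler(err)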
I had the same problem and wrote a custom logging handler called SplitStreamHandler:
import sys
import logging

class SplitStreamHandler(logging.Handler):

    def __init__(self):
        logging.Handler.__init__(self)

    def emit(self, record):
        # mostly copy-paste from logging.StreamHandler
        try:
            msg = self.format(record)
            if record.levelno < logging.WARNING:
                stream = sys.stdout
            else:
                stream = sys.stderr
            fs = "%s\n"
            try:
                if (isinstance(msg, unicode) and
                        getattr(stream, 'encoding', None)):
                    ufs = fs.decode(stream.encoding)
                    try:
                        stream.write(ufs % msg)
                    except UnicodeEncodeError:
                        stream.write((ufs % msg).encode(stream.encoding))
                else:
                    stream.write(fs % msg)
            except UnicodeError:
                stream.write(fs % msg.encode("UTF-8"))
            stream.flush()
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            self.handleError(record)
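Usage is then the same as with any other handler (assuming the SplitStreamHandler class defined above); for example:

import logging

log = logging.getLogger('app')
log.setLevel(logging.DEBUG)
log.addHandler(SplitStreamHandler())

log.info('goes to stdout')
log.error('goes to stderr')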
Just for your convenience, here is everything put together, with a shared formatter, in one package:
# shared formatter, but you can use separate ones:
FORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(threadName)s - %(message)s'
formatter = logging.Formatter(FORMAT)
# single app logger:
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
# 2 handlers for the same logger:
h1 = logging.StreamHandler(sys.stdout)
h1.setLevel(logging.DEBUG)
# filter out everything that is above INFO level (WARN, ERROR, ...)
h1.addFilter(lambda record: record.levelno <= logging.INFO)
h1.setFormatter(formatter)
log.addHandler(h1)
h2 = logging.StreamHandler(sys.stderr)
# take only warnings and error logs
h2.setLevel(logging.WARNING)
h2.setFormatter(formatter)
log.addHandler(h2)
# profit:
log.info(...)
log.debug(...)
My use case was to redirect stdout to a datafile while seeing errors on the screen during processing.
Right from the updated docs - they cover this case pretty well now.
http://docs.python.org/howto/logging.html#logging-advanced-tutorial
import sys # Add this.
import logging
# create logger
logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)
# create console handler and set level to debug
ch = logging.StreamHandler(sys.__stdout__)  # Add this
ch.setLevel(logging.DEBUG)
# create formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# add formatter to ch
ch.setFormatter(formatter)
# add ch to logger
logger.addHandler(ch)
# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')
I've marked in the comments the two changes required to make the example's output go to stdout. You may also use filters to redirect depending on the level.
More information to understand these changes is at
http://docs.python.org/library/logging.handlers.html#module-logging.handlers
Here's the compact solution I use (tested with Python 3.10):
import logging
import sys
root_logger = logging.getLogger()
# configure default StreamHandler to stderr
logging.basicConfig(
    format="%(asctime)s | %(levelname)-8s | %(filename)s:%(funcName)s(%(lineno)d) | %(message)s",
    level=logging.INFO,  # overall log level; DEBUG or INFO make most sense
)
stderr_handler = root_logger.handlers[0] # get default handler
stderr_handler.setLevel(logging.WARNING) # only print WARNING+
# add another StreamHandler to stdout which only emits below WARNING
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.addFilter(lambda rec: rec.levelno < logging.WARNING)
stdout_handler.setFormatter(stderr_handler.formatter) # reuse the stderr formatter
root_logger.addHandler(stdout_handler)
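Continuing the snippet above, a quick way to verify the routing is shell redirection (e.g. run the script with 1>/dev/null to keep only the stderr lines):

logging.info('below WARNING: stdout only')
logging.error('WARNING and above: stderr only')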
You will have to associate a handler with each stream you want log statements to be sent to.
Python has 5 logging levels - CRITICAL, ERROR, WARNING, INFO, and DEBUG.
Level    | Numeric value | Stream
---------|---------------|-------
CRITICAL | 50            | stderr
ERROR    | 40            | stderr
WARNING  | 30            | stdout
INFO     | 20            | stdout
DEBUG    | 10            | stdout
NOTSET   | 0             |
Let's assume you want CRITICAL and ERROR logs to go to stderr, while the others go to stdout - as shown in the table above.
Here are the steps you should follow.
Step 1 Initialize the logging module
import logging, sys
log = logging.getLogger()
Step 2 Create two stream handlers, one attached to stderr, the other to stdout
hStErr = logging.StreamHandler(sys.stderr)
hStOut = logging.StreamHandler(sys.stdout)
Step 3 Set a log level for each handler. This tells the handler to only handle statements whose level is equal to or higher than the level we set, e.g. if we set it to WARNING, the handler will only handle WARNING, ERROR, and CRITICAL statements. Also make sure the level is set on the handlers only, and not on the log object, so that there is no conflict.
hStErr.setLevel('ERROR')
hStOut.setLevel('DEBUG')
log.setLevel('NOTSET')
Now here comes a catch: while hStErr will only output ERROR and CRITICAL records, hStOut will output records of all five levels. Remember that setLevel only sets the minimum level to be handled; every higher level is handled as well. To stop hStOut from handling ERROR and CRITICAL, we use a filter.
Step 4 Specify a filter so that ERROR and CRITICAL aren't handled by hStOut
hStOut.addFilter(lambda x : x.levelno < logging.ERROR)
Step 5 Add these handlers to logger
log.addHandler(hStErr)
log.addHandler(hStOut)
Here are all the pieces together.
import logging, sys
log = logging.getLogger()
hStErr = logging.StreamHandler(sys.stderr)
hStOut = logging.StreamHandler(sys.stdout)
hStErr.setLevel('ERROR')
hStOut.setLevel('DEBUG')
log.setLevel('NOTSET')
hStOut.addFilter(lambda x : x.levelno < logging.ERROR)
log.addHandler(hStErr)
log.addHandler(hStOut)
log.error("error log")
log.info("info log")
Output when we run this script.
error log
info log
The PyCharm IDE colors stderr output red; in the original post, a screenshot showed the error log line above rendered in red, confirming it was sent to stderr.
If we comment out the addFilter line in the above script, we will see the following output:
error log
error log
info log
Note that without the filter, hStOut outputs both the INFO and ERROR statements, which is why error log appears twice (once via hStErr and once via hStOut), while info log is emitted only once, by hStOut; hStErr outputs nothing for INFO.
Not necessarily a good idea (it could be confusing to see info and debug messages mixed in with normal output!), but feasible, since you can have multiple handler objects and a custom filter for each of them, in order to pick and choose which log records each handler gets to handle.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import logging
import sys
class LessThenFilter(logging.Filter):
    def __init__(self, level):
        self._level = level
        logging.Filter.__init__(self)

    def filter(self, rec):
        return rec.levelno < self._level
log = logging.getLogger()
log.setLevel(logging.NOTSET)
sh_out = logging.StreamHandler(stream=sys.stdout)
sh_out.setLevel(logging.DEBUG)
sh_out.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
sh_out.addFilter(LessThenFilter(logging.WARNING))
log.addHandler(sh_out)
sh_err = logging.StreamHandler(stream=sys.stderr)
sh_err.setLevel(logging.WARNING)
sh_err.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
log.addHandler(sh_err)
logging.critical('x')
logging.error('x')
logging.warning('x')
logging.info('x')
logging.debug('x')