I need a logger that creates a new log file every day, so I am using a TimedRotatingFileHandler and letting it rotate at midnight. But every time it rotates, only the first message logged after midnight is stored in the backup file. The old log is deleted and the "main" log file is empty. This is how I create my logger:
def get_logger(name):
    # Create the Logger
    logger = logging.getLogger(name)
    logger.setLevel(logging_lvl)
    # Create the Handler for logging data to a file
    logger_handler = TimedRotatingFileHandler(logging_filename, when='midnight', backupCount=7)
    logger_handler.setLevel(logging_lvl)
    # Create a Formatter for formatting the log messages
    logger_formatter = logging.Formatter(logging_format)
    # Add the Formatter to the Handler
    logger_handler.setFormatter(logger_formatter)
    # Add the Handler to the Logger
    logger.addHandler(logger_handler)
    all_logger[name] = logger
    return logger
Could the problem be that I close my application just by pressing Ctrl+C? Do I have to shut down the FileHandler manually?
I am using Python 3.4 on a Linux machine.
EDIT: logging_lvl, logging_filename and logging_format are variables defined above.
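Would catching the KeyboardInterrupt and calling logging.shutdown() myself be the right approach? A rough sketch of what I have in mind (run_app is just a placeholder for my actual main loop):

import logging
import time

def run_app():
    # placeholder for the application's main loop
    while True:
        time.sleep(1)

try:
    run_app()
except KeyboardInterrupt:
    pass  # Ctrl+C ends the loop
finally:
    logging.shutdown()  # flush and close all handlers explicitly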
I want to log to a single log file from the main script and all submodules.
The log messages sent from the main file, where I define the logger, work as expected, but the ones sent from a call to an imported function are missing.
It works if I use logging.basicConfig as in Example 1 below.
But the second example, which allows for more custom settings, does not work.
Any ideas why?
# in the submodule I have this code
import logging
logger = logging.getLogger(__name__)
EXAMPLE 1 - Working
Here I create two handlers and just pass them to basicConfig:
# definition of root logger in main module
formatter = logging.Formatter(fmt="%(asctime)s %(name)s.%(levelname)s: %(message)s", datefmt="%Y.%m.%d %H:%M:%S")
handler = logging.FileHandler('logger.log')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
handler2 = logging.StreamHandler(stream=None)
handler2.setFormatter(formatter)
handler2.setLevel(logging.DEBUG)
logging.basicConfig(handlers=[handler, handler2], level=logging.DEBUG)
logger = logging.getLogger(__name__)
EXAMPLE 2 - Not working
Here I create two handlers and addHandler() them to the root logger:
# definition of root logger in main module
# (formatter as defined in Example 1)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
handler = logging.FileHandler('logger.log')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
#handler.setLevel(logging.ERROR)
logger.addHandler(handler)
handler = logging.StreamHandler(stream=None)
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
You need to configure the (one and only) root logger in the main module of your software. This is done by calling
logger = logging.getLogger() #without arguments
instead of
logger = logging.getLogger(__name__)
(Python doc on logging)
The second example creates a separate child logger with the name of your script.
If there are no handlers defined in the submodules, the log messages are passed up to the root logger, which handles them.
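To illustrate, a minimal sketch of the two pieces (the module name submodule is made up):

# main.py
import logging
import submodule

# configure the root logger: getLogger() with no name argument
logging.basicConfig(handlers=[logging.FileHandler('logger.log'),
                              logging.StreamHandler()],
                    level=logging.DEBUG)
logger = logging.getLogger()
logger.debug('from main')
submodule.do_work()   # the submodule's messages reach the root handlers

# submodule.py
import logging
logger = logging.getLogger(__name__)  # child logger, no handlers of its own

def do_work():
    logger.debug('from the submodule')  # propagates up to the root logger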
A related question can be found here:
Python Logging - How to inherit root logger level & handler
I would like to add logging to my Python script that uses elasticsearch-py. In particular, I want to have three logs:
General log: log INFO and above, both to stdout and to a file.
ES log: only ES-related messages, and only to a file.
ES tracing log: extended ES logging (curl queries and their output, for instance), only to a file.
Here is what I have so far:
import logging
import logging.handlers

es_logger = logging.getLogger('elasticsearch')
es_logger.setLevel(logging.INFO)
es_logger_handler = logging.handlers.RotatingFileHandler('top-camps-base.log',
                                                         maxBytes=0.5*10**9,
                                                         backupCount=3)
es_logger.addHandler(es_logger_handler)

es_tracer = logging.getLogger('elasticsearch.trace')
es_tracer.setLevel(logging.DEBUG)
es_tracer_handler = logging.handlers.RotatingFileHandler('top-camps-full.log',
                                                         maxBytes=0.5*10**9,
                                                         backupCount=3)
es_tracer.addHandler(es_tracer_handler)

logger = logging.getLogger('mainLog')
logger.setLevel(logging.DEBUG)

# create file handler
fileHandler = logging.handlers.RotatingFileHandler('top-camps.log',
                                                   maxBytes=10**6,
                                                   backupCount=3)
fileHandler.setLevel(logging.INFO)

# create console handler
consoleHandler = logging.StreamHandler()
consoleHandler.setLevel(logging.INFO)

# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
consoleHandler.setFormatter(formatter)
fileHandler.setFormatter(formatter)

# add the handlers to logger
logger.addHandler(consoleHandler)
logger.addHandler(fileHandler)
My problem is that the INFO messages of es_logger are also displayed on the terminal, even though the log messages are saved to the right files.
If I remove the part related to logger, then the ES logging works fine, i.e. messages are only saved to the corresponding files. But then I don't have the other part... What is it that I'm doing wrong with the last part of the settings?
Edit
Possible hint: In the sources of elasticsearch-py there's a logger named logger. Could it be that it conflicts with mine? I tried to change the name of logger to main_logger in the lines above but it didn't help.
Possible hint 2: If I replace logger = logging.getLogger('mainLog') with logger = logging.getLogger(), then the format of the output to the console of es_logger changes and becomes identical to the one defined in the snippet.
I think you are being hit by the somewhat confusing propagation in the logger hierarchy. Everything logged to "elasticsearch.trace" that passes that logger's level will propagate first to the "elasticsearch" logger and then to the root ("") logger. Note that once a message passes the level of the "elasticsearch.trace" logger, the levels of the parent loggers ("elasticsearch" and root) are not checked again; the message is handed to all handlers along the way. (The handlers themselves have levels that do apply.)
Consider the following example that illustrates the issue, and a possible solution:
import logging
# The following line will basicConfig() the root handler
logging.info('DUMMY - NOT SEEN')
ll = logging.getLogger('foo')
ll.setLevel('DEBUG')
ll.addHandler(logging.StreamHandler())
ll.debug('msg1')
ll.propagate = False
ll.debug('msg2')
Output:
msg1
DEBUG:foo:msg1
msg2
You see that "msg1" is logged both by the "foo" logger, and its parent, the root logger (as "DEBUG:foo:msg1"). Then, when propagation is turned off ll.propagate = False before "msg2", the root logger no longer logs it. Now, if you were to comment out the first line (logging.info("DUMMY..."), then the behavior would change so that the root logger line would not be shown. This is because the logging module top level functions info(), debug() etc. configure the root logger with a handler when no handler has yet been defined. That is also why you see different behavior in your example when you modify the root handler by doing logger = logging.getLogger().
I can't see in your code that you would be doing anything to the root logger, but as you see, a stray logging.info() or the like in your code or library code would cause a handler to be added.
So, to answer your question: I would set logger.propagate = False on the loggers where that makes sense for you, and where you do want propagation, I would check that the levels of the handlers themselves are as you want them.
Here is an attempt:
es_logger = logging.getLogger('elasticsearch')
es_logger.propagate = False
es_logger.setLevel(logging.INFO)
es_logger_handler = logging.handlers.RotatingFileHandler('top-camps-base.log',
                                                         maxBytes=0.5*10**9,
                                                         backupCount=3)
es_logger.addHandler(es_logger_handler)

es_tracer = logging.getLogger('elasticsearch.trace')
es_tracer.propagate = False
es_tracer.setLevel(logging.DEBUG)
es_tracer_handler = logging.handlers.RotatingFileHandler('top-camps-full.log',
                                                         maxBytes=0.5*10**9,
                                                         backupCount=3)
es_tracer.addHandler(es_tracer_handler)

logger = logging.getLogger('mainLog')
logger.propagate = False
logger.setLevel(logging.DEBUG)

# create file handler
fileHandler = logging.handlers.RotatingFileHandler('top-camps.log',
                                                   maxBytes=10**6,
                                                   backupCount=3)
fileHandler.setLevel(logging.INFO)

# create console handler
consoleHandler = logging.StreamHandler()
consoleHandler.setLevel(logging.INFO)

# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
consoleHandler.setFormatter(formatter)
fileHandler.setFormatter(formatter)

# add the handlers to logger
logger.addHandler(consoleHandler)
logger.addHandler(fileHandler)
I have been learning the Python logging module but have been having problems getting the logging to shut down after it's done. Here is an example -
import logging
log = logging.getLogger()
log.setLevel(logging.INFO)
handler = logging.FileHandler('test.log')
handler.setLevel(logging.INFO)
formatter = logging.Formatter(
    fmt='%(asctime)s %(levelname)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)
handler.setFormatter(formatter)
log.addHandler(handler)
log.info('log file is open')
logging.shutdown()
log.info('log file should be closed')
But the module is still logging after the logging.shutdown() call, as the log file shows -
# cat test.log
2014-07-17 19:39:35 INFO: log file is open
2014-07-17 19:39:35 INFO: log file should be closed
According to the documentation this command should "perform an orderly shutdown by flushing and closing all handlers". Should I be doing something else to close the log file ?
So I found that the file handler is not completely going away when using shutdown(). The best way seems to be to remove the file handler manually -
def main():
    log = logging.getLogger()
    log.setLevel(logging.INFO)
    fh = logging.FileHandler(filename='test.log')
    fh.setLevel(logging.INFO)
    formatter = logging.Formatter(
        fmt='%(asctime)s %(levelname)s: %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )
    fh.setFormatter(formatter)
    log.addHandler(fh)
    log.info('-------Start--------')
    log.info('this function is doing something')
    log.info('this function is finished')
    log.removeHandler(fh)  # <-- add this line
    del log, fh
The Python Doc for shutdown specifies that:
This should be called at application exit and no further use of the
logging system should be made after this call.
It does not mean that the logger object cannot be used afterwards. After calling shutdown, simply don't try to log anything else using that logger object and that should be fine.
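In other words, make shutdown() the last logging-related call in the program, as in this minimal sketch:

import logging

logging.basicConfig(filename='test.log', level=logging.INFO)
log = logging.getLogger()
log.info('last message')  # log everything first...
logging.shutdown()        # ...then flush and close all handlers
# no further logging calls after this point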
I had the same problem, with a new logger being created on every call of the function. A solution that I found is to clear all handlers after executing the code:
log.handlers.clear()
I don't know if it is the correct way, but it works for me.
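A sketch of the complementary idiom, avoiding the duplicates up front by attaching handlers only when the logger has none yet (the helper name get_logger is made up):

import logging

def get_logger(name):
    log = logging.getLogger(name)
    if not log.handlers:  # attach handlers only on the first call for this name
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
        log.addHandler(handler)
    return log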
I had this problem while writing a Tkinter application. I had extensive logging as various controls were being used, and every time I (for instance) pressed a certain button several times in a row, the number of times each line was logged equaled the number of consecutive presses.
I added log.handlers.clear() at the end of main() and it cleared the issue up.
I have a logging configuration where I log to a file and to the console:
logging.basicConfig(filename=logfile, filemode='w',
                    level=numlevel,
                    format='%(asctime)s - %(levelname)s - %(name)s:%(funcName)s - %(message)s')
# add console messages
console = logging.StreamHandler()
console.setLevel(logging.INFO)
consoleformatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console.setFormatter(consoleformatter)
logging.getLogger('').addHandler(console)
At some point in my script, I need to interact with the user by printing a summary and asking for confirmation. The summary is currently produced by prints in a loop. I would like to suspend the current format of the console logs so that I can print out one big block of text with a question at the end and wait for user input. But I still want all of this to be logged to the file!
The function that does this is in a module, where I tried the following:
logger = logging.getLogger(__name__)

def summaryfunc():
    logger.info('normal logging business')
    clearformatter = logging.Formatter('%(message)s')
    logger.setFormatter(clearformatter)
    logger.info('\n##########################################')
    logger.info('Summary starts here')
Which yields the error: AttributeError: 'Logger' object has no attribute 'setFormatter'
I understand that a logger is a logger, not a handler, but I'm not sure how to get things to work...
EDIT:
Following the answers, my problem turned into: how can I suspend logging to the console while interacting with the user, yet still log to the file? I.e., suspend only the StreamHandler. Since this is happening in a module, the specifics of the handlers are defined elsewhere, so here is how I did it:
logger.debug('Normal logging to file and console')
root_logger = logging.getLogger()
stream_handler = root_logger.handlers[1]
root_logger.removeHandler(stream_handler)
print('User interaction')
logger.info('Logging to file only')
root_logger.addHandler(stream_handler)
logger.info('Back to logging to both file and console')
This relies on the StreamHandler always being the second item in the list returned by handlers, but I believe this is the case because it matches the order in which I added the handlers to the root logger...
I agree with Vinay that you should use print for normal program output and use logging only for logging purposes. However, if you still want to switch formatting in the middle and then switch back, here is how to do it:
import logging

def summarize():
    console_handler.setFormatter(logging.Formatter('%(message)s'))
    logger.info('Here is my report')
    console_handler.setFormatter(console_formatter)
numlevel = logging.DEBUG
logfile = 's2.log'
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
console_handler = logging.StreamHandler()
console_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console_handler.setFormatter(console_formatter)
logger.addHandler(console_handler)
file_handler = logging.FileHandler(filename=logfile, mode='w')
file_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s:%(funcName)s - %(message)s')
file_handler.setFormatter(file_formatter)
logger.addHandler(file_handler)
logger.info('Before summary')
summarize()
logger.info('After summary')
Discussion
The script creates a logger object and assigns two handlers to it: one for the console and one for a file.
In the function summarize(), I switch in a new formatter for the console handler, do some logging, then switch back.
Again, let me remind you that you should not use logging to display normal program output.
Update
If you want to suppress console logging and later turn it back on, here is a suggestion:
def interact():
    # Remove the console handler
    for handler in logger.handlers:
        if not isinstance(handler, logging.FileHandler):
            saved_handler = handler
            logger.removeHandler(handler)
            break
    # Interact
    logger.info('to file only')
    # Add the console handler back
    logger.addHandler(saved_handler)
Note that I did not test the handlers against logging.StreamHandler, since logging.FileHandler is derived from logging.StreamHandler. Therefore, I removed the handlers that are not FileHandlers. Before removing the handler, I saved it for later restoration.
Update 2: mismatched logger names
In the main script, if you have:
logger = logging.getLogger(__name__) # __name__ == '__main__'
Then in a module, you do:
logger = logging.getLogger(__name__) # __name__ == module's name, not '__main__'
The problem is that in the script __name__ == '__main__', while in the module __name__ == <the module's name>, not '__main__'. In order to achieve consistency, you will need to make up a name and use it in both places:
logger = logging.getLogger('MyScript')
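A minimal sketch of that (the name 'MyScript' is made up); getLogger() returns the same logger object for the same name, so both files end up using one logger:

# script.py
import logging
logger = logging.getLogger('MyScript')

# mymodule.py
import logging
logger = logging.getLogger('MyScript')  # same name -> same logger object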
Logging shouldn't be used to provide the actual output of your program - a program is supposed to run the same way if logging is turned off completely. So I would suggest that it's better to do what you were doing before, i.e. prints in a loop.
Please consider the following example:
import logging
import sys
#create a logger object:
logger = logging.getLogger("MyLogger")
#define a logging handler for the standard output:
stdoutHandler = logging.StreamHandler(sys.stdout)
logger.addHandler(stdoutHandler)
#...
#initialization code with several logging events (for example, loading a configuration file to a 'conf' object)
#...
logger.info("Log event 1")
#after configuration is loaded, a new logging handler is defined for a log file:
fileHandler = logging.FileHandler(conf.get("main","log_file"),'w')
logger.addHandler(fileHandler)
logger.info("Log event 2")
With this example, "Log event 1" does not appear in the log file (only in stdout).
The log file is inevitably initialized after "Log event 1" (because it depends on the configuration).
My question is:
How do I include previously logged events (such as "Log event 1") in a new logging handler (such as the file handler in the example)?
My solution to the question:
Define a MemoryHandler to handle all the events prior to the definition of the FileHandler.
When the FileHandler is defined, set it as the flush target of the MemoryHandler and flush it.
import logging
import logging.handlers
#create a logger object:
logger = logging.getLogger("MyLogger")
#define a memory handler:
memHandler = logging.handlers.MemoryHandler(capacity = 1024*10)
logger.addHandler(memHandler)
#...
#initialization code with several logging events (for example, loading a configuration file to a 'conf' object)
#everything is logged by the memory handler
#...
#after configuration is loaded, a new logging handler is defined for a log file:
fileHandler = logging.FileHandler(conf.get("main","log_file"),'w')
#flush the memory handler into the new file handler:
memHandler.setTarget(fileHandler)
memHandler.flush()
memHandler.close()
logger.removeHandler(memHandler)
logger.addHandler(fileHandler)
This works for me, so I'm accepting this as the correct answer, until a more elegant answer comes along.
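As a side note: if a target is set, MemoryHandler also flushes on its own whenever the buffer reaches capacity or a record at flushLevel (ERROR by default) arrives; both are constructor arguments:

import logging
import logging.handlers

# sketch of the constructor's buffering arguments
memHandler = logging.handlers.MemoryHandler(capacity=1024*10,
                                            flushLevel=logging.ERROR)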