Python logging duplicated - python

I have four files:
Main.py
A.py
B.py
log_system.py
I am using Main.py to call functions from A.py and B.py, and I need to log information whenever I call them.
So I wrote a script called log_system that creates a log handler for each script file, such as A.py and B.py:
import logging

def fetchLogger(name="None"):
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    if (name == "None"):
        # create file for log
        handler = logging.FileHandler('./engine_log/Generic.log')
        handler.setLevel(logging.DEBUG)
        # log format
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        handler.setFormatter(formatter)
        # add the handler to the logging system
        logger.addHandler(handler)
    else:
        # create file for log
        handler = logging.FileHandler('./engine_log/' + str(name))
        handler.setLevel(logging.DEBUG)
        # log format
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        handler.setFormatter(formatter)
        # add the handler to the logging system
        logger.addHandler(handler)
    return logger
So if I want to use logging in script file A.py, I would write these lines:
import log_system

"""Log System Building"""
file_name = 'A.py'
logger = log_system.fetchLogger(file_name)

def hello():
    try:
        logger.info("excuting Hello")
    except:
        logger.debug("something went wrong in hello")
But my log file shows:
2017-10-18 14:59:28,695 - log_system - INFO - A.py-excuting Hello
2017-10-18 14:59:28,695 - log_system - INFO - A.py-excuting Hello
2017-10-18 14:59:28,695 - log_system - INFO - A.py-excuting Hello
2017-10-18 14:59:28,695 - log_system - INFO - A.py-excuting Hello
2017-10-18 14:59:28,695 - log_system - INFO - A.py-excuting Hello
It is repeating the log entry many times.
What should I do?
Solution
def fetchLogger(name="None"):
    logger = logging.getLogger(name)
    if logger.hasHandlers():
        logger.handlers = []
    logger.setLevel(logging.DEBUG)
    # create file for log
    handler = logging.FileHandler('./engine_log/' + str(name))
    handler.setLevel(logging.DEBUG)
    # log format
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    # add the handler to the logging system
    logger.addHandler(handler)
    return logger
This is how I changed my log_system code: I simply empty the handler list if I have already created a log handler, so that it does not create duplicate records.

Every time fetchLogger is called it adds a new FileHandler to the logger. Each FileHandler writes to the log file, resulting in the repeated output in the file.
One solution is to call the logger's hasHandlers method. It returns True if any handlers have been configured on the logger, and you can then delete them.
def fetchLogger(name="None"):
    logger = logging.getLogger(__name__)
    if logger.hasHandlers():
        # Logger is already configured, remove all handlers
        logger.handlers = []
    # Configure the logger as before.
    ...

There may be another cause for duplicates when using the __name__ variable as the logger name.
The logging module treats dots in the name as a parent-child relationship. If you have already created a logger for a module higher in the tree, that logger is also called to handle the record.
So if you first call logger = logging.getLogger(__name__) in module 'foo' and add handlers to it,
and then call another_logger = logging.getLogger(__name__) in module 'foo.bar' and add handlers to it, you have created a parent and a child.
Calling another_logger.info('Some text') in the foo.bar module will first run the handlers of the 'foo.bar' logger, and then the same message is handled by the handlers of the 'foo' logger.
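A minimal sketch that reproduces this parent/child duplication (everything in one file for brevity; the logger names 'foo' and 'foo.bar' stand in for the two modules):

import logging

# Parent logger for module 'foo', with its own console handler
parent = logging.getLogger('foo')
parent.setLevel(logging.DEBUG)
parent.addHandler(logging.StreamHandler())

# Child logger for module 'foo.bar', also with its own console handler
child = logging.getLogger('foo.bar')
child.setLevel(logging.DEBUG)
child.addHandler(logging.StreamHandler())

# The record is handled by the child's handler and then by the parent's
# handler, so it appears twice on the console
child.info('Some text')

# Turning off propagation on the child keeps the record out of 'foo'
child.propagate = False
child.info('Logged only once')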

Related

Logging - How to use the same logger as a class for different files?

How can I use the same logger as a class for different files?
Example:
mylog.py
import logging

class MyLog():
    def __init__(self):
        pass

    def getLogger():
        logger = logging.getLogger(__name__)
        logger.setLevel(logging.INFO)
        ch = logging.StreamHandler()
        ch.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        ch.setFormatter(formatter)
        logger.addHandler(ch)
        logger.addHandler(file)
        # etc. list of formatting that I want to be the same for all the files using this class
How can I call the same logger from this file? (I'm guessing I need to pass the file name rather than using __name__.)
myfile.py
import mylog

logger = mylog.MyLog()  # ? how can I correctly get the logger from the other file?
logger.info("Test Message")  # I want this logger to use the formatting specified in mylog.py
A module can contain member variables in addition to definitions. So you could do (more or less):
mylog.py
import logging

def getLogger():
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    logger.addHandler(ch)
    # add a FileHandler here as well if needed
    # etc. list of formatting that I want to be the same for all the files using this class
    return logger

logger = getLogger()
myfile.py
from mylog import logger
logger.info("Test Message") #I want this logger info to use the formatting specified in mylog.py

python.logging: Why does my non-basicConfig setting not work?

I want to log to a single log file from main and all sub modules.
The log messages sent from the main file, where I define the logger, work as expected, but the ones sent from a call to an imported function are missing.
It is working if I use logging.basicConfig as in Example 1 below.
But the second example which allows for more custom settings does not work.
Any ideas why?
# in the submodule I have this code
import logging
logger = logging.getLogger(__name__)
EXAMPLE 1 - Working
Here I create two handlers and just pass them to basicConfig:
# definition of root logger in main module
formatter = logging.Formatter(fmt="%(asctime)s %(name)s.%(levelname)s: %(message)s", datefmt="%Y.%m.%d %H:%M:%S")
handler = logging.FileHandler('logger.log')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
handler2 = logging.StreamHandler(stream=None)
handler2.setFormatter(formatter)
handler2.setLevel(logging.DEBUG)
logging.basicConfig(handlers=[handler, handler2], level=logging.DEBUG)
logger = logging.getLogger(__name__)
EXAMPLE 2 - Not working
Here I create two handlers and addHandler() them to the root logger:
# definition of root logger in main module
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
handler = logging.FileHandler('logger.log')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
#handler.setLevel(logging.ERROR)
logger.addHandler(handler)
handler = logging.StreamHandler(stream=None)
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
You need to configure the (one and only) root logger in the main module of your software. This is done by calling
logger = logging.getLogger() #without arguments
instead of
logger = logging.getLogger(__name__)
(Python doc on logging)
The second example creates a separate child logger with the name of your script.
If no handlers are defined in the submodules, the log records are passed up to the root logger, which handles them, as sketched below.
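A minimal sketch of that setup (the file name submodule.py and the function do_work are just illustrative, not from the question):

# main.py -- configure the root logger once, in the entry point
import logging
import submodule

formatter = logging.Formatter(fmt="%(asctime)s %(name)s.%(levelname)s: %(message)s",
                              datefmt="%Y.%m.%d %H:%M:%S")
handler = logging.FileHandler('logger.log')
handler.setFormatter(formatter)
handler.setLevel(logging.DEBUG)

root = logging.getLogger()       # no name: this is the root logger
root.setLevel(logging.DEBUG)
root.addHandler(handler)

submodule.do_work()              # records from the submodule reach the root handler

# submodule.py -- no handlers here, just a module-level logger
import logging

logger = logging.getLogger(__name__)

def do_work():
    logger.debug("this record propagates up to the root logger's handler")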
A related question can be found here:
Python Logging - How to inherit root logger level & handler

How to print current file name with logger?

I am using the logging module in Python.
I want the log to also print the file name from which the logger is called.
How could this be done?
This is my current implementation:
class Logger(object):
    def __init__(self, name):
        self.logger = logging.getLogger(name)
        self.logger.setLevel(logging.INFO)
        # create the logging file handler
        fh = logging.FileHandler("{}.log".format(name))
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        fh.setFormatter(formatter)
        # add handler to logger object
        self.logger.addHandler(fh)

    def get_logger(self):
        return self.logger
import logging

LOG = logging.getLogger(__name__)

if __name__ == "__main__":
    logging.basicConfig()
    LOG.error("Foo")
    LOG.error(__file__)
Results in:
ERROR:__main__:Foo
ERROR:__main__:mylog.py
For something more complex, you can use the logger configuration to set a log message formatter that uses the filename attribute of the LogRecords to include the filename in every log line without making it part of the log message body.
E.g.
logging.basicConfig(format="%(filename)s: %(message)s")
will result in log lines:
mylog.py: Foo
mylog.py: mylog.py
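Applied to the Logger class from the question, that just means putting %(filename)s (and, if useful, %(lineno)d) into the format string. A minimal sketch, assuming you keep the same class structure:

import logging

class Logger(object):
    def __init__(self, name):
        self.logger = logging.getLogger(name)
        self.logger.setLevel(logging.INFO)
        fh = logging.FileHandler("{}.log".format(name))
        # %(filename)s and %(lineno)d come from the LogRecord, so every line
        # shows the file (and line) the logging call was made from
        formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(filename)s:%(lineno)d - %(message)s')
        fh.setFormatter(formatter)
        self.logger.addHandler(fh)

    def get_logger(self):
        return self.logger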

Python logging in several modules outputs twice

I'm not very familiar with python logging and I am trying to get it to log output to the console. I've gotten it to work but it seems that the output is seen twice in the console and I'm not sure why. I've looked at the other questions asked here about similar situations but I could not find anything that helped me.
I have three modules, let's call them ma, mb, and mc, plus a main. The main imports these 3 modules and calls their functions.
In each module I set up a logger like so:
ma.py
import logging
logger = logging.getLogger('test.ma') #test.mb for mb.py/test.mc for mc.py
logger.setLevel(logging.DEBUG)
console_log = logging.StreamHandler()
console_log.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(message)s')
console_log.setFormatter(formatter)
logger.addHandler(console_log)
...
...
#right before the end of each module, after I'm finished logging what I need.
logger.removeHandler(console_log)
mb.py
import logging
logger = logging.getLogger('test.mb')
logger.setLevel(logging.DEBUG)
console_log = logging.StreamHandler()
console_log.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(message)s')
console_log.setFormatter(formatter)
logger.addHandler(console_log)
...
...
#end of file
logger.removeHandler(console_log)
mc.py
import logging
logger = logging.getLogger('test.mc')
logger.setLevel(logging.DEBUG)
console_log = logging.StreamHandler()
console_log.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(message)s')
console_log.setFormatter(formatter)
logger.addHandler(console_log)
...
...
#end of file
logger.removeHandler(console_log)
The main problem I'm having is that output is printed twice, and for some parts of the program it is not formatted. Any help would be appreciated, thanks!
A better approach to using a global logger is to use a function that wraps the call to logging.getLogger:
import logging

def get_logger(name):
    logger = logging.getLogger(name)
    if not logger.handlers:
        # Prevent logging from propagating to the root logger
        logger.propagate = 0
        console = logging.StreamHandler()
        logger.addHandler(console)
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(message)s')
        console.setFormatter(formatter)
    return logger

def main():
    logger = get_logger(__name__)
    logger.setLevel(logging.DEBUG)
    logger.info("This is an info message")
    logger.error("This is an error message")
    logger.debug("This is a debug message")

if __name__ == '__main__':
    main()
Output
2017-03-09 12:02:41,083 - __main__ - This is an info message
2017-03-09 12:02:41,083 - __main__ - This is an error message
2017-03-09 12:02:41,083 - __main__ - This is a debug message
Note: Using a function also allows us to set logger.propagate to False to prevent log messages from being sent to the root logger. This is almost certainly the cause of the duplicated output you are seeing.
Update: After calling get_logger, the desired level must be set by calling logger.setLevel. If this is not done, the logger inherits the root logger's default level (WARNING), so only warning and error messages will be seen.
To log from multiple modules to one global file, create a logging instance and then fetch that same instance in whichever modules you like. For example, in main I would have:
import logging
logging.basicConfig(filename='logfile.log', level=logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(message)s')
logger = logging.getLogger('Global Log')
Then, in every module that I want to be able to append to this log, I have:
import logging
logger = logging.getLogger('Global Log')
Now every time you access the logger in your module, you are accessing the same logging instance. This way, you only have to set up and format the logger once.
More about logging instance in the docs:
https://docs.python.org/2/library/logging.html#logging.getLogger
In my case I was seeing logging twice due to the way /dev/log redirects records to the rsyslog daemon I have running locally.
Using a
handler = logging.handlers.SysLogHandler(address='/run/systemd/journal/syslog')
solved my problem of double logging.
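A minimal sketch of wiring that handler up (the socket path is the one from the answer; adjust it for your system, and the logger name 'myapp' is just illustrative):

import logging
import logging.handlers

logger = logging.getLogger('myapp')
logger.setLevel(logging.INFO)

# Send records straight to the systemd/syslog socket instead of adding a
# StreamHandler whose output gets redirected to syslog a second time
handler = logging.handlers.SysLogHandler(address='/run/systemd/journal/syslog')
handler.setFormatter(logging.Formatter('%(name)s: %(levelname)s %(message)s'))
logger.addHandler(handler)

logger.info('logged once, via syslog')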

Logging using elasticsearch-py

I would like to log my python script that uses elasticsearch-py. In particular, I want to have three logs:
General log: log INFO and above both to the stdout and to a file.
ES log: only ES related messages only to a file.
ES tracing log: Extended ES logging (curl queries and their output for instance) only to a file.
Here is what I have so far:
import logging
import logging.handlers

es_logger = logging.getLogger('elasticsearch')
es_logger.setLevel(logging.INFO)
es_logger_handler = logging.handlers.RotatingFileHandler('top-camps-base.log',
                                                         maxBytes=0.5*10**9,
                                                         backupCount=3)
es_logger.addHandler(es_logger_handler)

es_tracer = logging.getLogger('elasticsearch.trace')
es_tracer.setLevel(logging.DEBUG)
es_tracer_handler = logging.handlers.RotatingFileHandler('top-camps-full.log',
                                                         maxBytes=0.5*10**9,
                                                         backupCount=3)
es_tracer.addHandler(es_tracer_handler)

logger = logging.getLogger('mainLog')
logger.setLevel(logging.DEBUG)

# create file handler
fileHandler = logging.handlers.RotatingFileHandler('top-camps.log',
                                                   maxBytes=10**6,
                                                   backupCount=3)
fileHandler.setLevel(logging.INFO)

# create console handler
consoleHandler = logging.StreamHandler()
consoleHandler.setLevel(logging.INFO)

# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
consoleHandler.setFormatter(formatter)
fileHandler.setFormatter(formatter)

# add the handlers to logger
logger.addHandler(consoleHandler)
logger.addHandler(fileHandler)
My problem is that INFO messages from es_logger are also displayed on the terminal. The log messages are, in fact, saved to the right files.
If I remove the part related to logger, then the ES logging works fine, i.e. it is only saved to the corresponding files. But then I don't have the other part.... What am I doing wrong with the last part of the settings?
Edit
Possible hint: In the sources of elasticsearch-py there's a logger named logger. Could it be that it conflicts with mine? I tried to change the name of logger to main_logger in the lines above but it didn't help.
Possible hint 2: If I replace logger = logging.getLogger('mainLog') with logger = logging.getLogger(), then the format of the output to the console of es_logger changes and becomes identical to the one defined in the snippet.
I think you are being hit by the somewhat confusing logger hierarchy propagation. Everything that is logged in "elasticsearch.trace" that passes the loglevel of that logger, will propagate first to the "elasticsearch" logger and then to the root ("") logger. Note that once the message passes the loglevel of the "elasticsearch.trace" logger, the loglevels of the parents ("elasticsearch" and root) are not checked, but all messages will be sent to the handlers. (The handlers themselves have log levels that do apply.)
Consider the following example that illustrates the issue, and a possible solution:
import logging
# The following line will basicConfig() the root handler
logging.info('DUMMY - NOT SEEN')
ll = logging.getLogger('foo')
ll.setLevel('DEBUG')
ll.addHandler(logging.StreamHandler())
ll.debug('msg1')
ll.propagate = False
ll.debug('msg2')
Output:
msg1
DEBUG:foo:msg1
msg2
You see that "msg1" is logged both by the "foo" logger and by its parent, the root logger (as "DEBUG:foo:msg1"). Then, when propagation is turned off with ll.propagate = False before "msg2", the root logger no longer logs it.
Now, if you were to comment out the first line (logging.info("DUMMY...")), the behavior would change so that the root logger line would not be shown. This is because the logging module's top-level functions info(), debug() etc. configure the root logger with a handler when no handler has yet been defined. That is also why you see different behavior in your example when you modify the root logger by doing logger = logging.getLogger().
I can't see in your code that you would be doing anything to the root logger, but as you see, a stray logging.info() or the like in your code or library code would cause a handler to be added.
So, to answer your question, I would set logger.propagate = False on the loggers where that makes sense for you; and where you do want propagation, check that the log levels of the handlers themselves are set as you want them.
Here is an attempt:
es_logger = logging.getLogger('elasticsearch')
es_logger.propagate = False
es_logger.setLevel(logging.INFO)
es_logger_handler = logging.handlers.RotatingFileHandler('top-camps-base.log',
                                                         maxBytes=0.5*10**9,
                                                         backupCount=3)
es_logger.addHandler(es_logger_handler)

es_tracer = logging.getLogger('elasticsearch.trace')
es_tracer.propagate = False
es_tracer.setLevel(logging.DEBUG)
es_tracer_handler = logging.handlers.RotatingFileHandler('top-camps-full.log',
                                                         maxBytes=0.5*10**9,
                                                         backupCount=3)
es_tracer.addHandler(es_tracer_handler)

logger = logging.getLogger('mainLog')
logger.propagate = False
logger.setLevel(logging.DEBUG)

# create file handler
fileHandler = logging.handlers.RotatingFileHandler('top-camps.log',
                                                   maxBytes=10**6,
                                                   backupCount=3)
fileHandler.setLevel(logging.INFO)

# create console handler
consoleHandler = logging.StreamHandler()
consoleHandler.setLevel(logging.INFO)

# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
consoleHandler.setFormatter(formatter)
fileHandler.setFormatter(formatter)

# add the handlers to logger
logger.addHandler(consoleHandler)
logger.addHandler(fileHandler)
