I'm trying to understand how to create a single instance of a logging class in a separate module and use it from the rest of my Python script. The automation script spans several files, and I want to record logs to a file and display them on the console at the same time, so I use two handlers: FileHandler() and StreamHandler(). The logger is initialized in a separate file called debugLogs.py, which is imported by multiple modules of the script. But when the separate modules import debugLogs.py, multiple instances of the logger are created, which means each message gets printed multiple times, and that is not what I want. That is why I want to use the singleton pattern to create just one instance. How do you suggest I go about doing that? Should I create a method inside the class called setLogger and use it to initialize the logger, and if so, how do I declare a method inside a singleton class? I have included my version of debugLogs.py below:
#debugLogs.py
import logging
import logging.handlers

#open readme file and read name of latest log file created
def get_name():
    with open("latestLogNames.txt") as f:
        for line in f:
            pass
        latestLog = line
    logfile_name = latestLog[:-1]
    return logfile_name

class Logger(object):
    _instance = None

    def __new__(self, logfile_name):
        if not self._instance:
            self._instance = super(Logger, self).__new__(self)
            logger = logging.getLogger(__name__)
            logger.setLevel(logging.INFO)
            formatter = logging.Formatter('%(message)s')
            file_handler = logging.FileHandler(logfile_name)
            file_handler.setFormatter(formatter)
            stream_handler = logging.StreamHandler()
            logger.addHandler(file_handler)
            logger.addHandler(stream_handler)
        return self._instance
So get_name() reads the name of the latest log file from the readme file latestLogNames.txt, and the logs go into that existing file. I understand that my singleton class code is not right, and I am confused about how to initialize the whole class structure, but somehow I have to pass that logfile_name value to the class. I am planning to call this logger from a different module with something like this:
#differentModule.py
import debugLogs
logger = debugLogs.Logger(debugLogs.get_name())
And then I would use logger.info("...") to print the logs as well as store them in the file. Please tell me how to restructure debugLogs.py and how to call it from the different modules of my script.
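For reference, one common way to get the single-instance behavior, sketched here under the assumption that get_name() stays as above (the logger name "automation" is made up for the example), is to skip the singleton class entirely and have debugLogs.py hand out one module-level logger, guarding against adding the handlers twice:

#debugLogs.py (sketch)
import logging

def get_logger():
    logger = logging.getLogger("automation")
    if not logger.handlers:  # configure only on the first call
        logger.setLevel(logging.INFO)
        formatter = logging.Formatter('%(message)s')
        file_handler = logging.FileHandler(get_name())
        file_handler.setFormatter(formatter)
        logger.addHandler(file_handler)
        logger.addHandler(logging.StreamHandler())
    return logger

Every module can then call logger = debugLogs.get_logger(); logging.getLogger() already returns the same Logger object for the same name, so no extra singleton machinery is needed.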
Let's say I have different classes defined like this:

class PS1:
    def __init__(self):
        pass

    def do_sth(self):
        logging.info('PS1 do sth')

class PS2:
    def __init__(self):
        pass

    def do_sth(self):
        logging.info('PS2 do sth')
How can I make PS1 log into PS1.log and PS2 log into PS2.log? I know I can set up loggers and file handlers and call something like ps1_logger.info instead of logging.info. However, my current code base already has hundreds of logging.info calls; do I have to change them all?
When you call logging methods at the module level you are creating logs directly on the root logger, so you can no longer separate them by logger. The only option left is to separate your logs by Handler, using a filter. Since you don't want to update your logging calls to add any information about where the call happens, you have to inspect the stack to find the calling class. The code below does this, but I highly recommend not using this solution in production; refactor your code so that it doesn't send everything to the root logger. Look into logging adapters and/or the extra keyword argument for an alternative solution that modifies the logging calls.
import logging
import inspect

root = logging.getLogger()

ps1_handler = logging.FileHandler('ps1.log')
ps2_handler = logging.FileHandler('ps2.log')

def make_class_filter(class_name):
    def class_filter(record):
        # Walk up the call stack and accept the record only if some
        # frame's `self` is an instance of the class we care about.
        for frame_info in inspect.stack():
            frame_locals = frame_info[0].f_locals
            if 'self' in frame_locals and class_name in str(frame_locals['self']):
                return True
        return False
    return class_filter

ps1_handler.addFilter(make_class_filter('PS1'))
ps2_handler.addFilter(make_class_filter('PS2'))

root.addHandler(ps1_handler)
root.addHandler(ps2_handler)

class PS1:
    def do_sth(self):
        logging.warning('PS1 do sth')

class PS2:
    def do_sth(self):
        logging.warning('PS2 do sth')

PS1().do_sth()
PS2().do_sth()
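As a sketch of the adapter/extra alternative mentioned above (it does require touching the call sites, and the 'cls' attribute name is made up for this example):

import logging

# Expose the custom 'cls' attribute in the output format.
logging.basicConfig(level=logging.INFO, format='%(cls)s: %(message)s')

# A LoggerAdapter injects the same extra dict into every call made
# through it, so each class owns an adapter instead of a logger.
ps1_logger = logging.LoggerAdapter(logging.getLogger(), {'cls': 'PS1'})
ps2_logger = logging.LoggerAdapter(logging.getLogger(), {'cls': 'PS2'})

ps1_logger.info('PS1 do sth')  # the record carries record.cls == 'PS1'
ps2_logger.info('PS2 do sth')

A handler filter can then route records on record.cls instead of inspecting the stack.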
I have two files: script.py and functions.py. In functions.py, I have the logger setup and a set of functions (a made-up one is below):
import logging
import os

class ecosystem():
    def __init__(self, environment, mode):
        self.logger = logging.getLogger(__name__)
        if os.path.exists('log.log'):
            os.remove('log.log')
        handler = logging.FileHandler('log.log')
        if mode.lower() == 'info':
            handler.setLevel(logging.INFO)
            self.logger.setLevel(logging.INFO)
        elif mode.lower() == 'warning':
            handler.setLevel(logging.WARNING)
            self.logger.setLevel(logging.WARNING)
        elif mode.lower() == 'error':
            handler.setLevel(logging.ERROR)
            self.logger.setLevel(logging.ERROR)
        elif mode.lower() == 'critical':
            handler.setLevel(logging.CRITICAL)
            self.logger.setLevel(logging.CRITICAL)
        else:
            handler.setLevel(logging.DEBUG)
            self.logger.setLevel(logging.DEBUG)

        #Logging file format
        formatter = logging.Formatter(' %(levelname)s | %(asctime)s | %(message)s \n')
        handler.setFormatter(formatter)

        #Add the handler to the logger
        self.logger.addHandler(handler)
        self.logger.info('Logging starts here')

    def my_function(self):
        self.logger.debug('test log')
        return True
I'm trying to call ecosystem.my_function from script.py, but when I do, the logger.debug message shows up in both the terminal window AND log.log. Any ideas why this might be happening?
If it helps, I also import other modules into functions.py; if those modules import logging as well, could that cause issues?
It looks like you're initializing the logger with the log.log file inside the __init__ method of the ecosystem class. This means that any code that creates an ecosystem object will initialize the logger. Somewhere in your code, in one of the files, you are creating that object, and hence the logger is initialized and writes to the file.
Note that you do not need to call __init__ yourself, as it is called on object creation, i.e. after this call
my_obj = ecosystem(environment, mode)
log files will be written.
You're asking why both stderr and the file are used after your new file handler is attached. This is because of the propagate attribute. By default propagate is True, which means your log record will bubble up the hierarchy of loggers, and each one will continue handling it. Since the default root logger is at the top of the hierarchy, it will handle your log too. Setting propagate to False fixes this:
self.logger.propagate = False
You might want to read up a bit on logging. Also, if you want to keep your sanity regarding logging, check how you can use a dict to configure it (logging.config.dictConfig).
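For reference, a minimal dictConfig sketch, with names and format made up to roughly match the code above:

import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {'format': '%(levelname)s | %(asctime)s | %(message)s'},
    },
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'log.log',
            'formatter': 'simple',
        },
    },
    'root': {'level': 'DEBUG', 'handlers': ['file']},
})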
I have the following file structure in my Python project:
Main.py
classes
|--- __init__.py
|--- Common.py
|--- Logger.py
|--- Dictionary.py
I'm setting a static variable of the Common class from within my Main file:
from classes.Common import Common
# set path to the log file for log output
Common.set_log_file_path("C:\\logging\\output.txt")
which gets set in the Common class:
class Common():
    log_file_path = ''

    @staticmethod
    def set_log_file_path(log_file_path):
        Common.log_file_path = log_file_path
Now I instantiate a Logger object from within my Main file:
from classes.Logger import Logger
# initiate logger
log = Logger()
The Logger object reads the log file path from the Common object which works fine:
from Common import Common
class Logger():
    log_file_path = ''

    def __init__(self, log_file_path=''):
        # if not specified, take default log file path from Common class
        if log_file_path == '':
            self.log_file_path = Common.log_file_path
Now comes the problem: From my Main file I instantiate a Dictionary object:
from classes.Dictionary import Dictionary
# load dictionary
dictionary = Dictionary()
In the dictionary object I also want to have a logger, so I create one there:
from Logger import Logger
class Dictionary():
    log = Logger()

    def __init__(self):
        Dictionary.log.info('Dictionary > __init__()')
But this one doesn't work. Somehow, when the Logger inside Dictionary tries to load the log file path from the Common class, it is empty.
Why is that? Shouldn't it be the same Common class, and therefore hold the same static information? Am I missing something? Am I doing the imports in a wrong way?
I'm working with Python 2.6.5, and my imports are as follows:
Main.py imports Dictionary, Logger, Common
Dictionary.py imports Logger
Logger.py imports Common
Common has no imports
Almost everything in a Python module is dynamically executed, including import and class statements.
When you import the Dictionary module for the first time, the module is executed. That means the import statement that imports Logger runs, and after Logger is imported, the class Dictionary: statement is executed, which instantiates a Logger object in the class body. At this point your Common.set_log_file_path() has not yet been called, so that Logger object picks up the default value that was set when class Common: was executed.
The "solution" would be to import Common and set the default path before executing anything that actually uses that default path attribute:
from classes.Common import Common
Common.log_file_path = 'C:/logging/output.txt'
from classes.Dictionary import Dictionary
from classes.Logger import Logger
"Solution" in quotes because having imports depend on other code being executed first is something that can turn into a little nightmare very easily. Therefore it is something very seldom seen in production code.
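An alternative that sidesteps the ordering problem is to not create the Logger in the class body at all, but lazily on first use. A sketch (the _get_log helper is made up for illustration):

from Logger import Logger

class Dictionary():
    log = None

    @classmethod
    def _get_log(cls):
        # Created on first use, long after Main.py has set the path.
        if cls.log is None:
            cls.log = Logger()
        return cls.log

    def __init__(self):
        Dictionary._get_log().info('Dictionary > __init__()')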
I want to write a logger which I can use in multiple modules. I must be able to enable and disable it from one place. And it must be reusable.
Following is the scenario.
switch_module.py
class Brocade(object):
    def __init__(self, ip, username, password):
        ...

    def connect(self):
        ...

    def disconnect(self):
        ...

    def switch_show(self):
        ...
switch_module_library.py
import switch_module

class Keyword_Mapper(object):
    def __init__(self, keyword_to_execute):
        self._brocade_object = switch_module.Brocade(ip, username, password)
        ...

    def map_keyword_to_command(self):
        ...
application_gui.py
class GUI:
    # I can open a file containing keywords for the brocade switch
    # in this GUI in a tab and tree widget (it uses PyQt, which I don't know).
    # Each tab is a QThread and there could be multiple tabs.
    # Each tab is accompanied by an execute button.
    # On pressing execute it will read the strings/keywords from the file,
    # create an object of the Keyword_Mapper class and call the
    # map_keyword_to_command method, execute the command on the brocade
    # switch and log the results. Currently I am logging the result
    # only from the Keyword_Mapper class.
The problem I have is how to write a logger which can be enabled and disabled at will, and which logs to one file as well as to the console from all three modules.
I tried writing a global logger in __init__.py and importing it in all three modules, giving it a common name so that they all log to the same file, but then I ran into trouble because of the multi-threading. Later I created a logger that logs to a file with the thread id in its name, so that I get one log file per thread.
What if I am required to log to a different file rather than the same file?
I have gone through the Python logging documentation but am unable to get a clear picture of how to write a proper logging system that can be reused.
I have gone through this link:
Is it better to use root logger or named logger in Python
but because the GUI was created by someone else using PyQt, and because of the multi-threading, I am unable to get my head around logging here.
In my project I only use the root logger (I don't have the time to create named loggers, even if it would be nice). So if you don't want to use a named logger, here is a quick solution. I created a function to set up the logger quickly:
import logging

def initLogger(level=logging.DEBUG):
    if level == logging.DEBUG:
        # Display more stuff when in debug mode
        logging.basicConfig(
            format='%(levelname)s-%(module)s:%(lineno)d-%(funcName)s: %(message)s',
            level=level)
    else:
        # Display less stuff for info mode
        logging.basicConfig(format='%(levelname)s: %(message)s', level=level)
I created a package for it so that I can import it anywhere.
Then, in my top level I have:
import logging
import LoggingTools

if __name__ == '__main__':
    # Configure logger
    LoggingTools.initLogger(logging.DEBUG)
    #LoggingTools.initLogger(logging.INFO)
Depending on whether I am debugging or not, I use the corresponding statement.
Then in each of the other files, I just use logging directly:
import logging

class MyClass():
    def __init__(self):
        logging.debug("Debug message")
        logging.info("Info message")
I can create a named child logger, so that all the logs output by that logger are marked with its name. I can use that logger exclusively in my function/class/whatever.
However, if that code calls out to functions in another module that logs via the module-level logging functions (which proxy to the root logger), how can I ensure that those log messages go through the same logger (or are at least logged in the same way)?
For example:
main.py
import logging
import other

def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    do_stuff(logger)
other.py
import logging

def do_more_stuff():
    logging.info("doing other stuff")
Outputs:
$ python main.py
INFO:stuff:doing stuff
INFO:root:doing other stuff
I want both log lines to be marked with the name 'stuff', and I want to achieve this by changing only main.py.
How can I make the logging calls in other.py use a different logger without changing that module?
This is the solution I've come up with: using thread-local data to store the contextual information, and a Filter on the root logger's handlers to add this information to LogRecords before they are emitted.
import logging
import threading

context = threading.local()
context.name = None

class ContextFilter(logging.Filter):
    def filter(self, record):
        # Prefix the record's logger name with the current context name.
        if context.name is not None:
            record.name = "%s.%s" % (context.name, record.name)
        return True
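For the filter to take effect it still has to be attached to the root logger's handlers; assuming basicConfig() has installed one, the wiring could look like this:

logging.basicConfig(level=logging.INFO)
for handler in logging.getLogger().handlers:
    handler.addFilter(ContextFilter())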
This is fine for me, because I'm using the logger name to indicate what task was being carried out when this message was logged.
I can then use context managers or decorators to make logging from a particular passage of code all appear as though it was logged from a particular child logger.
import contextlib
import functools

@contextlib.contextmanager
def logname(name):
    old_name = context.name
    if old_name is None:
        context.name = name
    else:
        context.name = "%s.%s" % (old_name, name)
    try:
        yield
    finally:
        context.name = old_name

def as_logname(name):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            with logname(name):
                return f(*args, **kwargs)
        return wrapper
    return decorator
So then, I can do:
with logname("stuff"):
    logging.info("I'm doing stuff!")
    do_more_stuff()
or:
@as_logname("things")
def do_things():
    logging.info("Starting to do things")
    do_more_stuff()
The key thing is that any logging done by do_more_stuff() will appear as though it was logged with either a "stuff" or "things" child logger, without changing do_more_stuff() at all.
This solution would have problems if you were going to have different handlers on different child loggers.
Use logging.setLoggerClass so that all loggers used by other modules use your logger subclass (emphasis mine):
Tells the logging system to use the class klass when instantiating a logger. The class should define __init__() such that only a name argument is required, and the __init__() should call Logger.__init__(). This function is typically called before any loggers are instantiated by applications which need to use custom logger behavior.
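A minimal sketch of the mechanics (the subclass is made up; note that setLoggerClass only affects loggers created after the call, so the pre-existing root logger used by the module-level logging.info() calls is not covered):

import logging

class StuffLogger(logging.Logger):
    # Hypothetical subclass: tag every info message.
    def info(self, msg, *args, **kwargs):
        super(StuffLogger, self).info("stuff: %s" % msg, *args, **kwargs)

logging.setLoggerClass(StuffLogger)
logger = logging.getLogger("stuff")  # an instance of StuffLogger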
This is what handlers (logging.handlers, plus the basic handlers in the logging module itself) are for. In addition to creating your logger, you create one or more handlers to send the logging information to various places and add them to the root logger. Most modules that do logging create a logger that they use for their own purposes but depend on the controlling script to create the handlers. Some frameworks decide to be super helpful and add handlers for you.
Read the logging docs; it's all there.
(edit)
logging.basicConfig() is a helper function that adds a single handler to the root logger. You can control the format string it uses with the 'format=' parameter. If all you want to do is have all modules display "stuff", then use logging.basicConfig(level=logging.INFO, format="%(levelname)s:stuff:%(message)s").
The logging.{info,warning,…} functions just call the respective methods on a Logger object called root (cf. the logging module source), so if you know the other module only ever calls the functions exported by the logging module, you can overwrite the logging name in other's namespace with your logger object:
import logging
import other

def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    # Overwrite other.logging with your just-generated logger object:
    other.logging = logger
    do_stuff(logger)
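With that monkey-patch in place both records go through the "stuff" logger, so the output should become:

$ python main.py
INFO:stuff:doing stuff
INFO:stuff:doing other stuff

The obvious caveat: this only holds as long as other.py sticks to the module-level functions; a call such as logging.getLogger(...) inside other.py would break, because the patched-in Logger object has no such attribute.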