Python - Avoid passing logger reference between functions?

I have a simple Python script that uses the in-built logging.
I'm configuring logging inside a function. Basic structure would be something like this:
#!/usr/bin/env python
import logging
import ...

def configure_logging():
    logger = logging.getLogger("my logger")
    logger.setLevel(logging.DEBUG)

    # Format for our loglines
    formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

    # Setup console logging
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
    ch.setFormatter(formatter)
    logger.addHandler(ch)

    # Setup file logging as well
    fh = logging.FileHandler(LOG_FILENAME)
    fh.setLevel(logging.DEBUG)
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    return logger

def count_parrots():
    ...
    logger.debug??

if __name__ == '__main__':
    logger = configure_logging()
    logger.debug("I'm a log file")
    parrots = count_parrots()
I can call logger fine from inside __main__. However, how do I call logger from inside the count_parrots() function? What's the most Pythonic way of configuring a logger like this?

You can either use the root (default) logger, and thus the module-level functions logging.debug(), logging.info(), etc., or get your logger inside the function that uses it.
Indeed, getLogger is a factory-like function backed by a registry (singleton-like): it always returns the same instance for a given logger name.
You can thus get your logger in count_parrots by simply using
logger = logging.getLogger("my logger")
at the beginning. However, the convention is to use a dotted hierarchical name for your logger. See http://docs.python.org/library/logging.html#logging.getLogger
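For example, a minimal sketch of count_parrots along those lines (the body and return value are invented for illustration):
def count_parrots():
    # Fetch the same "my logger" instance that configure_logging() set up
    logger = logging.getLogger("my logger")
    parrots = 3  # placeholder count
    logger.debug("Counted %d parrots", parrots)
    return parrots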
EDIT:
You can use a decorator to add the logging behaviour to your individual functions, for example:
def debug(loggername):
    logger = logging.getLogger(loggername)
    def log_(enter_message, exit_message=None):
        def wrapper(f):
            def wrapped(*args, **kargs):
                logger.debug(enter_message)
                r = f(*args, **kargs)
                if exit_message:
                    logger.debug(exit_message)
                return r
            return wrapped
        return wrapper
    return log_

my_debug = debug('my.logger')

@my_debug('enter foo', 'exit foo')
def foo(a, b):
    return a + b
you can "hardcode" the logger name and remove the top-level closure and my_debug.

You can just do:
logger = logging.getLogger("my logger")
in your count_parrots() function. When you pass the name that was used earlier (i.e. "my logger"), the logging module returns the same instance that was created for that name.
Update: From the logging tutorial (emphasis mine):
getLogger() returns a reference to a logger instance with the specified name if it is provided, or root if not. The names are period-separated hierarchical structures. Multiple calls to getLogger() with the same name will return a reference to the same logger object.
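You can check this identity behaviour yourself with a quick sketch:
import logging

a = logging.getLogger("my logger")
b = logging.getLogger("my logger")
print(a is b)  # True: both calls return the same registered logger object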

The typical way to handle logging is to have a per-module logger stored in a global variable. Any functions and methods within that module then just reference that same logger instance.
This is discussed briefly in the intro to the advanced logging tutorial in the documentation:
http://docs.python.org/howto/logging.html#advanced-logging-tutorial
You can pass logger instances around as parameters, but doing so is rarely necessary.
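A minimal sketch of that per-module pattern (module and function names are illustrative):
# parrots.py
import logging

# Module-level logger; __name__ yields a dotted, per-module logger name
logger = logging.getLogger(__name__)

def count_parrots():
    logger.debug("counting parrots")
    return 0  # placeholder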

I got confused by how global variables work in Python. Within a function you only need to declare global logger if you are doing something like logger = logging.getLogger("my logger") and expecting that assignment to rebind the global logger.
So to modify your example, you can create a global logger object at the start of the file. If your module can be imported by another one, you should add a NullHandler so that, if the importer of the library doesn't want logging enabled, they don't have any issues with your lib (ref).
#!/usr/bin/env python
import logging
import ...

logger = logging.getLogger("my logger")
logger.addHandler(logging.NullHandler())  # addHandler() returns None, so attach the handler on its own line

def configure_logging():
    logger.setLevel(logging.DEBUG)
    # Format for our loglines
    formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    # Setup console logging
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
    ch.setFormatter(formatter)
    logger.addHandler(ch)
    # Setup file logging as well
    fh = logging.FileHandler(LOG_FILENAME)
    fh.setLevel(logging.DEBUG)
    fh.setFormatter(formatter)
    logger.addHandler(fh)

def count_parrots():
    ...
    logger.debug('counting parrots')
    ...
    return parrots

if __name__ == '__main__':
    configure_logging()
    logger.debug("I'm a log file")
    parrots = count_parrots()

If you don't need the log messages on your console, you can use logging.basicConfig in a minimalist way.
Alternatively you can use tail -f myapp.log to see the messages on the console.
import logging

logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                    filename='myapp.log',
                    level=logging.INFO)

def do_something():
    logging.info('Doing something')

def main():
    logging.info('Started')
    do_something()
    logging.info('Finished')

if __name__ == '__main__':
    main()

You can pass the logger as an argument to count_parrots(). Or, what I would do: create a Parrots class and make the logger one of its attributes.
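A minimal sketch of that class-based approach (the class and method names are invented for illustration):
import logging

class Parrots:
    def __init__(self):
        # Each instance keeps a reference to the shared named logger
        self.logger = logging.getLogger("my logger")

    def count(self):
        self.logger.debug("counting parrots")
        return 0  # placeholder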

Related

How to get access to the logger from another module?

I am having trouble getting access, from another module, to the logger created in the main program.
For example:
In package "common" I have a module "util01.py" with a function get_logger:
util01.py
import logging

def get_logger(file_name, logger_level):
    # get logger
    logger = logging.getLogger(__name__)
    # set desired level
    logger.setLevel(logger_level)
    # Get needed formatter
    formatter = logging.Formatter('%(asctime)s %(module)s %(lineno)d %(levelname)s %(message)s')
    # Get the log file handler
    fh = logging.FileHandler(file_name, mode='w')
    # Apply formatter and level to log file handler
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    return logger
In main.py, I create that logger:
import logging
import OSLCHelper
from common import util01

my_logger = util01.get_logger('c:\\temp\\test1.log', logging.INFO)
In main.py, the my_logger has proper visibility.
From main, I want to execute a function from another module, e.g. a function from OSLCHelper.py:
return OSLCHelper.get_something(var1)
Now, I have another module, OSLCHelper.py, with the following code:
import logging
from common import util01

def get_something(var1):
    my_logging.info("i am in getsomething method")  # my_logging does not exist
Unfortunately, I don't have access to the "my_logger" variable, and nothing gets logged to the test1.log file.
How to get access to "my_logger" from different modules? Any best practices?
Please help
I tried the above and it did not work
From logging.getLogger():
All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
So one solution would be to replace
logger = logging.getLogger(__name__)
with
logger = logging.getLogger("OSLC")
or any other string that makes sense, I'm guessing.
Then you can always "ask" the logging module for the logger associated with that name, from any module:
import logging
logger = logging.getLogger("OLSC")
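Applied to the question's OSLCHelper.py, that would look roughly like this (assuming util01.get_logger was changed to use the fixed name "OSLC"):
# OSLCHelper.py
import logging

# Ask the logging registry for the logger configured elsewhere (e.g. via util01.get_logger)
logger = logging.getLogger("OSLC")

def get_something(var1):
    logger.info("i am in get_something method")
    return var1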

Two loggers for two separate python files

I have two files, entrypoint.py and op_helper.py, and I am trying to send each script's logs to a different log file (webhook.log & op.log). I set up my logger.py file with two different logger classes.
import logging
from logging.handlers import TimedRotatingFileHandler
from datetime import datetime

class Logger:
    def create_timed_rotating_log(self, path):
        logger = logging.getLogger("Rotating Log")
        logger.setLevel(logging.INFO)
        handler = TimedRotatingFileHandler(path,
                                           when="d",
                                           interval=1,
                                           backupCount=7)
        formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        return logger

class WebhookLogger:
    def create_timed_rotating_log(self, path):
        logger = logging.getLogger("Rotating Log")
        logger.setLevel(logging.INFO)
        handler = TimedRotatingFileHandler(path,
                                           when="d",
                                           interval=1,
                                           backupCount=7)
        formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        return logger

today = datetime.today()
month = today.strftime("%B")

logger = Logger().create_timed_rotating_log(f'./{month + str(today.year)}Logger.log')
webhook_logger = WebhookLogger().create_timed_rotating_log(f'./{month + str(today.year)}WebhookLogger.log')
In my entrypoint.py script:
from logger import webhook_logger
webhook_logger.info("Something to log")
And in my op_helper.py script:
from logger import logger
logger.info("Something else to log")
But when I run the script, both log statements are logged to both log files.
2021-10-15 14:17:51 INFO Something to log
2021-10-15 14:17:51 INFO Something else to log
Can anyone explain to me what's going on here, and possibly, what I'm doing incorrectly?
Thank you in advance!
Here is an excerpt from the documentation for logging (the bold is mine):
logging.getLogger(name=None)
Return a logger with the specified name or, if name is None, return a logger which is the root logger of the hierarchy. If specified, the name is typically a dot-separated hierarchical name like ‘a’, ‘a.b’ or ‘a.b.c.d’. Choice of these names is entirely up to the developer who is using logging.
All calls to this function with a given name return the same logger instance. This means that logger instances never need to be passed between different parts of an application.
...
The solution, therefore, is to assign a different name to your second logger.
EDIT:
Keep in mind, however, that calling getLogger either creates a new instance (if one with the given name doesn't exist yet) or returns the already existing instance. Therefore every subsequent call only modifies that existing logger. If your intention is to use your classes to create multiple instances of one logger type, that approach will not work. Right now both classes do exactly the same thing, so there's not really a need for two separate classes either. As you can see, logging doesn't lend itself well to an object-oriented approach, because the logger objects are already instantiated elsewhere and can be accessed as "global" objects. But this is all just a side note.
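A minimal sketch of that fix, collapsing the duplicated classes into one helper and giving each logger its own name (the names and paths here are illustrative):
import logging
from logging.handlers import TimedRotatingFileHandler

def create_timed_rotating_log(name, path):
    # Each distinct name yields a distinct logger in the registry
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = TimedRotatingFileHandler(path, when="d", interval=1, backupCount=7)
    handler.setFormatter(logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                           datefmt='%Y-%m-%d %H:%M:%S'))
    logger.addHandler(handler)
    return logger

logger = create_timed_rotating_log("op", "./op.log")
webhook_logger = create_timed_rotating_log("webhook", "./webhook.log")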

Logging across sub-modules – how to access parent logger name from sub-modules?

I am trying to add logging to my Python application, which has several modules and submodules. Several sites say to create child loggers in modules. The advantage I see is that, since the child logger inherits the parent's logging config, it keeps the logging output consistent (handlers, formatters, ...).
So far I am hard-coding the main application's logger name in each class and concatenating it with the current class's name (parentName.childName) to get the module's class loggers. It does not feel right, nor scalable. How could I improve this so I don't have to hardcode the main logger name in each class? Here is what my code looks like:
Py file that I run:
###app.py
import logging
from SubModule1 import *

class myApplication():

    loggerName = 'myAPI'

    def __init__(self, config_item_1=None, config_item_2=None, ..., config_item_N=None):
        self.logger = self.logConfig()
        self.logger.info("Starting my Application")
        self.object1 = mySubClass1(arg1, arg2, ..., argM)

    def logConfig(self):
        fileLogFormat = '%(asctime)s - %(levelname)s - %(message)s'
        consoleLogFormat = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        # create logger
        #logger = logging.getLogger(__name__)
        logger = logging.getLogger(self.loggerName)
        logger.setLevel(logging.DEBUG)

        ####CONSOLE HANDLER####
        # create console handler and set level to debug
        ch = logging.StreamHandler()
        ch.setLevel(logging.INFO)
        # create formatter
        consoleFormatter = logging.Formatter(consoleLogFormat)
        # add formatter to ch
        ch.setFormatter(consoleFormatter)
        # add ch to logger
        logger.addHandler(ch)

        ####FILE HANDLER####
        # create file handler and set level to debug
        fh = logging.FileHandler('API_Log.txt')
        fh.setLevel(logging.DEBUG)
        # create formatter
        fileFormatter = logging.Formatter(fileLogFormat)
        # add formatter to fh
        fh.setFormatter(fileFormatter)
        # add fh to logger
        logger.addHandler(fh)

        logger.info('Logging is started!!')
        return logger

    def connect(self):
        self.logger.info("Connecting to API")
        self.object1.connect()

if __name__ == "__main__":
    config = {"config_item_1": x,
              "config_item_2": y,
              ...
              "config_item_N": z
              }
    myApp = myApplication(config['config_item_1'], config['config_item_2'], ..., config['config_item_N'])
    myApp.connect()
Module:
###SubModule1.py
import logging
import app

class mySubClass1():

    appRootLoggerName = 'myAPI'

    def __init__(self, item1):
        self.logger = logging.getLogger(self.appRootLoggerName + '.' + mySubClass1.__name__)
        self.logger.debug("mySubClass1 object created - mySubClass1")
        self.classObject1 = item1
The line below is the one that is bothering me. What alternative to self.appRootLoggerName + '.' + mySubClass1.__name__ would keep the same logging configuration shared across my application's modules/classes?
logging.getLogger(self.appRootLoggerName + '.' + mySubClass1.__name__)

Why is my configured logger not being used?

Reading the logging HOWTO (https://docs.python.org/3/howto/logging.html) I came away under the impression that if I configured a logger, then I could subsequently request my logger from the factory via logging.getLogger() and python would know how to get the right logger (the one I configured) and everything would just auto-work, i.e. I wouldn't need to pass the configured logger instance around my code, I could just ask for it wherever I needed it. Instead, I'm observing something different.
File log_tester.py:
from util.logging_custom import SetupLogger
import logging
import datetime
def test():
    logger = logging.getLogger()
    logger.debug("In test()")

def main():
    logger = SetupLogger("logger_test")
    logger.setLevel(logging.DEBUG)
    logger.info(f"now is {datetime.datetime.now()}", )
    logger.debug("In main()")
    test()

if __name__ == '__main__':
    main()
File util/logging_custom.py:
import os
import time
import logging
from logging.handlers import RotatingFileHandler
def SetupLogger(name_prefix):
    if not os.path.exists("log"):
        os.makedirs("log")
    recfmt = logging.Formatter('%(asctime)s.%(msecs)03d %(levelname)s %(message)s')
    handler = RotatingFileHandler(time.strftime(f"log/{name_prefix}.log"), maxBytes=5000000, backupCount=10)
    handler.setFormatter(recfmt)
    handler.setLevel(logging.DEBUG)
    logger = logging.getLogger(f"{name_prefix} {__name__}")
    logger.addHandler(handler)
    return logger
When I run this code, only the debug statement that is in main() ends up in the log file. Where the debug statement from test() ends up, I'm not sure exactly.
Contents of log/logger_test.log:
2019-02-07 09:14:39,906.906 INFO now is 2019-02-07 09:14:39.906848
2019-02-07 09:14:39,906.906 DEBUG In main()
My expectation was that In test() would also show up in my log file. Have I made some assumptions about how python logging works that are untrue? How do I make it so that all of the logging in my program (which has many classes and modules) goes to the same configured logger? Is that possible without passing around a logger instance everywhere, after it's created in main()?
Thanks.
The getLogger function returns the logger registered under the given name (kind of a singleton):
if it doesn't exist, it creates it
if it already exists, it returns it
Then what you could do is:
util/logging_custom.py
def SetupLogger(logger_name, level=logging.INFO):
    if not os.path.exists("log"):
        os.makedirs("log")
    recfmt = logging.Formatter('%(asctime)s.%(msecs)03d %(levelname)s %(message)s')
    handler = RotatingFileHandler(time.strftime(f"log/{logger_name}.log"), maxBytes=5000000, backupCount=10)
    handler.setFormatter(recfmt)
    handler.setLevel(level)
    logger = logging.getLogger(logger_name)
    logger.addHandler(handler)
    # no need to return the logger, I would even advise not to do so
log_tester.py
from util.logging_custom import SetupLogger
import logging
import datetime
SetupLogger("logger_test", logging.DEBUG)  # you only need to run this once, in your main script
logger = logging.getLogger("logger_test")

def test():
    logger.debug("In test()")

def main():
    logger.info(f"now is {datetime.datetime.now()}", )
    logger.debug("In main()")
    test()

if __name__ == '__main__':
    main()
any_other.py
import logging
logger = logging.getLogger("logger_test") # this will return the logger you already instantiate in log_tester.py
logger.info("that works!")
Update
To set the level and the handling of the root logger instead of the one you set up, use logging.getLogger() without passing any name:
root_logger = logging.getLogger()
root_logger.addHandler(your_handler)
root_logger.setLevel(logging.DEBUG)
root_logger.info("hello world")
From the docs:
Multiple calls to getLogger() with the same name will return a reference to the same logger object.
Your assumptions are quite correct. The problem here is the way you are calling getLogger() in test(). You should be passing the name you used in SetupLogger()'s getLogger() call, i.e. logger = logging.getLogger(f"{name_prefix} {__name__}").
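In other words, with the original SetupLogger left unchanged, test() would have to fetch the logger under that composite name, something like this (assuming the module is imported as util.logging_custom, so __name__ there is "util.logging_custom"):
def test():
    # Must match the name SetupLogger built: f"{name_prefix} {__name__}"
    logger = logging.getLogger("logger_test util.logging_custom")
    logger.debug("In test()")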

logging in multiple classes with module name in log

I want to use the logging module instead of printing for debug information and documentation.
The goal is to print on the console with DEBUG level and log to a file with INFO level.
I read through a lot of documentation, the cookbook, and other tutorials on the logging module, but couldn't figure out how I can use it the way I want. (I'm on Python 2.5.)
I want to have the names of the modules in which the logs are written in my logfile.
The documentation says I should use logger = logging.getLogger(__name__), but how do I declare the loggers used in classes in other modules/packages so that they use the same handlers as the main logger? To recognize the 'parent' I can use logger = logging.getLogger(parent.child), but how do I know who has called the class/method?
The example below shows my problem: if I run this, the output only contains the __main__ logs and ignores the logs in Class.
This is my Mainfile:
# main.py
import logging
from module import Class
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# create file handler which logs info messages
fh = logging.FileHandler('foo.log', 'w', 'utf-8')
fh.setLevel(logging.INFO)
# create console handler with a debug log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# creating a formatter
formatter = logging.Formatter('- %(name)s - %(levelname)-8s: %(message)s')
# setting handler format
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)
if __name__ == '__main__':
    logger.info('Script starts')
    logger.info('calling class Class')
    c = Class()
    logger.info('calling c.do_something()')
    c.do_something()
    logger.info('calling c.try_something()')
    c.try_something()
Module:
# module.py
import logging

class Class:
    def __init__(self):
        self.logger = logging.getLogger(__name__)  # What do I have to enter here?
        self.logger.info('creating an instance of Class')
        self.dict = {'a': 'A'}

    def do_something(self):
        self.logger.debug('doing something')
        a = 1 + 1
        self.logger.debug('done doing something')

    def try_something(self):
        try:
            logging.debug(self.dict['b'])
        except KeyError, e:
            logging.exception(e)
Output in console:
- __main__ - INFO : Script starts
- __main__ - INFO : calling class Class
- __main__ - INFO : calling c.do_something()
- __main__ - INFO : calling c.try_something()
No handlers could be found for logger "module"
Besides: Is there a way to get the module names where the logs occurred into my logfile, without declaring a new logger in each class like above? Also, this way I have to call self.logger.info() each time I want to log something. I would prefer to use logging.info() or logger.info() throughout my code.
Is a global logger perhaps the right answer for this? But then I won't get the modules where the errors occur in the logs...
And my last question: Is this pythonic? Or is there a better recommendation to do such things right.
In your main module, you're configuring the logger of name '__main__' (or whatever __name__ equates to in your case) while in module.py you're using a different logger. You either need to configure loggers per module, or you can configure the root logger (by configuring logging.getLogger()) in your main module which will apply by default to all loggers in your project.
I recommend using configuration files for configuring loggers. This link should give you a good idea of good practices: http://victorlin.me/posts/2012/08/26/good-logging-practice-in-python
EDIT: use %(module)s in your formatter to include the module name in the log message.
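For example, a sketch of a root-logger setup whose format string includes the module name (the handler choice and format are illustrative):
import logging

# Configure the root logger; %(module)s adds the emitting module's name to each record
logging.basicConfig(format='- %(module)s - %(levelname)-8s: %(message)s',
                    level=logging.DEBUG)

logging.getLogger(__name__).info('hello')  # the record shows which module it was logged from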
The generally recommended logging setup is having at most 1 logger per module.
If your project is properly packaged, __name__ will have the value of "mypackage.mymodule", except in your main file, where it has the value "__main__"
If you want more context about the code that is logging messages, note that you can set your formatter with a formatter string like %(funcName)s, which will add the function name to all messages.
If you really want per-class loggers, you can do something like:
class MyClass:
    def __init__(self):
        self.logger = logging.getLogger(__name__ + "." + self.__class__.__name__)
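With that, log records emitted from MyClass carry a logger name like mypackage.mymodule.MyClass (assuming the module lives in a package), so the class name shows up wherever %(name)s appears in your format string.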
