How to set a handler for all loggers within a project?

I want to use a memory logger in my project. It keeps track of the last n logging records. A minimal example main file looks like this:
import sys
import logging
from logging import StreamHandler

from test_module import do_stuff

logger = logging.getLogger(__name__)


class MemoryHandler(StreamHandler):
    def __init__(self, n_logs: int):
        StreamHandler.__init__(self)
        self.n_logs = n_logs
        self.my_records = []

    def emit(self, record):
        self.my_records.append(self.format(record))
        self.my_records = self.my_records[-self.n_logs:]

    def to_string(self):
        return '\n'.join(self.my_records)


if __name__ == '__main__':
    logging.basicConfig(stream=sys.stdout, level=logging.INFO)
    mem_handler = MemoryHandler(n_logs=10)
    logger.addHandler(mem_handler)
    logger.info('hello')
    do_stuff()
    print(mem_handler.to_string())
The test module I am importing do_stuff from looks like this:
import logging

logger = logging.getLogger(__name__)

def do_stuff():
    logger.info('doing stuff')
When I run the main file, two log statements appear: the one from main and the one from do_stuff. But the memory handler only receives "hello" and not "doing stuff":
INFO:__main__:hello
INFO:test_module:doing stuff
hello
I assume that this is because mem_handler is not added to the test_module logger. I can fix this by adding the mem_handler explicitly:
logging.getLogger('test_module').addHandler(mem_handler)
But in general I don't want to list all modules and add the mem_handler manually. How can I add the mem_handler to all loggers in my project?

The Python logging system is hierarchical. That means there is a tree-like structure similar to the package structure. The structure works by logger name, and the levels are separated by dots.
If you use the module's __name__ to get the logger, it will be equivalent to the dotted name of the module within the package, for example:
package.subpackage.module
In this hierarchical system a message is sent up the logger structure (unless one of the loggers is explicitly configured with propagate=False).
So the best way to add a handler is to add it to the root logger at the top of the structure and make sure all loggers below it propagate.
You can get the root logger with logging.getLogger() (without any name) and then add handlers or other configuration as you like.
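Applied to the question's code, a minimal sketch (reusing the MemoryHandler and do_stuff from above) would be:
import sys
import logging

if __name__ == '__main__':
    logging.basicConfig(stream=sys.stdout, level=logging.INFO)
    mem_handler = MemoryHandler(n_logs=10)
    # Attach the handler to the root logger rather than to __main__'s logger;
    # records from test_module propagate up to the root and reach it as well.
    logging.getLogger().addHandler(mem_handler)
    logger.info('hello')
    do_stuff()
    print(mem_handler.to_string())
With this change the memory handler receives both "hello" and "doing stuff".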


How to configure logging to all scripts in the project?

I have put the following in my config.py:
import time
import logging
#logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S', level=logging.INFO)
logFormatter = logging.Formatter('%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
rootLogger = logging.getLogger()
rootLogger.setLevel(logging.INFO)
fileHandler = logging.FileHandler("{0}.log".format(time.strftime('%Y%m%d%H%M%S')))
fileHandler.setFormatter(logFormatter)
rootLogger.addHandler(fileHandler)
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
rootLogger.addHandler(consoleHandler)
and then I am doing
from config import *
in all of my scripts and imported files.
Unfortunately, this causes multiple log files to be created.
How do I fix this? I want a centralized config.py with logging configured both to console and to a file.
Case 1: Independent Scripts / Programs
In case we are talking about multiple, independent scripts that should have logging set up in the same way: I would say each independent application should have its own log. If you definitely do not want this, you would have to
make sure that all applications use the same log file name (e.g. create a function in config.py with a "timestamp" parameter, which is provided by your script(s); see the sketch after this list)
specify the append filemode for the fileHandler
make sure that config.py is not executed twice somewhere, as you would add the log handlers twice, which would result in each log message being printed twice.
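A minimal sketch of such a shared-setup function (the name set_up_file_logging and the timestamp parameter are illustrative, not from the original config.py):
# config.py
import logging

def set_up_file_logging(timestamp):
    log_formatter = logging.Formatter('%(asctime)s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
    root_logger = logging.getLogger()
    root_logger.setLevel(logging.INFO)
    # Guard against being called twice, which would duplicate handlers.
    if not root_logger.handlers:
        # mode='a' appends, so scripts sharing one timestamp write to the
        # same file instead of truncating it.
        file_handler = logging.FileHandler("{0}.log".format(timestamp), mode='a')
        file_handler.setFormatter(log_formatter)
        root_logger.addHandler(file_handler)
Each script would then pass the same timestamp value, e.g. set_up_file_logging(time.strftime('%Y%m%d')).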
Case 2: One big application consisting of modules
In case we are talking about one big application, consisting of modules, you could adopt a structure like the following:
config.py:
def set_up_logging():
    # your logging setup code
module example (some_module.py):
import logging

def some_function():
    logger = logging.getLogger(__name__)
    [...]
    logger.info('sample log')
    [...]
main example (main.py):
import logging
from config import set_up_logging
from some_module import some_function

def main():
    set_up_logging()
    logger = logging.getLogger(__name__)
    logger.info('Executing some function')
    some_function()
    logger.info('Finished')

if __name__ == '__main__':
    main()
Explanation:
With the call to set_up_logging() in main() you configure your application's root logger
each module is called from main() and gets its logger via logger = logging.getLogger(__name__). As the module loggers are in the hierarchy below the root logger, their records are "propagated up" to the root logger and handled by the root logger's handlers.
For more information see Python's logging module documentation and/or the logging cookbook
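A sketch of what set_up_logging() could contain, essentially the question's config.py wrapped in a function so that it runs exactly once:
# config.py
import time
import logging

def set_up_logging():
    log_formatter = logging.Formatter('%(asctime)s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
    root_logger = logging.getLogger()
    root_logger.setLevel(logging.INFO)

    file_handler = logging.FileHandler("{0}.log".format(time.strftime('%Y%m%d%H%M%S')))
    file_handler.setFormatter(log_formatter)
    root_logger.addHandler(file_handler)

    console_handler = logging.StreamHandler()
    console_handler.setFormatter(log_formatter)
    root_logger.addHandler(console_handler)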

add custom handler with Python logging

I have been working on this almost all day and couldn't figure out what I am missing. I am trying to add a custom handler to emit all log data into a GUI session. It works, but the handler doesn't extend to the submodules and only emits records from the main module. Here is a small snippet I tried.
I have two files:
# main.py
import logging
import logging_two

def myapp():
    logger = logging.getLogger('myapp')
    logging.basicConfig()
    logger.info('Using myapp')
    ch = logging.StreamHandler()
    logger.addHandler(ch)
    logging_two.testme()
    print logger.handlers

myapp()
Second module
# logging_two.py
import logging

def testme():
    logger = logging.getLogger('testme')
    logger.info('IN test me')
    print logger.handlers
I would expect the logger in logging_two.testme to have the handler I added in the main module. From the docs it seems this should work, but I am not sure if I got it wrong.
The result I get is:
[]
[<logging.StreamHandler object at 0x00000000024ED240>]
In myapp() you are adding the handler to the logger named 'myapp'. Since testme() is getting the logger named 'testme', which is a different part of the logging hierarchy, it does not have the handler.
If you instead use logger = logging.getLogger() in myapp(), it works, since you are then adding the handler to the root of the hierarchy and 'testme' inherits it.
Check out the python logging docs.
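A minimal sketch of that fix, leaving the rest of myapp() unchanged:
def myapp():
    logging.basicConfig()
    # The root logger sits above both 'myapp' and 'testme', so a handler
    # added here is used by records from both loggers.
    logger = logging.getLogger()
    ch = logging.StreamHandler()
    logger.addHandler(ch)
    logging_two.testme()
    print(logger.handlers)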

logging in multiple classes with module name in log

I want to use the logging module instead of printing for debug information and documentation.
The goal is to print on the console with DEBUG level and log to a file with INFO level.
I read through a lot of documentation, the cookbook and other tutorials on the logging module, but couldn't figure out how to use it the way I want. (I'm on Python 2.5.)
I want to have the names of the modules in which the logs are written in my logfile.
The documentation says I should use logger = logging.getLogger(__name__), but how do I declare the loggers used in classes in other modules/packages so they use the same handlers as the main logger? To recognize the 'parent' I could use logger = logging.getLogger(parent.child), but how do I know who called the class/method?
The example below shows my problem: if I run this, the output will only contain the __main__ logs and ignore the logs in Class.
This is my Mainfile:
# main.py
import logging
from module import Class

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# create file handler which logs info messages
fh = logging.FileHandler('foo.log', 'w', 'utf-8')
fh.setLevel(logging.INFO)

# create console handler with a debug log level
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)

# creating a formatter
formatter = logging.Formatter('- %(name)s - %(levelname)-8s: %(message)s')

# setting handler format
fh.setFormatter(formatter)
ch.setFormatter(formatter)

# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)

if __name__ == '__main__':
    logger.info('Script starts')
    logger.info('calling class Class')
    c = Class()
    logger.info('calling c.do_something()')
    c.do_something()
    logger.info('calling c.try_something()')
    c.try_something()
Module:
# module.py
import logging

class Class:
    def __init__(self):
        self.logger = logging.getLogger(__name__)  # What do I have to enter here?
        self.logger.info('creating an instance of Class')
        self.dict = {'a': 'A'}

    def do_something(self):
        self.logger.debug('doing something')
        a = 1 + 1
        self.logger.debug('done doing something')

    def try_something(self):
        try:
            logging.debug(self.dict['b'])
        except KeyError, e:
            logging.exception(e)
Output in console:
- __main__ - INFO : Script starts
- __main__ - INFO : calling class Class
- __main__ - INFO : calling c.do_something()
- __main__ - INFO : calling c.try_something()
No handlers could be found for logger "module"
Besides: is there a way to get the names of the modules where the logs occurred into my logfile, without declaring a new logger in each class like above? Also, this way I have to write self.logger.info() each time I want to log something; I would prefer to use logging.info() or logger.info() throughout my code.
Is a global logger perhaps the right answer for this? But then I won't get the modules where the errors occur in the logs...
And my last question: is this pythonic? Or is there a better recommendation to do such things right?
In your main module, you're configuring the logger named '__main__' (or whatever __name__ equates to in your case), while in module.py you're using a different logger. You either need to configure loggers per module, or you can configure the root logger (by configuring logging.getLogger()) in your main module, which will apply by default to all loggers in your project.
I recommend using configuration files for configuring loggers. This link should give you a good idea of good practices: http://victorlin.me/posts/2012/08/26/good-logging-practice-in-python
EDIT: use %(module)s in your formatter to include the module name in the log message.
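A sketch of the root-logger approach for the example above; only the getLogger() call and the formatter change, the handlers are as in the question:
# main.py
import logging
from module import Class

# Configure the root logger; module.py's logger propagates to it.
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

fh = logging.FileHandler('foo.log', 'w', 'utf-8')
fh.setLevel(logging.INFO)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)

# %(module)s records which module emitted each message.
formatter = logging.Formatter('- %(module)s - %(levelname)-8s: %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
logger.addHandler(fh)
logger.addHandler(ch)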
The generally recommended logging setup is to have at most one logger per module.
If your project is properly packaged, __name__ will have the value "mypackage.mymodule", except in your main file, where it has the value "__main__"
If you want more context about the code that is logging messages, note that you can set your formatter with a formatter string like %(funcName)s, which will add the function name to all messages.
If you really want per-class loggers, you can do something like:
class MyClass:
    def __init__(self):
        self.logger = logging.getLogger(__name__ + "." + self.__class__.__name__)

How can you make logging module functions use a different logger?

I can create a named child logger, so that all the logs output by that logger are marked with its name. I can use that logger exclusively in my function/class/whatever.
However, if that code calls out to functions in another module that uses the logging module's functions directly (the ones that proxy to the root logger), how can I ensure that those log messages go through the same logger (or are at least logged in the same way)?
For example:
main.py
import logging
import other

def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    do_stuff(logger)
other.py
import logging

def do_more_stuff():
    logging.info("doing other stuff")
Outputs:
$ python main.py
INFO:stuff:doing stuff
INFO:root:doing other stuff
I want both log lines to be marked with the name 'stuff', and I want to be able to do this by changing only main.py.
How can I cause the logging calls in other.py to use a different logger without changing that module?
This is the solution I've come up with:
Use thread-local data to store the contextual information, and a Filter on the root logger's handlers to add this information to LogRecords before they are emitted:
import logging
import threading

context = threading.local()
context.name = None

class ContextFilter(logging.Filter):
    def filter(self, record):
        if context.name is not None:
            record.name = "%s.%s" % (context.name, record.name)
        return True
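The attachment step is not shown above; a sketch of wiring the filter onto the root logger's handlers (after basicConfig() has created them) might look like:
if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    # Handler-level filters run on every record the handler emits,
    # including records propagated up from other loggers.
    for handler in logging.getLogger().handlers:
        handler.addFilter(ContextFilter())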
This is fine for me, because I'm using the logger name to indicate what task was being carried out when this message was logged.
I can then use context managers or decorators to make logging from a particular passage of code all appear as though it was logged from a particular child logger.
import contextlib
import functools

@contextlib.contextmanager
def logname(name):
    old_name = context.name
    if old_name is None:
        context.name = name
    else:
        context.name = "%s.%s" % (old_name, name)
    try:
        yield
    finally:
        context.name = old_name

def as_logname(name):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            with logname(name):
                return f(*args, **kwargs)
        return wrapper
    return decorator
So then, I can do:
with logname("stuff"):
    logging.info("I'm doing stuff!")
    do_more_stuff()
or:
@as_logname("things")
def do_things():
    logging.info("Starting to do things")
    do_more_stuff()
The key thing is that any logging done by do_more_stuff() will appear as if it were logged by either a "stuff" or "things" child logger, without having to change do_more_stuff() at all.
This solution would have problems if you were going to have different handlers on different child loggers.
Use logging.setLoggerClass so that all loggers used by other modules use your logger subclass (emphasis mine):
Tells the logging system to use the class klass when instantiating a logger. The class should define __init__() such that only a name argument is required, and the __init__() should call Logger.__init__(). This function is typically called before any loggers are instantiated by applications which need to use custom logger behavior.
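A minimal sketch of that approach; the subclass here is hypothetical, the only requirement from the docs being the name-only __init__():
import logging

class MyLogger(logging.Logger):
    # Hypothetical subclass; custom behavior would go here.
    def __init__(self, name):
        logging.Logger.__init__(self, name)

# Must run before other modules call logging.getLogger(...),
# since already-created loggers are not affected.
logging.setLoggerClass(MyLogger)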
This is what logging.handlers (or the handlers in the logging module) is for. In addition to creating your logger, you create one or more handlers to send the logging information to various places and add them to the root logger. Most modules that do logging create a logger that they use for their own purposes but depend on the controlling script to create the handlers. Some frameworks decide to be super helpful and add handlers for you.
Read the logging docs; it's all there.
(edit)
logging.basicConfig() is a helper function that adds a single handler to the root logger. You can control the format string it uses with the 'format=' parameter. If all you want to do is have all modules display "stuff", then use logging.basicConfig(level=logging.INFO, format="%(levelname)s:stuff:%(message)s").
The logging.{info,warning,…} functions just call the respective methods on a Logger object called root (cf. the logging module source), so if you know the other module only calls the functions exported by the logging module, you can overwrite logging in the other module's namespace with your logger object:
import logging
import other

def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    # Overwrite other.logging with your just-generated logger object:
    other.logging = logger
    do_stuff(logger)

Using logging in multiple modules

I have a small python project that has the following structure -
Project
 -- pkg01
    -- test01.py
 -- pkg02
    -- test02.py
 -- logging.conf
I plan to use the default logging module to print messages to stdout and a log file.
To use the logging module, some initialization is required -
import logging.config
logging.config.fileConfig('logging.conf')
logger = logging.getLogger('pyApp')
logger.info('testing')
At present, I perform this initialization in every module before I start logging messages. Is it possible to perform this initialization only once in one place such that the same settings are reused by logging all over the project?
Best practice is, in each module, to have a logger defined like this:
import logging
logger = logging.getLogger(__name__)
near the top of the module, and then in other code in the module do e.g.
logger.debug('My message with %s', 'variable data')
If you need to subdivide logging activity inside a module, use e.g.
loggerA = logging.getLogger(__name__ + '.A')
loggerB = logging.getLogger(__name__ + '.B')
and log to loggerA and loggerB as appropriate.
In your main program or programs, do e.g.:
def main():
    "your program code"

if __name__ == '__main__':
    import logging.config
    logging.config.fileConfig('/path/to/logging.conf')
    main()
or
def main():
    import logging.config
    logging.config.fileConfig('/path/to/logging.conf')
    # your program code

if __name__ == '__main__':
    main()
See here for logging from multiple modules, and here for logging configuration for code which will be used as a library module by other code.
Update: When calling fileConfig(), you may want to specify disable_existing_loggers=False if you're using Python 2.6 or later (see the docs for more information). The default value is True for backward compatibility, which causes all existing loggers to be disabled by fileConfig() unless they or their ancestor are explicitly named in the configuration. With the value set to False, existing loggers are left alone. If using Python 2.7/Python 3.2 or later, you may wish to consider the dictConfig() API which is better than fileConfig() as it gives more control over the configuration.
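A sketch of the dictConfig() equivalent with disable_existing_loggers=False (the handler and format choices here are illustrative):
import logging.config

logging.config.dictConfig({
    'version': 1,
    # Leave loggers created before this call enabled:
    'disable_existing_loggers': False,
    'formatters': {
        'default': {'format': '%(asctime)s %(name)s %(levelname)s %(message)s'},
    },
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'formatter': 'default'},
    },
    'root': {'level': 'INFO', 'handlers': ['console']},
})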
Actually every logger is a child of its parent package's logger (i.e. package.subpackage.module inherits configuration from package.subpackage), so all you need to do is configure the root logger. This can be achieved by logging.config.fileConfig (your own config for loggers) or logging.basicConfig (which sets up the root logger). Set up logging in your entry module (__main__.py or whatever you want to run, for example main_script.py; __init__.py works as well)
using basicConfig:
# package/__main__.py
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
using fileConfig:
# package/__main__.py
import logging
import logging.config
logging.config.fileConfig('logging.conf')
and then create every logger using:
# package/submodule.py
# or
# package/subpackage/submodule.py
import logging
log = logging.getLogger(__name__)
log.info("Hello logging!")
For more information see Advanced Logging Tutorial.
A simple way of using one instance of the logging library in multiple modules, for me, was the following solution:
base_logger.py
import logging
logger = logging
logger.basicConfig(format='%(asctime)s - %(message)s', level=logging.INFO)
Other files
from base_logger import logger
if __name__ == '__main__':
    logger.info("This is an info message")
I always do it as below.
Use a single Python file to configure my log as a singleton pattern, in a file named 'log_conf.py':
# -*- coding: utf-8 -*-
import logging.config

def singleton(cls):
    instances = {}
    def get_instance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return get_instance()

@singleton
class Logger():
    def __init__(self):
        logging.config.fileConfig('logging.conf')
        self.logr = logging.getLogger('root')
In another module, just import the config.
from log_conf import Logger
Logger.logr.info("Hello World")
This is a singleton pattern to log, simply and efficiently.
Throwing in another solution.
In my module's __init__.py I have something like:
# mymodule/__init__.py
import logging
def get_module_logger(mod_name):
    logger = logging.getLogger(mod_name)
    handler = logging.StreamHandler()
    formatter = logging.Formatter(
        '%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    return logger
Then in each module that needs a logger, I do:
# mymodule/foo.py
from [modname] import get_module_logger
logger = get_module_logger(__name__)
When the logs are mixed together, you can differentiate their source by the module they came from.
Several of these answers suggest that at the top of a module you do
import logging
logger = logging.getLogger(__name__)
It is my understanding that this is considered very bad practice. The reason is that fileConfig() will disable all existing loggers by default. E.g.
# my_module
import logging

logger = logging.getLogger(__name__)

def foo():
    logger.info('Hi, foo')

class Bar(object):
    def bar(self):
        logger.info('Hi, bar')
And in your main module:
# main
import logging.config

# load my module - this now creates my_module's logger
import my_module

# This will now disable the logger in my module by default (see the docs)
logging.config.fileConfig('logging.ini')

my_module.foo()
bar = my_module.Bar()
bar.bar()
Now the log specified in logging.ini will be empty, as the existing logger was disabled by the fileConfig() call.
While it is certainly possible to get around this (disable_existing_loggers=False), realistically many clients of your library will not know about this behavior and will not receive your logs. Make it easy for your clients by always calling logging.getLogger locally. Hat tip: I learned about this behavior from Victor Lin's website.
So good practice is instead to always call logging.getLogger locally. E.g.
# my_module
import logging

def foo():
    logging.getLogger(__name__).info('Hi, foo')

class Bar(object):
    def bar(self):
        logging.getLogger(__name__).info('Hi, bar')
Also, if you use fileConfig in your main, set disable_existing_loggers=False, just in case your library designers use module-level logger instances.
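That is a one-argument change to the fileConfig() call shown above:
logging.config.fileConfig('logging.ini', disable_existing_loggers=False)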
I would like to add my solution (which is based on the logging cookbook and other articles and suggestions from this thread). However, it took me quite a while to figure out why it wasn't immediately working the way I expected, so I created a little test project to learn how logging works.
Since I have figured it out, I wanted to share my solution; maybe it can be of help to someone.
I know some of my code might not be best practice, but I am still learning. I left the print() functions in there, as I used them while logging was not working as expected; they are removed in my other application. I also welcome feedback on any parts of the code or structure.
my_log_test project structure (cloned/simplified from another project I work on)
my_log_test
├── __init__.py
├── __main__.py
├── daemon.py
├── common
│   ├── my_logger.py
├── pkg1
│   ├── __init__.py
│   └── mod1.py
└── pkg2
├── __init__.py
└── mod2.py
Requirements
A few things are different, or I have not seen them explicitly mentioned in the combination I use:
the main module is daemon.py, which is called by __main__.py
I want to be able to call the modules mod1.py and mod2.py separately while in development/testing
At this point I did not want to use basicConfig() or fileConfig(), but to keep it like in the logging cookbook
So basically, that means I need to initialize the root logger in daemon.py (always) and in the modules mod1.py and mod2.py (only when calling them directly).
To make this initialization easier in several modules, I created my_logger.py, which does what is described in the cookbook.
My mistakes
Beforehand, my mistake in that module was to initialize the logger with logger = logging.getLogger(__name__) (a module logger) instead of using logger = logging.getLogger() (to get the root logger).
The first problem was that when called from daemon.py, the logger's namespace was set to my_log_test.common.my_logger. The module logger in mod1.py, with the "unmatching" namespace my_log_test.pkg1.mod1, could hence not attach to the other logger, and I would see no log output from mod1.
The second "problem" was that my main program is in daemon.py and not in __main__.py. Not a real problem for me in the end, but it added to the namespace confusion.
Working solution
This is from the cookbook but in a separate module. I also added a logger_cleanup function that I can call from daemon, to remove logs older than x days.
## my_logger.py
from datetime import datetime
import time
import os

## Init logging start
import logging
import logging.handlers

def logger_init():
    print("print in my_logger.logger_init()")
    print("print my_logger.py __name__: " + __name__)

    path = "log/"
    filename = "my_log_test.log"

    ## get logger
    #logger = logging.getLogger(__name__)  ## this was my mistake, to init a module logger here
    logger = logging.getLogger()  ## root logger
    logger.setLevel(logging.INFO)

    # File handler
    logfilename = datetime.now().strftime("%Y%m%d_%H%M%S") + f"_{filename}"
    file = logging.handlers.TimedRotatingFileHandler(f"{path}{logfilename}", when="midnight", interval=1)
    #fileformat = logging.Formatter("%(asctime)s [%(levelname)s] %(message)s")
    fileformat = logging.Formatter("%(asctime)s [%(levelname)s]: %(name)s: %(message)s")
    file.setLevel(logging.INFO)
    file.setFormatter(fileformat)

    # Stream handler
    stream = logging.StreamHandler()
    #streamformat = logging.Formatter("%(asctime)s [%(levelname)s:%(module)s] %(message)s")
    streamformat = logging.Formatter("%(asctime)s [%(levelname)s]: %(name)s: %(message)s")
    stream.setLevel(logging.INFO)
    stream.setFormatter(streamformat)

    # Adding all handlers to the logs
    logger.addHandler(file)
    logger.addHandler(stream)


def logger_cleanup(path, days_to_keep):
    lclogger = logging.getLogger(__name__)
    logpath = f"{path}"
    now = time.time()
    for filename in os.listdir(logpath):
        filestamp = os.stat(os.path.join(logpath, filename)).st_mtime
        filecompare = now - days_to_keep * 86400
        if filestamp < filecompare:
            lclogger.info("Delete old log " + filename)
            try:
                os.remove(os.path.join(logpath, filename))
            except Exception as e:
                lclogger.exception(e)
                continue
To run daemon.py (through __main__.py) use python3 -m my_log_test
## __main__.py
from my_log_test import daemon

if __name__ == '__main__':
    print("print in __main__.py")
    daemon.run()
To run daemon.py (directly) use python3 -m my_log_test.daemon
## daemon.py
from datetime import datetime
import time
import logging

import my_log_test.pkg1.mod1 as mod1
import my_log_test.pkg2.mod2 as mod2

## init ROOT logger from my_logger.logger_init()
from my_log_test.common.my_logger import logger_init
logger_init()  ## init root logger
logger = logging.getLogger(__name__)  ## module logger

def run():
    print("print in daemon.run()")
    print("print daemon.py __name__: " + __name__)
    logger.info("Start daemon")
    loop_count = 1
    while True:
        logger.info(f"loop_count: {loop_count}")
        logger.info("do stuff from pkg1")
        mod1.do1()
        logger.info("finished stuff from pkg1")
        logger.info("do stuff from pkg2")
        mod2.do2()
        logger.info("finished stuff from pkg2")
        logger.info("Waiting a bit...")
        time.sleep(30)
        loop_count += 1

if __name__ == '__main__':
    try:
        print("print in daemon.py if __name__ == '__main__'")
        logger.info("running daemon.py as main")
        run()
    except KeyboardInterrupt as e:
        logger.info("Program aborted by user")
    except Exception as e:
        logger.info(e)
To run mod1.py (directly) use python3 -m my_log_test.pkg1.mod1
## mod1.py
import logging

# mod1_logger = logging.getLogger(__name__)
mod1_logger = logging.getLogger("my_log_test.daemon.pkg1.mod1")  ## for testing, namespace set manually

def do1():
    print("print in mod1.do1()")
    print("print mod1.py __name__: " + __name__)
    mod1_logger.info("Doing something in pkg1.do1()")

if __name__ == '__main__':
    ## Also enable this pkg to be run directly while in development with
    ## python3 -m my_log_test.pkg1.mod1
    ## init root logger
    from my_log_test.common.my_logger import logger_init
    logger_init()  ## init root logger

    print("print in mod1.py if __name__ == '__main__'")
    mod1_logger.info("Running mod1.py as main")
    do1()
To run mod2.py (directly) use python3 -m my_log_test.pkg2.mod2
## mod2.py
import logging

logger = logging.getLogger(__name__)

def do2():
    print("print in pkg2.do2()")
    print("print mod2.py __name__: " + __name__)  # setting namespace through __name__
    logger.info("Doing something in pkg2.do2()")

if __name__ == '__main__':
    ## Also enable this pkg to be run directly while in development with
    ## python3 -m my_log_test.pkg2.mod2
    ## init root logger
    from my_log_test.common.my_logger import logger_init
    logger_init()  ## init root logger

    print("print in mod2.py if __name__ == '__main__'")
    logger.info("Running mod2.py as main")
    do2()
Happy if it helps. Happy to receive feedback as well!
You could also come up with something like this!
import logging

def get_logger(name=None):
    default = "__app__"
    formatter = logging.Formatter('%(levelname)s: %(asctime)s %(funcName)s(%(lineno)d) -- %(message)s',
                                  datefmt='%Y-%m-%d %H:%M:%S')
    log_map = {"__app__": "app.log", "__basic_log__": "file1.log", "__advance_log__": "file2.log"}
    name = name or default  # fall back to the default logger/file
    logger = logging.getLogger(name)
    fh = logging.FileHandler(log_map[name])
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    logger.setLevel(logging.DEBUG)
    return logger
Now you could use multiple loggers in the same module and across the whole project, if the above is defined in a separate module and imported in other modules where logging is required.
a = get_logger("__app__")
b = get_logger("__basic_log__")
a.info("Starting logging!")
b.debug("Debug Mode")
@Yarkee's solution seemed better. I would like to add some more to it:
import logging

class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances.keys():
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]

class LoggerManager(object):
    __metaclass__ = Singleton
    _loggers = {}

    def __init__(self, *args, **kwargs):
        pass

    @staticmethod
    def getLogger(name=None):
        if not name:
            logging.basicConfig()
            return logging.getLogger()
        elif name not in LoggerManager._loggers.keys():
            logging.basicConfig()
            LoggerManager._loggers[name] = logging.getLogger(str(name))
        return LoggerManager._loggers[name]

log = LoggerManager().getLogger("Hello")
log.setLevel(level=logging.DEBUG)
So LoggerManager can be pluggable to the entire application.
Hope it makes sense and adds value.
There are several answers. I ended up with a similar yet different solution that makes sense to me; maybe it will make sense to you as well.
My main objective was to be able to pass logs to handlers by their level (debug-level logs to the console, warnings and above to files):
from flask import Flask
import logging
from logging.handlers import RotatingFileHandler
app = Flask(__name__)
# make default logger output everything to the console
logging.basicConfig(level=logging.DEBUG)
rotating_file_handler = RotatingFileHandler(filename="logs.log")
rotating_file_handler.setLevel(logging.INFO)
app.logger.addHandler(rotating_file_handler)
Then I created a nice util file named logger.py:
import logging

def get_logger(name):
    return logging.getLogger("flask.app." + name)
flask.app is a hardcoded value in Flask; the application logger always starts with flask.app, as that is the module's name.
Now, in each module, I'm able to use it in the following mode:
from logger import get_logger
logger = get_logger(__name__)
logger.info("new log")
This will create a new log for "flask.app.MODULE_NAME" with minimum effort.
The best practice would be to create a separate module which has only one method, whose task is to give a logger handle to the calling method. Save this file as m_logger.py:
import logging

def getlogger():
    # logger
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    # guard so repeated calls don't add duplicate handlers
    if not logger.handlers:
        # create file handler and set level to debug
        #ch = logging.StreamHandler()
        ch = logging.FileHandler(r'log.txt')
        ch.setLevel(logging.DEBUG)
        # create formatter
        formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
        # add formatter to ch
        ch.setFormatter(formatter)
        # add ch to logger
        logger.addHandler(ch)
    return logger
Now call the getlogger() method whenever a logger handle is needed.
from m_logger import getlogger
logger = getlogger()
logger.info('My mssg')
I'm new to Python, so I don't know if this is advisable, but it works great for not re-writing boilerplate.
Your project must have an __init__.py so it can be loaded as a module.
# Put this in your module's __init__.py
import logging.config
import sys

# I used this dictionary test, you would put:
# logging.config.fileConfig('logging.conf')
# The "" entry in loggers is the root logger, tutorials always
# use "root" but I can't get that to work
logging.config.dictConfig({
    "version": 1,
    "formatters": {
        "default": {
            "format": "%(asctime)s %(levelname)s %(name)s %(message)s"
        },
    },
    "handlers": {
        "console": {
            "level": 'DEBUG',
            "class": "logging.StreamHandler",
            "stream": "ext://sys.stdout"
        }
    },
    "loggers": {
        "": {
            "level": "DEBUG",
            "handlers": ["console"]
        }
    }
})

def logger():
    # Get the name from the caller of this function
    return logging.getLogger(sys._getframe(1).f_globals['__name__'])
sys._getframe(1) suggestion comes from here
Then to use your logger in any other file:
from [your module name here] import logger
logger().debug("FOOOOOOOOO!!!")
Caveats:
You must run your files as modules, otherwise import [your module] won't work:
python -m [your module name].[your filename without .py]
The name of the logger for the entry point of your program will be __main__, but any solution using __name__ will have that issue.
