Logging best practice for classes - python

I have implemented a logger in Python. Basically, the idea is to have one logger with multiple handlers. I do this with the following YAML config:
version: 1
formatters:
  simple:
    format: "%(name)s - %(lineno)d - %(message)s"
  complex:
    format: "%(asctime)s - %(name)s | %(levelname)s | %(module)s : [%(filename)s: %(lineno)d] - %(message)s"
  json:
    class: utils.logger.JsonFormatter
    format: '%(asctime)s %(name)s %(levelname)s %(module)s %(filename)s: %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: json
  file:
    class: logging.handlers.TimedRotatingFileHandler
    when: midnight
    backupCount: 5
    level: DEBUG
    formatter: complex
    filename: /tgs_workflow/logs/tgs_logs.log
  cloud:
    class: utils.logger.GoogleLogger
    formatter: json
    level: INFO
loggers:
  cloud:
    level: INFO
    handlers: [console, file, cloud]
    propagate: yes
  __main__:
    level: DEBUG
    handlers: [console, file, cloud]
    propagate: yes
In the YAML I reference a class GoogleLogger and a class JsonFormatter; these are the only things outside the usual.
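For reference, a minimal sketch of what a JsonFormatter like the one named above might look like; the actual utils.logger implementation is not shown in the question, so this is purely illustrative:

import json
import logging

class JsonFormatter(logging.Formatter):
    # Render each record as a JSON object instead of a plain string.
    def format(self, record):
        payload = {
            'asctime': self.formatTime(record),
            'name': record.name,
            'levelname': record.levelname,
            'module': record.module,
            'filename': record.filename,
            'message': record.getMessage(),
        }
        return json.dumps(payload)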
In order for this to work, anywhere I want to use my logger I instantiate it like so:
Instantiator (highlighted because I refer back to it later):
import logging
import logging.config

import yaml

# the with block closes the file on exit, so no explicit f.close() is needed
with open('/tgs_workflow/logging.yaml', 'rt') as f:
    config = yaml.safe_load(f.read())
logging.config.dictConfig(config)

logger = logging.getLogger(__name__)
logger.info("This info")
Now there are two questions from here.
Q1. Is it bad practice to have to instantiate this in each class/script where I want to use the logger? It also means a lot of redundant code (the same block in multiple places).
Q2. I usually place this in __main__, but what happens when I have a class that has no main but includes logging? I definitely know it's not a good idea to put this at the top of the file.
e.g. for Q2 (a contrived example, but it highlights how a class would need some logging):
import logging

"""
>>> Insert Instantiator here <<<
"""

class Tools:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def who_am_i(self):
        try:
            if self.name == "Adam":
                logging.info("This was Adam")
                return True
            else:
                logging.info("This was not Adam")
                return False
        except Exception:
            logging.error("There is an error")
The only way for me to use my logger is to include my Instantiator at the top of this class's module. That has to be incorrect, or at least not best practice. What is the correct way of doing this?

The way to do it is to either stick to one name rather than using a different logger for every module/file, or to use the logger hierarchy. If your Tools class is imported by main, then the __main__ and cloud loggers will already be configured and can be used. All you need to do is replace >>> Insert Instantiator here <<< with logger = logging.getLogger('__main__') and you are good to go. If you don't want to log directly on the main logger, you can add a dot to make it a hierarchy: logger = logging.getLogger('__main__.tools'). This is a logger that propagates its records to the __main__ logger but can have its own level, etc.
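A minimal sketch of that layout, assuming the main script has already run the Instantiator above (the module name tools.py is hypothetical):

# tools.py -- no logging configuration here at all
import logging

# child of __main__: records propagate up to the handlers from the YAML
logger = logging.getLogger('__main__.tools')

class Tools:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def who_am_i(self):
        if self.name == "Adam":
            logger.info("This was Adam")
            return True
        logger.info("This was not Adam")
        return False

The main script runs the Instantiator exactly once; every module imported afterwards just calls getLogger and inherits the handlers through propagation, so none of the setup code is duplicated.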


allow logging Traceback in file but do not display in console Python

How can I set exc_info to false for the console but leave it on when writing to the file?
logging.conf
version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
filters:
  warnings_and_below:
    "()": log.filter_maker
    level: WARNING
    sense: below
  errors_and_above:
    "()": log.filter_maker
    level: ERROR
    sense: above
handlers:
  outconsole:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    filters: [warnings_and_below]
    stream: ext://sys.stdout
  errconsole:
    class: logging.StreamHandler
    level: WARNING
    formatter: simple
    filters: [errors_and_above]
    stream: ext://sys.stderr
  file_handler:
    class: logging.FileHandler
    level: DEBUG
    formatter: simple
    filename: info.log
    encoding: utf8
    mode: w
root:
  level: DEBUG
  handlers: [file_handler, outconsole, errconsole]
  propagate: no
log.py
import logging
import logging.config

import yaml

def filter_maker(level, sense):
    level = getattr(logging, level)  # get the actual numeric value from the string
    if sense == 'below':
        # return a function which only passes records at or below the threshold
        def filter(record):
            return record.levelno <= level
    else:
        # return a function which only passes records at or above the threshold
        def filter(record):
            return record.levelno >= level
    return filter

with open("logging.conf", "r") as f:
    config = yaml.safe_load(f.read())
logging.config.dictConfig(config)

logger = logging.getLogger(__name__)
logger.error("Some error")
logger.error("Some error", exc_info=True)
What I need is for those last two lines to always go to the errconsole handler without the traceback (as if exc_info=False) and to the file_handler always with it (exc_info=True). Is this possible with one logger, or do I need to configure two?
Use formatters to have different formatting for different handlers. For example, define something like

class NoExceptionFormatter(logging.Formatter):
    def formatException(self, exc_info):
        return ''

and attach an instance of it to the console handler(s).
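A minimal sketch of wiring that in with dictConfig, assuming the handler layout from logging.conf above (dictConfig accepts a callable under the '()' key, so the formatter class can be passed directly):

import logging
import logging.config

class NoExceptionFormatter(logging.Formatter):
    def formatException(self, exc_info):
        return ''  # drop the traceback text entirely

logging.config.dictConfig({
    'version': 1,
    'formatters': {
        # custom formatter via '()', plus the usual format string
        'no_traceback': {'()': NoExceptionFormatter,
                         'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'},
        'simple': {'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'},
    },
    'handlers': {
        'errconsole': {'class': 'logging.StreamHandler',
                       'formatter': 'no_traceback',
                       'stream': 'ext://sys.stderr'},
        'file_handler': {'class': 'logging.FileHandler',
                         'formatter': 'simple',
                         'filename': 'info.log'},
    },
    'root': {'level': 'DEBUG', 'handlers': ['errconsole', 'file_handler']},
})

try:
    1 / 0
except ZeroDivisionError:
    # stderr shows a single line; info.log also gets the full traceback
    logging.getLogger(__name__).error("Some error", exc_info=True)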

Logging in python with YAML and filter

I want to set up a logger with a filter using YAML. The YAML configuration file config.yaml is as follows:
version: 1
formatters:
  simple:
    format: "%(asctime)s %(name)s: %(message)s"
  extended:
    format: "%(asctime)s %(name)s %(levelname)s: %(message)s"
filters:
  noConsoleFilter:
    class: noConsoleFilter
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    filters: [noConsoleFilter]
  file_handler:
    class: logging.FileHandler
    level: INFO
    filename: test.log
    formatter: extended
root:
  handlers: [console, file_handler]
  propagate: true
...and the main program in main.py as follows:
import logging.config

import yaml

class noConsoleFilter(logging.Filter):
    def filter(self, record):
        print("filtering!")
        return not (record.levelname == 'INFO') & ('no-console' in record.msg)

with open('config.yaml', 'r') as f:
    log_cfg = yaml.safe_load(f.read())
logging.config.dictConfig(log_cfg)

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

logger.info("no-console. Should not be in console, but be in test.log!")
logger.info('This is an info message')
logger.error('This is an error message')
Expected output in console without the "no-console" message:
2020-04-27 18:05:26,936 __main__: This is an info message
2020-04-27 18:05:26,936 __main__: This is an error message
But it looks like class: noConsoleFilter is not even being considered, since the print statement never runs.
Where am I going wrong? How can I fix it?
The syntax is a bit odd, but it's described in the logging docs, under User-defined objects, that you have to use the key (), not class. Like so:

filters:
  noConsoleFilter:
    (): noConsoleFilter

Next, you need to specify a qualified name for the class. If you're running the script directly, not as a module, you can refer to it under __main__:

filters:
  noConsoleFilter:
    (): __main__.noConsoleFilter
I would also recommend using the PEP 8 CapWords convention for class names. Here's a slightly tidied up, fully self-contained example:
# logging.yml
version: 1
formatters:
  simple_formatter:
    format: "%(asctime)s %(name)s: %(message)s"
  extended_formatter:
    format: "%(asctime)s %(name)s %(levelname)s: %(message)s"
filters:
  no_console_filter:
    (): __main__.NoConsoleFilter
handlers:
  console_handler:
    class: logging.StreamHandler
    level: INFO
    formatter: simple_formatter
    filters: [no_console_filter]
  file_handler:
    class: logging.FileHandler
    level: INFO
    filename: test.log
    formatter: extended_formatter
root:
  handlers: [console_handler, file_handler]
  propagate: true
# script.py
import logging.config

import yaml

class NoConsoleFilter(logging.Filter):
    def filter(self, record):
        print('filtering!')
        return not (record.levelname == 'INFO' and 'no-console' in record.msg)

with open('logging.yml', 'r') as f:
    log_cfg = yaml.safe_load(f.read())
logging.config.dictConfig(log_cfg)

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

logger.info('no-console. Should not be in console, but be in test.log!')
logger.info('This is an info message')
logger.error('This is an error message')
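For comparison, a minimal sketch of attaching the same filter in code rather than through the YAML '()' key, which can be handy when the filter class and the config file live in different modules:

import logging

console_handler = logging.StreamHandler()
console_handler.addFilter(NoConsoleFilter())  # same effect as filters: [no_console_filter]
logging.getLogger().addHandler(console_handler)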

Changing formatter when using logging module with yaml in python

When using the logging module in Python, we can initialize general settings for each logger using a YAML file. My test code looks like this:
[main.py]
import yaml
import logging, logging.config

def setup_logging(default_level=logging.DEBUG):
    with open("./logging.yaml", 'rt') as f:
        configdata = yaml.safe_load(f.read())
    logging.config.dictConfig(configdata)

setup_logging()
dbg = logging.getLogger(__name__)
dbg.info("Test")
[logging.yaml]
version: 1
disable_existing_loggers: False
formatters:
  detail:
    format: "%(asctime)s - %(name)s - %(message)s"
  onlymessage:
    format: "%(message)s"
handlers:
  file_handler:
    class: logging.FileHandler
    level: DEBUG
    formatter: detail
    filename: ./log
    mode: w
loggers:
  __main__:
    level: DEBUG
    handlers: [file_handler]
    propagate: No
So for "file_handler", the default formatter is "detail". Then how do I change the formatter of this logger to another one, in this case "onlymessage"?
I know that if we were given formatter object, we can use Handler.setFormatter() to change the formatter of a logger, like
dbg.handlers[0].setFormatter(FORMATTER NAME)
But since I specified all information about formatter in yaml file and used logging.config when initizliaing logger, I have no formatter object. I think if I can retrieve formatter object written in that yaml file, problem can be solved. Or is there any other way to do this?
Yes, "unused" formatters will be lost in the process if you don't bind them to any handler.
You can check the source of what happens, here is the interesting bit.
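As a workaround, a minimal sketch that recreates the onlymessage formatter in code and swaps it onto the file handler; it assumes the logging.yaml above has already been loaded and the script runs as __main__, and it duplicates the format string from the YAML:

import logging

dbg = logging.getLogger('__main__')
only_message = logging.Formatter('%(message)s')  # same format string as 'onlymessage'
for handler in dbg.handlers:
    if isinstance(handler, logging.FileHandler):
        handler.setFormatter(only_message)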

universal logger name in python logging yaml config file

Ok, so the situation is: I need to use a YAML config file for logging (don't ask - I just need it :) ). When writing the loggers: directive, I would like to use one logger and be able to fetch it from multiple modules in my app using getLogger(__name__). I know how to do it with a normal Python config file for logging, but I can't find a way to do the same with a YAML file.
Using Python 2.7.
So, long story short, here's what I have (just a simplified example of my problem, not part of the actual application :) ):
# this is app.py
import logging.config
import yaml
import os

def init_logging():
    path = 'logging.yaml'
    if os.path.exists(path):
        with open(path, 'r') as f:
            config = yaml.safe_load(f.read())
        logging.config.dictConfig(config['logging'])
    main()

def main():
    logger = logging.getLogger('app')
    logger.debug("done!")

init_logging()
and here's the logging.yaml config file:
logging:
  version: 1
  formatters:
    brief:
      format: '%(message)s'
    default:
      format: '%(asctime)s %(levelname)-8s [%(name)s] %(message)s'
      datefmt: '%Y-%m-%d %H:%M:%S'
  handlers:
    console:
      class: logging.StreamHandler
      level: DEBUG
  loggers:
    app:
      handlers: [console]
      level: DEBUG
So as it is here, it works: the 'done!' message shows in the console.
But I want to set in the config not one distinct logger (here I called it 'app') but a universal one. In a .py config it would be:
"loggers": {
"": {
"handlers": ["console"],
"level": "DEBUG",
},
}
Then I would use logging.getLogger(__name__) in different modules, and it would always use the one "" logger and show me the messages.
So is there a way to create a universal logger in YAML, like the "": in the Python logging config?
I tried (), ~, null - those don't do the job.
Basically I need to be able to request loggers with any names I want and get the one specified logger.
And yep - I can create a root directive in the YAML and call it by using logging.getLogger().
The trick is to use a special logger called root, outside the list of other loggers.
I found this in the Python documentation:
root - this will be the configuration for the root logger. Processing of the configuration will be as for any logger, except that the propagate setting will not be applicable.
Here's the configuration from your question changed to use the root key:
logging:
  version: 1
  formatters:
    brief:
      format: '%(message)s'
    default:
      format: '%(asctime)s %(levelname)-8s [%(name)s] %(message)s'
      datefmt: '%Y-%m-%d %H:%M:%S'
  handlers:
    console:
      class: logging.StreamHandler
      level: DEBUG
  root:
    handlers: [console]
    level: WARN
  loggers:
    app:
      level: DEBUG
That configures app at the DEBUG level, and everything else at the WARN level.
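A quick way to see the effect, assuming the config above has been loaded with logging.config.dictConfig(config['logging']) (the logger name 'other' is hypothetical):

import logging

logging.getLogger('app').debug('shown: app is at DEBUG')
logging.getLogger('other').debug('suppressed: effective level WARN comes from root')
logging.getLogger('other').warning('shown')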
Okay, so I found an answer (thanks to @flyx!).
I compared the original Python dict config with the dict config converted from YAML and found out that logging automatically adds disable_existing_loggers: False to the Python config. After that I added this line to the YAML and used '' in the loggers directive... and it worked!
So the resulting YAML config looks like:

..............
disable_existing_loggers: False
loggers:
  '':
    handlers: [console, sentry]
    level: DEBUG
    propagate: False
And now it works. Even if I create a logger like logging.getLogger('something') and the logger 'something' is not in the config, the app will use the '' logger.
Don Kirkby's answer won't work if the logger is defined at the beginning of the file (before configuring logging), but it works with no problem if the logger is defined after logging is configured. So it resolves the code from my question, but not the question itself - "is there a way to create a universal logger in YAML, like the "": in the Python logging config?" That's why I didn't pick it as the answer. But his comment is totally legit :)

Understanding Python logger names

I have named my Python loggers following the practice described in Naming Python loggers
Everything works fine if I use basicConfig(). But now I'm trying to use a configuration file and dictConfig() to configure the loggers at runtime.
The docs at http://docs.python.org/2/library/logging.config.html#dictionary-schema-details seem to say that I can have a "root" key in my dictionary that configures the root logger. But if I configure only this logger, I don't get any output.
Here's what I have:
logging_config.yaml
version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(pathname)s:%(lineno)s - %(message)s'
    datefmt: '%Y%m%d %H:%M:%S'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.FileHandler
    level: DEBUG
    formatter: simple
    filename: 'test.log'
    mode: "w"
# If I explicitly define a logger for __main__, it works
#loggers:
#  __main__:
#    level: DEBUG
#    handlers: [console, file]
root:
  level: DEBUG
  handlers: [console, file]
test_log.py
import logging
logger = logging.getLogger(__name__)

import logging.config
import yaml

if __name__ == "__main__":
    with open("logging_config.yaml", "r") as f:
        log_config = yaml.safe_load(f)
    logging.config.dictConfig(log_config)
    #logging.basicConfig()  # This works, but dictConfig doesn't
    logger.critical("OH HAI")
    logging.shutdown()
Why doesn't this produce any logging output, and what's the proper way to fix it?
The reason is that you haven't specified disable_existing_loggers: false in your YAML, and the __main__ logger already exists at the time dictConfig is called. So that logger is disabled (because it isn't explicitly named in the configuration - if it were named, it would not be disabled).
Just add that line to your YAML:

version: 1
disable_existing_loggers: false
formatters:
  simple:
    ...
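A minimal sketch of the failure mode and the fix, condensed from the code above (the logger name 'early' is hypothetical):

import logging
import logging.config

early = logging.getLogger('early')  # created before configuration

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,  # without this line, 'early' is silenced
    'handlers': {'console': {'class': 'logging.StreamHandler'}},
    'root': {'level': 'DEBUG', 'handlers': ['console']},
})

early.critical("OH HAI")  # now reaches the root handler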
