How can I set exc_info to False for the console but keep it when writing to file?
config.conf
version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
filters:
  warnings_and_below:
    "()": log.filter_maker
    level: WARNING
    sense: below
  errors_and_above:
    "()": log.filter_maker
    level: ERROR
    sense: above
handlers:
  outconsole:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    filters: [warnings_and_below]
    stream: ext://sys.stdout
  errconsole:
    class: logging.StreamHandler
    level: WARNING
    formatter: simple
    filters: [errors_and_above]
    stream: ext://sys.stderr
  file_handler:
    class: logging.FileHandler
    level: DEBUG
    formatter: simple
    filename: info.log
    encoding: utf8
    mode: w
root:
  level: DEBUG
  handlers: [file_handler, outconsole, errconsole]
  propagate: no
log.py
import logging
import logging.config

import yaml


def filter_maker(level, sense):
    level = getattr(logging, level)  # get the actual numeric value from the string
    if sense == 'below':
        # return a function which only passes records at or below the threshold
        def filter(record):
            return record.levelno <= level
    else:
        # return a function which only passes records at or above the threshold
        def filter(record):
            return record.levelno >= level
    return filter


with open("config.conf", "r") as f:
    config = yaml.safe_load(f.read())
logging.config.dictConfig(config)

logger = logging.getLogger(__name__)
logger.error("Some error")
logger.error("Some error", exc_info=True)
What I need is for the last two lines to always reach the errconsole handler as if exc_info=False, and file_handler as if exc_info=True. Is this possible with one logger, or do I need to configure two?
Use formatters to have different formatting for different handlers. For example, define something like
class NoExceptionFormatter(logging.Formatter):
    def formatException(self, exc_info):
        return ''
and attach an instance of it to the console handler(s).
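A fuller sketch, assuming the class is placed in the same log module that already holds filter_maker: one subtlety is that logging.Formatter.format() caches the rendered traceback on the record as exc_text, and since file_handler comes first in the root handler list, the console formatter should ignore that cache as well:

# log.py (addition)
class NoExceptionFormatter(logging.Formatter):
    """Render records without the traceback, for console handlers."""

    def formatException(self, exc_info):
        return ''

    def formatStack(self, stack_info):
        return ''

    def format(self, record):
        # Formatter.format() caches the rendered traceback in
        # record.exc_text, and the file handler's formatter may already
        # have filled it in. Hide it while this formatter runs, then
        # restore it so file_handler still gets the full traceback.
        saved = record.exc_text
        record.exc_text = ''
        try:
            return super().format(record)
        finally:
            record.exc_text = saved

It can then be registered in the YAML with the same "()" syntax used for the filters and attached to errconsole (and outconsole), while file_handler keeps the plain simple formatter (the no_traceback name is illustrative):

formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
  no_traceback:
    "()": log.NoExceptionFormatter
    fmt: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'

Note fmt rather than format: with "()" the remaining keys are passed as keyword arguments to the constructor, and logging.Formatter's parameter is named fmt.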
Related
Though I have been working with python/ipython for some time now, I consider myself a newbie. There are still many things, especially about logging support, that I thought I understood from the documentation but that are apparently more difficult to configure than I had hoped. I am using ipython 5.5.0 / python 2.7.17 on Xubuntu 18.04.04 LTS with coloredlogs. My logging configuration module is below.
import coloredlogs
import datetime
import logging
import logging.config
import os
import yaml


def setup_logging(default_path='../Config/logging.yaml',
                  default_level=logging.DEBUG,
                  env_key='LOG_CFG'):
    path = os.path.join(os.path.dirname(os.path.realpath(__file__)), default_path)
    value = os.getenv(env_key, None)
    # If the envvar is set, use its value
    if value:
        path = value
    _dt = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    print("%s Using Logging Configuration: %s" % (_dt, path))
    #
    # If the configuration file path is there, read it
    #
    if os.path.exists(path):
        with open(path, 'rt') as f:
            try:
                config = yaml.safe_load(f.read())
                logging.config.dictConfig(config)
                coloredlogs.install(level=default_level)
            except Exception as err:
                print(err)
                print('Error in Logging Configuration. Using default configs')
                logging.basicConfig(level=default_level)
                coloredlogs.install(level=default_level)
    # Otherwise, continue without a configuration
    else:
        logging.basicConfig(level=logging.DEBUG)
        coloredlogs.install(level=logging.DEBUG)
        print('Failed to load configuration file. Using default configs')
The configuration is held in a yaml file with the following definitions.
version: 1
disable_existing_loggers: False
formatters:
  basic:
    format: "%(name)s - %(message)s"
  standard:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
  error:
    format: "%(levelname)s <PID %(process)d:%(processName)s> %(name)s.%(funcName)s(): %(message)s"
handlers:
  console_basic:
    class: logging.StreamHandler
    level: DEBUG
    formatter: basic
    stream: ext://sys.stdout
  console_out:
    class: logging.StreamHandler
    level: DEBUG
    formatter: standard
    stream: ext://sys.stdout
  console_err:
    class: logging.StreamHandler
    level: DEBUG
    formatter: standard
    stream: ext://sys.stderr
  debug_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    formatter: standard
    filename: /tmp/debug.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    formatter: standard
    filename: /tmp/info.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
  warn_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: WARN
    formatter: standard
    filename: /tmp/warn.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
  error_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: ERROR
    formatter: error
    filename: /tmp/errors.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
  critical_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: CRITICAL
    formatter: standard
    filename: /tmp/critical.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
root:
  level: CRITICAL
  handlers: [console_err]
  propogate: no
loggers:
  test:
    level: DEBUG
    handlers: [console_basic]
    propogate: no
  Utils.paragraph_processing:
    level: DEBUG
    handlers: [info_file_handler, debug_file_handler, warn_file_handler, error_file_handler, critical_file_handler]
    propogate: no
  Utils.graphing_functions:
    level: DEBUG
    handlers: [info_file_handler, debug_file_handler, warn_file_handler, error_file_handler, critical_file_handler]
    propogate: no
A snippet of my test.py module follows.
import coloredlogs
from copy import deepcopy
import cv2
import imutils
import logging
import logging.config
import os
import yaml

import matplotlib.pyplot as PLT
import matplotlib.image as MPI
import numpy as np

import Tests.filtering_tests as FT
import Tests.morphology_tests as MT
import Utils.global_defs as GL
import Utils.graphing_functions as GF
import Utils.paragraph_processing as PP
import Utils.logging_functions as LF

.
.
.

def phony_main():
    LF.setup_logging()
    # create logger
    LOG = logging.getLogger(__name__)
    LOG.critical("Logging Started...")

# -----------------------------------------------------------------------------
#
# Main
#
img = None

if __name__ == "__main__":
    # execute only if run as a script
    phony_main()
My question is: when I change the configuration from [console_out] to [console_basic], I expect the messages to conform, but they do not, leading me to believe that some other logger, root(?), is handling the call. But if I change that to use [console_basic], the messages are still the same. That is, one would expect the time and levelname to no longer be there, but they are!
Again, I do not pretend to understand what's going on, but where I thought the documentation showed simple inheritance, I am beginning to wonder whether it's a bit more complicated than that. What am I doing wrong?
When I fix my spelling mistake and remove the logger for test, I still get the same behavior. Turning propagation on, so that console logs go to the root logger, which has [console_basic], still shows the messages using the old format.
Making the following changes to my yaml seems to fix the issues, as pointed out by @blues.
root:
  level: NOTSET
  handlers: [console_basic]
  propagate: no
loggers:
  __main__:
    level: DEBUG
    handlers: [console_basic]
    propagate: no
  Utils.paragraph_processing:
    level: DEBUG
    handlers: [info_file_handler, debug_file_handler, warn_file_handler, error_file_handler, critical_file_handler]
    propagate: no
  Utils.graphing_functions:
    level: DEBUG
    handlers: [info_file_handler, debug_file_handler, warn_file_handler, error_file_handler, critical_file_handler]
    propagate: no
There are two things going on here. First, there is a misspelling of propagate in the config: it is wrongly spelled propogate, with an "o" where the "a" should be. That means all the loggers do in fact propagate their logs up the hierarchy.
The second thing is that when propagation is on, the level of the ancestor loggers, in this case the root logger, is ignored and only the level of the handlers is taken into consideration. Since the console_err handler added to root has level DEBUG, and all logs propagate to root, this handler will log everything.
The relevant piece of information can be found in the Python documentation:
    Messages are passed directly to the ancestor loggers' handlers - neither the level nor filters of the ancestor loggers in question are considered.
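The same behaviour can be reproduced without any YAML; a minimal sketch (names are illustrative, not from the question):

import logging

root = logging.getLogger()
root.setLevel(logging.CRITICAL)      # ancestor logger level: CRITICAL
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)      # but its handler accepts DEBUG
root.addHandler(handler)

child = logging.getLogger('demo.child')
child.setLevel(logging.DEBUG)
# The record is handed straight to root's handler; root's CRITICAL level
# is never consulted, so this DEBUG message is emitted.
child.debug('visible despite root being CRITICAL')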
I want to set up a logger with a filter using YAML.
The YAML configuration file config.yaml is as follows:
version: 1
formatters:
  simple:
    format: "%(asctime)s %(name)s: %(message)s"
  extended:
    format: "%(asctime)s %(name)s %(levelname)s: %(message)s"
filters:
  noConsoleFilter:
    class: noConsoleFilter
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    filters: [noConsoleFilter]
  file_handler:
    class: logging.FileHandler
    level: INFO
    filename: test.log
    formatter: extended
root:
  handlers: [console, file_handler]
  propagate: true
...and the main program in main.py is as follows:
import logging.config

import yaml


class noConsoleFilter(logging.Filter):
    def filter(self, record):
        print("filtering!")
        return not (record.levelname == 'INFO') & ('no-console' in record.msg)


with open('config.yaml', 'r') as f:
    log_cfg = yaml.safe_load(f.read())
logging.config.dictConfig(log_cfg)

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

logger.info("no-console. Should not be in console, but be in test.log!")
logger.info('This is an info message')
logger.error('This is an error message')
Expected output in console without the "no-console" message:
2020-04-27 18:05:26,936 __main__: This is an info message
2020-04-27 18:05:26,936 __main__: This is an error message
But it looks like class: noConsoleFilter is not even being considered, as the print statement never executes.
Where am I going wrong? How can I fix it?
The syntax is a bit odd, but it's described in the logging docs, under User-defined objects, that you have to use the key (), not class. Like so:
filters:
  noConsoleFilter:
    (): noConsoleFilter
Next, you need to specify a qualified name for the class. If you're running the script directly, not as a module, you can refer to it under __main__:
filters:
  noConsoleFilter:
    (): __main__.noConsoleFilter
I would also recommend using the PEP 8 CapWords convention for class names. Here's a slightly tidied up, fully self-contained example:
# logging.yml
version: 1
formatters:
  simple_formatter:
    format: "%(asctime)s %(name)s: %(message)s"
  extended_formatter:
    format: "%(asctime)s %(name)s %(levelname)s: %(message)s"
filters:
  no_console_filter:
    (): __main__.NoConsoleFilter
handlers:
  console_handler:
    class: logging.StreamHandler
    level: INFO
    formatter: simple_formatter
    filters: [no_console_filter]
  file_handler:
    class: logging.FileHandler
    level: INFO
    filename: test.log
    formatter: extended_formatter
root:
  handlers: [console_handler, file_handler]
  propagate: true

# script.py
import logging.config

import yaml


class NoConsoleFilter(logging.Filter):
    def filter(self, record):
        print('filtering!')
        # 'not' must cover the whole condition; with the original
        # (not A) & (B) grouping, ordinary INFO and ERROR messages
        # were dropped as well.
        return not (record.levelname == 'INFO' and 'no-console' in record.msg)


with open('logging.yml', 'r') as f:
    log_cfg = yaml.safe_load(f.read())
logging.config.dictConfig(log_cfg)

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

logger.info('no-console. Should not be in console, but be in test.log!')
logger.info('This is an info message')
logger.error('This is an error message')
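Incidentally, the () key does not have to name a Filter subclass. As the config in the first question above shows, it can point at a factory function, because from Python 3.2 on any callable that takes a record and returns a truth value works as a filter. A sketch of that variant (names are illustrative):

# alternative to the NoConsoleFilter class
import logging

def no_console_filter():
    def filter(record):
        # getMessage() is safer than record.msg, which need not be a str
        return not (record.levelno == logging.INFO
                    and 'no-console' in record.getMessage())
    return filter

with the YAML entry changed to (): __main__.no_console_filter.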
I have implemented a logger in Python. Basically, the idea is to have a logger with multiple handlers. I do this with the following YAML config:
version: 1
formatters:
  simple:
    format: "%(name)s - %(lineno)d - %(message)s"
  complex:
    format: "%(asctime)s - %(name)s | %(levelname)s | %(module)s : [%(filename)s: %(lineno)d] - %(message)s"
  json:
    class: utils.logger.JsonFormatter
    format: '%(asctime)s %(name)s %(levelname)s %(module)s %(filename)s: %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: json
  file:
    class: logging.handlers.TimedRotatingFileHandler
    when: midnight
    backupCount: 5
    level: DEBUG
    formatter: complex
    filename: /tgs_workflow/logs/tgs_logs.log
  cloud:
    class: utils.logger.GoogleLogger
    formatter: json
    level: INFO
loggers:
  cloud:
    level: INFO
    handlers: [console, file, cloud]
    propagate: yes
  __main__:
    level: DEBUG
    handlers: [console, file, cloud]
    propagate: yes
In the yaml, I have created a class GoogleLogger and a class JsonFormatter; these are the only things outside the usual.
In order for this to work, anywhere I want to use my logger I instantiate it like this:
Instantiator [highlighted because I refer back to it later]
import logging
import logging.config

import yaml

with open('/tgs_workflow/logging.yaml', 'rt') as f:
    config = yaml.safe_load(f.read())
logging.config.dictConfig(config)

logger = logging.getLogger(__name__)
logger.info("This info")
Now there are two questions from here:
Q1. Is it bad practice to have to instantiate this in each class/script where I wish to use the logger? It also means a lot of redundant code (the same code in multiple places).
Q2. I usually place this in __main__, but what happens when I have a class that has no main but includes logging? I definitely know it's not a good idea to put this at the top of the file.
e.g. for Q2: This is a really bad example, but I am just trying to highlight how a class would need some logging
import logging

"""
>>>Insert Instantiator here <<<
"""


class Tools():
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def who_am_i(self):
        try:
            if self.name == "Adam":
                logging.info("This was Adam")
                return True
            else:
                logging.info("This was not Adam")
                return False
        except Exception:
            logging.error("There is an error")
The only way for me to use my logger is to include my Instantiator at the top of this class. That has to be incorrect, or at least not best practice. What is the correct way of doing this?
The way to do it is to either stick to one name and not use a different logger for every module/file, or use the hierarchy. If your Tools class is imported by main, then the __main__ and cloud loggers will already be configured and can be used. All you need to do is replace >>>Insert Instantiator here <<< with logger = logging.getLogger('__main__') and you are good to go. If you don't want to log directly on the main logger, you can add a dot to make it a hierarchy: logger = logging.getLogger('__main__.tools'). This is a logger that will propagate its logs to the __main__ logger but can have its own level, etc.
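Applied to the Tools class above, that amounts to the following sketch:

import logging

# Child of the configured '__main__' logger; records propagate up to its
# handlers, so the class needs no logging configuration of its own.
logger = logging.getLogger('__main__.tools')


class Tools():
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def who_am_i(self):
        if self.name == "Adam":
            logger.info("This was Adam")
            return True
        logger.info("This was not Adam")
        return False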
I cannot achieve proper logging of my module using Python's standard logging, yet it's a very simple case.
I have the following module hierarchy:
module\
    foo.py
    bar.py
I need to log from each of these modules with the following constraints:
all logs >= INFO from module.foo to the console (because what this module does is important and the user must be notified live)
all logs from module.* into a file
all logs >= WARNING from module.* to the console
Here is the main code:
import logging
import logging.config
import os

import yaml


def setup_logging():
    loadfrom = os.path.join(os.path.dirname(__file__), 'config.yml')
    # Load
    with open(loadfrom, 'rt') as f:
        config = yaml.safe_load(f.read())
    logging.config.dictConfig(config)


setup_logging()

foo = logging.getLogger('module.foo')
bar = logging.getLogger('module.bar')

foo.info('module.foo doing something')
foo.debug('module.foo debug data')
bar.info('module.bar doing something')
bar.error('module.bar something bad happened')
Here is the config I'm using:
version: 1
disable_existing_loggers: False
formatters:
  simple:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    filename: 'log.log'
    formatter: simple
    encoding: utf8
loggers:
  module:
    level: WARNING
    handlers: [console]
    propagate: yes
  module.foo:
    level: INFO
    handlers: [console]
    propagate: yes # If yes, gets displayed twice. If false, entry is missing in log file
root:
  level: DEBUG
  handlers: [file]
And here is the output:
2017-09-21 10:48:39,679 - module.foo - INFO - module.foo doing something
2017-09-21 10:48:39,679 - module.foo - INFO - module.foo doing something
2017-09-21 10:48:39,681 - module.bar - ERROR - module.bar something bad happened
The log.info from the child module gets displayed twice because the propagate field is set to yes in the config.
Setting it to false solves the issue in the console but breaks the log file, because the entry is then missing from it.
How can I solve this? Are there alternatives to the standard library, which I personally find counterintuitive?
EDIT 1
New config after @wmorell's answer:
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    filename: 'log.log'
    formatter: simple
    encoding: utf8
loggers:
  module:
    level: WARNING
    handlers: [console]
    propagate: yes
  module.foo:
    level: DEBUG              # <- set this to DEBUG
    handlers: [file, console] # <- add file here
    propagate: false
root:
  level: DEBUG
  handlers: [file]
Console output is OK:
2017-09-21 11:14:51,174 - module.foo - INFO - module.foo doing something
2017-09-21 11:14:51,174 - module.bar - ERROR - module.bar something bad happened
Log file output is not OK; it misses the call to bar.info('module.bar doing something'):
2017-09-21 11:18:34,335 - module.foo - INFO - module.foo doing something
2017-09-21 11:18:34,335 - module.foo - DEBUG - module.foo debug data
2017-09-21 11:18:34,335 - module.bar - ERROR - module.bar something bad happened
Add the file handler explicitly to the logger definitions, and then duplicate the console handler to filter out different log levels:
handlers:
  console_info:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    stream: ext://sys.stdout
  console_warning:
    class: logging.StreamHandler
    level: WARNING
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    filename: 'log.log'
    formatter: simple
    encoding: utf8
loggers:
  module:
    level: DEBUG
    handlers: [file, console_warning]
    propagate: false
  module.foo:
    level: DEBUG
    handlers: [file, console_info]
    propagate: false
Logs get filtered at the logger first, so the module and module.foo loggers must allow DEBUG if debug messages are to make it to the log file. The loggers then forward messages to all of their handlers, and handlers drop messages below their configured thresholds; so you want a handler that drops INFO logs for the base module logger, and a handler that allows INFO logs for the more specific module.foo logger.
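The two-stage check is easy to see without any configuration file; a minimal sketch (names are illustrative):

import logging

logger = logging.getLogger('module')
logger.setLevel(logging.DEBUG)       # stage 1: the logger's threshold

verbose = logging.FileHandler('log.log')
verbose.setLevel(logging.DEBUG)      # stage 2a: the file takes everything
terse = logging.StreamHandler()
terse.setLevel(logging.WARNING)      # stage 2b: console only WARNING and up
logger.addHandler(verbose)
logger.addHandler(terse)

logger.info('file only')             # passes stage 1, dropped by terse
logger.warning('file and console')   # passes both stages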
I have named my Python loggers following the practice described in Naming Python loggers
Everything works fine if I use basicConfig(). But now I'm trying to use a configuration file and dictConfig() to configure the loggers at runtime.
The docs at http://docs.python.org/2/library/logging.config.html#dictionary-schema-details seem to say that I can have a "root" key in my dictionary that configures the root logger. But if I configure only this logger, I don't get any output.
Here's what I have:
logging_config.yaml
version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(pathname)s:%(lineno)s - %(message)s'
    datefmt: '%Y%m%d %H:%M:%S'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.FileHandler
    level: DEBUG
    formatter: simple
    filename: 'test.log'
    mode: "w"
# If I explicitly define a logger for __main__, it works
#loggers:
#  __main__:
#    level: DEBUG
#    handlers: [console, file]
root:
  level: DEBUG
  handlers: [console, file]
test_log.py
import logging
logger = logging.getLogger(__name__)

import logging.config
import yaml

if __name__ == "__main__":
    with open("logging_config.yaml", "r") as f:
        log_config = yaml.safe_load(f)
    logging.config.dictConfig(log_config)
    #logging.basicConfig()  # This works, but dictConfig doesn't
    logger.critical("OH HAI")
    logging.shutdown()
Why doesn't this produce any logging output, and what's the proper way to fix it?
The reason is that you haven't specified disable_existing_loggers: false in your YAML, and the __main__ logger already exists at the time dictConfig is called. That logger is therefore disabled, because it isn't explicitly named in the configuration (if it were named, it would not be disabled).
Just add that line to your YAML:
version: 1
disable_existing_loggers: false
formatters:
  simple:
    ...
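An alternative that sidesteps the problem (a minimal sketch, assuming the same logging_config.yaml) is to create the module logger only after dictConfig() has run, so it does not yet exist at configuration time and cannot be disabled:

import logging
import logging.config

import yaml

if __name__ == "__main__":
    with open("logging_config.yaml", "r") as f:
        logging.config.dictConfig(yaml.safe_load(f))
    logger = logging.getLogger(__name__)  # created after dictConfig
    logger.critical("OH HAI")
    logging.shutdown()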