Python loggers/handlers misconfigured? - python

Though I have been working with python/ipython for some time now, I consider myself a newbie. There are still many things, especially about logging support, that I thought I understood from the documentation but that are apparently more difficult to configure than I had hoped. I am using ipython 5.5.0 / python 2.7.17 on Xubuntu 18.04.04 LTS with coloredlogs. My logging configuration module is below.
import coloredlogs
import datetime
import logging
import logging.config
import os
import yaml
def setup_logging( default_path='../Config/logging.yaml',
                   default_level=logging.DEBUG,
                   env_key='LOG_CFG'):
    path = os.path.join(os.path.dirname(os.path.realpath(__file__)), default_path)
    value = os.getenv(env_key, None)
    # If the envvar is set, use its value
    if value:
        path = value
    _dt = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    print("%s Using Logging Configuration: %s" % (_dt, path))
    #
    # If the configuration file path is there, read it
    #
    if os.path.exists(path):
        with open(path, 'rt') as f:
            try:
                config = yaml.safe_load(f.read())
                logging.config.dictConfig(config)
                coloredlogs.install(level=default_level)
            except Exception as err:
                print(err)
                print('Error in Logging Configuration. Using default configs')
                logging.basicConfig(level=default_level)
                coloredlogs.install(level=default_level)
    # Otherwise, continue without a configuration
    else:
        logging.basicConfig(level=logging.DEBUG)
        coloredlogs.install(level=logging.DEBUG)
        print('Failed to load configuration file. Using default configs')
The configuration is held in a yaml file with the following definitions.
version: 1
disable_existing_loggers: False
formatters:
  basic:
    format: "%(name)s - %(message)s"
  standard:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
  error:
    format: "%(levelname)s <PID %(process)d:%(processName)s> %(name)s.%(funcName)s(): %(message)s"
handlers:
  console_basic:
    class: logging.StreamHandler
    level: DEBUG
    formatter: basic
    stream: ext://sys.stdout
  console_out:
    class: logging.StreamHandler
    level: DEBUG
    formatter: standard
    stream: ext://sys.stdout
  console_err:
    class: logging.StreamHandler
    level: DEBUG
    formatter: standard
    stream: ext://sys.stderr
  debug_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    formatter: standard
    filename: /tmp/debug.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    formatter: standard
    filename: /tmp/info.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
  warn_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: WARN
    formatter: standard
    filename: /tmp/warn.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
  error_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: ERROR
    formatter: error
    filename: /tmp/errors.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
  critical_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: CRITICAL
    formatter: standard
    filename: /tmp/critical.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
root:
  level: CRITICAL
  handlers: [console_err]
  propogate: no
loggers:
  test:
    level: DEBUG
    handlers: [console_basic]
    propogate: no
  Utils.paragraph_processing:
    level: DEBUG
    handlers: [info_file_handler, debug_file_handler, warn_file_handler, error_file_handler, critical_file_handler]
    propogate: no
  Utils.graphing_functions:
    level: DEBUG
    handlers: [info_file_handler, debug_file_handler, warn_file_handler, error_file_handler, critical_file_handler]
    propogate: no
A snippet of my test.py module follows.
import coloredlogs
from copy import deepcopy
import cv2
import imutils
import logging
import logging.config
import os
import yaml
import matplotlib.pyplot as PLT
import matplotlib.image as MPI
import numpy as np
import Tests.filtering_tests as FT
import Tests.morphology_tests as MT
import Utils.global_defs as GL
import Utils.graphing_functions as GF
import Utils.paragraph_processing as PP
import Utils.logging_functions as LF
.
.
.
def phony_main():
    LF.setup_logging()

    # create logger
    LOG = logging.getLogger(__name__)
    LOG.critical("Logging Started...")

# -----------------------------------------------------------------------------
#
# Main
#
img = None

if __name__ == "__main__":
    # execute only if run as a script
    phony_main()
My question is this: when I changed the configuration from [console_out] to [console_basic], I expected the messages to conform to the new format, but they do not. That leads me to believe that some other logger, root(?), is handling the call. But if I change root to use [console_basic], the messages are still the same. That is, one would expect the time and levelname to no longer be there, but they are!
Again, I do not pretend to understand what's going on, but where I thought the documentation showed simple inheritance, I am beginning to wonder if it's a bit more complicated than that. What am I doing wrong?
When I fix my spelling mistake and remove the logger for test, I still get the same behavior. Turning propagation on, so that console logs go to the root logger, which now has [console_basic], still shows the messages in the old format.
Making the following changes to my yaml, seems to fix the issues, as pointed out by #blues.
root:
  level: NOTSET
  handlers: [console_basic]
  propagate: no
loggers:
  __main__:
    level: DEBUG
    handlers: [console_basic]
    propagate: no
  Utils.paragraph_processing:
    level: DEBUG
    handlers: [info_file_handler, debug_file_handler, warn_file_handler, error_file_handler, critical_file_handler]
    propagate: no
  Utils.graphing_functions:
    level: DEBUG
    handlers: [info_file_handler, debug_file_handler, warn_file_handler, error_file_handler, critical_file_handler]
    propagate: no

There are two things going on here. First of all, there is a misspelling of propagate in the config: it is wrongly spelled propogate, with an "o" where an "a" should be. That means all the loggers do in fact propagate their logs up the hierarchy.
The second thing is that when propagation is on, the level of the ancestor loggers, in this case the root logger, is ignored and only the level of the handlers is taken into consideration. Since the console_err handler attached to root has level DEBUG, and all logs propagate to root, that handler will log everything.
The relevant piece of information can be found in the Python documentation:
Messages are passed directly to the ancestor loggers’ handlers -
neither the level nor filters of the ancestor loggers in question are
considered.
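A minimal standalone sketch of that behavior, using only the standard library (not from the original post): the root logger's CRITICAL level is bypassed when a child propagates a record to root's handlers.
import logging

# Root logger at CRITICAL, but its handler at DEBUG.
root = logging.getLogger()
root.setLevel(logging.CRITICAL)
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
root.addHandler(handler)

# A child logger that propagates (the default).
child = logging.getLogger("test")
child.setLevel(logging.DEBUG)

# The record passes the child's own level check, then goes straight to
# root's handlers -- root's CRITICAL level is never consulted.
child.debug("this is printed despite root being CRITICAL")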

Related

Filtering log levels

I am trying to separate the log levels into separate files (one for each level). At the moment I have defined a file for each level, but with my current configuration messages from the higher levels also end up in the lower-level files.
My log configuration is:
version: 1
formatters:
  standard:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
  error:
    format: "%(levelname)s <PID %(process)d:%(processName)s> %(name)s.%(funcName)s(): %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    formatter: standard
    level: DEBUG
  debug_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: standard
    level: DEBUG
    filename: logs/debug.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: standard
    level: INFO
    filename: logs/info.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
  warning_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: standard
    level: WARNING
    filename: logs/warning.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
  error_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: error
    level: ERROR
    filename: logs/error.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
  critical_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: error
    level: CRITICAL
    filename: logs/critical.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
loggers:
  development:
    handlers: [ console, debug_file_handler ]
    propagate: false
  production:
    handlers: [ info_file_handler, warning_file_handler, error_file_handler, critical_file_handler ]
    propagate: false
root:
  handlers: [ debug_file_handler, info_file_handler, warning_file_handler, error_file_handler, critical_file_handler ]
And I load the configuration and set the logger like this:
with open(path_log_config_file, 'r') as config_file:
    config = yaml.safe_load(config_file.read())
    logging.config.dictConfig(config)

logger = logging.getLogger(LOGS_MODE)
logger.setLevel(LOGS_LEVEL)
Where LOGS_MODE and LOGS_LEVEL are defined in a configuration file in my project:
# Available loggers: development, production
LOGS_MODE = 'production'
# Available levels: CRITICAL = 50, ERROR = 40, WARNING = 30, INFO = 20, DEBUG = 10
LOGS_LEVEL = 20
And when I want to use the logger I do:
from src.logger import logger
I have found these answers that mention using filters: #1 #2. Both of them say to use different handlers and specify the level for each one, but with this approach I'll have to import different loggers in some cases instead of only one. Is this the only way to achieve it?
Regards.
UPDATE 1:
As I am using a YAML file to load the logger configuration I found this answer #3:
So I have defined the filters in my file logger.py:
with open(path_log_config_file, 'rt') as config_file:
    config = yaml.safe_load(config_file.read())
    logging.config.dictConfig(config)

class InfoFilter(logging.Filter):
    def __init__(self):
        super().__init__()

    def filter(self, record):
        return record.levelno == logging.INFO

class WarningFilter(logging.Filter):
    def __init__(self):
        super().__init__()

    def filter(self, record):
        return record.levelno == logging.WARNING

class ErrorFilter(logging.Filter):
    def __init__(self):
        super().__init__()

    def filter(self, record):
        return record.levelno == logging.ERROR

class CriticalFilter(logging.Filter):
    def __init__(self):
        super().__init__()

    def filter(self, record):
        return record.levelno == logging.CRITICAL

logger = logging.getLogger(LOGS_MODE)
logger.setLevel(LOGS_LEVEL)
And in the YAML file:
filters:
  info_filter:
    (): src.logger.InfoFilter
  warning_filter:
    (): src.logger.WarningFilter
  error_filter:
    (): src.logger.ErrorFilter
  critical_filter:
    (): src.logger.CriticalFilter
handlers:
  console:
    class: logging.StreamHandler
    formatter: standard
    level: DEBUG
  debug_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: standard
    level: DEBUG
    filename: logs/debug.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: standard
    level: INFO
    filename: logs/info.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
    filters: [ info_filter ]
  warning_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: standard
    level: WARNING
    filename: logs/warning.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
    filters: [ warning_filter ]
  error_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: error
    level: ERROR
    filename: logs/error.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
    filters: [ error_filter ]
  critical_file_handler:
    class: logging.handlers.RotatingFileHandler
    formatter: error
    level: CRITICAL
    filename: logs/critical.log
    encoding: utf8
    mode: "w"
    maxBytes: 10485760 # 10MB
    backupCount: 1
    filters: [ critical_filter ]
My problem now is in the filters section. I don't know how to specify the name of each class. In answer #3 he uses the __main__. prefix because he is running the script directly, not as a module, and he doesn't say how to do it when you use a module.
Reading the User-defined objects doc reference, I've tried to use ext:// as described in the Access to external objects section, but I get the same error as when trying to specify the hierarchy with src.logger.InfoFilter.
logging.config.dictConfig(config)
  File "/usr/lib/python3.8/logging/config.py", line 808, in dictConfig
    dictConfigClass(config).configure()
  File "/usr/lib/python3.8/logging/config.py", line 553, in configure
    raise ValueError('Unable to configure '
ValueError: Unable to configure filter 'info_filter'
python-BaseException
My project tree is (only the important part is shown):
.
├── resources
│   ├── log.yaml
│   └── properties.py
├── src
│   ├── main.py
│   └── logger.py
└── ...
I submit another answer because your question changed considerably with your update 1.
Notes on replication:
I recreated your directory tree
my PYTHONPATH pointed only at the root (the parent of src/ and resources/)
I ran the script from the root (current directory)
I created a logs/ directory at the top level (otherwise I got ValueError: Unable to configure handler 'critical_file_handler': [Errno 2] No such file or directory: 'C:\\PycharmProjects\\so69336121\\logs\\critical.log')
The problem you encountered was caused by circular imports. When the logger module was imported, it started by loading the YAML file, which asked for some src.logger.*Filter objects to be instantiated, which could not be found because the module was not yet fully initialized. I recommend putting effectful code into functions that can be called by the main function at startup.
Here is what I have :
# file: src/logger.py
import logging.config

import yaml  # from `pip install pyyaml`

path_log_config_file = "resources/log.yaml"
LOGS_LEVEL = logging.ERROR
LOGS_MODE = "production"

def setup_logging():
    with open(path_log_config_file, 'rt') as config_file:
        config = yaml.safe_load(config_file.read())
        logging.config.dictConfig(config)

# ... the rest of the file you provided

# file: src/main.py
from src.logger import setup_logging, logger

setup_logging()

logger.debug("DEBUG")
logger.info("INFO")
logger.warning("WARNING")
logger.error("ERROR")
logger.critical("CRITICAL")
Then I got an error:
ValueError: dictionary doesn't specify a version
solved by adding this line to the top of the YAML file:
version: 1
cf. the documentation.
Then I got this error:
ValueError: Unable to configure handler 'console': Unable to set formatter 'standard': 'standard'
because your formatters were not defined (you probably mis-copy-pasted). Add this to your YAML file:
formatters:
  standard:
    format: '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
  error:
    format: 'ERROR %(asctime)s [%(levelname)s] %(name)s: %(message)s'
It ran with no error, but nothing was written to the logs. A quick debugger breakpoint showed that the *Filter.filter methods were never called. I examined the logger object, and indeed it had no handlers attached. That can be fixed in the YAML too:
loggers:
  production:
    handlers: [ debug_file_handler, info_file_handler, warning_file_handler, error_file_handler, critical_file_handler ]
    propagate: False
And now it works.
I think you misunderstood.
both of them say to use different handlers and specify the level for each one
Correct.
but with this approach I'll have to import different loggers in some cases instead of only one
No, you can add as many handlers as you want to one logger. That's why the method is called Logger.addHandler and why each logger object has a list of handlers (its .handlers attribute).
You only need to have one logger setup with your 5 handlers.
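For illustration, a minimal sketch of one logger with several handlers; the file names here are illustrative, and the logs/ directory must already exist.
import logging

logger = logging.getLogger("production")
logger.setLevel(logging.DEBUG)  # let the logger pass everything
logger.propagate = False

# One logger, several handlers; each handler applies its own threshold.
for filename, level in [("logs/info.log", logging.INFO),
                        ("logs/warning.log", logging.WARNING),
                        ("logs/error.log", logging.ERROR),
                        ("logs/critical.log", logging.CRITICAL)]:
    handler = logging.FileHandler(filename)
    handler.setLevel(level)
    logger.addHandler(handler)

# Without exact-match filters, a record lands in every file whose
# handler threshold it meets:
logger.error("lands in info.log, warning.log and error.log")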

JSON-log-formatter with logging.config.dictConfig

I recently discovered JSON-log-formatter and would like to use it to write my log output in JSON. Configuring it in the script is relatively straightforward, but most of my existing code uses logging.config.dictConfig to load the logging config from a YAML file, so I can easily configure logging without modifying the script itself, and I can't figure out how to add a python module as a formatter within the config.
Current YAML
version: 1
disable_existing_loggers: False
formatters:
  simple:
    format: "%(asctime)s | %(name)s | %(funcName)s() | %(levelname)s | %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  file_handler:
    class: logging.handlers.TimedRotatingFileHandler
    level: INFO
    formatter: simple
    filename: LogFile.log
    encoding: utf8
    backupCount: 30
    when: 'midnight'
    interval: 1
    delay: True
loggers:
  my_module:
    level: ERROR
    handlers: [console]
    propagate: no
root:
  level: INFO
  handlers: [console, file_handler]
The above config works, but I can't figure out how to change the formatter to JSON using that module. Something like this (I know this doesn't work):
formatters:
  simple:
    format: json_log_formatter.VerboseJSONFormatter()
The Python code that loads that YAML:
import logging.config
import json_log_formatter
import yaml

with open('./logging.yaml', 'r') as stream:
    logConfig = yaml.load(stream, Loader=yaml.FullLoader)

logging.config.dictConfig(logConfig)
I get that python is just loading the YAML into a dictionary object and then configuring logging based on that dictionary, but how do I set the formatter so that it correctly references that JSON logging module and uses it?
Never mind, of course I figured it out right after posting this.
formatters:
  myformat:
    (): json_log_formatter.VerboseJSONFormatter
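The special '()' key tells dictConfig to call the named factory to build a user-defined formatter. For reference, a minimal sketch of the same idea in dict form, assuming json_log_formatter is installed:
import logging
import logging.config

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        # "()" means: import and call this factory to build the formatter.
        "myformat": {"()": "json_log_formatter.VerboseJSONFormatter"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "myformat",
        },
    },
    "root": {"level": "INFO", "handlers": ["console"]},
})

logging.getLogger(__name__).info("now emitted as JSON")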

Logging with Python, ROS, and C++

I have a codebase of Python and C++ code, including heavy use of ROS. Logging is done throughout the Python code with both system logger and rospy logging -- contrived example:
import logging
import rospy

logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger(__name__)

def run():
    rospy.loginfo("This is a ROS log message")
    LOG.info("And now from Python")

if __name__ == '__main__':
    run()
As for the C++ code, we need to add logging, probably with glog, but I'm open to other options.
Is there a way to integrate the various loggers into one module? Ideally a user could do something like my_logger = AwesomeLogger(level='info', output='my_logs.txt') and then AwesomeLogger behind the scenes sets up the Python and C++ loggers and combines all log output (including from ROS) into clean console messages and an output text file.
Note we target support for ubuntu 16.04, ROS-kinetic, C++11, Python 2.7*
*If a solution provides a rationale for moving to Python 3.6, you get bonus points!
UPDATE
If I load a dict config from yaml (as described in this post on logging best practices), I can specify handlers and loggers for ROS. But with the yaml below I get duplicated rospy log messages on the console: one in the standard rospy log format and the other in my specified format. Why??
version: 1
disable_existing_loggers: True
formatters:
  my_std:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    datefmt: "%Y/%m/%d %H:%M:%S"
handlers:
  console:
    class: logging.StreamHandler
    formatter: my_std
    level: DEBUG
    stream: ext://sys.stdout
  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    formatter: my_std
    filename: info.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
  rosconsole:
    class: rosgraph.roslogging.RosStreamHandler
    level: DEBUG
    formatter: my_std
    colorize: True
loggers:
  my_module:
    level: INFO
    handlers: [console]
    propagate: no
  rosout:
    level: INFO
    handlers: [rosconsole]
    propagate: yes
    qualname: rosout
root:
  level: INFO
  handlers: [console, info_file_handler, rosconsole]
You can use rospy.loginfo(), which is made for ROS Python, and ROS_INFO(), which is made for ROS C++, and see their output integrated in one place. To do so:
$ roscd log
You will see several .log files; use the following command to follow one of them:
$ tail -f <logfile-name.log>
[UPDATE]
Also, you could subscribe to the /rosout topic or echo it ($ rostopic echo /rosout).
Reference.
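If you would rather consume /rosout programmatically than echo it, here is a minimal sketch of a subscriber; the topic carries rosgraph_msgs/Log messages, and the node name is illustrative.
import rospy
from rosgraph_msgs.msg import Log

def on_log(msg):
    # msg.level is a byte constant (Log.INFO, Log.WARN, ...), msg.name
    # is the reporting node, and msg.msg is the formatted message text.
    print("%s [%d]: %s" % (msg.name, msg.level, msg.msg))

rospy.init_node("log_listener", anonymous=True)
rospy.Subscriber("/rosout", Log, on_log)
rospy.spin()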
This config yaml does the trick:
version: 1
disable_existing_loggers: True
formatters:
  my_std:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    datefmt: "%Y/%m/%d %H:%M:%S"
handlers:
  console:
    class: logging.StreamHandler
    formatter: my_std
    level: DEBUG
    stream: ext://sys.stdout
  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    formatter: my_std
    filename: info.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
loggers:
  __main__:
    level: DEBUG
    handlers: [console]
    propagate: no
  rosout:
    level: INFO
    propagate: yes
    qualname: rosout
root:
  level: INFO
  handlers: [console, info_file_handler]
And to load it (the original snippet was missing the logging.config and os imports):
import logging.config
import os

import yaml

if os.path.exists(config_path):
    with open(config_path, 'rt') as f:
        config = yaml.safe_load(f.read())
        logging.config.dictConfig(config)
In Python, I use the standard logging library and remap the logger to ROS_LOG. Note that ROS_LOG won't respect your formatting if you try to use ros_stream_handler.setFormatter.
import logging

from rosgraph.roslogging import RosStreamHandler

logger = logging.getLogger(__name__)  # the original snippet omitted this line

ros_stream_handler = RosStreamHandler()
ros_stream_handler.setLevel(logging.DEBUG)
logger.addHandler(ros_stream_handler)

logs of child logger get displayed twice

I cannot achieve proper logging of my module using python's standard logging. Yet it's a very simple case.
I have the following module hierarchy:
module\
    foo.py
    bar.py
I need to log from each of these modules with the following constraints:
all logs >= INFO from module.foo to the console (because what this module does is important and user must be notified live)
all logs from module.* into a file
all logs >= WARNING from module.* to the console
Here is the main code
import logging
import logging.config
import os

import yaml

def setup_logging():
    loadfrom = os.path.join(os.path.dirname(__file__), 'config.yml')

    # Load
    with open(loadfrom, 'rt') as f:
        config = yaml.safe_load(f.read())
        logging.config.dictConfig(config)

setup_logging()

foo = logging.getLogger('module.foo')
bar = logging.getLogger('module.bar')

foo.info('module.foo doing something')
foo.debug('module.foo debug data')
bar.info('module.bar doing something')
bar.error('module.bar something bad happened')
Here is the config I'm using
version: 1
disable_existing_loggers: False
formatters:
  simple:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    filename: 'log.log'
    formatter: simple
    encoding: utf8
loggers:
  module:
    level: WARNING
    handlers: [console]
    propagate: yes
  module.foo:
    level: INFO
    handlers: [console]
    propagate: yes # If yes, gets displayed twice. If false, entry is missing in log file
root:
  level: DEBUG
  handlers: [file]
And here is the output :
2017-09-21 10:48:39,679 - module.foo - INFO - module.foo doing something
2017-09-21 10:48:39,679 - module.foo - INFO - module.foo doing something
2017-09-21 10:48:39,681 - module.bar - ERROR - module.bar something bad happened
The info log from the child module gets displayed twice because the propagate field is set to yes in the config.
Setting it to false solves the issue in the console but breaks the log file, because the entry is then missing from it.
How can I solve this? Are there any alternatives to the standard library, which I personally find counterintuitive?
EDIT 1
New config after #wmorell's answer:
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    filename: 'log.log'
    formatter: simple
    encoding: utf8
loggers:
  module:
    level: WARNING
    handlers: [console]
    propagate: yes
  module.foo:
    level: DEBUG               # <- set this to debug
    handlers: [file, console]  # <- add file here
    propagate: false
root:
  level: DEBUG
  handlers: [file]
Console output is OK:
2017-09-21 11:14:51,174 - module.foo - INFO - module.foo doing something
2017-09-21 11:14:51,174 - module.bar - ERROR - module.bar something bad happened
The log file output is not OK; it misses the bar.info('module.bar doing something') entry:
2017-09-21 11:18:34,335 - module.foo - INFO - module.foo doing something
2017-09-21 11:18:34,335 - module.foo - DEBUG - module.foo debug data
2017-09-21 11:18:34,335 - module.bar - ERROR - module.bar something bad happened
Add the file handler explicitly to the logger definitions, and then duplicate the console handler to filter out different log levels:
handlers:
  console_info:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    stream: ext://sys.stdout
  console_warning:
    class: logging.StreamHandler
    level: WARNING
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    filename: 'log.log'
    formatter: simple
    encoding: utf8
loggers:
  module:
    level: DEBUG
    handlers: [file, console_warning]
    propagate: false
  module.foo:
    level: DEBUG
    handlers: [file, console_info]
    propagate: false
Logs get filtered at the logger definition first, so the module and module.foo loggers must allow DEBUG if those are to make it to the log file. The loggers then forward messages to all handlers, and handlers can drop messages below their configured thresholds; so you want a handler that will drop INFO logs for the base module logger, and a handler that will allow INFO logs for the more specific module.foo logger.
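A minimal standalone sketch of that two-gate rule, using the same handler levels as the config above:
import logging

logger = logging.getLogger("module.foo")
logger.setLevel(logging.DEBUG)   # gate 1: the logger lets DEBUG through
logger.propagate = False

console = logging.StreamHandler()
console.setLevel(logging.INFO)   # gate 2a: the console drops DEBUG records

file_handler = logging.FileHandler("log.log")
file_handler.setLevel(logging.DEBUG)  # gate 2b: the file keeps everything

logger.addHandler(console)
logger.addHandler(file_handler)

logger.debug("file only")
logger.info("file and console")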

Understanding Python logger names

I have named my Python loggers following the practice described in Naming Python loggers
Everything works fine if I use basicConfig(). But now I'm trying to use a configuration file and dictConfig() to configure the loggers at runtime.
The docs at http://docs.python.org/2/library/logging.config.html#dictionary-schema-details seem to say that I can have a "root" key in my dictionary that configures the root logger. But if I configure only this logger, I don't get any output.
Here's what I have:
logging_config.yaml
version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(pathname)s:%(lineno)s - %(message)s'
    datefmt: '%Y%m%d %H:%M:%S'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.FileHandler
    level: DEBUG
    formatter: simple
    filename: 'test.log'
    mode: "w"
# If I explicitly define a logger for __main__, it works
#loggers:
#  __main__:
#    level: DEBUG
#    handlers: [console, file]
root:
  level: DEBUG
  handlers: [console, file]
test_log.py
import logging
logger = logging.getLogger(__name__)

import logging.config
import yaml

if __name__ == "__main__":
    log_config = yaml.load(open("logging_config.yaml", "r"))
    logging.config.dictConfig(log_config)
    #logging.basicConfig()  # This works, but dictConfig doesn't
    logger.critical("OH HAI")
    logging.shutdown()
Why doesn't this produce any logging output, and what's the proper way to fix it?
The reason is that you haven't specified disable_existing_loggers: false in your YAML, and the __main__ logger already exists at the time dictConfig is called. So that logger is disabled (because it isn't explicitly named in the configuration; if it were named, it would not be disabled).
Just add that line to your YAML:
version: 1
disable_existing_loggers: false
formatters:
  simple:
    ...
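Alternatively, a sketch of an ordering-based fix: create the module logger only after dictConfig has run, so there is no pre-existing logger for it to disable.
import logging
import logging.config

import yaml

if __name__ == "__main__":
    with open("logging_config.yaml", "r") as f:
        logging.config.dictConfig(yaml.safe_load(f))

    # The logger is created after configuration, so it is never
    # touched by disable_existing_loggers.
    logger = logging.getLogger(__name__)
    logger.critical("OH HAI")
    logging.shutdown()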
