Understanding Python logger names

I have named my Python loggers following the practice described in Naming Python loggers.
Everything works fine if I use basicConfig(), but now I'm trying to use a configuration file and dictConfig() to configure the loggers at runtime.
The docs at http://docs.python.org/2/library/logging.config.html#dictionary-schema-details seem to say that I can have a "root" key in my dictionary that configures the root logger. But if I configure only this logger, I don't get any output.
Here's what I have:
logging_config.yaml
version: 1
formatters:
  simple:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(pathname)s:%(lineno)s - %(message)s'
    datefmt: '%Y%m%d %H:%M:%S'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.FileHandler
    level: DEBUG
    formatter: simple
    filename: 'test.log'
    mode: "w"
# If I explicitly define a logger for __main__, it works
#loggers:
#  __main__:
#    level: DEBUG
#    handlers: [console, file]
root:
  level: DEBUG
  handlers: [console, file]
test_log.py
import logging
logger = logging.getLogger(__name__)

import logging.config
import yaml

if __name__ == "__main__":
    log_config = yaml.load(open("logging_config.yaml", "r"))
    logging.config.dictConfig(log_config)
    #logging.basicConfig() #This works, but dictConfig doesn't
    logger.critical("OH HAI")
    logging.shutdown()
Why doesn't this produce any logging output, and what's the proper way to fix it?

The reason is that you haven't specified disable_existing_loggers: false in your YAML, and the __main__ logger already exists at the time dictConfig() is called. Because that logger isn't explicitly named in the configuration, it gets disabled (loggers that are named in the configuration are left enabled).
Just add that line to your YAML:
version: 1
disable_existing_loggers: false
formatters:
  simple:
    ...
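An alternative fix that keeps the YAML untouched is to create the logger only after dictConfig() has run, so it doesn't exist at configuration time and cannot be disabled. A minimal sketch of the reordered test_log.py:

import logging
import logging.config

import yaml

if __name__ == "__main__":
    with open("logging_config.yaml", "r") as f:
        log_config = yaml.safe_load(f)
    logging.config.dictConfig(log_config)

    # Created after configuration, so dictConfig cannot have disabled it.
    logger = logging.getLogger(__name__)
    logger.critical("OH HAI")
    logging.shutdown()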

Related

JSON-log-formatter with logging.config.dictConfig

I recently discovered JSON-log-formatter and would like to use it to write my log output as JSON. Configuring it in the script is relatively straightforward, but most of my existing code uses logging.config.dictConfig to load the logging config from a YAML file, so I can easily reconfigure logging without modifying the script itself, and I can't figure out how to reference a Python module as a formatter within the config.
Current YAML
version: 1
disable_existing_loggers: False
formatters:
  simple:
    format: "%(asctime)s | %(name)s | %(funcName)s() | %(levelname)s | %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  file_handler:
    class: logging.handlers.TimedRotatingFileHandler
    level: INFO
    formatter: simple
    filename: LogFile.log
    encoding: utf8
    backupCount: 30
    when: 'midnight'
    interval: 1
    delay: True
loggers:
  my_module:
    level: ERROR
    handlers: [console]
    propagate: no
root:
  level: INFO
  handlers: [console, file_handler]
The above config works, but I can't figure out how to change the formatter to the JSON one from that module. Something like this (I know this doesn't work):
formatters:
  simple:
    format: json_log_formatter.VerboseJSONFormatter()
Python loading up that YAML
import logging.config
import json_log_formatter
import yaml

with open('./logging.yaml', 'r') as stream:
    logConfig = yaml.load(stream, Loader=yaml.FullLoader)
logging.config.dictConfig(logConfig)
I get that Python is just loading the YAML into a dictionary and then configuring logging based on that dictionary, but how do I set the formatter so that it correctly references the JSON logging module and uses it?
Never mind, of course I figured it out after posting this.
formatters:
  myformat:
    (): json_log_formatter.VerboseJSONFormatter
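The () key is dictConfig's documented mechanism for user-defined objects: its value names a callable (here a class, given as a dotted path), and any remaining keys in that mapping are passed to the callable as keyword arguments. A rough dict-based equivalent of the YAML above, as a sketch (assuming json-log-formatter is installed):

import logging
import logging.config

config = {
    "version": 1,
    "formatters": {
        "myformat": {
            # "()" makes dictConfig call this factory instead of logging.Formatter
            "()": "json_log_formatter.VerboseJSONFormatter",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "myformat",
            "stream": "ext://sys.stdout",
        },
    },
    "root": {"level": "INFO", "handlers": ["console"]},
}

logging.config.dictConfig(config)
logging.getLogger(__name__).info("this record is emitted as JSON")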

Logging with Python, ROS, and C++

I have a codebase of Python and C++ code, including heavy use of ROS. Logging is done throughout the Python code with both the standard logger and rospy logging -- a contrived example:
import logging
import rospy

logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger(__name__)

def run():
    rospy.loginfo("This is a ROS log message")
    LOG.info("And now from Python")

if __name__ == '__main__':
    run()
As for the C++ code, we need to add logging, probably with glog, but I'm open to other options.
Is there a way to integrate the various loggers into one module? Ideally a user could do something like my_logger = AwesomeLogger(level='info', output='my_logs.txt') and then AwesomeLogger behind the scenes sets up the Python and C++ loggers and combines all log output (including from ROS) into clean console messages and a text file.
Note we target support for Ubuntu 16.04, ROS Kinetic, C++11, Python 2.7*
*If a solution provides a rationale for moving to Python 3.6 you get bonus points!
UPDATE
If I load a dict config from YAML (as described in this post on logging best practices), I can specify handlers and loggers for ROS. But with the YAML below I get rospy log messages duplicated in the console, one in the standard rospy log format and the other in my specified format. Why?
version: 1
disable_existing_loggers: True
formatters:
  my_std:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    datefmt: "%Y/%m/%d %H:%M:%S"
handlers:
  console:
    class: logging.StreamHandler
    formatter: my_std
    level: DEBUG
    stream: ext://sys.stdout
  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    formatter: my_std
    filename: info.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
  rosconsole:
    class: rosgraph.roslogging.RosStreamHandler
    level: DEBUG
    formatter: my_std
    colorize: True
loggers:
  my_module:
    level: INFO
    handlers: [console]
    propagate: no
  rosout:
    level: INFO
    handlers: [rosconsole]
    propagate: yes
    qualname: rosout
root:
  level: INFO
  handlers: [console, info_file_handler, rosconsole]
You can use rospy.loginfo(), which is made for ROS Python, and ROS_INFO(), which is made for ROS C++, and see their output integrated in one place. To do that:
$ roscd log
You will see several .log files; then use the following command to follow them:
$ tail -f <logfile-name.log>
[UPDATE]
Also, you could subscribe to the /rosout topic or echo it ($ rostopic echo /rosout).
Reference.
This config YAML does the trick (the duplicated console message most likely came from attaching the rosconsole handler on top of the handler rospy already installs for rosout, so that handler is dropped here):
version: 1
disable_existing_loggers: True
formatters:
  my_std:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    datefmt: "%Y/%m/%d %H:%M:%S"
handlers:
  console:
    class: logging.StreamHandler
    formatter: my_std
    level: DEBUG
    stream: ext://sys.stdout
  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    formatter: my_std
    filename: info.log
    maxBytes: 10485760 # 10MB
    backupCount: 20
    encoding: utf8
loggers:
  __main__:
    level: DEBUG
    handlers: [console]
    propagate: no
  rosout:
    level: INFO
    propagate: yes
    qualname: rosout
root:
  level: INFO
  handlers: [console, info_file_handler]
And to load it:
import logging
import logging.config
import os

import yaml

if os.path.exists(config_path):
    with open(config_path, 'rt') as f:
        config = yaml.safe_load(f.read())
    logging.config.dictConfig(config)
In Python, I use the standard logging library and remap the logger to ROS_LOG. Note that ROS_LOG won't respect formatting if you try to use ros_stream_handler.setFormatter().
import logging
from rosgraph.roslogging import RosStreamHandler

logger = logging.getLogger(__name__)  # the logger being remapped to ROS_LOG
ros_stream_handler = RosStreamHandler()
ros_stream_handler.setLevel(logging.DEBUG)
logger.addHandler(ros_stream_handler)

Changing formatter when using logging module with yaml in python

When using the logging module in Python, we can initialize general settings for each logger using a YAML file. My test code looks like:
[main.py]
import yaml
import logging, logging.config

def setup_logging(default_level=logging.DEBUG):
    with open("./logging.yaml", 'rt') as f:
        configdata = yaml.safe_load(f.read())
    logging.config.dictConfig(configdata)

setup_logging()
dbg = logging.getLogger(__name__)
dbg.info("Test")
[logging.yaml]
version: 1
disable_existing_loggers: False
formatters:
  detail:
    format: "%(asctime)s - %(name)s - %(message)s"
  onlymessage:
    format: "%(message)s"
handlers:
  file_handler:
    class: logging.FileHandler
    level: DEBUG
    formatter: detail
    filename: ./log
    mode: w
loggers:
  __main__:
    level: DEBUG
    handlers: [file_handler]
    propagate: No
So for "file_handler", the default formatter is "detail". Then how do I change the formatter of this logger to another one, in this case "onlymessage"?
I know that if we were given a formatter object, we could use Handler.setFormatter() to change a handler's formatter, like:
dbg.handlers[0].setFormatter(FORMATTER NAME)
But since I specified all the formatter information in the YAML file and used logging.config when initializing the logger, I have no formatter object. I think that if I can retrieve the formatter object defined in that YAML file, the problem can be solved. Or is there any other way to do this?
Yes, "unused" formatters will be lost in the process if you don't bind them to any handler.
You can check the source to see what happens; here is the interesting bit.
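One workaround, since the parsed YAML is just a dict, is to rebuild the formatter from the same data and swap it onto the handler yourself. A sketch along those lines (assuming it runs in main.py after setup_logging(), with the logging.yaml above):

import logging

import yaml

# Re-read the YAML to get at the formatter definitions that dictConfig consumed.
with open("./logging.yaml", "rt") as f:
    configdata = yaml.safe_load(f.read())

# Build a plain logging.Formatter from the "onlymessage" entry.
onlymessage = logging.Formatter(configdata["formatters"]["onlymessage"]["format"])

dbg = logging.getLogger(__name__)
dbg.handlers[0].setFormatter(onlymessage)  # the file_handler from the config
dbg.info("logged with the message-only format")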

universal logger name in python logging yaml config file

Ok, so the situation is that I need to use a YAML config file for logging (don't ask, I just need it :) ). And when writing the loggers: directive, I would like to use one logger and be able to fetch it from multiple modules in my app using getLogger(__name__). I know how to do it if I use a normal Python config file for logging, but I can't find a way to do the same with a YAML file.
Using Python 2.7.
So long story short, here's what I have (this is just a simplified example of my problem, not part of the actual application :) ):
# this is app.py
import logging.config
import yaml
import os

def init_logging():
    path = 'logging.yaml'
    if os.path.exists(path):
        with open(path, 'r') as f:
            config = yaml.safe_load(f.read())
        logging.config.dictConfig(config['logging'])
    main()

def main():
    logger = logging.getLogger('app')
    logger.debug("done!")

init_logging()
and here's the logging.yaml config file:
logging:
  version: 1
  formatters:
    brief:
      format: '%(message)s'
    default:
      format: '%(asctime)s %(levelname)-8s [%(name)s] %(message)s'
      datefmt: '%Y-%m-%d %H:%M:%S'
  handlers:
    console:
      class: logging.StreamHandler
      level: DEBUG
  loggers:
    app:
      handlers: [console]
      level: DEBUG
So as it is here, it works: the 'done!' message shows in the console.
But I want to be able to set in the config not some distinct logger (here I called it 'app') but a universal one; in a .py config it would be:
"loggers": {
"": {
"handlers": ["console"],
"level": "DEBUG",
},
}
and then I would use logging.getLogger(__name__) in different modules, and it would always use the one "" logger and show me the messages.
So is there a way to create a universal logger in YAML, like the "": in the Python logging config?
I tried (), ~, null; those don't do the job.
Basically I need to be able to call loggers with any names I want and get the one specified logger.
And yep, I can create a root directive in the YAML and call it by using logging.getLogger().
The trick is to use a special logger called root, outside the list of other loggers.
I found this in the Python documentation:
root - this will be the configuration for the root logger. Processing of the configuration will be as for any logger, except that the propagate setting will not be applicable.
Here's the configuration from your question changed to use the root key:
logging:
  version: 1
  formatters:
    brief:
      format: '%(message)s'
    default:
      format: '%(asctime)s %(levelname)-8s [%(name)s] %(message)s'
      datefmt: '%Y-%m-%d %H:%M:%S'
  handlers:
    console:
      class: logging.StreamHandler
      level: DEBUG
  root:
    handlers: [console]
    level: WARN
  loggers:
    app:
      level: DEBUG
That configures app at the DEBUG level, and everything else at the WARN level.
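A quick sketch to see that behaviour (assuming the YAML above is saved as logging.yaml next to the script):

import logging
import logging.config

import yaml

with open("logging.yaml", "r") as f:
    config = yaml.safe_load(f.read())
logging.config.dictConfig(config["logging"])

logging.getLogger("app").debug("shown: app is configured at DEBUG")
logging.getLogger("app.child").debug("shown: inherits DEBUG from app")
logging.getLogger("other").info("dropped: other falls back to root's WARN")
logging.getLogger("other").warning("shown: meets root's WARN level")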
Okay, so I found an answer (thanks to @flyx!).
I compared the original Python dict config and the dict config converted from YAML, and found out that logging automatically adds disable_existing_loggers: False to the Python config. After that I added this line to the YAML and used '' in the loggers directive, and it worked!
So the resulting YAML config is like:
..............
disable_existing_loggers: False
loggers:
  '':
    handlers: [console, sentry]
    level: DEBUG
    propagate: False
And now it works: even if I create a logger like logging.getLogger('something') and logger 'something' is not in the config, the app will use the '' logger.
@Don Kirkby's answer won't work if the logger is defined at the beginning of the file (before configuring logging), but it works with no problem if the logger is defined after configuring logging. So it's a fix for the code from my question, but not an answer to the question "is there a way to create a universal logger in YAML, like the "": in the Python logging config?" That's why I didn't pick it as the answer. But his comment is totally legit :)

logs of child logger get displayed twice

I cannot achieve proper logging of my module using Python's standard logging, yet it's a very simple case.
I have the following module hierarchy:
module\
    foo.py
    bar.py
I need to log from each of these modules with the following constraints:
all logs >= INFO from module.foo to the console (because what this module does is important and the user must be notified live)
all logs from module.* into a file
all logs >= WARNING from module.* to the console
Here is the main code
import logging
import logging.config
import os
import yaml

def setup_logging():
    loadfrom = os.path.join(os.path.dirname(__file__), 'config.yml')
    # Load
    with open(loadfrom, 'rt') as f:
        config = yaml.safe_load(f.read())
    logging.config.dictConfig(config)

setup_logging()
foo = logging.getLogger('module.foo')
bar = logging.getLogger('module.bar')

foo.info('module.foo doing something')
foo.debug('module.foo debug data')
bar.info('module.bar doing something')
bar.error('module.bar something bad happened')
Here is the config I'm using
version: 1
disable_existing_loggers: False
formatters:
  simple:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    filename: 'log.log'
    formatter: simple
    encoding: utf8
loggers:
  module:
    level: WARNING
    handlers: [console]
    propagate: yes
  module.foo:
    level: INFO
    handlers: [console]
    propagate: yes # If yes, gets displayed twice. If false, entry is missing in log file
root:
  level: DEBUG
  handlers: [file]
And here is the output:
2017-09-21 10:48:39,679 - module.foo - INFO - module.foo doing something
2017-09-21 10:48:39,679 - module.foo - INFO - module.foo doing something
2017-09-21 10:48:39,681 - module.bar - ERROR - module.bar something bad happened
The info call from the child module gets displayed twice, because the propagate field is set to yes in the config. Setting it to false solves the issue in the console but breaks the log file, because the entry is then missing from it.
How can I solve this? Any alternatives to the standard library, which I personally find counterintuitive?
EDIT 1
New config after @wmorell's answer:
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    filename: 'log.log'
    formatter: simple
    encoding: utf8
loggers:
  module:
    level: WARNING
    handlers: [console]
    propagate: yes
  module.foo:
    level: DEBUG              # <- set this to DEBUG
    handlers: [file, console] # <- add file here
    propagate: false
root:
  level: DEBUG
  handlers: [file]
Console output is OK:
2017-09-21 11:14:51,174 - module.foo - INFO - module.foo doing something
2017-09-21 11:14:51,174 - module.bar - ERROR - module.bar something bad happened
Log output is not OK; it misses the bar.info('module.bar doing something') call:
2017-09-21 11:18:34,335 - module.foo - INFO - module.foo doing something
2017-09-21 11:18:34,335 - module.foo - DEBUG - module.foo debug data
2017-09-21 11:18:34,335 - module.bar - ERROR - module.bar something bad happened
Add the file handler explicitly to the logger definitions, and then duplicate the console handler to filter out different log levels:
handlers:
  console_info:
    class: logging.StreamHandler
    level: INFO
    formatter: simple
    stream: ext://sys.stdout
  console_warning:
    class: logging.StreamHandler
    level: WARNING
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    filename: 'log.log'
    formatter: simple
    encoding: utf8
loggers:
  module:
    level: DEBUG
    handlers: [file, console_warning]
    propagate: false
  module.foo:
    level: DEBUG
    handlers: [file, console_info]
    propagate: false
Logs get filtered at the logger first, so the module and module.foo loggers must allow DEBUG if those messages are to make it into the log file. Each logger then forwards its records to all of its handlers, and each handler drops records below its own configured threshold; so you want a handler that drops INFO records for the base module logger, and a handler that lets INFO records through for the more specific module.foo logger.
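The same two-stage filtering is easy to see without any YAML; a minimal sketch using the logging API directly:

import logging
import sys

logger = logging.getLogger("module")
logger.setLevel(logging.DEBUG)        # stage 1: the logger's own threshold
logger.propagate = False

console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.WARNING)     # stage 2: each handler's threshold
logger.addHandler(console)

logfile = logging.FileHandler("log.log")
logfile.setLevel(logging.DEBUG)
logger.addHandler(logfile)

logger.info("passes the logger, reaches only the file handler")
logger.warning("passes the logger and both handlers")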
