I'm trying to enable logging to stdout for requests_oauthlib. The example in the docs suggests this:
# Uncomment for detailed oauthlib logs
#import logging
#import sys
#log = logging.getLogger('oauthlib')
#log.addHandler(logging.StreamHandler(sys.stdout))
#log.setLevel(logging.DEBUG)
But it doesn't seem to have any effect. What's the proper way to do it?
The logger's name should be requests_oauthlib, i.e. the package name; that is the package's top-level logger. The modules in the package define their loggers like this:
logger = logging.getLogger(__name__)
so configuring the package-level logger as described in the example should work:
import logging
import sys
log = logging.getLogger('requests_oauthlib')
log.addHandler(logging.StreamHandler(sys.stdout))
log.setLevel(logging.DEBUG)
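If you prefer to configure the root logger instead, a minimal equivalent sketch would be the following; note that it enables DEBUG output from every library that logs, not just requests_oauthlib:
import logging
import sys

# Attaches a stdout StreamHandler to the root logger; the module loggers
# inside requests_oauthlib propagate their records to it by default.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)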
I'm trying to establish logging in all modules I'm using. My project structure is
# driver.py
import logging
logger = logging.getLogger(__name__)
class driver:
    ....
# driver_wrapper.py
from driver import driver
device = driver(...)
def driver_func():
    logging.info("...")
    ....
# main.py
import logging
import sys
import driver_wrapper
logging.basicConfig(stream=sys.stdout, level=logging.WARNING)
driver_wrapper.driver_func()
My problem now is that I still get INFO level messages and also the output is 'INFO:root'. But I would expect the module name instead of root.
Is there a way to set the logging level in main.py for all modules, or is what I am doing already correct? There are a lot of posts about this problem, but the solutions don't seem to work for me.
All your modules that use logging should have the logger = logging.getLogger(__name__) line, and thereafter you should always call e.g. logger.info(...) and never e.g. logging.info(...). The latter logs to the root logger, not to the module's logger. That "all your modules" includes driver_wrapper.py in your example.
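For example, driver_wrapper.py would become something like this (a sketch based on the snippets above):
# driver_wrapper.py
import logging

from driver import driver

logger = logging.getLogger(__name__)  # logger named "driver_wrapper"

device = driver(...)

def driver_func():
    logger.info("...")  # tagged with the module name and filtered by the level set in main.py
With that change, the WARNING level configured in main.py applies to this module as well, and any record that does get through is prefixed with driver_wrapper instead of root.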
I'm trying to import from the following module:
https://github.com/dmlc/gluon-cv/blob/master/gluoncv/torch/data/gluoncv_motion_dataset/dataset.py
However this includes the lines:
logging.basicConfig(level=logging.INFO)
log = logging.getLogger()
This is messing up the logging settings I'm trying to apply in my main file. How can I import from this module but override the above log settings?
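One way around it follows from the fact that logging.basicConfig does nothing if the root logger already has handlers: apply your own configuration before the import, and the basicConfig(level=logging.INFO) call inside dataset.py becomes a no-op. A minimal sketch, assuming the import path mirrors the repository layout:
import logging

# Your settings first; the root logger then already has a handler, so the
# basicConfig call executed during the import below does nothing.
logging.basicConfig(level=logging.WARNING,
                    format='%(asctime)s %(name)s %(levelname)s %(message)s')

from gluoncv.torch.data.gluoncv_motion_dataset import dataset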
TensorFlow hides my logging messages; they no longer appear when I run the code.
I have tried the following, but couldn't find a way to make my code work:
import logging
import tensorflow as tf

logger = tf.get_logger()
logger.setLevel(logging.ERROR)
import os
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
So my code is the following
import logging
import tensorflow as tf
logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
I expected the debug messages to end up in my file example.log, but nothing appeared inside example.log.
When I import tensorflow, the messages don't appear; when I don't, they do.
I need to use both tensorflow and logging because I am working with existing code. Is there a way to stop TensorFlow from suppressing logging?
Two facts:
logging.basicConfig will do nothing if the root logger is already configured:
This function does nothing if the root logger already has handlers configured for it.
tensorflow has the absl-py dependency, which tries to initialize logging when imported by attaching its own handler (_absl_handler) to the root logger:
# The absl handler will always be attached to root, not the absl logger.
if not logging.root.handlers:
  # Attach the absl handler at import time when there are no other handlers.
  # Otherwise it means users have explicitly configured logging, and the absl
  # handler will only be attached later in app.run(). For App Engine apps,
  # the absl handler is not used.
  logging.root.addHandler(_absl_handler)
Not sure why the handler is attached to the root logger instead of the absl logger, though - might be a bug or a workaround for some other issue.
So the problem is that the import tensorflow call triggers import absl.logging, which causes this early logger configuration. Your subsequent call to logging.basicConfig will hence do nothing. To fix that, you need to configure logging before importing tensorflow:
import logging
logging.basicConfig(filename='example.log', level=logging.DEBUG)
import tensorflow as tf
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
Rule of thumb: always call your logging configuration as early as possible.
Writing logs in default format to file
If you just want to write the default logs to a file, the abseil logger can also do that:
from absl import logging as absl_logging
absl_logging.get_absl_handler().use_absl_log_file(
    program_name='mytool',
    log_dir='/var/logs/'
)
Besides the method offered by @hoefling, you can also clear the handlers of the root logger just before your own logging configuration:
logging.getLogger().handlers = []
# ...
logging.basicConfig(level=level, handlers=handlers)
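On Python 3.8+ you can also skip the manual clearing and pass force=True, which tells basicConfig to remove and close any handlers already attached to the root logger before applying your configuration, for example:
import logging
import tensorflow as tf  # attaches the absl handler to the root logger

# force=True (Python 3.8+) discards the handler installed during the tensorflow import.
logging.basicConfig(filename='example.log', level=logging.DEBUG, force=True)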
I am getting duplicate (double) log lines when using the Python logging module. I have 3 files:
1. main.py
2. dependencies.py
3. resources.py
I am making only one call to the logger constructor, which is done inside main.py.
Following are my import statements in the 3 files
main.py
import xml.etree.ElementTree as et
from configparser import ConfigParser
from Craftlogger import Craftlogger
logger = Craftlogger().getLogger()
dependencies.py
import os,sys
from main import getJobDetails,postRequest,logger
from configparser import ConfigParser
resources.py
import os,sys
import xml.etree.ElementTree as et
And inside the main method in the main.py, I have the imports
def main():
    from resources import getResourceDetails, setResources
    from dependencies import setDependencies
    ..... Remaining code .....
My logging file looks like this
import logging

class Craftlogger:
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        handler = logging.StreamHandler()
        formatter_string = '%(asctime)s | %(levelname)-8s | %(filename)s-%(funcName)s-%(lineno)04d | %(message)s'
        formatter = logging.Formatter(formatter_string)
        handler.setFormatter(formatter)
        self.logger.addHandler(handler)
        self.logger.setLevel(logging.DEBUG)
        self.logger.propagate = False

    def getLogger(self):
        return self.logger
Note: I had to do the imports inside main to work around circular imports.
My guess would be that two Craftlogger objects exist and both have the same self.logger member. logging.getLogger(__name__) returns the same logger object for the same name, so each Craftlogger() instantiation adds another handler to that one logger, resulting in duplicate output. The "two objects exist" part is just a guess, no guarantee.
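A few lines to illustrate the mechanism (the logger name is arbitrary):
import logging

first = logging.getLogger("Craftlogger")
second = logging.getLogger("Craftlogger")
assert first is second                    # getLogger returns the same object for the same name

first.addHandler(logging.StreamHandler())
second.addHandler(logging.StreamHandler())
first.warning("hello")                    # printed twice, once per attached handler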
Logging is a cross-cutting concern. As such, I frown upon classes which set up logging on their own. The responsibility for configuring logging (especially handlers) should lie solely with the main executing function, e.g. your main function. No submodule / class / function should modify logging, except for getting a logger via logging.getLogger(name).
This avoids most of these pitfalls and allows easy composition of modules.
Imagine you have to import two modules which both modify the logging system... fun.
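A minimal sketch of that split, with made-up module names:
# worker.py -- a submodule: only gets a logger, never configures handlers or levels
import logging

logger = logging.getLogger(__name__)

def do_work():
    logger.info("doing work")

# main.py -- the single place that configures logging
import logging
import worker

def main():
    logging.basicConfig(
        level=logging.DEBUG,
        format='%(asctime)s | %(levelname)-8s | %(name)s | %(message)s',
    )
    worker.do_work()

if __name__ == '__main__':
    main()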
I have a package with this structure :
mypackage
|
+---- a.py
+---- b.py
+---- __init__.py
This package is sometimes used as a library, sometimes interactively with IPython, so I need to configure logging differently in the two cases:
interactively: print logs to the console, so the loggers should have a StreamHandler
as a library: let the user configure logging, so the loggers should have a NullHandler
In __init__.py I do this:
import logging
import a
import b
logging.getLogger(__name__).addHandler(logging.NullHandler())
def get_loggers():
    """
    Get all the logger objects instantiated for the current package
    """
    loggers = []
    for logger in logging.Logger.manager.loggerDict.values():
        if not isinstance(logger, logging.Logger):
            continue
        if logger.name.startswith(__name__):
            loggers.append(logger)
    return loggers
def enable_logs():
    """
    Configure loggers to print on stdout/stderr
    """
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        '%(name)s :: %(levelname)s :: %(message)s'))
    for logger in get_loggers():
        logger.removeHandler(handler)
        logger.addHandler(handler)
        logger.setLevel(logging.DEBUG)
        logger.propagate = False
def disable_logs():
    """
    Configure loggers not to print anywhere
    """
    handler = logging.NullHandler()
    for logger in get_loggers():
        logger.removeHandler(handler)
        logger.addHandler(handler)
        logger.propagate = False
a.py and b.py both start with:
import logging
log = logging.getLogger(__name__)
log.addHandler(logging.NullHandler())
So now I can enable/disable logging by doing this:
import mypackage
mypackage.enable_logs()
mypackage.disable_logs()
But this solution is not PEP8 compliant, because in __init__.py I import modules that are not used. Note that I don't have to import them, but I want to, because then their respective loggers are created as soon as the package is imported.
Question 1: Is there a PEP8-compliant way to achieve the same goal?
Question 2: This is perhaps subjective, but is being PEP8 compliant worth it in this case?
You could call this function at the beginning of your enable_logs and disable_logs functions to load the modules dynamically, but I think getting around the PEP this way may be cheating:
import glob
import imp
import os

def load_modules(module_paths=None):
    if module_paths is None:
        cur_dir = os.path.realpath(os.path.dirname(__file__))
        module_wc = os.path.join(cur_dir, '*.py')
        # skip private modules such as __init__.py
        module_paths = [mp for mp in glob.glob(module_wc)
                        if not os.path.basename(mp).startswith('_')]
    # imp.load_source needs both a module name and a file path
    modules = [imp.load_source(os.path.splitext(os.path.basename(mp))[0], mp)
               for mp in module_paths]
    return modules
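On Python 3 the imp module is deprecated; a rough equivalent of the same idea with importlib (the function name here is made up):
import glob
import importlib.util
import os

def load_modules_py3(module_paths=None):
    if module_paths is None:
        cur_dir = os.path.realpath(os.path.dirname(__file__))
        module_paths = [p for p in glob.glob(os.path.join(cur_dir, '*.py'))
                        if not os.path.basename(p).startswith('_')]
    modules = []
    for path in module_paths:
        name = os.path.splitext(os.path.basename(path))[0]
        spec = importlib.util.spec_from_file_location(name, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        modules.append(module)
    return modules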