Logging not working if I import the module - Python

I have a module which should do some logging:
import logging

logging.basicConfig(filename='example.log', level=logging.DEBUG)

def do_something():
    logging.info("I did something")
Now if I use the module (let it be module.py), it does not do any logging:
import module
module.do_something()
Not even a logfile is created! Where is the bug?

Sometimes you have to specify the full path of the log file. Try that. For example:
import logging
logging.basicConfig(filename='C:/workspace/logging_proj/src/example.log', level=logging.DEBUG)
or you can have Python do it for you:
import os
import logging
LOG_FILENAME = os.path.join(os.path.dirname(__file__), 'example.log')
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
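The reason the full path helps is that a relative filename like example.log is resolved against the process's current working directory, which is not necessarily the directory module.py lives in. A quick sketch to check where the file would actually land:

import os

# example.log from a relative-path basicConfig call is created in this directory
print(os.getcwd())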

Related

Python logging with multiple module imports

I'm trying to establish logging in all the modules I'm using. My project structure is:
# driver.py
import logging

logger = logging.getLogger(__name__)

class driver:
    ....

# driver_wrapper.py
from driver import driver

device = driver(...)

def driver_func():
    logging.info("...")
    ....

# main.py
import sys
import logging

import driver_wrapper

logging.basicConfig(stream=sys.stdout, level=logging.WARNING)
driver_wrapper.driver_func()
My problem now is that I still get INFO-level messages, and the output shows 'INFO:root'. I would expect the module name instead of root.
Is there a way to set the logging level in main.py for all modules, or is what I'm doing already correct? There are a lot of posts about this problem, but the solutions don't seem to work for me.
All your modules that use logging should have the logger = logging.getLogger(__name__) line, and thereafter you should always log to e.g. logger.info(...) and never call e.g. logging.info(...). The latter is equivalent to logging to the root logger, not the module's logger. That "all your modules" includes driver_wrapper.py in your example.
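A minimal sketch of the fix for driver_wrapper.py, keeping the placeholder names from the question:

# driver_wrapper.py
import logging

from driver import driver

logger = logging.getLogger(__name__)  # a logger named "driver_wrapper"

device = driver(...)

def driver_func():
    # logs via the module logger: the WARNING level set by basicConfig in main.py
    # now filters this out, and any emitted record shows the module name, not "root"
    logger.info("...")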

Disable logging.basicConfig from imported module

I'm trying to import from the following module:
https://github.com/dmlc/gluon-cv/blob/master/gluoncv/torch/data/gluoncv_motion_dataset/dataset.py
However, this includes the lines:
logging.basicConfig(level=logging.INFO)
log = logging.getLogger()
Which is messing up the logging settings I'm trying to apply in my main file. How can I import from this module, but overwrite the above log settings?
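One sketch of a workaround, assuming Python 3.8+ (where basicConfig accepts force=True) and the module path implied by the repository URL:

import logging

# importing the module runs its logging.basicConfig(level=logging.INFO) side effect
from gluoncv.torch.data.gluoncv_motion_dataset import dataset  # module path assumed from the URL above

# force=True removes the handlers that the import installed on the root logger
# and applies this configuration instead
logging.basicConfig(level=logging.WARNING, force=True)

On older Pythons you can get the same effect by removing the entries in logging.root.handlers before making your own basicConfig call.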

How to enable logging for requests_oauthlib?

I'm trying to enable logging to stdout for requests_oauthlib. The example in the docs suggests this:
# Uncomment for detailed oauthlib logs
#import logging
#import sys
#log = logging.getLogger('oauthlib')
#log.addHandler(logging.StreamHandler(sys.stdout))
#log.setLevel(logging.DEBUG)
But it doesn't seem to have any effect. What's the proper way to do it?
The logger name should be requests_oauthlib, i.e. the package name. The modules in the package define their loggers like this:
logger = logging.getLogger(__name__)
so configuring the package-level logger as described in the example should work:
import logging
import sys
log = logging.getLogger('requests_oauthlib')
log.addHandler(logging.StreamHandler(sys.stdout))
log.setLevel(logging.DEBUG)
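This works because records propagate upward from child loggers to their ancestors, so one handler on the requests_oauthlib logger catches everything the package's modules log. A standalone sketch of the mechanism (no actual OAuth call involved; the child logger name is illustrative, matching the package's oauth2_session module):

import logging
import sys

log = logging.getLogger('requests_oauthlib')
log.addHandler(logging.StreamHandler(sys.stdout))
log.setLevel(logging.DEBUG)

# a record logged to any child logger travels up to 'requests_oauthlib' and is handled there
logging.getLogger('requests_oauthlib.oauth2_session').debug('visible on stdout')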

python logs get suppressed

My Tornado application uses some legacy modules written many years back. Those modules are configured to log things with the root logger. The issue I am facing is that whenever I import those modules, the logs printed by Tornado (i.e. tornado.access, tornado.application, etc.) get suppressed.
Here is how I invoke my server
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Basic run script"""
from zmq.eventloop import ioloop
ioloop.install()

import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
import tornado.autoreload
from tornado.options import options

from grace_server.application import MyApplication
from settings import settings

def main():
    app = MyApplication(settings)
    app.listen(options.port)
    tornado.ioloop.IOLoop.current().start()

if __name__ == "__main__":
    main()
Here is the definition of the tornado.Application subclass:
import collections, zmq, os
import logging, re
import pickle, json
from datetime import datetime
from functools import partial
from zmq.eventloop.zmqstream import ZMQStream
from zmq.eventloop import ioloop
from tornado import web
from tornado.log import LogFormatter, app_log, access_log, gen_log
from jupyter_client import MultiKernelManager
from legacy_module import api
from legacy_module.util.utils import get_env
from urls import url_patterns

ioloop = ioloop.IOLoop.current()

class MyApplication(web.Application):
    def __init__(self, settings):
        self.init_logging()
        self.connections = collections.defaultdict(list)
        self.kernels = {}
        self.listen_logs()
        web.Application.__init__(self, url_patterns, **settings)

    def init_logging(self):
        self.logger = logging.getLogger('MyApplication')
        self.logger.setLevel(logging.DEBUG)

    def broadcast_message(self, message):
        connections = self.connections.keys()
        for conn in connections:
            conn.write_message(message)

    def multicast_message(self, filter_, message):
        connections = self.connections.keys()
        connections = filter(connections)
        for conn in connections:
            conn.write_message(message)
    ...
This is how logging is configured in my legacy_module:
import os, json
import logging, logging.config
from contextlib import contextmanager
from kombu import Connection
from terminaltables import AsciiTable
from legacy_module import resources
from legacy_module.resources.gredis import redis_tools
from legacy_module.core import versioning
from legacy_module.util.utils import get_logger_container, get_env
from legacy_module.resources.databases.mongo import MongoDatabaseCollection

DB_COLLECTION_OBJECT = MongoDatabaseCollection()
LOGGING_FILE = os.path.join(os.environ['legacy_module_HOME'], 'config', 'logging.config')
logging.config.fileConfig(LOGGING_FILE)
LOGGER = logging.getLogger()
...
This is how logging.config looks.
[loggers]
keys = root
[handlers]
keys = consoleHandler
[formatters]
keys = simpleFormatter
[logger_root]
level = DEBUG
handlers = consoleHandler
[handler_consoleHandler]
class = StreamHandler
level = DEBUG
formatter = simpleFormatter
args = (sys.stdout,)
[formatter_simpleFormatter]
format = %(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt =
This is what the logs normally look like:
2017-09-28 02:40:03,409 MyApplication DEBUG init_logging done
2017-09-28 02:40:13,018 MyApplication DEBUG Authenticating
But when I comment out the legacy_module import in MyApplication, I can see the tornado.access logs:
2017-09-28 02:40:03,409 MyApplication DEBUG init_logging done
2017-09-28 02:40:13,017 tornado.access INFO 304 GET / (172.20.20.3) 1.79ms
2017-09-28 02:40:14,264 tornado.access INFO 304 GET /api/login (172.20.20.3) 0.75ms
2017-09-28 02:40:13,018 MyApplication DEBUG Authenticating
So the logging configuration of my legacy_module is somehow suppressing the logs from Tornado.
How can I fix this? I need those logs.
First, in your legacy_module, remove the logging.config.fileConfig(LOGGING_FILE) call and replace LOGGER = logging.getLogger() with LOGGER = logging.getLogger(__name__).
Then you may want to make sure you have at least the root logger properly configured (I don't know what Tornado gives you for logging config, so check the docs).
As a more general note: this logging configuration in a library module is a perfect example of a logging antipattern - the whole point of the logging package is to decouple logger use (from within library code) from logging configuration, which should be left to the application using the library and should be configurable per application instance. FWIW, note that your own MyApplication.init_logging() is also an antipattern - you shouldn't hardcode the logger's level in your code; this should be done with per-instance configuration (cf. how Django uses the settings module to configure logging).
Update:
I'd have to dig into Tornado's code to give you an exact, detailed answer, but obviously the logging.config.fileConfig() call in your legacy_module overrides Tornado's own configuration.
are my configs done in init_logging overridden by the root logger?
The only thing you currently "configure" (and which you shouldn't) in init_logging is the "MyApplication" logger's level; this has no impact on which logging handler is used (=> where your logs are sent) etc.
how can I prevent them?
This was the very first part of my answer...
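A minimal sketch of the suggested change, under the assumption that the legacy module looks like the excerpt above:

# in legacy_module: drop the logging.config.fileConfig(LOGGING_FILE) call entirely
import logging

LOGGER = logging.getLogger(__name__)  # a named logger; no handlers or levels set here

# in the application's entry point (the run script), configure logging once,
# reusing the format string from the old logging.config
import logging
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')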

Avoid singleton pattern in logging module

For example, I have a script test1.py with code like this:
import logging
from logging.handlers import RotatingFileHandler

import some_module

handler = RotatingFileHandler('TEST1.log', maxBytes=18000, backupCount=7)
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logging.getLogger("some_module").addHandler(handler)

def do_something():
    some_module.do_smth()

do_something()
And I have another script test2.py with code like this:
import logging
from logging.handlers import RotatingFileHandler

import some_module

handler = RotatingFileHandler('TEST2.log', maxBytes=18000, backupCount=7)
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logging.getLogger("some_module").addHandler(handler)

def do_something():
    some_module.do_smth_else()

do_something()
Then I import both scripts in a file app.py, which may call either of them for various reasons.
The problem is that all log messages for some_module that originate from test1.py are written to both log files, TEST1.log and TEST2.log.
As I understand it, the problem is the singleton-like behavior of the logging module: it is effectively global for all scripts running in the same process. So when I import test1.py into app.py, it adds a handler to some_module's logger, and when I then import test2.py, it adds another handler, so that logger ends up with 2 handlers.
Is there a way to add handlers for this module separately, so that all debug messages triggered by test1.py are written to TEST1.log but not to TEST2.log?
UPDATE:
In my case I am trying to do it with this module, and it doesn't seem to work:
logging.getLogger("TeleBot.test1").setLevel(logging.DEBUG)
logging.getLogger("TeleBot.test1").addHandler(handler)
Nothing is written to my log file, but if I simply do:
logging.getLogger("TeleBot").setLevel(logging.DEBUG)
logging.getLogger("TeleBot").addHandler(handler)
it works - but, as I mentioned in the question, it writes debug messages to all files.
So, is it a bug in this particular module?
Doing logging.getLogger("some_module") in both files returns the same Logger object, as you have already observed.
To get a separate Logger in each file simply provide a different name in getLogger() each time.
E.g. in test1.py
logging.getLogger("some_module.test1").addHandler(handler)
and in test2.py
logging.getLogger("some_module.test2").addHandler(handler)
