My tornado application uses some legacy modules written many years back. Those modules are configured to log things with the root logger. The issue I am facing is that whenever I import those modules, the logs printed by tornado (i.e. tornado.access, tornado.application, etc.) get suppressed.
Here is how I invoke my server
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Basic run script"""
from zmq.eventloop import ioloop
ioloop.install()
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
import tornado.autoreload
from tornado.options import options
from grace_server.application import MyApplication
from settings import settings
def main():
    app = MyApplication(settings)
    app.listen(options.port)
    tornado.ioloop.IOLoop.current().start()

if __name__ == "__main__":
    main()
Here is the definition of the tornado.Application
import collections, zmq, os
import logging, re
import pickle, json
from datetime import datetime
from functools import partial
from zmq.eventloop.zmqstream import ZMQStream
from zmq.eventloop import ioloop
from tornado import web
from tornado.log import LogFormatter, app_log, access_log, gen_log
from jupyter_client import MultiKernelManager
from legacy_module import api
from legacy_module.util.utils import get_env
from urls import url_patterns
ioloop = ioloop.IOLoop.current()
class MyApplication(web.Application):
    def __init__(self, settings):
        self.init_logging()
        self.connections = collections.defaultdict(list)
        self.kernels = {}
        self.listen_logs()
        web.Application.__init__(self, url_patterns, **settings)

    def init_logging(self):
        self.logger = logging.getLogger('MyApplication')
        self.logger.setLevel(logging.DEBUG)

    def broadcast_message(self, message):
        connections = self.connections.keys()
        for conn in connections:
            conn.write_message(message)

    def multicast_message(self, filter_, message):
        connections = self.connections.keys()
        # filter() takes the predicate first, then the iterable.
        connections = filter(filter_, connections)
        for conn in connections:
            conn.write_message(message)
    ...
    ...
    ...
This is how logging is configured in my legacy_module
import os, json
import logging, logging.config
from contextlib import contextmanager
from kombu import Connection
from terminaltables import AsciiTable
from legacy_module import resources
from legacy_module.resources.gredis import redis_tools
from legacy_module.core import versioning
from legacy_module.util.utils import get_logger_container, get_env
from legacy_module.resources.databases.mongo import MongoDatabaseCollection
DB_COLLECTION_OBJECT = MongoDatabaseCollection()
LOGGING_FILE = os.path.join(os.environ['legacy_module_HOME'], 'config', 'logging.config')
logging.config.fileConfig(LOGGING_FILE)
LOGGER = logging.getLogger()
...
...
...
This is how logging.config looks.
[loggers]
keys = root
[handlers]
keys = consoleHandler
[formatters]
keys = simpleFormatter
[logger_root]
level = DEBUG
handlers = consoleHandler
[handler_consoleHandler]
class = StreamHandler
level = DEBUG
formatter = simpleFormatter
args = (sys.stdout,)
[formatter_simpleFormatter]
format = %(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt =
This is what the normal logs look like:
2017-09-28 02:40:03,409 MyApplication DEBUG init_logging done
2017-09-28 02:40:13,018 MyApplication DEBUG Authenticating
But when I comment out the import of legacy_module from MyApplication, I can see the tornado.access logs:
2017-09-28 02:40:03,409 MyApplication DEBUG init_logging done
2017-09-28 02:40:13,017 tornado.access INFO 304 GET / (172.20.20.3) 1.79ms
2017-09-28 02:40:14,264 tornado.access INFO 304 GET /api/login (172.20.20.3) 0.75ms
2017-09-28 02:40:13,018 MyApplication DEBUG Authenticating
So the logging configuration of my legacy_module is somehow suppressing the logs from tornado.
How can I fix this? I need these logs.
First, in your legacy module, remove the logging.config.fileConfig(LOGGING_FILE) call and replace LOGGER = logging.getLogger() with LOGGER = logging.getLogger(__name__).
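As a minimal sketch (only the logging-related lines of the legacy module are shown; the rest of its imports and code stay as they are), that change would look roughly like this:
import logging

# No logging.config.fileConfig() call here any more - a library module
# should not decide where or how log records are emitted.
LOGGER = logging.getLogger(__name__)  # e.g. "legacy_module.something"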
Then you may want to make sure you have at least the root logger properly configured in the application itself (I don't know what tornado gives you for logging configuration, so check the tornado docs).
As a more general note: this logging configuration in a library module is a perfect example of a logging antipattern - the whole point of the logging package is to decouple the use of loggers (from within library code) from the logging configuration, which should be left to the application using the library and should be configurable per application instance. FWIW, note that your own MyApplication.init_logging() is also an antipattern - you shouldn't hardcode the logger's level in your code; this should be done through per-instance configuration (cf. how django uses the settings module to configure logging).
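For illustration, here is a rough sketch of configuring logging at the application level instead, in the run script (the level and format are placeholders, and the zmq ioloop setup is omitted; adapt both to your settings module):
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Basic run script (sketch: logging configured by the application)"""
import logging

import tornado.ioloop
from tornado.options import options

from grace_server.application import MyApplication
from settings import settings

def main():
    # Configure the root logger once, in the entry point, so that
    # tornado.access / tornado.application records and the legacy
    # module's records all end up on a visible handler.
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    )
    app = MyApplication(settings)
    app.listen(options.port)
    tornado.ioloop.IOLoop.current().start()

if __name__ == "__main__":
    main()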
Update:
I'd have to dig into tornado's code to give you an exact, detailed answer, but obviously the logging.config.fileConfig() call in your legacy module overrides tornado's own configuration.
Are my configs done in init_logging overridden by the root logger?
The only thing you currently "configure" (and which you shouldn't) in init_logging is the "MyApplication" logger's level; this has no impact on which logging handler is used (=> where your logs are sent), etc.
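Purely as an illustration of per-instance configuration (the 'log_level' settings key below is an assumption, not something from your code), init_logging could take the level from the settings passed to MyApplication instead of hardcoding it, with self.init_logging(settings) called from __init__:
def init_logging(self, settings):
    self.logger = logging.getLogger('MyApplication')
    # 'log_level' is a hypothetical settings key; use whatever your
    # settings module actually provides.
    self.logger.setLevel(settings.get('log_level', logging.INFO))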
How can I prevent that?
This was the very first part of my answer...
Related
I would like to use Loguru to intercept loggers from other modules.
Could anyone tell me how to approach this topic, please?
Example:
import logging
import requests
from loguru import logger
logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)
logger_requests = logging.getLogger('requests')
logger_requests.setLevel(logging.DEBUG)
logger.debug('Message through loguru')
requests.get('https://stackoverflow.com')
Execution:
$ python test_logger.py > /dev/null
2021-03-23 19:35:27.141 | DEBUG | __main__:<module>:10 - Message through loguru
DEBUG:Starting new HTTPS connection (1): stackoverflow.com:443
DEBUG:https://stackoverflow.com:443 "GET / HTTP/1.1" 200 None
Answering explicitly...
You want to redirect requests logging through loguru. As mentioned in the comments, you can use:
logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)
However, it's also worth mentioning that decorating functions with logger.catch() will vastly improve error tracebacks should anything happen during script execution.
import logging
import sys

import requests
from loguru import logger


class InterceptHandler(logging.Handler):
    """
    Add logging handler to augment python stdlib logging.

    Logs which would otherwise go to stdlib logging are redirected through
    loguru.
    """

    @logger.catch(default=True, onerror=lambda _: sys.exit(1))
    def emit(self, record):
        # Get corresponding Loguru level if it exists.
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        # Find caller from where originated the logged message.
        frame, depth = sys._getframe(6), 6
        while frame and frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage())


##########################################################################
# The logger.catch() decorator improves error tracebacks
#     ^^^^^^^^^^^^^^
##########################################################################
@logger.catch(default=True, onerror=lambda _: sys.exit(1))
def requests_http_get(url=None):
    logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)
    # force=True replaces the handler installed by the basicConfig call above.
    logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)
    logger_requests = logging.getLogger('requests')
    logger_requests.setLevel(logging.DEBUG)
    logger.debug('Message through loguru')
    requests.get(url)


if __name__ == "__main__":
    requests_http_get("https://stackoverflow.com/")
I am getting duplicate (double) logs when using Python logging. I have 3 files:
1. main.py
2. dependencies.py
3. resources.py
I am making only one call to the Python logger constructor, which is done inside main.py.
Following are my import statements in the 3 files
main.py
import xml.etree.ElementTree as et
from configparser import ConfigParser
from Craftlogger import Craftlogger
logger = Craftlogger().getLogger()
dependencies.py
import os,sys
from main import getJobDetails,postRequest,logger
from configparser import ConfigParser
resources.py
import os,sys
import xml.etree.ElementTree as et
And inside the main method in the main.py, I have the imports
def main():
    from resources import getResourceDetails,setResources
    from dependencies import setDependencies
    ..... Remaining code .....
My logging file looks like this
import logging

class Craftlogger:
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        handler = logging.StreamHandler()
        formatter_string = '%(asctime)s | %(levelname)-8s | %(filename)s-%(funcName)s-%(lineno)04d | %(message)s'
        formatter = logging.Formatter(formatter_string)
        handler.setFormatter(formatter)
        self.logger.addHandler(handler)
        self.logger.setLevel(logging.DEBUG)
        self.logger.propagate = False

    def getLogger(self):
        return self.logger
Note: I had to do the imports inside main to deal with the circular imports.
My guess would be that two CraftLogger objects exist and both have the same self.logger member. logging.getLogger(__name__) probably returns the same object for another CraftLogger object, resulting in two addHandler calls on the same logger. This is just a guess, no guarantee.
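If that guess is right, a minimal way to confirm and avoid it without restructuring anything is to attach the handler only once (this reuses your Craftlogger code; the guard around the handler setup is the only addition):
import logging

class Craftlogger:
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        # getLogger(__name__) returns the same logger object every time it is
        # called with the same name, so only attach a handler if none exists
        # yet; otherwise each Craftlogger() adds one more handler and every
        # record is emitted once per handler (hence the duplicate lines).
        if not self.logger.handlers:
            handler = logging.StreamHandler()
            handler.setFormatter(logging.Formatter(
                '%(asctime)s | %(levelname)-8s | %(filename)s-%(funcName)s-%(lineno)04d | %(message)s'))
            self.logger.addHandler(handler)
            self.logger.setLevel(logging.DEBUG)
            self.logger.propagate = False

    def getLogger(self):
        return self.logger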
Logging is a cross-cutting concern. As such, I frown upon classes which set up logging on their own. The responsibility for configuring logging (especially handlers) should lie solely with the main executing function, e.g. your main function. No submodule / class / function should modify logging, except for getting a logger via logging.getLogger(name).
This avoids most of these pitfalls and allows easy composition of modules.
Imagine you had to import two modules which both modify the logging system... fun.
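A rough sketch of that split, assuming main.py is the only entry point (the file boundaries are marked with comments; the function bodies are illustrative, not from your project):
# --- dependencies.py (sketch): only ever fetch a logger, never configure it ---
import logging

logger = logging.getLogger(__name__)

def setDependencies():
    # This module never touches handlers or levels; it only emits records.
    logger.debug("resolving dependencies")

# --- main.py (sketch): the single place where handlers, format and level are decided ---
import logging

from dependencies import setDependencies

def main():
    logging.basicConfig(
        level=logging.DEBUG,
        format='%(asctime)s | %(levelname)-8s | %(name)s | %(message)s',
    )
    setDependencies()

if __name__ == "__main__":
    main()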
I'm trying to enable logging to stdout for requests_oauthlib. The example in the docs suggests this:
# Uncomment for detailed oauthlib logs
#import logging
#import sys
#log = logging.getLogger('oauthlib')
#log.addHandler(logging.StreamHandler(sys.stdout))
#log.setLevel(logging.DEBUG)
But it doesn't seem to have any effect. What's the proper way to do it?
The logger name should be requests_oauthlib, i.e. the package name. The modules in the package define their loggers like this
logger = logging.getLogger(__name__)
so configuring the requests_oauthlib logger as described in the example should work:
import logging
import sys
log = logging.getLogger('requests_oauthlib')
log.addHandler(logging.StreamHandler(sys.stdout))
log.setLevel(logging.DEBUG)
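If you also want the lower-level oauthlib messages that the docs snippet referred to, you can additionally (this is an option, not a requirement) enable that package's logger the same way:
import logging
import sys

handler = logging.StreamHandler(sys.stdout)
for name in ('requests_oauthlib', 'oauthlib'):
    # Each package logs under its own top-level name, so enable both.
    pkg_log = logging.getLogger(name)
    pkg_log.addHandler(handler)
    pkg_log.setLevel(logging.DEBUG)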
I am trying to use Structlog to log to a file and then use filebeat to send the logs to my logging service.
I have made everything work, but I would like to be able to use the same logger across multiple modules, like with Python's default logger (see https://docs.python.org/2/howto/logging.html, section "Logging from multiple modules").
One of the reasons is that I would like to bind a sessionID to my log output that should be logged across all modules called from this session.
I probably need some fundamental knowledge of how to use structlog, but I haven't found the answer in their documentation or in other posts.
Please advise....
An example:
main.py
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
import uuid
from mylogger import myLogger
import otherModule
myLogger = myLogger()
myLogger.log.warning('working', error='test')
myLogger.log = myLogger.log.bind(sessionID=str(uuid.uuid4()))
myLogger.log.warning('Event where sessionID is bound to logger', error='test')
otherModule = otherModule.otherModule()
myLogger.py
#!/usr/bin/python3
# -*- coding: UTF-8 -*-
import datetime, logging
from structlog import wrap_logger, get_logger
from structlog.processors import JSONRenderer
from structlog.stdlib import filter_by_level, add_log_level


class myLogger():
    def __init__(self, loggername="root", logLevel='INFO', logFile='test2.log'):
        self.log = wrap_logger(
            logging.getLogger('root'),
            processors=[
                filter_by_level,
                add_log_level,
                self.add_timestamp,
                JSONRenderer(indent=1, sort_keys=True)
            ]
        )
        logging.basicConfig(format='%(message)s', level=logLevel, filename=logFile)

    def my_get_logger(self, loggername="AnyLogger"):
        log = get_logger(loggername)
        return log

    def add_timestamp(self, _, __, event_dict):
        event_dict['timestamp'] = datetime.datetime.utcnow().isoformat()
        return event_dict
otherModule.py
import structlog
from mylogger import myLogger


class otherModule():
    def __init__(self):
        logger = structlog.get_logger('root')
        ## This logger does not have the processors nor the bound sessionID
        logger.warning('In other module')
        ## This logmessage is printed to console

        logger2 = myLogger()
        ## This logger has all the processors but not the bound sessionID
        logger2.log.warning('In other module')
        ## This log is written to my logfile, but without the sessionID
You need to use a wrapped dictionary as the context class, as explained in the structlog documentation.
So you will end up with something like this:
structlog.configure(
    processors=[
        structlog.stdlib.filter_by_level,
        # other processors
    ],
    context_class=structlog.threadlocal.wrap_dict(dict),
)
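A minimal sketch of what this should buy you, assuming structlog.configure() above is called once at startup (the module boundaries and messages below are illustrative): with a thread-local wrapped dict as the context class, values bound in one module are visible to loggers obtained in other modules of the same thread.
import uuid
import structlog

# main.py (sketch): bind the session id once.
log = structlog.get_logger('root')
log = log.bind(sessionID=str(uuid.uuid4()))
log.warning('event in main')  # should include sessionID

# otherModule.py (sketch): a fresh get_logger() in the same thread
# shares the same thread-local context, so sessionID should appear here too.
other_log = structlog.get_logger('root')
other_log.warning('event in other module')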
I have a module which should do some logging:
import logging
logging.basicConfig(filename='example.log',level=logging.DEBUG)
def do_something():
    logging.info("I did something")
Now if I import the module (call it module.py) and call the function, it does not do any logging:
import module
module.do_something()
Not even a logfile is created! Where is the bug?
Sometimes you have to specify the full path of the log file. Try that. For example:
import logging
logging.basicConfig(filename='C:/workspace/logging_proj/src/example.log',level=logging.DEBUG)
or you can have Python do it for you:
import os
import logging
LOG_FILENAME = os.path.join(os.path.dirname(__file__), 'example.log')
logging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG)