How to use loguru with standard loggers? - python

I would like to use Loguru to intercept loggers from other modules.
Could anyone of you tell how to approach this topic, please?
Example:
import logging
import requests
from loguru import logger
logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)
logger_requests = logging.getLogger('requests')
logger_requests.setLevel(logging.DEBUG)
logger.debug('Message through loguru')
requests.get('https://stackoverflow.com')
Execution:
$ python test_logger.py > /dev/null
2021-03-23 19:35:27.141 | DEBUG | __main__:<module>:10 - Message through loguru
DEBUG:Starting new HTTPS connection (1): stackoverflow.com:443
DEBUG:https://stackoverflow.com:443 "GET / HTTP/1.1" 200 None

Answering explicitly...
You want to redirect requests logging through loguru. As mentioned in the comments, you can use:
logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)
However, it's also worth mentioning that decorating functions with logger.catch() will vastly improve error tracebacks should anything happen during script execution.
import logging
import sys

import requests
from loguru import logger


class InterceptHandler(logging.Handler):
    """
    Add logging handler to augment python stdlib logging.

    Logs which would otherwise go to stdlib logging are redirected through
    loguru.
    """

    @logger.catch(default=True, onerror=lambda _: sys.exit(1))
    def emit(self, record):
        # Get corresponding Loguru level if it exists.
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        # Find caller from where the logged message originated.
        frame, depth = sys._getframe(6), 6
        while frame and frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage())


##########################################################################
# The logger.catch() decorator improves error tracebacks
# ^^^^^^^^^^^^^^
##########################################################################
@logger.catch(default=True, onerror=lambda _: sys.exit(1))
def requests_http_get(url=None):
    # force=True replaces the stdlib handler installed by the first basicConfig call,
    # so every stdlib record is routed through loguru instead.
    logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)
    logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)
    logger_requests = logging.getLogger('requests')
    logger_requests.setLevel(logging.DEBUG)
    logger.debug('Message through loguru')
    requests.get(url)


if __name__ == "__main__":
    requests_http_get("https://stackoverflow.com/")
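Once stdlib records are intercepted this way, they travel through loguru like any other message, so any additional sink configured with logger.add() receives them as well. A minimal sketch (the file name below is illustrative, not part of the original answer):
from loguru import logger

# Any sink added here also receives the intercepted requests/urllib3 records.
logger.add("requests_debug.log", level="DEBUG")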

Related

Python logging messages not showing up due to imports

I have trouble with the logging module. I searched through other answers but can't find a suitable solution.
My code:
import logging


def configure_logging():
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s | [%(levelname)s] | %(message)s | function: %(funcName)s",
    )


if __name__ == "__main__":
    configure_logging()
    # Logging message
    logging.info("Test")
    # Other stuff here
When run, this correctly outputs what is expected. Whenever I add another import like from mypackage.mymodule import myfunction, the logging output is no longer displayed.
I tried to look for a pattern: as far as I checked, none of the imported modules imports the logging module, for instance. On the other hand, importing common libraries (such as numpy or pandas) does not make the issue appear.
An example of import that breaks the logging is the following:
import logging

# Suspicious import
from settings.constants import QUESTIONS_INFO


def configure_logging():
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s | [%(levelname)s] | %(message)s | function: %(funcName)s",
    )


if __name__ == "__main__":
    configure_logging()
    # Logging message
    logging.info("Test")
    # Other stuff here
where settings/constants.py is the following:
from inputoutput.yaml import read_yaml
QUESTIONS_DEFINTION = read_yaml(f"settings/questions_definition.yml")
QUESTIONS_INFO = QUESTIONS_DEFINTION["questions"]
and where inputoutput/yaml.py is the following:
import logging

import yaml


def read_yaml(file_path):
    try:
        with open(file_path) as file:
            data = yaml.safe_load(file)
            file.close()
        logging.debug(f"Yaml file: {file_path} loaded")
        return data
    except Exception as message:
        logging.error(f"Impossible to load the file: {file_path}")
        logging.error(f"Error: {message}")
        return None
Is there maybe some other argument that I need to add to the basicConfig call? Or a way to correctly set up and call the logger here in main and in imported submodules?
It is very likely that something you import calls basicConfig, or adds a handler to the root logger some other way (calling logging.debug() at import time does that, for example). Once the root logger has a handler, basicConfig has no effect. You can override this behaviour by setting the force arg:
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s | [%(levelname)s] | %(message)s | function: %(funcName)s",
    force=True,  # <-- HERE
)
This will remove any previously added handlers from the root logger. A better way to solve this, though, is to set up your logging as early as possible, so that you never run into the problem of having to remove already-configured handlers.
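To see the mechanism in isolation, here is a minimal sketch (the module names are made up for illustration). An import-time call to one of the logging convenience functions silently installs a root handler at the default WARNING level, so a later basicConfig without force=True is ignored:
# constants_demo.py (hypothetical stand-in for settings.constants)
import logging

logging.debug("loaded")  # implicitly calls basicConfig(): root logger now has a handler, level WARNING

# main_demo.py (hypothetical)
import logging

import constants_demo  # the import above already configured the root logger

logging.basicConfig(level=logging.INFO)              # ignored: a root handler already exists
logging.info("lost")                                 # suppressed, root level is still WARNING

logging.basicConfig(level=logging.INFO, force=True)  # removes the old handler and reconfigures
logging.info("shown")                                # now printed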

twisted logging to screen(stdout) not working

I have this small program taken from here:
from twisted.logger import Logger

log = Logger()


def handleData(data):
    log.debug("Got data: {data!r}.", data=data)


handleData({'a': 20})
This does not print anything to the screen. Why is that?
The default Python logger is set to WARN level, so DEBUG messages are suppressed. You can make that code work like this:
import logging

from twisted.logger import Logger

log = Logger()
log.setLevel(logging.DEBUG)


def handleData(data):
    log.debug("Got data: {data!r}.", data=data)


handleData({'a': 20})
I figured it out from here: https://github.com/moira-alert/worker/blob/master/moira/logs.py
import sys

from twisted.logger import (
    FilteringLogObserver,
    Logger,
    LogLevel,
    LogLevelFilterPredicate,
    globalLogPublisher,
    textFileLogObserver,
)

log = Logger()

level = LogLevel.debug
predicate = LogLevelFilterPredicate(defaultLogLevel=level)
observer = FilteringLogObserver(textFileLogObserver(sys.stdout), [predicate])
observer._encoding = "utf-8"
globalLogPublisher.addObserver(observer)
log.info("Start logging with {l}", l=level)


def handleData(data):
    log.debug("Got data: {data!r}.", data=data)


handleData({'a': 20})
Is there any simpler way? It seems overly complicated just to set the log level.
You didn't add an observer for your logger object.
Here is a simple observer that prints the log to stdout:
import sys
from twisted.logger import Logger, eventAsText, FileLogObserver
log = Logger()
log.observer.addObserver(FileLogObserver(sys.stdout, lambda e: eventAsText(e) + "\n"))
someData = 2
log.debug("Got data: {data!r}", data=someData)

logging with twisted not printing to the screen

This question is somewhat related to: twisted logging to screen (stdout) not working.
I want to put logs on the screen using the twisted logger. It works when a string is passed to the log methods, but when Python objects are passed, as mentioned in the linked document, it does not work (the log statement on the last line of the code below).
import logging
from twisted.logger import Logger, LogLevel
import sys
from twisted.logger import globalLogPublisher
from twisted.logger import textFileLogObserver
from twisted.logger import FilteringLogObserver, LogLevelFilterPredicate, LogLevel
log = Logger()
level = LogLevel.debug
predicate = LogLevelFilterPredicate(defaultLogLevel=level)
observer = FilteringLogObserver(textFileLogObserver(sys.stdout), [predicate])
globalLogPublisher.addObserver(observer)
#---------> This works
log.info("Start logging with {l}", l=level)
#---------> This does not
log.debug(data=log)
According to the source https://github.com/twisted/twisted/blob/twisted-16.3.0/twisted/logger/_logger.py,
both .debug and .info call the same def emit(self, level, format=None, **kwargs):
in the case of info it's self.emit(LogLevel.info, format, **kwargs), and debug calls self.emit(LogLevel.debug, format, **kwargs).
So if you want your log.debug call to work properly, you should stick to the format argument and call it something like:
log.debug('debug with {obj}', obj=log)
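Putting the observer setup from the question together with the corrected call, a minimal runnable sketch:
import sys

from twisted.logger import (
    FilteringLogObserver,
    Logger,
    LogLevel,
    LogLevelFilterPredicate,
    globalLogPublisher,
    textFileLogObserver,
)

log = Logger()
predicate = LogLevelFilterPredicate(defaultLogLevel=LogLevel.debug)
observer = FilteringLogObserver(textFileLogObserver(sys.stdout), [predicate])
globalLogPublisher.addObserver(observer)

# The first positional argument is always the format string; objects go in as keyword arguments.
log.debug("debug with {obj}", obj=log)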

Python - Logging from multiple modules using structlog

I am trying to use structlog to log to a file and then use filebeat to send the log to my logging service.
I have made everything work, but I would like to be able to use the same logger across multiple modules, like with Python's default logger (see https://docs.python.org/2/howto/logging.html, section "Logging from multiple modules").
One of the reasons is that I would like to bind a sessionID to my log output that should be logged across all modules called from this session.
I probably need some fundamental knowledge on how to use structlog, but I haven't found the answer in their documentation or in other posts.
Please advise.
An example:
main.py
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
import uuid
from mylogger import myLogger
import otherModule
myLogger = myLogger()
myLogger.log.warning('working', error='test')
myLogger.log = myLogger.log.bind(sessionID=str(uuid.uuid4()))
myLogger.log.warning('Event where sessionID is bound to logger', error='test')
otherModule = otherModule.otherModule()
myLogger.py
#!/usr/bin/python3
# -*- coding: UTF-8 -*-
import datetime, logging

from structlog import wrap_logger, get_logger
from structlog.processors import JSONRenderer
from structlog.stdlib import filter_by_level, add_log_level


class myLogger():
    def __init__(self, loggername="root", logLevel='INFO', logFile='test2.log'):
        self.log = wrap_logger(
            logging.getLogger('root'),
            processors=[
                filter_by_level,
                add_log_level,
                self.add_timestamp,
                JSONRenderer(indent=1, sort_keys=True)
            ]
        )
        logging.basicConfig(format='%(message)s', level=logLevel, filename=logFile)

    def my_get_logger(self, loggername="AnyLogger"):
        log = get_logger(loggername)
        return log

    def add_timestamp(self, _, __, event_dict):
        event_dict['timestamp'] = datetime.datetime.utcnow().isoformat()
        return event_dict
otherModule.py
import structlog

from mylogger import myLogger


class otherModule():
    def __init__(self):
        logger = structlog.get_logger('root')
        ## This logger does not have the processors nor the bound sessionID
        logger.warning('In other module')
        ## This log message is printed to console

        logger2 = myLogger()
        ## This logger has all the processors but not the bound sessionID
        logger2.log.warning('In other module')
        ## This log is written to my logfile, but without the sessionID
You need to use a wrapped dict class as the context class, as explained in the structlog documentation. So you will end up with something like this:
structlog.configure(
    processors=[
        structlog.stdlib.filter_by_level,
        # other processors
    ],
    context_class=structlog.threadlocal.wrap_dict(dict),
)
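With a thread-local context class, values bound in one module are visible to loggers obtained in other modules within the same thread, which is what the sessionID use case needs. A minimal sketch (module names are illustrative and the processor list is trimmed to generic ones; newer structlog releases recommend the contextvars-based API over threadlocal):
# main.py
import uuid

import structlog

structlog.configure(
    processors=[
        structlog.stdlib.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer(),
    ],
    context_class=structlog.threadlocal.wrap_dict(dict),
)

log = structlog.get_logger()
log = log.bind(sessionID=str(uuid.uuid4()))  # stored in the shared thread-local dict
log.warning("session started")

# other_module.py (hypothetical)
import structlog

log = structlog.get_logger()
log.warning("sessionID appears here too")  # picks up the shared thread-local context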

python logs get suppressed

My tornado application is using some legacy modules written many years back. Those modules are configured to log things with the root logger. The issue I am facing is that whenever I import those modules, the logs printed by tornado (i.e. tornado.access, tornado.application, etc.) get suppressed.
Here is how I invoke my server
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Basic run script"""
from zmq.eventloop import ioloop
ioloop.install()
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
import tornado.autoreload
from tornado.options import options
import tornado.web
from grace_server.application import MyApplication
from settings import settings


def main():
    app = MyApplication(settings)
    app.listen(options.port)
    tornado.ioloop.IOLoop.current().start()


if __name__ == "__main__":
    main()
Here is the definition of the tornado.Application
import collections, zmq, os
import logging, re
import pickle, json
from datetime import datetime
from functools import partial
from zmq.eventloop.zmqstream import ZMQStream
from zmq.eventloop import ioloop
from tornado import web
from tornado.log import LogFormatter, app_log, access_log, gen_log
from jupyter_client import MultiKernelManager
from legacy_module import api
from legacy_module.util.utils import get_env
from urls import url_patterns

ioloop = ioloop.IOLoop.current()


class MyApplication(web.Application):
    def __init__(self, settings):
        self.init_logging()
        self.connections = collections.defaultdict(list)
        self.kernels = {}
        self.listen_logs()
        web.Application.__init__(self, url_patterns, **settings)

    def init_logging(self):
        self.logger = logging.getLogger('MyApplication')
        self.logger.setLevel(logging.DEBUG)

    def broadcast_message(self, message):
        connections = self.connections.keys()
        for conn in connections:
            conn.write_message(message)

    def multicat_message(self, filter_, message):
        connections = self.connections.keys()
        connections = filter(connections)
        for conn in connections:
            conn.write_message(message)
...
...
...
This is how logging is configured in my legacy_module
import os, json
import logging, logging.config
from contextlib import contextmanager
from kombu import Connection
from terminaltables import AsciiTable
from legacy_module import resources
from legacy_module.resources.gredis import redis_tools
from legacy_module.core import versioning
from legacy_module.util.utils import get_logger_container, get_env
from legacy_module.resources.databases.mongo import MongoDatabaseCollection
DB_COLLECTION_OBJECT = MongoDatabaseCollection()
LOGGING_FILE = os.path.join(os.environ['legacy_module_HOME'], 'config', 'logging.config')
logging.config.fileConfig(LOGGING_FILE)
LOGGER = logging.getLogger()
...
...
...
This is how logging.config looks.
[loggers]
keys = root
[handlers]
keys = consoleHandler
[formatters]
keys = simpleFormatter
[logger_root]
level = DEBUG
handlers = consoleHandler
[handler_consoleHandler]
class = StreamHandler
level = DEBUG
formatter = simpleFormatter
args = (sys.stdout,)
[formatter_simpleFormatter]
format = %(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt =
This is what the normal logs look like:
2017-09-28 02:40:03,409 MyApplication DEBUG init_logging done
2017-09-28 02:40:13,018 MyApplication DEBUG Authenticating
But when I comment out the legacy_module import in MyApplication, I can see the tornado.access logs:
2017-09-28 02:40:03,409 MyApplication DEBUG init_logging done
2017-09-28 02:40:13,017 tornado.access INFO 304 GET / (172.20.20.3) 1.79ms
2017-09-28 02:40:14,264 tornado.access INFO 304 GET /api/login (172.20.20.3) 0.75ms
2017-09-28 02:40:13,018 MyApplication DEBUG Authenticating
So the logging configuration of my legacy_module is somehow suppressing the logs from tornado.
How can I fix this? I need those logs.
First, in your legacy_module, remove the logging.config.fileConfig(LOGGING_FILE) call and replace LOGGER = logging.getLogger() with LOGGER = logging.getLogger(__name__).
Then you may want to make sure you have at least the root logger properly configured (I don't know what tornado provides for logging config, so check the docs).
As a more general note: this logging configuration in a library module is a perfect example of a logging antipattern - the whole point of the logging package is to decouple a logger's use (from within library code) from the logging config, which should be left to the application using the library and should be configurable per application instance. FWIW, note that your own MyApplication.init_logging() is also an antipattern - you shouldn't hardcode the logger's level in your code; this should be done with per-instance config (cf. how django uses the settings module to configure logging).
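A minimal sketch of the library-side pattern suggested above (the file name is illustrative): the module only creates a named logger and leaves all configuration to the application; the NullHandler is optional and simply silences "no handler could be found" warnings for users who don't configure logging.
# legacy_module/some_module.py (illustrative)
import logging

LOGGER = logging.getLogger(__name__)      # named after the module; no fileConfig() here
LOGGER.addHandler(logging.NullHandler())  # optional: keep quiet if the app configures nothing


def do_work():
    LOGGER.debug("doing work")            # uses whatever config the application set up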
Update:
I'd have to dig into tornado's code to give you an exact, detailed answer, but obviously the logging.config.fileConfig() call in your legacy_module overrides tornado's own configuration.
Are my configs done in init_logging overridden by the root logger?
The only thing you currently "configure" (and which you shouldn't) in init_logging is the "MyApplication" logger's level; this has no impact on which logging handler is used (=> where your logs are sent) etc.
How can I prevent this?
This was the very first part of my answer...
