Creating a custom Slack log handler doesn't work - Python

I want to create a custom Python logging Handler to send messages via Slack.
I found this package, however it's no longer being updated, so I created a very bare-bones version of it. However, it doesn't seem to work; I added a print call for debugging purposes and emit is not being invoked.
import logging
# import slacker

class SlackerLogHandler(logging.Handler):
    def __init__(self, api_key, channel):
        super().__init__()
        self.channel = channel
        # self.slacker = slacker.Slacker(api_key)

    def emit(self, record):
        message = self.format(record)
        print('works')
        # self.slacker.chat.post_message(text=message, channel=self.channel)

slack_handler = SlackerLogHandler('token', 'channel')
slack_handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
slack_handler.setFormatter(formatter)

logger = logging.getLogger('my_app')
logger.addHandler(slack_handler)

logger.info('info_try')  # 'works' is not printed and no Slack message is sent
I saw this answer and tried to also inherit from StreamHandler, but to no avail.
I think I am missing something very basic.
Edited to remove the Slack logic for ease of reproduction.

After some additional searching I found out that the logging level that is set for the handler is separate from the level that is set for the logger: a handler only sees records that the logger itself lets through, and by default the logger inherits the root logger's WARNING level, so INFO records never reach the handler.
Meaning that adding:
logger.setLevel(logging.INFO)
fixes the problem.
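For reference, a minimal sketch of the corrected setup (the Slack call is still stubbed out with a print, and 'token'/'channel' are placeholders, as in the question):

import logging

class SlackerLogHandler(logging.Handler):
    def __init__(self, api_key, channel):
        super().__init__()
        self.channel = channel

    def emit(self, record):
        # stand-in for the real Slack post; formatting happens here as before
        print(self.format(record))

slack_handler = SlackerLogHandler('token', 'channel')
slack_handler.setLevel(logging.INFO)
slack_handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))

logger = logging.getLogger('my_app')
logger.addHandler(slack_handler)
logger.setLevel(logging.INFO)  # without this, the effective level is inherited from root (WARNING)

logger.info('info_try')  # now reaches emit()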

Related

Log messages appearing twice - with and without formatting in Python Flask app

I set up logging module-wide like so:
def setup_logging(app):
    """
    Set up logging so as to include RequestId and relevant logging info
    """
    RequestID(app)
    handler = logging.StreamHandler()
    handler.setStream(sys.stdout)
    handler.propagate = False
    handler.setFormatter(
        logging.Formatter("[MHPM][%(module)s][%(funcName)s] %(levelname)s : %(request_id)s - %(message)s")
    )
    handler.addFilter(RequestIDLogFilter())  # << Add request id contextual filter
    logging.getLogger().addHandler(handler)
    logging.getLogger().setLevel(level="DEBUG")
and I use it so:
# in init.py
setup_logging(app)
# in MHPMService.py
logger = logging.getLogger(__name__)
But here's what I see on my console:
DEBUG:src.service.MHPMService:MHPMService.__init__(): initialized
[MHPM][MHPMService][__init__] DEBUG : 5106ec8e-9ffa-423d-9401-c34a92dcfa23 - MHPMService.__init__(): initialized
I only want the second type of logs in my application; how do I do this?
I reset the handlers and got the expected behaviour:
logger.handlers = []
Alternatively, swap the current handler in place:
logging.getLogger().handlers[0] = handler
instead of doing this:
logging.getLogger().addHandler(handler)
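A minimal sketch of that approach, assuming the rest of the question's setup stays the same (the request-id filter and format are shortened here for brevity):

import logging
import sys

def setup_logging(app):
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(
        logging.Formatter("[MHPM][%(module)s][%(funcName)s] %(levelname)s : %(message)s")
    )
    root = logging.getLogger()
    root.handlers = [handler]      # replace the handler(s) that were already attached
    root.setLevel(logging.DEBUG)   # the root level still controls what gets through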

Using same logger to separate file output with python logging

With the following code I write info logs to info.log and error logs to error.log:
import logging
logger_info = logging.getLogger('info')
logger_err = logging.getLogger('err')
logger_info.setLevel(logging.INFO)
logger_err.setLevel(logging.WARNING)
info_file_handler = logging.FileHandler('info.log')
error_file_handler = logging.FileHandler('error.log')
logger_info.addHandler(info_file_handler)
logger_err.addHandler(error_file_handler)
logger_info.info('info test')
logger_err.error('error test')
Now I'm using 2 loggers: logger_err and logger_info.
Can I merge those 2 loggers into 1 logger, so that logger_info.info will write to info.log and logger_info.error will write to error.log?
It is uncommon, because logging usually only keeps messages whose severity is at or above a threshold, but it is possible by using 2 handlers and a custom filter:
you attach a handler to the logger with a level of ERROR and make it write to the error.log file
you attach a second handler to the same logger with a level of INFO and make it write to the info.log file
you add a custom filter to that second handler to reject messages with a level higher than INFO
Demo:
import logging

class RevFilter:
    """A filter to reject messages ABOVE a maximum level"""
    def __init__(self, maxLev):
        self.maxLev = maxLev

    def filter(self, record):
        return record.levelno <= self.maxLev

hinf = logging.FileHandler('/path/to/info.log')
herr = logging.FileHandler('/path/to/error.log')
herr.setLevel(logging.ERROR)
hinf.setLevel(logging.INFO)
hinf.addFilter(RevFilter(logging.INFO))

logger = logging.getLogger(__name__)
logger.addHandler(hinf)
logger.addHandler(herr)
logger.setLevel(logging.INFO)  # or lower of course...
From that point, error.log will receive messages sent by the logger at a level of ERROR or above, and info.log will only receive messages at a level of INFO, neither higher nor lower.
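For instance, with the handlers above the records split like this (a WARNING record is dropped by both handlers, which follows directly from the filter and the ERROR threshold):

logger.info('written to info.log only')
logger.error('written to error.log only')
logger.warning('dropped by both handlers: above INFO, below ERROR')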
I'm not sure I understand what you're looking to achieve, but if you simply want to log both types of messages to the same file, you can specify the same output file when you create the two FileHandlers:
import logging
logger_info = logging.getLogger('info')
logger_err = logging.getLogger('err')
logger_info.setLevel(logging.INFO)
logger_err.setLevel(logging.WARNING)
info_file_handler = logging.FileHandler('combined.log')
error_file_handler = logging.FileHandler('combined.log')
logger_info.addHandler(info_file_handler)
logger_err.addHandler(error_file_handler)
logger_info.info('info test')
logger_err.error('error test')

Python logger with custom handler emits to std

I've created a logger with a custom handler which emits messages to Telegram via a bot. It works, but for some reason the message is also emitted to stderr (or stdout).
My code:
import logging
import requests

class TelegramHandler(logging.Handler):
    def emit(self, record):
        log_entry = self.format(record)
        payload = {
            'chat_id': TELEGRAM_CHAT_ID,
            'text': log_entry,
            'parse_mode': 'HTML'
        }
        return requests.post("https://api.telegram.org/bot{token}/sendMessage".format(token=TELEGRAM_TOKEN),
                             data=payload).content

# setting up root logger
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.WARNING)

# setting up my logger
logger_bot = logging.getLogger('bot')
handler = TelegramHandler()
logger_bot.addHandler(handler)
logger_bot.setLevel(logging.DEBUG)
And the following code:
logger_bot.info('bot test')
logging.warning('root test')
results in
2019-12-06 22:24:14,401 - bot - INFO - bot test # *(plus message in telegram)*
2019-12-06 22:24:14,740 - root - WARNING - root test
I've checked the handlers:
for h in logger_bot.handlers:
    print(h)
and only one is present:
<TelegramHandler (NOTSET)>
I also noticed that when I don't set up the root logger, the bot logger doesn't emit to std. So those are somehow connected, but I can't figure out exactly what is going on.
Thank you.
You need to set the propagate attribute on the bot logger to False.
So add logger_bot.propagate = False somewhere in the setup for it, and that should cause each log to only be handled by its own handlers.
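A minimal sketch of the adjusted setup, reusing the TelegramHandler class from the question (only the propagate line is new):

import logging

logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.WARNING)

logger_bot = logging.getLogger('bot')
logger_bot.addHandler(TelegramHandler())
logger_bot.setLevel(logging.DEBUG)
logger_bot.propagate = False   # records no longer bubble up to the root logger's StreamHandler

logger_bot.info('bot test')    # goes to Telegram only
logging.warning('root test')   # still printed by the root handler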

How to create a logging package with python "logging" module for my scripts?

I'm trying to create a logging package for my Python scripts, allowing me to log my errors on an ELK server. The problem is that I can't create an instance of my Logging class that captures all the logs produced in my scripts.
In my package file I set up a handler that should allow me to retrieve all the events from my scripts. The problem is that I can't emit my log messages through the Logging class alone; I am forced to use the Python logging package in each of my modules even though the call is already made in my package.
PACKAGE LOGS
import logging

class Logging:
    def __init__(self, module, scope, title):
        self.module = module
        self.scope = scope
        self.title = title

    def format_logs(self):
        # Configuration Logging
        logging.getLogger(__name__)
        logging.basicConfig(filename=FILE, format='%(asctime)s :: %(levelname)s :: %(message)s', datefmt='%m/%d/%Y %I:%M:%S', level=logging.DEBUG)
        # Setting up the listening file
        logging.FileHandler(FILE)
EXAMPLE SCRIPTS
from packages.pkg_logs import Logging
import logging

def __init__(self, url):
    self.url = url
    self.init_log = Logging(self.url, SCRIPT, TITLE)
    self.set_handler = self.init_log.format_logs()
    logging.info("Starting script : %s on : %s" % (TITLE, url))
I'm waiting for your feedback on possible solutions.
You have to return the logger instance from your custom Log Manager.
Example:
log_manager.py
import logging

class Logger:
    def __init__(self, filename):
        # set up a file handler and your custom logging format
        handler = logging.FileHandler(filename)
        handler.setFormatter(logging.Formatter('%(asctime)s :: %(levelname)s :: %(message)s'))  # set your custom format
        self.logger = logging.getLogger(filename)
        self.logger.addHandler(handler)
        self.logger.setLevel(logging.DEBUG)

    def get_logger(self):
        return self.logger
script.py
from log_manager import Logger

logger = Logger('app.log').get_logger()  # 'app.log' is just an example path
logger.info("message")
This should work fine.
In addition, if you want to log to the same file from different parts of your project, you can make the log manager a singleton, so every module gets the same logger object and writes to the same file, as sketched below.
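A hedged sketch of that singleton idea (the logger name and default file path are illustrative, not part of the original package):

# log_manager.py
import logging

class LogManager:
    _logger = None  # shared across the whole project

    @classmethod
    def get_logger(cls, filename='app.log'):
        if cls._logger is None:
            handler = logging.FileHandler(filename)
            handler.setFormatter(logging.Formatter('%(asctime)s :: %(levelname)s :: %(message)s',
                                                   datefmt='%m/%d/%Y %I:%M:%S'))
            cls._logger = logging.getLogger('my_package')
            cls._logger.addHandler(handler)
            cls._logger.setLevel(logging.DEBUG)
        return cls._logger

# elsewhere in the project:
# from log_manager import LogManager
# logger = LogManager.get_logger()
# logger.info('message')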

Python logging caches?

I have a function in my Python package which returns a logger:
import logging
def get_logger(logger_name, log_level='DEBUG'):
    """Setup and return a logger."""
    log = logging.getLogger(logger_name)
    log.setLevel(log_level)
    formatter = logging.Formatter('[%(levelname)s] %(message)s')
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    log.addHandler(handler)
    return log
I use this logger in my modules and submodules:
from tgtk.logger import get_logger
log = get_logger(__name__, 'DEBUG')
so I can use it via
log.debug("Debug message")
log.info("Info message")
etc. This has worked perfectly so far, but today I encountered a weird problem: on one machine there is no output at all. Every log.xxx call is simply ignored, whatever level is set. I had similar issues in the past and I remember that it somehow started working again after I renamed the logger, but this time that does not help.
Is there any caching involved, or what is going on? The scripts are exactly the same (synced over SVN).
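Loggers are indeed cached by name: logging.getLogger() keeps them in logging.Logger.manager.loggerDict, so handlers, levels, and the disabled flag persist across get_logger() calls within a process. A minimal diagnostic sketch along those lines (the logger name is illustrative):

import logging

log = logging.getLogger('tgtk.some_module')   # whichever name get_logger() was called with

print(log.disabled)              # True would explain records being dropped silently
print(log.getEffectiveLevel())   # the level actually used, after walking up the parent loggers
print(log.handlers)              # the handlers attached by get_logger()
print('tgtk.some_module' in logging.Logger.manager.loggerDict)  # loggers are cached by name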
