I've created a logger with a custom handler that emits messages to Telegram via a bot. It works, but for some reason the message is also emitted to stderr (or stdout).
My code:
import logging
import requests

class TelegramHandler(logging.Handler):
    def emit(self, record):
        log_entry = self.format(record)
        payload = {
            'chat_id': TELEGRAM_CHAT_ID,
            'text': log_entry,
            'parse_mode': 'HTML'
        }
        return requests.post("https://api.telegram.org/bot{token}/sendMessage".format(token=TELEGRAM_TOKEN),
                             data=payload).content
# setting up root logger
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.WARNING)
# setting up my logger
logger_bot = logging.getLogger('bot')
handler = TelegramHandler()
logger_bot.addHandler(handler)
logger_bot.setLevel(logging.DEBUG)
And the following code:
logger_bot.info('bot test')
logging.warning('root test')
results in
2019-12-06 22:24:14,401 - bot - INFO - bot test  # (plus a message in Telegram)
2019-12-06 22:24:14,740 - root - WARNING - root test
I've checked the handlers:
for h in logger_bot.handlers:
    print(h)
and only one is present
<TelegramHandler (NOTSET)>
I also noticed that when I don't set up the root logger, the bot logger doesn't emit to stdout/stderr. So they are somehow connected, but I can't figure out exactly what is going on.
Thank you.
You need to set the propagate attribute on the bot logger to False.
So add logger_bot.propagate = False somewhere in its setup, and each record will then be handled only by that logger's own handlers instead of also propagating up to the root logger's handlers.
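As a minimal sketch, the setup from the question with that one line added:

# setting up my logger
logger_bot = logging.getLogger('bot')
logger_bot.addHandler(TelegramHandler())
logger_bot.setLevel(logging.DEBUG)
logger_bot.propagate = False  # records are no longer passed on to the root logger's handlers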
I set up logging module-wide like so:
def setup_logging(app):
    """
    Set up logging so as to include RequestId and relevant logging info
    """
    RequestID(app)
    handler = logging.StreamHandler()
    handler.setStream(sys.stdout)
    handler.propagate = False
    handler.setFormatter(
        logging.Formatter("[MHPM][%(module)s][%(funcName)s] %(levelname)s : %(request_id)s - %(message)s")
    )
    handler.addFilter(RequestIDLogFilter())  # << Add request id contextual filter
    logging.getLogger().addHandler(handler)
    logging.getLogger().setLevel(level="DEBUG")
and I use it so:
# in __init__.py
setup_logging(app)
# in MHPMService.py
logger = logging.getLogger(__name__)
But here's what I see on my console:
DEBUG:src.service.MHPMService:MHPMService.__init__(): initialized
[MHPM][MHPMService][__init__] DEBUG : 5106ec8e-9ffa-423d-9401-c34a92dcfa23 - MHPMService.__init__(): initialized
I only want the second type of log line in my application; how do I do this?
I reset the handlers and got the expected behaviour:
logger.handlers = []
Swap the current handler:
logging.getLogger().handlers[0] = handler
instead of doing this:
logging.getLogger().addHandler(handler)
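Put together, a rough sketch of setup_logging with the handler reset applied (my own sketch based on the question's code, not a verbatim fix from either answer):

def setup_logging(app):
    RequestID(app)
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(
        logging.Formatter("[MHPM][%(module)s][%(funcName)s] %(levelname)s : %(request_id)s - %(message)s")
    )
    handler.addFilter(RequestIDLogFilter())
    root = logging.getLogger()
    root.handlers = []        # drop whatever handler produced the plain "DEBUG:..." lines
    root.addHandler(handler)  # keep only the custom handler
    root.setLevel(logging.DEBUG)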
With the following code I write info logs to info.log and error logs to error.log:
import logging
logger_info = logging.getLogger('info')
logger_err = logging.getLogger('err')
logger_info.setLevel(logging.INFO)
logger_err.setLevel(logging.WARNING)
info_file_handler = logging.FileHandler('info.log')
error_file_handler = logging.FileHandler('error.log')
logger_info.addHandler(info_file_handler)
logger_err.addHandler(error_file_handler)
logger_info.info('info test')
logger_err.error('error test')
Now I am using 2 loggers: logger_err and logger_info.
Can I merge those 2 loggers into 1 logger, so that logger_info.info will write into info.log and logger_info.error will write to error.log?
It is uncommon because logging usually processes messages that have a higher severity than a threshold, but it is possible by using 2 handlers and a custom filter:
you attach a handler to the logger with a level of ERROR and make it write to the error.log file
you attach a second handler to the same logger with a level of INFO and make it write to the info.log file
you add a custom filter to that second handler to reject messages with a level higher than INFO
Demo:
class RevFilter:
    """A filter to reject messages ABOVE a maximum level"""
    def __init__(self, maxLev):
        self.maxLev = maxLev

    def filter(self, record):
        return record.levelno <= self.maxLev

hinf = logging.FileHandler('/path/to/info.log')
herr = logging.FileHandler('/path/to/error.log')
herr.setLevel(logging.ERROR)
hinf.setLevel(logging.INFO)
hinf.addFilter(RevFilter(logging.INFO))

logger = logging.getLogger(name)
logger.addHandler(hinf)
logger.addHandler(herr)
logger.setLevel(logging.INFO)  # or lower of course...
From that point on, error.log will receive messages sent by the logger at a level of ERROR or above, and info.log will only receive messages at exactly the INFO level, neither higher nor lower.
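For instance, with the demo configuration above you would expect roughly this routing (hypothetical calls, not part of the original answer):

logger.info("ends up in info.log only")
logger.warning("dropped by both handlers: rejected by hinf's filter, below herr's level")
logger.error("ends up in error.log only")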
I'm not sure I understand what you're looking to achieve, but if you're simply wanting to log both types of messages to the same file, you can just specify the same output file when you create the two FileHandlers:
import logging
logger_info = logging.getLogger('info')
logger_err = logging.getLogger('err')
logger_info.setLevel(logging.INFO)
logger_err.setLevel(logging.WARNING)
info_file_handler = logging.FileHandler('combined.log')
error_file_handler = logging.FileHandler('combined.log')
logger_info.addHandler(info_file_handler)
logger_err.addHandler(error_file_handler)
logger_info.info('info test')
logger_err.error('error test')
I'm trying to use two different loggers to handle different log levels. For example, I want info messages stored in a file and error messages kept out of it; error messages from some special functions will be emailed to the user.
I wrote a simple program to test the logging module.
Code:
import logging
def_logger = logging.getLogger("debuglogger")
def_logger.setLevel(logging.DEBUG)
maillogger = logging.getLogger("mail")
maillogger.setLevel(logging.ERROR)
mailhandler = logging.StreamHandler()
mailhandler.setLevel(logging.ERROR)
mailhandler.setFormatter(logging.Formatter('Error: %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
maillogger.addHandler(mailhandler)
print(def_logger.getEffectiveLevel())
print(maillogger.getEffectiveLevel())
def_logger.info("info 1")
maillogger.info("info 2")
def_logger.error("error 1")
maillogger.error("error 2")
Output:
(screenshot of the console output)
I can see that their levels are correct, but both of them act as if the level were ERROR.
How can I correctly configure them?
Answer: Based on blues' advice, I added a handler and it solved my problem.
Here is the modified code:
import logging
def_logger = logging.getLogger("debuglogger")
def_logger.setLevel(logging.DEBUG)
def_logger.addHandler(logging.StreamHandler()) #added a handler here
maillogger = logging.getLogger("mail")
maillogger.setLevel(logging.ERROR)
mailhandler = logging.StreamHandler()
mailhandler.setLevel(logging.ERROR)
mailhandler.setFormatter(logging.Formatter('Error: %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
maillogger.addHandler(mailhandler)
print(def_logger.getEffectiveLevel())
print(maillogger.getEffectiveLevel())
def_logger.info("info 1")
maillogger.info("info 2")
def_logger.error("error 1")
maillogger.error("error 2")
Neither def_logger nor any of its parents has a handler attached to it. So what happens is that the logging module falls back to logging.lastResort, which by default is a StreamHandler with level WARNING. That is the reason why the info message doesn't appear while the error does. So to solve your problem, attach a handler to def_logger.
Note: In your scenario, the only parent both loggers have is the root logger.
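As a quick illustration of that fallback (a hypothetical snippet, not taken from the question):

import logging

print(logging.lastResort)           # <_StderrHandler (WARNING)> -- the fallback handler
bare = logging.getLogger("nohandlers")
bare.setLevel(logging.DEBUG)
bare.info("swallowed")              # below WARNING, so the last-resort handler drops it
bare.error("printed to stderr")     # emitted by the last-resort handler (message only, no extra formatting)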
You could add a filter to each logger or handler to process only the records of interest:
import logging
def filter_info(record):
    return record.levelno == logging.INFO

def filter_error(record):
    return record.levelno >= logging.ERROR
# define debug logger
def_logger = logging.getLogger("debuglogger")
def_logger.setLevel(logging.DEBUG)
def_logger.addFilter(filter_info) # add filter directly to this logger since you didn't define any handler
# define handler for mail logger
mail_handler = logging.StreamHandler()
mail_handler.setLevel(logging.ERROR)
mail_handler.setFormatter(logging.Formatter('Error: %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
mail_handler.addFilter(filter_error) # add filter to handler
mail_logger = logging.getLogger("mail")
mail_logger.setLevel(logging.ERROR)
mail_logger.addHandler(mail_handler)
# test
def_logger.info("info 1")
mail_logger.info("info 2")
def_logger.error("error 1")
mail_logger.error("error 2")
If a filter returns True, the log record is processed. Otherwise, it is skipped.
Note:
A filter attached to a logger won't be called for log records generated by descendant loggers. For example, if you add a filter to logger A, it won't be called for records generated by logger A.B or A.B.C.
A filter attached to a handler is consulted before an event is emitted by that handler.
This means that you just need one logger, with two handlers that have different filters attached.
import logging
def filter_info(record):
    return record.levelno == logging.INFO

def filter_error(record):
    return record.levelno >= logging.ERROR
# define your logger
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
# define handler for file
file_handler = logging.FileHandler('path_to_log.txt')
file_handler.setLevel(logging.INFO)
file_handler.addFilter(filter_info) # add filter to handler
# define handler for mail
mail_handler = logging.StreamHandler()
mail_handler.setLevel(logging.ERROR)
mail_handler.setFormatter(logging.Formatter('Error: %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
mail_handler.addFilter(filter_error) # add filter to handler
logger.addHandler(file_handler)
logger.addHandler(mail_handler)
# test
logger.info("info 1")
logger.info("info 2")
logger.error("error 1")
logger.error("error 2")
I want to create a custom Python logging Handler to send messages via Slack.
I found this package, however it's no longer being updated, so I created a very bare-bones version of it. However, it doesn't seem to work: I added a print call for debugging purposes and emit is not being invoked.
import logging
# import slacker

class SlackerLogHandler(logging.Handler):
    def __init__(self, api_key, channel):
        super().__init__()
        self.channel = channel
        # self.slacker = slacker.Slacker(api_key)

    def emit(self, record):
        message = self.format(record)
        print('works')
        # self.slacker.chat.post_message(text=message, channel=self.channel)

slack_handler = SlackerLogHandler('token', 'channel')
slack_handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
slack_handler.setFormatter(formatter)

logger = logging.getLogger('my_app')
logger.addHandler(slack_handler)

logger.info('info_try')  # 'works' is not printed and no slack message is sent
I saw this answer and tried to also inherit from StreamHandler but to no avail.
I think I am missing something very basic.
Edited to remove the Slack logic for ease of reproduction.
After some additional searching, I found out that the logging level that is set for the handler is separate from the level that is set for the logger.
Meaning that adding:
logger.setLevel(logging.INFO)
fixes the problem.
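In other words, a minimal sketch of the fix using the names from the question:

logger = logging.getLogger('my_app')
logger.addHandler(slack_handler)
logger.setLevel(logging.INFO)  # without this, the logger inherits the root's WARNING level and drops INFO records
logger.info('info_try')        # now reaches the handler and prints 'works'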
I'm getting mad at Python's logging module, because I really have no idea anymore why the logger is printing the log messages to the console (at the DEBUG level, even though I set my FileHandler to INFO). The log file is produced correctly.
But I don't want any logger information on the console.
Here is my configuration for the logger:
template_name = "testing"
fh = logging.FileHandler(filename="testing.log")
fr = logging.Formatter("%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s")
fh.setFormatter(fr)
fh.setLevel(logging.INFO)
logger = logging.getLogger(template_name)
# logger.propagate = False # this helps nothing
logger.addHandler(fh)
Would be nice if anybody could help me out :)
I found this question as I encountered a similar issue after I had removed logging.basicConfig(): it started printing all logs to the console.
In my case, I needed to change the filename on every run to save a log file to different directories. Adding the basic config at the top (even if the initial filename is never used) solved the issue for me (I can't explain why, sorry).
What helped me was adding the basic configuration in the beginning:
logging.basicConfig(filename=filename,
                    format='%(levelname)s - %(asctime)s - %(name)s - %(message)s',
                    filemode='w',
                    level=logging.INFO)
Then changing the filename by adding a handler in each run:
file_handler = logging.FileHandler(path_log + f'/log_run_{c_run}.log')
formatter = logging.Formatter('%(asctime)s : %(levelname)s : %(name)s : %(message)s')
file_handler.setFormatter(formatter)
logger_TS.addHandler(file_handler)
Also, a curious side note: if I don't set the formatter (file_handler.setFormatter(formatter)) before adding the handler with the new filename, the initially configured formatting (levelname, time, etc.) is missing from the log files.
So, the key is to set up logging.basicConfig first, then add the handler, as #StressedBoi69420 indicated in their answer.
Hope that helps a bit.
You should be able to add a StreamHandler that handles stdout and set the handler's log level to a value above 50. (Standard log levels are 50 and below.)
Example of how I'd do it...
import logging
import sys
console_log_level = 100
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s",
                    filename="testing.log",
                    filemode="w")
console = logging.StreamHandler(sys.stdout)
console.setLevel(console_log_level)
root_logger = logging.getLogger("")
root_logger.addHandler(console)
logging.debug("debug log message")
logging.info("info log message")
logging.warning("warning log message")
logging.error("error log message")
logging.critical("critical log message")
Contents of testing.log...
2019-11-21 12:53:02,426,426 root INFO info log message
2019-11-21 12:53:02,426,426 root WARNING warning log message
2019-11-21 12:53:02,426,426 root ERROR error log message
2019-11-21 12:53:02,426,426 root CRITICAL critical log message
Note: The only reason I have the console_log_level variable is that I pulled most of this code from a default function I use, which sets the console log level based on an argument value. That way, if I want to make the script "quiet", I can change the log level based on a command-line arg to the script.
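For example, a hypothetical way of wiring that up (the --quiet flag and its handling are my own illustration, not part of the original code):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--quiet", action="store_true", help="suppress console output")
args = parser.parse_args()

# 100 is above every standard level (CRITICAL is 50), so the console handler emits nothing
console.setLevel(100 if args.quiet else logging.INFO)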