I have a Python program that uses multiprocessing to increase efficiency, and a function that creates a logger for each process. The logger function looks like this:
import logging
import os
def create_logger(app_name):
    """Create a logging interface"""
    # create a logger
    if logging in os.environ:
        logging_string = os.environ["logging"]
        if logging_string == "DEBUG":
            logging_level = loggin.DEBUG
        else if logging_string == "INFO":
            logging_level = logging.INFO
        else if logging_string == "WARNING":
            logging_level = logging.WARNING
        else if logging_string == "ERROR":
            logging_level = logging.ERROR
        else if logging_string == "CRITICAL":
            logging_level = logging.CRITICAL
        else:
            logging_level = logging.INFO
    logger = logging.getLogger(app_name)
    logger.setLevel(logging_level)
    # Console handler for error output
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging_level)
    # Formatter to make everything look nice
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    console_handler.setFormatter(formatter)
    # Add the handlers to the logger
    logger.addHandler(console_handler)
    return logger
And my processing functions look like this:
import custom_logging
def do_capture(data_dict_access):
    """Process data"""
    # Custom logging
    LOGGER = custom_logging.create_logger("processor")
    LOGGER.debug("Doing stuff...")
However, no matter what the logging environment variable is set to, I still receive debug log messages in the console. Why is my logging level not taking effect? Surely the calls to setLevel() should stop the debug messages from being logged.
Here is an easy way to create a logger object:
import logging
import os
def create_logger(app_name):
    """Create a logging interface"""
    logging_level = os.getenv('logging', logging.INFO)
    logging.basicConfig(
        level=logging_level,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    logger = logging.getLogger(app_name)
    return logger
Discussion
There is no need to convert from "DEBUG" to logging.DEBUG, the logging module understands these strings.
Use basicConfig to ease the pain of setting up a logger. You don't need to create a handler, set the format, set the level, and so on. This should work for most cases.
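For example (a quick sketch, independent of the code above), level names are accepted anywhere a level is expected:
import logging
logging.basicConfig(level="WARNING")        # same effect as logging.WARNING
log = logging.getLogger("demo")
log.setLevel("DEBUG")                       # setLevel also accepts the name
print(logging.getLevelName(log.level))      # DEBUG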
Update
I found out why your code does not work, besides the else if (which should be elif in Python). Consider this line:
if logging in os.environ:
On this line, logging without quotes refers to the logging module itself, not the name of the environment variable. What you want is:
if 'logging' in os.environ:
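With the quotes in place, the environment value is actually picked up. A quick sketch (the environment variable is set in code here only for illustration):
import logging
import os
os.environ["logging"] = "ERROR"             # normally set in the shell, not in code
if 'logging' in os.environ:                 # checks for the key "logging", not the module object
    level = os.environ["logging"]           # "ERROR"
    logging.getLogger("processor").setLevel(level)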
Related
I am trying to learn about logging. My code is logging the information that I want, but it's duplicating the messages.
The code block below is at the top of my .py file.
import datetime as dt  # needed for dt.date.today() below
import logging
import traceback
# setup the file name
log_file_name = dt.date.today().strftime('%Y_%B_%d')
log_path = "C:/some_path/"
# setup the logging
log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logger = logging.getLogger(__name__)
logger.setLevel('DEBUG')
file_handler = logging.FileHandler(log_path + log_file_name + '.log')
formatter = logging.Formatter(log_format)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
In my function I'm using a line like the one below to log some information.
logger.debug('some message')
My .py file doesn't import any other modules that do any logging, so I'm a bit confused as to why the messages are being duplicated.
I have a logger setup function using the logging package; after I call it, I can send messages at the various logging levels.
I would also like to send these messages to another function, a Telegram function called SendTelegramMsg().
How can I take the message after I call setup_logger and log something, for example logger.info("Start"), and send that exact same message to the SendTelegramMsg() function, which is inside the setup_logger function?
My current setup_logger function:
# Define the logging level and the file name
def setup_logger(telegram_integration=False):
    """To setup as many loggers as you want"""
    filename = os.path.join(os.path.sep, pathlib.Path(__file__).parent.resolve(), 'logs', str(dt.date.today()) + '.log')
    formatter = logging.Formatter('%(levelname)s: %(asctime)s: %(message)s', datefmt='%m/%d/%Y %H:%M:%S')
    level = logging.DEBUG
    handler = logging.FileHandler(filename, 'a')
    handler.setFormatter(formatter)
    consolehandler = logging.StreamHandler()
    consolehandler.setFormatter(formatter)
    logger = logging.getLogger('logs')
    if logger.hasHandlers():
        # Logger is already configured, remove all handlers
        logger.handlers = []
    else:
        logger.setLevel(level)
    logger.addHandler(handler)
    logger.addHandler(consolehandler)
    #if telegram_integration == True:
        #SendTelegramMsg(message goes here)
    return logger
After I call the function setup_logger():
logger = setup_logger()
logger.info("Start")
The output:
INFO: 01/06/2022 11:07:12: Start
How can I get this message and send it to SendTelegramMsg() when I enable the integration (telegram_integration=True)?
Implement a custom logging.Handler:
class TelegramHandler(logging.Handler):
    def emit(self, record):
        message = self.format(record)
        SendTelegramMsg(message)
        # SendTelegramMsg(message, record.levelno)    # Passing level
        # SendTelegramMsg(message, record.levelname)  # Passing level name
Add the handler:
def setup_logger(telegram_integration=False):
    # ...
    if telegram_integration:
        telegram_handler = TelegramHandler()
        logger.addHandler(telegram_handler)
    return logger
Usage, no change:
logger = setup_logger()
logger.info("Start")
Picking up the idea suggested by #gold_cy: you implement a custom logging.Handler. Some hints for that:
For the handler to be able to send messages via a bot, you may want to pass the bot to the handler's __init__ so that you have it available later.
emit must be implemented by you. Here you'll want to call format, which gives you a formatted version of the log record. You can then send that message via the bot.
Having a look at the implementation of StreamHandler and FileHandler may be helpful as well; a sketch based on these hints follows below.
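Putting those hints together, a minimal sketch might look like the following (the bot object and its send() method are placeholders for whatever Telegram client you actually use):
import logging

class TelegramBotHandler(logging.Handler):
    """Forward formatted log records to a Telegram bot."""

    def __init__(self, bot, level=logging.NOTSET):
        super().__init__(level)
        self.bot = bot                      # keep the bot so emit() can use it later

    def emit(self, record):
        try:
            message = self.format(record)   # formatted version of the log record
            self.bot.send(message)          # placeholder call on your bot object
        except Exception:
            self.handleError(record)        # never let a logging failure crash the app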
# Defining a global flag
tlInt = False

# Define the logging level and the file name
def setup_logger(telegram_integration=False):
    """To setup as many loggers as you want"""
    filename = os.path.join(os.path.sep, pathlib.Path(__file__).parent.resolve(), 'logs', str(dt.date.today()) + '.log')
    formatter = logging.Formatter('%(levelname)s: %(asctime)s: %(message)s', datefmt='%m/%d/%Y %H:%M:%S')
    level = logging.DEBUG
    handler = logging.FileHandler(filename, 'a')
    handler.setFormatter(formatter)
    consolehandler = logging.StreamHandler()
    consolehandler.setFormatter(formatter)
    logger = logging.getLogger('logs')
    if logger.hasHandlers():
        # Logger is already configured, remove all handlers
        logger.handlers = []
    else:
        logger.setLevel(level)
    logger.addHandler(handler)
    logger.addHandler(consolehandler)
    if telegram_integration == True:
        global tlInt
        tlInt = True
    return logger
# Logic: If telegram integration is true, it will call SendTelegramMsg to send the message to the app based on the level; if it is false, it will save the message to the local file based on the level
def GenerateLog(logger, levelFlag, data):
    global tlInt
    if tlInt == True:
        SendTelegramMsg(levelFlag, data)
    else:
        if levelFlag == "warning":
            logger.warning(data)
        elif levelFlag == "error":
            logger.error(data)
        elif levelFlag == "debug":
            logger.debug(data)
        elif levelFlag == "critical":
            logger.critical(data)
        else:
            logger.info(data)

# You can use the same logic in SendTelegramMsg as used in GenerateLog for deciding the level
def SendTelegramMsg(levelFlag, data):
    # your code goes here
    pass

logger = setup_logger()
GenerateLog(logger, 'warning', 'Start')
I am trying to use two different loggers to handle different log levels. For example, I want info messages stored in a file without logging error messages there, while error messages from some special functions will be emailed to the user.
I wrote a simple program to test the logging module.
Code:
import logging
def_logger = logging.getLogger("debuglogger")
def_logger.setLevel(logging.DEBUG)
maillogger = logging.getLogger("mail")
maillogger.setLevel(logging.ERROR)
mailhandler = logging.StreamHandler()
mailhandler.setLevel(logging.ERROR)
mailhandler.setFormatter(logging.Formatter('Error: %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
maillogger.addHandler(mailhandler)
print(def_logger.getEffectiveLevel())
print(maillogger.getEffectiveLevel())
def_logger.info("info 1")
maillogger.info("info 2")
def_logger.error("error 1")
maillogger.error("error 2")
Output:
(screenshot of the console output)
I can see that their levels are correct, but both of them act as if the level were ERROR.
How can I correctly configure them?
Answer: Based on blues' advice, I added a handler and it solved my problem.
Here is the modified code:
import logging
def_logger = logging.getLogger("debuglogger")
def_logger.setLevel(logging.DEBUG)
def_logger.addHandler(logging.StreamHandler()) #added a handler here
maillogger = logging.getLogger("mail")
maillogger.setLevel(logging.ERROR)
mailhandler = logging.StreamHandler()
mailhandler.setLevel(logging.ERROR)
mailhandler.setFormatter(logging.Formatter('Error: %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
maillogger.addHandler(mailhandler)
print(def_logger.getEffectiveLevel())
print(maillogger.getEffectiveLevel())
def_logger.info("info 1")
maillogger.info("info 2")
def_logger.error("error 1")
maillogger.error("error 2")
Neither the def_logger nor any of its parents has a handler attached to it. So what happens is that the logging module falls back to logging.lastResort, which by default is a stderr StreamHandler with level WARNING. That is the reason why the info message doesn't appear, while the error does. So to solve your problem, attach a handler to the def_logger.
Note: In your scenario the only parent both loggers have is the default root logger.
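A minimal illustration of that fallback (assuming nothing else has configured logging in the process):
import logging

bare = logging.getLogger("no_handlers")   # no handler attached to it or to the root logger
bare.setLevel(logging.DEBUG)
bare.info("dropped")                      # below WARNING, so logging.lastResort ignores it
bare.error("shown")                       # printed to stderr by lastResort, without any formatting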
You could add a filter to each logger or handler so that it processes only the records of interest:
import logging
def filter_info(record):
    return record.levelno == logging.INFO

def filter_error(record):
    return record.levelno >= logging.ERROR
# define debug logger
def_logger = logging.getLogger("debuglogger")
def_logger.setLevel(logging.DEBUG)
def_logger.addFilter(filter_info) # add filter directly to this logger since you didn't define any handler
# define handler for mail logger
mail_handler = logging.StreamHandler()
mail_handler.setLevel(logging.ERROR)
mail_handler.setFormatter(logging.Formatter('Error: %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
mail_handler.addFilter(filter_error) # add filter to handler
mail_logger = logging.getLogger("mail")
mail_logger.setLevel(logging.ERROR)
mail_logger.addHandler(mail_handler)
# test
def_logger.info("info 1")
mail_logger.info("info 2")
def_logger.error("error 1")
mail_logger.error("error 2")
If the filter returns True, the log record is processed. Otherwise, it is skipped.
Note:
A filter attached to a logger won't be called for log records generated by descendant loggers. For example, if you add a filter to logger A, it won't be called for records generated by logger A.B or A.B.C.
A filter attached to a handler is consulted before an event is emitted by that handler.
This means that you just need one logger and add two handlers with different filters attached.
import logging
def filter_info(record):
    return record.levelno == logging.INFO

def filter_error(record):
    return record.levelno >= logging.ERROR
# define your logger
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
# define handler for file
file_handler = logging.FileHandler('path_to_log.txt')
file_handler.setLevel(logging.INFO)
file_handler.addFilter(filter_info) # add filter to handler
# define handler for mail
mail_handler = logging.StreamHandler()
mail_handler.setLevel(logging.ERROR)
mail_handler.setFormatter(logging.Formatter('Error: %(asctime)s - %(name)s - %(levelname)s - %(message)s'))
mail_handler.addFilter(filter_error) # add filter to handler
logger.addHandler(file_handler)
logger.addHandler(mail_handler)
# test
logger.info("info 1")
logger.info("info 2")
logger.error("error 1")
logger.error("error 2")
I have encountered a problem that I couldn't solve. I use Python's logging to log info, with the logger level set to logging.DEBUG. I use gunicorn for logging at the same time. Normally, error messages go to Python's log file, and the request messages and other messages written by logger.info or logger.debug go to gunicorn's log file. However, with one application it doesn't behave that way: the messages output by logger.info also go to Python's log file. The problem is that I only want to see error messages in Python's log file; all the other messages should appear in gunicorn's log. Can anyone give me a clue as to where I might have gone wrong?
Thanks in advance,
alex
The following is my config:
LOGGER_LEVEL = logging.DEBUG
LOGGER_ROOT_NAME = "root"
LOGGER_ROOT_HANLDERS = [logging.StreamHandler, logging.FileHandler]
LOGGER_ROOT_LEVEL = LOGGER_LEVEL
LOGGER_ROOT_FORMAT = "[%(asctime)s %(levelname)s %(name)s %(funcName)s:%(lineno)d] %(message)s"
LOGGER_LEVEL = logging.ERROR
LOGGER_FILE_PATH = "/data/log/web/"
Code:
def config_root_logger(self):
    formatter = logging.Formatter(self.config.LOGGER_ROOT_FORMAT)
    logger = logging.getLogger()
    logger.setLevel(self.config.LOGGER_ROOT_LEVEL)
    filename = os.path.join(self.config.LOGGER_FILE_PATH, "secondordersrv.log")
    handler = logging.FileHandler(filename)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    # In the test environment, also add console logging
    self._add_test_handler(logger, formatter)

def _add_test_handler(self, logger, formatter):
    # In the test environment, also add console logging
    if self.config.RUN_MODE == 'test':
        handler = logging.StreamHandler()
        handler.setFormatter(formatter)
        logger.addHandler(handler)
My gunicorn config looks like this:
errorlog = '/data/log/web/%s.log' % APP_NAME
loglevel = 'info'
accesslog = '-'
You did not set the level of your handler.
After handler.setFormatter(formatter), add the following line:
handler.setLevel(self.config.LOGGER_LEVEL)
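To see why that line matters, here is a minimal, standalone sketch (not the asker's exact config; "app.log" is just an example path): the logger's level decides what enters the logging system, while each handler's level decides what that handler actually writes.
import logging

root = logging.getLogger()
root.setLevel(logging.DEBUG)            # the logger lets everything through
fh = logging.FileHandler("app.log")     # handler level defaults to NOTSET, i.e. no filtering
root.addHandler(fh)
root.info("written to app.log")         # reaches the file
fh.setLevel(logging.ERROR)              # now the handler filters records for itself
root.info("no longer written")          # dropped by the handler
root.error("still written")             # passes the handler's level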
I have a function in my Python package which returns a logger:
import logging
def get_logger(logger_name, log_level='DEBUG'):
    """Setup and return a logger."""
    log = logging.getLogger(logger_name)
    log.setLevel(log_level)
    formatter = logging.Formatter('[%(levelname)s] %(message)s')
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    log.addHandler(handler)
    return log
I use this logger in my modules and submodules:
from tgtk.logger import get_logger
log = get_logger(__name__, 'DEBUG')
so I can use it via
log.debug("Debug message")
log.info("Info message")
etc. This has worked perfectly so far, but today I encountered a weird problem: on one machine there is no output at all. Every log.xxx call is simply "ignored", whatever level is set. I had similar issues in the past, and I remember that it somehow started working again after I renamed the logger, but this time that does not help.
Is there some caching going on, or what is happening? The scripts are exactly the same (synced over SVN).