I know how to adjust the logging level for individual modules:
logging.getLogger(<module>).setLevel(logging.INFO)
How to implement different levels for specific modules in Python
My question is: can this be customized per handler? I have three handlers: file, network, and command line. My problem is that one module, sqlalchemy, is emitting far too many messages at the INFO level. My command-line handler is set to INFO; the network and file handlers are set to DEBUG. I want sqlalchemy's messages in the network and file logs, just not on the command line.
Or, alternatively, is there a way to "demote" messages from a logger - i.e. turn all INFO into DEBUG, for example?
Ideally, I would solve this with a filter on the source logger, changing the level of the messages at their origin - but I couldn't get the filter to take effect (I added it, but it never got called).
Try 1:
class SA_Filter(logging.Filter):
    def filter(self, record):
        if record.levelno == logging.INFO:
            # note: the attribute handlers compare against is levelno, not level
            record.levelno = logging.DEBUG
            record.levelname = 'DEBUG'
        return True

saLogger = logging.getLogger('sqlalchemy.engine')
saLogger.addFilter(SA_Filter())
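A likely reason the filter never fired: filters attached to a logger are only consulted for calls made directly on that logger, not for records propagating up from child loggers (sqlalchemy logs on children such as sqlalchemy.engine.Engine). A minimal sketch demonstrating this, with illustrative logger names:

```python
import logging

seen = []

class SpyFilter(logging.Filter):
    """Record the name of every logger whose filter step reaches us."""
    def filter(self, record):
        seen.append(record.name)
        return True

parent = logging.getLogger('pkg')
parent.addFilter(SpyFilter())
child = logging.getLogger('pkg.child')

child.warning('logged on the child')    # parent's filter is NOT consulted
parent.warning('logged on the parent')  # parent's filter runs

print(seen)  # ['pkg']
```

This is why a filter on 'sqlalchemy.engine' never sees records emitted via its child loggers.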
Try 2:
Based on this note from the logging documentation:
"Note that filters attached to handlers are consulted before an event is emitted by the handler" - I assumed that meant the filters ran before the handler decided, based on level, whether the record should be emitted.
So I had this, which modified the record:
class SA_Filter(logging.Filter):
    def filter(self, record):
        if record.name.startswith('sqlalchemy.engine') and record.levelno == logging.INFO:
            # note: the attribute handlers compare against is levelno, not level
            record.levelno = logging.DEBUG
            record.levelname = 'DEBUG'
        return True

# change INFO messages to DEBUG for sqlalchemy
handlers = logging.getLogger().handlers
for handler in handlers:
    if isinstance(handler, logging.StreamHandler):
        handler.addFilter(SA_Filter())
However, it turns out that the sequence is:
1. Check the record level against the handler level
2. Run the handler's filters
3. Emit the record
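This ordering can be verified with a throwaway handler and a filter that notes when it is called (names here are illustrative): a record below the handler's level never reaches the handler's filters at all.

```python
import logging

calls = []

class SpyFilter(logging.Filter):
    """Note the level of every record that reaches the handler's filters."""
    def filter(self, record):
        calls.append(record.levelno)
        return True

class NoopHandler(logging.Handler):
    def emit(self, record):
        pass  # discard output; we only care whether the filters ran

handler = NoopHandler(level=logging.WARNING)
handler.addFilter(SpyFilter())

logger = logging.getLogger('order_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(handler)

logger.info('below the handler level')   # handler filter never runs
logger.warning('at the handler level')   # handler filter runs

print(calls)  # [30] -- only the WARNING record reached the filters
```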
So I ended up with:
Try 3
This runs on the handler side and is given the handler's level: if the handler is at INFO or above, the sqlalchemy INFO messages are filtered out; otherwise they are emitted.
# create a log filter to drop sqlalchemy logs at INFO level - for the command-line handler
class SA_Filter(logging.Filter):
    """a class to filter out sqlalchemy logs at INFO level - for the commandline handler"""
    def __init__(self, handler_level: int, *args, **kwargs) -> None:
        self.handler_level = handler_level if handler_level else 0
        super().__init__(*args, **kwargs)

    def filter(self, record):
        """
        filter out sqlalchemy logs at INFO level
        """
        # if we're dealing with sqlalchemy logs, filter them out if they're at the INFO level and so is the handler.
        if record.name.startswith('sqlalchemy.engine') and record.levelno == logging.INFO and self.handler_level >= logging.INFO:
            return False
        else:
            return True

...

# add the filter to the command-line (stream) handlers
handlers = logging.getLogger().handlers
for handler in handlers:
    if isinstance(handler, logging.StreamHandler):
        handler.addFilter(SA_Filter(handler.level))
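The filter can be exercised in isolation by building LogRecords by hand, without importing sqlalchemy (a sketch; the logger name mirrors what sqlalchemy uses, but no real engine is involved):

```python
import logging

class SA_Filter(logging.Filter):
    """Drop sqlalchemy INFO records on handlers at INFO level or above."""
    def __init__(self, handler_level: int, *args, **kwargs) -> None:
        self.handler_level = handler_level if handler_level else 0
        super().__init__(*args, **kwargs)

    def filter(self, record):
        if (record.name.startswith('sqlalchemy.engine')
                and record.levelno == logging.INFO
                and self.handler_level >= logging.INFO):
            return False
        return True

def make_record(name, level):
    # build a bare LogRecord to feed the filter directly
    return logging.LogRecord(name, level, __file__, 0, 'msg', None, None)

cl_filter = SA_Filter(logging.INFO)     # stands in for the command-line handler
file_filter = SA_Filter(logging.DEBUG)  # stands in for the file/network handlers

print(cl_filter.filter(make_record('sqlalchemy.engine.Engine', logging.INFO)))    # False
print(file_filter.filter(make_record('sqlalchemy.engine.Engine', logging.INFO)))  # True
print(cl_filter.filter(make_record('myapp', logging.INFO)))                       # True
```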
I set up logging module-wide like so:
def setup_logging(app):
    """
    Set up logging so as to include RequestId and relevant logging info
    """
    RequestID(app)
    handler = logging.StreamHandler()
    handler.setStream(sys.stdout)
    handler.propagate = False  # note: propagate is a logger attribute; this line has no effect on a handler
    handler.setFormatter(
        logging.Formatter("[MHPM][%(module)s][%(funcName)s] %(levelname)s : %(request_id)s - %(message)s")
    )
    handler.addFilter(RequestIDLogFilter())  # << add request-id contextual filter
    logging.getLogger().addHandler(handler)
    logging.getLogger().setLevel(level="DEBUG")
and I use it so:
# in init.py
setup_logging(app)
# in MHPMService.py
logger = logging.getLogger(__name__)
But here's what I see on my console:
DEBUG:src.service.MHPMService:MHPMService.__init__(): initialized
[MHPM][MHPMService][__init__] DEBUG : 5106ec8e-9ffa-423d-9401-c34a92dcfa23 - MHPMService.__init__(): initialized
I only want the second type of logs in my application, how do I do this?
I reset the handlers and got the expected behaviour:
logger.handlers = []
Alternatively, swap out the current handler:
logging.getLogger().handlers[0] = handler
instead of doing this:
logging.getLogger().addHandler(handler)
With the following code I write INFO logs to info.log and error logs to error.log:
import logging
logger_info = logging.getLogger('info')
logger_err = logging.getLogger('err')
logger_info.setLevel(logging.INFO)
logger_err.setLevel(logging.WARNING)
info_file_handler = logging.FileHandler('info.log')
error_file_handler = logging.FileHandler('error.log')
logger_info.addHandler(info_file_handler)
logger_err.addHandler(error_file_handler)
logger_info.info('info test')
logger_err.error('error test')
Right now I am using two loggers: logger_err and logger_info.
Can I merge those two loggers into one, so that logger_info.info() writes to info.log and logger_info.error() writes to error.log?
It is uncommon, because logging usually processes every message above a severity threshold, but it is possible with two handlers and a custom filter:
- attach a handler to the logger with a level of ERROR and make it write to the error.log file
- attach a second handler to the same logger with a level of INFO and make it write to the info.log file
- add a custom filter to that second handler to reject messages with a level higher than INFO
Demo:
class RevFilter:
    """A filter to reject messages ABOVE a maximum level"""
    def __init__(self, maxLev):
        self.maxLev = maxLev

    def filter(self, record):
        return record.levelno <= self.maxLev

hinf = logging.FileHandler('/path/to/info.log')
herr = logging.FileHandler('/path/to/error.log')
herr.setLevel(logging.ERROR)
hinf.setLevel(logging.INFO)
hinf.addFilter(RevFilter(logging.INFO))

logger = logging.getLogger(__name__)
logger.addHandler(hinf)
logger.addHandler(herr)
logger.setLevel(logging.INFO)  # or lower, of course...
From that point on, the file error.log will receive messages sent by logger at a level of ERROR or above, and info.log will only receive messages at a level of exactly INFO, neither higher nor lower.
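The routing can be checked without touching the filesystem by swapping the FileHandlers for in-memory handlers (a sketch; ListHandler is a made-up helper, not part of the logging module):

```python
import logging

class ListHandler(logging.Handler):
    """Collect raw messages in memory so the routing can be inspected."""
    def __init__(self, level=logging.NOTSET):
        super().__init__(level)
        self.messages = []
    def emit(self, record):
        self.messages.append(record.getMessage())

class RevFilter:
    """Reject messages ABOVE a maximum level."""
    def __init__(self, maxLev):
        self.maxLev = maxLev
    def filter(self, record):
        return record.levelno <= self.maxLev

hinf = ListHandler(logging.INFO)    # plays the role of the info.log handler
herr = ListHandler(logging.ERROR)   # plays the role of the error.log handler
hinf.addFilter(RevFilter(logging.INFO))

logger = logging.getLogger('routing_demo')
logger.propagate = False
logger.setLevel(logging.INFO)
logger.addHandler(hinf)
logger.addHandler(herr)

logger.info('info test')
logger.error('error test')

print(hinf.messages)  # ['info test']
print(herr.messages)  # ['error test']
```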
I'm not sure I understand what you're looking to achieve, but if you're simply wanting to log both types of messages to the same file, you can just specify the same output file when you create the two FileHandlers:
import logging
logger_info = logging.getLogger('info')
logger_err = logging.getLogger('err')
logger_info.setLevel(logging.INFO)
logger_err.setLevel(logging.WARNING)
info_file_handler = logging.FileHandler('combined.log')
error_file_handler = logging.FileHandler('combined.log')
logger_info.addHandler(info_file_handler)
logger_err.addHandler(error_file_handler)
logger_info.info('info test')
logger_err.error('error test')
I want to create a custom Python logging handler to send messages via Slack.
I found this package, however it's no longer being updated, so I created a very bare-bones version of it. However, it doesn't seem to work: I added a print call for debugging purposes, and emit is not being invoked.
import logging
# import slacker

class SlackerLogHandler(logging.Handler):
    def __init__(self, api_key, channel):
        super().__init__()
        self.channel = channel
        # self.slacker = slacker.Slacker(api_key)

    def emit(self, record):
        message = self.format(record)
        print('works')
        # self.slacker.chat.post_message(text=message, channel=self.channel)

slack_handler = SlackerLogHandler('token', 'channel')
slack_handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
slack_handler.setFormatter(formatter)

logger = logging.getLogger('my_app')
logger.addHandler(slack_handler)
logger.info('info_try')  # 'works' is not printed and no Slack message is sent
I saw this answer and tried to also inherit from StreamHandler, but to no avail.
I think I am missing something very basic.
Edited to remove the Slack logic for ease of reproduction.
After some additional searching, I found out that the level set on a handler is separate from the level set on the logger, and that a logger with no level of its own inherits an effective level from its ancestors (ultimately the root logger's default of WARNING), so INFO records were being discarded before the handler was ever consulted.
Meaning that adding:
logger.setLevel(logging.INFO)
fixes the problem.
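A small sketch of that gating (the logger name is illustrative; the root level is restated to its default for the demo):

```python
import logging

logging.getLogger().setLevel(logging.WARNING)  # the root logger's default level

logger = logging.getLogger('my_app_demo')

# No level set on this logger: the effective level is inherited from root.
print(logger.getEffectiveLevel() == logging.WARNING)  # True
print(logger.isEnabledFor(logging.INFO))              # False -> info() is a no-op

logger.setLevel(logging.INFO)
print(logger.isEnabledFor(logging.INFO))              # True -> records reach the handlers
```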
I've created a logger with the custom handler which emits message to telegram via bot. It works, but for some reason the message is also emitted to stderr (or stdout).
My code:
import logging
import requests

class TelegramHandler(logging.Handler):
    def emit(self, record):
        log_entry = self.format(record)
        payload = {
            'chat_id': TELEGRAM_CHAT_ID,
            'text': log_entry,
            'parse_mode': 'HTML'
        }
        return requests.post("https://api.telegram.org/bot{token}/sendMessage".format(token=TELEGRAM_TOKEN),
                             data=payload).content
# setting up root logger
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.WARNING)
# setting up my logger
logger_bot = logging.getLogger('bot')
handler = TelegramHandler()
logger_bot.addHandler(handler)
logger_bot.setLevel(logging.DEBUG)
And the following code:
logger_bot.info('bot test')
logging.warning('root test')
results in
2019-12-06 22:24:14,401 - bot - INFO - bot test # *(plus message in telegram)*
2019-12-06 22:24:14,740 - root - WARNING - root test
I've checked the handlers:
for h in logger_bot.handlers:
    print(h)
and only one is present
<TelegramHandler (NOTSET)>
I also noticed that when I don't set up the root logger, the bot logger doesn't emit to std. So they are somehow connected, but I can't figure out exactly what is going on.
Thank you.
You need to set the propagate attribute on the bot logger to False.
So add logger_bot.propagate = False somewhere in its setup, and each record should then be handled only by its own logger's handlers.
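A sketch of the effect (the list-collecting handler is a stand-in for the root handler that basicConfig installs; names are illustrative):

```python
import logging

class ListHandler(logging.Handler):
    """Collect messages in memory so we can see what reached each logger."""
    def __init__(self):
        super().__init__()
        self.messages = []
    def emit(self, record):
        self.messages.append(record.getMessage())

root_capture = ListHandler()
logging.getLogger().addHandler(root_capture)   # plays the role of basicConfig's handler

bot = logging.getLogger('bot_demo')
bot.setLevel(logging.DEBUG)
bot_capture = ListHandler()
bot.addHandler(bot_capture)

bot.propagate = False      # stop records from bubbling up to the root handlers
bot.warning('handled once')

print(bot_capture.messages)   # ['handled once']
print(root_capture.messages)  # [] -- nothing propagated to the root handler
```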
I found a few other posts on this, but none that worked for me yet, so I wanted to reach out and see if anyone could explain how to properly get / redirect / set handlers on some of the loggers present in Flask / Werkzeug / sqlalchemy.
Research prior that could not answer my question:
https://github.com/pallets/flask/issues/1359
http://flask.pocoo.org/docs/dev/logging/
https://gist.github.com/ibeex/3257877
My configurations:
main.py
...
def init_app():
    """ Runs prior to app launching, contains initialization code """
    # set logging level
    if not os.path.exists(settings.LOG_DIR):
        os.makedirs(settings.LOG_DIR)

    # default level
    log_level = logging.CRITICAL
    if settings.ENV == 'DEV':
        log_level = logging.DEBUG
    elif settings.ENV == 'TEST':
        log_level = logging.WARNING
    elif settings.ENV == 'PROD':
        log_level = logging.ERROR

    log_formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")

    api_logger = logging.getLogger()
    api_handler = TimedRotatingFileHandler(
        settings.API_LOG_FILE,
        when='midnight',
        backupCount=10
    )
    api_handler.setLevel(log_level)
    api_handler.setFormatter(log_formatter)
    api_logger.addHandler(api_handler)
    logging.getLogger('werkzeug').addHandler(api_handler)

    db_logger = logging.getLogger('sqlalchemy')
    db_handler = TimedRotatingFileHandler(
        settings.DB_LOG_FILE,
        when='midnight',
        backupCount=10
    )
    db_handler.setLevel(log_level)
    db_handler.setFormatter(log_formatter)
    db_logger.addHandler(db_handler)
    logging.getLogger('sqlalchemy.engine').addHandler(db_handler)
    logging.getLogger('sqlalchemy.dialects').addHandler(db_handler)
    logging.getLogger('sqlalchemy.pool').addHandler(db_handler)
    logging.getLogger('sqlalchemy.orm').addHandler(db_handler)

    # add endpoints
    ...

if __name__ == '__main__':
    init_app()
    app.run(host='0.0.0.0', port=7777)
I tried grabbing and changing settings on the loggers a few different ways, but I still end up with the werkzeug debug output going to the console and not to my logs. I can see the log files are being created, but it doesn't look like the loggers are actually writing to them:
api.log (formatter wrote to it)
2018-02-15 12:03:03,944] {/usr/local/lib/python3.5/dist-packages/werkzeug/_internal.py:88} WARNING - * Debugger is active!
db.log (empty)
Any insight on this would be much appreciated!
UPDATE
I was able to get the werkzeug logger working using the long-hand version; it seems the shorthand function calls shown were returning null objects. The sqlalchemy logger is still outputting to the console, though. Could the engine configuration be overriding my file handler?
main.py
...
from copy import copy

# close current file handlers
for handler in copy(logging.getLogger().handlers):
    logging.getLogger().removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('werkzeug').handlers):
    logging.getLogger('werkzeug').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.engine').handlers):
    logging.getLogger('sqlalchemy.engine').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.dialects').handlers):
    logging.getLogger('sqlalchemy.dialects').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.pool').handlers):
    logging.getLogger('sqlalchemy.pool').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.orm').handlers):
    logging.getLogger('sqlalchemy.orm').removeHandler(handler)
    handler.close()

# create our own custom handlers
log_formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")
api_handler = TimedRotatingFileHandler(
    settings.API_LOG_FILE,
    when='midnight',
    backupCount=10
)
api_handler.setLevel(log_level)
api_handler.setFormatter(log_formatter)

logging.getLogger().setLevel(log_level)
logging.getLogger().addHandler(api_handler)
logging.getLogger('werkzeug').setLevel(log_level)
logging.getLogger('werkzeug').addHandler(api_handler)

db_handler = TimedRotatingFileHandler(
    settings.DB_LOG_FILE,
    when='midnight',
    backupCount=10
)
db_handler.setLevel(log_level)
db_handler.setFormatter(log_formatter)
logging.getLogger('sqlalchemy.engine').addHandler(db_handler)
logging.getLogger('sqlalchemy.engine').setLevel(log_level)
logging.getLogger('sqlalchemy.dialects').addHandler(db_handler)
logging.getLogger('sqlalchemy.dialects').setLevel(log_level)
logging.getLogger('sqlalchemy.pool').addHandler(db_handler)
logging.getLogger('sqlalchemy.pool').setLevel(log_level)
logging.getLogger('sqlalchemy.orm').addHandler(db_handler)
logging.getLogger('sqlalchemy.orm').setLevel(log_level)
database.py
...
engine = create_engine(getDBURI(), echo="debug", echo_pool=True, pool_recycle=10)
ANSWER
For future reference, if anyone runs into this issue: the sqlalchemy engine configuration echo=True or echo='debug' will OVERRIDE your loggers. I fixed the issue by changing my engine configuration to:
engine = create_engine(getDBURI(), echo_pool=True, pool_recycle=10)
And then everything worked like a charm. Cheers! :D
As I understand it, your file-based log configuration for werkzeug is actually working => it outputs into api.log.
The db log handler is also working (the file gets created etc.), but there is no output.
This is probably because those loggers default to a higher effective level. You need to set them manually to a lower level, like this:
logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.dialects').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.pool').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.orm').setLevel(logging.DEBUG)
That werkzeug is still outputting to the console is probably because there is always a root logger defined. Before you add your new handlers, you should do the following to remove all existing log handlers:
for handler in copy(logging.getLogger().handlers):
    logging.getLogger().removeHandler(handler)
    handler.close()  # clean up used file handles
Then you can also assign your app log handler as the root log handler with
logging.getLogger().addHandler(api_handler)
If it's not the root logger but just the werkzeug logger that has a default console handler defined, you can also just remove all handlers from the werkzeug logger before adding yours, like this:
for handler in copy(logging.getLogger('werkzeug').handlers):
    logging.getLogger('werkzeug').removeHandler(handler)
    handler.close()  # clean up used file handles
logging.getLogger('werkzeug').addHandler(api_handler)