I found a few other posts on this, but none of them worked for me yet, so I wanted to reach out and see if anyone could explain how to properly get / redirect / set handlers on some of the loggers present in Flask / Werkzeug / SQLAlchemy.
Prior research that did not answer my question:
https://github.com/pallets/flask/issues/1359
http://flask.pocoo.org/docs/dev/logging/
https://gist.github.com/ibeex/3257877
My configurations:
main.py
...
def init_app():
    """ Runs prior to app launching, contains initialization code """
    # set logging level
    if not os.path.exists(settings.LOG_DIR):
        os.makedirs(settings.LOG_DIR)

    # default level
    log_level = logging.CRITICAL
    if settings.ENV == 'DEV':
        log_level = logging.DEBUG
    elif settings.ENV == 'TEST':
        log_level = logging.WARNING
    elif settings.ENV == 'PROD':
        log_level = logging.ERROR

    log_formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")

    api_logger = logging.getLogger()
    api_handler = TimedRotatingFileHandler(
        settings.API_LOG_FILE,
        when='midnight',
        backupCount=10
    )
    api_handler.setLevel(log_level)
    api_handler.setFormatter(log_formatter)
    api_logger.addHandler(api_handler)
    logging.getLogger('werkzeug').addHandler(api_handler)

    db_logger = logging.getLogger('sqlalchemy')
    db_handler = TimedRotatingFileHandler(
        settings.DB_LOG_FILE,
        when='midnight',
        backupCount=10
    )
    db_handler.setLevel(log_level)
    db_handler.setFormatter(log_formatter)
    db_logger.addHandler(db_handler)
    logging.getLogger('sqlalchemy.engine').addHandler(db_handler)
    logging.getLogger('sqlalchemy.dialects').addHandler(db_handler)
    logging.getLogger('sqlalchemy.pool').addHandler(db_handler)
    logging.getLogger('sqlalchemy.orm').addHandler(db_handler)

    # add endpoints
    ...

if __name__ == '__main__':
    init_app()
    app.run(host='0.0.0.0', port=7777)
I have tried grabbing and changing settings on the loggers a few different ways, but I still end up with the werkzeug debug output going to the console instead of my logs. I can see the log files are being created, but it doesn't look like the loggers are actually writing to them:
api.log (formatter wrote to it)
[2018-02-15 12:03:03,944] {/usr/local/lib/python3.5/dist-packages/werkzeug/_internal.py:88} WARNING - * Debugger is active!
db.log (empty)
Any insight on this would be much appreciated!
UPDATE
I was able to get the werkzeug logger working using the long-hand version below; it seems the shorthand function calls shown above were returning null objects. The sqlalchemy logger is still outputting to the console though. Could the engine configuration be overriding my file handler?
main.py
...
# close current file handlers
for handler in copy(logging.getLogger().handlers):
    logging.getLogger().removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('werkzeug').handlers):
    logging.getLogger('werkzeug').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.engine').handlers):
    logging.getLogger('sqlalchemy.engine').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.dialects').handlers):
    logging.getLogger('sqlalchemy.dialects').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.pool').handlers):
    logging.getLogger('sqlalchemy.pool').removeHandler(handler)
    handler.close()
for handler in copy(logging.getLogger('sqlalchemy.orm').handlers):
    logging.getLogger('sqlalchemy.orm').removeHandler(handler)
    handler.close()

# create our own custom handlers
log_formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")

api_handler = TimedRotatingFileHandler(
    settings.API_LOG_FILE,
    when='midnight',
    backupCount=10
)
api_handler.setLevel(log_level)
api_handler.setFormatter(log_formatter)
logging.getLogger().setLevel(log_level)
logging.getLogger().addHandler(api_handler)
logging.getLogger('werkzeug').setLevel(log_level)
logging.getLogger('werkzeug').addHandler(api_handler)

db_handler = TimedRotatingFileHandler(
    settings.DB_LOG_FILE,
    when='midnight',
    backupCount=10
)
db_handler.setLevel(log_level)
db_handler.setFormatter(log_formatter)
logging.getLogger('sqlalchemy.engine').addHandler(db_handler)
logging.getLogger('sqlalchemy.engine').setLevel(log_level)
logging.getLogger('sqlalchemy.dialects').addHandler(db_handler)
logging.getLogger('sqlalchemy.dialects').setLevel(log_level)
logging.getLogger('sqlalchemy.pool').addHandler(db_handler)
logging.getLogger('sqlalchemy.pool').setLevel(log_level)
logging.getLogger('sqlalchemy.orm').addHandler(db_handler)
logging.getLogger('sqlalchemy.orm').setLevel(log_level)
database.py
...
engine = create_engine(getDBURI(), echo="debug", echo_pool=True, pool_recycle=10)
ANSWER
For future reference, if anyone runs into this issue: the SQLAlchemy engine options echo=True / echo='debug' will OVERRIDE your loggers. I fixed the issue by changing my engine configuration to:
engine = create_engine(getDBURI(), echo_pool=True, pool_recycle=10)
And then everything worked like a charm. Cheers! :D
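For reference, if you still want engine/pool output in your log files, an alternative (a sketch, not from the original post) is to leave the echo flags off entirely and drive the SQLAlchemy loggers through the logging module, which is what the file handlers above are attached to:

# assumes db_handler exists as in the update above
engine = create_engine(getDBURI(), pool_recycle=10)  # no echo / echo_pool

for name in ('sqlalchemy.engine', 'sqlalchemy.pool'):
    db_logger = logging.getLogger(name)
    db_logger.setLevel(logging.INFO)   # INFO logs SQL statements; DEBUG also logs result rows
    db_logger.addHandler(db_handler)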
As I understand it, your file-based log configuration for werkzeug is actually working => it outputs into api.log.
The db log handler is also working (the file gets created etc.), but there is no output.
This is probably because the effective log level of those loggers defaults to WARNING, so nothing below that gets through. You need to set them manually to a lower level, like this:
logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.dialects').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.pool').setLevel(logging.DEBUG)
logging.getLogger('sqlalchemy.orm').setLevel(logging.DEBUG)
That werkzeug is still outputting to the console is probably because there is always a root logger defined. Before you add your new handlers, you should do the following to remove all existing log handlers:
for handler in copy(logging.getLogger().handlers):
    logging.getLogger().removeHandler(handler)
    handler.close()  # clean up used file handles
Then you can also assign your app log handler as the root log handler with
logging.getLogger().addHandler(api_handler)
If it's not the root logger but just the werkzeug logger that has a default console handler defined, you can also just remove all handlers from the werkzeug logger before adding yours, like this:
for handler in copy(logging.getLogger('werkzeug').handlers):
    logging.getLogger('werkzeug').removeHandler(handler)
    handler.close()  # clean up used file handles
logging.getLogger('werkzeug').addHandler(api_handler)
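The same pattern can be wrapped into a small helper so each logger is redirected in one call. A sketch based on the snippets above (the redirect_to_file name is made up for illustration):

from copy import copy
import logging

def redirect_to_file(logger_name, file_handler, level=logging.DEBUG):
    """Point the named logger (or the root logger if logger_name is '') at file_handler only."""
    logger = logging.getLogger(logger_name)
    for handler in copy(logger.handlers):  # drop any default console handlers
        logger.removeHandler(handler)
        handler.close()
    logger.setLevel(level)
    logger.addHandler(file_handler)

# e.g.:
# redirect_to_file('', api_handler)
# redirect_to_file('werkzeug', api_handler)
# redirect_to_file('sqlalchemy.engine', db_handler)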
Related
I set up logging module wide like so:
def setup_logging(app):
    """
    Set up logging so as to include RequestId and relevant logging info
    """
    RequestID(app)
    handler = logging.StreamHandler()
    handler.setStream(sys.stdout)
    handler.propagate = False
    handler.setFormatter(
        logging.Formatter("[MHPM][%(module)s][%(funcName)s] %(levelname)s : %(request_id)s - %(message)s")
    )
    handler.addFilter(RequestIDLogFilter())  # << Add request id contextual filter
    logging.getLogger().addHandler(handler)
    logging.getLogger().setLevel(level="DEBUG")
and I use it so:
# in init.py
setup_logging(app)
# in MHPMService.py
logger = logging.getLogger(__name__)
But here's what I see on my console:
DEBUG:src.service.MHPMService:MHPMService.__init__(): initialized
[MHPM][MHPMService][__init__] DEBUG : 5106ec8e-9ffa-423d-9401-c34a92dcfa23 - MHPMService.__init__(): initialized
I only want the second type of logs in my application, how do I do this?
I reset the handlers and got the expected behaviour:
logger.handlers = []
Alternatively, swap out the current handler:
logging.getLogger().handlers[0] = handler
instead of doing this:
logging.getLogger().addHandler(handler)
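For completeness, here is a sketch of what that looks like inside setup_logging from the question (only the handler juggling differs from the original; it assumes the same imports: sys, logging, RequestID, RequestIDLogFilter):

def setup_logging(app):
    RequestID(app)
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(
        logging.Formatter("[MHPM][%(module)s][%(funcName)s] %(levelname)s : %(request_id)s - %(message)s")
    )
    handler.addFilter(RequestIDLogFilter())
    root = logging.getLogger()
    root.handlers = []        # drop whichever handler produced the plain "DEBUG:src.service..." lines
    root.addHandler(handler)  # only the [MHPM]-formatted output remains
    root.setLevel("DEBUG")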
I have the following script, and I want only the "DEBUG" log messages to be logged to the file, and nothing to the screen.
from flask import Flask, request, jsonify
from gevent.pywsgi import WSGIServer
import usaddress
app = Flask(__name__)
from logging.handlers import RotatingFileHandler
import logging
#logging.basicConfig(filename='error.log',level=logging.DEBUG)
# create a file to store weblogs
log = open('error.log', 'w'); log.seek(0); log.truncate();
log.write("Web Application Log\n"); log.close();
log_handler = RotatingFileHandler('error.log', maxBytes =1000000, backupCount=1)
formatter = logging.Formatter(
"[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s"
)
log_handler.setFormatter(formatter)
app.logger.setLevel(logging.DEBUG)
app.logger.addHandler(log_handler)
@app.route('/')
def hello():
    return "Hello World!"

@app.route("/parseAddress", methods=["POST"])
def parseAddress():
    address = request.form['address']
    return jsonify(usaddress.tag(address)), 200

if __name__ == '__main__':
    # app.run(host='0.0.0.0')
    http_server = WSGIServer(('', 5000), app, log=app.logger)
    http_server.serve_forever()
But right now even "INFO" messages are being logged to the file and to the screen. How can I have only the "DEBUG" messages logged to the file and nothing to the screen?
I did some interactive testing and it looks like app.logger is a logger named after the Python file:
print(app.logger.name) # filename
and it has one handler
print(app.logger.handlers) # [<StreamHandler <stderr> (NOTSET)>]
A handler with level logging.NOTSET processes all messages (from all logging levels). So when you set app.logger.setLevel(logging.DEBUG), all DEBUG and higher records are passed to the handlers, and all of them appear on stderr.
To log absolutely nothing to the screen, you have to manually remove the StreamHandler:
app.logger.handlers.pop(0)
and to log DEBUG and higher to the file, set the logging level on the handler as well:
log_handler.setLevel(logging.DEBUG)
Python logging is quite complicated. See this logging flow chart for a better understanding of what is going on :)
EDIT: to log only one specific level, you need a custom Filter object:
class LevelFilter:
    def __init__(self, level):
        self._level = level

    def filter(self, log_record):
        return log_record.levelno == self._level

log_handler.setLevel(logging.DEBUG)
log_handler.addFilter(LevelFilter(logging.DEBUG))
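Putting the pieces together for the script in the question, the logging setup could look like this; a sketch only (it drops Flask's default stream handler, keeps exactly the DEBUG records in the file, and prints nothing to the screen):

log_handler = RotatingFileHandler('error.log', maxBytes=1000000, backupCount=1)
log_handler.setFormatter(formatter)
log_handler.setLevel(logging.DEBUG)
log_handler.addFilter(LevelFilter(logging.DEBUG))  # keep only records whose level is exactly DEBUG

app.logger.handlers.clear()        # remove the default StreamHandler so nothing goes to stderr
app.logger.setLevel(logging.DEBUG)
app.logger.addHandler(log_handler)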
I have a python program that utilizes multiprocessing to increase efficiency, and a function that creates a logger for each process. The logger function looks like this:
import logging
import os
def create_logger(app_name):
    """Create a logging interface"""
    # create a logger
    if logging in os.environ:
        logging_string = os.environ["logging"]
        if logging_string == "DEBUG":
            logging_level = loggin.DEBUG
        else if logging_string == "INFO":
            logging_level = logging.INFO
        else if logging_string == "WARNING":
            logging_level = logging.WARNING
        else if logging_string == "ERROR":
            logging_level = logging.ERROR
        else if logging_string == "CRITICAL":
            logging_level = logging.CRITICAL
    else:
        logging_level = logging.INFO
    logger = logging.getLogger(app_name)
    logger.setLevel(logging_level)
    # Console handler for error output
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging_level)
    # Formatter to make everything look nice
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    console_handler.setFormatter(formatter)
    # Add the handlers to the logger
    logger.addHandler(console_handler)
    return logger
And my processing functions look like this:
import custom_logging
def do_capture(data_dict_access):
    """Process data"""
    # Custom logging
    LOGGER = custom_logging.create_logger("processor")

    LOGGER.debug("Doing stuff...")
However, no matter what the logging environment variable is set to, I still receive debug log messages in the console. Why is my logging level not taking effect? Surely the calls to setLevel() should stop the debug messages from being logged?
Here is an easy way to create a logger object:
import logging
import os
def create_logger(app_name):
    """Create a logging interface"""
    logging_level = os.getenv('logging', logging.INFO)
    logging.basicConfig(
        level=logging_level,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    logger = logging.getLogger(app_name)
    return logger
Discussion
There is no need to convert from "DEBUG" to logging.DEBUG; the logging module understands these level names.
Use basicConfig to ease the pain of setting up a logger. You don't need to create a handler, set the format, set the level, and so on. This should work for most cases.
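For illustration, a quick usage sketch (the lowercase 'logging' environment variable name comes from the question; everything else is standard library):

import os

os.environ['logging'] = 'DEBUG'     # level names such as "DEBUG" are accepted directly by basicConfig
logger = create_logger('processor')
logger.debug('visible, because the level resolved to DEBUG')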
Update
I found out why your code does not work, besides the else if (which should be elif in Python). Consider this line:
if logging in os.environ:
On this line, logging without quotes refers to the logging library module. What you want is:
if 'logging' in os.environ:
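Putting both fixes together, a corrected version of the original create_logger might look like this (only a sketch; the lowercase 'logging' environment variable name is kept from the question):

import logging
import os

def create_logger(app_name):
    """Create a logging interface"""
    levels = {
        "DEBUG": logging.DEBUG,
        "INFO": logging.INFO,
        "WARNING": logging.WARNING,
        "ERROR": logging.ERROR,
        "CRITICAL": logging.CRITICAL,
    }
    # 'logging' must be quoted: we want the environment variable, not the logging module
    logging_level = levels.get(os.environ.get("logging", ""), logging.INFO)
    logger = logging.getLogger(app_name)
    logger.setLevel(logging_level)
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging_level)
    console_handler.setFormatter(
        logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    # Avoid stacking duplicate handlers when create_logger is called more than once for the same name
    if not logger.handlers:
        logger.addHandler(console_handler)
    return logger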
I have encountered a problem that I couldn't solve. I use Python's logger to log info, with the logger level set to logging.DEBUG, and I use gunicorn to log info at the same time. Normally, error messages go to Python's logger, and the link messages and other messages written by logger.info or logger.debug go to gunicorn's log file. However, one application doesn't behave this way: the messages output by logger.info also go to Python's logger. The problem is, I only want to see error messages in Python's logger; all the other messages should be visible in gunicorn's logger. Can anyone give me a clue about where I might have gone wrong in this situation?
Thanks in advance,
Alex
The following is my config:
LOGGER_LEVEL = logging.DEBUG
LOGGER_ROOT_NAME = "root"
LOGGER_ROOT_HANLDERS = [logging.StreamHandler, logging.FileHandler]
LOGGER_ROOT_LEVEL = LOGGER_LEVEL
LOGGER_ROOT_FORMAT = "[%(asctime)s %(levelname)s %(name)s %(funcName)s:%(lineno)d] %(message)s"
LOGGER_LEVEL = logging.ERROR
LOGGER_FILE_PATH = "/data/log/web/"
Code:
def config_root_logger(self):
    formatter = logging.Formatter(self.config.LOGGER_ROOT_FORMAT)
    logger = logging.getLogger()
    logger.setLevel(self.config.LOGGER_ROOT_LEVEL)
    filename = os.path.join(self.config.LOGGER_FILE_PATH, "secondordersrv.log")
    handler = logging.FileHandler(filename)
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    # in the test environment, additionally log to the console
    self._add_test_handler(logger, formatter)

def _add_test_handler(self, logger, formatter):
    # in the test environment, additionally log to the console
    if self.config.RUN_MODE == 'test':
        handler = logging.StreamHandler()
        handler.setFormatter(formatter)
        logger.addHandler(handler)
My gunicorn config looks like this:
errorlog = '/data/log/web/%s.log' % APP_NAME
loglevel = 'info'
accesslog = '-'
You did not set the level of your handler.
After handler.setFormatter(formatter), add the following line:
handler.setLevel(self.config.LOGGER_LEVEL)
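In context, the file-handler part of config_root_logger would then look something like this (a sketch; with the config above, self.config.LOGGER_LEVEL resolves to logging.ERROR):

handler = logging.FileHandler(filename)
handler.setFormatter(formatter)
# The root logger stays at its configured level, but this handler only writes
# ERROR and above, so info/debug records no longer end up in this file.
handler.setLevel(self.config.LOGGER_LEVEL)
logger.addHandler(handler)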
I have a function in my python package, which returns a logger:
import logging
def get_logger(logger_name, log_level='DEBUG'):
    """Setup and return a logger."""
    log = logging.getLogger(logger_name)
    log.setLevel(log_level)
    formatter = logging.Formatter('[%(levelname)s] %(message)s')
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    log.addHandler(handler)
    return log
I use this logger in my modules and submodules:
from tgtk.logger import get_logger
log = get_logger(__name__, 'DEBUG')
so I can use it via
log.debug("Debug message")
log.info("Info message")
etc. This has worked perfectly so far, but today I encountered a weird problem: on one machine there is no output at all. Every log.xxx call is simply being "ignored", whatever level is set. I had some similar issues in the past, and I remember that it somehow started working again after I renamed the logger, but this time that does not help.
Is there any caching involved, or what is going on? The scripts are exactly the same (synced over SVN).
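One way to narrow this down is to inspect the logger's state on the affected machine. This is purely a diagnostic sketch (not from the original thread), using only standard-library attributes:

import logging
from tgtk.logger import get_logger

log = get_logger(__name__, 'DEBUG')
log.debug("Debug message")                 # produces nothing on the broken machine

# Inspect what the logging machinery thinks is configured:
print(log.level, log.getEffectiveLevel())  # 0 means NOTSET (inherited from the parent logger)
print(log.handlers)                        # should contain the StreamHandler added in get_logger
print(log.disabled)                        # True if e.g. fileConfig(..., disable_existing_loggers=True) ran later
print(logging.root.manager.disable)        # global threshold set by logging.disable(level)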