Python Flask Logging Problems - python

I am about to write a Flask app for something very trivial, and I hit a roadblock with my logging practices.
Below is a simple Flask app I wrote to illustrate the problem; I have been stuck on it for some time trying to figure out what is happening with python-logging and Flask.
# standard
from flask import Flask
from flask_restful import Api
import logging
import json
# logging config
log_fmt = "%(asctime)s %(levelname)s %(process)d %(filename)s %(funcName)s %(message)s"
logging.basicConfig(
    filename="test.log",
    filemode="w",
    format=log_fmt,
    level=logging.DEBUG
)
# create an object of flask (flask is a web framework)
app = Flask(__name__)
api = Api(app)
# health check /
@app.route("/", methods=['GET'])
def default():
    logging.debug("/ request received")
    out_dict = {
        "hello": "world"
    }
    logging.debug("/ response " + str(out_dict))
    return json.dumps(out_dict)
# main function, entry point
if __name__ == "__main__":
    # invokes src and runs the application
    logging.debug("starting")
    # COMMENTING out the line below gets me the log file! What's happening with flask & logging?
    app.run(host="0.0.0.0", port=7001, debug=True)
    logging.debug("stopping")
This is the pattern I generally adopt when I need logging. But when I use it together with app.run(...), the log file never gets created, and I cannot figure out why.
On the contrary, if I comment out the app.run(...) call, the log file gets created with the debug logs I have in place.
I have been struggling to understand this. I did land on Flask's built-in log handler, but looking at its implementation it attaches to the logging module itself, so the whole thing still doesn't make sense to me. Any help or direction here would be appreciated.

Using logging.basicConfig makes a number of assumptions that calling app.run(...) may undo, since Flask, as you noted, also uses the logging module to set up its own logging output. However, you can manually set up a file handler and attach it to the root logger (i.e. replace the # logging config section with the following):
# logging config
log_fmt = "%(asctime)s %(levelname)s %(process)d %(filename)s %(funcName)s %(message)s"
handler = logging.FileHandler('test.log')
handler.setFormatter(logging.Formatter(log_fmt))
root_logger = logging.getLogger()
root_logger.addHandler(handler)
root_logger.setLevel(logging.DEBUG)
This creates a file handler with the formatter set to the log_fmt you specified and attaches it to the root logger returned by logging.getLogger(). If you run the application, send it a few requests and quit, you should see the corresponding entries inside test.log in the current working directory, while some of Flask's typical logging output will also still appear on the console.
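If the log file still never shows up, it can help to inspect what is actually attached to the root logger at the point where you expect your handler to be active. A quick check using only the standard logging API (nothing here is specific to Flask):
import logging

root = logging.getLogger()
print(root.level, root.handlers)  # shows the effective level and every handler currently attached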

Related

How to redirect another library's console logging messages to a file, in Python

The fastAPI library that I import for an API I have written writes many logging.INFO-level messages to the console, which I would like to redirect either to a file-based log or, ideally, to both console and file.
So I've tried to implement this Stack Overflow answer ("Easy-peasy with Python 3.3 and above"), but the log file it creates ("api_screen.log") is always empty.
# -------------------------- logging ----------------------------
logging_file = "api_screen.log"
logging_level = logging.INFO
logging_format = ' %(message)s'
logging_handlers = [logging.FileHandler(logging_file), logging.StreamHandler()]
logging.basicConfig(level=logging_level, format=logging_format, handlers=logging_handlers)
logging.info("------logging test------")
Even though my own "------logging test------" message does appear on the console among the other fastAPI log lines, the file gets created with size zero.
So what else do I need to do to get the file logging working?
There are multiple issues here. First and most importantly: basicConfig does nothing if the root logger is already configured, and fastAPI has already configured it. So the handlers you are creating are never used. When you call logging.info() you are sending a record to the root logger, and it gets printed because fastAPI has already added a handler to it. You are also not setting the level on your handlers. Try this code instead of what you currently have:
logging_file = "api_screen.log"
logging_level = logging.INFO
logging_fh = logging.FileHandler(logging_file)
logging_sh = logging.StreamHandler()
logging_fh.setLevel(logging_level)
logging_sh.setLevel(logging_level)
root_logger = logging.getLogger()
root_logger.addHandler(logging_fh)
root_logger.addHandler(logging_sh)
logging.info('--test--')
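If some of the messages you want to capture still only reach the console, they may be coming from the server's own loggers rather than the root logger. Assuming the output in question is produced by uvicorn (the server fastAPI is typically run under, an assumption not stated in the original question), a hedged follow-up is to attach the same file handler to uvicorn's named loggers as well:
# assumption: the console-only messages are emitted by uvicorn's loggers
for name in ("uvicorn", "uvicorn.error", "uvicorn.access"):
    logging.getLogger(name).addHandler(logging_fh)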

Logging and Python bokeh compatibility

I am using the logging module to record what happens on my Bokeh server, and I want to save the output to a file with a .log extension. But when I run the Bokeh server, the file is not created and the operations are not saved to the .log file.
Part of the code I wrote is below.
Could it be that I am making a mistake in the code, or does the Bokeh server not work well with logging?
import logging
LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="test.log",
                    level=logging.DEBUG,
                    format=LOG_FORMAT,
                    filemode="w")
logger = logging.getLogger()
When you use bokeh serve %some_python_file%, the Bokeh server is started right away, but your code is executed only when you actually open the URL that points to the Bokeh document that the code builds.
bokeh serve also configures logging by calling logging.basicConfig, and calling this function again does not override anything; that's just how logging.basicConfig works.
Instead of using logging directly, you should just create and configure your own logger:
LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
file_handler = logging.FileHandler(filename='test.log', mode='w')
file_handler.setFormatter(logging.Formatter(LOG_FORMAT))
logger = logging.getLogger(__name__)
logger.addHandler(file_handler)
logger.setLevel(logging.DEBUG)
logger.info('Hello there')
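Assuming the script is saved as main.py (an illustrative name), you would then start it with bokeh serve main.py and open the served document in a browser; only once the document code actually runs will test.log appear in the working directory with the 'Hello there' entry.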
Eugene's answer is correct: calling logging.basicConfig() a second time does not have any effect. Nevertheless, if you are using Python >= 3.8 you can pass force=True, which removes any existing handlers attached to the root logger before setting up the new one. This practically means that your own logging.basicConfig() will just work:
logging.basicConfig(..., force=True)
(See the force parameter in the Python logging docs.)
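For completeness, a minimal sketch of what that looks like with the configuration from the question (Python 3.8+ only; force=True is simply added to the existing call):
import logging

LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="test.log",
                    level=logging.DEBUG,
                    format=LOG_FORMAT,
                    filemode="w",
                    force=True)  # drop the handlers bokeh serve already attached
logger = logging.getLogger()
logger.debug("Hello there")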

How can I log outside of main Flask module?

I have a Python Flask application whose entry file configures a logger on the app like so:
app = Flask(__name__)
handler = logging.StreamHandler(sys.stdout)
app.logger.addHandler(handler)
app.logger.setLevel(logging.DEBUG)
I then do a bunch of logging using
app.logger.debug("Log Message")
which works fine. However, I have a few API functions like:
@app.route('/api/my-stuff', methods=['GET'])
def get_my_stuff():
    db_manager = get_manager()
    query = create_query(request.args)
    service = Service(db_manager, query)
    app.logger.debug("Req: {}".format(request.url))
What I would like to know is how can I do logging within that Service module/python class. Do I have to pass the app to it? That seems like a bad practice, but I don't know how to get a handle to the app.logger from outside of the main Flask file...
Even though this is a possible duplicate, I want to write out a tiny bit of Python logging knowledge.
DON'T pass loggers around. You can always access any given logger with logging.getLogger(<log name as string>). By default it looks like* Flask uses the name you provide to the Flask class.
So if your main module is called 'my_tool', you would want to do logger = logging.getLogger('my_tool') in the Service module.
To add onto that, I like to be explicit about naming my loggers and packages, so I would do Flask('my_tool')** and, in other modules, have sub-level loggers like logger = logging.getLogger('my_tool.services') that all use the same root logger (and handlers).
* No Flask experience; based off the other answer.
** Again, I don't use Flask, so I don't know if that is good practice.
Edit: Super simple stupid example
Main Flask app
import sys
import logging
import flask
from module2 import hi

app = flask.Flask('tester')

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
app.logger.addHandler(handler)
app.logger.setLevel(logging.DEBUG)

@app.route("/index")
def index():
    app.logger.debug("TESTING!")
    hi()
    return "hi"

if __name__ == '__main__':
    app.run()
module2
import logging

log = logging.getLogger('tester.sub')

def hi():
    log.warning('warning test')
Outputs
127.0.0.1 - - [04/Oct/2016 20:08:29] "GET /index HTTP/1.1" 200 -
2016-10-04 20:08:29,098 - tester - DEBUG - TESTING!
2016-10-04 20:08:29,098 - tester.sub - WARNING - warning test
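To reproduce this output, save the main app as e.g. main.py (any name works) and the second file as module2.py (so that from module2 import hi resolves), run python main.py, and open http://127.0.0.1:5000/index (the Flask development server's default address) in a browser.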
Edit 2: Messing with subloggers
Totally unneeded, just for general knowledge.
Defining a child logger, done by adding a .something after the root logger name as in logging.getLogger('root.something'), basically gives you a different namespace to work with.
I personally like using it to group functionality in logging, e.g. a .tool or .db suffix to show what type of code is logging. But it also means that those child loggers can have their own handlers. So if you only want some of your code to print to stderr, or to a log file, you can do so. Here is an example with a modified module2.
module2
import logging
import sys
log = logging.getLogger('tester.sub')
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter('%(name)s - %(levelname)s - %(message)s'))
log.addHandler(handler)
log.setLevel(logging.INFO)
def hi():
    log.warning("test")
Output
127.0.0.1 - - [04/Oct/2016 20:23:18] "GET /index HTTP/1.1" 200 -
2016-10-04 20:23:18,354 - tester - DEBUG - TESTING!
tester.sub - WARNING - test
2016-10-04 20:23:18,354 - tester.sub - WARNING - test
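The warning shows up twice because, besides being handled by the stderr handler on tester.sub, the record still propagates up to the stdout handler attached to the tester logger. If you don't want that duplicate line, a one-line addition (standard logging API) turns propagation off on the child logger:
log.propagate = False  # stop records from also reaching the parent logger's handlers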

Redirect console logging output to flask-socketio

I log events to the console via the Python logging module.
I also want to send those log messages via socket.io (Flask) to a client.
The following approach was only partly successful.
import logging
from flask.ext.socketio import send

fmt_str = '%(asctime)s - %(message)s'
formatter = logging.Formatter(fmt_str)
logging.basicConfig(level=logging.INFO, format=fmt_str)
logger = logging.getLogger("")

class SocketIOHandler(logging.Handler):
    def emit(self, record):
        send(record.getMessage())

sio = SocketIOHandler()
logger.addHandler(sio)
I get the result in the browser, but I still get
RuntimeError: working outside of request context
for each send call on the console. I think the context for the send call is not available... What is a useful way to deal with that problem? Thanks.
The send and emit functions are context-aware functions that only work from inside an event handler. The equivalent context-free functions are available from the socketio instance. Example (making some assumptions about your app that may or may not be true):
import logging
from app import socketio  # flask.ext.socketio.SocketIO instance

fmt_str = '%(asctime)s - %(message)s'
formatter = logging.Formatter(fmt_str)
logging.basicConfig(level=logging.INFO, format=fmt_str)
logger = logging.getLogger("")

class SocketIOHandler(logging.Handler):
    def emit(self, record):
        socketio.send(record.getMessage())

sio = SocketIOHandler()
logger.addHandler(sio)
The from app import socketio line may need to be adapted to the structure of your application, but with this your application will broadcast logs to all connected clients.
Hope this helps!

automatically logging flask's messages to a logger

I was following Flask's documentation on how to configure logging. However, it seems that it does not write to the logger unless I explicitly tell it to (and the documentation seems to agree).
Here is how I configured my logger within create_app():
def create_app(environment):
    """ creates the flask application. Uses a parameter to choose which config to use """
    app = Flask(__name__)
    # ...
    error_handler = RotatingFileHandler(
        os.path.join(app.config['LOG_FOLDER'], 'flask.log'),
        maxBytes=100000,
        backupCount=1
    )
    error_handler.setLevel(logging.NOTSET)
    error_handler.setFormatter(Formatter(
        '%(asctime)s %(levelname)s: %(message)s '
        '[in %(pathname)s:%(lineno)d]'
    ))
    app.logger.addHandler(error_handler)
Now I want it so that, whenever an error occurs (like the ones you would see in the debugger), the traceback is put in the log. Is this possible in production?
The easiest way to do this is to register an error handler with teardown_request:
@app.teardown_request
def log_errors(error):
    if error is None:
        return
    app.logger.error("An error occurred while handling the request", exc_info=error)
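As a quick check (the route name here is purely illustrative, not part of the original app), a view that raises should then produce an ERROR entry with the full traceback in flask.log, assuming debug mode is off:
@app.route('/boom')
def boom():
    # hypothetical route used only to verify that tracebacks reach flask.log
    raise ValueError("something went wrong")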
