Redirect console logging output to flask-socketio - python

I log events to console via the Python logging module.
I also want to send those log messages via socket-io (flask) to a client.
The following approach was only partly successful.
import logging

from flask_socketio import send  # modern import; the old flask.ext.socketio namespace is gone

fmt_str = '%(asctime)s - %(message)s'
formatter = logging.Formatter(fmt_str)
logging.basicConfig(level=logging.INFO, format=fmt_str)
logger = logging.getLogger("")

class SocketIOHandler(logging.Handler):
    def emit(self, record):
        send(record.getMessage())

sio = SocketIOHandler()
logger.addHandler(sio)
I get the result in the browser, but still get
RuntimeError: working outside of request context
for each send call on the console. I think the context for the send call is not available... What is a useful way to deal with that problem? Thanks.

The send and emit functions are context-aware functions that only work from inside an event handler. The equivalent context-free functions are available from the socketio instance. Example (making some assumptions about your app that may or may not be true):
import logging

from app import socketio  # flask_socketio.SocketIO instance

fmt_str = '%(asctime)s - %(message)s'
formatter = logging.Formatter(fmt_str)
logging.basicConfig(level=logging.INFO, format=fmt_str)
logger = logging.getLogger("")

class SocketIOHandler(logging.Handler):
    def emit(self, record):
        socketio.send(record.getMessage())

sio = SocketIOHandler()
logger.addHandler(sio)
The first line may need to be adapted to the structure of your application, but with this your application will broadcast logs to all the clients.
Hope this helps!
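The handler pattern above is transport-agnostic, so the emit logic can be exercised without a running server. Here is a minimal stdlib-only sketch where a plain list's `append` stands in for `socketio.send` (the `CallbackHandler` name and the `demo.socket` logger name are illustrative, not part of any library):

```python
import logging

class CallbackHandler(logging.Handler):
    """Forward each formatted log record to a callback (e.g. socketio.send)."""
    def __init__(self, callback, level=logging.NOTSET):
        super().__init__(level)
        self.callback = callback

    def emit(self, record):
        try:
            self.callback(self.format(record))
        except Exception:
            self.handleError(record)  # never let logging crash the app

messages = []  # stand-in for a socket connection
logger = logging.getLogger("demo.socket")
logger.setLevel(logging.INFO)
logger.addHandler(CallbackHandler(messages.append))

logger.info("client connected")
print(messages)  # ['client connected']
```

Swapping `messages.append` for the context-free `socketio.send` gives the behavior described in the answer; the `try`/`handleError` guard is standard practice for custom handlers so that a broken transport never takes down logging.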

Related

How to send logs to GCP using StructuredLogHandler with jsonPayload?

I am trying to send logs (with jsonPayload) to GCP using StructuredLogHandler with below python code.
import logging

import google.cloud.logging
from google.cloud.logging.handlers import StructuredLogHandler

rootlogger = logging.getLogger()
client = google.cloud.logging.Client(credentials=xxx, project=xxx)
h = StructuredLogHandler()
rootlogger.addHandler(h)
logger = logging.getLogger('test')
logger.warning('warning')
I see that logs are being printed on console (in json format) but the logs are not sent to GCP Log Explorer. Can someone help?
Since v3 of the Python Cloud Logging library, it's easier than ever, as it integrates with the Python standard logging library via client.setup_logging:
import logging
import google.cloud.logging
# Instantiate a client
client = google.cloud.logging.Client()
# Retrieves a Cloud Logging handler based on the environment
# you're running in and integrates the handler with the
# Python logging module. By default this captures all logs
# at INFO level and higher
client.setup_logging()
Name your logger as usual. E.g.
logger = logging.getLogger('test')
Then if you want to send structured log messages to Cloud Logging then you can use one of two methods:
Use the json_fields extra argument:
data_dict = {"hello": "world"}
logging.info("message field", extra={"json_fields": data_dict})
Use a JSON-parseable string (requires importing the json module):
import json
data_dict = {"hello": "world"}
logging.info(json.dumps(data_dict))
Your log messages will then be sent to Google Cloud, with the JSON payload available under the jsonPayload field of the expanded log entry:
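The two methods differ only in how the payload reaches the handler. Purely as an illustration (this is not Google's implementation), a stdlib formatter can mimic how a structured handler might pick up either a `json_fields` extra or a JSON-parseable message:

```python
import json
import logging

class JsonPayloadFormatter(logging.Formatter):
    """Rough stdlib imitation of how a structured handler might build jsonPayload."""
    def format(self, record):
        # extra={"json_fields": {...}} attaches the dict to the record
        fields = getattr(record, "json_fields", None)
        if fields is None:
            try:
                # fall back to treating the message itself as JSON
                fields = json.loads(record.getMessage())
            except ValueError:
                fields = {"message": record.getMessage()}
        return json.dumps(fields)

handler = logging.StreamHandler()
handler.setFormatter(JsonPayloadFormatter())
logger = logging.getLogger("gcp.demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("message field", extra={"json_fields": {"hello": "world"}})
```

The real StructuredLogHandler does considerably more (severity, trace, resource labels), but this shows why both calling conventions end up as the same structured payload.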

How to redirect another library's console logging messages to a file, in Python

The fastAPI library that I import for an API I have written writes many logging.INFO level messages to the console, which I would like either to redirect to a file-based log or, ideally, to send to both console and file. Here is an example of the fastAPI module's logging events in my console:
So I've tried to implement this Stack Overflow answer ("Easy-peasy with Python 3.3 and above"), but the log file it creates ("api_screen.log") is always empty...
# -------------------------- logging ----------------------------
logging_file = "api_screen.log"
logging_level = logging.INFO
logging_format = ' %(message)s'
logging_handlers = [logging.FileHandler(logging_file), logging.StreamHandler()]
logging.basicConfig(level = logging_level, format = logging_format, handlers = logging_handlers)
logging.info("------logging test------")
Even though my own "------logging test------" message does appear on the console within the other fastAPI logs:
As you can see, it's created the file, but it has size zero.
So what else do I need to do to get file logging working?
There are multiple issues here. First and most importantly: basicConfig does nothing if the root logger is already configured, which fastAPI has already done. So the handlers you are creating are never used. When you call logging.info() you send a record to the root logger, which is printed because fastAPI has added a handler to it. You are also not setting the level on your handlers. Try this code instead of what you currently have:
import logging

logging_file = "api_screen.log"
logging_level = logging.INFO

logging_fh = logging.FileHandler(logging_file)
logging_sh = logging.StreamHandler()
logging_fh.setLevel(logging_level)
logging_sh.setLevel(logging_level)

root_logger = logging.getLogger()
root_logger.addHandler(logging_fh)
root_logger.addHandler(logging_sh)
root_logger.setLevel(logging_level)  # ensure INFO records reach the handlers

logging.info('--test--')

Python Flask Logging Problems

I am about to write a flask app for something very trivial... and I hit a roadblock in my logging practices.
This is my simple flask app; I wrote it to explain the problem I ran into and have been stuck on for some time, trying to figure out what is happening with python-logging & flask.
# standard
from flask import Flask
from flask_restful import Api
import logging
import json

# logging config
log_fmt = "%(asctime)s %(levelname)s %(process)d %(filename)s %(funcName)s %(message)s"
logging.basicConfig(
    filename="test.log",
    filemode="w",
    format=log_fmt,
    level=logging.DEBUG
)

# create an object of flask (flask is a web framework)
app = Flask(__name__)
api = Api(app)

# health check /
@app.route("/", methods=['GET'])
def default():
    logging.debug("/ request received")
    out_dict = {
        "hello": "world"
    }
    logging.debug("/ response " + str(out_dict))
    return json.dumps(out_dict)

# main function, entry point
if __name__ == "__main__":
    # invokes src and runs the application
    logging.debug("starting")
    # COMMENTING below - gets me the log file! What's happening with flask & logging?
    app.run(host="0.0.0.0", port=7001, debug=True)
    logging.debug("stopping")
Now this is the pattern I generally adopt when I need logging. But when I apply this pattern along with app.run(...), the log file never gets created. I am unable to figure out why this happens.
On the contrary, if I comment out the app.run(...), the log file gets created with the corresponding debug logs I have in place.
I have been struggling to understand this. I did land on Flask's inbuilt log handler, but looking at its implementation, it attaches to the logging module itself. So the whole thing is still not making sense. Any help or direction here will be appreciated.
Using logging.basicConfig makes a number of assumptions that calling app.run(...) may have undone, as Flask also makes use of the logging module to set up its logging output, as you noted. However, it works if you manually set up a file handler and attach it to the root logger like so (i.e. replace the # logging config section with):
# logging config
log_fmt = "%(asctime)s %(levelname)s %(process)d %(filename)s %(funcName)s %(message)s"
handler = logging.FileHandler('test.log')
handler.setFormatter(logging.Formatter(log_fmt))
root_logger = logging.getLogger()
root_logger.addHandler(handler)
root_logger.setLevel(logging.DEBUG)
This sets up the logging handler with the formatter set to the log_fmt you specified, then attaches that handler to the root logger returned by logging.getLogger(). If you run the application, send it some requests, and quit, you should see the corresponding entries inside test.log in the current working directory, while some of the typical logging output produced by Flask still shows up on the console.
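The manual-handler approach can be verified in isolation, without Flask in the picture. A small self-contained check (the tempfile path is illustrative; in the app it would be test.log in the working directory):

```python
import logging
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "test.log")
log_fmt = "%(asctime)s %(levelname)s %(process)d %(filename)s %(funcName)s %(message)s"

# Manually build the handler and attach it to the root logger.
handler = logging.FileHandler(log_path)
handler.setFormatter(logging.Formatter(log_fmt))

root_logger = logging.getLogger()
root_logger.addHandler(handler)
root_logger.setLevel(logging.DEBUG)

logging.debug("starting")
handler.flush()  # make sure the record hits disk before we read it back

with open(log_path) as f:
    contents = f.read()
print("starting" in contents)  # True
```

Unlike basicConfig, addHandler always takes effect, no matter what handlers Flask (or anything else) has already attached, which is the whole point of the answer above.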

Logging and Python bokeh compatibility

I am using the logging module to record what happens on my Bokeh server, and I want to save it to a file with a .log extension. But when I run the Bokeh server, the file is not created and no operations are saved to the .log file.
Part of the code I wrote is below.
Could it be that I am making a mistake in the code, or does the Bokeh server not work in accordance with logging?
import logging
LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="test.log",
                    level=logging.DEBUG,
                    format=LOG_FORMAT,
                    filemode="w")
logger = logging.getLogger()
When you use bokeh serve %some_python_file%, the Bokeh server starts right away, but your code is executed only when you actually open the URL that points to the Bokeh document that your code fills in.
bokeh serve configures logging by using logging.basicConfig as well, and calling this function again does not override anything - that's just how logging.basicConfig works.
Instead of using logging directly, you should just create and configure your own logger:
import logging

LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
file_handler = logging.FileHandler(filename='test.log', mode='w')
file_handler.setFormatter(logging.Formatter(LOG_FORMAT))

logger = logging.getLogger(__name__)
logger.addHandler(file_handler)
logger.setLevel(logging.DEBUG)

logger.info('Hello there')
Eugene's answer is correct: calling logging.basicConfig() a second time does not have any effect. Nevertheless, if you are using Python >= 3.8, you can pass force=True, which removes all existing handlers on the root logger before setting up the new ones. This practically means that your own logging.basicConfig() will just work:
logging.basicConfig(..., force=True)
(see the logging.basicConfig docs)
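A quick way to see what force=True changes (requires Python 3.8+; StringIO buffers stand in for log files here):

```python
import io
import logging

# First configuration: log into an in-memory buffer.
first = io.StringIO()
logging.basicConfig(stream=first, level=logging.INFO, format="%(message)s", force=True)
logging.info("goes to first")

# Without force=True this second call would be silently ignored, because the
# root logger already has a handler. With force=True the old handler is
# removed and replaced, so subsequent records go to the new destination.
second = io.StringIO()
logging.basicConfig(stream=second, level=logging.INFO, format="%(message)s", force=True)
logging.info("goes to second")

print(repr(first.getvalue()))   # 'goes to first\n'
print(repr(second.getvalue()))  # 'goes to second\n'
```

In the Bokeh case, force=True is what lets your basicConfig win over the one bokeh serve already ran.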

automatically logging flask's messages to a logger

I was following Flask's documentation on how to configure logging. However, it seems that it does not write to the logger unless I explicitly tell it to (and the documentation seems to agree).
Here is how I configured my logger, within create_app()
def create_app(environment):
    """ creates the flask application. Uses a parameter to choose which config to use """
    app = Flask(__name__)
    # ...
    error_handler = RotatingFileHandler(
        os.path.join(app.config['LOG_FOLDER'], 'flask.log'),
        maxBytes=100000,
        backupCount=1
    )
    error_handler.setLevel(logging.NOTSET)
    error_handler.setFormatter(Formatter(
        '%(asctime)s %(levelname)s: %(message)s '
        '[in %(pathname)s:%(lineno)d]'
    ))
    app.logger.addHandler(error_handler)
Now I want it so that whenever an error occurs, the traceback is put in the log, as it would appear in the debugger. Is this possible in production?
The easiest way to do this is to register an error handler with teardown_request:
@app.teardown_request
def log_errors(error):
    if error is None:
        return
    # exc_info=error makes logging append the exception's traceback
    app.logger.error("An error occurred while handling the request", exc_info=error)
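Flask aside, getting the traceback into the log only requires passing the exception via exc_info (or calling logger.exception inside an except block). A stdlib sketch of the same handler logic, with a StringIO buffer standing in for your RotatingFileHandler:

```python
import io
import logging

# Capture log output in a buffer so the traceback can be inspected.
buf = io.StringIO()
logger = logging.getLogger("teardown.demo")
logger.addHandler(logging.StreamHandler(buf))
logger.setLevel(logging.ERROR)

def log_errors(error):
    """Mimics the teardown handler: log the exception with its traceback."""
    if error is None:
        return
    # exc_info accepts an exception instance; logging appends its traceback
    logger.error("An error occurred while handling the request", exc_info=error)

try:
    raise ValueError("boom")
except ValueError as exc:
    log_errors(exc)

print("Traceback" in buf.getvalue())  # True
```

The logged entry contains both the message and the full "Traceback (most recent call last): ... ValueError: boom" block, which is exactly what the questioner wants in the file.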
