I have a Python Flask application whose entry file configures a logger on the app, like so:
app = Flask(__name__)
handler = logging.StreamHandler(sys.stdout)
app.logger.addHandler(handler)
app.logger.setLevel(logging.DEBUG)
I then do a bunch of logging using
app.logger.debug("Log Message")
which works fine. However, I have a few API functions like:
@app.route('/api/my-stuff', methods=['GET'])
def get_my_stuff():
    db_manager = get_manager()
    query = create_query(request.args)
    service = Service(db_manager, query)
    app.logger.debug("Req: {}".format(request.url))
What I would like to know is how can I do logging within that Service module/python class. Do I have to pass the app to it? That seems like a bad practice, but I don't know how to get a handle to the app.logger from outside of the main Flask file...
Even though this is a possible duplicate, I want to write out a tiny bit of Python logging knowledge.
DON'T pass loggers around. You can always access any given logger with logging.getLogger(<log name as string>). It looks like* Flask uses the name you provide to the Flask class by default.
So if your main module is called 'my_tool', you would want to do logger = logging.getLogger('my_tool') in the Service module.
To add onto that, I like to be explicit about naming my loggers and packages, so I would do Flask('my_tool')** and, in other modules, have sub-level loggers like logger = logging.getLogger('my_tool.services') that all use the same root logger (and handlers).
* No experience, based off other answer.
** Again, I don't use Flask, so I don't know if that is good practice.
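For the question's Service module, that could look like the following minimal sketch. The module layout and the logger names are assumptions based on the naming scheme above; no Flask import and no app object is needed, because records propagate up to the handlers attached to the 'my_tool' logger.
import logging

# child of the 'my_tool' logger; inherits its handlers through propagation
logger = logging.getLogger('my_tool.services')

class Service:
    def __init__(self, db_manager, query):
        self.db_manager = db_manager
        self.query = query
        logger.debug("Service created for query: %s", query)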
Edit: Super simple stupid example
Main Flask app
import sys
import logging
import flask

from module2 import hi

app = flask.Flask('tester')
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
app.logger.addHandler(handler)
app.logger.setLevel(logging.DEBUG)

@app.route("/index")
def index():
    app.logger.debug("TESTING!")
    hi()
    return "hi"

if __name__ == '__main__':
    app.run()
module2
import logging

log = logging.getLogger('tester.sub')

def hi():
    log.warning('warning test')
Outputs
127.0.0.1 - - [04/Oct/2016 20:08:29] "GET /index HTTP/1.1" 200 -
2016-10-04 20:08:29,098 - tester - DEBUG - TESTING!
2016-10-04 20:08:29,098 - tester.sub - WARNING - warning test
Edit 2: Messing with subloggers
Totally unneeded, just for general knowledge.
Defining a child logger, done by adding a .something after the root logger name in logging.getLogger('root.something'), basically gives you a different namespace to work with.
I personally like using it to group functionality in logging, so you have some .tool or .db to know what type of code is logging. But it also means that those child loggers can have their own handlers. So if you only want some of your code to print to stderr, or to a log file, you can do so. Here is an example with a modified module2.
module2
import logging
import sys

log = logging.getLogger('tester.sub')
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter('%(name)s - %(levelname)s - %(message)s'))
log.addHandler(handler)
log.setLevel(logging.INFO)

def hi():
    log.warning("test")
Output
127.0.0.1 - - [04/Oct/2016 20:23:18] "GET /index HTTP/1.1" 200 -
2016-10-04 20:23:18,354 - tester - DEBUG - TESTING!
tester.sub - WARNING - test
2016-10-04 20:23:18,354 - tester.sub - WARNING - test
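Note that the warning appears twice in that output: once through the handler attached to tester.sub (the line without a timestamp) and once through the root tester handler, because records propagate up the logger tree by default. If you want a child's records handled only by its own handlers, disable propagation:
log = logging.getLogger('tester.sub')
log.propagate = False  # stop records from bubbling up to the 'tester' handlers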
Related
I have created a simple weather webapp in Django using an API. Logging is enabled, and logs are written to files on Windows. I want logging to be asynchronous, that is, done at the end of execution. How can we do async logging in Django?
We can only create async views in Django.
There is the python-logstash package, which has an async way of logging, but it stores logs in a database on a remote instance (an alternative is to store logs in an SQLite3 db); a file logging option is not present in it.
Moreover, async support is new in Django, and many complexities in it remain unresolved. It might cause memory overhead, which can degrade performance. Please find some links below for reference.
https://pypi.org/project/python-logstash/
https://docs.djangoproject.com/en/3.1/topics/async/#:~:text=New%20in%20Django%203.0.,have%20efficient%20long%2Drunning%20requests.
https://deepsource.io/blog/django-async-support/
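For reference, the standard library itself can offload the actual writing to a background thread with logging.handlers.QueueHandler and QueueListener (available since Python 3.2). A minimal sketch, with the filename and format chosen purely for illustration:
import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)  # unbounded queue between the app and the writer thread

# the handler attached to your loggers only enqueues records, so it is fast
queue_handler = logging.handlers.QueueHandler(log_queue)

# the listener drains the queue in a background thread and does the slow file I/O
file_handler = logging.FileHandler("weather.log")  # illustrative filename
file_handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(queue_handler)

logging.info("this record is written to weather.log by the listener thread")
# call listener.stop() at shutdown to flush any remaining records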
You can use the logging module from the Python standard library:
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)  # let INFO and above through (the default threshold is WARNING)

# set a file as the output (the filename here is illustrative)
handler = logging.FileHandler("app.log")

# formatter template
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')

# add the formatter to the handler
handler.setFormatter(formatter)

# add the handler to the logger
logger.addHandler(handler)

# your messages will be added to the file
logger.error("it's an error message")
logger.info("it's an info message")
logger.warning("it's a warning message")
Official documentation: https://docs.python.org/3/library/logging.html
I hope this helps!
I can advise you to start the Django project like this. The con: nothing will be output to the console, but it will work faster than logging in middleware:
nohup python manage.py runserver > file.log
The FastAPI library that I import for an API I have written writes many logging.INFO-level messages to the console, which I would like either to redirect to a file-based log or, ideally, to send to both console and file. (A screenshot of the FastAPI module logging events in my console originally appeared here.)
So I've tried to implement this Stack Overflow answer ("Easy-peasy with Python 3.3 and above"), but the log file it creates ("api_screen.log") is always empty...
# -------------------------- logging ----------------------------
logging_file = "api_screen.log"
logging_level = logging.INFO
logging_format = ' %(message)s'
logging_handlers = [logging.FileHandler(logging_file), logging.StreamHandler()]
logging.basicConfig(level = logging_level, format = logging_format, handlers = logging_handlers)
logging.info("------logging test------")
Even though my own "------logging test------" message does appear on the console among the other FastAPI logs, the file it creates has size zero, as a directory listing shows.
So what else do I need to do to get the file logging working?
There are multiple issues here. First and most importantly: basicConfig does nothing if the root logger already has handlers configured, which FastAPI's setup has already done. So the handlers you are creating are never used. When you call logging.info() you are sending a record to the root logger, and it is printed because FastAPI has added a handler to it. You are also not setting the level on your handlers. Try this code instead of what you currently have:
logging_file = "api_screen.log"
logging_level = logging.INFO

logging_fh = logging.FileHandler(logging_file)
logging_sh = logging.StreamHandler()
logging_fh.setLevel(logging_level)
logging_sh.setLevel(logging_level)

root_logger = logging.getLogger()
root_logger.setLevel(logging_level)  # make sure INFO records reach the handlers at all
root_logger.addHandler(logging_fh)
root_logger.addHandler(logging_sh)

logging.info('--test--')
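Alternatively, on Python 3.8 and above you can keep the original basicConfig call and just add force=True, which removes any handlers already attached to the root logger before applying your configuration. A sketch reusing the question's own variables:
logging.basicConfig(level=logging_level,
                    format=logging_format,
                    handlers=logging_handlers,
                    force=True)  # Python 3.8+: replaces the existing root handlers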
I am about to write a Flask app for something very trivial... and I hit a roadblock in my logging practices.
This is a simple Flask app I wrote to explain the problem I ran into; I have been stuck for some time trying to figure out what is happening between Python logging and Flask.
# standard
from flask import Flask
from flask_restful import Api
import logging
import json

# logging config
log_fmt = "%(asctime)s %(levelname)s %(process)d %(filename)s %(funcName)s %(message)s"
logging.basicConfig(
    filename="test.log",
    filemode="w",
    format=log_fmt,
    level=logging.DEBUG
)

# create an object of flask (flask is a web framework)
app = Flask(__name__)
api = Api(app)

# health check /
@app.route("/", methods=['GET'])
def default():
    logging.debug("/ request received")
    out_dict = {
        "hello": "world"
    }
    logging.debug("/ response " + str(out_dict))
    return json.dumps(out_dict)

# main function, entry point
if __name__ == "__main__":
    # invokes src and runs the application
    logging.debug("starting")
    # COMMENTING below - gets me the log file! What's happening with flask & logging?
    app.run(host="0.0.0.0", port=7001, debug=True)
    logging.debug("stopping")
Now, this is the pattern I generally adopt when I need logging. But when I apply this pattern alongside app.run(..), the log file never gets created. I am unable to figure out why this happens.
On the contrary, if I comment out app.run(..), the log file gets created with the corresponding debug logs I have in place.
I have been struggling to understand this. I did land on Flask's built-in log handler, but looking at its implementation, it attaches to the logging module itself, so the whole thing still does not make sense. Any help or direction here will be appreciated.
Using logging.basicConfig makes a number of assumptions that calling app.run(...) may have undone, since Flask also makes use of the logging module to set up logging output, as you noted. However, you can manually set up a file handler and attach it to the root logger like so (i.e. replace the # logging config section with):
# logging config
log_fmt = "%(asctime)s %(levelname)s %(process)d %(filename)s %(funcName)s %(message)s"
handler = logging.FileHandler('test.log')
handler.setFormatter(logging.Formatter(log_fmt))
root_logger = logging.getLogger()
root_logger.addHandler(handler)
root_logger.setLevel(logging.DEBUG)
This sets up a logging handler with the formatter set to the log_fmt you specified, then attaches that handler to the root logger returned by logging.getLogger(). After running the application, sending it some requests, and quitting, you should see the appropriate entries inside test.log in the current working directory, while some of the typical logging output produced by Flask will also be shown.
I am using the logging module to record what happens on my Bokeh server, and I want to save the records to a file with a .log extension. But when I run the Bokeh server, the file is not created and no operations are saved to the .log file.
Part of the code I wrote is below.
Could it be that I am making a mistake in the code, or does the Bokeh server not work in accordance with logging?
import logging
LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename = "test.log",
level = logging.DEBUG,
format = LOG_FORMAT,
filemode="w")
logger = logging.getLogger()
When you use bokeh serve %some_python_file%, the Bokeh server is started right away, but your code is executed only when you actually open the URL that points to the Bokeh document your code fills in.
bokeh serve configures logging by using logging.basicConfig as well, and calling this function again does not override anything - that's just how logging.basicConfig works.
Instead of using logging directly, you should just create and configure your own logger:
LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
file_handler = logging.FileHandler(filename='test.log', mode='w')
file_handler.setFormatter(logging.Formatter(LOG_FORMAT))
logger = logging.getLogger(__name__)
logger.addHandler(file_handler)
logger.setLevel(logging.DEBUG)
logger.info('Hello there')
Eugene's answer is correct. Calling logging.basicConfig() a second time does not have any effect. Nevertheless, if you are using Python >= 3.8, you can pass force=True, which removes all existing logging handlers and sets up a new one. This practically means that your own logging.basicConfig() will just work:
logging.basicConfig(..., force=True)
docs
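Applied to the question's configuration, that would be (a sketch; Python 3.8+ only):
logging.basicConfig(filename="test.log",
                    level=logging.DEBUG,
                    format=LOG_FORMAT,
                    filemode="w",
                    force=True)  # override the handlers bokeh serve installed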
I'm trying to replace an old method of logging information with standard Python logging to a file. The application currently has a main log file which captures INFO and DEBUG messages, so I would like the legacy messages at a lower level that isn't captured by the main log.
App structure:
- mysite
- legacy
-- items
--- item1.py
-- __init__.py
-- engine.py
Within item1.py and engine.py are calls to an old debug() function, which I'd like to have logged to legacy.log but not appear in the mysite.log file.
Ideally, the way this works is to create a wrapper with a debug function which does the logging at the new level, and I've read that this requires extending logging.Logger.
So in legacy/__init__.py I've written:
import logging

LEGACY_DEBUG_LVL = 5

class LegacyLogger(logging.Logger):
    """
    Extend the Logger to introduce the new legacy logging.
    """

    def legacydebug(self, msg, *args, **kwargs):
        """
        Log messages from legacy provided they are strings.

        @param msg: message to log
        @type msg: str
        """
        if isinstance(msg, str):
            self._log(LEGACY_DEBUG_LVL, msg, args)

logging.Logger.legacydebug = LegacyLogger.legacydebug

logger = logging.getLogger('legacy')
logger.setLevel(LEGACY_DEBUG_LVL)
logger.addHandler(logging.FileHandler('legacy.log'))
logging.addLevelName(LEGACY_DEBUG_LVL, "legacydebug")
And from engine.py and item1.py I can just do:
from . import logger
debug = logger.legacydebug
At the moment I'm seeing messages logged to both logs. Is this the correct approach for what I want to achieve? I've got a talent for over-complicating things and missing the simple stuff!
Edit
Logging in the main application settings is set up like this:
# settings.py
logging.captureWarnings(True)
logger = logging.getLogger()
logger.addHandler(logging.NullHandler())
logger.addHandler(logging.FileHandler('mysite.log'))  # FileHandler lives in logging, not logging.handlers
if DEBUG:
    # If we're running in debug mode, write logs to stdout as well:
    logger.addHandler(logging.StreamHandler())
    logger.setLevel(logging.DEBUG)
else:
    logger.setLevel(logging.INFO)
When you use multiple loggers, the logging module implicitly arranges them in a tree structure. The structure is defined by the logger names: a logger named 'animal' will be the parent of loggers called 'animal.cat' and 'animal.dog'.
In your case, the unnamed (root) logger defined in settings.py is the parent of the logger named 'legacy'. The root logger receives the messages sent through the 'legacy' logger and writes them into mysite.log.
Try giving the unnamed logger a name, such as 'mysite', to break the tree structure.
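A minimal sketch of that fix, using the names from the question (either change on its own is enough; both are shown for illustration):
# settings.py - configure a named application logger instead of the root logger
logger = logging.getLogger('mysite')
logger.addHandler(logging.FileHandler('mysite.log'))

# legacy/__init__.py - alternatively, keep the legacy records to themselves
logger = logging.getLogger('legacy')
logger.propagate = False  # records no longer bubble up to the root logger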