Async logging in Django - python

I have created a simple weather web app in Django using an API. Logging is enabled, and logs are written to files on Windows. I want logging to be asynchronous, i.e. to happen at the end of execution. How can we do async logging in Django?

We can only create async views in Django.
There is the python-logstash package, which has an async way of logging, but it stores logs in a database on a remote instance
(the alternative being to store logs in an SQLite3 database). A file logging option is not present in it.
Moreover, async support is new in Django and still has many unresolved complexities. It might cause memory overhead, which
can degrade performance. Please find some links below for reference.
https://pypi.org/project/python-logstash/
https://docs.djangoproject.com/en/3.1/topics/async/
https://deepsource.io/blog/django-async-support/
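That said, the standard library itself offers a way to make file logging asynchronous without any Django-specific support: a QueueHandler hands records to an in-memory queue, and a QueueListener drains the queue on a background thread and does the actual file I/O. This is a minimal sketch under assumed names (the file name "app.log" is just an example):

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

# The queue decouples the calling code from the (slow) file I/O
log_queue = queue.Queue(-1)

file_handler = logging.FileHandler("app.log")  # example file name
file_handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

# The listener drains the queue on a background thread and writes to the file
listener = QueueListener(log_queue, file_handler)
listener.start()

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.addHandler(QueueHandler(log_queue))

logger.info("logged without blocking on disk I/O")
listener.stop()  # flushes remaining records at shutdown
```

Calling listener.stop() at shutdown (for example from an atexit hook) makes sure any queued records are flushed to disk.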

You can use the logging module from the Python standard library:
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
# Set a file as the output
handler = logging.FileHandler("app.log")
# Formatter template
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
# Add the formatter to the handler
handler.setFormatter(formatter)
# Add the handler to the logger
logger.addHandler(handler)
# Your messages will be written to the file
logger.error("it's an error message")
logger.info("it's an info message")
logger.warning("it's a warning message")
Official documentation: https://docs.python.org/3/library/logging.html
I hope this helps!

I can advise you to start the Django project like this. Cons: nothing will be output to the console, but it will work faster than logging in middleware:
nohup python manage.py runserver > file.log 2>&1 &

Related

GCP Cloud Functions printing extra blank line after every print statement

I have a Cloud Function running Python 3.7 runtime triggered from a Pub/Sub Topic.
In the code, I have places where I use print() to write logs. However, when I go to the logs tab of my function, I see that an extra blank line is added after each log. I would like to remove these, since this is basically doubling my usage of the Logging API.
I have tried using print(message, end="") but this did not remove the blank lines.
Thanks in advance.
Although I have not found the root cause of the blank lines, I was able to resolve this by using the google-cloud-logging library, as suggested by John in the comments on my question.
Resulting code is as below:
import os
import logging
import google.cloud.logging

# set up the logging client when running on GCP
if not os.environ.get("DEVELOPMENT"):  # custom environment variable
    # only on GCP
    logging_client = google.cloud.logging.Client()
    logging_client.setup_logging()

# define the logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)  # min logging level for the logger

# define a handler only locally, since in Cloud Functions there is already
# a handler attached to the logger; adding another handler there would
# result in duplicate logging with severity = ERROR
if os.environ.get("DEVELOPMENT"):
    console_handler = logging.StreamHandler()  # handler to write to the stream
    console_handler.setLevel(logging.DEBUG)  # min logging level for the handler
    # add the handler to the logger
    logger.addHandler(console_handler)

def my_function():
    logger.info('info')
This code will:
not send logs to GCP when the function is executed locally
print INFO and DEBUG logs both locally and on GCP
Thank you both for your suggestions.
Instead of using print, use a logger:
import logging

logger = logging.getLogger("YourCloudFunctionLoggerName")
logger.setLevel(logging.DEBUG)
logger.info("this replaces the print() call")

Why python logging is treated as errors in azure dev ops pipeline

I have a problem. In my project I'm using Python logging to describe every single step of my program. The code is simple:
import logging
from datetime import datetime

log = logging.getLogger()
logging.basicConfig(
    handlers=[
        logging.FileHandler(
            "./logs/{:%d-%m-%Y}/".format(datetime.now())
            + "{:%Y-%m-%d-%H-%M-%S}.log".format(datetime.now()),
            'w', 'utf-8'),
        logging.StreamHandler()
    ],
    level=logging.INFO,
    format='[%(asctime)s] %(levelname)s - %(message)s',
    datefmt='%H:%M:%S'
)
I don't know why, but the pipeline in Azure DevOps automatically treats every log message as an error, regardless of whether the log is of type INFO or ERROR.
The same thing happens in the console output: everything is in red.
How can I fix it?
Azure DevOps does, by default, treat Python logging output as warnings or errors. To make logging work properly in Azure DevOps, follow the steps below.
Set Logging Level:
import logging
logger = logging.getLogger('azure.mgmt.resource')
# Set the desired logging level
logger.setLevel(logging.DEBUG)
Use General Namespaces:
import logging
logger = logging.getLogger('azure.storage')
logger.setLevel(logging.INFO)
logger = logging.getLogger('azure')
logger.setLevel(logging.ERROR)
print(f"Logger enabled for ERROR={logger.isEnabledFor(logging.ERROR)}, "
      f"WARNING={logger.isEnabledFor(logging.WARNING)}, "
      f"INFO={logger.isEnabledFor(logging.INFO)}, "
      f"DEBUG={logger.isEnabledFor(logging.DEBUG)}")
See the standard library logging documentation (https://docs.python.org/3/library/logging.html) for the official procedure for handling logging.

How to redirect another library's console logging messages to a file, in Python

The FastAPI library that I import for an API I have written writes many logging.INFO-level messages to the console, which I would like either to redirect to a file-based log or, ideally, to send to both console and file. Here is an example of FastAPI module logging events in my console:
So I've tried to implement this Stack Overflow answer ("Easy-peasy with Python 3.3 and above"), but the log file it creates ("api_screen.log") is always empty.
# -------------------------- logging ----------------------------
logging_file = "api_screen.log"
logging_level = logging.INFO
logging_format = ' %(message)s'
logging_handlers = [logging.FileHandler(logging_file), logging.StreamHandler()]
logging.basicConfig(level = logging_level, format = logging_format, handlers = logging_handlers)
logging.info("------logging test------")
Even though my own "------logging test------" message does appear on console within the other fastAPI logs:
As you can see here it's created the file, but it has size zero.
So what do I need to do also to get the file logging working?
There are multiple issues here. First and most importantly: basicConfig does nothing if the root logger is already configured, which FastAPI has already done. So the handlers you are creating are never used. When you call logging.info() you are sending a record to the root logger, which is printed because FastAPI has added a handler to it. You are also not setting the level on your handlers or on the root logger. Try this code instead of what you currently have:
import logging

logging_file = "api_screen.log"
logging_level = logging.INFO
logging_fh = logging.FileHandler(logging_file)
logging_sh = logging.StreamHandler()
logging_fh.setLevel(logging_level)
logging_sh.setLevel(logging_level)
root_logger = logging.getLogger()
root_logger.setLevel(logging_level)
root_logger.addHandler(logging_fh)
root_logger.addHandler(logging_sh)
logging.info('--test--')
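On Python 3.8 and later there is also a shortcut for this situation: basicConfig accepts a force=True argument that removes any handlers already attached to the root logger before applying the new configuration. A minimal sketch, reusing the file name from the question:

```python
import logging

# force=True (Python 3.8+) discards the handlers FastAPI already attached
# to the root logger, so this configuration actually takes effect.
logging.basicConfig(
    level=logging.INFO,
    format=' %(message)s',
    handlers=[logging.FileHandler("api_screen.log"), logging.StreamHandler()],
    force=True,
)
logging.info("------logging test------")
```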

Logging to separate files in Python

I'm using python's logging module. I've initialized it as:
import logging
logger = logging.getLogger(__name__)
in every one of my modules. Then, in the main file:
logging.basicConfig(level=logging.INFO,filename="log.txt")
Now, in the app I'm also using WSGIServer from gevent. The initializer takes a log argument where I can add a logger instance. Since this is an HTTP Server it's very verbose.
I would like to log all of my app's regular logs to "log.txt" and WSGIServer's logs to "http-log.txt".
I tried this:
logging.basicConfig(level=logging.INFO,filename="log.txt")
logger = logging.getLogger(__name__)
httpLogger = logging.getLogger("HTTP")
httpLogger.addHandler(logging.FileHandler("http-log.txt"))
httpLogger.addFilter(logging.Filter("HTTP"))
http_server = WSGIServer(('0.0.0.0', int(config['ApiPort'])), app, log=httpLogger)
This logs all HTTP messages into http-log.txt, but also to the main logger.
How can I send all but HTTP messages to the default logger (log.txt), and HTTP messages only to http-log.txt?
EDIT: Since people are quickly jumping to point out that Logging to two files with different settings already has an answer, please read the linked answer and you'll see they don't use basicConfig but rather initialize each logger separately. That is not how I'm using the logging module.
Add the following line to disable propagation:
httpLogger.propagate = False
Then, it will no longer propagate messages to its ancestors' handlers which includes the root logger for which you have set up the general log file.

Django: How do I get logging working?

I've added the following to my settings.py file:
import logging
...
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s %(levelname)s %(message)s',
filename=os.path.join(rootdir, 'django.log'),
filemode='a+')
And in views.py, I've added:
import logging
log = logging.getLogger(__name__)
...
log.info("testing 123!")
Unfortunately, no log file is being created. Any ideas what I am doing wrong? Also, is there a better method I should be using for logging? I am doing this on WebFaction.
Python logging for Django is fine somewhere like WebFaction. If you were on a cloud-based provider (e.g. Amazon EC2) where you had a number of servers, it might be worth looking at either logging to a key-value DB or using Python logging over the network.
Your logging setup code in settings.py looks fine, but I'd check that you can write to rootdir: your syslog might show errors, but it's more likely that Django would throw a 500 if it couldn't log properly.
Which leads me to note that the only major difference in my logging setup (also on WebFaction) is that I do:
import logging
logging.info("Something here")
instead of log.info.
