Display warning in console output in Django App - python

I want to add a warning to an old function-based view alerting users that we are moving to class-based views. I see these warnings all the time in Django, such as RemovedInDjango19Warning. It's really cool.
import warnings

def old_view(request, *args, **kwargs):
    print('I can see this message in my terminal output!')
    warnings.warn("But not this message.", DeprecationWarning, stacklevel=2)
    view = NewClassBasedView.as_view()
    return view(request, *args, **kwargs)
I see the print statement in my terminal output but not the warning itself. I get other warnings from Django in the terminal (like the aforementioned RemovedInDjango19Warning) but not this one.
Am I missing something? Do I have to turn warnings on somehow at an import-specific level (even though it is working broadly for Django)?

Assuming you are using Django 1.8 and have no logging configuration, you can read from the Django documentation:
Django uses Python’s builtin logging module to perform system logging. The usage of this module is discussed in detail in Python’s own documentation.
So you can do:
# import the logging library
import logging

# Get an instance of a logger
logger = logging.getLogger(__name__)

def my_view(request, arg1, arg2):
    ...
    if bad_mojo:
        # Log an error message
        logger.error('Something went wrong!')
The logging module is very useful and has many features; I suggest you take a look at the Python logging documentation. For example, you can do:
import logging

FORMAT = '%(asctime)-15s %(clientip)s %(user)-8s %(message)s'
logging.basicConfig(format=FORMAT)
d = {'clientip': '192.168.0.1', 'user': 'fbloggs'}
logger = logging.getLogger('tcpserver')
logger.warning('Protocol problem: %s', 'connection reset', extra=d)
That outputs:
2006-02-08 22:20:02,165 192.168.0.1 fbloggs Protocol problem: connection reset
This way you can format all your messages and present pretty awesome messages to your users.
If you want to use the warnings module, you have to know that by default Python does not display all warnings raised in the application. There are two ways to change this behaviour:
By parameters: you have to call Python with the -Wd parameter to load the default warning filter, so you can do python -Wd manage.py runserver to start the development server.
By program: you need to call warnings.simplefilter('default') just one time. You can call this function from anywhere, but you have to be sure that this line is executed before any call to warnings.warn. In my tests I placed it at the beginning of the settings.py file, but I am not sure that was the best place; the __init__.py file of the project package would be a nice place (see the sketch below).
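For illustration, a minimal sketch of the programmatic option; the myproject package name is hypothetical:
# myproject/__init__.py  (hypothetical project package)
import warnings

# Restore the 'default' filter so DeprecationWarning and friends are
# shown once per source location. This must run before any call to
# warnings.warn() that you want to see.
warnings.simplefilter('default')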

Related

Async logging in Django

I have created a simple weather web app in Django using an API. Logging is enabled and logs are written into files in Windows. I want logging to be asynchronous, that is, done at the end of execution. How can we do async logging in Django?
We can only create async views in Django.
There is the python-logstash package, which has an async way of logging, but it stores logs in a database on a remote instance (an alternative is to store logs in a SQLite3 database). A file logging option is not present in it.
Moreover, async support is new in Django and many complexities in it remain unresolved. It might cause memory overhead, which can degrade performance. Please find some links below for reference.
https://pypi.org/project/python-logstash/
https://docs.djangoproject.com/en/3.1/topics/async/#:~:text=New%20in%20Django%203.0.,have%20efficient%20long%2Drunning%20requests.
https://deepsource.io/blog/django-async-support/
You can use the logging module from the Python standard library:
import logging

logger = logging.getLogger(__name__)
# Send output to the console (use logging.FileHandler("app.log") to write to a file instead)
handler = logging.StreamHandler()
# Formatter template
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
# add the formatter to the handler
handler.setFormatter(formatter)
# add the handler to the logger
logger.addHandler(handler)
# your messages will be sent to the handler's output
logger.error("it's an error message")
logger.info("it's an info message")
logger.warning("it's a warning message")
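If the goal is to keep the actual I/O off the calling thread using only the standard library, a QueueHandler/QueueListener pair (available since Python 3.2) is one option. A minimal sketch, with the file name being illustrative:
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

log_queue = queue.Queue(-1)  # unbounded queue shared by loggers and the listener

# The real handler that writes to disk; it runs in the listener's background thread
file_handler = logging.FileHandler("app.log")
file_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))

# Loggers get a cheap QueueHandler; log calls only enqueue records
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.addHandler(QueueHandler(log_queue))

# The listener drains the queue and delegates to the file handler
listener = QueueListener(log_queue, file_handler)
listener.start()

logger.info("this call does not block on disk I/O")

listener.stop()  # on shutdown, flush any queued records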
Official documentation: https://docs.python.org/3/library/logging.html
I hope this helps!
I can advise you to start the Django project like this. The downside: nothing will be output to the console, but it will work faster than logging in middleware:
nohup python manage.py runserver > file.log

Doing the equivalent of log_struct in python logger

In the google example, it gives the following:
logger.log_struct({
    'message': 'My second entry',
    'weather': 'partly cloudy',
})
How would I do the equivalent in Python's logger? For example:
import logging

log.info(
    msg='My second entry',
    extra={'weather': "partly cloudy"}
)
When I view this in stackdriver, the extra fields aren't getting parsed properly:
2018-11-12 15:41:12.366 PST
My second entry
{
  insertId: "1de1tqqft3x3ri"
  jsonPayload: {
    message: "My second entry"
    python_logger: "Xhdoo8x"
  }
  logName: "projects/Xhdoo8x/logs/python"
  receiveTimestamp: "2018-11-12T23:41:12.366883466Z"
  resource: {…}
  severity: "INFO"
  timestamp: "2018-11-12T23:41:12.366883466Z"
}
How would I do that?
The closest I'm able to do now is:
log.handlers[-1].client.logger('').log_struct("...")
But this still requires a second call...
Current solution:
Update 1: user Seth Nickell improved my proposed solution, so I'm updating this answer as his method is superior. The following is based on his response on GitHub:
https://github.com/snickell/google_structlog
pip install google-structlog
Used like so:
import google_structlog
google_structlog.setup(log_name="here-is-mylilapp")
# Now you can use structlog to get searchable json details in stackdriver...
import structlog
logger = structlog.get_logger()
logger.error("Uhoh, something bad did", moreinfo="it was bad", years_back_luck=5)
# Of course, you can still use plain ol' logging stdlib to get { "message": ... } objects
import logging
logger = logging.getLogger("yoyo")
logger.error("Regular logging calls will work happily too")
# Now you can search stackdriver with the query:
# logName: 'here-is-mylilapp'
Original answer:
Based on an answer from this GitHub thread, I use the following bodge to log custom objects as the info payload. It derives from the original _Worker.enqueue and supports passing custom fields.
import datetime

from google.cloud.logging import _helpers
from google.cloud.logging.handlers.transports.background_thread import _Worker

def my_enqueue(self, record, message, resource=None, labels=None, trace=None, span_id=None):
    queue_entry = {
        "info": {"message": message, "python_logger": record.name},
        "severity": _helpers._normalize_severity(record.levelno),
        "resource": resource,
        "labels": labels,
        "trace": trace,
        "span_id": span_id,
        "timestamp": datetime.datetime.utcfromtimestamp(record.created),
    }
    if hasattr(record, 'custom_fields'):
        queue_entry['info']['custom_fields'] = record.custom_fields
    self._queue.put_nowait(queue_entry)

_Worker.enqueue = my_enqueue
Then
import logging

from google.cloud import logging as google_logging
from google.cloud.logging.handlers import CloudLoggingHandler

logger = logging.getLogger('my_log_client')
logger.addHandler(CloudLoggingHandler(google_logging.Client(), 'my_log_client'))
logger.info('hello', extra={'custom_fields': {'foo': 1, 'bar': {'tzar': 3}}})
Resulting in a log entry that carries the custom fields in its payload, which then makes it much easier to filter according to these custom_fields.
Let's admit this is not good programming, though until this functionality is officially supported there doesn't seem to be much else that can be done.
This is not currently possible, see this open feature request on google-cloud-python for more details.
Official docs: Setting Up Cloud Logging for Python
You can write logs to Logging from Python applications:
by using the Python logging handler included with the Logging client library, or
by using the Cloud Logging API Cloud client library for Python directly.
I did not get the Python logging module to export jsonPayload, but the cloud logging library for Python works:
google-cloud-logging >= 3.0.0 can do it.
No need for the Python logging workaround anymore; the only things you need are Python >= 3.6 and
pip install google-cloud-logging
Then, you can log with:
import os

import google.cloud.logging

# have the environment variable ready:
# GOOGLE_APPLICATION_CREDENTIALS
# Then:
GOOGLE_APPLICATION_CREDENTIALS = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
client = google.cloud.logging.Client.from_service_account_json(
    GOOGLE_APPLICATION_CREDENTIALS
)
log_name = "test"
gcloud_logger = client.logger(log_name)

# jsonPayload ('entry' is a dict like the structured example further below)
gcloud_logger.log_struct(entry)
# textPayload
gcloud_logger.log_text('hello')
# generic (infers whether it is a jsonPayload or a textPayload)
gcloud_logger.log(entry)  # or: gcloud_logger.log('hello')
You can run these commands in a Python file outside of GCP and reach GCP with more or less a one-liner; you only need the credentials.
You can use the gcloud logger even for printing and logging in one go; see Writing structured logs (untested).
Python logging to log jsonPayload into GCP logs (TL/DR)
I could not get this to run!
You can also use the built-in logging module of Python with the workaround mentioned in the other answer, but I did not get it to run.
It will not work if you pass a dictionary or its json.dumps() directly as a parameter, since then you get a string output of the whole dictionary, which you cannot read as a JSON tree.
But it also did not work for me when I used logger.info() to log the jsonPayload / json.dumps in an example parameter called extras.
import json
import logging
import os

import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler, setup_logging

# ...
# https://googleapis.dev/python/logging/latest/stdlib-usage.html
GOOGLE_APPLICATION_CREDENTIALS = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
client = google.cloud.logging.Client.from_service_account_json(
    GOOGLE_APPLICATION_CREDENTIALS
)
log_name = "test"
handler = CloudLoggingHandler(client, name=log_name)
setup_logging(handler)
logger = logging.getLogger()
logger.setLevel(logging.INFO)  # Set default level.
# ...

# Complete a structured log entry.
entry = dict(
    severity="NOTICE",
    message="This is the default display field.",
    # Log viewer accesses 'component' as jsonPayload.component'.
    component="arbitrary-property",
)

# Python logging to log jsonPayload into GCP logs
# (this is the call that did not work for me)
logger.info('hello', extras=json.dumps(entry))
I also tried the google-structlog solution of the other answer, which only threw this error:
google_structlog.setup(log_name=log_name) TypeError: setup() got an unexpected keyword argument 'log_name'
I used Python v3.10.2 and
google-auth==1.35.0
google-cloud-core==1.7.2
google-cloud-logging==1.15.0
googleapis-common-protos==1.54.0
Research steps gcloud logging (TL/DR)
Following the fixed issue (see the merge and close at the end) on GitHub at googleapis/python-logging, Logging: support sending structured logs to stackdriver via stdlib 'logging'. #13,
you find feat!: support json logs #316:
This PR adds full support for JSON logs, along with standard text logs. Now, users can call logging.error({'a':'b'}), and they will get a JsonPayload in Cloud Logging, or call logging.error('test') to receive a TextPayload.
As part of this change, I added a generic logger.log() function, which serves as a generic entry-point instead of logger.log_text or logger.log_struct. It will infer which log function is meant based on the input type.
Previously, the library would attach the Python logger name as part of the JSON payload for each log. Now, that information will be attached as a label instead, giving users full control of the log payload fields.
Fixes #186, #263, #13
With the main new version listing the new feature:
chore(main): release 3.0.0 #473
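For reference, a minimal sketch of that v3 behaviour, assuming google-cloud-logging >= 3.0.0 and application default credentials:
import logging

import google.cloud.logging

client = google.cloud.logging.Client()
client.setup_logging()  # attaches a Cloud Logging handler to the root logger

logging.error({'a': 'b'})  # arrives as jsonPayload
logging.error('test')      # arrives as textPayload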

Logging to separate files in Python

I'm using python's logging module. I've initialized it as:
import logging
logger = logging.getLogger(__name__)
in each of my modules. Then, in the main file:
logging.basicConfig(level=logging.INFO,filename="log.txt")
Now, in the app I'm also using WSGIServer from gevent. The initializer takes a log argument where I can add a logger instance. Since this is an HTTP server, it's very verbose.
I would like to log all of my app's regular logs to "log.txt" and WSGIServer's logs to "http-log.txt".
I tried this:
logging.basicConfig(level=logging.INFO,filename="log.txt")
logger = logging.getLogger(__name__)
httpLogger = logging.getLogger("HTTP")
httpLogger.addHandler(logging.FileHandler("http-log.txt"))
httpLogger.addFilter(logging.Filter("HTTP"))
http_server = WSGIServer(('0.0.0.0', int(config['ApiPort'])), app, log=httpLogger)
This logs all HTTP messages into http-log.txt, but also to the main logger.
How can I send all but HTTP messages to the default logger (log.txt), and HTTP messages only to http-log.txt?
EDIT: Since people are quickly jumping to point out that Logging to two files with different settings has an answer, please read the linked answer and you'll see they don't use basicConfig but rather initialize each logger separately. This is not how I'm using the logging module.
Add the following line to disable propagation:
httpLogger.propagate = False
Then it will no longer propagate messages to its ancestors' handlers, which include the root logger for which you have set up the general log file.
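Put together, a minimal sketch with the file names from the question:
import logging

# Root logger: everything the app logs via module loggers ends up in log.txt
logging.basicConfig(level=logging.INFO, filename="log.txt")

# Dedicated HTTP logger: writes to http-log.txt only
httpLogger = logging.getLogger("HTTP")
httpLogger.addHandler(logging.FileHandler("http-log.txt"))
httpLogger.propagate = False  # do not forward records to the root logger

logging.getLogger(__name__).info("goes to log.txt")
httpLogger.info("goes to http-log.txt only")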

Tornado - No handlers could be found for logger "tornado.application" while using render()

from tornado.web import RequestHandler

class HelloWorldHandler(RequestHandler):
    def get(self):
        # self.write("Hello, world...!!!")  # works without any error
        self.render('hello.html')  # but here I get:
        # `500: Internal Server Error` and my console shows `No handlers
        # could be found for logger "tornado.application"`.
What is the issue? I've already Googled No handlers could be found for logger "tornado.application", and surprisingly all the results suggest the same method, but I'm unable to implement it.
Here is the same thread on SO.
If your logs were configured correctly, you'd get a stack trace in the logs explaining what went wrong. The logs are supposed to be configured automatically in IOLoop.start(), so I'm not sure why that's not happening, but you can configure them manually by calling logging.basicConfig() or tornado.options.parse_command_line() at the beginning of main, as in the sketch below.
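A minimal sketch of that manual setup; the port and URL mapping are illustrative, and hello.html is the template from the question:
import logging

import tornado.ioloop
import tornado.web

class HelloWorldHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('hello.html')

def main():
    # Configure logging before the app runs, so tornado.application
    # errors (including the template stack trace) reach the console.
    logging.basicConfig(level=logging.DEBUG)
    # Alternative: tornado.options.parse_command_line()

    app = tornado.web.Application([(r"/", HelloWorldHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

if __name__ == "__main__":
    main()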

Django: How do I get logging working?

I've added the following to my settings.py file:
import logging
...
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s',
                    filename=os.path.join(rootdir, 'django.log'),
                    filemode='a+')
And in views.py, I've added:
import logging
log = logging.getLogger(__name__)
...
log.info("testing 123!")
Unfortunately, no log file is being created. Any ideas what I am doing wrong? Also, is there a better method I should be using for logging? I am doing this on WebFaction.
Python logging for Django is fine on a host like WebFaction. If you were on a cloud-based provider (e.g. Amazon EC2) where you had a number of servers, it might be worth looking at either logging to a key-value DB or using Python logging over the network.
Your logging setup code in settings.py looks fine, but I'd check that you can write to rootdir: your syslog might show errors, but it's more likely that Django would be throwing a 500 if it couldn't log properly.
Which leads me to note that the only major difference in my logging (also on WebFaction) is that I do:
import logging
logging.info("Something here")
instead of log.info
