Is it possible to hook into Django's built-in error reporting emails from a try-except code block? In other words, email the default error report and stack trace to the ADMINS/MANAGERS while still having situation-specific error handling.
Specific example:
In a project performing complex calculations and generating large reports, the view displaying the report page does all the calculations, generates a long HTML page with lots of pretty tables and graphs, and also generates downloadable PDFs from sections of that same HTML.
Recently we had errors in the PDF generation due to issues with storage on S3. Now this is obviously an error we need to track down and attend to, but most users are happy if they can just see the report on screen. If the PDF download links just weren't displayed, the issue could go entirely unnoticed for hours or even days - but the dev team should be notified.
Ideally (but not necessarily) I would love a solution that is logger-agnostic: one that uses whatever error logger is configured, triggers the default 500 error handling, and then returns control to the finally block or to the code after the except block.
All you need to do is use Python's logging framework to log a message at the appropriate level. In your settings.py there is a LOGGING variable that defines how things are logged. By default, I believe, Django handles any ERROR logged to django.request with the mail_admins handler.
So in your code, all you need to do is
import logging

logger = logging.getLogger(__name__)  # creates a logger named after the current module

try:
    pass  # do the stuff you want to catch
except:
    # we're going to catch it and just log it;
    # exc_info=True will include the stack trace
    logger.error('Some error title', exc_info=True)
finally:
    pass  # whatever you want to do in your finally block
Note that this will swallow the exception and won't bubble it up, so your response will return as a 200. If you want to bubble up the exception, just call raise in your except block. However, if all you care about is logging the error while the view remains functional, then just log and swallow it.
In your LOGGING variable, you can add additional entries under loggers for the different logger names. You can have an app log at a different level, say INFO, if you want to debug a certain code path. As long as you create a logger with the module name, you have a lot of flexibility in routing your logging to different handlers such as mail_admins, as in the sketch below.
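For example, a LOGGING entry along these lines routes errors from your own modules to the same admin-email handler (a minimal sketch; 'myapp' is a placeholder for whatever name your getLogger(__name__) calls resolve to):

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        # errors logged anywhere under the 'myapp' package also go to ADMINS
        'myapp': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
    },
}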
Lastly, I'd recommend looking into Sentry, as it's a really great error logging tool.
Related
Our log formatter has a user-defined parameter. It is defined as:
'%(asctime)s|%(levelname)s|%(name)s|REQID:%(req_id)s|%(module)s:%(lineno)s|%(message)s'
where req_id is a request id generated by application code for every request. When we are processing requests in our application code, we have access to this req_id and use it for logging like this:
logger = logging.LoggerAdapter(logging.getLogger(service_name), {'req_id': req_id})
logger.debug('A debug message')
I am trying to make the tornado logger conform to our log format, but since tornado has no access to our application-level req_id it fails with:
KeyError: 'req_id'
How can I tell tornado to use a LoggerAdapter for tornado.access, with a user provided context?
EDIT
As a workaround, I tried the following:
Since it is not possible for me to tell tornado what loggers to use, I managed to hack my way around this limitation by reconfiguring the tornado logger in each request, adding the contextual information using a logging filter.
Unfortunately, reconfiguring the logger for each request does not work, since tornado serves requests in parallel and we end up with an inconsistent state.
How can we pass user context for the tornado loggers then?
The tornado.access log can be controlled by overriding Application.log_request in a subclass, or by using the log_function application setting. The default implementation writes to the tornado.access logger, but you can override it to log however you want, for example as in the sketch below.
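A minimal sketch of the subclass approach might look like this (it assumes your application code attaches req_id to the request object earlier in the request cycle; that attribute name is illustrative):

import logging
import tornado.web

access_log = logging.getLogger("tornado.access")

class MyApplication(tornado.web.Application):
    def log_request(self, handler):
        # 'req_id' is assumed to be set on the request by your own code
        req_id = getattr(handler.request, "req_id", "-")
        adapter = logging.LoggerAdapter(access_log, {"req_id": req_id})
        adapter.info(
            "%d %s %.2fms",
            handler.get_status(),
            handler._request_summary(),
            1000.0 * handler.request.request_time(),
        )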
Note however that the tornado.general and tornado.application loggers cannot be overridden in this way, so your log formatters/filters must still be able to handle messages that do not have the req_id field.
Technologies and Applications used: Rollbar, Django 1.7, Python 3.4
So, I'm following the official documentation for integrating Rollbar into a Python and Django based application: https://github.com/rollbar/pyrollbar. That includes pip installing rollbar, adding the middleware class, creating the Rollbar dictionary configuration in a settings file, and so on.
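For reference, the settings changes from those docs look roughly like this (the access token is a placeholder, and DEBUG/BASE_DIR are the usual Django settings):

# settings.py
MIDDLEWARE_CLASSES = (
    # ... existing middleware ...
    'rollbar.contrib.django.middleware.RollbarNotifierMiddleware',
)

ROLLBAR = {
    'access_token': 'POST_SERVER_ITEM_ACCESS_TOKEN',  # placeholder
    'environment': 'development' if DEBUG else 'production',
    'root': BASE_DIR,
}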
Just to test things out I added the example they provided in their docs to one of my views, and Rollbar/Django works fine (i.e. Rollbar registers the exception and the exception is sent to my Rollbar account in the cloud):
import rollbar

try:
    main_app_loop()
except IOError:
    rollbar.report_message('Got an IOError in the main loop', 'warning')
except:
    # catch-all
    rollbar.report_exc_info()
But, for example, if in one of my template files I misspell a block tag, I get an error via Django's default error logging system; Rollbar, however, doesn't record the error and it isn't sent to my Rollbar account in the cloud. Is that because Rollbar has to be integrated manually via some kind of try/except scenario? Or can Rollbar grab errors by default without having to write a try/except?
There is no other documentation I can find for integrating Rollbar into a Django project other than what is found at the above link, so I'm not sure what to do next. Has anyone else run into this or know what the issue might be?
I have an issue with debugging and Cloud Endpoints. I'm using tons of endpoints in my application, and one endpoint consistently returns with error code 500, message "Internal Error".
This endpoint does not appear in my app's logs, and when I run its code directly in the interactive console (in production), everything works fine.
There might be a bug in my code that I am failing to see; however, the real problem here is that the failing Endpoints request is NOT showing up in my app's logs, which leaves me with no great way to debug the problem.
Any tips? Is it possible to force some kind of "debug" mode where more information (such as a stack trace) is conveyed back to me in the 500 response from endpoints? Why isn't the failing request showing up in my app's logs?
Just in case you aren't aware - by default the Logs webpage does not show you the lowest level log statements. That missing level ('D', I think) adds lots of Endpoints log statements that occur prior to the invocation of your code, so they could be useful in the situation you describe.
I also find it useful to retrieve my log statements with 'appcfg' (in the GAE SDK), e.g.
appcfg.py --num_days=1 --severity=0 request_logs <app-directory> myfile.log
Check if you are running out of resources.
Let's say I have a try/except and an exception occurs... What is the proper way to deal with those exceptions/errors on a live production (Django) site?
So I have
try:
    create_response = wepay.call('/account/create',
                                 {'name': name, 'description': desc})
    self.wepay_account_id = create_response['account_id']
    self.save()
except WePay.WePayError as e:
    ..... (what do I put here?)
You can set up e-mail error reporting through Django: https://docs.djangoproject.com/en/dev/howto/error-reporting/
Or you can use a service like Rollbar (it has a free account) to track error occurrences.
Or you could use self-hosted Graylog (as suggested in the comments); here's a good guide for Django: http://www.caktusgroup.com/blog/2013/09/18/central-logging-django-graylog2-and-graypy/
Respond with (optionally a redirect to) an appropriate page explaining the problem to the user and, if possible, provide a solution. Serving a 500 to your users in production is something you want to avoid, so catching the exception is a good idea.
So:
except WePay.WePayError as e:
    return render_to_response('wepay_error_page.html')
or:
except WePay.WePayError as e:
    return HttpResponseRedirect('/errors/wepay/')  # Note: better to use urlresolvers
(note this particular code will only work if it's in a view)
Then (optionally) make sure you get a copy of the error, for example by sending yourself an email, as in the sketch below.
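A minimal sketch using Django's mail_admins helper (it assumes ADMINS and the email backend are configured; the view name is made up, and the wepay call is taken from the question):

from django.core.mail import mail_admins
from django.shortcuts import render_to_response

def create_wepay_account(request):
    # wepay, name and desc are assumed to come from your existing view code
    try:
        create_response = wepay.call('/account/create',
                                     {'name': name, 'description': desc})
    except WePay.WePayError as e:
        # notify the team with the exception details, then show a friendly page
        mail_admins('WePay error on /account/create', str(e))
        return render_to_response('wepay_error_page.html')
    # ... continue with the normal flow using create_response ...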
A suggestion for this particular case (if I interpret the code correctly) would be to notify yourself and respond with a page explaining to the user that their payment went wrong. Tell them this might occur because of their own actions (maybe they cancelled the payment), and provide contact details for users who think it was not their fault.
By default (when mail is properly configured), Django mails all 500 errors to settings.ADMINS, but those only occur on uncaught exceptions. So in this particular case, services like Rollbar or a central logging solution will only work if you re-raise the exception (which results in a 500) or send the error to one of them manually in the except block.
I would recommend the above solution of redirecting to a page that explains the WePay error, combined with the django-wepay app available on PyPI, which features logging of all errors and, optionally, all calls.
I am using the Pyramid web framework with SQLAlchemy, connected to a MySQL backend. The app I've put together works, but I'm trying to add some polish by way of some enhanced logging and exception handling.
I based everything off of the basic SQLAlchemy tutorial on the Pyramid site, using the session like so:
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
Using DBSession to query works great, and if I need to add and commit something to the database I'll do something like
DBSession.add(myobject)
DBSession.flush()
So I get my new ID.
Then I wanted to add logging to the database, so I followed this tutorial. That seemed to work great. I did initially run into some weirdness with things getting committed, and since I wasn't sure how SQLAlchemy was working, I changed "transaction.commit()" to "DBSession.flush()" to force the logs to commit (this is addressed below!).
Next I wanted to add custom exception handling with the intent that I could put a friendly error page for anything that wasn't explicitly caught and still log things. So based on this documentation I created error handlers like so:
from pyramid.view import (
    view_config,
    forbidden_view_config,
    notfound_view_config
)
from pyramid.httpexceptions import (
    HTTPFound,
    HTTPNotFound,
    HTTPForbidden,
    HTTPBadRequest,
    HTTPInternalServerError
)
from models import DBSession
import transaction
import logging

log = logging.getLogger(__name__)

#region Custom HTTP Errors and Exceptions

@view_config(context=HTTPNotFound, renderer='HTTPNotFound.mako')
def notfound(request):
    log.exception('404 not found: {0}'.format(str(request.url)))
    request.response.status_int = 404
    return {}

@view_config(context=HTTPInternalServerError, renderer='HTTPInternalServerError.mako')
def internalerror(request):
    log.exception('HTTPInternalServerError: {0}'.format(str(request.url)))
    request.response.status_int = 500
    return {}

@view_config(context=Exception, renderer="HTTPExceptionCaught.mako")
def error_view(exc, request):
    log.exception('HTTPException: {0}'.format(str(request.url)))
    log.exception(exc.message)
    return {}

#endregion
So now my problem is: exceptions are caught and my custom exception view comes up as expected, but the exceptions aren't logged to the database. It appears this is because the DBSession transaction is rolled back on any exception. So I changed the logging handler back to "transaction.commit()". This had the effect of actually committing my exception logs to the database, BUT now any DBSession action after any log statement throws an "Instance not bound to a session" error, which makes sense because, from what I understand, after a transaction.commit() the session is cleared out. The console log always shows exactly what I want logged, including the SQL statements that write the log info to the database. But nothing is committed on exception unless I use transaction.commit(), and if I do that then I kill any DBSession statements after the transaction.commit()!
So... how might I set things up so that I can log to the database, and also catch and successfully log exceptions to the database? I feel like I want the logging handler to use some sort of separate database session/connection/instance/something so that it is self-contained, but I'm unclear on how that might work.
Or should I architect what I want to do completely different?
EDIT:
I did end up going with a separate, log-specific session dedicated only to adding and committing log info to the database. This seemed to work well until I started integrating a Pyramid console script into the mix, at which point I ran into problems with sessions and database commits within the script not necessarily working like they do in the actual Pyramid web application.
In hindsight (and this is what I'm doing now), instead of logging to a database I use standard logging with FileHandlers (TimedRotatingFileHandler specifically) and log to the file system.
Using transaction.commit() has the unintended side effect of committing changes to other models too, which is not too cool. The idea behind the "normal" Pyramid session setup with ZopeTransactionExtension is that a single session starts at the beginning of the request; if everything succeeds the session is committed, and if there's an exception everything is rolled back. It would be better to keep this logic and avoid committing things manually in the middle of a request.
(As a side note: DBSession.flush() does not commit the transaction; it emits the SQL statements, but the transaction can still be rolled back later.)
For things like exception logs, I would look at setting up a separate Session which is not bound to Pyramid's request/response cycle (without ZopeTransactionExtension) and then using it to create log records. You'd need to commit the transaction manually after adding a log record:
record = Log("blah")
log_session.add(record)
log_session.commit()
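For reference, log_session could be created along these lines (a sketch; the connection URL and names are placeholders, the point being that there is no ZopeTransactionExtension, so its commits are independent of the request-scoped DBSession):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

# Plain session factory: no ZopeTransactionExtension, so commits here do not
# participate in (or get rolled back with) the per-request transaction.
log_engine = create_engine('mysql://user:password@localhost/myapp_logs')
LogSession = scoped_session(sessionmaker(bind=log_engine))
log_session = LogSession()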