Is there a way that I can handle some sort of "catch-all" error handling in a Pyramid web app? I currently have implemented exception logging to a database (via the docs at http://docs.pylonsproject.org/projects/pyramid_cookbook/en/latest/logging/sqlalchemy_logger.html) and I'll return messages to my views to put a "friendly" face on what happened.
But is there something I can implement that would show some sort of generic "Oops, you ran into a problem and we're looking into it" for anything else I'm not explicitly catching, and I could use the above error handler behind the scenes to log whatever to the database? Or, what sort of thing should I be looking for in searches?
Thanks,
edit, since I can't fit it all into a comment:
Thanks, that seems to be exactly what I'm looking for!
One thing I'm running into, I don't know if it's related or not....
So I'm implementing the SQL logger as above like so:
import logging
import traceback

# Log and DBSession come from the app's models module
class SQLAlchemyHandler(logging.Handler):
    # A very basic handler that commits a LogRecord to the SQL db
    def emit(self, record):
        trace = None
        exc = record.exc_info
        if exc:
            # format_exc() takes a limit, not an exc_info tuple;
            # format the tuple explicitly instead
            trace = ''.join(traceback.format_exception(*exc))
        log = Log(
            logger=record.name,
            level=record.levelname,
            trace=trace,
            msg=record.msg,
        )
        DBSession.add(log)
        DBSession.flush()
        #transaction.commit()
I had to take out the transaction.commit() call and use .flush() instead, because I was getting a SQLAlchemy DetachedInstanceError exception when using transaction. I think it's because I'm playing some games with passing a request to a helper function, and that's where it seems to be thrown. So it works by flushing the session. But here's what happens: if I have a log.error() statement in my exception view and an exception is actually thrown, the view catches it (great!) but the log statement in the view never gets committed. The debugging logs in Pyramid show it being written, but never committed.
If I change the logging handler back to transaction.commit() then the exceptions do get committed, but I'm back at my original problem. I think I need to focus on what I'm doing in my helper function that's causing this in the first place, but I'm still learning SQLAlchemy in general, too. Sometimes it can be a little strange.
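One idea I'm considering (an untested sketch): give the handler its own session, so committing log rows doesn't touch the request's transaction-managed DBSession:

import logging
import traceback

from sqlalchemy.orm import sessionmaker

# bound at startup to the same engine as DBSession (an assumption about
# the app's setup); Log is the model from the snippet above
LogSession = sessionmaker()

class SQLAlchemyHandler(logging.Handler):
    def emit(self, record):
        trace = None
        if record.exc_info:
            trace = ''.join(traceback.format_exception(*record.exc_info))
        session = LogSession()
        try:
            session.add(Log(logger=record.name,
                            level=record.levelname,
                            trace=trace,
                            msg=record.getMessage()))
            session.commit()
        finally:
            session.close()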
You can set up an exception view. For example:
from pyramid.response import Response
from pyramid.view import view_config

@view_config(context=Exception)
def error_view(exc, request):
    # log or do other stuff with exc...
    return Response("Sorry, there was an error")
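For the view to take effect it still has to be registered; a minimal sketch, assuming imperative configuration (running config.scan() on the module containing the decorated view works too):

# in your app's main() / Configurator setup
config.add_view(error_view, context=Exception)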
I'm not sure exactly how to ask what I'm asking, and I don't know if any sample code would really be relevant here, so if clarification is necessary just ask.
I have a non-trivial program in Python that does a couple of things. It reads from some SQL Server database tables, executes some DDL in SQL Server (both of these with pyodbc), analyzes some data, and has a GUI to orchestrate everything so users in the future besides me can automate the process.
Everything is functioning as it should, but obviously I can't expect future users to always play by the rules. I can explicitly flag bad input (i.e. fields left empty), but there are quite a few other things that can go wrong. Wrapping everything in try/except blocks is out of the question: they cause a few issues in the web of things happening in my program, some of which are embedded in the resources I'm using, and besides, I feel it's probably not good form to have them everywhere.
That being said, I'm wondering if there's a way to cause an event (likely just a dialog box saying that an error occurred), so that a user without a view of the output window would know something had gone wrong. It would be nice if I could also grab the error message itself, but that isn't necessary. I'm alright with the error still occurring so long as the program can keep going if the issue is corrected.
I'm not sure if this is possible, or if it is possible what form it would take, like something that monitors the output or listens for errors, but I'm open to all information about this.
Thank you
You can wrap your code in a try/except block and bind the exception to a name so you can show it in the dialog, for example:
try:
    # your code
    ...
except Exception as e:
    print(e)  # change this to show it in your dialog
This way you don't need try/except in many different places, and you'll catch basically any Exception. The other option is to raise custom exceptions for each kind of error and catch them with matching except clauses, as in the sketch below.
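A tiny sketch of the custom-exception route (all names here are illustrative, not from your program):

class SyncError(Exception):
    """Raised when a sync step fails."""

def run_sync():
    # stand-in for one of your SQL Server steps
    raise SyncError("could not reach SQL Server")

try:
    run_sync()
except SyncError as e:
    print(e)  # replace with your dialog call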
If you still don't want to use try/except at all, you could start a thread that keeps checking certain flag variables in a loop; whenever you want to trigger an error event, you set the corresponding variable to True and the thread starts the error dialog. For instance:
import time
import threading

test1 = False
test2 = False

def errorCheck():
    while True:
        if test1:
            # start error dialog for error1
            pass
        if test2:
            # start error dialog for error2
            pass
        time.sleep(0.1)

if __name__ == '__main__':
    t = threading.Thread(target=errorCheck)
    t.daemon = True  # don't let this thread keep the process alive
    t.start()
    # start your app
However, I recommend using try/except instead.
Is it idiomatic/Pythonic to do it like this, or is there a better way? I want all the errors to end up in the log in case I don't have access to the console output. I also want to abort this code path if the problem arises.
try:
    with open(self._file_path, "wb") as out_f:
        out_f.write(...)
        ...
except OSError as e:
    log("Saving %s failed: %s" % (self._file_path, str(e)))
    raise
EDIT: this question is about handling exceptions in the correct place/with the correct idiom. It is not about the logging class.
A proven, working scheme is to have a generic except clause at the top level of your application code, to make sure any unhandled error will be logged (and re-raised, of course). It also gives you an opportunity to try to do some cleanup before crashing.
Once you have this, adding specific "log and re-raise" exception handlers in your code makes sense if and when you want to capture more contextual information in your log message, as in your snippet example. This means the exception might end up logged twice, but that is hardly an issue.
If you really want to be pythonic (or if you value your error logs), use the stdlib's logging module and its logger.exception() method, which will automagically add the full traceback to the log.
Some (other) benefits of the logging module: it decouples the logging configuration (which should be handled by the app itself and can be quite fine-grained) from the logging calls (which most often happen at library-code level); it is compatible with well-written libs (which already use logging, so you just have to configure your loggers to get info from third-party libs too - and this can really save your ass); and it lets you use different logging mechanisms (stderr, file, syslog, email alerts, whatever - you're not restricted to a single handler) according to the log source, severity, and deployment environment.
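Applied to your snippet, a minimal sketch (wrapped in a function, with assumed names, to keep it self-contained):

import logging

logger = logging.getLogger(__name__)

def save(file_path, data):
    try:
        with open(file_path, "wb") as out_f:
            out_f.write(data)
    except OSError:
        # logs at ERROR level and appends the full traceback automatically
        logger.exception("Saving %s failed", file_path)
        raise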
Update:
What would you say about re-raising the same exception (as in the example) versus raising a custom exception (MyLibException) instead of the original one?
This is a common pattern indeed, but beware of overdoing it - you only want to do this for exceptions that are actually expected and where you really know the cause. Some exception classes can have many different causes - cf. OSError, IOError and RuntimeError - so never assume anything about what really caused the exception; either check it with a decently robust condition (for example the .errno field for IOError) or let the exception propagate. I once wasted a couple of hours trying to understand why some lib complained about a malformed input file when the real reason was a permission issue (which I only found out by tracing the library code...).
Another possible issue with this pattern is that (in Python 2 at least) you will lose the original exception and traceback, so it's better to log them appropriately before raising your own exception. Python 3 handles this more cleanly with exception chaining (PEP 3134): raising inside an except block keeps the original exception as __context__, and raise ... from original records it as __cause__, so the original traceback is preserved.
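A small sketch of that Python 3 chaining:

class MyLibException(Exception):
    pass

try:
    open("/nonexistent", "rb")
except OSError as exc:
    # keeps exc as __cause__, so both tracebacks get printed
    raise MyLibException("could not open input file") from exc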
I'm using Piston with Django. Any time there's an error in my handler code, I get a simplified, text-only description of the error in my HTTP response, which gives me much less information than Django does when it's reporting errors. How can I stop Piston catching errors in this way?
In your settings.py file, add PISTON_DISPLAY_ERRORS = False. This causes exceptions to be raised, so they show up as expected on the Django debug error page when you are using DEBUG = True.
There are a few cases when the exception won't propagate properly. I've seen it happen when Piston says that the function definition doesn't match, but haven't looked to see why...
Maybe you could try to override Resource.error_handler and, instead of using the default implementation:
https://bitbucket.org/jespern/django-piston/src/c4b2d21db51a/piston/resource.py#cl-248
just re-raise the original exception.
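Something along these lines, perhaps - untested, and the hook name and signature may differ between piston revisions, so check the linked source for your version:

from piston.resource import Resource

class RaisingResource(Resource):
    def error_handler(self, e, request, meth, em_format):
        # skip piston's error rendering and let Django handle it
        raise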
For my python/django site I need to build a "dashboard" that will update me on the status of dozens of error/heartbeat/unexpected events going on.
There are a few types of "events" that I'm currently tracking by having the Django site send emails to the admin accounts:
1) Something that normally should happen goes wrong. We synch files to different services and other machines every few hours and I send error emails when this goes wrong.
2) When something that should happen actually happens. Sometimes events in item #1 fail so horribly that they don't even send emails. (A try/except around an event should always work, but things can get deleted from the crontab, the system configuration can get knocked askew so things won't run, etc. In those cases I won't even get an error email, and the lack of a success/heartbeat email is what lets me know that something that should have happened didn't.)
3) When anything unexpected happens. We've made a lot of assumptions on how backend operations will run and if any of these assumptions are violated (e.g. we find two users who have the same email address) we want to know about it. These events aren't necessarily errors, more like warnings to investigate.
So I want to build a dashboard that I can easily update from python/django, to give me a bird's-eye view of all of these types of activity so I can stop sending hundreds of emails out per week (which is already unmanageable).
Sounds like you want to create a basic logging system that outputs to a web page.
So you could write a simple app called, say, systemevents, that creates an Event record each time something happens on the site. You'd add a signal hook so that anywhere else in the site you could write something like:
from systemevents.signals import record_event
...
try:
    # code goes here
    ...
except Exception as inst:
    record_event("Error occurred while taunting %s: %s" % (obj, inst), type="Error")
else:
    record_event("Successfully taunted %s" % (obj,), type="Success")
Then you can pretty easily create a view that lists these events.
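A minimal sketch of what systemevents itself might look like (field names are assumptions, and the signal indirection is optional - a plain function works just as well):

# systemevents/models.py
from django.db import models

class Event(models.Model):
    type = models.CharField(max_length=32)
    message = models.TextField()
    created = models.DateTimeField(auto_now_add=True)

# systemevents/signals.py
from systemevents.models import Event

def record_event(message, type="Info"):
    Event.objects.create(type=type, message=message)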
However, keep in mind that this is adding a layer of complexity that is highly problematic. What if the error lies in your database? Then each time you try to record an error event, another error occurs!
Far better to use something like a built-in logging system to create a text-based log file, then whip up something that can import that text file and lay it out in a somewhat more readable fashion.
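For instance, the stdlib logging module can produce a timestamped text log with a couple of lines of setup (a sketch; file name and format are arbitrary):

import logging

logging.basicConfig(
    filename="site-events.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logging.getLogger("systemevents").info("Successfully taunted %s", "the user")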
One more tip: in order to change how Django handles exceptions, you have to write a custom view for 500 errors. If using systemevents, you'd write something like:
import sys

from django.views.defaults import server_error
from systemevents.signals import record_event

def custom_error_view(request):
    try:
        exc_type, value, tb = sys.exc_info()
        error_message = ""  # build an error message from the values above
        record_event("Error occurred: %s" % (error_message,), type="Error")
    except Exception:
        pass
    return server_error(request)
Note that none of this code has been tested for correctness. It's just meant as a guide.
Have you tried looking at django-sentry?
http://dcramer.github.com/django-sentry/
So, I'm pulling my hair out here, and maybe someone has an insight.
I have a cronjob that loops over all my Link objects, does some stuff, might change properties on the object and does a save(). That's it.
Every so often (around once an hour), one of my rows just disappears. Poof. Nothing in the logs.
So, I'm trying to add debugging statements everywhere, but are there any glaring reasons for an entry to disappear? Is the only way to remove an entry by calling delete()?
Any general directions to go in would be wonderful, thank you.
Some ideas I've had:
git push while the cronjob is running
some cascading delete is wiping them out
some django method is calling delete on an exception
You could override the delete method on your Link class and dump a stack trace or log a message to see if it's indeed happening from within your Django application.
import sys
import traceback

def delete(self):
    super(Link, self).delete()
    try:
        assert False
    except AssertionError:
        _, _, tb = sys.exc_info()
        traceback.print_tb(tb, file=sys.stdout)
There may be a better way to get and log a stack trace, but that's the first thing that came to mind.
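For instance, traceback.print_stack() prints the current call stack directly, with no assert trick needed (a sketch of the same override):

import sys
import traceback

def delete(self):
    # shows who called delete() before delegating to the real delete
    traceback.print_stack(file=sys.stdout)
    super(Link, self).delete()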
You could use django-logging with LOGGING_LOG_SQL = True to log all the SQL, so you can see if any DELETEs are occurring.
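That's just a settings flag (assuming django-logging is installed and hooked up per its docs):

# settings.py
LOGGING_LOG_SQL = True  # log every SQL statement, including DELETEs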