I want to wrap a bunch of functions in a try/except and have them email me a traceback when they fail. I'm using Django's ExceptionReporter, so I need the request object to send the traceback email. Some of the functions I want to wrap have the request object as a parameter already, but not all of them.
I was thinking about using a decorator for the try/except, but then it isn't clear that the request object is a required parameter for all the functions it decorates. Is there a better way to do this?
Edit: The functions I'm trying to wrap are all just supplementary functions that run after the core work required for the response is done, so I don't want to use Django's automatic emailing that happens when an uncaught exception returns a 500 error. I suppose that opens up the possibility of running these methods as separate processes after returning the response, but that also gets complicated in Django.
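One hedged sketch of the decorator idea: make the wrapper take request as its mandatory first parameter, so the dependency is at least enforced at call time even if it isn't visible in each wrapped function's own signature (all names here are hypothetical placeholders, not Django APIs):

```python
import functools
import traceback

def send_traceback_email(request, tb):
    # Hypothetical placeholder for the real mailing code
    # (e.g. something built around Django's ExceptionReporter)
    print(f"would email traceback for {request!r}:\n{tb}")

def email_on_failure(func):
    # Requires the wrapped function to accept `request` as its first
    # argument; calls without it fail loudly instead of silently
    @functools.wraps(func)
    def wrapper(request, *args, **kwargs):
        try:
            return func(request, *args, **kwargs)
        except Exception:
            send_traceback_email(request, traceback.format_exc())
            raise
    return wrapper

@email_on_failure
def supplementary_task(request):
    raise RuntimeError("boom")
```

This doesn't fully solve the documentation concern, but it does mean every decorated function has a uniform calling convention, and the traceback is re-raised so normal error handling still applies.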
During development with DRF, I needed an efficient way to handle errors.
I found two methods: one using an ErrorResponse created by subclassing Response, and one using the APIException provided by DRF.
The first method is used with return, and the second with the raise statement.
I wonder which one is more efficient, and why.
I apologize in advance if the question is too vague.
I'm not sure that efficiency and CPU time are the most important things here.
You have to understand the Django request-response cycle first. The next step after return Response (or raise Exception) is not the client's browser but the chain of middlewares configured in your application, and what those middlewares do may differ depending on what happens inside the view.
When you raise something, you break this flow.
Django handles the raised exception, writes extra error logs, and returns the specified error response to the client. You don't have to make sure every condition of a correct response is satisfied, because an error has already happened; the response is already not a normal one. A returned Response, on the other hand, is delivered to the client the normal way: Django makes sure all validations and steps are passed before the response reaches the client.
If you are trying to save milliseconds by choosing between return and raise and thinking deeply about efficiency, first stop using Django. Seriously. It is one of the slowest frameworks, even for Python.
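The short-circuiting described above can be sketched without Django at all. Here is a toy middleware chain (all names hypothetical, with a plain ValueError standing in for APIException): a raised exception skips the inner middleware's post-processing and jumps straight to the outermost handler, while a returned response flows through every layer.

```python
def exception_middleware(get_response):
    # Outermost layer: converts uncaught exceptions into an error payload
    def middleware(request):
        try:
            return get_response(request)
        except ValueError as exc:  # stand-in for APIException
            return {"status": 500, "body": str(exc)}
    return middleware

def logging_middleware(get_response):
    # Inner layer: post-processes only responses that were *returned*
    def middleware(request):
        response = get_response(request)
        response["logged"] = True
        return response
    return middleware

def view(request):
    if request.get("boom"):
        raise ValueError("raised: skips inner post-processing")
    return {"status": 200, "body": "returned: flows through normally"}

handler = exception_middleware(logging_middleware(view))

print(handler({}))              # returned response gains "logged": True
print(handler({"boom": True}))  # raised path never reaches logging_middleware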
raise produces an error at the current level of the call stack. You can catch a raised error by wrapping the code where the error might be raised in a try and handling the error in an except.
return, on the other hand, hands a value back to the caller, so returning an exception object is usually not what you want in a situation like this: the exception object itself does not trigger an except clause; only raising it does.
https://docs.python.org/3/reference/simple_stmts.html#raise
https://docs.python.org/3/reference/simple_stmts.html#return
So to answer your question, I would raise, because raise is built for errors and return is not. They are also the same in terms of speed/efficiency.
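The distinction can be seen in a few lines (a minimal illustration):

```python
def fails_by_raising():
    raise ValueError("bad input")   # transfers control to the nearest except

def fails_by_returning():
    return ValueError("bad input")  # just a value; nothing is triggered

try:
    fails_by_raising()
except ValueError as exc:
    print(f"caught: {exc}")         # this branch runs

result = fails_by_returning()
print(isinstance(result, ValueError))  # the caller must inspect it manually
```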
Let's say we have a service method that performs its business logic by using one or more helper methods, where each helper method can raise one or more errors.
Because there can potentially be many helper methods that in total encapsulate lots of code, we want a way for folks using the service method to know all of the errors it can raise without needing to read through the implementation of every helper method.
To make this more concrete, let's assume this is being done in a Django context, where each view calls a service method, and that view is then responsible for catching any errors the service method raises and returning an appropriate response for each to the user.
So the question is: should the errors that each helper method can raise be documented by re-raising them with a try/except, or should they be noted in a docstring?
For example, which is better here, service_method_v1 or service_method_v2?:
def service_method_v1():
    try:
        helper_method_1()
    except SomeError:
        raise

def service_method_v2():
    """
    Raises:
        SomeError, via helper_method_1()
    """
    helper_method_1()

def helper_method_1():
    raise SomeError
I know there is some computational overhead when using try/except, but for this purpose let's assume it is negligible in terms of the end-user experience.
I need to authenticate a user from a cookie in an application running on Tornado. I need to parse the cookie and load the user from the DB using the cookie content. On checking out the Tornado RequestHandler documentation, there are 2 ways of doing it:
by overriding prepare() method of RequestHandler class.
by overriding get_current_user() method of RequestHandler class.
I'm confused with the following statement:
Note that prepare() may be a coroutine while get_current_user() may
not, so the latter form is necessary if loading the user requires
asynchronous operations.
I don't understand 2 things in it:
What does the doc mean by saying that get_current_user() may not be a coroutine? What does may not mean here? Either it can be a coroutine, or it can't.
Why is the latter form, i.e. get_current_user(), required if async operation is required? If prepare() can be a coroutine and get_current_user() may not, then shouldn't prepare() be used for async operations?
I would really appreciate any help with this.
Here, "may not be a coroutine" means "is not allowed to be a coroutine" or "must not be a coroutine". The language used is confusing and it should probably be changed to say "must not".
Again, the docs are confusing: in this sentence prepare() is mentioned first, but before this sentence are two examples and get_current_user is first. "Latter" refers to the second example which uses prepare().
So in summary, it always works to override prepare() and set self.current_user, whether you need a coroutine or not. If you don't need a coroutine to get the current user, you can override get_current_user() instead and it will be called automatically the first time self.current_user is accessed. It doesn't really matter which one you choose; use whichever feels more natural to you. (The reason there are two different methods is that get_current_user() is older, but a different method was needed to support coroutines.)
1
The recommended way of getting the current user is the RequestHandler.current_user property. This property is backed by a function: it returns RequestHandler._current_user if that is set, and otherwise tries to set it by calling get_current_user().
Because current_user is a property, it can't be yielded, and therefore get_current_user() can't be a coroutine function.
You could of course read the cookie, call the DB, and authenticate the user in get_current_user(), but only in a blocking (synchronous) manner.
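The lazy-property mechanism works roughly like this (a simplified stand-in, not Tornado's actual code):

```python
class RequestHandler:
    # Minimal sketch of the pattern Tornado uses
    _current_user = None

    def get_current_user(self):
        # Override point: must be synchronous, because the property
        # below cannot await or yield its result
        return None

    @property
    def current_user(self):
        if self._current_user is None:
            self._current_user = self.get_current_user()
        return self._current_user

class MyHandler(RequestHandler):
    def get_current_user(self):
        return "alice"  # e.g. parsed from a cookie, synchronously

h = MyHandler()
print(h.current_user)  # first access triggers get_current_user()
```

Since the property resolves the value inline, there is nowhere for an `await` to go; that is why async user loading has to happen in prepare() instead.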
2
In the doc you quoted, the latter example is the one with prepare().
This question already has answers here:
Calling a hook function every time an Exception is raised
My project's code is full of blocks like the following:
try:
    execute_some_code()
except Exception:
    print(datetime.datetime.now())
    raise
simply because, if I get an error message, I'd like to know when it happened. I find it rather silly to repeat this code over and over, and I'd like to factor it away.
I don't want to decorate execute_some_code with something that does the error capturing (because sometimes it's just a block of code rather than a function call, and sometimes I don't need the exact same function to be decorated like that). I also don't want to divert stdout to some different stream that logs everything, because that would affect every other thing that gets sent to stdout as well.
Ideally, I'd like to override the behaviour of either the raise statement (to also print datetime.datetime.now() on every execution) or the Exception class, to prepend the time to all of its messages. I can easily subclass Exception, but then I'd have to make sure my functions raise an instance of this subclass, and I'd have just as much code duplication as I do currently.
Is either of these options possible?
You might be able to modify Python itself (I'd have to read the code to be sure how complex that would be), but:
You do not want to replace raise with different behaviour. Trying and catching is a very Pythonic approach to problem solving, so there is lots of code that works perfectly well by calling a method, letting it raise an exception, and catching that exception under normal circumstances. That rules this approach out: you really only want to know about the exceptions you care about, not the ones that are normal during operation.
The same goes for triggering some action whenever an Exception instance is created – but:
You might be able to overwrite the global namespace, at least for things that get initialized after you declare your own Exception class; you could then add a message property that includes a timestamp. Don't do that, though – some people actually rely on the message to react to exceptions automatically (bad style, but sadly not rare).
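If the goal is just to avoid repeating the try/except block, a small context manager factors it out without touching raise or the Exception class, and unlike a decorator it works on any block of code, not just a function call (a sketch; timestamp_errors is a hypothetical name):

```python
import datetime
from contextlib import contextmanager

@contextmanager
def timestamp_errors():
    # Print the current time for any exception, then re-raise it unchanged
    try:
        yield
    except Exception:
        print(datetime.datetime.now())
        raise

# Usage: wraps an arbitrary block; the exception still propagates
try:
    with timestamp_errors():
        raise RuntimeError("something broke")
except RuntimeError as exc:
    print(f"still propagated: {exc}")
```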
OK, I am programming a way to interface with Grooveshark (http://grooveshark.com). Right now I have a class Grooveshark and several methods: one gets a session with the server, another gets a token based on the session, and another is used to construct API calls to the server (the other methods use that one). Right now I use it like so (note: this uses Twisted and twisted.internet.defer)....
g = Grooveshark()
d = g.get_session()
d.addCallback(lambda x: g.get_token())
## and then something like.... ##
g.search("Song")
I find this unpythonic and ugly since even after initializing the class you have to call two methods first, or else the other methods won't work. To solve this I am trying to make the method that creates API calls take care of the session and token. Currently those two methods (the session and token methods) set class variables and don't return anything (well, None). So my question is: is there a common design used when interfacing with sites that require tokens and sessions? Also, the token and session are retrieved from a server, so I can't have them run in the __init__ method (as it would either block or might not be done before an API call is made).
I find this unpythonic and ugly since even after initializing the class you have to call two methods first or else the other methods won't work.
If so, then why not put the get_session part in your class's __init__? If it always must be performed before anything else, that would seem to make sense. Of course, this means that calling the class will still return a not-yet-usable instance – that's kind of inevitable with asynchronous, event-driven programming... you don't "block until the instance is ready for use".
One possibility would be to pass the callback to perform as an argument when you call the class; a more Twisted-idiomatic approach would be to have Grooveshark be a function that returns a deferred (you add the callback to perform to the deferred, and it is called with the instance as its argument when that instance is finally ready to be used).
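The answer above is Twisted-specific; the same "factory that resolves to a ready instance" idea, sketched with stdlib asyncio so it runs standalone (all names are hypothetical, with asyncio.sleep standing in for network calls), looks like this:

```python
import asyncio

class Grooveshark:
    def __init__(self, session, token):
        # __init__ stays synchronous; setup has already happened
        self.session = session
        self.token = token

    @classmethod
    async def connect(cls):
        # Async factory: performs the session/token handshake up front,
        # so callers only ever see a fully usable instance
        session = await cls._fetch_session()
        token = await cls._fetch_token(session)
        return cls(session, token)

    @staticmethod
    async def _fetch_session():
        await asyncio.sleep(0)  # stand-in for the real network call
        return "session-id"

    @staticmethod
    async def _fetch_token(session):
        await asyncio.sleep(0)
        return f"token-for-{session}"

    def search(self, query):
        return f"searching {query!r} with {self.token}"

client = asyncio.run(Grooveshark.connect())
print(client.search("Song"))
```

With Twisted the factory would return a deferred instead of being awaited, but the shape is the same: the two-step setup is hidden inside the factory, so no caller can use a half-initialized object.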
I would highly recommend looking at the Facebook Graph API. Just because you need sessions and some authentication doesn't mean you can't build a clean REST API. Facebook uses OAuth to handle the authentication, but there are other possibilities.