Is there a way to 'detect' what exceptions a function/method raises? For example:
def foo():
    print 'inside foo, next calling bar()'
    _bar()
    _baz()
    # lots of other method calls which raise other legitimate exceptions

def _bar():
    raise my_exceptions.NotFound

def _baz():
    raise my_exceptions.BadRequest
So, supposing that foo is part of my API and I need to document it, is there a way to get all the exceptions that can be raised from it?
Just to be clear, I don't want to handle those exceptions; they are supposed to happen (when a resource is not found or the request is malformed, for instance).
I'm thinking of creating a tool that transforms that sequence of code into something 'inline', like:
def foo():
    print 'inside foo, next calling bar()'
    # what _bar() does
    raise my_exceptions.NotFound
    # what _baz() does
    raise my_exceptions.BadRequest
    # lots of other method calls which raise other legitimate exceptions
Is there anything that can help me detect that instead of navigating through each method call? (Which goes deep into several files.)
You can't reasonably do this with Python, for a few reasons:
1) The Python primitives don't document precisely what exceptions they can throw. The Python ethos is that anything can throw any exception at any time.
2) Python's dynamic nature makes it very difficult to statically analyze code at all; it's pretty much impossible to know what the code "might" do.
3) All sorts of uninteresting exceptions would have to be in the list, for example, if you have self.foo, then it could raise AttributeError. It would take a very sophisticated analyzer to figure out that foo must exist.
No, because of the dynamic nature of Python. How would your tool work if a function took another function chosen at runtime (very common), or if the code is later monkeypatched?
There's simply no way to know ahead of time (in enough situations for it to be useful), what the interpreter is going to do through static analysis. You effectively have to run the interpreter and see what happens, which of course could change between runs...
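To make the runtime-dispatch point concrete, here is a small illustrative sketch (all names hypothetical): the callable is picked at runtime, so no static tool examining `process` alone could list the exceptions it raises.

```python
import random

def not_found():
    raise KeyError("resource not found")

def bad_request():
    raise ValueError("malformed request")

def process():
    # The handler is only chosen when the code runs, so a static
    # analyzer cannot tell which exception `process` will raise.
    handler = random.choice([not_found, bad_request])
    handler()
```

Monkeypatching either handler after import would change the answer again, which is the other half of the argument above.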
Related
When using try/except blocks in Python, is there a recommendation to delegate it to any methods that might raise an exception, or to catch it in the parent function, or both?
For example, which of the following is preferred?
def my_function():
    s = something.that.might.go_wrong()
    return s

def main():
    try:
        s = my_function()
    except Exception:
        print "Error"
or
def my_function():
    try:
        s = something.that.might.go_wrong()
        return s
    except Exception:
        print "Error"

def main():
    s = my_function()
PEP 8 seems to be quiet on the matter, and I seem to find examples of both cases everywhere.
It really depends on the semantics of the functions in question. In general if you're writing a library, your library probably should handle exceptions that get raised inside the library (optionally re-raising them as new library-specific exceptions).
At the individual function level, though, the main thing to think about is the context/scope in which you want to handle the exception: if there is something reasonably different you could do in exceptional cases within the inner function, it might be useful to handle it there; otherwise, it makes more sense to handle it in the outer function.
For the specific case of writing output, it's often useful to only do that at the highest level, and inner functions only ever (a) return values or (b) raise exceptions. That makes the code easier to test because you don't have to worry about testing side effect output.
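As an illustrative sketch of that separation (function names made up): the inner function only returns a value or raises, and printing happens solely at the top level, so the inner function can be tested without capturing output.

```python
def load_user(user_id, db):
    # Inner function: only returns or raises; no printing here,
    # which keeps it easy to test in isolation.
    if user_id not in db:
        raise KeyError(user_id)
    return db[user_id]

def main():
    db = {1: "alice"}
    try:
        name = load_user(2, db)
        print(name)
    except KeyError as exc:
        # Output happens only at the highest level.
        print("no such user: %s" % exc)
```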
If you are following the rule that one function should handle one task, then you shouldn't handle exceptions in that function; let it fail loudly on unexpected input. The parent function that calls it can handle the exception to give a better experience to the user.
We can look to Python's built-in functions for the Pythonic way: they raise, and leave the handling to the caller.
I always use the second one. Logically, it seems to me that a function's problems should be dealt with inside that function only. This gives the user a clean, hassle-free interface, so you could later put your code in a library.
There could be some cases where you would want to handle exceptions outside the function. For example, if you want to print a particular message when something goes wrong, then you should handle the exception outside the function.
However, you could pass the exception message as an argument to the function if you want to give the user the ability to choose their own message. So I guess the second example (exception inside the function) is more universal and should be preferred.
Is it Pythonic to store the expected exceptions of a function as attributes of the function itself, or is that just a stinking bad practice?
Something like this
class MyCoolError(Exception):
    pass

def function(*args):
    """
    :raises: MyCoolError
    """
    # do something here
    if some_condition:
        raise MyCoolError

function.MyCoolError = MyCoolError
And there in other module
try:
    function(...)
except function.MyCoolError:
    # ...
Pro: Anywhere I have a reference to my function, I have also a reference to the exception it can raise, and I don't have to import it explicitly.
Con: I "have" to repeat the name of the exception to bind it to the function. This could be done with a decorator, but it is also added complexity.
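The decorator variant mentioned in the con could be sketched like this (the `attaches` name is invented for illustration, not an existing API):

```python
def attaches(*exceptions):
    # Hypothetical decorator: bind each exception class to the
    # decorated function under the class's own name.
    def decorator(fn):
        for exc in exceptions:
            setattr(fn, exc.__name__, exc)
        return fn
    return decorator

class MyCoolError(Exception):
    pass

@attaches(MyCoolError)
def function(*args):
    raise MyCoolError

# Callers can now write `except function.MyCoolError:` without
# importing MyCoolError themselves.
```

The exception name is still written twice (once in the class statement, once in the decorator call), but at least the binding is declared next to the function signature.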
EDIT
I am doing this because I add some methods in an irregular way to some classes, where I think a mixin is not worth it. Let's call it "tailored added functionality". For instance, say:
Class A uses method fn1 and fn2
Class B uses method fn2 and fn3
Class C uses fn4 ...
And like this for about 15 classes.
So when I call obj_a.fn2(), I have to explicitly import the exception it may raise (and it is not in the module where classes A, B or C live, but in another one where the shared methods live)... which I think is a little bit annoying. Apart from that, the standard style in the project I'm working on forces one import per line, so it gets pretty verbose.
In some code I have seen exceptions stored as class attributes, and I have found it pretty useful, like:
try:
    obj.fn()
except obj.MyCoolError:
    ...
I think it is not Pythonic. I also think that it does not provide a lot of advantage over the standard way which should be to just import the exception along with the function.
There is a reason (besides helping the interpreter) why Python programs use import statements to state where their code comes from; it helps you find the code of the facilities (e.g. your exception in this case) you are using.
The whole idea has the smell of exception declarations, as they are possible in C++ and partly mandatory in Java (checked exceptions). There are discussions among the language lawyers about whether this is a good idea or a bad one, and in the Python world the designers decided against it, so it is not Pythonic.
It also raises a whole bunch of further questions. What happens if your function A is using another function B which then, later, is changed so that it can throw an exception (a valid thing in Python). Are you willing to change your function A then to reflect that (or catch it in A)? Where would you want to draw the line — is using int(text) to convert a string to int reason enough to "declare" that a ValueError can be thrown?
All in all I think it is not Pythonic, no.
Raymond Hettinger surprised quite a few people when he showed slides 36 and 37. https://speakerdeck.com/pyconslides/transforming-code-into-beautiful-idiomatic-python-by-raymond-hettinger -- Many people knew that the with statement could be used for opening files, but not these new things. Looking at the Python 3.3 docs on threading, it is only mentioned at the very bottom, in section 16.2.8. The lecture implied that using the with statement is best practice.
How is one supposed to figure out if 'with' is supported, what it can be tied to, etc?
Also, how should 'with' be referred to? (threading with statement, python threading lock with statement, ...) What is the vernacular to search to see if 'with' is supported? (We can ask if something is iterable; do we ask if it's 'withable'?)
ref:
http://docs.python.org/2/reference/compound_stmts.html#with 7.5
http://docs.python.org/2/reference/datamodel.html#context-managers 3.4.10
http://docs.python.org/3.1/library/threading.html 16.2.8
First, you don't ask if something is "withable", you ask if it's a "context manager".*
For example, in the docs you linked (which are from 3.1, not 3.3, by the way):
Currently, Lock, RLock, Condition, Semaphore, and BoundedSemaphore objects may be used as with statement context managers.
Meanwhile, if you want to search in the interactive interpreter, there are two obvious things to do:
if hasattr(x, '__exit__'):
    print('x is a context manager')

try:
    with x:
        pass
except AttributeError:
    pass
else:
    print('x is a context manager')
Meanwhile:
help(open) … makes no mention of it
Well, yeah, because open isn't a context manager, it's a function that happens to return something that is a context manager. In 3.3, it can return a variety of different things depending on its parameters; in 2.7, it only returns one thing (a file), but help tells you exactly what it returns, and you can then use help on whichever one is appropriate for your use case, or just look at its attributes, to see that it defines __exit__.
At any rate, realistically, just remember that EAFP applies to debugging and prototyping as well as to your final code. Try writing something with a with statement first. If the expression you're trying to use as a context manager isn't one, you'll get an exception as soon as you try to run that code, which is pretty easy to debug. (It will generally be an AttributeError about the lack of __exit__, but even if it isn't, the fact that the traceback points at your with line ought to tell you the problem.) And if you have an object that seems like it should be usable as a context manager, and isn't, you might want to consider filing a bug/bringing it up on the mailing lists/etc. (There are some classes in the stdlib that weren't context managers until someone complained.)
One last thing: If you're using a type that has a close method, but isn't a context manager, just use contextlib.closing around it:
with closing(legacy_file_like_object):
    ...
… or
with closing(legacy_file_like_object_producer()) as f:
    ...
In fact, you should really look at everything in contextlib. contextmanager is very nifty, and nested is handy if you need to backport 2.7/3.x code to 2.5, and, while closing is trivial to write (if you have contextmanager), using the stdlib function makes your intentions clear.
* Actually, there was a bit of a debate about the naming, and it recurs every so often on the mailing lists. But the docs and help('with') both give a nearly-precise definition: the "context manager" is the result of evaluating the "context expression". So, in with foo(bar) as baz, qux as quux:, foo(bar) and qux are both context managers. (Or maybe in some way the two of them make up a single context manager.)
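As an illustration, a home-grown closing is only a few lines when written with contextmanager; this sketch (with a made-up stand-in class, since no real legacy object is at hand) is essentially what the stdlib helper does:

```python
from contextlib import contextmanager

@contextmanager
def closing(thing):
    # Yield the object to the with block and guarantee close()
    # runs afterwards, even if the body raises.
    try:
        yield thing
    finally:
        thing.close()

class LegacyResource:
    # Stand-in for a file-like object with close() but no __exit__.
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

resource = LegacyResource()
with closing(resource):
    pass
# resource.closed is now True
```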
AFAIK, any class/object that implements the __exit__ method (you also need to implement __enter__):
>>> dir(file)
# notice it includes __enter__ and __exit__
so
def supportsWith(some_ob):
    if "__exit__" in dir(some_ob):  # could just as easily have used hasattr
        return True
Objects that work with Python's with statement are called context managers. In typically Pythonic fashion, whether an object is a context manager depends only on whether you can do "context manager-y" things with it. (This strategy is called duck typing.)
So what constitutes "context manager-y" behavior? There are exactly two things: (1) doing some standard set-up on entering a with block, and (2) doing some standard "tear-down", and maybe also some damage control if things go awry, before exiting the block. That's it.
The details are provided in PEP 343, which introduced the with statement, and in the documentation you linked in the question.
A with block, step by step
But let's run through this step by step.
To start, we need a "context manager". That's any object that provides set-up and tear-down behavior encapsulated in methods respectively called __enter__ and __exit__. If an object provides these methods, it qualifies as a context manager, albeit possibly a poor one if the methods don't do sensible things.
So what happens behind the scenes when the interpreter sees a with block? First, the interpreter looks for __enter__ and __exit__ methods on the object provided after the with statement. If the methods don't exist, then we don't have a context manager, so the interpreter throws an exception.
But if the methods do exist, all is well. We have our context manager, so we move into the block. The interpreter then executes the context manager's __enter__ method and assigns the result to the variable that follows the as clause (if there is one; otherwise the result is thrown away). Next, the body of the with block is executed. When that's done, the context manager's __exit__ method is called with three arguments describing any exception that occurred while executing the block's body (or None for all three if nothing went wrong), and it cleans things up.
Here's a line-by-line walk-through:
with man as x:       # 1) Look for `man.__enter__` and `man.__exit__`, then ...
                     # 2) Execute `x = man.__enter__()`, then ...
    do_something(x)  # 3) Execute the code in the body of the block, ...
                     # 4) If something blows up, note the exception info, ...
                     # 5) Last, this (always!) happens:
                     #    `man.__exit__(exc_type, exc_value, traceback)`.
carry_on()
And that's that.
What's the point?
The only subtlety here is that the context manager's __exit__ method is always executed, even if an uncaught exception is thrown in the body of the with block. In that case, the exception's type, value, and traceback are passed as the three arguments to the __exit__ method, which can use that information to provide damage control.
So a with block is just an abstraction that lets us shunt off set-up, tear-down, and damage-control code (i.e., "context management") into a couple of methods, which can then be called behind the scenes. This both encourages us to factor out boilerplate and provides more concise, readable code that highlights the core control flow and hides implementation details.
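A minimal context manager that merely records these calls makes the sequence concrete (a sketch written for illustration, not anything from the stdlib):

```python
class Manager:
    # Minimal context manager that records the order of calls.
    def __init__(self):
        self.events = []
    def __enter__(self):
        self.events.append("enter")
        return self  # this is what an `as` target gets bound to
    def __exit__(self, exc_type, exc_value, traceback):
        # On a clean exit all three arguments are None; on an
        # exception they describe it.
        self.events.append("exit")
        return False  # False means: do not suppress the exception

man = Manager()
with man as x:
    x.events.append("body")
# man.events == ["enter", "body", "exit"]
```

Returning True from __exit__ would swallow an exception raised in the body; returning False (or None) lets it propagate after the tear-down runs.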
There is a list of standard Python exceptions that we should watch out for, but I don't think these are the ones we should raise ourselves, because they are rarely applicable.
I'm curious if there exists a list within standard python library, with exceptions similar to .NET's ApplicationException, ArgumentNullException, ArgumentOutOfRangeException, InvalidOperationException — exceptions that we can raise ourselves?
Or is there different, more pythonic way to handle common error cases, than raising standard exceptions?
EDIT: I'm not asking on how to handle exceptions but what types I can and should raise where needed.
If the error matches the description of one of the standard python exception classes, then by all means throw it.
Common ones to use are TypeError and ValueError; the list you linked to is already the standard list.
If you want application-specific ones, then subclassing Exception or one of its descendants is the way to go.
To reference the examples you gave from .NET:
ApplicationException is closest to RuntimeError
ArgumentNullException will probably be an AttributeError (try to call the method you want and let Python raise the exception, a la duck typing)
ArgumentOutOfRangeException is just a more specific ValueError
InvalidOperationException could be any number of roughly equivalent exceptions from the Python standard lib.
Basically, pick one that reflects whatever error it is you're raising based on the descriptions from the http://docs.python.org/library/exceptions.html page.
First, Python raises standard exceptions for you.
It's better to ask forgiveness than to ask permission
Simply attempt the operation and let Python raise the exception. Don't bracket everything with if would_not_work(): raise Exception. Never worth writing. Python already does this in all cases.
If you think you need to raise a standard exception, you're probably writing too much code.
You may have to raise ValueError.
def someFunction(arg1):
    if arg1 <= 0.0:
        raise ValueError("Guess Again.")
Once in a while, you might need to raise a TypeError, but it's rare.
def someFunctionWithConstraints(arg1):
    if isinstance(arg1, float):
        raise TypeError("Can't work with float and can't convert to int, either")
etc.
Second, you almost always want to create your own, unique exceptions.
class MyException(Exception):
    pass
That's all it takes to create something distinctive and unique to your application.
I seem to recall being trained by the documentation that it is ok to raise predefined exceptions, as long as they are appropriate. For example, the recommended way to terminate is no longer to call exit() but rather to raise SystemExit.
Another example given is to reuse the IndexError exception on custom container types.
Of course, your application should define its own exceptions rather than to actually repurpose system exceptions. I'm just saying there's no prohibition from reusing them where appropriate.
The Pythonic way is just let the exceptions pass through from Python itself. For example, instead of:
def foo(arg):
    if arg is None:
        raise SomeNoneException
    bar = arg.param
Just do:
def foo(arg):
    bar = arg.param
If arg is None or doesn't have the param attribute, you will get an exception from Python itself.
In the Python glossary this is called "EAFP":
Easier to ask for forgiveness than permission. This common Python coding style assumes the existence of valid keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is characterized by the presence of many try and except statements. The technique contrasts with the LBYL (Look Before You Leap) style common to many other languages such as C.
And it works well in tandem with Python's inherent duck typing philosophy.
This doesn't mean you should not create exceptions of your own, of course, just that you don't need to wrap the already existing Python exceptions.
For your own exceptions, create classes deriving from Exception and throw them when it's suitable.
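One common way to organize such exceptions, sketched here with invented names, is a small hierarchy rooted at a single application base class, so callers can catch broadly or narrowly:

```python
class AppError(Exception):
    # Hypothetical base class for an application's own exceptions.
    pass

class ConfigError(AppError):
    pass

class NetworkError(AppError):
    pass

def load_config(path):
    # Sketch: raise the most specific subclass that applies.
    raise ConfigError("missing file: %s" % path)

try:
    load_config("settings.ini")
except AppError as exc:
    # Catching the base class handles both ConfigError and
    # NetworkError; catching a subclass handles just that one.
    message = str(exc)
```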
My workplace has imposed a rule of no raising of exceptions (catching is allowed). If I have code like this:
def f1():
    if bad_thing_happen():
        raise Exception('bad stuff')
    ...
    return something
I could change it to
def f1():
    if bad_thing_happen():
        return [-1, None]
    ...
    return [0, something]
f1 caller would be like this
def f1_caller():
    code, result = f1(param1)
    if code < 0:
        return code
    actual_work1()
    # call f1 again
    code, result = f1(param2)
    if code < 0:
        return code
    actual_work2()
    ...
Are there more elegant ways than this in Python ?
Exceptions in Python are not something to be avoided, and they are often a straightforward way to solve problems. Additionally, an exception carries a great deal of information with it that can help quickly locate (via the stack trace) and identify problems (via the exception class or message).
Whoever came up with this blanket policy was surely thinking of another language (perhaps C++?) where throwing exceptions is a more expensive operation (and will reduce performance if your code is executing on a 20-year-old computer).
To answer your question: the alternative is to return an error code. This means that you are mixing function results with error handling, which raises (ha!) its own problems. However, returning None is often a perfectly reasonable way to indicate function failure.
Returning None is reasonably common and works well conceptually. If you are expecting a return value, and you get none, that is a good indication that something went wrong.
Another possible approach, if you are expecting to return a list (or dictionary, etc.) is to return an empty list or dict. This can easily be tested for using if, because an empty container evaluates to False in Python, and if you are going to iterate over it, you may not even need to check for it (depending on what you want to do if the function fails).
Of course, these approaches don't tell you why the function failed. So you could return an exception instance, such as return ValueError("invalid index"). Then you can test for particular exceptions (or Exceptions in general) using isinstance() and print them to get decent error messages. (Or you could provide a helper function that tests a return code to see if it's derived from Exception.) You can still create your own Exception subclasses; you would simply be returning them rather than raising them.
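A sketch of that return-an-exception-instance approach (function and message invented for illustration):

```python
def find_index(items, target):
    # Instead of raising on failure, return an exception instance
    # describing what went wrong.
    try:
        return items.index(target)
    except ValueError:
        return ValueError("%r not found" % (target,))

result = find_index(["a", "b"], "c")
if isinstance(result, Exception):
    error_message = str(result)  # decent error message, no raise
else:
    position = result
```

Note the caller must remember the isinstance check; forgetting it silently treats the exception object as a result, which is one of the complications the answer above alludes to.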
Finally, I would work toward getting this ridiculous policy changed, as exceptions are an important part of how Python works, have low overhead, and will be expected by anyone using your functions.
You have to use return codes. Other alternatives would involve mutable global state (think C's errno) or passing in a mutable object (such as a list), but you almost always want to avoid both in Python. Perhaps you could try explaining to them how exceptions let you write better post-conditions instead of adding complication to return values, but are otherwise equivalent.