PyCharm warns about this code, saying the last return is unreachable:
def foo():
    with open(...):
        return 1
    return 0
I expect that the second return would execute if open() failed. Who's right?
PyCharm is right. If open() fails, an exception is raised, and neither return is reached.
with does not somehow protect you from an exception in the expression that produces the context manager. The expression after with is expected to produce a context manager, at which point its __exit__ method is stored and its __enter__ method is called. The only possible outcomes here are that the context manager is successfully produced and entered, or that an exception is raised. At no point will with swallow an exception at this stage and silently skip the block.
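A minimal sketch of the failure mode, using a hypothetical missing file ('no_such_file.txt' is an illustrative name, not from the question):

def foo():
    # If open() raises (e.g. FileNotFoundError), neither return runs;
    # the exception propagates to the caller before the with block is entered.
    with open('no_such_file.txt') as f:
        return 1
    return 0  # unreachable: we either returned 1 or an exception escaped

try:
    foo()
except FileNotFoundError as e:
    print('open() failed, so no return was reached:', e)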
Could you please help me understand the difference between these two syntaxes in Django tests (Python 3.7):
def test_updateItem_deletion(self):
    # some logic in here
    with self.assertRaises(OrderItem.DoesNotExist):
        OrderItem.objects.get(id=self.product_1.id)
And:
# all the same, but self.assertRaises not wrapped in 'with'
self.assertRaises(OrderItem.DoesNotExist, OrderItem.objects.get(id=self.product_1.id))
The first one worked and test passed. But the second one raised:
models.OrderItem.DoesNotExist: OrderItem matching query does not exist.
Does it somehow replicate the behaviour of a try/except block?
Thanks a lot!
The first one catches the exception because assertRaises is executed as a context manager. In the second one, nothing catches the exception: the argument OrderItem.objects.get(id=self.product_1.id) is evaluated eagerly, so DoesNotExist is raised before assertRaises even runs.
This works through the context manager protocol. When you use the with statement, an __exit__ method is called at the end of the with block and receives any exception raised during execution of the block; that is where assertRaises performs its check.
That __exit__ method is never invoked in your direct call, so the exception is not captured.
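For completeness, assertRaises also has a non-with form, but it expects the callable and its arguments separately, so that it can invoke the callable inside its own try block. A sketch using the names from the question:

# Correct: pass the callable and its arguments separately;
# assertRaises calls it itself and catches the expected exception.
self.assertRaises(OrderItem.DoesNotExist, OrderItem.objects.get, id=self.product_1.id)

# Incorrect: get() runs (and raises) while the argument list is being
# evaluated, before assertRaises is ever entered.
self.assertRaises(OrderItem.DoesNotExist, OrderItem.objects.get(id=self.product_1.id))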
Here you will find more info about this:
Official Doc
Python Tips
Let's say I have this function:
def a():
    try:
        b
    except Error:
        raise Error
    return True
Is it considered good practice to return True only on success, and otherwise stop execution by raising an error?
Think about your function and error processing as separate concerns. A function should either return an object as the result (or automatically None with no return statement) or perform some action without returning an explicit result. Avoid mixing the two.
If an error that your function code can detect happens, raise an exception at the appropriate abstraction level, so that the client code can wrap the call in a try block and treat the error case properly, if it wishes to do so.
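A minimal sketch of that separation, with ConfigError and load_config as hypothetical names chosen for illustration:

class ConfigError(Exception):
    # Domain-level error the caller can reason about.
    pass

def load_config(path):
    # Perform the action; raise on failure instead of returning a flag.
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        # Re-raise at the abstraction level of this function.
        raise ConfigError('cannot load config: %s' % path) from e

# Client code decides whether and how to handle the error:
try:
    config = load_config('app.ini')
except ConfigError as e:
    print(e)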
Assume I have the following code:
with open('somefile.txt') as my_file:
    # some processing
    my_file.close()
Is my_file.close() above redundant?
Yes. Exiting the with block will close the file.
However, that is not necessarily true for objects that are not files. Exiting the context should normally trigger an operation conceptually equivalent to "close", but in fact __exit__ can be written to execute any code the object wishes.
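A minimal sketch of that point, with Timer as a hypothetical class whose __exit__ does something other than closing:

import time

class Timer(object):
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Nothing is "closed" here; __exit__ simply runs whatever code it wants.
        print('block took %.3f seconds' % (time.time() - self.start))
        return False  # a falsy return means exceptions are not suppressed

with Timer():
    sum(range(1000000))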
Yes it is; besides, there is no guarantee that your close() will always be executed (for instance, if an exception occurs):
with open('somefile.txt') as my_file:
    1/0  # raise an exception
    my_file.close()  # your close() call is never going to be called
But the __exit__() method of the context manager is always executed, because the with statement follows the try...except...finally pattern:
The with statement is used to wrap the execution of a block with methods defined by a context manager (see section With Statement Context Managers). This allows common try...except...finally usage patterns to be encapsulated for convenient reuse.
The context manager’s __exit__() method is invoked. If an exception caused the suite to be exited, its type, value, and traceback are passed as arguments to __exit__()
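Roughly speaking, the with statement above expands to the following hand-written equivalent (a simplified sketch; the full expansion in the language reference handles a few more details):

import sys

mgr = open('somefile.txt')
my_file = mgr.__enter__()
try:
    pass  # the body of the with block goes here
except BaseException:
    # __exit__ sees the exception's type, value, and traceback;
    # a falsy return value re-raises the exception.
    if not mgr.__exit__(*sys.exc_info()):
        raise
else:
    # No exception: __exit__ still runs (closing the file) with None arguments.
    mgr.__exit__(None, None, None)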
You can check that the file has been closed right after the with statement using the closed attribute:
>>> with open('somefile.txt') as f:
...     pass
...
>>> f.closed
True
Source for my answer:
Understanding Python's "with" statement
The with statement creates a runtime context. Python creates the stream object of the file and tells it that it is entering a runtime context. When the with code block is completed, Python tells the stream object that it is exiting the runtime context, and the stream object calls its own close() method.
Yes, the with statement takes care of that, as you can see in the documentation:
The context manager’s __exit__() method is invoked. If an exception caused the suite to be exited, its type, value, and traceback are passed as arguments to __exit__().
In the case of files, the __exit__() method will close the file.
I have a scenario with Tornado where I have a coroutine that is called from a non-coroutine or without yielding, yet I need to propagate the exception back.
Imagine the following methods:
@gen.coroutine
def create_exception(with_yield):
    if with_yield:
        yield exception_coroutine()
    else:
        exception_coroutine()

@gen.coroutine
def exception_coroutine():
    raise RuntimeError('boom')

def no_coroutine_create_exception(with_yield):
    if with_yield:
        yield create_exception(with_yield)
    else:
        create_exception(with_yield)
Calling:
try:
    # Throws exception
    yield create_exception(True)
except Exception as e:
    print(e)
will properly raise the exception. However, none of the following raise the exception:
try:
    # none of these throw the exception at this level
    yield create_exception(False)
    no_coroutine_create_exception(True)
    no_coroutine_create_exception(False)
except Exception as e:
    print('This is never hit')
The latter are variants similar to my actual problem: I have code outside my control calling coroutines without using yield, and in some cases the callers are not coroutines themselves. Either way, any exceptions they generate are swallowed until Tornado reports them as "future exception not retrieved."
This is pretty contrary to Tornado's intent; their documentation basically states that you need to use yield/coroutine through the entire stack for this to work the way I want without hackery/trickery.
I can change the way the exception is raised (i.e. modify exception_coroutine), but I cannot change several of the intermediate methods.
Is there something I can do in order to force the exception to be raised throughout the Tornado stack, even if it is not properly yielded? Basically to properly raise the exception in all of the last three situations?
This is complicated because I cannot change the code that is causing this situation. I can only change exception_coroutine for example in the above.
What you're asking for is impossible in Python because the decision to yield or not is made by the calling function after the coroutine has finished. The coroutine must return without raising an exception so it can be yielded, and after that it is no longer possible for it to raise an exception into the caller's context in the event that the Future is not yielded.
The best you can do is detect the garbage collection of a Future, but at that point you can do nothing but log it (this is how the "future exception not retrieved" message works).
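As a sketch of that mechanism (an illustration of the idea, not Tornado's actual implementation), a future-like object can log an unretrieved exception when it is garbage collected:

class LoggingFuture(object):  # hypothetical name, for illustration only
    def __init__(self):
        self._exc = None
        self._retrieved = False

    def set_exception(self, exc):
        self._exc = exc

    def exception(self):
        self._retrieved = True
        return self._exc

    def __del__(self):
        # By the time __del__ runs, no caller is left to receive the
        # exception; logging it is the only option.
        if self._exc is not None and not self._retrieved:
            print('future exception not retrieved:', self._exc)

f = LoggingFuture()
f.set_exception(RuntimeError('boom'))
del f  # prints the warning when the object is collected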
If you're curious why this isn't working: no_coroutine_create_exception contains a yield statement, so it is a generator function, and calling it does not execute its code; it only creates a generator object:
>>> no_coroutine_create_exception(True)
<generator object no_coroutine_create_exception at 0x101651678>
>>> no_coroutine_create_exception(False)
<generator object no_coroutine_create_exception at 0x1016516d0>
Neither of the calls above executes any Python code; each merely creates a generator that must be iterated.
You'd have to make a blocking function that starts the IOLoop and runs it until your coroutine finishes:
from tornado import ioloop

def exception_blocking():
    return ioloop.IOLoop.current().run_sync(exception_coroutine)

exception_blocking()
(The IOLoop acts as a scheduler for multiple non-blocking tasks, and the gen.coroutine decorator is responsible for iterating the coroutine until completion.)
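With that helper in place, the exception surfaces the way any blocking call would raise it; a minimal usage sketch:

try:
    # run_sync drives the coroutine to completion and re-raises its exception here
    exception_blocking()
except RuntimeError as e:
    print(e)  # boom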
However, I think I'm likely answering your immediate question but merely enabling you to proceed down an unproductive path. You're almost certainly better off using async code or blocking code throughout instead of trying to mix them.
I am writing a class in Python and part of the code deals with a server. Therefore I need to deal with exceptions like ExpiredSession or ConnectionError.
Instead of writing exception-handling code for every try/except block, I have a single function in the class to deal with the exceptions, something like this (inside the class definition):
def job_a(self):
    try:
        ...  # do something
    except Exception as e:
        # maybe print the error on screen
        self.exception_handling(e)

def job_b(self):
    try:
        ...  # do something else
    except Exception as e:
        # maybe print the error on screen
        self.exception_handling(e)

def exception_handling(self, e):
    if isinstance(e, ExpiredSession):
        # deal with expired session
        self.reconnect()
    elif isinstance(e, ConnectionError):
        ...  # deal with connection error
    else:
        ...  # other exceptions
I am not sure whether this kind of code would cause any problems, because I haven't seen code that does this. A possible memory leak, perhaps? (I notice the memory usage grows, though slowly, as more and more errors/exceptions occur, and eventually I have to restart the process before it eats all my memory.) I'm not sure this is the cause.
Is it a good practice to pass exceptions to a single function?
This is a good use case for a context manager. You can see some examples of using context managers for error handling here. The contextmanager decorator allows you to write context managers concisely in the form of a single function. Here's a simple example:
import contextlib

class Foo(object):
    def meth1(self):
        with self.errorHandler():
            1/0

    def meth2(self):
        with self.errorHandler():
            2 + ""

    def meth3(self):
        with self.errorHandler():
            # an unhandled ("unexpected") kind of exception
            ""[3]

    @contextlib.contextmanager
    def errorHandler(self):
        try:
            yield
        except TypeError:
            print("A TypeError occurred")
        except ZeroDivisionError:
            print("Divide by zero occurred")
Then:
>>> x = Foo()
>>> x.meth1()
Divide by zero occurred
>>> x.meth2()
A TypeError occurred
The with statement allows you to "offload" the error handling into a separate function where you catch the exceptions and do what you like with them. In your "real" functions (i.e., the functions that do the work but may raise the exceptions), you just need a with statement instead of an entire block of complicated try/except statements.
An additional advantage of this approach is that if an unforeseen exception is raised, it will propagate up normally with no extra effort:
>>> x.meth3()
Traceback (most recent call last):
File "<pyshell#394>", line 1, in <module>
x.meth3()
File "<pyshell#389>", line 12, in meth3
""[3]
IndexError: string index out of range
In your proposed solution, on the other hand, the exception is already caught in each function and the actual exception object is passed to the handler. If the handler gets an unexpected error, it has to re-raise it explicitly. Using a context manager, unexpected exceptions get their ordinary behavior with no extra work required.