I have been using Python coroutines instead of threading with some success. It occurred to me that I might have a use for a coroutine that knows about itself, so it can send itself something. I found that this is not possible (in Python 3.3.3 anyway). To test, I wrote the following code:
def recursive_coroutine():
    rc = (yield)
    rc.send(rc)

reco = recursive_coroutine()
next(reco)
reco.send(reco)
This raises an exception:
Traceback (most recent call last):
  File "rc.py", line 7, in <module>
    reco.send(reco)
  File "rc.py", line 3, in recursive_coroutine
    rc.send(rc)
ValueError: generator already executing
Although the error is clear, it feels like this should be possible. I never got as far as coming up with a useful, realistic application of a recursive coroutine, so I'm not looking for an answer to a specific problem. Is there a reason, other than perhaps implementation difficulty, that this is not possible?
This isn't possible because for send to work, the coroutine has to be waiting for input. yield pauses a coroutine, and send and next unpause it. If the generator is calling send, it can't simultaneously be paused and waiting for input.
If you could send to an unpaused coroutine, the semantics would get really weird. Suppose the rc.send(rc) line worked. Then send would continue the execution of the coroutine from where it left off, which is the send call... but there is no value for send to return, because we didn't hit a yield.
Suppose we return some dummy value and continue. Then the coroutine would execute until the next yield. At that point, what happens? Does execution rewind so send can return the yielded value? Does the yield discard the value? Where does control flow go? There's no good answer.
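One way to approximate a coroutine that "sends to itself" is a small trampoline: the coroutine yields a request describing what it wants sent, and the outer driver performs the send once the coroutine is paused again. Here is a minimal sketch of that idea; the name self_aware and the "send_me" tag are just for illustration.

def self_aware():
    rc = (yield)                   # receive a reference to ourselves
    # We cannot call rc.send(...) here, because we are the running generator.
    # Instead, yield a request and let the driver perform the send for us.
    echoed = yield ("send_me", "hello")
    print("got back:", echoed)

reco = self_aware()
next(reco)                         # advance to the first yield
request = reco.send(reco)          # hand the coroutine a reference to itself
if request[0] == "send_me":
    try:
        reco.send(request[1])      # the driver sends on the coroutine's behalf
    except StopIteration:
        pass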
I am currently working with asyncio on Python 3.7.x (not sure about the exact version). Right now I am trying to schedule a task, but I am unable to get any output, even when running the loop forever. Here is the code I currently have:
import asyncio

async def print_now():
    print("Hi there")

loop = asyncio.get_event_loop()
loop.call_later(print_now())
loop.run_until_complete(asyncio.sleep(1))
This gives the following error:
Traceback (most recent call last):
  File "D:\Coding\python\async\main.py", line 7, in <module>
    loop.call_later(print_now())
TypeError: call_later() missing 1 required positional argument: 'callback'
The callback in call_later() is print_now; I've tried both print_now and print_now().
I have also tried using loop.run_forever() instead of loop.run_until_complete(), and so far I haven't gotten anything.
Sometimes I get no output, and sometimes a different error.
First, yes, you're missing a delay argument. The first argument is supposed to be the delay, and the second one is the callback (docs).
loop.call_later(delay, callback, *args, context=None)
Second, the callback is supposed to be a function. What you're passing is print_now(), which is going to evaluate to None. You might then find out that
'NoneType' object is not callable
So you need to pass print_now, without parentheses, as the callback. That way you're passing the function itself instead of the result of calling it.
Third, async functions are supposed to be awaited on. Your scenario doesn't seem to need that, so just drop the async keyword.
When you call an awaitable function, you create a new coroutine object. The code inside the function won't run until you then await on the function or run it as a task.
From this post, which you might want to check out.
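Putting those three points together, a corrected version might look like the sketch below; the 0.5-second delay is an arbitrary value chosen for illustration.

import asyncio

def print_now():                    # plain function: call_later expects a callback, not a coroutine
    print("Hi there")

loop = asyncio.get_event_loop()
loop.call_later(0.5, print_now)     # delay first, then the callback without parentheses
loop.run_until_complete(asyncio.sleep(1))

This prints "Hi there" roughly half a second in, while the loop is still running the one-second sleep.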
I'm not asking how to cancel such a task. It is not possible, and it is explained here: https://stackoverflow.com/a/33578893/5378816
My concern is this change in wait_for:
Changed in version 3.7: When aw is cancelled due to a timeout, wait_for waits for aw to be cancelled. Previously, it raised asyncio.TimeoutError immediately.
Don't get me wrong, I like it, it is an improvement.
However, this program will now hang in wait_for (Python 3.7):
import asyncio

async def uncancellable():
    while True:
        try:
            await asyncio.sleep(99)
        except asyncio.CancelledError:
            print("sorry!")

TIMEOUT = 1.0

async def test():
    task = asyncio.get_event_loop().create_task(uncancellable())
    try:
        await asyncio.wait_for(task, TIMEOUT)
    except asyncio.TimeoutError:
        print("timeout")

asyncio.get_event_loop().run_until_complete(test())
An uncancellable task is a programming error. But if I need to be defensive, how do I prevent the wait_for from hanging indefinitely?
I tried this, with the first timeout applied before cancelling and the second before giving up:
await asyncio.wait_for(asyncio.wait_for(task, TIMEOUT1), TIMEOUT1+TIMEOUT2)
There is one tiny issue I don't much care about: when it raises asyncio.TimeoutError, I cannot tell whether it happened at the first or the second timeout. Basically I think it works, but is it really correct?
But if I need to be defensive, how do I prevent the wait_for from hanging indefinitely?
I don't think treating each task as potentially uncancellable is a good idea.
Usually you just assume this situation won't happen, and that's OK because, yes, an uncancellable task is a programming error, and it's not the kind of error you expect to see often. It's the same way you usually don't expect some inner code to suppress KeyboardInterrupt or any other BaseException.
There's nothing wrong with expecting third-party code to follow certain contracts (like in the example above, or, say, not calling sys.exit() at random). Otherwise you'd have to write much more code and probably still not cover all possible cases.
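That said, if you do want the defensive double timeout from the question, one way to package it is a small helper that distinguishes the two cases after the fact. This is only a sketch relying on the 3.7 behaviour quoted above, and wait_for_with_grace is a made-up name.

import asyncio

async def wait_for_with_grace(task, timeout, grace=1.0):
    # The inner timeout triggers cancellation; the outer one bounds how long
    # we are willing to wait for that cancellation to take effect.
    try:
        return await asyncio.wait_for(asyncio.wait_for(task, timeout), timeout + grace)
    except asyncio.TimeoutError:
        if task.done():
            raise                   # ordinary timeout: the task was cancelled in time
        # the task is still running, i.e. it ignored cancellation
        raise RuntimeError("task did not respond to cancellation") from None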
For any possible try-finally block in Python, is it guaranteed that the finally block will always be executed?
For example, let’s say I return while in an except block:
try:
    1/0
except ZeroDivisionError:
    return
finally:
    print("Does this code run?")
Or maybe I re-raise an Exception:
try:
    1/0
except ZeroDivisionError:
    raise
finally:
    print("What about this code?")
Testing shows that finally does get executed for the above examples, but I imagine there are other scenarios I haven't thought of.
Are there any scenarios in which a finally block can fail to execute in Python?
"Guaranteed" is a much stronger word than any implementation of finally deserves. What is guaranteed is that if execution flows out of the whole try-finally construct, it will pass through the finally to do so. What is not guaranteed is that execution will flow out of the try-finally.
A finally in a generator or async coroutine might never run, if the object never executes to conclusion. There are a lot of ways that could happen; here's one:
def gen(text):
    try:
        for line in text:
            try:
                yield int(line)
            except:
                # Ignore blank lines - but catch too much!
                pass
    finally:
        print('Doing important cleanup')

text = ['1', '', '2', '', '3']
if any(n > 1 for n in gen(text)):
    print('Found a number')
print('Oops, no cleanup.')
Note that this example is a bit tricky: when the generator is garbage collected, Python attempts to run the finally block by throwing in a GeneratorExit exception, but here we catch that exception and then yield again, at which point Python prints a warning ("generator ignored GeneratorExit") and gives up. See PEP 342 (Coroutines via Enhanced Generators) for details.
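You can see the same effect without relying on garbage collection by closing such a generator explicitly; stubborn here is just an illustrative name.

def stubborn():
    try:
        while True:
            try:
                yield
            except BaseException:
                pass                # this also swallows GeneratorExit
    finally:
        print('cleanup')

g = stubborn()
next(g)
g.close()   # RuntimeError: generator ignored GeneratorExit; 'cleanup' never prints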
Other ways a generator or coroutine might not execute to conclusion include if the object is just never GC'ed (yes, that's possible, even in CPython), or if an async with awaits in __aexit__, or if the object awaits or yields in a finally block. This list is not intended to be exhaustive.
A finally in a daemon thread might never execute if all non-daemon threads exit first.
os._exit will halt the process immediately without executing finally blocks.
os.fork may cause finally blocks to execute twice. As well as just the normal problems you'd expect from things happening twice, this could cause concurrent access conflicts (crashes, stalls, ...) if access to shared resources is not correctly synchronized.
Since multiprocessing uses fork-without-exec to create worker processes when using the fork start method (the default on Unix), and then calls os._exit in the worker once the worker's job is done, finally and multiprocessing interaction can be problematic (example).
A C-level segmentation fault will prevent finally blocks from running.
kill -SIGKILL will prevent finally blocks from running. SIGTERM and SIGHUP will also prevent finally blocks from running unless you install a handler to control the shutdown yourself; by default, Python does not handle SIGTERM or SIGHUP.
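A minimal sketch of such a handler, assuming a POSIX system, is to turn SIGTERM into an ordinary SystemExit so that finally blocks (and atexit handlers) get a chance to run:

import signal
import sys

def handle_sigterm(signum, frame):
    # Raise SystemExit in the main thread so normal unwinding happens.
    sys.exit(128 + signum)

signal.signal(signal.SIGTERM, handle_sigterm)

try:
    signal.pause()                  # wait for a signal (POSIX only)
finally:
    print("cleanup now runs on SIGTERM, but still not on SIGKILL")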
An exception in finally can prevent cleanup from completing. One particularly noteworthy case is if the user hits control-C just as we're starting to execute the finally block. Python will raise a KeyboardInterrupt and skip every line of the finally block's contents. (KeyboardInterrupt-safe code is very hard to write).
If the computer loses power, or if it hibernates and doesn't wake up, finally blocks won't run.
The finally block is not a transaction system; it doesn't provide atomicity guarantees or anything of the sort. Some of these examples might seem obvious, but it's easy to forget such things can happen and rely on finally for too much.
Yes. Finally always wins.
The only way to defeat it is to halt execution before finally: gets a chance to execute (e.g. crash the interpreter, turn off your computer, suspend a generator forever).
I imagine there are other scenarios I haven't thought of.
Here are a couple more you may not have thought about:
def foo():
    # finally always wins
    try:
        return 1
    finally:
        return 2

def bar():
    # even if it has to eat an unhandled exception, finally wins
    try:
        raise Exception('boom')
    finally:
        return 'no boom'
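Calling them shows the finally clause winning in both cases:

>>> foo()
2
>>> bar()
'no boom'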
Depending on how you quit the interpreter, sometimes you can "cancel" finally, but not like this:
>>> import sys
>>> try:
...     sys.exit()
... finally:
...     print('finally wins!')
...
finally wins!
$
Using the precarious os._exit (this falls under "crash the interpreter" in my opinion):
>>> import os
>>> try:
...     os._exit(1)
... finally:
...     print('finally!')
...
$
I'm currently running this code, to test if finally will still execute after the heat death of the universe:
from time import sleep

try:
    while True:
        sleep(1)
finally:
    print('done')
However, I'm still waiting on the result, so check back here later.
According to the Python documentation:
No matter what happened previously, the final-block is executed once the code block is complete and any raised exceptions handled. Even if there's an error in an exception handler or the else-block and a new exception is raised, the code in the final-block is still run.
It should also be noted that if there are multiple return statements, including one in the finally block, then the finally block return is the only one that will execute.
Well, yes and no.
What is guaranteed is that Python will always try to execute the finally block. In the case where you return from the block or raise an uncaught exception, the finally block is executed just before actually returning or raising the exception.
(This is something you could have checked yourself by simply running the code in your question.)
The only case I can imagine where the finally block will not be executed is when the Python interpreter itself crashes, for example inside C code or because of a power outage.
I found this one without using a generator function:
import multiprocessing
import time

def fun(arg):
    try:
        print("tried " + str(arg))
        time.sleep(arg)
    finally:
        print("finally cleaned up " + str(arg))
    return foo

list = [1, 2, 3]
multiprocessing.Pool().map(fun, list)
The sleep can be any code that might run for inconsistent amounts of time.
What appears to be happening here is that the first parallel process to finish leaves the try block successfully, but then attempts to return from the function a value (foo) that hasn't been defined anywhere, which causes an exception. That exception kills the map without allowing the other processes to reach their finally blocks.
Also, if you add the line bar = bazz just after the sleep() call in the try block, then the first process to reach that line throws an exception (because bazz isn't defined), which causes its own finally block to run but then kills the map, so the other try blocks disappear without reaching their finally blocks, and the first process never reaches its return statement either.
What this means for Python multiprocessing is that you can't trust the exception-handling mechanism to clean up resources in all processes if even one of the processes can have an exception. Additional signal handling or managing the resources outside the multiprocessing map call would be necessary.
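One way to arrange that, assuming the resource can be owned by the parent process, is to keep the cleanup outside the workers entirely and wrap the map call itself; this is just a sketch of the idea.

import multiprocessing
import time

def fun(arg):
    print("tried " + str(arg))
    time.sleep(arg)
    return arg

if __name__ == "__main__":
    try:
        results = multiprocessing.Pool().map(fun, [1, 2, 3])
    finally:
        # Runs in the parent even if a worker raised and the map was aborted.
        print("cleaning up in the parent")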
You can also use an if statement inside finally. The example below tries to open a network connection; the finally block then decides what to do based on whether it connected:
try:
    reader1, writer1 = loop.run_until_complete(self.init_socket(loop))
    x = 'connected'
except:
    print("cant connect server transfer")  # open popup
    x = 'failed'
finally:
    if x == 'connected':
        with open('text_file1.txt', "r") as f:
            file_lines = eval(str(f.read()))
    else:
        print("not connected")
I have a leetle test program which does not work unless I do a client.flush() before client.close(). A naive reading of nats/aio/client.py seems to suggest that close() does some kind of flush(), so I do not understand why my test program fails. Is it necessary to always flush() before close()? Example programs don't seem to indicate so.
I can see that close() calls _flush_pending(), but this does something quite different from flush():
flush() calls _send_ping() to send a PING and waits for a PONG response. _send_ping() writes directly to self._io_writer, and then calls _flush_pending().
_flush_pending() pushes a None (anything will do, I guess) into self._flush_queue. This presumably wakes the _flusher() and causes it to write everything in self._pending to self._io_writer.
publish() calls _send_command() to push the messages onto self._pending, and then also calls _flush_pending() to cause the _flusher() to write everything.
The test program:
#!/usr/bin/env python3.5
import asyncio
import nats.aio.client
import nats.aio.errors

async def send_message(loop):
    mq_url = "nats://nats:password@127.0.0.1:4222"
    client = nats.aio.client.Client()
    await client.connect(io_loop=loop, servers=[mq_url])
    await client.publish("test_subject", "test1".encode())
    #await client.flush()
    await client.close()

def main():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(send_message(loop))
    loop.close()

if __name__ == '__main__':
    main()
FWIW if I send a large number of messages then, at some point (haven't yet worked out the exact conditions), messages are sent. We noticed the behaviour when processing a collection of files and sending a message for each line in the file: some files were making it through, others weren't, and it turned out it was the larger files (more lines) that made it. So it looks as though some internal buffer is filled and this forces a flush.
Looks like it was probably a bug, which has just been fixed: https://github.com/nats-io/asyncio-nats/pull/35
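Until you are on a client version with that fix, the workaround from the question is simply to flush explicitly before closing, inside send_message above:

    await client.publish("test_subject", "test1".encode())
    await client.flush()   # PING/PONG round trip forces the pending buffer to be written
    await client.close()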
My code looks like this:
... # class Site(Resource)
def render_POST(self, request):
    otherclass.doAssync(request.args)
    print '1'
    return "done"  # that returns the HTTP response, always the same.
...

def doAssync(self, msg):
    d = defer.Deferred()
    reactor.callLater(0, self.doStuff, d, msg)
    d.addCallback(self.sucess)

def doStuff(self, d, msg):
    # do some stuff
    time.sleep(2)  # just for example
    d.callback('ok')

def sucess(msg):
    print msg
The output:
1
ok
So far, so good, but the HTTP response (return "done") only happens after the delay (time.sleep(2)).
I can tell this, because the browser keeps 'loading' for 2 seconds.
What am I doing wrong?
What you are doing wrong is running a blocking call (time.sleep(2)), while Twisted expects you to only perform non-blocking operations. Things that don't wait. Because you have that time.sleep(2) in there, Twisted can't do anything else while that function is sleeping. So it can't send any data to the browser, either.
In the case of time.sleep(2), you would replace that with another reactor.callLater call. Assuming you actually meant for the time.sleep(2) call to be some other blocking operation, how to fix it depends on the operation. If you can do the operation in a non-blocking way, do that. For many such operations (like database interaction) Twisted already comes with non-blocking alternatives. If the thing you're doing has no non-blocking interface and Twisted doesn't have an alternative to it, you may have to run the code in a separate thread (using for example twisted.internet.threads.deferToThread), although that requires that your code actually be thread-safe.
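As a rough sketch of both options (the function names below are made up, and success stands in for the sucess callback from the question):

from twisted.internet import defer, reactor, threads

def success(msg):
    print(msg)

def do_async_nonblocking(msg):
    # Replace the blocking time.sleep(2) with a reactor-scheduled callback.
    d = defer.Deferred()
    reactor.callLater(2, d.callback, 'ok')   # fires 2 seconds later without blocking the reactor
    d.addCallback(success)
    return d

def do_async_in_thread(blocking_function, msg):
    # If the blocking call is unavoidable (and thread-safe), push it to the
    # reactor's thread pool instead.
    d = threads.deferToThread(blocking_function, msg)
    d.addCallback(success)
    return d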