Tornado/Twisted newb here. First I just want to confirm what I know (please correct and elaborate if I am wrong):
In order to use @gen.engine and gen.Task in Tornado, I need to feed gen.Task() functions that:
are asynchronous to begin with
have the keyword argument "callback"
call the callback function at the very end
In other words the function should look something like this:
def function(arg1, arg2, ..., callback=None):
    # asynchronous stuff here ...
    callback()
And I would call it like this (trivial example):
@gen.engine
def coroutine_call():
    yield gen.Task(function, arg1, arg2)
Now I am in a weird situation where I have to use Twisted in a Tornado system for asynchronous client calls to a server (since Tornado apparently does not support it).
So I wrote a function in Twisted (e.g. connects to the server):
import tornado.platform.twisted
tornado.platform.twisted.install()
from twisted.internet import defer
from twisted.web.xmlrpc import Proxy

class AsyncConnection():
    def __init__(self, hostname):
        self.proxy = Proxy(hostname)
        self.token = False

    @defer.inlineCallbacks
    def login(self, user, passwd, callback=None):
        """Login to server using given username and password"""
        self.token = yield self.proxy.callRemote('login', user, passwd)  # twisted function
        callback()
And if I run it like so:
@gen.engine
def test():
    conn = AsyncConnection("192.168.11.11")
    yield gen.Task(conn.login, "user", "pwd")
    print conn.token

if __name__ == '__main__':
    test()
    tornado.ioloop.IOLoop.instance().start()
And I DO get the token as I want. But my question is:
I know that Twisted and Tornado can share the same IOLoop. But am I allowed to do this (i.e. use a @defer.inlineCallbacks function in gen.Task simply by giving it the callback keyword argument)? I seem to get the right result, but is my way really running things asynchronously? Any complications/problems with the IOLoop this way?
I actually posted somewhat related questions on other threads
Is it possible to use tornado's gen.engine and gen.Task with twisted?
Using Tornado and Twisted at the same time
and the answers told me that I should "wrap" the inlineCallbacks function. I was wondering if adding the callback keyword argument is enough to "wrap" the twisted function to be suitable for Tornado.
Thanks in advance
What you're doing is mostly fine: adding a callback argument is enough to make a function usable with gen.Task. The only tricky part is exception handling: you'll need to run the callback from an except or finally block to ensure it always happens, and you should probably return some sort of value to indicate whether the operation succeeded or not (exceptions do not reliably pass through a gen.Task when you're working with non-Tornado code).
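Stripped of the Tornado/Twisted specifics, that pattern looks something like this (a sketch only; success or failure is signalled through the value handed to the callback, since the exception itself won't reliably cross the gen.Task boundary):

```python
def login(user, passwd, callback=None):
    token = None
    error = None
    try:
        # ... the asynchronous work would go here; it may raise ...
        token = "some-token"
    except Exception as e:
        # convert the exception into a value -- it will not
        # propagate through gen.Task on its own
        error = e
    finally:
        # the callback must run no matter what, or the coroutine
        # waiting on the gen.Task will hang forever
        if callback is not None:
            callback((token, error))
```

The coroutine that yields the gen.Task then inspects the (token, error) pair instead of relying on a raised exception.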
The wrapper approach (which I posted in Is it possible to use tornado's gen.engine and gen.Task with twisted?) has two advantages: it can be used with most Twisted code directly (since Twisted functions usually don't have a callback argument), and exceptions work more like you'd expect (an exception raised in the inner function will be propagated to the outer function).
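For reference, the shape of that wrapper idea, sketched without any Twisted imports (FakeDeferred is a stand-in for a real Deferred, purely for illustration; the real version in the linked answer adapts an actual Deferred):

```python
def task_wrapper(fn):
    """Adapt a function returning a Deferred-like object
    (anything with addCallbacks) to Tornado's callback style."""
    def wrapped(*args, **kwargs):
        callback = kwargs.pop('callback')
        d = fn(*args, **kwargs)
        # both success and failure are funnelled into the callback,
        # so the waiting gen.Task always resumes
        d.addCallbacks(callback, callback)
    return wrapped

# Stand-in for twisted.internet.defer.Deferred, illustration only
class FakeDeferred(object):
    def __init__(self, value):
        self.value = value
    def addCallbacks(self, callback, errback):
        callback(self.value)

def remote_login(user, passwd):
    return FakeDeferred("token-123")

results = []
task_wrapper(remote_login)("user", "pwd", callback=results.append)
```

The point is that the Twisted function keeps its natural Deferred-returning signature, and the adapter supplies the callback argument that gen.Task expects.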
Related
I have some hooks in place, and I thought I could decorate them with #ndb.tasklet in order to use async apis inside the hooks.
e.g.
@classmethod
@ndb.tasklet
def _post_delete_hook(cls, key, future):
    yield do_something_async()
This seemed to work, but every now and then I see "suspended generator" error for the code inside those hooks.
Should I be using @ndb.synctasklet instead?
An example of error:
suspended generator _post_put_hook(data_field.py:112) raised TypeError(Expected Future, received <class 'google.appengine.api.apiproxy_stub_map.UserRPC'>: <google.appengine.api.apiproxy_stub_map.UserRPC object at 0x09AA00B0>)
The code causing the error occasionally was:
t, d = yield (queue.add_async(task), queue.delete_tasks_async(taskqueue.Task(name=existing_task_name)))
Now that I've put @ndb.synctasklet in place, it raises an actual exception.
An ndb tasklet returns a future. If calling the tasklet results in an exception, the exception will only be raised if the future's get_result method is called.
ndb.synctasklet automatically calls get_result on the futures yielded by tasklets, causing exceptions to be raised if they occurred, rather than just logged.
For the error that you are seeing, you may be able to fix it by converting the UserRPCs returned by the taskqueue async methods to tasklets.
This untested code is based on ndb.context.urlfetch (link), which converts the UserRPC produced by urlfetch.createRPC into a Future.
@ndb.tasklet
def add_async(queue, **taskqueue_kwargs):
    rpc = queue.add_async(**taskqueue_kwargs)
    result = yield rpc
    raise ndb.Return(result)
You would need to create a tasklet for each async method that you want to use, or you could extend the taskqueue class and make the async methods tasklets.
I'm writing a library which is using Tornado Web's tornado.httpclient.AsyncHTTPClient to make requests which gives my code a async interface of:
async def my_library_function():
    return await ...
I want to make this interface optionally serial if the user provides a kwarg - something like serial=True. Though obviously you can't call a function defined with the async keyword from a normal function without await. This would be ideal - though almost certainly impossible in the language at the moment:
async def here_we_go():
    result = await my_library_function()

result = my_library_function(serial=True)
I haven't been able to find anything online where someone has come up with a nice solution to this. I don't want to have to reimplement basically the same code without the awaits splattered throughout.
Is this something that can be solved or would it need support from the language?
Solution (though use Jesse's instead - explained below)
Jesse's solution below is pretty much what I'm going to go with. I did end up getting the interface I originally wanted by using a decorator. Something like this:
import asyncio
from functools import wraps

def serializable(f):
    @wraps(f)
    def wrapper(*args, asynchronous=False, **kwargs):
        if asynchronous:
            return f(*args, **kwargs)
        else:
            # Run the coroutine to completion on the
            # current thread's event loop
            loop = asyncio.get_event_loop()
            return loop.run_until_complete(f(*args, **kwargs))
    return wrapper
This gives you this interface:
result = await my_library_function(asynchronous=True)
result = my_library_function(asynchronous=False)
I sanity-checked this on Python's async mailing list, and I was lucky enough to have Guido respond; he politely shot it down for this reason:
Code smell -- being able to call the same function both asynchronously
and synchronously is highly surprising. Also it violates the rule of
thumb that the value of an argument shouldn't affect the return type.
Nice to know it's possible though if not considered a great interface. Guido essentially suggested Jesse's answer and introducing the wrapping function as a helper util in the library instead of hiding it in a decorator.
When you want to call such a function synchronously, use run_until_complete:
asyncio.get_event_loop().run_until_complete(here_we_go())
Of course, if you do this often in your code, you should come up with an abbreviation for this statement, perhaps just:
def sync(fn, *args, **kwargs):
    return asyncio.get_event_loop().run_until_complete(fn(*args, **kwargs))
Then you could do:
result = sync(here_we_go)
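A self-contained check of that helper, using a trivial coroutine in place of here_we_go (note asyncio.get_event_loop() is the older spelling used above; newer code would use asyncio.run):

```python
import asyncio

def sync(fn, *args, **kwargs):
    # Drive the coroutine to completion on the current event loop
    return asyncio.get_event_loop().run_until_complete(fn(*args, **kwargs))

async def my_library_function(x):
    await asyncio.sleep(0)  # stand-in for real async I/O
    return x * 2

result = sync(my_library_function, 21)
# result == 42
```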
I have WebSocketHandler in my Tornado application.
I am not sure is this a right way to make code asynchronous.
class MyHandler(WebSocketHandler):
    def open(self):
        # do something ...
        self.my_coroutine_method()

    @gen.coroutine
    def my_coroutine_method(self):
        user = yield db.user.find_one()  # call motor asynchronous engine
        self.write_message(user)
Yes, this is correct. However, in some cases simply calling a coroutine without yielding can cause exceptions to be handled in unexpected ways, so I recommend using IOLoop.current().spawn_callback(self.my_coroutine_method) when calling a coroutine from a non-coroutine like this.
I am new to tornado and have some questions about tornado's coroutine.
if i have a call stack looks like:
func_a => func_b => func_c => func_d
and func_d is an asynchronous function, so I use yield and the @gen.coroutine decorator.
just like this:
@gen.coroutine
def redis_data(self, id):
    ret = yield asyn_function()
    raise gen.Return(ret)
Must I use yield and @gen.coroutine with func_c, func_b and func_a?
Yes, all your coroutine's callers must also be coroutines, and they must yield the result of your coroutine.
Why? No coroutine can do I/O without executing a yield statement. Look at your code: might it need to talk to the server? Then it must yield. So must its caller, and so on up the chain, so that ultimately you have yielded to the event loop. Otherwise the loop cannot make progress and the I/O does not complete.
This is both a technical requirement of coroutine code, and an advantage of coroutines over threads. You always know by looking at your code when you can be interrupted:
https://glyph.twistedmatrix.com/2014/02/unyielding.html
For more on refactoring coroutines, see:
http://emptysqua.re/blog/refactoring-tornado-coroutines/
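The same rule holds for native coroutines; here is the func_a => func_d chain sketched with async/await (asyncio used for brevity, but the structure with @gen.coroutine and yield is identical):

```python
import asyncio

async def func_d():
    await asyncio.sleep(0)  # the actual I/O would happen here
    return "data"

# every caller up the chain must itself be a coroutine and await:
async def func_c():
    return await func_d()

async def func_b():
    return await func_c()

async def func_a():
    return await func_b()

result = asyncio.get_event_loop().run_until_complete(func_a())
# result == "data"
```

Drop the await at any level and func_a gets an unfinished coroutine (or Future) back instead of the data, because control never reached the event loop.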
I am getting started with twisted as one of the libs I'll be using depends on it. In an early test I am confused about how to catch an exception thrown in a function like this:
@defer.inlineCallbacks
def read_input_status(self, address, length, callback):
    assert callback
    # ...
If callback is None, an AssertionError is thrown, so I tried to see it... The function is called like this:
def cb():
    pass

def eb():
    pass

d = task.deferLater(reactor, 1, client.read_input_status, 0x0000, 8, None)
d.addCallback(cb)
d.addErrback(eb)
I'm calling deferLater here on purpose to be able to deal with errors, as I understood that's not possible when using `callLater`. But my errback is never called.
What's weird is that when trying to debug and looking at the Twisted lib's code, I think I've seen a reason why my errback is without effect. My decorated generator function (argument g below) is called by Twisted's defer._inlineCallbacks implementation like this (abbreviated):
def _inlineCallbacks(result, g, deferred):
    # ...
    while 1:
        try:
            # ...
            result = g.send(result)
        except:
            deferred.errback()
            return deferred
I do see my exception pop up in the last section, where a deferred's errback is then called. But: that is not my deferred... If I go up one call in the debugger's call hierarchy, I see which deferred object is actually passed to _inlineCallbacks:
def unwindGenerator(*args, **kwargs):
    # ...
    return _inlineCallbacks(None, gen, Deferred())
Am I mistaken, or is this simply a new object, empty, plain, without any callbacks/errbacks attached?
Sorry for this lengthy elaboration. Couldn't find anything immediately related, except for this SO post where I could not directly see how it solves my issue.
Thanks a lot.
[UPDATE] Please see this gist for working sample code (Python 2.7.6, Twisted 13.2.0).
Figured it out after rereading the docs about Twisted Deferred callbacks and errbacks. The issue with the code above and in the linked gist is the missing arguments for the callback and errback functions. If I replace what's written above with the following code, the exception is caught fine and reported via the errback as expected:
def cb(result):
    pass

def eb(failure):
    pass