Here is my code:

# /test
class Test(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    @tornado.gen.coroutine
    def get(self):
        res = yield self.inner()
        self.write(res)

    @tornado.gen.coroutine
    def inner(self):
        import time
        time.sleep(15)
        raise tornado.gen.Return('hello')
# /test_1
class Test1(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    @tornado.gen.coroutine
    def get(self):
        res = yield self.inner()
        self.write(res)

    @tornado.gen.coroutine
    def inner(self):
        raise tornado.gen.Return('hello test1')
When I fetch /test and then fetch /test_1, /test_1 does not respond until /test has responded. How can I fix this?
Don't use time.sleep(): it blocks the IOLoop, so no other request can be processed while it runs. Instead, use

yield tornado.gen.Task(tornado.ioloop.IOLoop.instance().add_timeout,
                       time.time() + sleep_seconds)
You've hit both of the frequently asked questions:
http://www.tornadoweb.org/en/stable/faq.html
First, please don't use time.sleep() in a Tornado application, use gen.sleep() instead. Second, be aware that most browsers won't fetch two pages from the same domain simultaneously: use "curl" or "wget" to test your application instead.
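To see why the blocking sleep serializes the two handlers, here is a minimal stand-alone sketch using plain asyncio (which modern Tornado runs on) rather than a real handler; an awaitable sleep lets both "requests" wait concurrently:

```python
import asyncio
import time

async def handler(name, delay):
    # asyncio.sleep yields control to the event loop,
    # so other handlers keep running during the wait.
    await asyncio.sleep(delay)
    return "hello from %s" % name

async def main():
    start = time.monotonic()
    # Both "requests" wait concurrently, like /test and /test_1 should.
    results = await asyncio.gather(handler("test", 0.5), handler("test_1", 0.5))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results)
print(elapsed < 0.9)  # True: the two waits overlap instead of adding up
```

With time.sleep(0.5) inside the handlers, the total would instead be roughly the sum of the delays, because the loop is stalled.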
Related
There is a tricky post handler that can sometimes take a lot of time (depending on the input values), sometimes not.
What I want is to write back as soon as one second passes, instead of waiting for the full response.
def post(self):
    def callback():
        self.write('too-late')
        self.finish()

    timeout_obj = IOLoop.current().add_timeout(
        dt.timedelta(seconds=1),
        callback,
    )

    # some asynchronous operations
    if not self.request.connection.stream.closed():
        self.write('here is your response')
        self.finish()
    IOLoop.current().remove_timeout(timeout_obj)
Turns out I can't do much from within callback.
Even raising an exception is suppressed by the inner context and won't be passed through the post method.
Any other ways to achieve the goal?
Thank you.
UPD 2020-05-15:
I found a similar question.
Thanks @ionut-ticus, using with_timeout() is much more convenient.
After some tries, I think I came really close to what I'm looking for:
def wait(fn):
    @gen.coroutine
    @wraps(fn)
    def wrap(*args):
        try:
            result = yield gen.with_timeout(
                dt.timedelta(seconds=20),
                IOLoop.current().run_in_executor(None, fn, *args),
            )
            raise gen.Return(result)
        except gen.TimeoutError:
            logging.error('### TOO LONG')
            raise gen.Return('Next time, bro')
    return wrap

@wait
def blocking_func(item):
    time.sleep(30)
    # this is not a Subprocess.
    # It is file IO and DB access.
    return 'we are done here'
Still not sure: should the wait() decorator itself be wrapped in a coroutine?
Sometimes in a chain of calls from blocking_func() there can be another ThreadPoolExecutor. My concern is whether this would work without making "mine" global and passing it to Tornado's run_in_executor().
Tornado: v5.1.1
An example of usage of tornado.gen.with_timeout. Keep in mind the task needs to be async or else the IOLoop will be blocked and won't be able to process the timeout:
@gen.coroutine
def async_task():
    # some async code
    pass

@gen.coroutine
def get(self):
    delta = datetime.timedelta(seconds=1)
    try:
        task = self.async_task()
        result = yield gen.with_timeout(delta, task)
        self.write("success")
    except gen.TimeoutError:
        self.write("timeout")
I'd advise using https://github.com/aio-libs/async-timeout:

import asyncio
import async_timeout

async def post(self):
    try:
        async with async_timeout.timeout(1):
            # some asynchronous operations
            if not self.request.connection.stream.closed():
                self.write('here is your response')
                self.finish()
    except asyncio.TimeoutError:
        self.write('too-late')
        self.finish()
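If you'd rather avoid an extra dependency, the standard library's asyncio.wait_for gives the same cancel-on-timeout behavior. A minimal sketch with plain coroutines, not a real Tornado handler; slow_operation is a hypothetical stand-in for the long post-handler work:

```python
import asyncio

async def slow_operation():
    # Stands in for the potentially long post-handler work.
    await asyncio.sleep(2)
    return "here is your response"

async def post_like():
    try:
        # wait_for cancels the inner task if it exceeds the budget.
        return await asyncio.wait_for(slow_operation(), timeout=0.2)
    except asyncio.TimeoutError:
        return "too-late"

result = asyncio.run(post_like())
print(result)  # "too-late": the slow operation exceeded the 0.2s budget
```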
I'm trying to write a post request handler for a Python Tornado server that sleeps for a second before sending a response back to the client. The server must handle many of these post requests per minute. The following code doesn't work because of BadYieldError: yielded unknown object <generator object get at 0x10d0b8870>
@asynchronous
def post(self):
    response = yield IOLoop.instance().add_timeout(time.time() + 1, self._process)
    self.write(response)
    self.finish()

@gen.coroutine
def _process(self, callback=None):
    callback("{}")
The server should receive a post request, wait a second, and then return the result without blocking other requests. This is Python 2.7. How can I resolve this? Thanks!
Either use callbacks or "yield", not both. So you could do:

@asynchronous
def post(self):
    IOLoop.instance().add_timeout(time.time() + 1, self._process)

def _process(self):
    self.write("{}")
    self.finish()

Or, better:

@gen.coroutine
def post(self):
    yield gen.sleep(1)
    self.write("{}")
    # Tornado calls self.finish when the coroutine exits.
I start a long file-based DB search which should run asynchronously and leave the server free for other requests, but it seems to block. What is the problem?
class Handler(tornado.web.RequestHandler):
    def initialize(self, param):
        self.db = param

    @tornado.web.asynchronous
    @gen.engine
    def post(self):
        try:
            self.set_status(200)
            response = yield gen.Task(self.handleSearch, self.request.arguments)
            self.finish(response)
        except BaseException, s:
            logging.exception(s)
            self.finish("Error tonight, cause: %s" % s)

    def handleSearch(self, request, callback):
        return callback(self.db.createList(request))
In order to use Tornado's async features, your functions need to be async too; otherwise it's not really async.
There are a few async libraries for Tornado out there (check this list of libraries), but if you can't find the library you need, another solution is to use a future.
Using a future, your code would look like this:
from concurrent.futures import ThreadPoolExecutor

class Handler(tornado.web.RequestHandler):
    def initialize(self, param):
        self.db = param

    @gen.coroutine
    def post(self):
        self.set_status(200)
        with ThreadPoolExecutor(1) as execute:
            r = yield execute.submit(self.handleSearch, param=self.request.arguments)
        self.finish(r)

    def handleSearch(self, param):
        try:
            return self.db.createList(param)  # or time.sleep(4) (sth which blocks)
        except Exception as e:
            return False
I've already tested it; it works and it's 100% compatible with Tornado, so you're not going to face any issues.
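The idea behind this answer can be sketched stand-alone: push the blocking call into a thread pool and await its future, so the event loop stays free. Plain asyncio here stands in for Tornado's coroutine machinery, and handle_search is a hypothetical blocking function mimicking the answer's handleSearch:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def handle_search(param):
    # Blocking work: file IO, a DB query, etc.
    time.sleep(0.3)
    return "results for %s" % param

async def main():
    loop = asyncio.get_running_loop()
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=2) as pool:
        # Both blocking searches run in threads; the event loop is never blocked,
        # so the two searches overlap instead of running back to back.
        r1, r2 = await asyncio.gather(
            loop.run_in_executor(pool, handle_search, "a"),
            loop.run_in_executor(pool, handle_search, "b"),
        )
    return r1, r2, time.monotonic() - start

r1, r2, elapsed = asyncio.run(main())
print(r1, r2, elapsed < 0.55)
```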
So I have repeated code in many GET handlers: checking whether the response was cached previously and returning it if available.
The code I'd like to get working looks like this:
class Handler(web.RequestHandler):
    @gen.coroutine
    def get_cache(self):
        try:
            response = yield gen.Task(get_redis)
        except:
            logging.log()
        if response:
            self.finish(response)
            raise gen.Return()

    @gen.coroutine
    @asynchronous
    def get(self):
        self.get_cache()
        response = do_sql_get()
        self.set_cache(key, response)
        self.finish(response)
What's happening now is that it serves from the cache if present, but then continues running the rest of the code in self.get. That makes sense to me, but I'm not sure how to refactor it so the handler stops as soon as self.finish is called inside self.get_cache.
get_cache should return a value that indicates whether it finished the request or not (or it should return the cached data and leave it to the caller to finish the request). I would do one of the following:
@gen.coroutine
def serve_from_cache(self):
    response = yield gen.Task(get_redis)
    if response:
        self.finish(response)
        raise gen.Return(True)
    else:
        raise gen.Return(False)

@gen.coroutine
def get(self):
    if (yield self.serve_from_cache()):
        return
    # do work
    yield self.set_cache(...)

or

@gen.coroutine
def get_cache(self):
    response = yield gen.Task(get_redis)
    raise gen.Return(response)

@gen.coroutine
def get(self):
    resp = yield self.get_cache()
    if resp:
        self.finish(resp)
        return
    # do work...
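Both options above boil down to the same pattern: the cache helper reports whether it finished the request, and the caller returns early. A minimal stand-alone sketch with plain async functions; the dict-backed CACHE and the small Handler class are hypothetical stand-ins for Redis and the real RequestHandler:

```python
import asyncio

CACHE = {"key1": "cached-response"}  # hypothetical stand-in for Redis

class Handler:
    def __init__(self):
        self.response = None

    def finish(self, body):
        self.response = body

    async def serve_from_cache(self, key):
        # Report whether the request was finished from cache.
        cached = CACHE.get(key)
        if cached:
            self.finish(cached)
            return True
        return False

    async def get(self, key):
        if await self.serve_from_cache(key):
            return  # stop here: the cached response was already sent
        result = "fresh-response"  # stands in for do_sql_get()
        CACHE[key] = result        # stands in for set_cache()
        self.finish(result)

h1, h2 = Handler(), Handler()
asyncio.run(h1.get("key1"))  # cache hit: early return
asyncio.run(h2.get("key2"))  # cache miss: does the work, then caches it
print(h1.response, h2.response)  # cached-response fresh-response
```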
I have lot of code in my Tornado app which looks like this:
@tornado.web.asynchronous
def get(self):
    ...
    some_async_call(..., callback=self._step1)

def _step1(self, response):
    ...
    some_async_call(..., callback=self._step2)

def _step2(self, response):
    ...
    some_async_call(..., callback=self._finish_request)

def _finish_request(self, response):
    ...
    self.write(something)
    self.finish()
Obviously inline callbacks would simplify that code a lot, it would look something like:
@inlineCallbacks
@tornado.web.asynchronous
def get(self):
    ...
    response = yield some_async_call(...)
    ...
    response = yield some_async_call(...)
    ...
    response = yield some_async_call(...)
    ...
    self.write(something)
    self.finish()
Is there a way of having inline callbacks or otherwise simplifying the code in Tornado?
You could even factor the calls together.
What you describe makes one async call after another, which doesn't give the best latency.
If the calls don't have any dependencies (e.g. taking the result of one call to make the second call), you could start all calls simultaneously:
@tornado.web.asynchronous
@gen.engine
def get(self):
    responses = yield [gen.Task(call) for call in required_calls]
This way, all calls start at the same time and thus your overall latency is the max(all calls) instead of the sum(all calls).
I've used this in an app that needs to aggregate many third-party web-service or database calls, and it improves the overall latency a lot.
Of course it doesn't work if there are dependencies between the calls (as mentioned above).
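The latency claim is easy to check in a stand-alone sketch, with plain asyncio standing in for gen.Task / gen.engine: starting the independent calls together makes the total wait roughly max(all calls) rather than sum(all calls):

```python
import asyncio
import time

async def call(delay, value):
    # Stands in for one independent async web-service or DB call.
    await asyncio.sleep(delay)
    return value

async def main():
    start = time.monotonic()
    # Sequential yields would cost about 0.3 + 0.2 + 0.1 seconds;
    # gathering costs roughly max(0.3, 0.2, 0.1).
    results = await asyncio.gather(call(0.3, "a"), call(0.2, "b"), call(0.1, "c"))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results, elapsed)  # ['a', 'b', 'c'], roughly 0.3s rather than 0.6s
```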
Found it. In Tornado it's not called inline callbacks, but rather "a generator-based interface" — tornado.gen. Thus my code should look something like:
@tornado.web.asynchronous
@gen.engine
def get(self):
    ...
    response = yield gen.Task(some_async_call, ...)
    ...
    response = yield gen.Task(some_async_call, ...)
    ...
    response = yield gen.Task(some_async_call, ...)
    ...
    self.write(something)
    self.finish()
You might also consider just using Cyclone, which would allow you to use @inlineCallbacks (and any other Twisted code that you want) directly.