Python Tornado: Delete request never ends

I am having problems with a DELETE request in Tornado. The request arrives at the server and everything in the handler works fine, but the response is never returned to the client.
I have tried returning a value, using a bare "return", and even omitting the "return" entirely, and the result is always the same.
I am using Python 3.4, Tornado 4.1 and the RestClient of Firefox.
@web.asynchronous
@gen.coroutine
def delete(self, _id):
    try:
        model = Model()
        model.delete(_id)
        self.set_status(204)
    except Exception as e:
        logging.error(e)
        self.set_status(500)
    return

Tornado documentation (tornado.web.asynchronous):
If this decorator is given, the response is not finished when the method returns. It is up to the request handler to call self.finish() to finish the HTTP request.
You need to call the tornado.web.RequestHandler.finish method. This will work:
@web.asynchronous
@gen.coroutine
def delete(self, _id):
    try:
        model = Model()
        model.delete(_id)
        self.set_status(204)
    except Exception as e:
        logging.error(e)
        self.set_status(500)
    self.finish()
    return
However, you don't need the asynchronous approach in this example. This will also work in the same way:
def delete(self, _id):
    try:
        model = Model()
        model.delete(_id)
        self.set_status(204)
    except Exception as e:
        logging.error(e)
        self.set_status(500)
    return
Also, if you are using the @gen.coroutine decorator, you don't need the @web.asynchronous decorator. Simply use @gen.coroutine; it is the correct way and much more elegant.
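For reference, a minimal sketch of the same handler using only @gen.coroutine (the handler class name is assumed, not taken from the question); since nothing is yielded here, the request is finished automatically when the method returns:

import logging

import tornado.web
from tornado import gen

class ModelHandler(tornado.web.RequestHandler):  # name assumed for illustration
    @gen.coroutine
    def delete(self, _id):
        try:
            model = Model()
            model.delete(_id)
            self.set_status(204)  # No Content
        except Exception as e:
            logging.error(e)
            self.set_status(500)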
Lastly, I think you should read this article for understanding asynchronous programming in Tornado.

Related

How to implement a logging Interceptor for Python async gRPC client?

I'm trying to implement accept/error logging for an asynchronous gRPC client with the gRPC AsyncIO API. I would like to handle common errors (like StatusCode.UNAVAILABLE) in one place instead of in every request.
It's easy for the synchronous version with response.exception():
class LoggingClientInterceptor(grpc.UnaryUnaryClientInterceptor):
    def __init__(self, logger: Logger):
        self.logger = logger

    def intercept_unary_unary(self, continuation, client_call_details, request):
        self.logger.debug(f"{request=}")
        response = continuation(client_call_details, request)
        if response.exception():
            self.logger.exception(f"{response.code()}")
        return response
But things get more complicated when using an asynchronous interceptor.
I tried to use try/except, expecting await to return a response, but this did not lead anywhere, because awaiting the continuation returns a not-yet-completed UnaryUnaryCall, which has no .exception() method:
# this does not work
class LoggingClientInterceptor(grpc.aio.UnaryUnaryClientInterceptor):
    def __init__(self, logger: Logger):
        self.logger = logger

    async def intercept_unary_unary(self, continuation, client_call_details, request):
        self.logger.debug(f"{request=}")
        try:
            response = await continuation(client_call_details, request)
            return response
        except Exception as exc:
            self.logger.exception(f"{exc}")
I can await the response code and compare it with OK, and then throw an exception, but it seems to me that this is somehow the wrong way: what if I want to add another interceptor?
code = await response.code()
if code != grpc.StatusCode.OK:
    raise SmthException
I have searched extensively, including the code in the official repository, but have not found good examples of asynchronous interceptors.
I would be glad if someone could show me a reference sample.
I had a similar problem and adjusted the code a little bit.
class LoggingClientInterceptor(grpc.aio.UnaryUnaryClientInterceptor):
    def __init__(self, logger: Logger):
        self.logger = logger

    async def intercept_unary_unary(self, continuation, client_call_details, request):
        self.logger.debug(f"{request=}")
        try:
            undone_call = await continuation(client_call_details, request)
            response = await undone_call
            return response
        except Exception as exc:
            self.logger.exception(f"{exc}")
            raise exc
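For completeness, a hedged sketch of plugging this interceptor into an async channel; the target address and the commented-out stub are placeholders, not from the original code:

import asyncio
import logging

import grpc

async def main():
    logger = logging.getLogger("grpc-client")
    interceptors = [LoggingClientInterceptor(logger)]
    # grpc.aio channels accept interceptors at construction time
    async with grpc.aio.insecure_channel("localhost:50051",
                                         interceptors=interceptors) as channel:
        # stub = MyServiceStub(channel)           # placeholder stub
        # response = await stub.MyMethod(request) # every call goes through the interceptor
        pass

asyncio.run(main())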

Write back through the callback attached to IOLoop in Tornado

There is a tricky POST handler; sometimes it can take a lot of time (depending on the input values), sometimes not.
What I want is to write back whenever 1 second passes, dynamically allocating the response.
def post(self):
    def callback():
        self.write('too-late')
        self.finish()

    timeout_obj = IOLoop.current().add_timeout(
        dt.timedelta(seconds=1),
        callback,
    )

    # some asynchronous operations

    if not self.request.connection.stream.closed():
        self.write('here is your response')
        self.finish()
        IOLoop.current().remove_timeout(timeout_obj)
It turns out I can't do much from within the callback.
Even raising an exception is suppressed by the inner context and won't propagate to the post method.
Any other ways to achieve the goal?
Thank you.
UPD 2020-05-15:
I found a similar question.
Thanks @ionut-ticus, using with_timeout() is much more convenient.
After some tries, I think I came really close to what I'm looking for:
def wait(fn):
    @gen.coroutine
    @wraps(fn)
    def wrap(*args):
        try:
            result = yield gen.with_timeout(
                dt.timedelta(seconds=20),
                IOLoop.current().run_in_executor(None, fn, *args),
            )
            raise gen.Return(result)
        except gen.TimeoutError:
            logging.error('### TOO LONG')
            raise gen.Return('Next time, bro')
    return wrap

@wait
def blocking_func(item):
    time.sleep(30)
    # this is not a Subprocess.
    # It is a file IO and DB
    return 'we are done here'
Still not sure: should the wait() decorator itself be wrapped in a coroutine?
Sometimes, in a chain of calls from blocking_func(), there can be another ThreadPoolExecutor. My concern is: would this work without making mine global and passing it to Tornado's run_in_executor()?
Tornado: v5.1.1
An example of usage of tornado.gen.with_timeout. Keep in mind the task needs to be async or else the IOLoop will be blocked and won't be able to process the timeout:
@gen.coroutine
def async_task(self):
    # some async code
    pass

@gen.coroutine
def get(self):
    delta = datetime.timedelta(seconds=1)
    try:
        task = self.async_task()
        result = yield gen.with_timeout(delta, task)
        self.write("success")
    except gen.TimeoutError:
        self.write("timeout")
I'd advise using https://github.com/aio-libs/async-timeout:
import asyncio
import async_timeout

async def post(self):
    try:
        async with async_timeout.timeout(1):
            # some asynchronous operations
            if not self.request.connection.stream.closed():
                self.write('here is your response')
                self.finish()
    except asyncio.TimeoutError:
        self.write('too-late')
        self.finish()

How to properly use Motor in a separate data access layer

I started a little project using Tornado and Motor.
I feel somewhat confused about how I should handle the data access layer if I want non-blocking access.
Usually I separate my project with this structure:
root_project
-logic
-data
--UsersDao
-handlers
--Users
-main.py
but I don't know whether the access would be non-blocking if I do something like this:
@gen.coroutine
@tornado.web.asynchronous
def get(self, id):
    users = self.settings["User"]
    result = yield from users.get(id)
    self.write(json_encode(result))
    self.finish()
'users' is my UsersDao object, which looks like this:
class UsersDao(object):
    ....
    def get(self, user, callback=None):
        try:
            user = yield self._db["users"].find_one({'_id': user})
            # ...create user object
            return user
        except ValueError:
            pass
        except OperationFailure:
            pass
        except Exception:
            raise
In general, whenever you use yield, you're doing something asynchronous/non-blocking. So in this case the code you've posted looks correct except for the missing @gen.coroutine decorator on UsersDao.get (whenever you use yield for asynchronous stuff, you need this decorator, and you need to use yield any time you call it).
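As a minimal sketch (assuming Python 3.3+, where a Tornado coroutine can simply return a value), the DAO method with the missing decorator would look like this; the collection name follows the question:

from tornado import gen

class UsersDao(object):
    def __init__(self, db):
        self._db = db

    @gen.coroutine
    def get(self, user_id):
        # find_one returns a future; yielding it keeps the call non-blocking
        doc = yield self._db["users"].find_one({'_id': user_id})
        # ...build the user object from doc as in the original code
        return doc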

Pytest raise test failing for requests.raise_for_status()

I've recently started using pytest, and even more recently started using mock for mocking the requests library. I have mocked up a requests.Response object, and for a 200 status code it works fine. What I'm trying to do here is to use raise_for_status() to check for a rate-limit-exceeded error, and test that it handles the exception with pytest.
I'm using the Mock side_effect option, which seems to fire the exception I'm hoping for, but pytest doesn't seem to recognise this as having happened and fails the test.
Any thoughts? I'm sure it's something obvious I'm missing!
The code I have for the class is:
class APIClient:
    def get_records(self, url):
        try:
            r = requests.get(url)
            r.raise_for_status()
            return r.json()
        except requests.HTTPError as e:
            print("Handling the exception")
In the test class, I have got:
@pytest.fixture
def http_error_response(rate_limit_json):
    mock_response = mock.Mock()
    mock_response.json.return_value = rate_limit_json
    mock_response.status_code = 429
    mock_response.raise_for_status.side_effect = requests.exceptions.HTTPError
    return mock_response

class TestRecovery(object):
    @mock.patch('requests.get')
    def test_throws_exception_for_rate_limit_error(
            self, mock_get, api_query_object, http_error_response):
        mock_get.return_value = http_error_response
        print(http_error_response.raise_for_status.side_effect)
        url = api_query_object.get_next_url()
        with pytest.raises(requests.exceptions.HTTPError):
            api_query_object.get_records(url)
The output I get is:
with pytest.raises(requests.exceptions.HTTPError):
> api_query_object.get_records(url)
E Failed: DID NOT RAISE
---------------------- Captured stdout call ----------------------
<class 'requests.exceptions.HTTPError'>
Handling the exception
You are instructing pytest to expect an exception that should be raised in APIClient.get_records, but inside that method you are already catching the exception and just doing a print.
The exception is actually happening, as proved by the output of your print in the console.
Instead, you should either re-raise the exception from get_records or simply check with the mock that the raise_for_status method was called.
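For example, a sketch of testing the current behaviour instead of expecting an exception; the fixture names follow the question, and the assertion that get_records returns None reflects the fact that the except branch only prints:

class TestRecovery(object):
    @mock.patch('requests.get')
    def test_handles_rate_limit_error(
            self, mock_get, api_query_object, http_error_response):
        mock_get.return_value = http_error_response
        url = api_query_object.get_next_url()
        result = api_query_object.get_records(url)  # HTTPError is handled inside
        # raise_for_status was called exactly once, with no arguments
        http_error_response.raise_for_status.assert_called_once_with()
        assert result is None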

Python Tornado gen.engine exception handling

I am using Tornado 2.4, and I am trying to integrate async calls.
Let's say I need to access a remote resource through an HTTP call, so I made this function in a tornado.web.RequestHandler:
@tornado.web.asynchronous
def get(self, *args):
    try:
        self.remote_call()
        return 'OK'
    except Exception, e:
        self.handle_exception(e)

@gen.engine
def remote_call(self):
    http_client = httpclient.AsyncHTTPClient()
    response = yield gen.Task(http_client.fetch, 'http://google.com')
    self.process(response)
So my problem is: since remote_call yields a Task, it will obviously exit the remote_call function and continue the get function. Then, when my task is complete, the engine will process the response.
But if an error happens in self.process(response), it will not be caught by my except, since that part of the code is not actually called here but inside the engine, over which I have no control.
So my question is: can I have some control over this engine? Can I handle errors, and can I ask it to perform some specific task at the end of the function?
I could do this directly in the function, like this:
@tornado.web.asynchronous
def get(self, *args):
    self.remote_call()
    return 'OK'

@gen.engine
def remote_call(self):
    http_client = httpclient.AsyncHTTPClient()
    response = yield gen.Task(http_client.fetch, 'http://google.com')
    try:
        self.process(response)
    except Exception, e:
        self.handle_exception(e)
But I want to make the exception handling generic and not copy-paste this into every one of my handlers.
So, is there a way to access Tornado's engine?
Note that I am using Tornado 2.4, but I can migrate to 3.0 if needed.
Thanks.
You can handle it in 2.4 by decorating your get call with @gen.engine, wrapping the call to self.remote_call in a gen.Task, and then yielding from that:
@tornado.web.asynchronous
@gen.engine
def get(self, *args):
    try:
        yield gen.Task(self.remote_call)
    except Exception, e:
        self.handle_exception(e)
    self.finish()  # Make sure you call this when `get` is asynchronous.

@gen.engine
def remote_call(self):
    http_client = httpclient.AsyncHTTPClient()
    response = yield gen.Task(http_client.fetch, 'http://google.com')
    self.process(response)
This will allow you to handle the exception in get, though you'll still see a traceback from the exception being raised in remote_call.
However, I highly recommend you upgrade. Tornado is now on version 4.0. With 3.0 or later, you can use gen.coroutine instead of gen.engine and web.asynchronous:
@gen.coroutine
def get(self, *args):
    try:
        yield self.remote_call()
    except Exception, e:
        self.handle_exception(e)
    self.finish()

@gen.coroutine
def remote_call(self):
    http_client = httpclient.AsyncHTTPClient()
    response = yield http_client.fetch('http://google.com')
    self.process(response)
gen.coroutine properly suppresses the traceback from any exception thrown in remote_call, as well as letting you handle it in get.
OK, thanks, it works. I had to do this, however:
@tornado.web.asynchronous
@gen.engine
def get(self, *args):
    try:
        yield gen.Task(lambda cb: self.remote_call())
    except Exception, e:
        self.handle_exception(e)
    self.finish()  # Make sure you call this when `get` is asynchronous.

@gen.engine
def remote_call(self):
    http_client = httpclient.AsyncHTTPClient()
    response = yield gen.Task(http_client.fetch, 'http://google.com')
    self.process(response)
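On the point of making the exception handling generic without copy-pasting: one hedged option (assuming Tornado 3.0+ and gen.coroutine) is a base handler that overrides write_error, since exceptions that escape a coroutine handler are routed there by the framework; the class names below are illustrative, not from the original code:

import logging

import tornado.web
from tornado import gen, httpclient

class BaseHandler(tornado.web.RequestHandler):
    def write_error(self, status_code, **kwargs):
        # exc_info is present when an exception escaped the handler
        logging.error("request failed", exc_info=kwargs.get("exc_info"))
        self.finish({"error": status_code})

class GoogleHandler(BaseHandler):
    @gen.coroutine
    def get(self, *args):
        response = yield httpclient.AsyncHTTPClient().fetch('http://google.com')
        self.process(response)  # any exception raised here ends up in write_error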
