Can I pipeline HTTP/1.1 requests with Tornado? - python

As you may know, HTTP/1.1 lets you leave the socket open between requests via the well-known Keep-Alive mechanism. What fewer people exploit is the ability to launch a burst of multiple sequential HTTP/1.1 requests without waiting for each response in between; the responses then come back in the same order, so you pay the latency cost only once. (This consumption pattern is encouraged in Redis clients, for example.)
I know this pattern has been superseded by multiplexing in HTTP/2, but my question right now is whether I can use that pipelining pattern with the Tornado library, exploiting its async features, or perhaps with some other library that is capable of it?

No, Tornado does not support HTTP/1.1 pipelining. It won't start serving the second request until the response to the first request has been written.
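You can observe this behavior with a raw-socket probe. A minimal sketch, assuming a Tornado app of your own listening on localhost:8888 with handlers at /a and /b (all names here are illustrative): it writes two pipelined requests in one burst, and the server answers them strictly in sequence.
import socket

# Hypothetical local Tornado server; adjust host, port and paths to your app.
probe = socket.create_connection(('localhost', 8888))
probe.sendall(
    b'GET /a HTTP/1.1\r\nHost: localhost\r\n\r\n'
    b'GET /b HTTP/1.1\r\nHost: localhost\r\n\r\n'
)
# Print whatever has arrived so far; a real client would parse both
# responses. The response to /b is not written until /a has been served.
print(probe.recv(65536).decode('latin-1'))
probe.close()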

Related

Does setting socket timeout cancel the initial request

I have a request that can only run once. At times, the request takes much longer than it should.
If I were to set a default socket timeout value (using socket.setdefaulttimeout(5)) and it took longer than 5 seconds, would the original request be cancelled so that it's safe to retry (see example code below)?
If not, what is the best way to cancel the original request and retry it, ensuring it never runs more than once?
import socket
from googleapiclient.discovery import build
from tenacity import retry, stop_after_attempt, wait_fixed, retry_if_exception_type

@retry(
    retry=retry_if_exception_type(socket.timeout),
    wait=wait_fixed(4),
    stop=stop_after_attempt(3),
)
def create_file_once_only(creds, body):
    service = build('drive', 'v3', credentials=creds)
    file = service.files().create(body=body, fields='id').execute()

socket.setdefaulttimeout(5)
create_file_once_only(creds, body)
It's unlikely that this can be made to work as you hope. An HTTP POST (as with any other HTTP request) is implemented by sending a command to the web server, then receiving a response. The Python requests library encapsulates a lot of the tedious parts of that for you, but at the core, it's going to do a socket send followed by a socket recv (it may of course require more than one send or recv depending on the size of the data).
Now, if you were able to connect to the web server initially (again, this is taken care of for you by the requests library but typically only takes a few milliseconds), then it's highly likely that the data in your POST request has long since been sent. (If the data you are sending is megabytes long, it's possible that it's only been partially sent, but if it is reasonably short, it's almost certainly been sent in full.)
That in turn means that in all likelihood the server has received your entire request and is working on it or has enqueued your request to work on it eventually. In either case, even if you break the connection to the server by timing out on the recv, it's unlikely that the server will actually even notice that until it gets to the point in its execution where it would be sending its response to your request. By that point, it has probably finished doing whatever it was going to do.
In other words, your socket timeout is not going to apply to the "HTTP request" -- it applies to the underlying socket operations instead -- and almost certainly to the recv part on the tail end. And just breaking the socket connection doesn't cancel the HTTP request.
There is no reliable way to do what you want without designing a transactional protocol with the close cooperation of the HTTP server.
You could, however, do something (still with the cooperation of the HTTP server) that approximates it:
Create a unique ID (UUID or the like)
Send a request to the server that contains that UUID along with the other account info (name, password, whatever else)
The server then only creates the account if it hasn't already created an account with the same unique ID.
That way, you can request the operation multiple times but know that it will only actually be performed once. If asked to do the same operation a second time, the server simply responds with "yep, already did that". A rough client-side sketch of this pattern follows.
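Here is a minimal sketch of that idempotency-key idea, assuming a hypothetical endpoint that deduplicates on an X-Idempotency-Key header (the URL, header name, and payload are all illustrative, not part of any real API):
import uuid
import requests

def create_account_once(name, password,
                        url='https://example.com/create_account'):
    key = str(uuid.uuid4())  # generated once, reused across all retries
    for _ in range(3):
        try:
            # The server is assumed to create the account at most once
            # per key, so retrying after a timeout cannot duplicate it.
            return requests.post(
                url,
                json={'name': name, 'password': password},
                headers={'X-Idempotency-Key': key},
                timeout=5,
            )
        except requests.Timeout:
            continue
    raise RuntimeError('server unreachable after 3 attempts')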

Pipelining POST requests with python-requests

Assuming that I can verify that a bunch of POST requests are in fact logically independent, how can I set up HTTP pipelining using python-requests and force it to allow POST requests in the pipeline?
Does someone have a working example?
P.S. for extra points, how do I handle errors for outstanding requests if the pipeline suddenly breaks?
P.P.S. grequests is not an option in this case.
Pipelining requests can be done with the built-in httplib, but only by accessing the connection and response objects below their public interface. This snippet demonstrates it.
Edit: updated version for Python3: https://github.com/urllib3/urllib3/issues/52#issuecomment-109756116
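In the same spirit, here is a rough Python 2 sketch of the technique (independent of the linked snippet, so treat the details as an assumption): it resets httplib's private state flag to queue several requests before reading any response. This relies on private attributes and is fragile across versions.
import httplib

conn = httplib.HTTPConnection('example.com')
conn.connect()

# Send three requests back to back. Resetting the private state flag
# tricks httplib into allowing another request before a response is read.
responses = []
for path in ['/a', '/b', '/c']:
    conn._HTTPConnection__state = httplib._CS_IDLE
    conn.request('GET', path)
    responses.append(conn.response_class(conn.sock, method=conn._method))

# Read the responses in order. Python 2's httplib reads from an
# unbuffered socket file, so no response data is read ahead and lost.
for r in responses:
    r.begin()
    print r.status, len(r.read())

conn.close()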
The requests library does not support HTTP pipelining.
You can approximate pipelining by using grequests, which makes it easier to run many requests in parallel, but each parallel request would still use a new TCP connection.
(requests does pool connections, keeping the TCP connection open if the remote server permits it, but that only helps sequential requests; request and response still have to alternate.)
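For readers without the asker's constraint, the grequests approximation looks roughly like this (the httpbin.org URL is just a placeholder):
import grequests

payloads = [{'n': i} for i in range(10)]
reqs = (grequests.post('https://httpbin.org/post', json=p) for p in payloads)
# Run up to 10 requests in parallel; each still uses its own connection.
for resp in grequests.map(reqs, size=10):
    print(resp if resp is None else resp.status_code)  # None = request failed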

Python HTTP client with request pipelining

The problem: I need to send many HTTP requests to a server. I can only use one connection (non-negotiable server limit). The server's response time plus the network latency is too high – I'm falling behind.
The requests typically don't change server state and don't depend on the previous request's response. So my idea is to simply send them on top of each other, enqueue the response objects, and depend on the Content-Length: of the incoming responses to feed incoming replies to the next-waiting response object. In other words: Pipeline the requests to the server.
This is of course not entirely safe (any reply without Content-Length: means trouble), but I don't care -- in that case I can always retry any queued requests. (The safe way would be to wait for the header before sending the next request. That might still help me enough. No way to test beforehand.)
So, ideally I want the following client code (which uses client delays to mimic network latency) to run in three seconds.
Now for the $64000 question: Is there a Python library which already does this, or do I need to roll my own? My code uses gevent; I could use Twisted if necessary, but Twisted's standard connection pool does not support pipelined requests. I also could write a wrapper for some C library if necessary, but I'd prefer native code.
#!/usr/bin/python
import gevent.pool
from gevent import sleep
from time import time
from geventhttpclient import HTTPClient

url = 'http://local_server/100k_of_lorem_ipsum.txt'
http = HTTPClient.from_url(url, concurrency=1)

def get_it(http):
    print time(), "Queueing request"
    response = http.get(url)
    print time(), "Expect header data"
    # Do something with the header, just to make sure that it has arrived
    # (the greenlet should block until then)
    assert response.status_code == 200
    assert response["content-length"] > 0
    for h in response.items():
        pass
    print time(), "Wait before reading body data"
    # Now I can read the body. The library should send at
    # least one new HTTP request during this time.
    sleep(2)
    print time(), "Reading body data"
    while response.read(10000):
        pass
    print time(), "Processing my response"
    # The next request should definitely be transmitted NOW.
    sleep(1)
    print time(), "Done"

# Run parallel requests
pool = gevent.pool.Pool(3)
for i in range(3):
    pool.spawn(get_it, http)
pool.join()
http.close()
Dugong is an HTTP/1.1-only client which claims to support real HTTP/1.1 pipelining. The tutorial includes several examples on how to use it, including one using threads and another using asyncio.
Be sure to verify that the server you're communicating with actually supports HTTP/1.1 pipelining—some servers claim to support HTTP/1.1 but don't implement pipelining.
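Based on Dugong's tutorial, a pipelined exchange looks roughly like this (method names taken from its documented API; note the docs warn that naively pipelining many requests can deadlock once socket buffers fill, which is why the tutorial pairs this with threads or asyncio):
from dugong import HTTPConnection

paths = ['/one', '/two', '/three']  # placeholder paths
conn = HTTPConnection('example.com')

# Phase 1: send all requests without reading any response.
for path in paths:
    conn.send_request('GET', path)

# Phase 2: read the responses back, strictly in order.
for path in paths:
    resp = conn.read_response()
    body = conn.readall()
    print(path, resp.status, len(body))

conn.disconnect()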
I think txrequests could get you most of what you are looking for, using the background_callback to enqueue processing of responses on a separate thread. Each request would still be its own thread, but using a session means that by default it would reuse the same connection.
https://github.com/tardyp/txrequests#working-in-the-background
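A rough sketch of that usage, adapted from the txrequests README (the httpbin.org URLs are placeholders):
from twisted.internet import defer, task
from txrequests import Session

def bg_cb(session, response):
    # Runs on a worker thread, off the reactor thread.
    response.data = response.json()
    return response

@defer.inlineCallbacks
def main(reactor):
    with Session() as session:
        # Both requests start immediately, each on its own worker thread;
        # the shared session pools the underlying TCP connection.
        d1 = session.get('https://httpbin.org/get', background_callback=bg_cb)
        d2 = session.get('https://httpbin.org/get?foo=bar',
                         background_callback=bg_cb)
        r1 = yield d1
        r2 = yield d2
        print(r1.data, r2.data)

task.react(main)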
It seems you are running Python 2.
For Python 3.5+ you can use async/await with the event loop; see asyncio.
There is also Trio, a separate async I/O library available on pip, with an arguably easier interface.
Another option is multiple threads with locks, though sharing one connection between threads needs careful coordination so that requests and responses stay correctly paired.
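None of these gives you pipelining out of the box, though. With asyncio's stream API you can hand-roll it on a single connection; a rough sketch, under the question's own assumption that every response carries Content-Length:
import asyncio

async def pipeline(host, paths, port=80):
    reader, writer = await asyncio.open_connection(host, port)
    # Phase 1: write all requests back to back, no waiting in between.
    for path in paths:
        writer.write(('GET %s HTTP/1.1\r\nHost: %s\r\n\r\n'
                      % (path, host)).encode('ascii'))
    await writer.drain()
    results = []
    # Phase 2: read the responses in order, framed by Content-Length.
    for _ in paths:
        status_line = await reader.readline()
        headers = {}
        while True:
            line = await reader.readline()
            if line in (b'\r\n', b''):
                break
            name, _, value = line.decode('latin-1').partition(':')
            headers[name.strip().lower()] = value.strip()
        body = await reader.readexactly(int(headers['content-length']))
        results.append((int(status_line.split()[1]), body))
    writer.close()
    await writer.wait_closed()
    return results

# e.g. asyncio.run(pipeline('example.com', ['/'] * 3))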

Tornado/Async Webserver theory, how to handle longer running operations to utilize the async server

I have just begun to look at tornado and asynchronous web servers. In many examples for tornado, longer requests are handled by something like:
make a call to tornado webserver
tornado makes async web call to an api
let tornado keep taking requests while callback waits to be called
handle the response in the callback; serve it to the user.
So, for hypothetical purposes, say users are making a request to the Tornado server at /retrieve. /retrieve will make a request to an internal API, e.g. myapi.com/retrieve_posts_for_user_id/. The API request could take a second to run while Tornado keeps taking requests; when it finally returns, Tornado serves up the response. First of all, is this flow the 'normal' way to use Tornado? Many of the code examples online would suggest so.
Secondly (this is where my mind starts to get boggled), assuming that the above flow is the standard flow, should myapi.com be asynchronous? If it's not async and the requests can take seconds apiece, wouldn't it create the same bottleneck a blocking server would? Perhaps an example of a normal use case for Tornado, or any async server, would help to shed some light on this issue? Thank you.
Yes, as I understand your question, that is a normal use-case for Tornado.
If all requests to your Tornado server would make requests to myapi.com, and myapi.com is blocking, then yes, myapi.com would still be the bottleneck. However, if only some requests have to be handled by myapi.com, then Tornado would still be a win, as it can keep handling such requests while waiting for responses for the requests to myapi.com. But regardless, if myapi.com can't handle the load, then putting a Tornado server in front of it won't magically fix that. The difference is that your Tornado server will still be able to respond to requests even when myapi.com is busy.
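For reference, the flow from the question looks roughly like this in modern Tornado's coroutine style (myapi.com is the question's hypothetical internal API):
import tornado.ioloop
import tornado.web
from tornado.httpclient import AsyncHTTPClient

class RetrieveHandler(tornado.web.RequestHandler):
    async def get(self):
        client = AsyncHTTPClient()
        # While this fetch is pending, the IOLoop is free to accept and
        # handle other incoming requests.
        resp = await client.fetch(
            'http://myapi.com/retrieve_posts_for_user_id/123')
        self.write(resp.body)

app = tornado.web.Application([(r'/retrieve', RetrieveHandler)])
app.listen(8888)
tornado.ioloop.IOLoop.current().start()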

How to speed up an HTTP request

I need to get JSON data and I'm using urllib2:
request = urllib2.Request(url)
request.add_header('Accept-Encoding', 'gzip')
opener = urllib2.build_opener()
connection = opener.open(request)
data = connection.read()
but although the data isn't very big, it is too slow.
Is there a way to speed it up? I can use 3rd party libraries too.
Accept-Encoding: gzip means that the client is ready to accept gzip-encoded content if the server is willing to send it. The rest of the request goes down the socket, through your operating system's TCP/IP stack, and onto the physical layer.
If the server supports ETags, then you can send an If-None-Match header to ensure that the content has not changed, and rely on the cache. A sketch is given below.
You cannot do much on the client side alone to improve your HTTP request speed.
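Here is a rough sketch of the conditional-request idea with urllib2, assuming the server returns an ETag header (the URL is a placeholder and error handling is simplified):
import urllib2

url = 'http://example.com/data.json'  # placeholder URL

# First fetch: remember the entity tag along with the cached body.
first = urllib2.urlopen(url)
etag = first.info().getheader('ETag')
cached = first.read()

# Later fetch: ask the server to send the body only if it changed.
request = urllib2.Request(url)
request.add_header('If-None-Match', etag)
try:
    data = urllib2.urlopen(request).read()  # content changed
except urllib2.HTTPError as e:
    if e.code == 304:
        data = cached                       # not modified; reuse cache
    else:
        raise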
You're dependent on a number of different things here that may not be within your control:
Latency/Bandwidth of your connection
Latency/Bandwidth of server connection
Load of server application and its individual processes
Items 2 and 3 are probably where the problem lies, and you won't be able to do much about them. Is the content cacheable? This will depend on your own application's needs and the HTTP headers (e.g. ETags, Cache-Control, Last-Modified) that are returned from the server. The server may only update once a day, in which case you might be better off requesting the data only every hour.
The issue is unlikely to be with urllib. If you have network issues and performance problems, consider using tools like Wireshark to investigate at the network level. I have very strong doubts that this is related to Python in any way.
If you are making lots of requests, look into threading. Having about 10 workers making requests can speed things up - you don't grind to a halt if one of them takes too long getting a connection.
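A quick sketch of that worker-pool idea with only the standard library (Python 2 to match the question; urls is assumed to be your own list of endpoints):
import urllib2
from multiprocessing.dummy import Pool  # a thread pool, despite the module name

def fetch(url):
    request = urllib2.Request(url)
    request.add_header('Accept-Encoding', 'gzip')
    return urllib2.urlopen(request).read()

pool = Pool(10)                  # about 10 workers
results = pool.map(fetch, urls)  # 'urls' is your own list of URLs
pool.close()
pool.join()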
