I have an external library that uses the requests module to perform HTTP requests.
I need to use the library asynchronously without spawning many threads (that would be my last resort if nothing else works). And I can't change its source code either.
It would be easy to monkey-patch the library, since all the interaction with the requests module is done from a single function, but I don't know whether I can monkey-patch a synchronous function with an asynchronous one (I mean the async keyword).
Roughly, the problem simplifies to the following code:
import asyncio
import aiohttp
import types
import requests


# Can't modify Library class.
class Library:
    def do(self):
        self._request('example.com')
        # Some other code here..

    def _request(self, url):
        return requests.get(url).text


# Monkey-patched to this method.
async def new_request(self, url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()


async def main():
    library = Library()
    # Do monkey-patch.
    library._request = types.MethodType(new_request, library)

    # Call library asynchronously in hope that it will perform requests using aiohttp.
    asyncio.gather(
        library.do(),
        library.do(),
        library.do()
    )
    print('Done.')


asyncio.run(main())
But as expected, it doesn't work. I get TypeError: An asyncio.Future, a coroutine or an awaitable is required on the asyncio.gather call, and also RuntimeWarning: coroutine 'new_request' was never awaited on self._request('example.com').
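The failure mode can be reproduced without any HTTP at all: after the monkey-patch, the synchronous call site inside do just receives a coroutine object that nothing ever awaits. A minimal sketch, with the request replaced by a no-op:

```python
import asyncio
import types


class Library:
    def do(self):
        return self._request('example.com')  # synchronous call site, no await

    def _request(self, url):
        return url


async def new_request(self, url):
    return url


library = Library()
library._request = types.MethodType(new_request, library)
result = library.do()
# the synchronous call site got back a coroutine object, not the response text
print(asyncio.iscoroutine(result))  # True
result.close()  # close it explicitly to avoid the "never awaited" RuntimeWarning
```

This is why patching a sync method with a coroutine function is not enough on its own: every caller up the stack would also have to await the result.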
So the question is: is it possible to make that code work without modifying the Library class' source code? Otherwise, what options do I have to make asynchronous requests using the library?
Yes, it is possible, and you do not even need monkey-patching to achieve it. You can use asyncio.to_thread to run the synchronous do method of Library in a worker thread and await it as a coroutine. The main coroutine should look like this:
async def main():
    library = Library()
    await asyncio.gather(
        asyncio.to_thread(library.do),
        asyncio.to_thread(library.do),
        asyncio.to_thread(library.do)
    )
    print('Done.')
Here asyncio.to_thread wraps the library.do method and returns a coroutine object, which avoids the first error; but you also need an await before asyncio.gather.
NOTE: If you are going to check my answer with the above example, please do not forget to set a valid URL instead of 'example.com'.
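One caveat: asyncio.to_thread was only added in Python 3.9. On 3.7/3.8 the same effect can be sketched with loop.run_in_executor; here a dummy blocking function stands in for library.do:

```python
import asyncio


def work(x):
    # stand-in for a blocking call such as library.do
    return x * 2


async def main():
    loop = asyncio.get_running_loop()
    # None selects the default ThreadPoolExecutor; this is roughly what
    # asyncio.to_thread does under the hood
    return await asyncio.gather(
        *[loop.run_in_executor(None, work, i) for i in range(3)]
    )


print(asyncio.run(main()))  # prints [0, 2, 4]
```

The difference from to_thread is mostly cosmetic: run_in_executor takes positional arguments only, while to_thread also forwards keyword arguments.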
Edit
If you do not want to use threads at all, you can wrap the call in a plain coroutine like the to_async function below and use it in place of asyncio.to_thread. Note, though, that func() here is still a blocking call, so the gathered coroutines will simply run one after another: this satisfies gather's requirement for awaitables, but gives no actual concurrency.
async def to_async(func):
    return func()


async def main():
    library = Library()
    await asyncio.gather(
        to_async(library.do),
        to_async(library.do),
        to_async(library.do),
    )
I am posting another answer, as it solves the problem in a different way. Why not extend the default behavior of the Library class and override the do and _request methods?
class AsyncLibrary(Library):
    async def do(self):
        return await self._request('https://google.com/')

    async def _request(self, url):
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                return await response.text()


async def main():
    library = AsyncLibrary()
    await asyncio.gather(
        library.do(),
        library.do(),
        library.do(),
    )
There is no way to do this with requests without threads, but you can limit the number of threads active at any one time to address your "without using many threads" requirement.
import asyncio
import requests


# Can't modify Library class.
class Library:
    def do(self):
        self._request('http://example.com')

    def _request(self, url):
        return requests.get(url).text


async def as_thread(semaphore, func):
    async with semaphore:  # limit the number of threads active
        await asyncio.to_thread(func)


async def main():
    library = Library()
    semaphore = asyncio.Semaphore(2)  # limit to 2 for example
    tasks = [library.do] * 10  # pretend there are a lot of sites to read
    await asyncio.gather(
        *[as_thread(semaphore, x) for x in tasks]
    )
    print('Done.')


asyncio.run(main())
Related
I want to run many HTTP requests in parallel using python.
I tried the aiohttp module with asyncio.
import aiohttp
import asyncio


async def main():
    async with aiohttp.ClientSession() as session:
        for i in range(10):
            async with session.get('https://httpbin.org/get') as response:
                html = await response.text()
                print('done' + str(i))


loop = asyncio.get_event_loop()
loop.run_until_complete(main())
I expect it to execute all the requests in parallel, but they are executed one by one.
I later solved this using threading, but I would like to know: what's wrong with this approach?
You need to make the requests in a concurrent manner. Currently you have a single task defined by main(), so the HTTP requests run serially within that task.
You could also consider using asyncio.run() if you are on Python 3.7+, which abstracts away creation of the event loop:
import aiohttp
import asyncio


async def getResponse(session, i):
    async with session.get('https://httpbin.org/get') as response:
        html = await response.text()
        print('done' + str(i))


async def main():
    async with aiohttp.ClientSession() as session:
        tasks = [getResponse(session, i) for i in range(10)]  # create a list of tasks
        await asyncio.gather(*tasks)  # execute them concurrently


asyncio.run(main())
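If firing all ten requests at once is too aggressive for the server, a semaphore can cap how many are in flight at a time. A sketch, with asyncio.sleep standing in for the HTTP call:

```python
import asyncio


async def fetch_one(semaphore, i):
    async with semaphore:          # at most 3 coroutines pass this point at once
        await asyncio.sleep(0.01)  # stand-in for session.get(...)
        return i


async def main():
    semaphore = asyncio.Semaphore(3)  # allow 3 concurrent "requests"
    return await asyncio.gather(*(fetch_one(semaphore, i) for i in range(10)))


print(asyncio.run(main()))
```

gather still returns the results in submission order, so bounding the concurrency does not change how you consume the results.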
I am running a program that makes three different requests to a REST API. The data, indicator, and request functions all fetch data from BitMEX's API using a wrapper I've made.
I have used asyncio to try to speed up the process, so that while I am waiting on a response from a previous request, it can begin to make another one.
However, my asynchronous version is not running any quicker for some reason. The code works, and as far as I know I have set everything up correctly. But could there be something wrong with how I am setting up the coroutines?
Here is the asynchronous version:
import time
import asyncio

from bordemwrapper import BitMEXData, BitMEXFunctions

'''
asynchronous I/O
'''


async def data():
    data = BitMEXData().get_ohlcv(symbol='XBTUSD', timeframe='1h',
                                  instances=25)
    await asyncio.sleep(0)
    return data


async def indicator():
    indicator = BitMEXData().get_indicator(symbol='XBTUSD',
                                           timeframe='1h', indicator='RSI',
                                           period=20, source='close',
                                           instances=25)
    await asyncio.sleep(0)
    return indicator


async def request():
    request = BitMEXFunctions().get_price()
    await asyncio.sleep(0)
    return request


async def chain():
    data_ = await data()
    indicator_ = await indicator()
    request_ = await request()
    return data_, indicator_, request_


async def main():
    await asyncio.gather(chain())


if __name__ == '__main__':
    start = time.perf_counter()
    asyncio.run(main())
    end = time.perf_counter()
    print('process finished in {} seconds'.format(end - start))
Unfortunately, asyncio isn't magic. Although you've put them in async functions, the BitMEXData().get_<foo> functions are not themselves async (i.e. you can't await them), and therefore block while they run. The concurrency in asyncio can only occur while awaiting something.
You'll need a library which makes the actual HTTP requests asynchronously, like aiohttp. It sounds like you wrote bordemwrapper yourself - you should rewrite the get_<foo> functions to use asynchronous HTTP requests. Feel free to submit a separate question if you need help with that.
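As a stopgap before rewriting the wrapper, the three blocking calls can at least run concurrently in worker threads via asyncio.to_thread (Python 3.9+). A sketch, with a dummy blocking function standing in for the BitMEX calls:

```python
import asyncio
import time


def blocking_call(name):
    # stand-in for BitMEXData().get_ohlcv(...) etc., which block on HTTP
    time.sleep(0.1)
    return name


async def main():
    # each blocking call runs in its own worker thread, concurrently
    return await asyncio.gather(
        asyncio.to_thread(blocking_call, 'data'),
        asyncio.to_thread(blocking_call, 'indicator'),
        asyncio.to_thread(blocking_call, 'price'),
    )


results = asyncio.run(main())
print(results)
```

With three 0.1-second calls this finishes in roughly 0.1 seconds instead of 0.3, which is the speedup the chain() version was hoping for but could not deliver, since await alone never overlaps blocking work.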
I need to parse the content of one link repeatedly. The synchronous way gives me 2-3 responses per second; I need it faster (and yes, I know that too fast is bad too).
I found some async examples, but all of them show how to handle the results after all links are parsed, whereas I need to parse each response immediately after receiving it, something like the code below. But this code doesn't give any speed improvement:
import aiohttp
import asyncio
import time


async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()


async def main():
    while True:
        async with aiohttp.ClientSession() as session:
            html = await fetch(session, 'https://example.com')
            print(time.time())
            # do_something_with_html(html)


loop = asyncio.get_event_loop()
loop.run_until_complete(main())
but this code doesn't give any speed improvement
asyncio (and async/concurrency in general) gives speed improvement for I/O things that interleave each other.
When everything you do is await something and you never create any parallel tasks (using asyncio.create_task(), asyncio.ensure_future() etc.) then you are basically doing the classic synchronous programming :)
So, how to make the requests faster:
import aiohttp
import asyncio
import time


async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()


async def check_link(session):
    html = await fetch(session, 'https://example.com')
    print(time.time())
    # do_something_with_html(html)


async def main():
    async with aiohttp.ClientSession() as session:
        while True:
            asyncio.create_task(check_link(session))
            await asyncio.sleep(0.05)


asyncio.run(main())
Notice: the async with aiohttp.ClientSession() as session: must be above (outside) the while True: loop for this to work. Actually, having a single ClientSession() for all your requests is good practice anyway.
I gave up on async; threading solved my problem, thanks to this answer:
https://stackoverflow.com/a/23102874/5678457
from threading import Thread
import requests
import time


class myClassA(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.daemon = True
        self.start()

    def run(self):
        while True:
            r = requests.get('https://ex.com')
            print(r.status_code, time.time())


for i in range(5):
    myClassA()
I need to make asynchronous requests using the Requests library. In Python 3.7, if I try from requests import async I get SyntaxError: invalid syntax.
async has become a reserved keyword in Python 3.7. How do I get around this situation?
Lukasa, who works on the requests library, said:
At the current time there are no plans to support async and await. This is not because they aren't a good idea: they are. It's because to use them requires quite substantial code changes.
Right now requests is a purely synchronous library that, at the bottom of its stack, uses httplib to send and receive data. We cannot move to an async model unless we replace httplib. The best we could do is provide a shorthand to run a request in a thread, but asyncio already has just such a shorthand, so I don't believe it would be valuable.
Right now I am quietly looking at whether we can rewrite requests to work just as well in a synchronous environment as in an async one. However, the reality is that doing so will be a lot of work, involving rewriting a lot of our stack, and may not happen for many years, if ever.
But don't worry aiohttp is very similar to requests.
Here's an example.
import aiohttp
import asyncio


async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()


async def main():
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, 'http://python.org')
        print(html)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
You can use asyncio to make asynchronous requests. Here is an example:
import asyncio
import requests


async def main():
    loop = asyncio.get_event_loop()
    futures = [
        loop.run_in_executor(
            None,
            requests.get,
            'http://example.org/'
        )
        for i in range(20)
    ]
    for response in await asyncio.gather(*futures):
        pass


loop = asyncio.get_event_loop()
loop.run_until_complete(main())
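Passing None as the executor uses the default thread pool; to control how many threads are spawned, you can pass your own ThreadPoolExecutor. A sketch, with a dummy function in place of requests.get:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def blocking_fetch(i):
    # stand-in for requests.get(...)
    return i * i


async def main():
    loop = asyncio.get_running_loop()
    # cap the pool at 4 worker threads instead of the default size
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [loop.run_in_executor(pool, blocking_fetch, i) for i in range(8)]
        return await asyncio.gather(*futures)


print(asyncio.run(main()))
```

This is a middle ground between the pure-asyncio and pure-threading answers above: the event loop stays in charge, but the blocking work is throttled to a fixed number of threads.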
I've been using asyncio and the HTTP request package aiohttp recently, and I've run into a problem.
My application talks to a REST API.
For some API endpoints it makes sense to dispatch multiple requests in parallel, e.g. sending different queries to the same endpoint to get different data.
For other endpoints, though, this doesn't make sense: the endpoint always takes the same arguments (authentication) and returns the requested information, so there is no point asking for the same data multiple times before the server has responded once. For these endpoints I need to enforce a 'serial' flow of requests, in which my program can only send a request when it is not already waiting for a response (the typical behavior of blocking requests).
Of course, I don't want to block.
This is an abstraction of what I intend to do: essentially, wrap the endpoint in an async generator that enforces this serial behavior.
I feel like I'm reinventing the wheel, Is there a common solution to this issue?
import asyncio
from time import sleep


# Encapsulate the idea of an endpoint that can't handle multiple requests
async def serialendpoint():
    count = 0
    while True:
        count += 1
        await asyncio.sleep(2)
        yield str(count)


# Pretend client object
class ExampleClient(object):
    gen = serialendpoint()

    # Simulate a http request that sends multiple requests
    async def simulate_multiple_http_requests(self):
        print(await self.gen.asend(None))
        print(await self.gen.asend(None))
        print(await self.gen.asend(None))
        print(await self.gen.asend(None))


async def other_stuff():
    for _ in range(6):
        await asyncio.sleep(1)
        print('doing async stuff')


client = ExampleClient()
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(client.simulate_multiple_http_requests(),
                                       client.simulate_multiple_http_requests(),
                                       other_stuff()))
outputs
doing async stuff
1
doing async stuff
doing async stuff
2
doing async stuff
doing async stuff
3
doing async stuff
4
5
6
7
8
Update
This is the actual async generator I implemented. All the endpoints that require serial behavior get assigned a serial_request_async_generator during the import phase, which meant I couldn't initialize them with an await async_gen.asend(None), as await is only allowed inside a coroutine. The compromise is that every serial request at runtime must .asend(None) before asending the actual arguments. There must be a better way!
async def serial_request_async_generator():
    args, kwargs = yield
    while True:
        yield await request(*args, **kwargs)  # request is an aiohttp request
        args, kwargs = yield