Make Python Async Requests Faster

I am writing a get method that takes an array of ids and then makes a request for each id. The array can contain 500+ ids, and right now the requests take 20+ minutes. I have tried several async approaches, including aiohttp and asyncio, and neither has made the requests faster. Here is my code:
async def get(self):
    self.set_header("Access-Control-Allow-Origin", "*")
    story_list = []
    duplicates = []
    loop = asyncio.get_event_loop()
    ids = loop.run_in_executor(None, requests.get, 'https://hacker-news.firebaseio.com/v0/newstories.json?print=pretty')
    response = await ids
    response_data = response.json()
    print(response.text)
    for url in response_data:
        if url not in duplicates:
            duplicates.append(url)
            stories = loop.run_in_executor(None, requests.get, "https://hacker-news.firebaseio.com/v0/item/{}.json?print=pretty".format(url))
            data = await stories
            if data.status_code == 200 and len(data.text) > 5:
                print(data.status_code)
                print(data.text)
                story_list.append(data.json())
Is there a way I can use multithreading to make the requests faster?

The main issue here is that the code isn't really async.
After getting your list of URLs, you are fetching them one at a time and awaiting each response.
A better idea would be to filter out the duplicates (use a set) before queuing all of the URLs in the executor and awaiting all of them to finish, e.g.:
async def get(self):
    self.set_header("Access-Control-Allow-Origin", "*")
    stories = []
    loop = asyncio.get_event_loop()
    # Single executor to share resources
    executor = ThreadPoolExecutor()
    # Get the initial set of ids
    response = await loop.run_in_executor(executor, requests.get, 'https://hacker-news.firebaseio.com/v0/newstories.json?print=pretty')
    response_data = response.json()
    print(response.text)
    # Putting them in a set will remove duplicates
    urls = set(response_data)
    # Build the set of futures (returned by run_in_executor) and wait for them all to complete
    responses = await asyncio.gather(*[
        loop.run_in_executor(
            executor, requests.get,
            "https://hacker-news.firebaseio.com/v0/item/{}.json?print=pretty".format(url)
        ) for url in urls
    ])
    # Process the responses
    for response in responses:
        if response.status_code == 200 and len(response.text) > 5:
            print(response.status_code)
            print(response.text)
            stories.append(response.json())
    return stories
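
If you would rather avoid the thread pool entirely, the same fan-out can be written with aiohttp, which the question already mentions trying. Below is a minimal standalone sketch, not a drop-in replacement for the handler above: it is a plain script using the same Hacker News endpoints, and the helper names are my own.

import asyncio

import aiohttp

NEW_STORIES_URL = "https://hacker-news.firebaseio.com/v0/newstories.json"
ITEM_URL = "https://hacker-news.firebaseio.com/v0/item/{}.json"

async def fetch_json(session, url):
    # Return the parsed JSON body, or None on a non-200 response.
    async with session.get(url) as response:
        if response.status == 200:
            return await response.json()
        return None

async def fetch_stories():
    async with aiohttp.ClientSession() as session:
        ids = await fetch_json(session, NEW_STORIES_URL)
        # A set removes duplicate ids before fanning out all requests at once.
        tasks = [fetch_json(session, ITEM_URL.format(item_id)) for item_id in set(ids)]
        stories = await asyncio.gather(*tasks)
        # Drop items that failed or came back empty.
        return [story for story in stories if story]

if __name__ == "__main__":
    print(len(asyncio.run(fetch_stories())))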

Related

Unable to asynchronously make requests using aiohttp and asyncio

I have written a script where draft orders are created in Shopify, and the id from the first request is then used in the URL of the second request, which completes the order. Below is just a portion of the script. However, there are delays while the response is being fetched, and because of this the draft order only sometimes gets completed:
url=f"https://{shop_url}/admin/api/{api_version}/draft_orders.json"
headers = {"X-Shopify-Access-Token": private_app_password}
counter = count(start=1)
for _ in range(number_of_orders):
order = get_order(
line_items_list, locale="en_US", country="United States"
)
response = requests.post(url, json=order, headers=headers)
data = response.json()
# complete order
url2 = f"https://{shop_url}/admin/api/{api_version}/draft_orders/{data['draft_order']['id']}/complete.json"
requests.put(url2,headers=headers)
The problem seems to be the delay that happens while the first response is fetched, which is why I tried to wrap my API calls in async fetch functions, but the same thing is still occurring. That portion of the script is given below:
async def fetch(session, url, order, headers):
    async with session.post(url, headers=headers, json=order) as response:
        return await response.json()

async def get_draft_order(url, order, headers):
    async with aiohttp.ClientSession() as session:
        data = await fetch(session, url, order, headers)
        url2 = f"https://{shop_url}/admin/api/{api_version}/draft_orders/{data['draft_order']['id']}/complete.json"
        await session.put(url2, headers=headers, json=data)

def create_orders():
    # POST request
    url = f"https://{shop_url}/admin/api/{api_version}/draft_orders.json"
    headers = {"X-Shopify-Access-Token": private_app_password}
    counter = count(start=1)
    for _ in range(number_of_orders):
        order = get_order(
            line_items_list, locale="en_US", country="United States"
        )
        asyncio.run(get_draft_order(url, order, headers))
Could someone help me understand what is wrong with the way I have implemented this? The second request depends on the id returned by the first request.
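
One likely culprit here is that asyncio.run() is called once per order inside the loop, so the draft orders are still created and completed strictly one after another. A hedged sketch of scheduling all the orders concurrently, while keeping the create-then-complete dependency inside each coroutine (get_order, shop_url, api_version, private_app_password, line_items_list and number_of_orders are the question's own names and are assumed to be defined elsewhere in that script):

import asyncio
import aiohttp

async def create_and_complete(session, url, order, headers):
    # The completion call needs the draft id from the first response,
    # so these two requests stay sequential inside this coroutine.
    async with session.post(url, headers=headers, json=order) as response:
        data = await response.json()
    url2 = f"https://{shop_url}/admin/api/{api_version}/draft_orders/{data['draft_order']['id']}/complete.json"
    async with session.put(url2, headers=headers) as completed:
        return await completed.json()

async def create_orders_async():
    url = f"https://{shop_url}/admin/api/{api_version}/draft_orders.json"
    headers = {"X-Shopify-Access-Token": private_app_password}
    orders = [get_order(line_items_list, locale="en_US", country="United States")
              for _ in range(number_of_orders)]
    async with aiohttp.ClientSession() as session:
        # Different orders run concurrently; asyncio.run is called only once.
        return await asyncio.gather(*(create_and_complete(session, url, order, headers)
                                      for order in orders))

# asyncio.run(create_orders_async())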

Parallelize checking of dead URLs

The question is quite simple: is it possible to test a list of URLs and store only the dead ones (response code > 400) in a list, using an asynchronous function?
I previously used the requests library to do this and it works great, but I have a big list of URLs to test, and doing it sequentially takes more than an hour.
I have seen a lot of articles on how to make parallel requests using asyncio and aiohttp, but not much about how to test URLs with these libraries.
Is it possible to do?
Using multithreading you could do it like this:
import requests
from concurrent.futures import ThreadPoolExecutor

results = dict()

# Test the given URL.
# Add the URL and status code to the results dictionary if the GET succeeds but the status code is >= 400.
# Also add the URL to the results dictionary if an exception arises, with full exception details.
def test_url(url):
    try:
        r = requests.get(url)
        if r.status_code >= 400:
            results[url] = f'{r.status_code=}'
    except requests.exceptions.RequestException as e:
        results[url] = str(e)

# Return a list of URLs to be checked. Probably get these from a file in reality.
def get_list_of_urls():
    return ['https://facebook.com', 'https://google.com', 'http://google.com/nonsense', 'http://goooglyeyes.org']

def main():
    with ThreadPoolExecutor() as executor:
        executor.map(test_url, get_list_of_urls())
    print(results)

if __name__ == '__main__':
    main()
You could do something like this using aiohttp and asyncio.
It could be done more pythonically, I guess, but this should work.
import aiohttp
import asyncio

urls = ['url1', 'url2']

async def test_url(session, url):
    async with session.get(url) as resp:
        if resp.status > 400:
            return url

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = []
        for url in urls:
            tasks.append(asyncio.ensure_future(test_url(session, url)))
        dead_urls = await asyncio.gather(*tasks)
        print(dead_urls)

asyncio.run(main())
Very basic example, but this is how I would solve it:
from aiohttp import ClientSession
from asyncio import create_task, gather, run

async def TestUrl(url, session):
    async with session.get(url) as response:
        if response.status >= 400:
            r = await response.text()
            print(f"Site: {url} is dead, response code: {str(response.status)} response text: {r}")

async def TestUrls(urls):
    resultsList: list = []
    async with ClientSession() as session:
        # Maybe some rate limiting?
        partitionTasks: list = [
            create_task(TestUrl(url, session))
            for url in urls]
        resultsList.append(await gather(*partitionTasks, return_exceptions=False))
    # do stuff with the results or return?
    return resultsList

async def main():
    urls = []
    test = await TestUrls(urls)

if __name__ == "__main__":
    run(main())
Try using a ThreadPoolExecutor
from concurrent.futures import ThreadPoolExecutor

import requests

url_list = [
    "https://www.google.com",
    "https://www.adsadasdad.com",
    "https://www.14fsdfsff.com",
    "https://www.ggr723tg.com",
    "https://www.yyyyyyyyyyyyyyy.com",
    "https://www.78sdf8sf5sf45sf.com",
    "https://www.wikipedia.com",
    "https://www.464dfgdfg235345.com",
    "https://www.tttllldjfh.com",
    "https://www.qqqqqqqqqq456.com"
]

def check(url):
    r = requests.get(url)
    if r.status_code < 400:
        print(f"{url} is ALIVE")

with ThreadPoolExecutor(max_workers=5) as e:
    for url in url_list:
        e.submit(check, url)
Multiprocessing could be the better option for your problem.
from multiprocessing import Process
from multiprocessing import Manager
import requests

def checkURLStatus(url, url_status):
    res = requests.get(url)
    if res.status_code >= 400:
        url_status[url] = "Inactive"
    else:
        url_status[url] = "Active"

if __name__ == "__main__":
    urls = [
        "https://www.google.com"
    ]
    manager = Manager()
    # to store the results for later usage
    url_status = manager.dict()

    procs = []
    for url in urls:
        proc = Process(target=checkURLStatus, args=(url, url_status))
        procs.append(proc)
        proc.start()

    for proc in procs:
        proc.join()
    print(url_status.values())
url_status is a shared variable for collecting results from the separate processes; see the multiprocessing documentation on Manager objects for more info.

Asynchronous requests backoff/throttling best practice

Scenario: I need to gather paginated data from a web app's API, which has a call limit of 100 per minute. The API object I need to return contains 100 items per page across 105 (and growing) pages, roughly 10,500 items in total. The synchronous code took approximately 15 minutes to retrieve all the pages, so there was no worry about hitting the call limit then. However, I wanted to speed up the data retrieval, so I implemented asynchronous calls using asyncio and aiohttp. Data now downloads in 15 seconds - nice.
Problem: I'm now hitting the call limit and receiving 403 errors for the last 5 or so calls.
Proposed solution: I implemented the try/except found in the get_data() function. I make the calls, and when a call fails with 403: Exceeded call limit, I back off for back_off seconds and retry up to retries times:
async def get_data(session, url):
    retries = 3
    back_off = 60  # seconds to wait before trying again
    for _ in range(retries):
        try:
            async with session.get(url, headers=headers) as response:
                if response.status != 200:
                    response.raise_for_status()
                print(retries, response.status, url)
                return await response.json()
        except aiohttp.client_exceptions.ClientResponseError as e:
            retries -= 1
            await asyncio.sleep(back_off)
            continue

async def main():
    async with aiohttp.ClientSession() as session:
        attendee_urls = get_urls('attendee')  # returns list of URLs to call asynchronously in get_data()
        attendee_data = await asyncio.gather(*[get_data(session, attendee_url) for attendee_url in attendee_urls])
        return attendee_data

if __name__ == '__main__':
    data = asyncio.run(main())
Question: How do I limit the aiohttp calls so that they stay under the 100 calls/minute threshold without making a 403 request to back off? I've tried the following modules and none of them appeared to do anything: ratelimiter, ratelimit and asyncio-throttle.
Goal: make at most 100 async calls per minute, backing off and retrying if necessary (403: Exceeded call limit).
You can achieve "at most 100 requests/min" by adding a delay before every request.
100 requests/min is equivalent to 1 request/0.6s.
async def main():
    async with aiohttp.ClientSession() as session:
        attendee_urls = get_urls('attendee')  # returns list of URLs to call asynchronously in get_data()
        coroutines = []
        for attendee_url in attendee_urls:
            # create_task schedules the request immediately, so the 0.6 s sleep
            # actually spaces out the calls instead of only delaying the gather.
            coroutines.append(asyncio.create_task(get_data(session, attendee_url)))
            await asyncio.sleep(0.6)
        attendee_data = await asyncio.gather(*coroutines)
        return attendee_data
Apart from the request rate limit, APIs often also limit the number of simultaneous requests. If so, you can use a BoundedSemaphore.
async def main():
    sema = asyncio.BoundedSemaphore(50)  # Assuming a concurrent requests limit of 50
    ...
    coroutines.append(get_data(sema, session, attendee_url))
    ...

async def get_data(sema, session, attendee_url):
    ...
    for _ in range(retries):
        try:
            async with sema:
                response = await session.get(url, headers=headers)
                if response.status != 200:
                    response.raise_for_status()
            ...
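
For completeness, here is one way the two ideas above (a 0.6 s spacing between request launches and a BoundedSemaphore concurrency cap) might be combined into a self-contained sketch; the semaphore size, retry settings and URL list are assumptions, not values from the original question:

import asyncio
import aiohttp

async def get_data(sema, session, url, retries=3, back_off=60):
    for _ in range(retries):
        try:
            # The semaphore caps how many requests are in flight at the same time.
            async with sema:
                async with session.get(url) as response:
                    response.raise_for_status()
                    return await response.json()
        except aiohttp.ClientResponseError:
            # e.g. a 403 "Exceeded call limit": wait before retrying.
            await asyncio.sleep(back_off)
    return None

async def main(urls):
    sema = asyncio.BoundedSemaphore(50)  # assumed concurrent-request limit
    async with aiohttp.ClientSession() as session:
        tasks = []
        for url in urls:
            tasks.append(asyncio.create_task(get_data(sema, session, url)))
            await asyncio.sleep(0.6)  # roughly 100 request launches per minute
        return await asyncio.gather(*tasks)

# data = asyncio.run(main(list_of_urls))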

How to paginate through an API response asynchronously with asyncio and aiohttp

I'm trying to make API calls with Python asynchronously. I have multiple endpoints in a list and each endpoint returns paginated results. I'm able to go through the multiple endpoints asynchronously, however I am not able to return the paginated results of each endpoint.
From debugging, I found that the fetch_more() function runs the while loop but doesn't actually get past the async with session.get(). So basically, fetch_more() is intended to get the remaining results from the API call for each endpoint, yet I find that the same number of results is returned with or without it. I've tried looking for examples of pagination with asyncio but have not had much luck.
From my understanding, I should not be making a request inside a while loop; however, I'm not sure how to get around that in order to fetch the paginated results.
if __name__ == '__main__':
    starter_func(url, header, endpoints)

def starter_func(url, header, endpoints):
    loop = asyncio.get_event_loop()  # event loop
    future = asyncio.ensure_future(fetch_all(url, header, endpoints))
    loop.run_until_complete(future)  # loop until done

async def fetch_all(url, header, endpoints):
    async with ClientSession() as session:
        tasks = []
        for endpoint in endpoints:
            task = asyncio.ensure_future(fetch(url, header, endpoint))
            tasks.append(task)
        res = await asyncio.gather(*tasks)  # gather task responses
        return res

async def fetch(url, header, endpoint):
    total_tasks = []
    async with session.get(url, headers=header, params=params, ssl=False) as response:
        response_json = await response.json()
        data = response_json['key']
        # this is where I am getting stuck
        tasks = asyncio.ensure_future(fetch_more(response_json, data, params, header, url, endpoint, session))
        total_tasks.append(tasks)
    return data

# function to get paginated results of an api endpoint
# this is where I am getting stuck
async def fetch_more(response_json, data, params, header, url, endpoint, session):
    while len(response_json['key']) >= params['limit']:
        params['offset'] = response_json['offset'] + len(response_json['key'])
        async with session.get(url, headers=header, params=params, ssl=False) as response_continued:
            resp_continued_json = await response_continued.json()
            data.extend(resp_continued_json[kebab_to_camel(endpoint)])
    return data
Currently I am getting 1000 results with or without the fetch_more function, but it should be a lot more with fetch_more. Any idea how to approach paginating asynchronously?
from aiohttp import web

async def fetch(self, size: int = 10):
    data = "some code to fetch data here"

    def paginate(_data, _size):
        import itertools
        while True:
            i1, i2 = itertools.tee(_data)
            _data, page = (itertools.islice(i1, _size, None),
                           list(itertools.islice(i2, _size)))
            if len(page) == 0:
                break
            yield page

    return web.json_response(list(paginate(_data=data, _size=size)))
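
Another way to approach the original problem is to keep the page loop inside a single coroutine per endpoint (each offset depends on the previous response, so the pages themselves have to be fetched sequentially) and let the different endpoints run concurrently. A hedged sketch along those lines, assuming the endpoint is appended to the base URL as a path segment and borrowing the 'key', 'limit' and 'offset' names from the question:

import asyncio
import aiohttp

async def fetch_all_pages(session, url, header, endpoint, limit=1000):
    # Pages are fetched sequentially because each offset depends on the
    # previous response; concurrency comes from running endpoints in parallel.
    params = {"limit": limit, "offset": 0}
    data = []
    while True:
        async with session.get(f"{url}/{endpoint}", headers=header,
                               params=params, ssl=False) as response:
            response_json = await response.json()
        page = response_json["key"]
        data.extend(page)
        if len(page) < limit:  # a short page means the last page was reached
            break
        params["offset"] += len(page)
    return data

async def fetch_endpoints(url, header, endpoints):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_all_pages(session, url, header, endpoint) for endpoint in endpoints]
        return await asyncio.gather(*tasks)

# results = asyncio.run(fetch_endpoints(url, header, endpoints))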

Multithreading Python Requests Through Tor

The following code is my attempt at making Python requests through Tor. This works fine, but I am interested in adding multithreading to it.
So I would like to simultaneously do about 10 different requests and process their outputs. What is the simplest and most efficient way to do this?
def onionrequest(url, onionid):
    onionid = onionid
    session = requests.session()
    session.proxies = {}
    session.proxies['http'] = 'socks5h://localhost:9050'
    session.proxies['https'] = 'socks5h://localhost:9050'
    #r = session.get('http://google.com')

    onionurlforrequest = "http://" + url

    try:
        r = session.get(onionurlforrequest, timeout=15)
    except:
        return None
    if r.status_code == 200:
        listofallonions.append(url)
I would recommend using the following packages to achieve this: asyncio, aiohttp, aiohttp_socks.
Example code:
import asyncio
import aiohttp
from aiohttp_socks import ProxyConnector

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def main(urls):
    tasks = []
    # rdns=True makes the proxy resolve hostnames, which Tor needs for .onion addresses
    connector = ProxyConnector.from_url('socks5://localhost:9150', rdns=True)
    async with aiohttp.ClientSession(connector=connector) as session:
        for url in urls:
            tasks.append(fetch(session, url))
        htmls = await asyncio.gather(*tasks)
        for html in htmls:
            print(html)

if __name__ == '__main__':
    urls = [
        'http://python.org',
        'https://google.com',
        ...
    ]
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main(urls))
Using asyncio can get a bit daunting at first, so you might need to practice for a while before you get the hang of it.
If you want a more in-depth explanation of the difference between synchronous and asynchronous, check out this question.
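
If you would rather stay with requests and real threads, as the question literally asks, a ThreadPoolExecutor sketch built around the question's own onionrequest logic might look like this; it assumes PySocks (requests[socks]) is installed, which the question's proxy settings already imply, and it returns results instead of appending to a global list:

from concurrent.futures import ThreadPoolExecutor

import requests

PROXIES = {
    "http": "socks5h://localhost:9050",
    "https": "socks5h://localhost:9050",
}

def check_onion(url):
    # Each worker thread uses its own session routed through the Tor SOCKS proxy.
    session = requests.Session()
    session.proxies = PROXIES
    try:
        r = session.get("http://" + url, timeout=15)
    except requests.exceptions.RequestException:
        return None
    return url if r.status_code == 200 else None

def check_all(urls, workers=10):
    # Run up to `workers` requests at the same time and keep the live onions.
    with ThreadPoolExecutor(max_workers=workers) as executor:
        results = executor.map(check_onion, urls)
    return [url for url in results if url]

# listofallonions = check_all(onion_urls)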
