How to send many requests in minimal time with Python 3 - python

My task is to send 30-100 POST requests to one URL at one exact, precise moment in time, for example at 13:00:00.550 with an accuracy of a few milliseconds.
The requests differ from each other (there are several types, for example 10 types), and each type must be sent 5 times.
My problem is sending the HTTP requests quickly enough. What is the fastest way to send 30-100 POST requests in minimal time?
I tried to use asyncio and httpx.AsyncClient to do it.
Here is the relevant part of the code:
from datetime import datetime
import asyncio
import logging

import httpx

logger = logging.getLogger(__name__)


async def async_post(request_data):
    time_to_sleep = 0.005
    action_time = '13:00:00'
    time_microseconds = 550000
    async with httpx.AsyncClient(cookies=request_data['cookies']) as client:
        # wait until the target second is reached
        while True:
            now_time_second = datetime.now().strftime('%H:%M:%S')
            if action_time == now_time_second:
                break
            await asyncio.sleep(0.05)
        # wait until the target microsecond offset within that second
        while True:
            now_time_microsecond = int(datetime.now().strftime('%f'))
            if now_time_microsecond >= time_microseconds:
                break
            await asyncio.sleep(0.003)
        # send the same request 5 times
        for _ in range(5):
            response = await client.post(request_data['url'],
                                         headers=request_data['headers'],
                                         params=request_data['params'],
                                         data=request_data['data'],
                                         timeout=60)
            logger.info('Time: ' + str(datetime.now().strftime('%H:%M:%S.%f')))
            logger.info('Text: ' + str(response.text))
            logger.info('Response time: ' + str(response.headers['Date']))
            await asyncio.sleep(time_to_sleep)


def main():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(
        asyncio.gather(*[async_post(request_data) for request_data in all_requests_data]))
Here all_requests_data is a list of all the request types, and request_data is a dict that contains the data for one request.
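For illustration, one request_data entry could look roughly like this (only the key names come from the code above; the values are placeholders):

request_data = {
    'url': 'https://example.com/endpoint',   # hypothetical target URL
    'cookies': {'session': 'placeholder'},
    'headers': {'Content-Type': 'application/x-www-form-urlencoded'},
    'params': {'type': '1'},
    'data': {'field': 'value'},
}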
As a result, the time between requests can reach 70-200 ms. That's a lot, and it doesn't work for me.
It's not server lag either: with another application I could see that the server answers within a few milliseconds, so the problem is not on the server side.
How can I send the requests faster?
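For context, the five posts inside async_post() are awaited one after another, so each request waits for the previous response. A minimal sketch, assuming the order of the five copies does not matter, of firing them concurrently with asyncio.gather (reusing the names from the snippet above) would be:

# Sketch only: fire the 5 copies of one request type concurrently instead of sequentially.
async def fire_copies(client, request_data, copies=5):
    async def one_post():
        response = await client.post(request_data['url'],
                                     headers=request_data['headers'],
                                     params=request_data['params'],
                                     data=request_data['data'],
                                     timeout=60)
        logger.info('Time: %s', datetime.now().strftime('%H:%M:%S.%f'))
        return response

    # all copies are in flight at the same time; this returns once every response arrived
    return await asyncio.gather(*[one_post() for _ in range(copies)])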

Related

How do I get Python to send as many concurrent HTTP requests as possible?

I'm trying to send HTTPS requests as quickly as possible. I know this would have to be concurrent requests, because my goal is 150 to 500+ requests a second. I've searched everywhere, but I can't find a Python 3.11+ answer, or one that doesn't give me errors. I'm trying to avoid AIOHTTP, as the rigmarole of setting it up was a pain and it didn't even work.
The input should be an array of URLs and the output an array of the HTML strings.
It's quite unfortunate that you couldn't set up AIOHTTP properly, because it is one of the most efficient ways to do asynchronous requests in Python.
The setup is not that hard:
import asyncio
from time import perf_counter

import aiohttp


def urls(n_reqs: int):
    for _ in range(n_reqs):
        yield "https://python.org"


async def get(session: aiohttp.ClientSession, url: str):
    async with session.get(url) as response:
        _ = await response.text()


async def main(n_reqs: int):
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(
            *[get(session, url) for url in urls(n_reqs)]
        )


if __name__ == "__main__":
    n_reqs = 10_000

    start = perf_counter()
    asyncio.run(main(n_reqs))
    end = perf_counter()

    print(f"{n_reqs / (end - start)} req/s")
You basically need to create a single ClientSession, which you then reuse to send the GET requests. The requests are made concurrently thanks to asyncio.gather(). You could also use the newer asyncio.TaskGroup:
async def main(n_reqs: int):
    async with aiohttp.ClientSession() as session:
        async with asyncio.TaskGroup() as group:
            for url in urls(n_reqs):
                group.create_task(get(session, url))
This easily achieves 500+ requests per second on my 7+ year old dual-core computer. Contrary to what other answers suggested, this solution does not require spawning thousands of threads, which are expensive.
You may improve the speed even more by using a custom connector in order to allow more concurrent connections (the default is 100) in a single session:
async def main(n_reqs: int):
    connector = aiohttp.TCPConnector(limit=0)
    async with aiohttp.ClientSession(connector=connector) as session:
        ...
Hope this helps.
This question asked for the fastest way to send 10,000 HTTP requests. I observed 15,000 requests in 10 s, using Wireshark to capture on localhost, saving the packets to CSV and counting only the packets that contained GET.
FILE: a.py
from treq import get
from twisted.internet import reactor


def done(response):
    if response.code == 200:
        get("http://localhost:3000").addCallback(done)


get("http://localhost:3000").addCallback(done)

reactor.callLater(10, reactor.stop)
reactor.run()
Run test like this:
pip3 install treq
python3 a.py # code from above
Set up the test website like this; mine was on port 3000:
mkdir myapp
cd myapp
npm init
npm install express
node app.js
FILE: app.js
const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`)
})
OUTPUT
grep GET wireshark.csv | head
"5","0.000418","::1","::1","HTTP","139","GET / HTTP/1.1 "
"13","0.002334","::1","::1","HTTP","139","GET / HTTP/1.1 "
"17","0.003236","::1","::1","HTTP","139","GET / HTTP/1.1 "
"21","0.004018","::1","::1","HTTP","139","GET / HTTP/1.1 "
"25","0.004803","::1","::1","HTTP","139","GET / HTTP/1.1 "
grep GET wireshark.csv | tail
"62145","9.994184","::1","::1","HTTP","139","GET / HTTP/1.1 "
"62149","9.995102","::1","::1","HTTP","139","GET / HTTP/1.1 "
"62153","9.995860","::1","::1","HTTP","139","GET / HTTP/1.1 "
"62157","9.996616","::1","::1","HTTP","139","GET / HTTP/1.1 "
"62161","9.997307","::1","::1","HTTP","139","GET / HTTP/1.1 "
This works, getting around 250+ requests a second.
This solution does work on Windows 10. You may have to pip install requests (concurrent.futures is part of the standard library).
import time
import concurrent.futures

import requests

start = int(time.time())  # get time before the requests are sent

urls = []       # input URLs/IPs array
responses = []  # output content of each request as a string in an array

# create a list of 5000 sites to test with
for y in range(5000):
    urls.append("https://example.com")


def send(url):
    responses.append(requests.get(url).content)


with concurrent.futures.ThreadPoolExecutor(max_workers=10000) as executor:
    futures = []
    for url in urls:
        futures.append(executor.submit(send, url))

end = int(time.time())  # get time after everything finishes
print(str(round(len(urls) / (end - start), 0)) + "/sec")  # get average requests per second
Output:
286.0/sec
Note: If your code requires something extremely time dependent, replace the middle part with this:
with concurrent.futures.ThreadPoolExecutor(max_workers=10000) as executor:
    futures = []
    for url in urls:
        futures.append(executor.submit(send, url))
    for future in concurrent.futures.as_completed(futures):
        responses.append(future.result())
This is a modified version of an example shown on this site.
The secret sauce is max_workers=10000; otherwise it would average about 80/sec. However, setting it beyond 1000 gave no further boost in speed.

Python post request display message if response taking longer than x seconds

I have the following Python code that fetches data from a remote JSON file. Processing the remote JSON file can sometimes be quick and sometimes take a little while, so I print a "Please wait" message before the POST request. This works fine. However, for the requests that are quick, the "Please wait" is pointless. Is there a way to display the "Please wait" message only if the request is taking longer than x seconds?
try:
    print("Please wait")
    r = requests.post(url="http://localhost/test.php")
    r_data = r.json()
You can do it using multiple threads as follows:
import threading
from time import sleep

import requests

isDone = False  # variable to track the request status


def th():
    sleep(2)  # if the download takes more than 2 seconds
    if not isDone:
        print("Please wait...")


dl_thread = threading.Thread(target=th)  # create a new thread that runs th() once started
dl_thread.start()  # start the thread

r = requests.post(url="http://localhost/test.php")
isDone = True
r_data = r.json()
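An alternative sketch (not from the original answer) is to use threading.Timer, which can simply be cancelled once the request has returned, so no shared flag is needed:

import threading

import requests

# print the message only if the request is still running after 2 seconds
timer = threading.Timer(2.0, lambda: print("Please wait..."))
timer.start()

r = requests.post(url="http://localhost/test.php")
timer.cancel()  # request finished in time, suppress the message
r_data = r.json()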

Using multithreading for api requests in python

For my project I need to call an API and store the results in a list, but the number of requests I need to make is more than 5000, each with different body values, so it takes a huge amount of time to complete. Is there any way to send the requests in parallel to finish the process more quickly? I tried some threading code for this but couldn't figure out a way to solve it.
import requests

res_list = []
l = [19821, 29674, 41983, 40234, ...]  # nearly 5000 items for now, and the count may increase in future

for i in l:
    URL = "https://api.something.com/?key=xxx-xxx-xxx&job_id={0}".format(i)
    res = requests.get(url=URL)
    res_list.append(res.text)
Probably you just need to make your queries asynchronously. Something like this:
import asyncio

import aiohttp

NUMBERS = [1, 2, 3]


async def call():
    async with aiohttp.ClientSession() as session:
        for num in NUMBERS:
            async with session.get(f'http://httpbin.org/get?{num}') as resp:
                print(resp.status)
                print(await resp.text())


if __name__ == '__main__':
    loop = asyncio.new_event_loop()
    loop.run_until_complete(call())
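Note that the loop above still awaits each response before sending the next request. A hedged variant that actually sends them concurrently (one task per number, sharing the session) could look like this:

import asyncio

import aiohttp

NUMBERS = [1, 2, 3]


# one coroutine per request, all gathered so they run concurrently
async def fetch_one(session: aiohttp.ClientSession, num: int) -> str:
    async with session.get(f'http://httpbin.org/get?{num}') as resp:
        return await resp.text()


async def call_concurrently():
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*[fetch_one(session, num) for num in NUMBERS])
        for text in results:
            print(text)


if __name__ == '__main__':
    asyncio.run(call_concurrently())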

aiohttp set number of requests per second

I'm writing an API in Flask that makes 1000+ requests to get data, and I'd like to limit the number of requests per second. I tried with:
conn = aiohttp.TCPConnector(limit_per_host=20)
and
conn = aiohttp.TCPConnector(limit=20)
but it seems neither works.
My code looks like this:
import logging
import asyncio

import aiohttp

logging.basicConfig(filename="logfilename.log", level=logging.INFO,
                    format='%(asctime)s %(levelname)s:%(message)s')


async def fetch(session, url):
    async with session.get(url, headers=headers) as response:
        if response.status == 200:
            data = await response.json()
            json = data['args']
            return json


async def fetch_all(urls, loop):
    conn = aiohttp.TCPConnector(limit=20)
    async with aiohttp.ClientSession(connector=conn, loop=loop) as session:
        results = await asyncio.gather(*[fetch(session, url) for url in urls],
                                       return_exceptions=True)
        return results


async def main():
    loop = asyncio.new_event_loop()
    url_list = []
    args = ['a', 'b', 'c', +1000 others]
    urls = url_list
    for i in args:
        base_url = 'http://httpbin.org/anything?key=%s' % i
        url_list.append(base_url)
    htmls = loop.run_until_complete(fetch_all(urls, loop))
    for j in htmls:
        key = j['key']
        # save to database
        logging.info(' %s was added', key)
If I run the code, within 1 s I send more than 200 requests. Is there any way to limit the requests?
The code above works as expected (apart from a small error regarding headers being undefined).
Tested on my machine, the httpbin URL responds in around 100 ms, which means that with a concurrency of 20 it will serve around 200 requests in 1 second (which is what you're seeing as well):
100 ms per request means 10 requests are completed in a second
10 requests per second with a concurrency of 20 means 200 requests in one second
The limit option (aiohttp.TCPConnector) limits the number of concurrent requests and does not have any time dimension.
To see the limit in action try with more values like 10, 20, 50:
# time to complete 1000 requests with different keys
aiohttp.TCPConnector(limit=10): 12.58 seconds
aiohttp.TCPConnector(limit=20): 6.57 seconds
aiohttp.TCPConnector(limit=50): 3.1 seconds
If you want a requests-per-second limit, send a batch of requests (20, for example) and use asyncio.sleep(1.0) to pause for a second, then send the next batch, and so on.
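A minimal sketch of that batching idea (the batch size is an assumption, and fetch is the coroutine from the code above):

import asyncio


async def fetch_in_batches(session, urls, batch_size=20):
    results = []
    for start in range(0, len(urls), batch_size):
        batch = urls[start:start + batch_size]
        # send one batch concurrently
        results += await asyncio.gather(*[fetch(session, url) for url in batch])
        # pause before the next batch to stay around batch_size requests per second
        await asyncio.sleep(1.0)
    return results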

Is it possible to send many requests asynchronously with Python

I'm trying to send about 70 requests to the Slack API but can't find a way to implement it asynchronously; I have about 3 seconds for it, otherwise I get a timeout error.
Here is how I've tried to implement it:
import asyncio


def send_msg_to_all(sc, request, msg):
    user_list = sc.api_call(
        "users.list"
    )
    members_array = user_list["members"]
    ids_array = []
    for member in members_array:
        ids_array.append(member['id'])

    real_users = []
    for user_id in ids_array:
        user_channel = sc.api_call(
            "im.open",
            user=user_id,
        )
        if user_channel['ok'] == True:
            real_users.append(User(user_id, user_channel['channel']['id']))

    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(send_msg(sc, real_users, request, msg))
    loop.close()
    return HttpResponse()


async def send_msg(sc, real_users, req, msg):
    for user in real_users:
        send_ephemeral_msg(sc, user.user_id, user.dm_channel, msg)


def send_ephemeral_msg(sc, user, channel, text):
    sc.api_call(
        "chat.postEphemeral",
        channel=channel,
        user=user,
        text=text
    )
But it looks like I'm still doing it in a synchronous way.
Any ideas, guys?
Slack's API has a rate limit of 1 query per second (QPS) as documented here.
Even if you get this working you'll be well exceeding the limits and you will start to see HTTP 429 Too Many Requests errors. Your API token may even get revoked / cancelled if you continue at that rate.
I think you'll need to find a different way.
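To make that concrete, a paced loop like the hedged sketch below (send_ephemeral_msg is the helper from the question; the 1-second interval follows the documented limit) would stay within the rate limit, but at ~70 users it would take about 70 seconds, far more than the 3 seconds available:

import time


def send_msg_paced(sc, real_users, msg, interval=1.0):
    # stay at roughly one chat.postEphemeral call per second
    for user in real_users:
        send_ephemeral_msg(sc, user.user_id, user.dm_channel, msg)
        time.sleep(interval)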
