For some reason multi-threading is not efficient in my code.
My code reads tokens from a txt file and sends a POST request with each token,
and I don't understand why multi-threading isn't making it faster.
It took 2.7 seconds to make 3 POST requests.
Here is my code:
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
from time import time

url_list = [
    "https://www.google.com/api/"
]

tokens = set()
with open("tokens.txt", "r") as f:
    file_lines = f.readlines()
    for line in file_lines:
        tokens.add(line.strip())

token_data = {"token": None}

def makerequest(url):
    for token in tokens:
        token_data["Token"] = token
        html = requests.post(url, stream=True, data=token_data)
        print(html.text)

start = time()

processes = []
with ThreadPoolExecutor(max_workers=200) as executor:
    for url in url_list:
        processes.append(executor.submit(makerequest, url))
    for task in as_completed(processes):
        print(task.result())

print(f'Time taken: {time() - start}')
2.7 seconds to send 3 POST requests doesn't seem like good multi-threading performance to me.
ThreadPoolExecutor doesn't have any special insight into or control over the callables that you submit to it. It can't change how they behave. What it can do is start the callables passed to it on separate threads from each other. Let's have a look at your example:
You have one URL and some quantity of tokens. Every call to makerequest will make a number of requests in series, each one starting after the previous has completed, one for each token. It doesn't use multithreading in any way - whatever thread makerequest is called on, that's the thread that makes all of the requests, one after another.
You loop once per URL - which is to say, you do this only once at all (since you have only one URL) - and invoke executor.submit, telling it to call makerequest for that particular URL. It can do so on a thread in the thread pool, but because you only tell it to make one call, it's only going to make use of one thread. That single thread will call makerequest once, and that invocation of makerequest will make a number of requests all on that same thread, one after another.
If you want the requests to be made in parallel, you will need to break things up more. You could, for example, extract the loop from inside makerequest and make it take a URL and a token. Then you can submit every separate combination of URL and token to the executor separately. As a rough example:
def makerequest(url, token):
    token_data = {"token": token}
    html = requests.post(url, stream=True, data=token_data)
    print(html.text)

# ...

processes = []
with ThreadPoolExecutor(max_workers=200) as executor:
    for url in url_list:
        for token in tokens:
            processes.append(executor.submit(makerequest, url, token))
Minor notes: You use "token" and "Token" interchangeably as keys in your dictionary. That's a recipe for confusion - you should figure out which is correct and stick to it. You also create a global variable token_data and then modify it inside makerequest. This is disastrous with threads - you can't guarantee that one thread won't modify it while another is using it. You should not modify data structures shared between threads - instead create a new token_data as a local variable every time.
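Putting those pieces together, a rough sketch of the reworked script could look like the following. Having makerequest return the response text (rather than printing inside the worker) and collecting results via as_completed is my own choice here, not something from your original code:

import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
from time import time

url_list = ["https://www.google.com/api/"]

with open("tokens.txt", "r") as f:
    tokens = {line.strip() for line in f if line.strip()}

def makerequest(url, token):
    # build token_data locally so threads never share mutable state
    token_data = {"token": token}
    response = requests.post(url, stream=True, data=token_data)
    return response.text

start = time()
futures = []
with ThreadPoolExecutor(max_workers=20) as executor:
    for url in url_list:
        for token in tokens:
            futures.append(executor.submit(makerequest, url, token))
    for future in as_completed(futures):
        print(future.result())

print(f'Time taken: {time() - start}')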
I am working with a Slack slash command (Python code is running behind it). It works fine, but it gives this error:
This slash command experienced a problem: 'Timeout was reached' (error detail provided only to team owning command).
How can I avoid this?
According to the Slack slash command documentation, you need to respond within 3000 ms (three seconds). If your command takes longer than that, you get the Timeout was reached error. Your code won't stop running, but the user won't get any response to their command.
Three seconds is fine for a quick thing where your command has instant access to data, but might not be long enough if you're calling out to external APIs or doing something complicated. If you do need to take longer, then see the Delayed responses and multiple responses section of the documentation:
Validate the request is okay.
Return a 200 response immediately, maybe something along the lines of {'text': 'ok, got that'}
Go and perform the actual action you want to do.
In the original request, you get passed a unique response_url parameter. Make a POST request to that URL with your follow-up message:
Content-type needs to be application/json
With the body as a JSON-encoded message: {'text': 'all done :)'}
You can return ephemeral or in-channel responses, and add attachments, just as with the immediate approach
According to the docs, "you can respond to a user commands up to 5 times within 30 minutes of the user's invocation".
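As a rough sketch of the delayed-response step, assuming you use the requests library and already have the response_url from the original payload (the message text here is just a placeholder):

import json
import requests

def send_delayed_response(response_url, text):
    # response_url comes from the original slash-command payload;
    # Slack expects a JSON body with Content-type: application/json
    payload = {'response_type': 'in_channel', 'text': text}
    requests.post(response_url,
                  data=json.dumps(payload),
                  headers={'Content-type': 'application/json'})

# e.g. once the slow work has finished:
# send_delayed_response(response_url, 'all done :)')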
After dealing with this issue myself and having my Flask app hosted on Heroku I found that the simplest solution was to use threading. I followed the example from here:
https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xi-email-support
import json
import requests
from threading import Thread
from flask import Flask, request, jsonify

app = Flask(__name__)

def backgroundworker(somedata, response_url):
    # your task
    payload = {"text": "your task is complete",
               "username": "bot"}
    requests.post(response_url, data=json.dumps(payload))

@app.route('/appmethodaddress', methods=['POST', 'GET'])
def receptionist():
    response_url = request.form.get("response_url")
    somedata = {}
    thr = Thread(target=backgroundworker, args=[somedata, response_url])
    thr.start()
    return jsonify(message="working on your request")
All the slow heavy work is performed by the backgroundworker() function. My slash command points to https://myappaddress.com/appmethodaddress, where the receptionist() function takes the response_url of the received Slack message and passes it, alongside any other optional data, to backgroundworker(). As the process is now split, it returns the "working on your request" message to your Slack channel almost instantly, and upon completion backgroundworker() sends the second message, "your task is complete".
I too was facing this error frequently:
"Darn – that slash command didn't work (error message: Timeout was reached). Manage the command at slash-command"
I was writing a Slack slash-command "bot" on AWS Lambda that sometimes needed to perform slow operations (invoking other external APIs etc). The Lambda function would take greater than 3 seconds in some cases causing the Timeout was reached error from Slack.
I found @rcoup's excellent answer here and applied it in the context of AWS Lambda. The error doesn't appear any more.
I did this with two separate Lambda functions. One is a "dispatcher" or "receptionist" that greets the incoming Slack slash command with a "200 OK" and returns the simple "Ok, got that" type of message to the user. The other is the actual "worker" Lambda function that starts the long-ish operation asynchronously and posts the result of that operation to the Slack response_url later.
This is the dispatcher/receptionist Lambda function:
import json
import boto3

sns_client = boto3.client('sns')
MY_SNS_TOPIC_ARN = 'arn:aws:sns:region:account-id:topic-name'  # placeholder, use your own topic ARN

def lambda_handler(event, context):
    req_body = event['body']
    retval = {}
    try:
        # the param_map contains the 'response_url' that the worker will need to post back to later
        param_map = _formparams_to_dict(req_body)
        # command_list is a sequence of strings in the slash command such as "slashcommand weather pune"
        command_list = param_map['text'].split('+')

        # publish an SNS message to delegate the actual work to the worker lambda function
        message = {
            "param_map": param_map,
            "command_list": command_list
        }
        sns_response = sns_client.publish(
            TopicArn=MY_SNS_TOPIC_ARN,
            Message=json.dumps({'default': json.dumps(message)}),
            MessageStructure='json'
        )

        retval['text'] = "Ok, working on your slash command ..."
    except Exception as e:
        retval['text'] = '[ERROR] {}'.format(str(e))

    return retval


def _formparams_to_dict(req_body):
    """ Converts the incoming form params from Slack into a dictionary. """
    retval = {}
    for val in req_body.split('&'):
        k, v = val.split('=')
        retval[k] = v
    return retval
As you can see from the above, I didn't invoke the worker Lambda Function directly from the dispatcher (though this is possible). I chose to use AWS SNS to publish a message that the worker receives and processes.
Based on this StackOverflow answer, this is the better approach as it's non-blocking (asynchronous) and scalable. It was also easier to use SNS to decouple the two functions in the context of AWS Lambda; direct invocation is trickier for this use case.
Finally, here's how I consume the SNS event in my worker Lambda Function:
import json

def lambda_handler(event, context):
    message = json.loads(event['Records'][0]['Sns']['Message'])
    param_map = message['param_map']
    response_url = param_map['response_url']
    command_list = message['command_list']
    main_command = command_list[0].lower()

    # process the command as you need to and finally post results to `response_url`
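The final "post results to response_url" step isn't shown in the snippet above; a minimal sketch, assuming the requests library is packaged with the worker Lambda function, might look like this:

import json
import requests

def post_result_to_slack(response_url, result_text):
    # Slack expects a JSON body with Content-type: application/json
    requests.post(response_url,
                  data=json.dumps({'text': result_text}),
                  headers={'Content-type': 'application/json'})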
I tried the sample provided within the documentation of the requests library for python.
With async.map(rs), I get the response codes, but I want to get the content of each page requested. This, for example, does not work:
out = async.map(rs)
print out[0].content
Note
The below answer is not applicable to requests v0.13.0+. The asynchronous functionality was moved to grequests after this question was written. However, you could just replace requests with grequests below and it should work.
I've left this answer as is to reflect the original question which was about using requests < v0.13.0.
To do multiple tasks with async.map asynchronously you have to:
Define a function for what you want to do with each object (your task)
Add that function as an event hook in your request
Call async.map on a list of all the requests / actions
Example:
from requests import async
# If using requests > v0.13.0, use
# from grequests import async

urls = [
    'http://python-requests.org',
    'http://httpbin.org',
    'http://python-guide.org',
    'http://kennethreitz.com'
]

# A simple task to do to each response object
def do_something(response):
    print response.url

# A list to hold our things to do via async
async_list = []

for u in urls:
    # The "hooks = {..." part is where you define what you want to do
    #
    # Note the lack of parentheses following do_something, this is
    # because the response will be used as the first argument automatically
    action_item = async.get(u, hooks={'response': do_something})

    # Add the task to our list of things to do via async
    async_list.append(action_item)

# Do our list of things to do via async
async.map(async_list)
async is now an independent module: grequests.
See here: https://github.com/kennethreitz/grequests
And there: Ideal method for sending multiple HTTP requests over Python?
installation:
$ pip install grequests
usage:
build a stack:
import grequests
urls = [
'http://www.heroku.com',
'http://tablib.org',
'http://httpbin.org',
'http://python-requests.org',
'http://kennethreitz.com'
]
rs = (grequests.get(u) for u in urls)
send the stack
grequests.map(rs)
result looks like
[<Response [200]>, <Response [200]>, <Response [200]>, <Response [200]>, <Response [200]>]
grequests doesn't seem to set a limit on concurrent requests by default, i.e. when multiple requests are sent to the same server.
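If you do want to cap concurrency, grequests.map accepts a size argument (the gevent pool size), as far as I know; a quick sketch:

import grequests

urls = ['http://httpbin.org/delay/1'] * 20
rs = (grequests.get(u) for u in urls)

# at most 5 requests in flight at any time (size is the gevent pool size)
responses = grequests.map(rs, size=5)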
I tested both requests-futures and grequests. Grequests is faster but brings monkey patching and additional problems with dependencies. requests-futures is several times slower than grequests. I decided to write my own and simply wrapped requests into ThreadPoolExecutor and it was almost as fast as grequests, but without external dependencies.
import requests
import concurrent.futures

def get_urls():
    return ["url1", "url2"]

def load_url(url, timeout):
    return requests.get(url, timeout=timeout)

resp_ok = 0
resp_err = 0

with concurrent.futures.ThreadPoolExecutor(max_workers=20) as executor:
    future_to_url = {executor.submit(load_url, url, 10): url for url in get_urls()}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            resp_err = resp_err + 1
        else:
            resp_ok = resp_ok + 1
Unfortunately, as far as I know, the requests library is not equipped for performing asynchronous requests. You can wrap async/await syntax around requests, but that will make the underlying requests no less synchronous. If you want true async requests, you must use other tooling that provides it. One such solution is aiohttp (Python 3.5.3+). It works well in my experience using it with the Python 3.7 async/await syntax. Below I write three implementations of performing n web requests using
Purely synchronous requests (sync_requests_get_all) using the Python requests library
Synchronous requests (async_requests_get_all) using the Python requests library wrapped in Python 3.7 async/await syntax and asyncio
A truly asynchronous implementation (async_aiohttp_get_all) with the Python aiohttp library wrapped in Python 3.7 async/await syntax and asyncio
"""
Tested in Python 3.5.10
"""
import time
import asyncio
import requests
import aiohttp
from asgiref import sync
def timed(func):
"""
records approximate durations of function calls
"""
def wrapper(*args, **kwargs):
start = time.time()
print('{name:<30} started'.format(name=func.__name__))
result = func(*args, **kwargs)
duration = "{name:<30} finished in {elapsed:.2f} seconds".format(
name=func.__name__, elapsed=time.time() - start
)
print(duration)
timed.durations.append(duration)
return result
return wrapper
timed.durations = []
#timed
def sync_requests_get_all(urls):
"""
performs synchronous get requests
"""
# use session to reduce network overhead
session = requests.Session()
return [session.get(url).json() for url in urls]
#timed
def async_requests_get_all(urls):
"""
asynchronous wrapper around synchronous requests
"""
session = requests.Session()
# wrap requests.get into an async function
def get(url):
return session.get(url).json()
async_get = sync.sync_to_async(get)
async def get_all(urls):
return await asyncio.gather(*[
async_get(url) for url in urls
])
# call get_all as a sync function to be used in a sync context
return sync.async_to_sync(get_all)(urls)
#timed
def async_aiohttp_get_all(urls):
"""
performs asynchronous get requests
"""
async def get_all(urls):
async with aiohttp.ClientSession() as session:
async def fetch(url):
async with session.get(url) as response:
return await response.json()
return await asyncio.gather(*[
fetch(url) for url in urls
])
# call get_all as a sync function to be used in a sync context
return sync.async_to_sync(get_all)(urls)
if __name__ == '__main__':
# this endpoint takes ~3 seconds to respond,
# so a purely synchronous implementation should take
# little more than 30 seconds and a purely asynchronous
# implementation should take little more than 3 seconds.
urls = ['https://postman-echo.com/delay/3']*10
async_aiohttp_get_all(urls)
async_requests_get_all(urls)
sync_requests_get_all(urls)
print('----------------------')
[print(duration) for duration in timed.durations]
On my machine, this is the output:
async_aiohttp_get_all started
async_aiohttp_get_all finished in 3.20 seconds
async_requests_get_all started
async_requests_get_all finished in 30.61 seconds
sync_requests_get_all started
sync_requests_get_all finished in 30.59 seconds
----------------------
async_aiohttp_get_all finished in 3.20 seconds
async_requests_get_all finished in 30.61 seconds
sync_requests_get_all finished in 30.59 seconds
Maybe requests-futures is another choice.
from requests_futures.sessions import FuturesSession
session = FuturesSession()
# first request is started in background
future_one = session.get('http://httpbin.org/get')
# second request is started immediately
future_two = session.get('http://httpbin.org/get?foo=bar')
# wait for the first request to complete, if it hasn't already
response_one = future_one.result()
print('response one status: {0}'.format(response_one.status_code))
print(response_one.content)
# wait for the second request to complete, if it hasn't already
response_two = future_two.result()
print('response two status: {0}'.format(response_two.status_code))
print(response_two.content)
It is also recommended in the official documentation. If you don't want to involve gevent, it's a good option.
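If you need more parallelism, FuturesSession also takes a max_workers argument (it uses a ThreadPoolExecutor under the hood, as far as I know):

from requests_futures.sessions import FuturesSession

# allow up to 10 requests in flight at once
session = FuturesSession(max_workers=10)
futures = [session.get('http://httpbin.org/get?i={0}'.format(i)) for i in range(10)]
responses = [f.result() for f in futures]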
I have a lot of issues with most of the answers posted: they either use deprecated libraries that have been ported over with limited features, or rely on too much magic around the execution of the request, which makes error handling difficult. If they don't fall into one of those categories, they're third-party libraries or deprecated.
Some of the solutions work all right purely for HTTP requests, but fall short for any other kind of request, which is ludicrous. A highly customized solution is not necessary here.
Simply using Python's built-in asyncio library is sufficient to perform asynchronous requests of any type, and it provides enough flexibility for complex, use-case-specific error handling.
import asyncio
import requests

loop = asyncio.get_event_loop()

def do_thing(params):

    async def get_rpc_info_and_do_chores(id):
        # do things
        response = perform_grpc_call(id)
        do_chores(response)

    async def get_httpapi_info_and_do_chores(id):
        # do things
        response = requests.get(URL)
        do_chores(response)

    async_tasks = []
    for element in list(params.list_of_things):
        # `element` stands in for whatever id each call needs
        async_tasks.append(loop.create_task(get_rpc_info_and_do_chores(element)))
        async_tasks.append(loop.create_task(get_httpapi_info_and_do_chores(element)))

    loop.run_until_complete(asyncio.gather(*async_tasks))
How it works is simple. You're creating a series of tasks you'd like to run asynchronously, and then asking a loop to execute those tasks and exit upon completion. No extra libraries that might fall out of maintenance, and no missing functionality.
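One caveat worth noting: a blocking call such as requests.get inside a coroutine still blocks the event loop, so tasks like the ones above won't actually overlap unless the blocking work is pushed onto threads. A minimal sketch of that, using only asyncio and requests (the URLs are placeholders):

import asyncio
import requests

def fetch(url):
    # a plain blocking call; asyncio will run it in a worker thread
    return requests.get(url).status_code

async def main(urls):
    loop = asyncio.get_event_loop()
    tasks = [loop.run_in_executor(None, fetch, url) for url in urls]
    return await asyncio.gather(*tasks)

loop = asyncio.get_event_loop()
results = loop.run_until_complete(main(['http://httpbin.org/get', 'http://httpbin.org/ip']))
print(results)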
You can use httpx for that.
import asyncio
import httpx

async def get_async(url):
    async with httpx.AsyncClient() as client:
        return await client.get(url)

urls = ["http://google.com", "http://wikipedia.org"]

# Note that you need an async context to use `await`.
await asyncio.gather(*map(get_async, urls))
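If you're calling this from an ordinary synchronous script rather than an async context (such as a notebook), you can wrap the gather call in a coroutine and hand it to asyncio.run, something like:

async def main():
    # fire both requests concurrently and collect the responses
    return await asyncio.gather(*map(get_async, urls))

responses = asyncio.run(main())
print([r.status_code for r in responses])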
If you want a functional syntax, the gamla lib wraps this into get_async.
Then you can do
await gamla.map(gamla.get_async(10))(["http://google.com", "http://wikipedia.org"])
The 10 is the timeout in seconds.
(disclaimer: I am its author)
I know this has been closed for a while, but I thought it might be useful to promote another async solution built on the requests library.
from simple_requests import Requests

list_of_requests = ['http://moop.com', 'http://doop.com', ...]

for response in Requests().swarm(list_of_requests):
    print response.content
The docs are here: http://pythonhosted.org/simple-requests/
If you want to use asyncio, then requests-async provides async/await functionality for requests - https://github.com/encode/requests-async
DISCLAIMER: The following code creates a different thread for each function call.
This might be useful for some cases, as it is simpler to use. But know that it is not truly async; it gives the illusion of async by using multiple threads, even though the decorator suggests that.
You can use the following decorator to run a callback once the function has finished executing; the callback must handle the processing of the data returned by the function.
Please note that after the function is decorated it will return a Future object.
import asyncio

## Decorator implementation of async runner !!
def run_async(callback, loop=None):
    if loop is None:
        loop = asyncio.get_event_loop()

    def inner(func):
        def wrapper(*args, **kwargs):
            def __exec():
                out = func(*args, **kwargs)
                callback(out)
                return out

            return loop.run_in_executor(None, __exec)

        return wrapper

    return inner
Example of implementation:
import requests

urls = ["https://google.com", "https://facebook.com", "https://apple.com", "https://netflix.com"]
loaded_urls = []  # OPTIONAL, used for showing in realtime which urls are loaded !!

def _callback(resp):
    print(resp.url)
    print(resp)
    loaded_urls.append((resp.url, resp))  # OPTIONAL, used for showing in realtime which urls are loaded !!

# Must provide a callback function, callback func will be executed after the func completes execution
# Callback function will accept the value returned by the function.
@run_async(_callback)
def get(url):
    return requests.get(url)

for url in urls:
    get(url)
If you wish to see which URLs are loaded in real time, you can add the following code at the end as well:
while True:
    print(loaded_urls)
    if len(loaded_urls) == len(urls):
        break
from threading import Thread

threads = list()

for requestURI in requests:
    t = Thread(target=self.openURL, args=(requestURI,))
    t.start()
    threads.append(t)

for thread in threads:
    thread.join()

...

def openURL(self, requestURI):
    o = urllib2.urlopen(requestURI, timeout=600)
    o...
I second the suggestion above to use HTTPX, but I often use it in a different way so am adding my answer.
I personally use asyncio.run (introduced in Python 3.7) rather than asyncio.gather and also prefer the aiostream approach, which can be used in combination with asyncio and httpx.
As in this example I just posted, this style is helpful for processing a set of URLs asynchronously even despite the (common) occurrence of errors. I particularly like how that style makes it clear where the response processing occurs, and how it eases error handling (which I find async calls tend to need more of).
It's easier to post a simple example of just firing off a bunch of requests asynchronously, but often you also want to handle the response content (compute something with it, perhaps with reference to the original object that the requested URL relates to).
The core of that approach looks like:
async with httpx.AsyncClient(timeout=timeout) as session:
    ws = stream.repeat(session)
    xs = stream.zip(ws, stream.iterate(urls))
    ys = stream.starmap(xs, fetch, ordered=False, task_limit=20)
    process = partial(process_thing, things=things, pbar=pbar, verbose=verbose)
    zs = stream.map(ys, process)
    return await zs
where:
process_thing is an async response content handling function
things is the input list (which the urls generator of URL strings came from), e.g. a list of objects/dictionaries
pbar is a progress bar (e.g. tqdm.tqdm) [optional but useful]
All of that goes in an async function, async_fetch_urlset, which is then run by calling a synchronous 'top-level' function (named e.g. fetch_things) that runs the coroutine [this is what's returned by an async function] and manages the event loop:
def fetch_things(urls, things, pbar=None, verbose=False):
    return asyncio.run(async_fetch_urlset(urls, things, pbar, verbose))
Since a list passed as input (here it's things) can be modified in place, you effectively get output back (as we're used to from synchronous function calls).
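The fetch coroutine referenced in the stream.starmap call isn't shown above; a minimal sketch, assuming it receives the session and URL from the zipped stream and returns something for process_thing to consume, might be:

async def fetch(session, url):
    # return the URL alongside the parsed body so the processing step
    # knows which input each response belongs to
    response = await session.get(url)
    response.raise_for_status()
    return url, response.json()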
I have been using python requests for async calls against github's gist API for some time.
For an example, see the code here:
https://github.com/davidthewatson/flasgist/blob/master/views.py#L60-72
This style of python may not be the clearest example, but I can assure you that the code works. Let me know if this is confusing to you and I will document it.
I have also tried some things using the asynchronous methods in Python, however I have had much better luck using Twisted for asynchronous programming. It has fewer problems and is well documented. Here is a link to something similar to what you are trying, in Twisted.
http://pythonquirks.blogspot.com/2011/04/twisted-asynchronous-http-request.html
None of the answers above helped me, because they assume that you have a predefined list of requests, while in my case I need to be able to listen for requests and respond asynchronously (similar to how it works in Node.js).
import grequests

def handle_finished_request(r, **kwargs):
    print(r)

def main():
    while True:
        address = listen_to_new_msg()  # based on your server

        # schedule async requests and run 'handle_finished_request' on response
        req = grequests.get(address, timeout=1, hooks=dict(response=handle_finished_request))
        job = grequests.send(req)  # does not block! for more info see https://stackoverflow.com/a/16016635/10577976

main()
The handle_finished_request callback is called when a response is received. Note: for some reason a timeout (or no response) does not trigger an error here.
This simple loop can trigger async requests similarly to how it would work in a Node.js server.