I need to obtain properties from a web service for a large list of products (~25,000) and this is a very time-sensitive operation (ideally I need this to execute in just a few seconds). I coded this first using a for loop as a proof of concept, but it's taking 1.25 hours. I'd like to vectorize this code and execute the HTTP requests in parallel using a GPU on Google Colab. I've removed many of the unnecessary details, but it's important to note that the products and their web service URLs are stored in a DataFrame.
Will this be faster to execute on a GPU? Or should I just use multiple threads on a CPU?
What is the best way to implement this? And how can I save the results from the parallel processes to the results DataFrame (all_product_properties) without running into concurrency problems?
Each product has multiple properties (key-value pairs) that I'm obtaining from the JSON response, but the product_id is not included in the JSON response so I need to add the product_id to the DataFrame.
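For illustration, assume each response is shaped roughly like the following; the exact keys and values are made up for this sketch, and only the product_properties key comes from the code below:

# hypothetical shape of one product's JSON response (illustration only)
http_response_json = {
    "product_properties": [
        {"property_name": "colour", "property_value": "red"},
        {"property_name": "weight", "property_value": "1.2kg"},
    ]
    # note: no product_id anywhere in the payload
}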
import json

import pandas as pd
import requests

#DataFrame containing string column of urls
urls = pd.DataFrame(["www.url1.com", "www.url2.com", ..., "www.url3.com"], columns=["url"])

#initialize empty dataframe to store properties for all products
all_product_properties = pd.DataFrame(columns=["product_id", "property_name", "property_value"])

for i in range(len(urls)):
    curr_url = urls.loc[i, "url"]

    try:
        http_response = requests.request("GET", curr_url)
        if http_response is not None:
            http_response_json = json.loads(http_response.text)

            #extract product properties from JSON response
            product_properties_json = http_response_json['product_properties']
            curr_product_properties_df = pd.json_normalize(product_properties_json)

            #add product id since it's not returned in the JSON
            curr_product_properties_df["product_id"] = i

            #save current product properties to DataFrame containing all product properties
            all_product_properties = pd.concat([all_product_properties, curr_product_properties_df])

    except Exception as e:
        print(e)
GPUs probably will not help here since they are meant for accelerating numerical operations. However, since you are trying to parallelize HTTP requests which are I/O bound, you can use Python multithreading (part of the standard library) to reduce the time required.
In addition, concatenating pandas dataframes in a loop is a very slow operation (see: Why does concatenation of DataFrames get exponentially slower?). You can instead append your output to a list, and run just a single concat after the loop has concluded.
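For reference, applying just that change to the sequential loop looks roughly like this (a minimal sketch reusing the variable names from the question):

frames = []

for i in range(len(urls)):
    curr_url = urls.loc[i, "url"]
    http_response = requests.request("GET", curr_url)
    curr_product_properties_df = pd.json_normalize(json.loads(http_response.text)['product_properties'])
    curr_product_properties_df["product_id"] = i
    frames.append(curr_product_properties_df)  # appending to a list is cheap

# a single concat at the end instead of one per iteration
all_product_properties = pd.concat(frames, ignore_index=True)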
Here's how I would implement your code w/ multithreading:
import concurrent.futures
import json
import threading
import time

import pandas as pd
import requests

thread_local = threading.local()


def get_session():
    # one Session per thread, since requests.Session is not guaranteed to be thread-safe
    if not hasattr(thread_local, "session"):
        thread_local.session = requests.Session()
    return thread_local.session


def download_site(indexed_url):
    # each work item is a (product_id, url) pair so the id can be attached to the output
    product_id, url = indexed_url
    session = get_session()
    try:
        with session.get(url) as response:
            http_response_json = json.loads(response.text)
            #extract product properties from JSON response
            product_properties_json = http_response_json['product_properties']
            curr_product_properties_df = pd.json_normalize(product_properties_json)
            #add product id since it's not returned in the JSON
            curr_product_properties_df["product_id"] = product_id
            print(f"Read {len(response.content)} from {url}")
            return curr_product_properties_df
    except Exception as e:
        print(e)
        return None


def download_all_sites(sites):
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        results = executor.map(download_site, enumerate(sites))
        # drop failed downloads so pd.concat only ever sees DataFrames
        return [df for df in results if df is not None]


if __name__ == "__main__":
    # Store URLs as a list; replace these examples with your real product URLs
    urls = ["https://www.jython.org", "http://olympus.realpython.org/dice"] * 10

    start_time = time.time()
    all_product_properties = download_all_sites(urls)
    all_product_properties = pd.concat(all_product_properties, ignore_index=True)
    duration = time.time() - start_time
    print(f"Downloaded {len(urls)} in {duration} seconds")
Reference: this RealPython article on multithreading and multiprocessing in Python: https://realpython.com/python-concurrency/
I am using python-gitlab to get a list of GitLab projects, returned as generators in batches of 100. If a project has a tag of "snow" I want to add it to a list that will get converted to a JSON object. Here is the code I have that does this:
gl_prj_list = gl_conn.projects.list(as_list=False)

for p in gl_prj_list:
    if "snow" in p.tag_list:
        prj = {"id": p.id}
        prj["name"] = p.path_with_namespace
        gl_data.append(prj)

return json.dumps(gl_data), 200, {'Content-Type': 'text/plain'}
So ultimately I want a result that might look like this: (only 2 of the 100 projects had the snow tag)
[{"id": 7077, "name": "robr/snow-cli"}, {"id": 4995, "name": "test/prod-deploy-iaas-spring-starter"}]
This works fine and all but seems a bit slow. The response time is usually between 3.5-5 seconds. And since I will have to do this over 10-20 batches I'd like to improve on the response time.
Is there a better way to check for the "snow" value in the tag_list attribute of the generator and return the result?
Assuming that the bottleneck is not the API call, you can use multiprocessing.Pool() for this.
from multiprocessing import Pool
import json

def f(p):
    if "snow" in p.tag_list:
        return {"id": p.id, "name": p.path_with_namespace}
    return False

gl_prj_list = gl_conn.projects.list(as_list=False)

with Pool(10) as pool:  # 10 processes in parallel (change this to the number of cores you have available)
    gl_data = pool.map(f, gl_prj_list)

gl_data = [i for i in gl_data if i]  # get rid of the False items
json.dumps(gl_data), 200, {'Content-Type': 'text/plain'}
If the bottleneck is the API call and you want to call the API multiple times, then add the call inside f() and use the same trick. You will call the API 10 times in parallel instead of sequentially.
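A minimal sketch of that variant, assuming the 10-20 batches are fetched by page number; the GitLab URL, token, and page range are placeholders, and each worker builds its own client because the connection object can't be shared across processes:

from multiprocessing import Pool

import gitlab

def fetch_page(page):
    # hypothetical connection details; adjust the URL/token to your instance
    gl = gitlab.Gitlab("https://gitlab.example.com", private_token="YOUR_TOKEN")
    projects = gl.projects.list(page=page, per_page=100)
    return [{"id": p.id, "name": p.path_with_namespace}
            for p in projects
            if "snow" in p.tag_list]

if __name__ == "__main__":
    with Pool(10) as pool:
        pages = pool.map(fetch_page, range(1, 21))  # 20 batches of 100 projects
    gl_data = [prj for page in pages for prj in page]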
My program basically has to get around 6000 items from the DB and call an external API for each item. This takes almost 30 minutes to complete. I thought of using threads here, where I could create multiple threads, split up the work, and reduce the time. So I came up with something like this. But I have two questions. First, how do I store the response from the API that is processed by the function?
api = externalApi()

for x in instruments:
    response = api.getProcessedItems(x.symbol, days, return_perc)
    if response > float(return_perc):
        return_response.append([x.trading_symbol, x.name, response])
So in the above example the for loop runs 6000 times (len(instruments) == 6000).
Now let's say I split the 6000 items into 2 * 3000 items and do something like this:
class externalApi:
    def handleThread(self, symbol, days, perc):
        # I call the external API and process the items
        # how do I store the processed data?
        pass

    def getProcessedItems(self, symbol, days, perc):
        _thread.start_new_thread(self.handleThread, (symbol, days, perc))
        _thread.start_new_thread(self.handleThread, (symbol, days, perc))
        return self.thread_response
Second, I am just starting out with threads, so it would be helpful to know whether this is the right thing to do to reduce the time here.
P.S.: Time is important here. I want to reduce it from 30 minutes to 1 minute.
I suggest using a worker-queue pattern like so...
You will have a queue of jobs; each worker takes a job from it, works on it, and puts the result on another queue. When all the workers are done, the result queue is read and the results are processed.
import Queue
import threading

def worker(pool, result_q):
    while True:
        job = pool.get()
        result = handle(job)  # handle the job (this is where you call the external API)
        result_q.put(result)
        pool.task_done()

q = Queue.Queue()
res_q = Queue.Queue()

for i in range(num_worker_threads):
    t = threading.Thread(target=worker, args=(q, res_q))
    t.setDaemon(True)
    t.start()

for job in jobs:
    q.put(job)

q.join()

while not res_q.empty():
    result = res_q.get()
    # do smth with result
The worker-queue pattern suggested in shahaf's answer works fine, but Python provides even higher-level abstractions in concurrent.futures, namely ThreadPoolExecutor, which will take care of the queueing and starting of threads for you:
from concurrent.futures import ThreadPoolExecutor
executor = ThreadPoolExecutor(max_workers=30)
responses = executor.map(process_item, (x.symbol for x in instruments))
The main complication with using executor.map() is that it can only map over one argument, meaning that there can be only one input to process_item (namely symbol).
However, if more arguments are needed, it is possible to define a new function which fixes all arguments but one. This can either be done manually or by using partial, found in functools:
from functools import partial
process_item = partial(api.handleThread, days=days, perc=return_perc)
Applying the ThreadPoolExecutor strategy to your current problem would then give a solution similar to:
from concurrent.futures import ThreadPoolExecutor
from functools import partial

class Instrument:
    def __init__(self, symbol, name):
        self.symbol = symbol
        self.name = name

instruments = [Instrument('SMB', 'Name'), Instrument('FNK', 'Funky')]

class externalApi:
    def handleThread(self, symbol, days, perc):
        # Call the external API and process the items
        # Example, to give something back:
        if symbol == 'FNK':
            return days*3
        else:
            return days

def process_item_generator(api, days, perc):
    return partial(api.handleThread, days=days, perc=perc)

days = 5
return_perc = 10
api = externalApi()
process_item = process_item_generator(api, days, return_perc)
executor = ThreadPoolExecutor(max_workers=30)
responses = executor.map(process_item, (x.symbol for x in instruments))
return_response = ([x.symbol, x.name, response]
                   for x, response in zip(instruments, responses)
                   if response > float(return_perc))
Here I have assumed that x.symbol is the same as x.trading_symbol and I have made a dummy implementation of your API call, to get some type of return value, but it should give a good idea of how to do this. Due to this, the code is a bit longer, but then again, it becomes a runnable example.
I have written a script that makes around 530 API calls, which I intend to run every 5 minutes. From these calls I store data to process in bulk later (prediction etc.).
The API has a limit of 500 requests per second. However, when running my code I am seeing 2 seconds per call (due to SSL, I believe).
How can I speed this up to enable me to run 500 requests within 5 minutes? The current time required renders the data I am collecting useless :(
Code:
def getsurge(lat, long):
    response = client.get_price_estimates(
        start_latitude=lat,
        start_longitude=long,
        end_latitude=-34.063676,
        end_longitude=150.815075
    )
    result = response.json.get('prices')
    return result

def writetocsv(database):
    database_writer = csv.writer(database)
    database_writer.writerow(HEADER)
    pool = Pool()
    # Open Estimate Database
    while True:
        for data in coordinates:
            line = data.split(',')
            long = line[3]
            lat = line[4][:-2]
            estimate = getsurge(lat, long)
            timechecked = datetime.datetime.now()

            for d in estimate:
                if d['display_name'] == 'TAXI':
                    database_writer.writerow([timechecked, [line[0], line[1]], d['surge_multiplier']])
                    database.flush()
                    print(timechecked, [line[0], line[1]], d['surge_multiplier'])
Is the API under your control? If so, create an endpoint which can give you all the data you need in one go.
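The idea is one batched request carrying every coordinate pair instead of 530 separate calls. A purely hypothetical sketch: the /price_estimates/batch endpoint, its payload, and coordinate_pairs are invented for illustration and don't exist in any real API:

import requests

# one request carrying every coordinate pair, instead of one request per pair
payload = {"queries": [{"start_latitude": lat, "start_longitude": long,
                        "end_latitude": -34.063676, "end_longitude": 150.815075}
                       for lat, long in coordinate_pairs]}

response = requests.post("https://api.example.com/price_estimates/batch", json=payload)
all_estimates = response.json()["prices"]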
I'm trying to load nodes (about 400) and relationships (about 800) from a Neo4j DB to create a force directed graph using D3. This is my get function (I'm using Tornado):
def get(self):
    query_string = "START r=rel(*) RETURN r"
    query = neo4j.CypherQuery(graph_db, query_string)
    results = query.execute().data

    start = set([r[0].start_node for r in results])
    end = set([r[0].end_node for r in results])
    nodes_to_keep = list(start.union(end))

    nodes = []
    for n in nodes_to_keep:
        nodes.append({
            "name": n['name'].encode('utf-8'),
            "group": n['type'].encode('utf-8'),
            "description": n['description'].encode('utf-8'),
            "node": int(n['node_id'])})

    #links
    links = []
    for r in results:
        links.append({"source": int(r[0].start_node['node_id']), "target": int(r[0].end_node['node_id'])})

    self.render(
        "index.html",
        page_title='My Page',
        page_heading='Sweet D3 Force Diagram',
        nodes=nodes,
        links=links,
    )
I'm thinking the expensive part is in the for n in nodes_to_keep: and for r in results: loops, since every time I get a property that's a trip to the server. Right?
What's the best way to accomplish this task?
The reason why the above process is taking so long is that every time I ask for a node property, I'm taking a trip to the server to fetch something out of the database. I was able to drastically reduce the time this process takes by simply modifying the Cypher query.
For instance, to get all nodes with relationships I used this query:
query_string = """MATCH (n)-[r]-(m)
RETURN n, n.node_id, n.name, n.type, n.description, m.node_id, m.name, m.type, m.description"""
query = neo4j.CypherQuery(graph_db, query_string)
results = query.execute().data
The results contain the information I need, so I just loop through the results to get the properties.
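A rough sketch of that loop, assuming the rows come back in the RETURN order above and support positional indexing (the dedup set is there because the undirected MATCH returns each relationship twice):

nodes_by_id = {}
links = []
seen = set()

for row in results:
    # row order: n, n.node_id, n.name, n.type, n.description,
    #            m.node_id, m.name, m.type, m.description
    n_id, m_id = int(row[1]), int(row[5])
    nodes_by_id[n_id] = {"name": row[2], "group": row[3], "description": row[4], "node": n_id}
    nodes_by_id[m_id] = {"name": row[6], "group": row[7], "description": row[8], "node": m_id}

    key = tuple(sorted((n_id, m_id)))
    if key not in seen:
        seen.add(key)
        links.append({"source": n_id, "target": m_id})

nodes = list(nodes_by_id.values())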
The takeaway is that you need to write your queries such that they get you the info you need the first time around.
I have a list of 100 ids, and I need to do a lookup for each one of them. The lookup takes approximately 3 seconds to run. Here is the sequential code that would be needed to run it:
ids = [102225077, 102225085, 102225090, 102225097, 102225105, ...]

for id in ids:
    run_updates(id)
I would like to run ten (10) of these concurrently at a time, using either gevent or multiprocessing. How would I do this? Here is what I tried with gevent, but it's quite slow:
def chunks(l, n):
    """ Yield successive n-sized chunks from l. """
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

ids = [102225077, 102225085, 102225090, 102225097, 102225105, ...]

if __name__ == '__main__':
    for list_of_ids in list(chunks(ids, 10)):
        jobs = [gevent.spawn(run_updates(id)) for id in list_of_ids]
        gevent.joinall(jobs, timeout=200)
What would be the correct way to split up the ids list and run ten at a time? I would even be open to using multiprocessing or gevent (not too familiar with either).
Doing it sequentially takes 364 seconds for 100 ids.
Using multiprocessing takes about 207 seconds on 100 ids, doing 5 at a time:
pool = Pool(processes=5)
pool.map(run_updates, list_of_apple_ids)
Using gevent takes somewhere in between the two:
jobs = [gevent.spawn(run_updates, apple_id) for apple_id in list_of_apple_ids]
Is there any way I can get better performance than the Pool.map? I have a pretty decent computer here with a fast internet connection, it should be able to do it much quicker...
Check out the grequests library. You can do something like:
import grequests

for list_of_ids in list(chunks(ids, 10)):
    urls = [''.join(('http://www.example.com/id?=', str(id))) for id in list_of_ids]
    requests = (grequests.get(url) for url in urls)
    responses = grequests.map(requests)
    for response in responses:
        print response.content
I know this breaks your model somewhat because you have your request encapsulated in a run_updates method, but I think it may be worth exploring nonetheless.
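If you would rather keep run_updates exactly as it is, a gevent pool gives a similar effect without manual chunking. A small sketch, assuming run_updates is network-bound and that the monkey patching runs before anything else creates sockets:

from gevent import monkey
monkey.patch_all()  # make the blocking sockets inside run_updates cooperative

from gevent.pool import Pool

pool = Pool(10)              # at most ten lookups in flight at a time
pool.map(run_updates, ids)   # blocks until every id has been processed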
from multiprocessing import Process
from random import random

ids = [random() for _ in range(100)]  # make some fake ids, whatever

def do_thing(arg):
    print arg  # Here's where you'd do the lookup

while ids:
    curs, ids = ids[:10], ids[10:]
    procs = [Process(target=do_thing, args=(c,)) for c in curs]
    for proc in procs:
        proc.start()   # start the ten processes in parallel
    for proc in procs:
        proc.join()    # wait for the batch to finish before taking the next ten
This is roughly how I'd do it, I guess.