Multithread HTTPS requests in Python

I'm trying to multi-thread web requests in Python for web scraping. I want to send multiple requests to the same website using multi-threading, but the time it takes for the script to complete is the same whether or not I use multi-threading.
This is the code that I'm using:
import queue
import urllib.request
from threading import Thread

def perform_web_requests(addresses, no_workers):
    class Worker(Thread):
        def __init__(self, request_queue):
            Thread.__init__(self)
            self.queue = request_queue
            self.results = []

        def run(self):
            while True:
                content = self.queue.get()
                if content == "":
                    break
                request = urllib.request.Request(content)
                response = urllib.request.urlopen(request)
                self.results.append(response.read())
                self.queue.task_done()

    # Create queue and add addresses
    q = queue.Queue()
    for url in addresses:
        q.put(url)

    # Workers keep working till they receive an empty string
    for _ in range(no_workers):
        q.put("")

    # Create workers and add to the queue
    workers = []
    for _ in range(no_workers):
        worker = Worker(q)
        worker.start()
        workers.append(worker)

    # Join workers to wait till they finished
    for worker in workers:
        worker.join()

    # Combine results from all workers
    r = []
    for worker in workers:
        r.extend(worker.results)
    return r

urls = ['https://google.com']
i = 0
while i < 100:
    results = perform_web_requests(urls, 50)
    i += 1
    print(i)

It appears urllib does not support multi-threading. Use urllib3:
https://github.com/urllib3/urllib3
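A minimal sketch of the same worker-queue pattern using urllib3, assuming a single shared PoolManager (which urllib3 documents as thread-safe); the URL list and worker count below are placeholders, not values from the question:

import queue
from threading import Thread

import urllib3

def fetch_all(addresses, no_workers=10):
    http = urllib3.PoolManager()  # shared, thread-safe connection pool
    q = queue.Queue()
    results = []

    def worker():
        while True:
            url = q.get()
            if url is None:       # sentinel: stop this worker
                q.task_done()
                break
            resp = http.request("GET", url)
            results.append(resp.data)
            q.task_done()

    threads = [Thread(target=worker) for _ in range(no_workers)]
    for t in threads:
        t.start()
    for url in addresses:
        q.put(url)
    for _ in threads:
        q.put(None)
    for t in threads:
        t.join()
    return results

# e.g. fetch_all(['https://google.com'] * 100, no_workers=10)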

Related

processing very large text files in parallel using multiprocessing and threading

I have found several other questions that touch on this topic but none that are quite like my situation.
I have several very large text files (3+ gigabytes in size).
I would like to process them (say 2 documents) in parallel using multiprocessing. As part of my processing (within a single process) I need to make an API call, and because of this I would like each process to have its own threads to run asynchronously.
I have come up with a simplified example (I have commented the code to try to explain what I think it should be doing):
import multiprocessing
from threading import Thread
import threading
from queue import Queue
import time

def process_huge_file(*, file_, batch_size=250, num_threads=4):
    # create an APICaller instance for each process that has its own Queue
    api_call = APICaller()
    batch = []
    # create threads that will run asynchronously to make API calls
    # I expect these to immediately block since there is nothing in the Queue
    # (which is what api_call.run depends on to make a call)
    threads = []
    for i in range(num_threads):
        thread = Thread(target=api_call.run)
        threads.append(thread)
        thread.start()
    for thread in threads:
        thread.join()
    ####
    # start processing the file line by line
    for line in file_:
        # if we are at our batch size, add the batch to the api_call to let
        # the threads do their api calling
        if i % batch_size == 0:
            api_call.queue.put(batch)
        else:
            # add fake line to batch
            batch.append(fake_line)

class APICaller:
    def __init__(self):
        # thread safe queue to feed the threads which point at instances
        # of these APICaller objects
        self.queue = Queue()

    def run(self):
        print("waiting for something to do")
        self.queue.get()
        print("processing item in queue")
        time.sleep(0.1)
        print("finished processing item in queue")

if __name__ == "__main__":
    # fake docs
    fake_line = "this is a fake line of some text"
    # two fake docs with line length == 1000
    fake_docs = [[fake_line] * 1000 for i in range(2)]
    ####
    num_processes = 2
    procs = []
    for idx, doc in enumerate(fake_docs):
        proc = multiprocessing.Process(target=process_huge_file, kwargs=dict(file_=doc))
        proc.start()
        procs.append(proc)
    for proc in procs:
        proc.join()
As the code is now, "waiting for something to do" prints 8 times (which makes sense: 4 threads per process) and then it stops, or "deadlocks", which is not what I expect. I expect it to start sharing time with the threads as soon as I start putting items in the Queue, but the code does not appear to make it that far. I would ordinarily step through the code to find the hang-up, but I still don't have a solid understanding of how best to debug with threads (another topic for another day).
In the meantime, can someone help me figure out why my code is not doing what it should be doing?
I have made a few adjustments and additions and the code appears to do what it is supposed to now. The main adjustments are: adding a CloseableQueue class (from Brett Slatkin's Effective Python, Item 55), and ensuring that I call close and join on the queue so that the threads properly exit. Full code with these changes below:
import multiprocessing
from threading import Thread
import threading
from queue import Queue
import time

from concurrency_utils import CloseableQueue

def sync_process_huge_file(*, file_, batch_size=250):
    batch = []
    for idx, line in enumerate(file_):
        # do processing on the text
        if idx % batch_size == 0:
            time.sleep(0.1)
            batch = []
            # api_call.queue.put(batch)
        else:
            computation = 0
            for i in range(100000):
                computation += i
            batch.append(line)

def process_huge_file(*, file_, batch_size=250, num_threads=4):
    api_call = APICaller()
    batch = []
    # api call threads
    threads = []
    for i in range(num_threads):
        thread = Thread(target=api_call.run)
        threads.append(thread)
        thread.start()
    for idx, line in enumerate(file_):
        # do processing on the text
        if idx % batch_size == 0:
            api_call.queue.put(batch)
        else:
            computation = 0
            for i in range(100000):
                computation += i
            batch.append(line)
    # one close() per worker so every thread sees a sentinel and exits
    for _ in threads:
        api_call.queue.close()
    api_call.queue.join()
    for thread in threads:
        thread.join()

class APICaller:
    def __init__(self):
        self.queue = CloseableQueue()

    def run(self):
        for item in self.queue:
            print("waiting for something to do")
            print("processing item in queue")
            time.sleep(0.1)
            print("finished processing item in queue")
        print("exiting run")

if __name__ == "__main__":
    # fake docs
    fake_line = "this is a fake line of some text"
    # two fake docs with line length == 1000
    fake_docs = [[fake_line] * 10000 for i in range(2)]
    ####
    time_s = time.time()
    num_processes = 2
    procs = []
    for idx, doc in enumerate(fake_docs):
        proc = multiprocessing.Process(target=process_huge_file, kwargs=dict(file_=doc))
        proc.start()
        procs.append(proc)
    for proc in procs:
        proc.join()
    time_e = time.time()
    print(f"took {time_e - time_s} ")

# concurrency_utils.py
class CloseableQueue(Queue):
    SENTINEL = object()

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def close(self):
        self.put(self.SENTINEL)

    def __iter__(self):
        while True:
            item = self.get()
            try:
                if item is self.SENTINEL:
                    return  # exit thread
                yield item
            finally:
                self.task_done()
As expected, this is a great speedup over running synchronously: 120 seconds vs. 50 seconds.

How to make requests while program in infinite loop with multiprocessing

I have two functions which are required to run at the same time. read_card needs to run in an infinite loop, waiting for new cards (it is actually an Nrf reader) and adding some string to a queue; send_data is supposed to get values from the queue and send them to the server via the requests library. Everything works when I do not use multiprocessing, but I need concurrency, I guess.
Here are my two functions.
def read_card(reader, configs):
    print("First started")
    while True:
        authorized_uid = reader.is_granted(reader.read())
        print("Waiting for card")
        # TODO: If not authorized in AccessList.txt look to the server
        if authorized_uid is not None:
            print(authorized_uid)
            open_door()
            check_model = CheckModel(configs.DeviceSerialNumber, authorized_uid)
            message_helper.put_message(check_model)

def send_data(sender):
    print("Second started")
    while True:
        message_model = message_helper.get_message()
        if message_model is not None:
            sender.send_message(message_model)
Here is how I call main
def main():
    download_settings()
    create_folders()
    settings = read_settings()
    accessList = get_user_list(settings)
    configure_scheduler(settings)
    message_sender = MessageSender(client.check, client.bulk)
    reader_process = multiprocessing.Process(name="reader_loop", target=read_card, args=(Reader(accessList, entryLogger), configs,))
    message_process = multiprocessing.Process(name="message_loop", target=send_data, args=(message_sender,))
    reader_process.start()
    message_process.start()

if __name__ == '__main__':
    main()
And these are for debugging: I added prints in put_message and send_message, which belong to different classes.
def send_message(self, model):
    print(model)
    return self.checkClient.check(model)

def put_message(self, message):
    print(message)
    self.put_to_queue(self.queue, message)
    self.put_to_db(message)
I expect to see some object names in the terminal, but I only see the output below. Also, the reader does not work.
First started
Second started
Which part am I doing wrong?
Use a Queue to communicate between processes. Then when you read a card inside reader create a new job and push it into the queue, then pop this job inside the processor and send the request.
Here's a proof of concept:
from datetime import datetime
from multiprocessing import Process, Queue
from random import random
from time import sleep

import requests

def reader(q: Queue):
    while True:
        # create a job
        job = {'date': datetime.now().isoformat(), 'number': random()}
        q.put(job)
        # use a proper logger instead of printing,
        # otherwise you'll get mangled output!
        print('Enqueued new job', job)
        sleep(5)

def client(q: Queue):
    while True:
        # wait for a new job
        job = q.get()
        res = requests.post(url='https://httpbin.org/post', data=job)
        res.raise_for_status()
        json = res.json()
        print(json['form'])

if __name__ == '__main__':
    q = Queue()
    reader_proc = Process(name='reader', target=reader, args=(q,))
    client_proc = Process(name='client', target=client, args=(q,))
    procs = [reader_proc, client_proc]
    for p in procs:
        print(f'{p.name} started')
        p.start()
    for p in procs:
        p.join()
which prints:
reader started
client started
Enqueued new job {'date': '2019-07-01T15:51:53.100395', 'number': 0.7659293922700549}
{'date': '2019-07-01T15:51:53.100395', 'number': '0.7659293922700549'}
Enqueued new job {'date': '2019-07-01T15:51:58.116020', 'number': 0.14306347124900576}
{'date': '2019-07-01T15:51:58.116020', 'number': '0.14306347124900576'}

Multi-threading in Python: Getting stuck at last thread

I have a strange situation and cannot figure it out after lots of trial and error. I am using multi-threading (10 threads) for reading URLs (100 of them) and it works fine in most cases, but in some situations it gets stuck at the last thread. I waited to see if it would return, and it took a long time (1050 seconds), whereas the other nine threads returned within 25 seconds. It shows something is wrong with my code, but I can't figure out what. Any ideas?
Note 1: It happens for both daemon and non-daemon threads.
Note 2: The number of URLs and threads changes. I tried different numbers of URLs from 10-100 and various thread counts from 5-50.
Note 3: The URLs are completely different most of the time.
import urllib2
import Queue
import threading
from goose import Goose
input_queue = Queue.Queue()
result_queue = Queue.Queue()
Thread Worker:
def worker(input_queue, result_queue):
    queue_full = True
    while queue_full:
        try:
            url = input_queue.get(False)
            # read the url using urllib2 and goose
            # process it
            result_queue.put(updated_value)
        except Queue.Empty:
            queue_full = False
Main process:
for url in urls:
    input_queue.put(url)

thread_count = 5
for t in range(thread_count):
    t = threading.Thread(target=worker, args=(input_queue, result_queue))
    t.start()

for url in urls:
    url = result_queue.get()  # updates url
The process gets blocked at the last result_queue.get() call.
NOTE: I am more interested in what I am doing wrong here, in case someone can point that out, because I tend to think that I wrote correct code, but apparently that's not the case.
You can use ThreadPoolExecutor from concurrent.futures.
from concurrent.futures import ThreadPoolExecutor

import requests

MAX_WORKERS = 50

def worker(url):
    response = requests.get(url)
    return response.content

# urls: the list of URLs from the question
with ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
    results = executor.map(worker, urls)
    for result in results:
        print(result)
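A variant of the same idea, not from the original answer, is submit() plus as_completed() with a per-request timeout, so a single stalled URL is less likely to hold up the whole batch; the 10-second timeout is an arbitrary example:

from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

def fetch_all(urls, max_workers=50):
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # map each future back to its URL so failures can be reported per URL
        futures = {executor.submit(requests.get, url, timeout=10): url for url in urls}
        for future in as_completed(futures):
            url = futures[future]
            try:
                results[url] = future.result().content
            except Exception as exc:
                results[url] = exc  # a failed or timed-out URL no longer blocks the rest
    return results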
For example, I take the URLs as a list of numbers:
import urllib2
import Queue
import threading
#from goose import Goose

input_queue = Queue.Queue()
result_queue = Queue.Queue()

def worker(input_queue, result_queue):
    while not input_queue.empty():
        try:
            url = input_queue.get(False)
            updated_value = int(url) * 9
            result_queue.put(updated_value)
        except Queue.Empty:
            pass

urls = [1, 2, 3, 4, 5, 6, 7, 8, 9]
for url in urls:
    input_queue.put(url)

thread_count = 5
threads = []
for i in range(thread_count):
    t = threading.Thread(target=worker, args=(input_queue, result_queue))
    t.start()
    threads.append(t)
# join all workers, not just the last one started
for t in threads:
    t.join()

for url in urls:
    try:
        url = result_queue.get()
        print url
    except Queue.Empty:
        pass
Output
9
18
27
36
45
54
63
72
81

subprocess does not return control to main process after finishing

I have a Python application where I use processes for computing a classification. For communication the processes use Queues. Everything works fine except that after all sub-processes are done, the main process does not get control back. So, as I understand it, the sub-processes did not terminate. But why?
#!/usr/bin/python
from wraper import *
from multiprocessing import Process, Lock, Queue

def start_threads(data, counter, threads_num, reporter):
    threads = []
    d_lock = Lock()
    c_lock = Lock()
    r_lock = Lock()
    dq = Queue()
    rq = Queue()
    cq = Queue()
    dq.put(data)
    rq.put(reporter)
    cq.put(counter)
    for i in range(threads_num):
        t = Process(target=mule, args=(dq, cq, rq, d_lock, c_lock, r_lock))
        threads.append(t)
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return rq.get()

def mule(dq, cq, rq, d_lock, c_lock, r_lock):
    c_lock.acquire()
    counter = cq.get()
    can_continue = counter.next_ok()
    idx = counter.get_features_indeces()
    cq.put(counter)
    c_lock.release()
    while can_continue:
        d_lock.acquire()
        data = dq.get()
        labels, features = data.get_features(idx)
        dq.put(data)
        d_lock.release()
        accuracy = test_classifier(labels, features)
        r_lock.acquire()
        reporter = rq.get()
        reporter.add_result(accuracy[0], idx)
        rq.put(reporter)
        r_lock.release()
        c_lock.acquire()
        counter = cq.get()
        can_continue = counter.next_ok()
        idx = counter.get_features_indeces()
        cq.put(counter)
        c_lock.release()
    print('done')
It prints 'done' for each process, showing that each did its job, and that's it...
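One documented multiprocessing pitfall worth checking in a situation like this: a child process that has put items on a multiprocessing.Queue will not exit until the queue's feeder thread has flushed them, so joining a process before draining the queue can hang. A minimal sketch of the drain-before-join ordering, with placeholder names unrelated to the code above:

from multiprocessing import Process, Queue

def produce(q):
    # each child pushes one (possibly large) result and returns
    q.put([0] * 1000000)

if __name__ == '__main__':
    q = Queue()
    procs = [Process(target=produce, args=(q,)) for _ in range(4)]
    for p in procs:
        p.start()
    # drain the queue BEFORE joining, otherwise join() may block forever
    results = [q.get() for _ in procs]
    for p in procs:
        p.join()
    print(len(results), 'results collected')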

Threads not stopping in Python

The purpose of my program is to download a file with threads. I define a unit size and use len/unit threads, where len is the length of the file that is going to be downloaded.
Using my program the file can be downloaded, but the threads do not stop. I can't find the reason why.
This is my code...
#! /usr/bin/python
import urllib2
import threading
import os
from time import ctime

class MyThread(threading.Thread):
    def __init__(self, func, args, name=''):
        threading.Thread.__init__(self);
        self.func = func;
        self.args = args;
        self.name = name;
    def run(self):
        apply(self.func, self.args);

url = 'http://ubuntuone.com/1SHQeCAQWgIjUP2945hkZF';
request = urllib2.Request(url);
response = urllib2.urlopen(request);
meta = response.info();
response.close();
unit = 1000000;
flen = int(meta.getheaders('Content-Length')[0]);
print flen;
if flen % unit == 0:
    bs = flen / unit;
else:
    bs = flen / unit + 1;
blocks = range(bs);
cnt = {};
for i in blocks:
    cnt[i] = i;

def getStr(i):
    try:
        print 'Thread %d start.' % (i,);
        fout = open('a.zip', 'wb');
        fout.seek(i * unit, 0);
        if (i + 1) * unit > flen:
            request.add_header('Range', 'bytes=%d-%d' % (i * unit, flen - 1));
        else:
            request.add_header('Range', 'bytes=%d-%d' % (i * unit, (i + 1) * unit - 1));
        #opener = urllib2.build_opener();
        #buf = opener.open(request).read();
        resp = urllib2.urlopen(request);
        buf = resp.read();
        fout.write(buf);
    except BaseException:
        print 'Error';
    finally:
        #opener.close();
        fout.flush();
        fout.close();
        del cnt[i];
        # filelen = os.path.getsize('a.zip');
        print 'Thread %d ended.' % (i),
        print cnt;
        # print 'progress : %4.2f' % (filelen * 100.0 / flen,), '%';

def main():
    print 'download at:', ctime();
    threads = [];
    for i in blocks:
        t = MyThread(getStr, (blocks[i],), getStr.__name__);
        threads.append(t);
    for i in blocks:
        threads[i].start();
    for i in blocks:
        # print 'this is the %d thread;' % (i,);
        threads[i].join();
    #print 'size:', os.path.getsize('a.zip');
    print 'download done at:', ctime();

if __name__ == '__main__':
    main();
Could someone please help me understand why the threads aren't stopping?
I can't really address your code example because it is quite messy and hard to follow, but a potential reason you are seeing the threads not end is that a request will stall out and never finish. urllib2 allows you to specify timeouts for how long you will allow the request to take.
What I would recommend for your own code is that you split your work up into a queue, start a fixed number of threads (instead of a variable number), and let the worker threads pick up work until it is done. Make the HTTP requests have a timeout. If the timeout expires, try again or put the work back into the queue.
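For reference, a tiny sketch of the timeout parameter accepted by urllib2.urlopen (the URL and the 10-second value are placeholders):

import urllib2

try:
    # raises an error (URLError or socket.timeout) if the server stalls past 10 s
    resp = urllib2.urlopen('http://example.com/some/file', timeout=10)
    data = resp.read()
except Exception:
    # on timeout: retry, or put the work item back on the queue
    data = None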
Here is a generic example of how to use a queue, a fixed number of workers and a sync primitive between them:
import threading
import time
from Queue import Queue

def worker(queue, results, lock):
    local_results = []
    while True:
        val = queue.get()
        if val is None:
            break
        # pretend to do work
        time.sleep(.1)
        local_results.append(val)
    with lock:
        results.extend(local_results)
    print threading.current_thread().name, "Done!"

num_workers = 4
threads = []
queue = Queue()
lock = threading.Lock()
results = []

for i in xrange(100):
    queue.put(i)

for _ in xrange(num_workers):
    # Use None as a sentinel to signal the threads to end
    queue.put(None)
    t = threading.Thread(target=worker, args=(queue, results, lock))
    t.start()
    threads.append(t)

for t in threads:
    t.join()

print sorted(results)
print "All done"
