Multi-threading in Python: Getting stuck at last thread

I have a strange situation that I cannot figure out after a lot of trial and error. I am using multi-threading (10 threads) to read URLs (100 of them), and it works fine in most cases, but in some situations it gets stuck at the last thread. I waited to see if it would return, and it took a very long time (1050 seconds), whereas the remaining nine threads returned within 25 seconds. It shows something is wrong with my code, but I can't figure out what. Any ideas?
Note1: It happens for both daemon and non-daemon threads.
Note2: The number of URLs and threads changes. I tried different numbers of URLs, from 10 to 100, and various thread counts, from 5 to 50.
Note3: Most of the time the URLs are completely different.
import urllib2
import Queue
import threading
from goose import Goose
input_queue = Queue.Queue()
result_queue = Queue.Queue()
Thread Worker:
def worker(input_queue, result_queue):
    queue_full = True
    while queue_full:
        try:
            url = input_queue.get(False)
            # read the url using urllib2 and goose
            # process it
            result_queue.put(updated_value)
        except Queue.Empty:
            queue_full = False
Main process:
for url in urls:
    input_queue.put(url)

thread_count = 5

for t in range(thread_count):
    t = threading.Thread(target=worker, args=(input_queue, result_queue))
    t.start()

for url in urls:
    url = result_queue.get()  # updates url
The process gets blocked at the last result_queue.get() call.
NOTE: I am more interested in what I am doing wrong here, in case someone can point that out, because I tend to think that I wrote correct code, but apparently that's not the case.
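A likely cause, given the structure above: the main loop calls result_queue.get() exactly once per URL, so if a worker raises anything other than Queue.Empty while fetching or processing a URL, it dies without putting a result on result_queue and the final get() blocks forever. A minimal sketch of a worker that always reports something, even on failure (fetch_and_process is a hypothetical stand-in for the urllib2/goose code):

def worker(input_queue, result_queue):
    while True:
        try:
            url = input_queue.get(False)
        except Queue.Empty:
            break
        try:
            updated_value = fetch_and_process(url)  # hypothetical urllib2 + goose step
        except Exception as e:
            updated_value = (url, 'error', e)       # report the failure instead of dying silently
        result_queue.put(updated_value)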

You can use ThreadPoolExecutor from concurrent.futures.
import requests
from concurrent.futures import ThreadPoolExecutor

MAX_WORKERS = 50

def worker(url):
    response = requests.get(url)
    return response.content

with ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
    results = executor.map(worker, urls)

for result in results:
    print(result)
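Since the original symptom was a single URL holding a thread for 1050 seconds, it is also worth passing a timeout so a stalled connection fails fast instead of blocking a worker; a small variation on the worker above (the 10-second value is an arbitrary choice):

def worker(url):
    try:
        response = requests.get(url, timeout=10)  # fail fast on a stalled connection
        return response.content
    except requests.RequestException as exc:
        return exc  # surface the error instead of hanging the pool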

For example, I take the URLs to be a list of numbers:
import urllib2
import Queue
import threading
#from goose import Goose

input_queue = Queue.Queue()
result_queue = Queue.Queue()

def worker(input_queue, result_queue):
    while not input_queue.empty():
        try:
            url = input_queue.get(False)
            updated_value = int(url) * 9
            result_queue.put(updated_value)
        except Queue.Empty:
            pass

urls = [1, 2, 3, 4, 5, 6, 7, 8, 9]

for url in urls:
    input_queue.put(url)

thread_count = 5
threads = []

for i in range(thread_count):
    t = threading.Thread(target=worker, args=(input_queue, result_queue))
    t.start()
    threads.append(t)

# join the threads only after they have all been started,
# otherwise each thread is joined before the next one starts
for t in threads:
    t.join()

for url in urls:
    try:
        url = result_queue.get()
        print url
    except Queue.Empty:
        pass
Output
9
18
27
36
45
54
63
72
81

Related

Multithread https requests in python

I'm trying to multi-thread web requests in Python for web scraping. I want to send multiple requests to the same website using multi-threading, but the time the script takes to complete is the same whether or not I use multi-threading.
This is the code that I'm using:
import queue
import urllib.request
from threading import Thread

def perform_web_requests(addresses, no_workers):

    class Worker(Thread):
        def __init__(self, request_queue):
            Thread.__init__(self)
            self.queue = request_queue
            self.results = []

        def run(self):
            while True:
                content = self.queue.get()
                if content == "":
                    break
                request = urllib.request.Request(content)
                response = urllib.request.urlopen(request)
                self.results.append(response.read())
                self.queue.task_done()

    # Create queue and add addresses
    q = queue.Queue()
    for url in addresses:
        q.put(url)

    # Workers keep working till they receive an empty string
    for _ in range(no_workers):
        q.put("")

    # Create workers and add them to the queue
    workers = []
    for _ in range(no_workers):
        worker = Worker(q)
        worker.start()
        workers.append(worker)

    # Join workers to wait till they finish
    for worker in workers:
        worker.join()

    # Combine results from all workers
    r = []
    for worker in workers:
        r.extend(worker.results)
    return r
urls = ['https://google.com']
i = 0
while i < 100:
    results = perform_web_requests(urls, 50)
    i += 1
    print(i)
It appears urllib does not support multi-threading. Use urllib3:
https://github.com/urllib3/urllib3
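A minimal sketch of that suggestion, assuming the same perform_web_requests structure as above but with a single shared urllib3.PoolManager doing the fetching (the manager is thread-safe and reuses connections to the same host, which is where most of the speed-up comes from):

import urllib3

# one PoolManager shared by every worker thread
http = urllib3.PoolManager(maxsize=50)

def fetch(url):
    response = http.request('GET', url)
    return response.data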

Combine python thread results into one list

I am trying to modify the solution shown here: What is the fastest way to send 100,000 HTTP requests in Python?, except that instead of checking the header status I am making an API request that returns a dictionary, and I would like the end result of all of these API requests to be a list of all of the dictionaries.
Here is my code -- consider that api_calls is a list of the URLs to open for the JSON requests...
import json
import sys
from urllib2 import urlopen
from threading import Thread
from Queue import Queue

concurrent = 200

def doWork():
    while True:
        url = q.get()
        result = makeRequest(url[0])
        doSomethingWithResult(result, url)
        q.task_done()

def makeRequest(ourl):
    try:
        api_call = urlopen(ourl).read()
        result = json.loads(api_call)
        return result, ourl
    except:
        return "error", ourl

def doSomethingWithResult(result, url):
    print(url, result)

q = Queue(concurrent * 2)
for i in range(concurrent):
    t = Thread(target=doWork)
    t.daemon = True
    t.start()

try:
    for url in api_calls:
        q.put(url)
    q.join()
except KeyboardInterrupt:
    sys.exit(1)
Like the linked example, this currently prints the url, result pair on each line successfully. What I would instead like to do is add the (url, result) to a list in each thread and then, at the end, join them into one master list. I cannot figure out how to keep this master list and join the results at the end. Can anybody help with what I should modify in doSomethingWithResult? If I were doing one large loop, I would just have an empty list and append the result to it after each API request, but I do not know how to mimic this now that I am using threads.
I expect that a common response will be to use asynchronous I/O (https://en.wikipedia.org/wiki/Asynchronous_I/O), and if that is the suggestion, then I would appreciate somebody actually providing an example that accomplishes as much as the code I have linked above.
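For reference, the list-appending pattern described above carries over to threads as long as the shared list is only touched under a lock; a minimal sketch of such a doSomethingWithResult, where master_list and list_lock are hypothetical module-level additions:

from threading import Lock

master_list = []
list_lock = Lock()

def doSomethingWithResult(result, url):
    # each worker appends its (url, result) pair under the lock,
    # so master_list holds everything once q.join() returns
    with list_lock:
        master_list.append((url, result))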
Use a ThreadPool instead. It does the heavy lifting for you: pool.map collects the workers' return values into a single list, in input order, which is exactly the master list you are after. Here is a working example that fetches a few URLs.
import json
import multiprocessing.pool
from urllib2 import urlopen

concurrent = 200

def makeRequest(ourl):
    try:
        api_call = urlopen(ourl).read()
        result = json.loads(api_call)
        return "success", ourl
    except:
        return "error", ourl

def main():
    api_calls = [
        'http://jsonplaceholder.typicode.com/posts/{}'.format(i)
        for i in range(1, 5)]

    # a thread pool that implements the process pool API
    pool = multiprocessing.pool.ThreadPool(processes=concurrent)
    return_list = pool.map(makeRequest, api_calls, chunksize=1)
    pool.close()

    for status, data in return_list:
        print(data)

main()

Python multiprocess Process never terminates

My routine below takes a list of urllib2.Requests, spawns a new process per request, and fires them off. The purpose is asynchronous speed, so it's all fire-and-forget (no response needed). The issue is that the processes spawned in the code below never terminate, so after a few of these the box will OOM. Context: Django web app. Any help?
MP_CONCURRENT = int(multiprocessing.cpu_count()) * 2
if MP_CONCURRENT < 2: MP_CONCURRENT = 2
MPQ = multiprocessing.JoinableQueue(MP_CONCURRENT)

def request_manager(req_list):
    try:
        # put request list in the queue
        for req in req_list:
            MPQ.put(req)
            # call processes on queue
            worker = multiprocessing.Process(target=process_request, args=(MPQ,))
            worker.daemon = True
            worker.start()
        # move on after queue is empty
        MPQ.join()
    except Exception, e:
        logging.error(traceback.print_exc())

# process requests in queue
def process_request(MPQ):
    try:
        while True:
            req = MPQ.get()
            dr = urllib2.urlopen(req)
            MPQ.task_done()
    except Exception, e:
        logging.error(traceback.print_exc())
Maybe I am not right, but:
MP_CONCURRENT = int(multiprocessing.cpu_count()) * 2
if MP_CONCURRENT < 2: MP_CONCURRENT = 2
MPQ = multiprocessing.JoinableQueue(MP_CONCURRENT)

def request_manager(req_list):
    try:
        # put request list in the queue
        pool = []
        for req in req_list:
            MPQ.put(req)
            # call processes on queue
            worker = multiprocessing.Process(target=process_request, args=(MPQ,))
            worker.daemon = True
            worker.start()
            pool.append(worker)
        # move on after queue is empty
        MPQ.join()
        # terminate processes that are no longer needed
        for p in pool: p.terminate()
    except Exception, e:
        logging.error(traceback.print_exc())

# process requests in queue
def process_request(MPQ):
    try:
        while True:
            req = MPQ.get()
            dr = urllib2.urlopen(req)
            MPQ.task_done()
    except Exception, e:
        logging.error(traceback.print_exc())
MP_CONCURRENT = int(multiprocessing.cpu_count()) * 2
if MP_CONCURRENT < 2: MP_CONCURRENT = 2
MPQ = multiprocessing.JoinableQueue(MP_CONCURRENT)
CHUNK_SIZE = 20  # number of requests sent to one process
pool = multiprocessing.Pool(MP_CONCURRENT)

def request_manager(req_list):
    try:
        # put request list in the queue
        responce = pool.map(process_request, req_list, CHUNK_SIZE)  # exits after all requests are called and the pool's work has ended
        # OR
        responce = pool.map_async(process_request, req_list, CHUNK_SIZE)  # request_manager exits after all requests are passed to the pool
    except Exception, e:
        logging.error(traceback.print_exc())

# process requests in queue
def process_request(req):
    dr = urllib2.urlopen(req)
This works ~5-10x faster than your code.
Or integrate a side broker into Django (such as RabbitMQ or something like it).
Ok, after some fiddling (and a good night's sleep) I believe I've figured out the problem (and thank you Eri, you were the inspiration I needed). The main issue with the zombie processes was that I was not signaling back that the process was finished (and killing it), both of which I (naively) thought were happening automagically with multiprocessing.
The code that worked:
# function that will be run through the pool
def process_request(req):
    try:
        dr = urllib2.urlopen(req, timeout=30)
    except Exception, e:
        logging.error(traceback.print_exc())

# process killer
def sig_end(r):
    sys.exit()

# globals
MP_CONCURRENT = int(multiprocessing.cpu_count()) * 2
if MP_CONCURRENT < 2: MP_CONCURRENT = 2
CHUNK_SIZE = 20
POOL = multiprocessing.Pool(MP_CONCURRENT)

# pool initiator
def request_manager(req_list):
    try:
        resp = POOL.map_async(process_request, req_list, CHUNK_SIZE, callback=sig_end)
    except Exception, e:
        logging.error(traceback.print_exc())
A couple of notes:
1) The function that will be hit by "map_async" ("process_request" in this example) must be defined first (and before the global declarations).
2) There is probably a more graceful way to exit the process (suggestions welcome).
3) Using pool in this example really was best (thanks again Eri) due to the "callback" feature which allows me to throw a signal right away.
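For what it's worth, the more conventional way to make sure a pool's worker processes are reaped, without a callback that calls sys.exit(), is to close and join the pool once the work has been submitted. A minimal sketch using the same names as above; note that unlike map_async this blocks until all requests finish, so it trades the fire-and-forget behaviour for a guaranteed cleanup:

import multiprocessing

def request_manager(req_list):
    # a fresh pool per call; its workers are reaped before we return
    pool = multiprocessing.Pool(MP_CONCURRENT)
    try:
        pool.map(process_request, req_list, CHUNK_SIZE)
    finally:
        pool.close()  # no further tasks will be submitted
        pool.join()   # wait for the worker processes to exit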

Multi-threaded Python Web Crawler Got Stuck

I'm writing a Python web crawler and I want to make it multi-threaded. I have now finished the basic part; below is what it does:
a thread gets a URL from the queue;
the thread extracts the links from the page, checks whether each link already exists in a pool (a set), and puts the new links into the queue and the pool;
the thread writes the URL and the HTTP response to a CSV file.
But when I run the crawler, it always gets stuck eventually and does not exit properly. I have gone through the official Python documentation but still have no clue.
Below is the code:
#!/usr/bin/env python
# coding=utf-8
import requests, re, urlparse
import threading
from Queue import Queue
from bs4 import BeautifulSoup

#custom modules and files
from setting import config

class Page:

    def __init__(self, url):
        self.url = url
        self.status = ""
        self.rawdata = ""
        self.error = False

        r = ""
        try:
            r = requests.get(self.url, headers={'User-Agent': 'random spider'})
        except requests.exceptions.RequestException as e:
            self.status = e
            self.error = True
        else:
            if not r.history:
                self.status = r.status_code
            else:
                self.status = r.history[0]
        self.rawdata = r

    def outlinks(self):
        self.outlinks = []
        #links, contains URL, anchor text, nofollow
        raw = self.rawdata.text.lower()
        soup = BeautifulSoup(raw)
        outlinks = soup.find_all('a', href=True)
        for link in outlinks:
            d = {"follow": "yes"}
            d['url'] = urlparse.urljoin(self.url, link.get('href'))
            d['anchortext'] = link.text
            if link.get('rel'):
                if "nofollow" in link.get('rel'):
                    d["follow"] = "no"
            if d not in self.outlinks:
                self.outlinks.append(d)

pool = Queue()
exist = set()
thread_num = 10
lock = threading.Lock()
output = open("final.csv", "a")

#the domain is the start point
domain = config["domain"]
pool.put(domain)
exist.add(domain)

def crawl():
    while True:
        p = Page(pool.get())

        #write data to output file
        lock.acquire()
        output.write(p.url + " " + str(p.status) + "\n")
        print "%s crawls %s" % (threading.currentThread().getName(), p.url)
        lock.release()

        if not p.error:
            p.outlinks()
            outlinks = p.outlinks
            if urlparse.urlparse(p.url)[1] == urlparse.urlparse(domain)[1]:
                for link in outlinks:
                    if link['url'] not in exist:
                        lock.acquire()
                        pool.put(link['url'])
                        exist.add(link['url'])
                        lock.release()
        pool.task_done()

for i in range(thread_num):
    t = threading.Thread(target=crawl)
    t.start()

pool.join()
output.close()
Any help would be appreciated!
Thanks
Marcus
Your crawl function has an infinite while loop with no possible exit path. The condition True always evaluates to True, so the loop keeps running and, as you say, the crawler is not exiting properly.
Modify the crawl function's while loop to include a condition. For instance, exit the loop once the number of links saved to the CSV file exceeds a certain minimum.
i.e.,
def crawl():
    while len(exist) <= min_links:
        ...
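An alternative worth mentioning, since the crawler already calls pool.task_done() and pool.join(): mark the worker threads as daemon threads, so that once pool.join() returns in the main thread, the workers still blocked on pool.get() no longer keep the process alive. A sketch of only the startup portion, with the rest of the code unchanged:

for i in range(thread_num):
    t = threading.Thread(target=crawl)
    t.daemon = True   # daemon threads will not block interpreter exit
    t.start()

pool.join()           # returns once task_done() has been called for every queued URL
output.close()        # the main thread finishes and the idle daemon workers are discarded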

Threads not stopping in Python

The purpose of my program is to download a file with threads. I define a unit size and use len/unit threads, where len is the length of the file to be downloaded.
Using my program, the file is downloaded, but the threads do not stop, and I can't find the reason why.
This is my code:
#! /usr/bin/python

import urllib2
import threading
import os
from time import ctime

class MyThread(threading.Thread):
    def __init__(self, func, args, name=''):
        threading.Thread.__init__(self)
        self.func = func
        self.args = args
        self.name = name

    def run(self):
        apply(self.func, self.args)

url = 'http://ubuntuone.com/1SHQeCAQWgIjUP2945hkZF'
request = urllib2.Request(url)
response = urllib2.urlopen(request)
meta = response.info()
response.close()

unit = 1000000
flen = int(meta.getheaders('Content-Length')[0])
print flen

if flen % unit == 0:
    bs = flen / unit
else:
    bs = flen / unit + 1

blocks = range(bs)
cnt = {}
for i in blocks:
    cnt[i] = i

def getStr(i):
    try:
        print 'Thread %d start.' % (i,)
        fout = open('a.zip', 'wb')
        fout.seek(i * unit, 0)
        if (i + 1) * unit > flen:
            request.add_header('Range', 'bytes=%d-%d' % (i * unit, flen - 1))
        else:
            request.add_header('Range', 'bytes=%d-%d' % (i * unit, (i + 1) * unit - 1))
        #opener = urllib2.build_opener()
        #buf = opener.open(request).read()
        resp = urllib2.urlopen(request)
        buf = resp.read()
        fout.write(buf)
    except BaseException:
        print 'Error'
    finally:
        #opener.close()
        fout.flush()
        fout.close()
        del cnt[i]
        # filelen = os.path.getsize('a.zip')
        print 'Thread %d ended.' % (i),
        print cnt
        # print 'progress : %4.2f' % (filelen * 100.0 / flen,), '%'

def main():
    print 'download at:', ctime()
    threads = []
    for i in blocks:
        t = MyThread(getStr, (blocks[i],), getStr.__name__)
        threads.append(t)
    for i in blocks:
        threads[i].start()
    for i in blocks:
        # print 'this is the %d thread;' % (i,)
        threads[i].join()
    #print 'size:', os.path.getsize('a.zip')
    print 'download done at:', ctime()

if __name__ == '__main__':
    main()
Could someone please help me understand why the threads aren't stopping?
I can't really address your code example because it is quite messy and hard to follow, but a potential reason you are seeing the threads not end is that a request will stall out and never finish. urllib2 allows you to specify timeouts for how long you will allow the request to take.
What I would recommend for your own code is that you split your work up into a queue, start a fixed number of thread (instead of a variable number), and let the worker threads pick up work until it is done. Make the http requests have a timeout. If the timeout expires, try again or put the work back into the queue.
Here is a generic example of how to use a queue, a fixed number of workers and a sync primitive between them:
import threading
import time
from Queue import Queue

def worker(queue, results, lock):
    local_results = []
    while True:
        val = queue.get()
        if val is None:
            break
        # pretend to do work
        time.sleep(.1)
        local_results.append(val)
    with lock:
        results.extend(local_results)
    print threading.current_thread().name, "Done!"

num_workers = 4
threads = []
queue = Queue()
lock = threading.Lock()
results = []

for i in xrange(100):
    queue.put(i)

for _ in xrange(num_workers):
    # Use None as a sentinel to signal the threads to end
    queue.put(None)
    t = threading.Thread(target=worker, args=(queue, results, lock))
    t.start()
    threads.append(t)

for t in threads:
    t.join()

print sorted(results)
print "All done"
