How many threads are running in this Python example?

I want to learn how to run a function with multithreading in Python. In other words, I have a long list of arguments that I want to send to a function that might take time to finish. I want my program to loop over the arguments and call the function in parallel (no need to wait until the function finishes for the first argument before starting the next).
I found this example code from here:
import Queue
import threading
import urllib2

# called by each thread
def get_url(q, url):
    q.put(urllib2.urlopen(url).read())

theurls = ["http://google.com", "http://yahoo.com"]

q = Queue.Queue()

for u in theurls:
    t = threading.Thread(target=get_url, args=(q, u))
    t.daemon = True
    t.start()

s = q.get()
print s
My questions are:
1) As far as I know, I have to specify the number of threads that I want my program to run in parallel. There is no specific number of threads in the code above.
2) The number of threads is something that varies from device to device (it depends on the processor, memory, etc.). Since this code does not specify any number of threads, how does the program know the right number of threads to run concurrently?

The threads are created in the for loop. The loop body executes twice, since there are two elements in theurls, so two worker threads are started. This also answers your other two questions: the code never asks the hardware how many threads to run; the number of threads is simply the number of Thread objects you create and start. You therefore end up with two worker threads in the program, plus the main thread, for a total of three.
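If you do want to cap the number of worker threads rather than starting one per argument, the usual approach is a fixed-size thread pool. Below is a minimal sketch of my own using concurrent.futures from the Python 3 standard library (note it uses urllib.request instead of the Python 2 urllib2 above); the two-URL list is just a stand-in for your longer argument list.

# Minimal sketch (Python 3): bound the number of threads with a fixed-size pool
# instead of starting one thread per argument.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

theurls = ["http://google.com", "http://yahoo.com"]  # stand-in for your argument list

def get_url(url):
    # Runs in a worker thread; the pool collects the return value for us.
    return urlopen(url).read()

with ThreadPoolExecutor(max_workers=4) as executor:
    # map() dispatches the calls to at most max_workers threads and
    # yields the results in the same order as theurls.
    for body in executor.map(get_url, theurls):
        print(len(body))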


How can I remove duplicates while multiprocessing?

I am very new to multiprocessing, and I am only using it to find an image on the screen. The problem is that the code produces duplicates, which slow it down. I have tried using a "not in" check to only place proc into processes if it is not already in it, but this did not work. Any help or optimization would be welcome; I have no idea what I am doing, as this is just a personal project to learn multiprocessing.
from multiprocessing.context import Process
import pyautogui as auto

screenWidth, screenHeight = auto.size()
currentMouseX, currentMouseY = auto.position()

def bot(aim):
    while True:
        for aim in auto.locateAllOnScreen(r"dot.png", confidence=0.9795):
            auto.click(aim)
            print(aim)

def bot2(aim):
    while True:
        for aim in auto.locateAllOnScreen(r"dot.png", confidence=0.9795):
            auto.click(aim)
            print(aim)

def bot3(aim):
    while True:
        for aim in auto.locateAllOnScreen(r"dot.png", confidence=0.9795):
            auto.click(aim)
            print(aim)

if __name__ == "__main__":
    processes = []
    for t in auto.locateAllOnScreen(r"dot.png", confidence=0.9795):
        proc = Process(target=bot, args=(t,))
        processes.append(proc)
        proc.start()
    for z in auto.locateAllOnScreen(r"dot.png", confidence=0.9795):
        proc = Process(target=bot2, args=(z,))
        processes.append(proc)
        proc.start()
    for x in auto.locateAllOnScreen(r"dot.png", confidence=0.9795):
        proc = Process(target=bot3, args=(x,))
        processes.append(proc)
        proc.start()
    for p in processes:
        p.join()
Unless my eyes deceive me, you have three functions bot, bot2 and bot3 that appear to be identical. You have to ask yourself why you need three identical functions that differ only in a name. I certainly don't have an answer.
Presumably auto.locateAllOnScreen returns the locations of all occurrences of "dot.png" on your screen, and you would like to print out information on each occurrence in parallel. Your main process iterates over all of these occurrences 3 times, and for each occurrence it starts a new process. Then each process totally ignores the occurrence argument, aim, that is passed to it and instead iterates over all the occurrences itself. So if there were 5 occurrences on the screen, you would be creating 3 * 5 = 15 processes, and each process would print 5 lines of output (one for each occurrence), for a total of 15 * 5 = 75 lines of output, when in reality you should only be getting 5 lines of output if you were doing this correctly (I am ignoring that there is a while True: loop where all the output is then repeated). You are also potentially creating more processes than the number of CPU cores you have on your computer, so they would not truly be running in parallel, on the assumption that the bot function(s) are CPU-intensive, which may not be the case.
I am not sure whether this problem is a good candidate for multiprocessing, since there is a fair amount of overhead just to create processes and to pass arguments and results between them, so you might not gain any improvement in performance. But if the idea is to see how you would solve this using multiprocessing, then, since you do not know in advance how many elements the call to auto.locateAllOnScreen might return and there is no point in creating more processes than the number of processors you actually have, it is probably best to use a multiprocessing pool of fixed size.
What you want to do is have your worker function bot (and you only need one of these) be passed a single occurrence that it will process. You then create a pool of processes whose size is the smaller of the number of CPUs you have and the number of tasks you actually have to submit. You then submit to the pool a number of tasks, where each task specifies the worker function to perform it and the argument(s) it requires.
In the code below I have removed from function bot the while True: loop that never terminates. You can put it back in if you want.
from multiprocessing import Pool, cpu_count
import pyautogui as auto

def bot(aim):
    # do the work for the single occurrence of aim
    auto.click(aim)
    print(aim)

if __name__ == "__main__":
    aims = list(auto.locateAllOnScreen(r"dot.png", confidence=0.9795))
    # choose an appropriate pool size:
    pool = Pool(min(len(aims), cpu_count()))
    # bot will be called once for each element returned by auto.locateAllOnScreen
    pool.map(bot, aims)
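As a small follow-up of my own (not part of the answer above): the pool can also be managed with a with block so it is closed and joined automatically, and it is worth guarding against locateAllOnScreen returning no matches, since Pool(0) raises an error.

from multiprocessing import Pool, cpu_count
import pyautogui as auto

def bot(aim):
    # Process a single occurrence handed to us by the pool.
    auto.click(aim)
    print(aim)

if __name__ == "__main__":
    aims = list(auto.locateAllOnScreen(r"dot.png", confidence=0.9795))
    if aims:
        # The with block closes and joins the pool once map() is done.
        with Pool(min(len(aims), cpu_count())) as pool:
            pool.map(bot, aims)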

How to start multiple jobs in Python and communicate with the main job

I am a novice user of python multithreading/multiprocessing, so please bear with me.
I would like to solve the following problem and I need some help/suggestions in this regard.
Let me describe in brief:
I would like to start a Python script which does something sequentially in the beginning.
After the sequential part is over, I would like to start some jobs in parallel.
Assume that there are four parallel jobs I want to start.
I would like to also start these jobs on some other machines using "lsf" on the computing cluster. My initial script is also running on an "lsf" machine.
The four jobs which I start on four machines will perform two logical steps, A and B, one after the other.
When a job starts, it begins with logical step A and finishes it.
After every job (all 4 jobs) has finished step A, it should notify the first job which started them. In other words, the main job which started them is waiting for confirmation from these four jobs.
Once the main job receives confirmation from all four jobs, it should notify them to do logical step B.
Logical step B will automatically terminate the jobs after finishing the task.
The main job waits for all the jobs to finish and should then continue with the sequential part.
An example scenario would be:
A Python script running on an "lsf" machine in the cluster starts four tcl shells on four "lsf" machines.
In each tcl shell, a script is sourced to do logical step A.
Once step A is done, they should somehow inform the Python script, which is waiting for the acknowledgement.
Once the acknowledgement is received from all four, the Python script informs them to do logical step B.
Logical step B is also a script which is sourced in their tcl shell; this script will also close the tcl shell at the end.
Meanwhile, the Python script is waiting for all four jobs to finish.
After all four jobs are finished, it should continue with the sequential part again and finish later on.
Here are my questions:
I am confused about whether I should use multithreading or multiprocessing. Which one suits this better?
In fact, what is the difference between these two? I read about them but wasn't able to reach a conclusion.
What is the Python GIL? I also read somewhere that at any one point in time only one thread will execute.
I need some explanation here. It gives me the impression that I can't use threads.
Any suggestions on how I could solve my problem systematically and in a more Pythonic way?
I am looking for a verbal step-by-step explanation and some pointers to read on for each step.
Once the concepts are clear, I would like to code it myself.
Thanks in advance.
In addition to roganjosh's answer, I would include some signaling to start the step B after A has finished:
import multiprocessing as mp
import time
import random
import sys

def func_A(process_number, queue, proceed):
    print "Process {} has been created".format(process_number)
    print "Process {} has ended step A".format(process_number)
    sys.stdout.flush()
    queue.put((process_number, "done"))
    proceed.wait()  # wait for the signal to do the second part
    print "Process {} has ended step B".format(process_number)
    sys.stdout.flush()

def multiproc_master():
    queue = mp.Queue()
    proceed = mp.Event()
    processes = [mp.Process(target=func_A, args=(x, queue, proceed)) for x in range(4)]
    for p in processes:
        p.start()
    # block=True waits until there is something available
    results = [queue.get(block=True) for p in processes]
    proceed.set()  # set continue-flag
    for p in processes:  # wait for all to finish (also on Windows)
        p.join()
    return results

if __name__ == '__main__':
    split_jobs = multiproc_master()
    print split_jobs
1) From the options you listed in your question, you should probably use multiprocessing in this case to leverage multiple CPU cores and compute things in parallel.
2) Going further from point 1: the Global Interpreter Lock (GIL) means that only one thread can actually execute code at any one time.
A simple example for multithreading that pops up often here is having a prompt for user input for, say, an answer to a maths problem. In the background, they want a timer to keep incrementing at one second intervals to register how long the person took to respond. Without multithreading, the program would block whilst waiting for user input and the counter would not increment. In this case, you could have the counter and the input prompt run on different threads so that they appear to be running at the same time. In reality, both threads are sharing the same CPU resource and are constantly passing an object backwards and forwards (the GIL) to grant them individual access to the CPU. This is hopeless if you want to properly process things in parallel. (Note: In reality, you'd just record the time before and after the prompt and calculate the difference rather than bothering with threads.)
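As a tiny illustration of that prompt-plus-timer scenario (my own addition, written in Python 3 syntax, unlike the Python 2 example below): a daemon thread keeps a counter ticking in the background while the main thread blocks on input().

import threading
import time

elapsed = 0

def tick():
    # Counts seconds in the background while the main thread waits on input().
    global elapsed
    while True:
        time.sleep(1)
        elapsed += 1

# daemon=True lets the program exit even though tick() never returns
threading.Thread(target=tick, daemon=True).start()

answer = input("What is 7 * 8? ")
print("You answered {!r} after about {} seconds".format(answer, elapsed))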
3) I have made a really simple example using multiprocessing. In this case, I spawn 4 processes that compute the sum of squares over a randomly chosen range. These processes do not share a GIL and therefore execute independently, unlike threads. In this example, you can see that all processes start and end at slightly different times, but we can aggregate their results into a single queue object. The parent process waits for all 4 child processes to return their computations before moving on. You could then repeat the code for func_B (not included in the code).
import multiprocessing as mp
import time
import random
import sys

def func_A(process_number, queue):
    start = time.time()
    print "Process {} has started at {}".format(process_number, start)
    sys.stdout.flush()
    my_calc = sum([x**2 for x in xrange(random.randint(1000000, 3000000))])
    end = time.time()
    print "Process {} has ended at {}".format(process_number, end)
    sys.stdout.flush()
    queue.put((process_number, my_calc))

def multiproc_master():
    queue = mp.Queue()
    processes = [mp.Process(target=func_A, args=(x, queue)) for x in xrange(4)]
    for p in processes:
        p.start()
    # Uncomment the below if you run on Linux (Windows and Linux treat multiprocessing
    # differently, as Windows lacks os.fork())
    #for p in processes:
    #    p.join()
    results = [queue.get() for p in processes]
    return results

if __name__ == '__main__':
    split_jobs = multiproc_master()
    print split_jobs

Threading advice for web crawler - scheduling with single list

I would like to add multithreading to my web crawler, but I can see that the way the spider schedules links to be crawled may be incompatible with multithreading. The crawler is only ever going to be active on a handful of news websites, but rather than starting a new thread per domain I would prefer to have multiple threads open on the same domain. My web crawling code operates through the following function:
def crawl_links():
    links_to_crawl.append(domain[0])
    while len(links_to_crawl) > 0:
        link = links_to_crawl[0]
        if link in crawled_links or link in ignored_links:
            del links_to_crawl[0]
        else:
            print '\n', link
            try:
                html = get_html(link)
                GetLinks(html)
                SaveFile(html)
                crawled_links.append(links_to_crawl.pop(0))
            except (ValueError, urllib2.URLError, Timeout.Timeout, httplib.IncompleteRead):
                ignored_links.append(links_to_crawl.pop(0))
    print 'Spider finished!'
    print 'Ignored links:\n', ignored_links
    print 'Crawled links:\n', crawled_links
    print 'Relative links\n', relative_links
If my understanding of how threading will work is correct, if I simply opened multiple threads on this process they will all crawl the same links (potentially multiple times) or they will clash a bit. Without necessarily going into specifics, how would you advise to restructure the scheduling to make it compatible with multiple threads running at the same time?
I've given this some thought, and the only workaround I could come up with is having the GetLinks() class append links to multiple lists, with an individual list per thread... but this seems like quite a clumsy workaround.
Here is a general scheme that I have used in order to run a multi-threaded application in Python.
The scheme takes a table of input arguments and runs one thread per row, with the rows processed in parallel.
Each of those threads takes its row and, for each item in the row, runs one worker thread at a time, sequentially.
Each item contains a fixed number of arguments, which are passed to the worker thread that executes it.
Input Example:
table = \
[
[[12,32,34],[11,20,14],[33,67,56],[10,20,45]],
[[21,21,67],[44,34,74],[23,12,54],[31,23,13]],
[[31,67,56],[34,22,67],[87,74,52],[87,74,52]],
]
In this example we will have 3 threads running in parallel, each one executing 4 threads sequentially.
In order to keep your threads balanced, it is advisable to have the same number of items in each row.
Threading Scheme:
import threading
import MyClass # This is for you to implement

def RunThreads(outFileName, errFileName):
    # Create a shared object for saving the output of the different threads
    outFile = CriticalSection(outFileName)
    # Create a shared object for saving the errors of the different threads
    errFile = CriticalSection(errFileName)
    # Run in parallel one thread for each row in the input table
    RunParallelThreads(outFile, errFile)

def RunParallelThreads(outFile, errFile):
    # Create all the parallel threads
    threads = [threading.Thread(target=RunSequentialThreads, args=(outFile, errFile, row)) for row in table]
    # Start all the parallel threads
    for thread in threads: thread.start()
    # Wait for all the parallel threads to complete
    for thread in threads: thread.join()

def RunSequentialThreads(outFile, errFile, row):
    myObject = MyClass()
    for item in row:
        # Create a thread with the arguments given in the current item
        thread = threading.Thread(target=myObject.Run, args=(outFile, errFile, item[0], item[1], item[2]))
        # Start the thread
        thread.start()
        # Wait for the thread to complete, but only up to 600 seconds
        thread.join(600)
        # Terminate the thread if it hasn't completed up to this point
        if thread.isAlive():
            thread._Thread__stop()
            errFile.write('Timeout on arguments: {} {} {}\n'.format(item[0], item[1], item[2]))
The class below implements an object which can be safely shared among different threads running in parallel. It provides a single interface method called write, which allows any thread to update the shared object in a safe manner (i.e., without another thread's write being interleaved in the middle).
import codecs

class CriticalSection:
    def __init__(self, fileName):
        self.mutex = threading.Lock()
        self.fileDesc = codecs.open(fileName, mode='w', encoding='utf-8')

    def __del__(self):
        del self.mutex
        self.fileDesc.close()

    def write(self, data):
        self.mutex.acquire()
        self.fileDesc.write(data)
        self.mutex.release()
The above scheme should allow you to control the level of "parallel-ness" and the level of "sequential-ness" within your application.
For example, you can use a single row for all the items, and have your application running in a complete sequential manner.
In contrast, you can place each item in a separate row, and have your application running in a complete parallel manner.
And of course, you can choose any combination of the above...
Note:
In MyClass, you will need to implement a Run method, which will take the outFile and errFile objects, as well as the arguments that you have defined for each thread.
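As a complementary sketch for the crawler question itself (my own addition, not part of the answer above, written for Python 3): the standard-library queue.Queue plus a lock-protected set of already-seen links gives all worker threads a single shared frontier, so no link is handed out twice. get_html and GetLinks are the question's own helpers and are assumed to exist here, with GetLinks assumed to return the extracted links rather than appending them to a global list.

import threading
import queue

links_to_crawl = queue.Queue()   # shared frontier; Queue is thread-safe
seen_links = set()               # everything ever enqueued
seen_lock = threading.Lock()     # protects seen_links

def schedule(link):
    # Enqueue a link only if no thread has scheduled it before.
    with seen_lock:
        if link in seen_links:
            return
        seen_links.add(link)
    links_to_crawl.put(link)

def worker():
    while True:
        link = links_to_crawl.get()   # blocks until a link is available
        try:
            html = get_html(link)              # assumed helper from the question
            for new_link in GetLinks(html):    # assumed to return extracted links
                schedule(new_link)
        except Exception as exc:
            print('ignored', link, exc)
        finally:
            links_to_crawl.task_done()

def crawl(start_url, n_threads=4):
    schedule(start_url)
    for _ in range(n_threads):
        threading.Thread(target=worker, daemon=True).start()
    links_to_crawl.join()   # returns once every scheduled link has been processed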

Multiprocessing with python3 only runs once

I have a problem running multiple processes in Python 3.
My program does the following:
1. Takes entries from an sqllite database and passes them to an input_queue
2. Create multiple processes that take items off the input_queue, run it through a function and output the result to the output queue.
3. Create a thread that takes items off the output_queue and prints them (This thread is obviously started before the first 2 steps)
My problem is that currently the 'function' in step 2 is only run as many times as the number of processes set, so for example if you set the number of processes to 8, it only runs 8 times then stops. I assumed it would keep running until it took all items off the input_queue.
Do I need to rewrite the function that takes the entries out of the database (step 1) into another process and then pass its output queue as an input queue for step 2?
Edit:
Here is an example of the code. I used a list of numbers as a substitute for the database entries, as it behaves the same way. I have 300 items in the list and I would like it to process all 300 items, but at the moment it just processes 10 (the number of processes I have assigned).
#!/usr/bin/python3
from multiprocessing import Process, Queue
import multiprocessing
from threading import Thread

## This is the class that would be passed to the multi_processing function
class Processor:
    def __init__(self, out_queue):
        self.out_queue = out_queue

    def __call__(self, in_queue):
        data_entry = in_queue.get()
        result = data_entry*2
        self.out_queue.put(result)

# Performs the multiprocessing
def perform_distributed_processing(dbList, threads, processor_factory, output_queue):
    input_queue = Queue()

    # Create the data processors.
    for i in range(threads):
        processor = processor_factory(output_queue)
        data_proc = Process(target=processor,
                            args=(input_queue,))
        data_proc.start()

    # Push entries to the queue.
    for entry in dbList:
        input_queue.put(entry)

    # Push stop markers to the queue, one for each thread.
    for i in range(threads):
        input_queue.put(None)

    data_proc.join()
    output_queue.put(None)

if __name__ == '__main__':
    output_results = Queue()

    def output_results_reader(queue):
        while True:
            item = queue.get()
            if item is None:
                break
            print(item)

    # Establish results collecting thread.
    results_process = Thread(target=output_results_reader, args=(output_results,))
    results_process.start()

    # Use this as a substitute for the database in the example
    dbList = [i for i in range(300)]

    # Perform multi processing
    perform_distributed_processing(dbList, 10, Processor, output_results)

    # Wait for it all to finish.
    results_process.join()
A collection of processes that service an input queue and write to an output queue is pretty much the definition of a process pool.
If you want to know how to build one from scratch, the best way to learn is to look at the source code for multiprocessing.Pool, which is pretty simple Python, and very nicely written. But, as you might expect, you can just use multiprocessing.Pool instead of re-implementing it. The examples in the docs are very nice.
But really, you could make this even simpler by using an executor instead of a pool. It's hard to explain the difference (again, read the docs for both modules), but basically, a future is a "smart" result object, which means instead of a pool with a variety of different ways to run jobs and get results, you just need a dumb thing that doesn't know how to do anything but return futures. (Of course in the most trivial cases, the code looks almost identical either way…)
from concurrent.futures import ProcessPoolExecutor

def Processor(data_entry):
    return data_entry*2

def perform_distributed_processing(dbList, threads, processor_factory):
    with ProcessPoolExecutor(max_workers=threads) as executor:
        yield from executor.map(processor_factory, dbList)

if __name__ == '__main__':
    # Use this as a substitute for the database in the example
    dbList = [i for i in range(300)]
    for result in perform_distributed_processing(dbList, 8, Processor):
        print(result)
Or, if you want to handle them as they come instead of in order:
from concurrent.futures import ProcessPoolExecutor, Future, as_completed

def perform_distributed_processing(dbList, threads, processor_factory):
    with ProcessPoolExecutor(max_workers=threads) as executor:
        fs = (executor.submit(processor_factory, db) for db in dbList)
        yield from map(Future.result, as_completed(fs))
Notice that I also replaced your in-process queue and thread, because it wasn't doing anything but providing a way to interleave "wait for the next result" and "process the most recent result", and yield (or yield from, in this case) does that without all the complexity, overhead, and potential for getting things wrong.
Don't try to rewrite the whole multiprocessing library again. I think you can use any of the multiprocessing.Pool methods, depending on your needs; if this is a batch job you can even use the synchronous multiprocessing.Pool.map(), only instead of pushing to an input queue, you pass an iterable (or write a generator) that yields the inputs to the workers. A minimal sketch of this follows below.
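A minimal sketch of that suggestion (my own illustration, reusing the Processor function and the 300-item stand-in list from above):

from multiprocessing import Pool

def Processor(data_entry):
    return data_entry*2

if __name__ == '__main__':
    dbList = range(300)   # stand-in for the database entries
    with Pool(processes=8) as pool:
        # map() splits dbList across the 8 worker processes and
        # returns the results in order.
        for result in pool.map(Processor, dbList):
            print(result)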

Return whichever expression returns first

I have two different functions f, and g that compute the same result with different algorithms. Sometimes one or the other takes a long time while the other terminates quickly. I want to create a new function that runs each simultaneously and then returns the result from the first that finishes.
I want to create that function with a higher order function
h = firstresult(f, g)
What is the best way to accomplish this in Python?
I suspect that the solution involves threading. I'd like to avoid discussion of the GIL.
I would simply use a Queue for this. Start the threads and the first one which has a result ready writes to the queue.
Code
from threading import Thread
from time import sleep
from Queue import Queue

def firstresult(*functions):
    queue = Queue()
    threads = []
    for f in functions:
        def thread_main(func=f):  # bind f now, not when the thread runs
            queue.put(func())
        thread = Thread(target=thread_main)
        threads.append(thread)
        thread.start()
    result = queue.get()
    return result

def slow():
    sleep(1)
    return 42

def fast():
    return 0

if __name__ == '__main__':
    print firstresult(slow, fast)
Live demo
http://ideone.com/jzzZX2
Notes
Stopping the threads is an entirely different topic. For this you need to add some state variable to the threads, which is checked at regular intervals (a small sketch of that pattern follows below). As I wanted to keep this example short, I simply skipped that part and assumed that all workers get the time to finish their work even though the result is never read.
Skipping the discussion about the GIL as requested by the questioner. ;-)
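A minimal sketch of such a cooperative stop flag (my own addition, in Python 3 syntax), using threading.Event as the state variable that the worker checks at regular intervals:

import threading
import time

stop_flag = threading.Event()

def worker(name):
    # Do the work in small steps and check the flag between them.
    for step in range(100):
        if stop_flag.is_set():
            print(name, "stopping early at step", step)
            return
        time.sleep(0.1)   # stand-in for one chunk of real work
    print(name, "finished all steps")

t = threading.Thread(target=worker, args=("worker-1",))
t.start()
time.sleep(0.5)
stop_flag.set()   # ask the worker to stop as soon as it notices
t.join()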
Now - unlike my suggestion on the other answer, this piece of code does exactly what you are requesting:
from multiprocessing import Process, Queue
import random
import time

def firstresult(func1, func2):
    queue = Queue()
    proc1 = Process(target=func1, args=(queue,))
    proc2 = Process(target=func2, args=(queue,))
    proc1.start(); proc2.start()
    result = queue.get()
    proc1.terminate(); proc2.terminate()
    return result

def algo1(queue):
    time.sleep(random.uniform(0, 1))
    queue.put("algo 1")

def algo2(queue):
    time.sleep(random.uniform(0, 1))
    queue.put("algo 2")

print firstresult(algo1, algo2)
Run each function in a new worker thread; the two worker threads send the result back to the main thread via a one-item queue or something similar. When the main thread receives the result from the winner, it kills (do Python threads support kill yet? lol) both worker threads to avoid wasting time (one function may take hours while the other only takes a second). A sketch of this idea with concurrent.futures follows below.
Replace the word thread with process if you want.
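A minimal sketch of that idea (my own addition, not part of this answer): concurrent.futures can submit both functions to worker processes and hand back whichever result arrives first. The slow/fast functions are hypothetical stand-ins for f and g; cancel_futures requires Python 3.9 or later, and a worker that is already running cannot be force-killed this way, unlike Process.terminate() in the answer below.

from concurrent.futures import ProcessPoolExecutor, wait, FIRST_COMPLETED
import time

def slow():
    time.sleep(2)
    return "slow result"

def fast():
    time.sleep(0.1)
    return "fast result"

def firstresult(*functions):
    executor = ProcessPoolExecutor(max_workers=len(functions))
    futures = [executor.submit(func) for func in functions]
    # Block until at least one future has finished.
    done, _ = wait(futures, return_when=FIRST_COMPLETED)
    result = next(iter(done)).result()
    # Don't wait for the loser; cancel any work that hasn't started yet (Python 3.9+).
    executor.shutdown(wait=False, cancel_futures=True)
    return result

if __name__ == "__main__":
    print(firstresult(slow, fast))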
You will need to run each function in another process (with multiprocessing) or in a different thread.
If both are CPU-bound, multithreading won't help much, precisely because of the GIL, so multiprocessing is the way to go.
If the return value is a pickleable (serializable) object, I have this decorator I created that simply runs the function in the background, in another process:
https://bitbucket.org/jsbueno/lelo/src
It is not exactly what you want, as both calls are non-blocking and start executing right away. The trick with this decorator is that it blocks (and waits for the function to complete) only when you try to use the return value.
But on the other hand, it is just a decorator that does all the work.
