I'm quite new to Python and multiprocessing, and what I need to know is whether there is any way to make active processes wait for something like "all processes have finished using a given resource", then continue their work. And yes, I really need them to wait; the main purpose is synchronization. It's not about finishing the processes and joining them, it's about waiting while they're running. Should I use something like a Condition or an Event? I couldn't find anything really helpful anywhere.
It would be something like this:
import multiprocessing

def worker(args):
    # 1. working
    # 2. takes the resource from the manager
    # 3. waits for all other processes to finish the same step above
    # 4. returns to 1.

if __name__ == '__main__':
    manager = multiprocessing.Manager()
    resource = manager.something()
    pool = multiprocessing.Pool(n)
    result = pool.map(worker, args)
    pool.close()
    pool.join()
Edit: The "working" part takes a lot more time than the other parts, so I still take advantage of multiprocessing, even if the access to that single resource is serial. Let's say the problem works this way: I have multiple processes running a solution finder (an evolutionary algorithm), and every "n" solutions made, I use that resource to exchange some data between those processes and improve solutions using the information. So, I need all of them to wait before exchanging that info. It's a little hard to explain, and I'm not really here to discuss the theory, I just want to know if there is any way I could do what I tried to describe in the main question.
I'm not sure that I understood your question, but I think you can use a Queue. It's a good solution for transmitting data from one process to another. You can implement something like the following (a sketch appears after the list):
1. Process the first chunk
2. Write the results to the queue
3. Wait until the queue is not full
4. Return to 1
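A minimal sketch of those steps, assuming a bounded multiprocessing.Queue for results and a stand-in for the "working" step; all of the names here are illustrative, not from the original question:

import multiprocessing

def worker(work_queue, result_queue):
    # pull chunks until the producer signals completion with None
    while True:
        chunk = work_queue.get()
        if chunk is None:
            break
        result = sum(chunk)        # stand-in for the real "working" step
        result_queue.put(result)   # blocks if the bounded queue is full,
                                   # which is the "wait" step from the list above

if __name__ == '__main__':
    work_queue = multiprocessing.Queue()
    result_queue = multiprocessing.Queue(maxsize=4)   # bounded, so workers wait

    procs = [multiprocessing.Process(target=worker,
                                     args=(work_queue, result_queue))
             for _ in range(2)]
    for p in procs:
        p.start()

    for chunk in ([1, 2], [3, 4], [5, 6]):
        work_queue.put(chunk)
    for _ in procs:
        work_queue.put(None)       # one sentinel per worker

    # drain results in the parent while the workers are still running
    for _ in range(3):
        print(result_queue.get())

    for p in procs:
        p.join()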
I actually found out a way to do what I wanted.
As you can see in the question, the code was already using a manager along with the processes. So, in simple words, I made a shared resource which works basically like a "log". Every time a process finishes its work, it writes a permission into the log. Once all the desired permissions are there, the processes continue their work (this also lets me set a specific order of access to a resource, for example).
Please note that this is not a Lock or a Semaphore.
I suppose this isn't a good method at all, but it suits the problem's needs and doesn't delay the execution.
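For reference, here is a minimal sketch of that idea, assuming a manager-backed list as the "log", a fixed number of workers, and a simple busy-wait; the names and timing are illustrative, not the original code:

import multiprocessing
import time

N_WORKERS = 3

def worker(worker_id, log, lock):
    for generation in range(2):
        # 1. "working" step (stand-in for the real computation)
        time.sleep(0.1 * worker_id)

        # 2. write a permission into the shared log
        with lock:
            log.append((generation, worker_id))

        # 3. wait until every worker has logged this generation
        while True:
            snapshot = log[:]   # copy the proxy's contents locally
            if sum(1 for g, _ in snapshot if g == generation) >= N_WORKERS:
                break
            time.sleep(0.01)

        # 4. everyone is here; exchange data, then go back to 1.
        print('worker %d passed the barrier for generation %d' % (worker_id, generation))

if __name__ == '__main__':
    manager = multiprocessing.Manager()
    log = manager.list()
    lock = manager.Lock()
    procs = [multiprocessing.Process(target=worker, args=(i, log, lock))
             for i in range(N_WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

The standard library's multiprocessing.Barrier (Python 3.3+) provides the same wait-for-everyone semantics without the busy-wait, if a plain barrier turns out to be enough.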
Related
I want to spawn X Pool workers and give each of them X% of the work to do. My issue is that the work takes about 20 minutes to exhaust, longer for each extra process running, and due to the type of calculations being done my answer may be found within minutes or hours. What I would like to do is implement some way for a single worker to go "HEY, I FOUND IT" and use that signal to kill the remainder of the pool and move on with my calculations.
Key points:
I have tried callbacks; they don't seem to run on a starmap_async until the entire pool finishes.
I only care about the first suitable answer found.
I am not sharing resources, and surprise process death, albeit rude, is perfectly acceptable.
I've also considered using a Queue, but it wouldn't make sense because the scope of work I'm passing to each worker is already built into the parameters of the function.
Below is a very stripped-down version of what I'm working with (the calculations I'm working with can take hours to finish over a complex iterable of 4.2 billion items.)
from multiprocessing import Pool

# distSearch and Sections1_5 are defined elsewhere

def doWork():
    workers = Pool(2)
    results = workers.starmap_async(func=distSearch, iterable=Sections1_5,
                                    callback=killPool)
    workers.close()
    print("Found answer : {}".format(results.get()))
    workers.join()

def killPool():
    workers.terminate()
    print("Worker Pool Terminated")
I should probably specify that my process only returns if it finds an answer; otherwise it just exits once done. I have looked at this thread, but it has me completely lost, and it seems like a lot of overhead to constantly check for the win condition when that should come in the return/callback of the worker Pool.
All the answers I've found result in significant overhead from supervising the worker pool; I'm looking for a solution that sources the kill signal at the worker level, autonomously.
I'm looking for a solution that sources the kill signal at the worker level, autonomously.
AFAIK, that doesn't exist. The methods of the Pool object (like Pool.terminate) should only be used in the process that created the pool.
What you could do is use Pool.imap_unordered. This returns an iterator in the parent process over the results which yields results as soon as they become available. As soon as the desired result pops up, you could then use Pool.terminate().
Edit:
From looking at the 3.5 implementation, starmap_async returns a MapResult instance, which is not an iterator.
You can wrap multiple inputs in a tuple and use imap_unordered over a list of those.
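A minimal sketch of that approach, assuming a stand-in search function and tuple-wrapped inputs; the names and the "return None when nothing is found" convention are illustrative, not from the original post:

from multiprocessing import Pool

def dist_search(args):
    section, target = args              # unpack the tuple-wrapped inputs
    # stand-in for the real search; return None when nothing is found
    return section if section == target else None

if __name__ == '__main__':
    tasks = [(i, 7) for i in range(1000)]
    with Pool(2) as workers:
        for result in workers.imap_unordered(dist_search, tasks, chunksize=50):
            if result is not None:      # the "HEY I FOUND IT" condition
                print("Found answer : {}".format(result))
                workers.terminate()     # stop the remaining workers
                break

The parent still does the terminating, but because imap_unordered yields results as soon as any worker produces one, the pool dies as soon as the first hit comes back rather than after the whole map completes.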
I have a list in my Python program that gets new items on certain occasions (it's a message-queue consumer). Then I have a thread that checks every few minutes to see if there's anything in the list, and if there is, I want to do an action on each item and then empty the list.
Now my problem: should I use locks to ensure that the action in the subthread is atomic, and does this ensure that the main thread can't alter the list while I'm going through the list?
Or should I instead use some kind of flag?
Pseudocode to make my problem clearer.
Subthread:
def run(self):
    while 1:
        if get_main_thread_list() is not empty:
            do_operations()
            empty_the_list()
        sleep(30)
Main thread:
list = []

def on_event(item):
    list.append(item)

def main():
    start_thread()
    start_listening_to_events()
I hope this makes my problem clearer, and any links to resources or comments are obviously welcome!
PS: I'm well aware that I just might not grasp threaded programming well enough for this question; if you believe so, please take some time to explain what's wrong with my reasoning.
should I use locks to ensure that the action in the subthread is atomic, and does this ensure that the main thread can't alter the list while I'm going through the list?
Yes. If you implement it correctly, yes.
Or should I instead use some kind of flag?
"some kind of flag" == lock, so you'd better use threading locks.
Important: It looks to me like you're trying to reimplement the queue module from the stdlib, you might want to take a look at it.
Besides having a bunch of interesting features, it is also thread safe:
The queue module implements multi-producer, multi-consumer queues. It is especially useful in threaded programming when information must be exchanged safely between multiple threads. The Queue class in this module implements all the required locking semantics.
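A minimal sketch of that suggestion using the stdlib queue module; the worker function, the 30-second timeout, and the demo events are illustrative, not from the original post:

import queue
import threading
import time

work = queue.Queue()          # thread-safe; no explicit lock needed

def on_event(item):
    # called from the main thread (the message-queue consumer)
    work.put(item)

def worker():
    while True:
        try:
            item = work.get(timeout=30)   # wait up to 30 s for new items
        except queue.Empty:
            continue                      # nothing arrived; check again
        print('processing', item)         # stand-in for do_operations()
        work.task_done()

threading.Thread(target=worker, daemon=True).start()

for i in range(5):
    on_event(i)
time.sleep(1)     # give the worker a moment before the demo exits

Because Queue handles its own locking, the subthread never sees a half-updated list and there is no separate flag to maintain.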
So I have the following program:
https://github.com/eWizardII/homobabel/blob/master/Experimental/demo_async_falcon.py
However, when it's run I only get two active threads; how can I make it so that more threads are running? I have tried doing things like urlv2 = birdofprey(ip2) where ip2 = str(host+1), but that just ends up sending the same thing to two threads. Any help would be appreciated.
Thanks,
An active count of 2 means that you have one of your designed threads (birdofprey) and the main thread. This is because you use a lock, so the second birdofprey thread waits for the first, and so on. I didn't dig deeper into the algorithm, but it seems that you don't need to lock the birdofprey threads, since they don't share any data (I could be wrong). If they do share data, you should make access to the shared data exclusive rather than lock the whole body of run.
Update upon comment
remove the locks (if there is no shared data; storage_i is not shared data);
in the for loop, create threads, start them, and append them to a list;
make a second loop over the list of threads, call join, and collect the information you need (see the sketch below).
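A minimal sketch of those steps, with a hypothetical birdofprey Thread subclass standing in for the one in the linked script (its constructor argument and result attribute are illustrative):

import threading

class birdofprey(threading.Thread):
    # placeholder for the real class from the linked script
    def __init__(self, ip):
        threading.Thread.__init__(self)
        self.ip = ip
        self.result = None

    def run(self):
        # no lock around the whole body; each thread works on its own data
        self.result = 'scanned %s' % self.ip

threads = []
for host in range(10):
    t = birdofprey('10.0.0.%d' % host)
    t.start()                  # start immediately; do NOT join inside this loop
    threads.append(t)

for t in threads:
    t.join()                   # now wait for all of them
    print(t.result)            # collect the information you need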
On line 75, urlv.join() blocks until the thread finishes. So you actually create one thread, wait until it's done, and then start the next. The other thread is the main thread.
I think the problem is that you need to pull urlv.join() out of the for loop. Right now, because of the join, you're waiting for your new thread to complete before starting the next one.
But for general readability, maintainability, etc., you might want to consider using Python's Queue class to set up a work queue and have a pool of worker threads pulling from it.
I wrote a python script that:
1. submits search queries
2. waits for the results
3. parses the returned results(XML)
I used the threading and Queue modules to perform this in parallel (5 workers).
It works great for the querying portion because I can submit multiple search jobs and deal with the results as they come in.
However, it appears that all my threads get bound to the same core. This is apparent when it gets to the part where it processes the XML (CPU-intensive).
Has anyone else encountered this problem? Am I missing something conceptually?
Also, i was pondering the idea of having two separate work queues, one for making the queries and one for parsing the XML. As it is now, one worker will do both in serial. I'm not sure what that will buy me, if anything. Any help is greatly appreciated.
Here is the code: (proprietary data removed)
import sys
import Queue
from threading import Thread

work_queue = Queue.Queue()
thread_list = []
# sources is built elsewhere (proprietary data removed)

def addWork(source_list):
    for item in source_list:
        #print "adding: '%s'" % (item)
        work_queue.put(item)

def doWork(thread_id):
    while 1:
        try:
            gw = work_queue.get(block=False)
        except Queue.Empty:
            #print "thread '%d' is terminating..." % (thread_id)
            sys.exit()  # no more work in the queue for this thread, die quietly
        ## Here is where I make the call to the REST API
        ## Here is where I wait for the results
        ## Here is where I parse the XML results and dump the data into a "global" dict

# MAIN
producer_thread = Thread(target=addWork, args=(sources,))
producer_thread.start()  # start the thread (i.e. call the target function)
producer_thread.join()   # wait for the thread/target function to terminate (block)

# start the consumers
for i in range(5):
    consumer_thread = Thread(target=doWork, args=(i,))
    consumer_thread.start()
    thread_list.append(consumer_thread)

for thread in thread_list:
    thread.join()
This is a byproduct of how CPython handles threads. There are endless discussions around the internet (search for GIL), but the solution is to use the multiprocessing module instead of threading. Multiprocessing is built with pretty much the same interface (and synchronization structures, so you can still use queues) as threading. It just gives every worker its own process, thus avoiding the GIL and the forced serialization of parallel workloads.
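A minimal sketch of that migration, assuming the CPU-heavy part can be expressed as a plain function; parse_xml here is a hypothetical stand-in for the REST call plus XML parsing:

import multiprocessing

def parse_xml(source):
    # hypothetical stand-in for "query the REST API, wait, parse the XML";
    # this is the CPU-bound part that benefits from separate processes
    return (source, len(str(source)))

if __name__ == '__main__':
    sources = ['host-a', 'host-b', 'host-c', 'host-d']
    with multiprocessing.Pool(processes=5) as pool:
        # Pool.map spreads the work over 5 worker processes, so the XML
        # parsing is no longer serialized by the GIL
        results = dict(pool.map(parse_xml, sources))
    print(results)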
Using CPython, your threads will never actually run in parallel in two different cores. Look up information on the Global Interpreter Lock (GIL).
Basically, there's a mutual exclusion lock protecting the actual execution part of the interpreter, so no two threads can compute in parallel. Threading for I/O tasks will work just fine, because of blocking.
Edit: If you want to take full advantage of multiple cores, you need to use multiple processes. There are a lot of articles about this topic; I'm trying to look up one I remember was great, but I can't find it.
As Nathon suggested, you can use the multiprocessing module. There are tools to help you share objects between processes (take a look at POSH, Python Object Sharing).
I'm new to Python and making some headway with threading. I'm doing some music file conversion and want to be able to utilize the multiple cores on my machine (one active conversion thread per core).
import subprocess
import threading

class EncodeThread(threading.Thread):
    # this is hacked together a bit, but should give you an idea
    def run(self):
        decode = subprocess.Popen(["flac", "--decode", "--stdout", self.src],
                                  stdout=subprocess.PIPE)
        encode = subprocess.Popen(["lame", "--quiet", "-", self.dest],
                                  stdin=decode.stdout)
        encode.communicate()

# some other code puts these threads with various src/dest pairs in a list

for proc in threads:  # `threads` is my list of `threading.Thread` objects
    proc.start()
Everything works, and all the files get encoded. Bravo! However, all the processes spawn immediately, yet I only want to run two at a time (one for each core). As soon as one is finished, I want it to move on to the next one in the list until the list is finished, then continue with the program.
How do I do this?
(I've looked at the thread pool and queue functions but I can't find a simple answer.)
Edit: maybe I should add that each of my threads is using subprocess.Popen to run a separate command line decoder (flac) piped to stdout which is fed into a command line encoder (lame/mp3).
If you want to limit the number of parallel threads, use a semaphore:
threadLimiter = threading.BoundedSemaphore(maximumNumberOfThreads)

class EncodeThread(threading.Thread):

    def run(self):
        threadLimiter.acquire()
        try:
            <your code here>
        finally:
            threadLimiter.release()
Start all threads at once. All but maximumNumberOfThreads of them will wait in threadLimiter.acquire(), and a waiting thread will only continue once another thread goes through threadLimiter.release().
"Each of my threads is using subprocess.Popen to run a separate command line [process]".
Why have a bunch of threads manage a bunch of processes? That's exactly what the OS does for you. Why micro-manage what the OS already manages?
Rather than fool around with threads overseeing processes, just fork off processes. Your process table probably can't handle 2000 processes, but it can handle a few dozen (maybe a few hundred) pretty easily.
You want to have more work queued up than your CPUs can possibly handle. The real question is one of memory -- not processes or threads. If the sum of all the active data for all the processes exceeds physical memory, then data has to be swapped, and that will slow you down.
If your processes have a fairly small memory footprint, you can have lots and lots running. If your processes have a large memory footprint, you can't have very many running.
If you're using the default "CPython" interpreter, then this won't help you, because only one thread can execute at a time; look up the Global Interpreter Lock. Instead, I'd suggest looking at the multiprocessing module in Python 2.6 -- it makes parallel programming a cinch. You can create a Pool object with 2*num_threads processes and give it a bunch of tasks to do. It will execute up to 2*num_threads tasks at a time, until all are done.
At work I have recently migrated a bunch of Python XML tools (a differ, xpath grepper, and bulk xslt transformer) to use this, and have had very nice results with two processes per processor.
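A minimal sketch of that suggestion, assuming a Pool sized at two processes per core and a hypothetical encode_file function wrapping the flac-to-lame pipeline from the question:

import multiprocessing
import subprocess

def encode_file(job):
    src, dest = job
    # decode with flac and pipe into lame, one pipeline per task
    decode = subprocess.Popen(["flac", "--decode", "--stdout", src],
                              stdout=subprocess.PIPE)
    encode = subprocess.Popen(["lame", "--quiet", "-", dest],
                              stdin=decode.stdout)
    encode.communicate()
    return dest

if __name__ == '__main__':
    jobs = [("song%d.flac" % i, "song%d.mp3" % i) for i in range(10)]
    # two tasks per core, as suggested above
    with multiprocessing.Pool(processes=2 * multiprocessing.cpu_count()) as pool:
        for finished in pool.imap_unordered(encode_file, jobs):
            print("done:", finished)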
It looks to me like what you want is a pool of some sort, and in that pool you would like to have n threads where n == the number of processors on your system. You would then have another thread whose only job was to feed jobs into a queue which the worker threads could pick up and process as they became free (so for a dual-core machine, you'd have three threads, but the main thread would be doing very little).
As you are new to Python, though, I'll assume you don't know about the GIL and its side-effects with regard to threading. If you read the article I linked, you will soon understand why traditional multithreading solutions are not always the best in the Python world. Instead you should consider using the multiprocessing module (new in Python 2.6; in 2.5 you can use this backport) to achieve the same effect. It side-steps the issue of the GIL by using multiple processes as if they were threads within the same application. There are some restrictions about how you share data (you are working in different memory spaces), but actually this is no bad thing: the restrictions just encourage good practice such as minimising the contact points between threads (or processes in this case).
In your case you are probably interested in using a pool, as specified here.
Short answer: don't use threads.
For a working example, you can look at something I've recently tossed together at work. It's a little wrapper around ssh which runs a configurable number of Popen() subprocesses. I've posted it at: Bitbucket: classh (Cluster Admin's ssh Wrapper).
As noted, I don't use threads; I just spawn off the children, loop over them calling their .poll() methods and checking for timeouts (also configurable), and replenish the pool as I gather the results. I've played with different sleep() values, and in the past I've written a version (before the subprocess module was added to Python) which used the signal module (SIGCHLD and SIGALRM) and the os.fork() and os.execve() functions, doing my own pipe and file descriptor plumbing, etc.
In my case I'm incrementally printing results as I gather them ... and remembering all of them to summarize at the end (when all the jobs have completed or been killed for exceeding the timeout).
I ran that, as posted, on a list of 25,000 internal hosts (many of which are down, retired, located internationally, not accessible to my test account etc). It completed the job in just over two hours and had no issues. (There were about 60 of them that were timeouts due to systems in degenerate/thrashing states -- proving that my timeout handling works correctly).
So I know this model works reliably. Running 100 concurrent ssh processes with this code doesn't seem to cause any noticeable impact (it's a moderately old FreeBSD box). I used to run the old (pre-subprocess) version with 100 concurrent processes on my old 512 MB laptop without problems, too.
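A minimal sketch of that poll-and-replenish approach, spawning subprocesses directly and keeping at most a fixed number alive; the placeholder sleep commands and the limit of 2 stand in for the real ssh jobs and classh's configurable concurrency:

import subprocess
import time

# placeholder commands; in the real script each job would be an ssh
# or encoder invocation
jobs = [["sleep", "1"] for _ in range(10)]
limit = 2                 # at most two concurrent children
running = []

while jobs or running:
    # replenish the pool while there is free capacity
    while jobs and len(running) < limit:
        running.append(subprocess.Popen(jobs.pop(0)))
    # reap finished children; poll() returns None while a child is still running
    running = [p for p in running if p.poll() is None]
    time.sleep(0.5)

print("all jobs finished")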
(BTW: I plan to clean this up and add features to it; feel free to contribute or to clone off your own branch of it; that's what Bitbucket.org is for).
I am not an expert in this, but I have read something about Locks. This article might help you out.
Hope this helps.
I would like to add something, just as a reference for others looking to do something similar, but who might have coded things differently from the OP. This question was the first one I came across when searching, and the chosen answer pointed me in the right direction. Just trying to give something back.
import threading
import time

maximumNumberOfThreads = 2
threadLimiter = threading.BoundedSemaphore(maximumNumberOfThreads)

def simulateThread(a, b):
    threadLimiter.acquire()
    try:
        # do some stuff
        c = a + b
        print('a + b = ', c)
        time.sleep(3)
    except NameError:  # or some other type of error
        print('some error')
    finally:
        # release in all cases (success or error); releasing in the except
        # block as well would over-release the bounded semaphore
        threadLimiter.release()

threads = []
sample = [1, 2, 3, 4, 5, 6, 7, 8, 9]
for i in range(len(sample)):
    thread = threading.Thread(target=simulateThread, args=(sample[i], 2))
    thread.daemon = True
    threads.append(thread)
    thread.start()

for thread in threads:
    thread.join()
This basically follows what you will find on this site:
https://www.kite.com/python/docs/threading.BoundedSemaphore