I need it to open 10 processes, and each time one of them finishes I want to wait a few seconds and start another one.
It seems pretty simple, but somehow I can't get it to work.
I'm not 100% clear on what you're trying to accomplish, but have you looked at the multiprocessing module, specifically using a pool of workers?
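For instance, a minimal sketch of that approach; job and the three-second pause are stand-ins for the asker's real work and delay:

import time
from multiprocessing import Pool

def job(n):
    # placeholder for the real work each process should do
    return n * n

def job_with_pause(n):
    # each worker pauses a few seconds after finishing its task,
    # before the pool hands it the next one
    result = job(n)
    time.sleep(3)
    return result

if __name__ == '__main__':
    with Pool(processes=10) as pool:  # ten worker processes
        results = pool.map(job_with_pause, range(100))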
I've done this same thing to process web statistics using a semaphore. Essentially, as processes are created, the semaphore is incremented; when they exit, it's decremented. Process creation blocks whenever the semaphore is exhausted.
This actually fires off threads, which in turn run the external processes a little further down the execution path.
Here's an example.
thread_sem = threading.Semaphore(int(cfg.maxthreads))
for k, v in log_data.items():
    thread_list.append(ProcessorThread(int(k), v, thread_sem))
    thread_list[-1].start()
And then in the constructor for ProcessorThread, I do this:
def __init__(self, siteid, data, lock_object):
    threading.Thread.__init__(self)
    self.setDaemon(False)
    self.lock_object = lock_object
    self.data = data
    self.siteid = siteid
    self.lock_object.acquire()
When the thread finishes its task (whether successfully or not), the lock_object is released, which allows another process to begin.
HTH
Hi all, I'm really new to Python and I'm facing a task which I can't completely grasp.
I've created an interface with Tkinter which should accomplish a couple of apparently easy feats.
By clicking a "Start" button two threads/processes will be started (each calling multiple subfunctions) which mainly read data from a serial port (one port per process, of course) and write them to file.
The I/O actions are looped within a while loop with a very high counter to allow them to go onward almost indefinitely.
The "Stop" button should stop the acquisition and essentially it should:
Kill the read/write Thread
Close the file
Close the serial port
Unfortunately I still do not understand how to accomplish point 1, i.e. how to create killable threads without killing the whole GUI. Is there any way of doing this?
Thank you all!
First, you have to choose whether you are going to use threads or processes.
I will not go too much into the differences; google it ;) Anyway, here are some things to consider: it is much easier to establish communication between threads than between processes; in Python, all threads will run on the same CPU core (see Python GIL), but subprocesses may use multiple cores.
Processes
If you are using subprocesses, there are two ways: subprocess.Popen and multiprocessing.Process. With Popen you can run anything, whereas Process gives a simpler, thread-like interface for running Python code which is part of your project in a subprocess.
Both can be killed using the terminate method.
See the documentation for multiprocessing and subprocess.
Of course, if you want a more graceful exit, you will want to send an "exit" message to the subprocess rather than just terminate it, so that it gets a chance to do its clean-up. You could do that e.g. by writing to its stdin. The process should read from stdin, and when it gets the message "exit", do whatever clean-up you need before exiting.
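A rough sketch of that idea, assuming Python 3; worker.py is a hypothetical child script:

import subprocess
import sys

# parent side: start the child with a pipe to its stdin
child = subprocess.Popen([sys.executable, 'worker.py'],
                         stdin=subprocess.PIPE)

# ... later, e.g. when the Stop button is clicked:
child.stdin.write(b'exit\n')
child.stdin.flush()
try:
    child.wait(timeout=5)    # let it do its clean-up
except subprocess.TimeoutExpired:
    child.terminate()        # it did not listen; be less polite

# the child (worker.py) would contain something like:
#
#     import sys
#     for line in sys.stdin:
#         if line.strip() == 'exit':
#             # close files, serial ports, etc.
#             break
#
# in a real worker you would poll stdin between work iterations
# (e.g. with select) rather than blocking on it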
Threads
For threads, you have to implement your own mechanism for stopping, rather than using something as violent as process.terminate().
Usually, a thread runs in a loop and in that loop you check for a flag which says stop. Then you break from the loop.
I usually have something like this:
import threading
from threading import Thread

SLEEP_TIME = 1.0  # seconds between checks of the stop flag; pick your own

class MyThread(Thread):
    def __init__(self):
        super(MyThread, self).__init__()
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            # do something
            self._stop_event.wait(SLEEP_TIME)
        # clean-up before exit

    def stop(self, timeout):
        self._stop_event.set()
        self.join(timeout)
Of course, you need some exception handling etc, but this is the basic idea.
EDIT: Answers to questions in the comments
thread.start_new_thread(your_function) starts a new thread, that is correct. On the other hand, the threading module gives you a higher-level API, which is much nicer.
With the threading module, you can do the same with:
t = threading.Thread(target=your_function)
t.start()
or you can make your own class which inherits from Thread and put your functionality in the run method, as in the example above. Then, when user clicks the start button, you do:
t = MyThread()
t.start()
You should store the t variable somewhere. Exactly where depends on how you designed the rest of your application. I would probably have some object which holds all active threads in a list.
When user clicks stop, you should:
t.stop(some_reasonable_time_in_which_the_thread_should_stop)
After that, you can remove t from your list; it is not usable any more.
First you can use subprocess.Popen() to spawn child processes, then later you can use Popen.terminate() to terminate them.
Note that you could also do everything in a single Python thread, without subprocesses, if you want to. It's perfectly possible to "multiplex" reading from multiple ports in a single event loop.
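A rough sketch of such a loop, assuming pyserial on a POSIX system (where serial ports are file descriptors that select() can watch); the port names, baud rate, and file names are made up:

import select
import serial  # pyserial

ports = [serial.Serial('/dev/ttyUSB0', 9600, timeout=0),
         serial.Serial('/dev/ttyUSB1', 9600, timeout=0)]
logs = [open('port0.log', 'wb'), open('port1.log', 'wb')]

running = True  # the Stop button would set this to False
while running:
    # wait up to 100 ms for any port to become readable
    readable, _, _ = select.select(ports, [], [], 0.1)
    for port in readable:
        data = port.read(port.in_waiting or 1)
        logs[ports.index(port)].write(data)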
I am new to daemons and I was wondering how I can make my main script a daemon.
I have my main script which I wish to make a daemon and run in the background:
main.py
def requestData(information):
    return currently_crunched_data()

while True:
    crunchData()
I would like to be able to call the requestData function on this daemon while the loop is running. I am not too familiar with daemons or how to convert my script into one.
However, I am guessing I would have to make two threads: one for my crunchData loop and one for the daemon request receiver, since the daemon has its own loop (daemon.requestLoop()).
I am currently looking into Pyro to do this. Does anyone know how I can ultimately make a background-running while loop have the ability to receive requests from other processes (like a daemon, I suppose)?
There are already a number of questions on creating a daemon in Python, like this one, which answer that part nicely.
So, how do you have your daemon do background work?
As you suspected, threads are an obvious answer. But there are three possible complexities.
First, there's shutdown. If you're lucky, your crunchData function can be summarily killed at any time with no corrupted data or (too-significant) lost work. In that case:
def worker():
    while True:
        crunchData()

# ... somewhere in the daemon startup code ...
t = threading.Thread(target=worker)
t.daemon = True
t.start()
Notice the t.daemon = True. A "daemon thread" has nothing to do with your program being a daemon; it means that you can just quit the main process, and it will be summarily killed.
But what if crunchData can't be killed? Then you'll need to do something like this:
quitflag = False
quitlock = threading.Lock()

def worker():
    while True:
        with quitlock:
            if quitflag:
                return
        crunchData()

# ... somewhere in the daemon startup code ...
t = threading.Thread(target=worker)
t.start()

# ... somewhere in the daemon shutdown code ...
with quitlock:
    quitflag = True
t.join()
I'm assuming each iteration of crunchData doesn't take that long. If it does, you may need to check quitflag periodically within the function itself.
Meanwhile, you want your request handler to access some data that the background thread is producing. You'll need some kind of synchronization there as well.
The obvious thing is to just use another Lock. But there's a good chance that crunchData is writing to its data frequently. If it holds the lock for 10 seconds at a time, the request handler may block for 10 seconds. But if it grabs and releases the lock a million times, that could take longer than the actual work.
One alternative is to double-buffer your data: Have crunchData write into a new copy, then, when it's done, briefly grab the lock and set currentData = newData.
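A minimal sketch of that double-buffering idea, assuming crunchData returns the freshly built data rather than mutating it in place:

import threading

datalock = threading.Lock()
currentData = None  # what the request handler reads

def worker():
    global currentData
    while True:
        newData = crunchData()  # build the new copy; no lock held
        with datalock:          # lock held only for the pointer swap
            currentData = newData

def handle_request():
    with datalock:
        snapshot = currentData  # grab the reference and get out fast
    return snapshot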
Depending on your use case, a Queue, a file, or something else might be even simpler.
Finally, crunchData is presumably doing a lot of CPU work. You need to make sure that the request handler does very little CPU work, or each request will slow things down quite a bit as the two threads fight over the GIL. Usually this is no problem. If it is, use a multiprocessing.Process instead of a Thread (which makes sharing or passing the data between the two processes slightly more complicated, but still not too bad).
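The Process version can mirror the quit-flag thread above, with a Queue to carry results back; a sketch, assuming crunchData is importable in the child process:

import multiprocessing

def worker(outq, quitflag):
    while not quitflag.is_set():
        outq.put(crunchData())  # ship each result back to the daemon

# ... somewhere in the daemon startup code ...
outq = multiprocessing.Queue()
quitflag = multiprocessing.Event()
p = multiprocessing.Process(target=worker, args=(outq, quitflag))
p.start()

# ... somewhere in the daemon shutdown code ...
quitflag.set()
while not outq.empty():  # drain, or unflushed items can keep
    outq.get()           # the child from exiting cleanly
p.join()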
Short question: is it possible to have N worker processes and a balancer process that will find a worker that is idle at that moment and pass a UnitOfWork to it?
Long question:
Imagine a class like this, which will be subclassed for certain tasks:
class UnitOfWork:
    def __init__(self, **some_starting_parameters):
        pass

    def init(self):
        # open connections, etc.
        pass

    def run(self):
        # do the job
        pass
Start the balancer and the worker processes:
balancer = LoadBalancer()
workers = balancer.spawn_workers(10)
Deploy work (the balancer should find an idle worker and pass the task to it, or, if every worker is busy, queue the UOW and wait for a free worker):
balancer.work(UnitOfWork(some=parameters))
# internally: find a free worker, pass the UOW to it, uow.init() + uow.run()
Is this possible (or is it crazy)?
PS I'm familiar with the multiprocessing Process class and with process pools, but:
Every Process instance starts a process (yep :) ), and I want a fixed number of workers
I want a Process instance that can do generic work
I suggest you take a look at multiprocessing.Pool() because I believe it exactly solves your problem. It runs N "worker processes" and as each worker finishes a task, another task is provided. And there is no need for "poison pills"; it is very simple.
I have always used the .map() method on the pool.
Python multiprocessing.Pool: when to use apply, apply_async or map?
EDIT: Here is an answer I wrote to another question, and I used multiprocessing.Pool() in my answer.
Parallel file matching, Python
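In this question's terms, the shape might be something like the sketch below; param_dicts is a hypothetical list of constructor arguments, and everything passed through the pool must be picklable:

from multiprocessing import Pool

def do_unit_of_work(params):
    uow = UnitOfWork(**params)
    uow.init()
    return uow.run()

if __name__ == '__main__':
    with Pool(processes=10) as pool:  # the fixed number of workers
        results = pool.map(do_unit_of_work, param_dicts)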
You don't need any smarts in the balancer; the Queue alone will do what you want. Throw each unit of work into the queue, and have the workers loop, taking a single work unit from the queue and processing it on each iteration. I don't think there's any problem passing an instance of UnitOfWork through the queue.
If you have a fixed amount of work to be done, you can create a "no more work to be done" work unit (a "poison pill") that tells a worker to shut down, and after all the regular work is put into the queue, put as many poison pills into the queue as you have workers.
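A sketch of that queue-plus-poison-pill arrangement; units_of_work is a hypothetical iterable of UnitOfWork instances:

import multiprocessing

POISON = None  # sentinel meaning "no more work"

def worker(q):
    while True:
        uow = q.get()
        if uow is POISON:
            break
        uow.init()
        uow.run()

if __name__ == '__main__':
    q = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=worker, args=(q,))
               for _ in range(10)]
    for w in workers:
        w.start()
    for uow in units_of_work:
        q.put(uow)
    for _ in workers:  # one pill per worker
        q.put(POISON)
    for w in workers:
        w.join()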
I want to upload a file to two FTP sites. After both finish, I need to delete the file.
To avoid blocking the main function, both the FTP and deletion functions will be implemented with threading, which means there will be three threads running in the background simultaneously. An easy problem becomes complicated because of threading.
Following are possible solutions:
Use a queue and put all three threads in order
Use mutex
Both work, but I don't think they are the best way to do that. Can anyone share his/her idea?
Use a semaphore.
In your parent handler create a semaphore of 0 and hand it off to the threads.
import threading

sem = threading.Semaphore(0)

hostThread = threading.Thread(target=uploadToHost, args=(sem,))
backupThread = threading.Thread(target=uploadToBackup, args=(sem,))
hostThread.start()
backupThread.start()

sem.acquire()  # Wait for one of them to finish
sem.acquire()  # Wait for the other to finish
# both uploads are done; the file can be deleted now
In your children, you'll just have to call sem.release() once the upload is finished.
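For example, each child might look like this (a sketch; the FTP details are elided):

def uploadToHost(sem):
    try:
        pass  # ... do the FTP upload ...
    finally:
        sem.release()  # signal the parent even if the upload failed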
I'm new to Python and making some headway with threading. I'm doing some music file conversion and want to be able to utilize the multiple cores on my machine (one active conversion thread per core).
class EncodeThread(threading.Thread):
    # this is hacked together a bit, but should give you an idea
    def run(self):
        decode = subprocess.Popen(["flac", "--decode", "--stdout", self.src],
                                  stdout=subprocess.PIPE)
        encode = subprocess.Popen(["lame", "--quiet", "-", self.dest],
                                  stdin=decode.stdout)
        encode.communicate()

# some other code puts these threads with various src/dest pairs in a list

for proc in threads:  # `threads` is my list of `threading.Thread` objects
    proc.start()
Everything works, all the files get encoded, bravo! ... however, all the processes spawn immediately, yet I only want to run two at a time (one for each core). As soon as one is finished, I want it to move on to the next on the list until it is finished, then continue with the program.
How do I do this?
(I've looked at the thread pool and queue functions but I can't find a simple answer.)
Edit: maybe I should add that each of my threads is using subprocess.Popen to run a separate command line decoder (flac) piped to stdout which is fed into a command line encoder (lame/mp3).
If you want to limit the number of parallel threads, use a semaphore:
threadLimiter = threading.BoundedSemaphore(maximumNumberOfThreads)

class EncodeThread(threading.Thread):
    def run(self):
        threadLimiter.acquire()
        try:
            <your code here>
        finally:
            threadLimiter.release()
Start all threads at once. All but maximumNumberOfThreads will wait in threadLimiter.acquire() and a waiting thread will only continue once another thread goes through threadLimiter.release().
"Each of my threads is using subprocess.Popen to run a separate command line [process]".
Why have a bunch of threads manage a bunch of processes? That's exactly what the OS does for you. Why micro-manage what the OS already manages?
Rather than fool around with threads overseeing processes, just fork off processes. Your process table probably can't handle 2000 processes, but it can handle a few dozen (maybe a few hundred) pretty easily.
You want to have more work queued up than your CPUs can possibly handle. The real question is one of memory -- not processes or threads. If the sum of all the active data for all the processes exceeds physical memory, then data has to be swapped, and that will slow you down.
If your processes have a fairly small memory footprint, you can have lots and lots running. If your processes have a large memory footprint, you can't have very many running.
If you're using the default "cpython" interpreter then threads won't help you, because only one thread can execute at a time; look up the Global Interpreter Lock. Instead, I'd suggest looking at the multiprocessing module in Python 2.6 -- it makes parallel programming a cinch. You can create a Pool object with 2*num_threads processes, and give it a bunch of tasks to do. It will execute up to 2*num_threads tasks at a time, until all are done.
At work I have recently migrated a bunch of Python XML tools (a differ, xpath grepper, and bulk xslt transformer) to use this, and have had very nice results with two processes per processor.
It looks to me that what you want is a pool of some sort, and in that pool you would like to have n threads, where n == the number of processors on your system. You would then have another thread whose only job is to feed jobs into a queue which the worker threads pick up and process as they become free (so for a dual-core machine, you'd have three threads, but the main thread would be doing very little).
As you are new to Python, though, I'll assume you don't know about the GIL and its side-effects with regard to threading. If you read the article I linked, you will soon understand why traditional multithreading solutions are not always the best in the Python world. Instead you should consider using the multiprocessing module (new in Python 2.6; in 2.5 you can use this backport) to achieve the same effect. It side-steps the issue of the GIL by using multiple processes as if they were threads within the same application. There are some restrictions about how you share data (you are working in different memory spaces), but actually this is no bad thing: it just encourages good practice such as minimising the contact points between threads (or processes in this case).
In your case you are probably interested in using a pool, as specified here.
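For completeness, here is a sketch of the queue-fed pool described above, using plain threads; since the heavy lifting happens in external flac/lame subprocesses, the threads mostly sit waiting and the GIL is not actually a bottleneck in this particular case. encode and src_dest_pairs are stand-ins for the asker's pipeline and job list:

import queue
import threading

jobs = queue.Queue()

def worker():
    while True:
        item = jobs.get()
        if item is None:  # sentinel: time to shut down
            break
        encode(*item)     # the flac -> lame pipeline
        jobs.task_done()

num_workers = 2  # one per core
threads = [threading.Thread(target=worker) for _ in range(num_workers)]
for t in threads:
    t.start()

for pair in src_dest_pairs:  # list of (src, dest) tuples
    jobs.put(pair)
for _ in threads:
    jobs.put(None)
for t in threads:
    t.join()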
Short answer: don't use threads.
For a working example, you can look at something I've recently tossed together at work. It's a little wrapper around ssh which runs a configurable number of Popen() subprocesses. I've posted it at: Bitbucket: classh (Cluster Admin's ssh Wrapper).
As noted, I don't use threads; I just spawn off the children, loop over them calling their .poll() methods, checking for timeouts (also configurable), and replenish the pool as I gather the results. I've played with different sleep() values, and in the past I've written a version (before the subprocess module was added to Python) which used the signal module (SIGCHLD and SIGALRM) and the os.fork() and os.execve() functions, with my own pipe and file descriptor plumbing, etc.
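A rough sketch of that poll-and-replenish loop (not the actual classh code):

import subprocess
import time

def run_pool(commands, max_procs=100):
    # keep up to max_procs children running until all commands finish
    pending = list(commands)  # each command is an argv list
    running = []
    while pending or running:
        # top the pool back up
        while pending and len(running) < max_procs:
            running.append(subprocess.Popen(pending.pop(0)))
        # reap whatever has finished
        running = [p for p in running if p.poll() is None]
        time.sleep(0.2)  # don't spin; worth tuning, as noted above
    # timeout handling and result gathering omitted for brevity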
In my case I'm incrementally printing results as I gather them ... and remembering all of them to summarize at the end (when all the jobs have completed or been killed for exceeding the timeout).
I ran that, as posted, on a list of 25,000 internal hosts (many of which are down, retired, located internationally, not accessible to my test account, etc.). It completed the job in just over two hours and had no issues. (There were about 60 of them that were timeouts due to systems in degenerate/thrashing states -- proving that my timeout handling works correctly.)
So I know this model works reliably. Running 100 concurrent ssh processes with this code doesn't seem to cause any noticeable impact. (It's a moderately old FreeBSD box.) I used to run the old (pre-subprocess) version with 100 concurrent processes on my old 512MB laptop without problems, too.
(BTW: I plan to clean this up and add features to it; feel free to contribute or to clone off your own branch of it; that's what Bitbucket.org is for).
I am not an expert in this, but I have read something about "Lock"s. This article might help you out.
Hope this helps
I would like to add something, just as a reference for others looking to do something similar, but who might have coded things different from the OP. This question was the first one I came across when searching and the chosen answer pointed me in the right direction. Just trying to give something back.
import threading
import time

maximumNumberOfThreads = 2
threadLimiter = threading.BoundedSemaphore(maximumNumberOfThreads)

def simulateThread(a, b):
    threadLimiter.acquire()
    try:
        # do some stuff
        c = a + b
        print('a + b = ', c)
        time.sleep(3)
    except NameError:  # or some other type of error
        print('some error')
    finally:
        # release whether or not an error occurred; releasing in both the
        # except and finally blocks would raise ValueError, because a
        # BoundedSemaphore cannot be released more times than it was acquired
        threadLimiter.release()

threads = []
sample = [1, 2, 3, 4, 5, 6, 7, 8, 9]
for i in range(len(sample)):
    thread = threading.Thread(target=simulateThread, args=(sample[i], 2))
    thread.daemon = True
    threads.append(thread)
    thread.start()

for thread in threads:
    thread.join()
This basically follows what you will find on this site:
https://www.kite.com/python/docs/threading.BoundedSemaphore