Python: Can you only terminate processes in a pool via pool.terminate()?

I am trying to terminate the processes belonging to a pool. The pool processes are carrying out calculations, and a stop button in a GUI should end these calculations.
The simple way to do this seems to be calling pool.terminate(). That option isn't available to me because I don't have access to the pool variable in my scope; it was created in a file that I'd rather not edit.
I tried terminating the processes by process ID instead. I get the pids from the list returned by active_children(), but it seems that os.kill has no effect, as all the processes are still there. Where did I go wrong, and how can I solve this? I'd appreciate any help.
Below is a minimal, reproducible example. Also, if my post indicates an obvious lack of knowledge, that's probably true and I apologize. Thank you.
from multiprocessing import Pool
from multiprocessing import active_children
import os, signal

if __name__ == '__main__':
    pool = Pool()
    print(active_children())
    for process in active_children():
        pid = process.pid
        os.kill(pid, signal.SIGTERM)
    print(active_children())  # same output as previous print statement
    pool.terminate()
    print(active_children())  # returns an empty list
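For illustration, here is a minimal sketch of why the kill appears to have no effect (an assumption based on the default Pool behavior, whose internal maintenance thread replaces workers that die): the worker count stays the same even though the pids change.

import os, signal, time
from multiprocessing import Pool, active_children

if __name__ == '__main__':
    pool = Pool()
    time.sleep(1)  # give the workers time to start
    before = {p.pid for p in active_children()}
    for pid in before:
        os.kill(pid, signal.SIGTERM)
    time.sleep(1)  # give the pool time to replace the killed workers
    after = {p.pid for p in active_children()}
    print(before)             # original worker pids
    print(after)              # usually the same count, but new pids
    pool.terminate()          # the supported way to shut the pool down
    pool.join()
    print(active_children())  # should now be empty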

Related

Python multiprocessing map using with statement does not stop

I am using the Python multiprocessing module to run parallel, unrelated jobs with a function similar to the following example:
import numpy as np
from multiprocessing import Pool

def myFunction(arg1):
    name = "file_%s.npy" % arg1
    A = np.load(arg1)
    A[A < 0] = np.nan
    np.save(arg1, A)

if(__name__ == "__main__"):
    N = list(range(50))
    with Pool(4) as p:
        p.map_async(myFunction, N)
        p.close()  # I tried with and without that statement
        p.join()   # I tried with and without that statement
    DoOtherStuff()
My problem is that the function DoOtherStuff is never executed; the processes switch to sleep mode in top, and I have to kill the program with Ctrl+C to stop it.
Any suggestions?
You have at least a couple of problems. First, you are using map_async(), which does not block until the results of the task are complete. So what you're doing is starting the task with map_async(), but then immediately closing and terminating the pool (the with statement calls Pool.terminate() upon exiting).
When you add tasks to a process pool with methods like map_async(), they go onto a task queue, which is handled by a worker thread that takes tasks off the queue and farms them out to worker processes, possibly spawning new processes as needed (actually there is a separate thread that handles that).
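To make that concrete, here is a small sketch (not from the original answer) showing that exiting the with-block can terminate the pool before any task has even been dispatched:

from multiprocessing import Pool
import time

def work(i):
    print('started', i)
    time.sleep(1)

if __name__ == '__main__':
    with Pool(4) as p:
        p.map_async(work, range(8))
    # Exiting the with-block calls terminate(); depending on timing this
    # may print nothing at all, or only a few "started" lines.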
The point is, you have a race condition where you're terminating the pool, likely before any tasks have even started. If you want your script to block until all the tasks are done, just use map() instead of map_async(). For example, I rewrote your script like this:
import numpy as np
from multiprocessing import Pool

def myFunction(N):
    A = np.load(f'file_{N:02}.npy')
    A[A < 0] = np.nan
    np.save(f'file2_{N:02}.npy', A)

def DoOtherStuff():
    print('done')

if __name__ == "__main__":
    N = range(50)
    with Pool(4) as p:
        p.map(myFunction, N)
    DoOtherStuff()
I don't know what your use case is exactly, but if you do want to use map_async(), so that this task can run in the background while you do other stuff, you have to leave the Pool open, and manage the AsyncResult object returned by map_async():
result = pool.map_async(myFunction, N)
DoOtherStuff()
# Is my map done yet? If not, we should still block until
# it finishes before ending the process
result.wait()
pool.close()
pool.join()
You can see more examples in the linked documentation.
I don't know why your attempt resulted in a deadlock; I was not able to reproduce that. It's possible there was a bug at some point that was later fixed, though you were also possibly invoking undefined behavior with your race condition, as well as calling terminate() on a pool after it had already been join()ed. As for why your attempt did anything at all, it's possible that with the multiple calls to apply_async() you managed to skirt around the race condition somewhat, but this is not at all guaranteed to work.

How to kill a Threading pool from parent?

I have a threading class with the following run function.
When this class is set to run, it keeps checking a multiprocessing manager queue; if there is anything inside it, it starts the pool to run the job (the track function). Upon completion of the job, the pool closes automatically and the queue-not-empty check starts over.
def runQueue(self):
    print("The current thread is", threading.currentThread().getName())
    while True:
        time.sleep(1)
        self.pstate = False
        if self.runStop:  # this stops the whole threading by dropping the main loop
            break
        while not self.tasks.empty():
            self.pstate = True
            task = self.tasks.get()
            with ThreadPool(processes=1) as p:  # <- want to kill this pool
                ans = p.apply(self.track, args=(task,))
                self.queueSend(ans)
                self.tasks.task_done()
                print("finished job")
I used the pool because the function returns a value which I need. What I am looking for is a way such that, upon some call from the parent, the pool closes by dropping the job, while keeping the primary class thread (the run function's main loop) running.
Any kind of help is appreciated.
I found that in my case pool.terminate would only work for I/O-bound jobs. I did find some solutions online which were not related to the pool but which I could adapt.
One solution is to run the job as a multiprocessing process and then call process.terminate(), or to use a multiprocessing Pool and then call pool.terminate().
Note that multiprocessing is faster for CPU-intensive tasks; if the tasks are I/O-intensive, threads are the better solution.
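A standalone sketch of that first approach (not the asker's exact class; track below is just a stand-in for the real job): the work runs in its own process, the result comes back through a queue, and the parent can drop the job at any time with terminate().

import multiprocessing
import time

def track(task, result_q):
    time.sleep(10)           # stand-in for the real long-running job
    result_q.put(task * 2)   # send the answer back to the parent

if __name__ == '__main__':
    result_q = multiprocessing.Queue()
    proc = multiprocessing.Process(target=track, args=(21, result_q))
    proc.start()

    time.sleep(1)            # e.g. the parent decides to cancel here
    proc.terminate()         # drops the job immediately
    proc.join()
    print(result_q.empty())  # True: the job never produced a result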
The only way I found to kill the thread itself is using the win32 ctypes module. If you start a thread and get its tid with tid = thread.ident, you can then pass that tid into the kill_thread(tid) function below:
import ctypes

w32 = ctypes.windll.kernel32
THREAD_TERMINATE = 1

def kill_thread(tid):
    handle = w32.OpenThread(THREAD_TERMINATE, False, tid)
    result = w32.TerminateThread(handle, 0)
    w32.CloseHandle(handle)
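For context, a hypothetical usage sketch (Windows only, since it relies on kernel32; forcibly terminating a thread this way can leave locks and other state behind, so use it with care):

import threading, time

def worker():
    while True:
        time.sleep(1)  # stand-in for the long-running job

t = threading.Thread(target=worker, daemon=True)  # daemon, so interpreter exit is not blocked
t.start()
kill_thread(t.ident)  # t.ident is the tid that OpenThread expects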
Hope this helps someone.

Running Python on multiple cores

I have created a (rather large) program that takes quite a long time to finish, and I started looking into ways to speed up the program.
I found that if I open task manager while the program is running only one core is being used.
After some research, I found this website:
Why does multiprocessing use only a single core after I import numpy? which gives a solution of os.system("taskset -p 0xff %d" % os.getpid()),
however this doesn't work for me, and my program continues to run on a single core.
I then found this:
is python capable of running on multiple cores?,
which pointed towards using multiprocessing.
So after looking into multiprocessing, I came across the documentation on how to use it: https://docs.python.org/3/library/multiprocessing.html#examples
I tried the code:
from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()

a = input("Finished")
After running the code (not in IDLE), it said this:
Finished
hello bob
Finished
Note: after it said Finished the first time, I pressed Enter.
So after this I am even more confused, and I have two questions.
First: it still doesn't run on multiple cores (I have an 8-core Intel i7).
Second: why does it ask for input with "Finished" before it has even run the code in the if statement (and it isn't even finished yet)?
To answer your second question first, "Finished" is printed to the terminal because a = input("Finished") is outside of your if __name__ == '__main__': block. It is a module-level statement, so it runs every time the module is loaded, including when the spawned child process imports the module, which is why you see "Finished" before "hello bob".
To answer the first question, you only created one process, which you run and then wait for to complete before continuing. That gives you zero benefit from multiprocessing and incurs the overhead of creating the new process.
Because you want to create several processes, you need to keep them in a collection of some sort (e.g. a Python list) and start all of them.
In practice, you need to be concerned with more than the number of processors (such as the amount of available memory, the ability to restart workers that crash, etc.). However, here is a simple example that completes your task above.
import datetime as dt
from multiprocessing import Process, current_process
import sys

def f(name):
    print('{}: hello {} from {}'.format(
        dt.datetime.now(), name, current_process().name))
    sys.stdout.flush()

if __name__ == '__main__':
    worker_count = 8
    worker_pool = []
    for _ in range(worker_count):
        p = Process(target=f, args=('bob',))
        p.start()
        worker_pool.append(p)
    for p in worker_pool:
        p.join()  # Wait for all of the workers to finish.

    # Allow time to view results before program terminates.
    a = input("Finished")  # raw_input(...) in Python 2.
Also note that if you join workers immediately after starting them, you are waiting for each worker to complete its task before starting the next worker. This is generally undesirable unless the ordering of the tasks must be sequential.
Typically Wrong
worker_1.start()
worker_1.join()
worker_2.start() # Must wait for worker_1 to complete before starting worker_2.
worker_2.join()
Usually Desired
worker_1.start()
worker_2.start() # Start all workers.
worker_1.join()
worker_2.join() # Wait for all workers to finish.
For more information, please refer to the following links:
https://docs.python.org/3/library/multiprocessing.html
Dead simple example of using Multiprocessing Queue, Pool and Locking
https://pymotw.com/2/multiprocessing/basics.html
https://pymotw.com/2/multiprocessing/communication.html
https://pymotw.com/2/multiprocessing/mapreduce.html

Spawn few parallel processes and kill them after finish

I need to make a script which, on some condition, spawns a parallel process (worker) and makes it do some I/O job, and closes that process when it is finished.
But it looks like the processes do not exit by default.
Here is my approach:
import multiprocessing
from time import sleep

pool = multiprocessing.Pool(4)

def f(x):
    sleep(10)
    print(x)
    return True

r = pool.map_async(f, [1,2,3,4,5,6,7,8,9,10])
But if I run it in ipython and wait for all the prints, I can afterwards run ps aux | grep ipython and see a lot of processes. So it looks like these workers are still alive.
Maybe I'm doing something wrong, but how can I make these processes terminate when they finish their task? And what approach should I use if I want to spawn a lot of workers one by one (triggered by some RMQ message, for example)?
Pool spawns worker processes when you declare the pool. They do not get killed until the pool is shut down. Instead, they wait there for more work to appear in the queue.
If you change your code to:
r = pool.map_async(f, [1,2,3,4,5,6,7,8,9,10])
pool.close()
pool.join()
print("check ps ax now")
sleep(10)
you will see the pool processes have disappeared.
Another thing: your program might not work as intended because you declare function f after you declare your pool. I had to move pool = multiprocessing.Pool(4) to follow the declaration of f, but this may vary between Python versions. Anyway, if you get odd "module has no attribute" exceptions, this is the reason.
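For reference, a minimal sketch of the same script with that ordering fixed and an explicit shutdown (only the layout is changed; the inputs are the ones from the question):

import multiprocessing
from time import sleep

def f(x):
    sleep(10)
    print(x)
    return True

if __name__ == '__main__':
    pool = multiprocessing.Pool(4)  # created after f is defined
    r = pool.map_async(f, [1,2,3,4,5,6,7,8,9,10])
    pool.close()  # no more tasks will be submitted
    pool.join()   # wait for the workers to finish; they exit afterwards
    print("check ps ax now")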
Hannu

Python's semaphore hangs forever

I'm trying to do things concurrently in my program and to throttle the number of processes open at the same time (10).
from multiprocessing import Process
from threading import BoundedSemaphore

semaphore = BoundedSemaphore(10)

for x in xrange(100000):
    semaphore.acquire(blocking=True)
    print 'new'
    p = Process(target=f, args=(x,))
    p.start()

def f(x):
    ...  # do some work
    semaphore.release()
    print 'done'
The first 10 processes are launched and they end correctly (I see 10 "new" and 10 "done" on the console), and then nothing. I don't see another "new"; the program just hangs there (and Ctrl-C doesn't work either). What's wrong?
Your problem is the use of threading.BoundedSemaphore across process boundaries:
import threading
import multiprocessing
import time

semaphore = threading.BoundedSemaphore(10)

def f(x):
    semaphore.release()
    print('done')

semaphore.acquire(blocking=True)
print('new')
print(semaphore._value)

p = multiprocessing.Process(target=f, args=(100,))
p.start()
time.sleep(3)
print(semaphore._value)
When you create a new process, the child gets a copy of the parent process's memory. Thus the child is operating on its own copy of the semaphore, and the semaphore in the parent is untouched. (Typically, processes are isolated from each other; it takes some extra work to communicate across processes, which is what multiprocessing is for.)
This is in contrast to threads, which share the memory space of their process.
multiprocessing.BoundedSemaphore is probably what you want. (If you replace threading.BoundedSemaphore with it, and replace semaphore._value with semaphore.get_value(), you'll see the above's output change.)
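Here is a minimal sketch of the original throttling loop using a semaphore that is actually shared across processes (written in Python 3; the semaphore is passed to each worker explicitly so it also works with the spawn start method):

import multiprocessing

def f(x, semaphore):
    try:
        pass  # do some work here
    finally:
        semaphore.release()
        print('done')

if __name__ == '__main__':
    semaphore = multiprocessing.BoundedSemaphore(10)
    for x in range(100000):
        semaphore.acquire()  # blocks while 10 workers are already running
        print('new')
        p = multiprocessing.Process(target=f, args=(x, semaphore))
        p.start()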
Your bounded semaphore is not shared properly between the various processes which are being spawned; you might want to switch to using multiprocessing.BoundedSemaphore. See the answers to this question for some more details.
