Killing a process launched from a process that has ended

I am trying to kill a process in Python that is being launched from another process, and I am unable to find the correct place to put my ".terminate()" call.
To explain myself better I will post some example code:
from multiprocessing import Process
import time

def function():
    print "Here is where I am creating the function I need to kill"
    ProcessToKill = Process(target = killMe)
    ProcessToKill.start()

def killMe():
    while True:
        print "kill me"
        time.sleep(0.5)

if __name__ == '__main__':
    Process1 = Process(target = function)
    Process1.start()
My question is, where can I place ProcessToKill.terminate(), ideally without having to change the overall structure of the code?

You can hold onto the ProcessToKill object so that you can kill it later:
from multiprocessing import Process
import time

def function():
    print "Here is where I am creating the function I need to kill"
    ProcessToKill = Process(target = killMe)
    ProcessToKill.start()
    return ProcessToKill

def killMe():
    while True:
        print "kill me"
        time.sleep(0.5)

if __name__ == '__main__':
    Process1 = function()
    time.sleep(5)
    Process1.terminate()
Here I've removed your wrapping of function in another Process object because, for this example, it seems redundant; but you should be able to do the same thing with a Process that runs another Process.
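If you do want to keep the nested structure, one option (a sketch of my own, not part of the answer above; it assumes os.kill with SIGTERM is available on your platform) is to have the middle process report the inner process's PID back through a Queue, so the main process can terminate it later:

import os
import signal
import time
from multiprocessing import Process, Queue

def killMe():
    while True:
        print("kill me")
        time.sleep(0.5)

def function(pid_queue):
    print("Here is where I am creating the function I need to kill")
    ProcessToKill = Process(target=killMe)
    ProcessToKill.start()
    pid_queue.put(ProcessToKill.pid)  # report the inner process's PID to the main process
    ProcessToKill.join()              # keep this middle process around in the meantime

if __name__ == '__main__':
    pid_queue = Queue()
    Process1 = Process(target=function, args=(pid_queue,))
    Process1.start()
    grandchild_pid = pid_queue.get()  # blocks until the inner process has been started
    time.sleep(5)
    os.kill(grandchild_pid, signal.SIGTERM)  # terminate the inner ("grandchild") process
    Process1.join()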

Related

Automatically restarting Python sub-processes using identical arguments

I have a Python script which calls a series of sub-processes. They need to run "forever", but they occasionally die or get killed. When this happens I need to restart the process with the same arguments as the one which died.
This is a very simplified version:
[edit: this is the less simplified version, which includes "restart" code]
import multiprocessing
import time
import random

def printNumber(number):
    print("starting :", number)
    while random.randint(0, 5) > 0:
        print(number)
        time.sleep(2)

if __name__ == '__main__':
    children = []  # list
    args = {}      # dictionary
    for processNumber in range(10, 15):
        p = multiprocessing.Process(
            target=printNumber,
            args=(processNumber,)
        )
        children.append(p)
        p.start()
        args[p.pid] = processNumber

    while True:
        time.sleep(1)
        for n, p in enumerate(children):
            if not p.is_alive():
                # get parameters dead child was started with
                pidArgs = args[p.pid]
                del args[p.pid]
                print("n,args,p: ", n, pidArgs, p)
                children.pop(n)
                # start new process with same args
                p = multiprocessing.Process(
                    target=printNumber,
                    args=(pidArgs,)
                )
                children.append(p)
                p.start()
                args[p.pid] = pidArgs
I have updated the example to illustrate how I want the processes to be restarted if one crashes/is killed/etc., keeping track of which pid was started with which args.
Is this the "best" way to do this, or is there a more "Pythonic" way of doing it?
I think I would create a separate thread for each Process and use a ProcessPoolExecutor. Executors have a useful method, submit, which returns a Future. You can wait on each Future and re-launch the Executor when the Future completes. The arguments to the function are tracked as instance attributes, so restarting is just a simple loop.
import threading
from concurrent.futures import ProcessPoolExecutor
import time
import random
import traceback

def printNumber(number):
    print("starting :", number)
    while random.randint(0, 5) > 0:
        print(number)
        time.sleep(2)

class KeepRunning(threading.Thread):
    def __init__(self, func, *args, **kwds):
        self.func = func
        self.args = args
        self.kwds = kwds
        super().__init__()

    def run(self):
        while True:
            with ProcessPoolExecutor(max_workers=1) as pool:
                future = pool.submit(self.func, *self.args, **self.kwds)
                try:
                    future.result()
                except Exception:
                    traceback.print_exc()

if __name__ == '__main__':
    for process_number in range(10, 15):
        keep = KeepRunning(printNumber, process_number)
        keep.start()

    while True:
        time.sleep(1)
At the end of the program is a loop to keep the main thread running. Without that, the program will attempt to exit while your Processes are still running.
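As a small variation (my own aside, not part of the answer above), you could also block the main thread by joining the KeepRunning threads instead of sleeping in a loop; this sketch reuses the KeepRunning class and printNumber function defined just above:

if __name__ == '__main__':
    keepers = [KeepRunning(printNumber, n) for n in range(10, 15)]
    for keep in keepers:
        keep.start()
    for keep in keepers:
        keep.join()  # blocks forever here, since run() never returns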
For the example you provided I would just remove the exit condition from the while loop and change it to while True.
As you said, though, the actual code is more complicated (why didn't you post that?). So if the process gets terminated by, let's say, an exception, just put the code inside a try/except block. You can then put that block inside an infinite loop.
I hope this is what you are looking for, but it seems to be the right way to do it given the goal and information you provided.
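A minimal sketch of that suggestion, using a hypothetical worker function (note that this restarts the work after an exception inside the process, but it cannot help if the process itself is killed from outside):

import time
import traceback

def worker(number):
    while True:                    # infinite loop: go back to work after a failure
        try:
            print(number)
            time.sleep(2)
        except Exception:
            traceback.print_exc()  # log the error, then loop around and keep going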
Instead of just starting the processes and forgetting about them, you can save the list of processes together with their arguments, and run a loop that checks whether they are still alive.
For example:
if __name__ == '__main__':
    process_list = []
    for processNumber in range(5):
        process_args = (processNumber,)
        process = multiprocessing.Process(
            target=printNumber,
            args=process_args
        )
        process_list.append((process, process_args))
        process.start()

    while True:
        for running_process, process_args in list(process_list):  # iterate over a copy
            if not running_process.is_alive():
                new_process = multiprocessing.Process(target=printNumber, args=process_args)
                process_list.remove((running_process, process_args))  # remove terminated process
                process_list.append((new_process, process_args))
                new_process.start()
I must say that I'm not sure the best way to do this is in Python; you may want to look at scheduler services like Jenkins or something similar.

Starting multiple processes inside a function makes it loop for each process started

I'm writing a program and made a "pseudo" program which imitates what the main one does. The main idea is that the program starts and scans a game. The first part detects if the game has started, then it opens 2 processes: one that scans the game all the time and sends info to the second process, which analyzes the data and plots it. In short, it's 2 infinite loops running simultaneously.
I'm trying to put it all into functions now so I can run it through tkinter and make a GUI for it.
The issue is, every time a process starts, it loops back to the start of the parent function, executes it again, then goes on to start the second process. What is the issue here? In this test model, one process sends the value of x to the second process, which prints it out.
import multiprocessing
import time
from multiprocessing import Pipe

def function_start():
    print("GAME DETECTED AND STARTED")
    parent_conn, child_conn = Pipe()
    p1 = multiprocessing.Process(target=function_first_process_loop, args=(child_conn,))
    p2 = multiprocessing.Process(target=function_second_process_loop, args=(parent_conn,))
    function_load(p1)
    function_load(p2)

def function_load(process):
    if __name__ == '__main__':
        print("slept 1")
        process.start()

def function_first_process_loop(conn):
    x = 0
    print("FIRST PROCESS STARTED")
    while True:
        time.sleep(1)
        x += 1
        conn.send(x)
        print(x)

def function_second_process_loop(conn):
    print("SECOND PROCESS STARTED")
    while True:
        data = conn.recv()
        print(data)

function_start()
I've also tried rearranging functions a bit on different ways. This is one of them:
import multiprocessing
import time
from multiprocessing import Pipe

def function_load():
    if __name__ == '__main__':
        parent_conn, child_conn = Pipe()
        p1 = multiprocessing.Process(target=function_first_process_loop, args=(child_conn,))
        p2 = multiprocessing.Process(target=function_second_process_loop, args=(parent_conn,))
        p1.start()
        p2.start()

# FIRST
def function_start():
    print("GAME LOADED AND STARTED")
    function_load()

def function_first_process_loop(conn):
    x = 0
    print("FIRST PROCESS STARTED")
    while True:
        time.sleep(1)
        x += 1
        conn.send(x)
        print(x)

def function_second_process_loop(conn):
    print("SECOND PROCESS STARTED")
    while True:
        data = conn.recv()
        print(data)

function_start()
You should always tag a multiprocessing question with the platform you are running under, but I will infer that it is probably Windows or some other platform that uses the spawn method to launch new processes. That means that when a new process is created, a new Python interpreter is launched, the program source is processed from the top, and any code at global scope that is not protected by the check if __name__ == '__main__': will be executed. As a result, each started process will execute the statement function_start().
So, as #PranavHosangadi rightly pointed out you need the __name__ check in the correct place.
import multiprocessing
from multiprocessing import Pipe
import time

def function_start():
    print("GAME DETECTED AND STARTED")
    parent_conn, child_conn = Pipe()
    p1 = multiprocessing.Process(target=function_first_process_loop, args=(child_conn,))
    p2 = multiprocessing.Process(target=function_second_process_loop, args=(parent_conn,))
    function_load(p1)
    function_load(p2)

def function_load(process):
    print("slept 1")
    process.start()

def function_first_process_loop(conn):
    x = 0
    print("FIRST PROCESS STARTED")
    while True:
        time.sleep(1)
        x += 1
        conn.send(x)
        print(x)

def function_second_process_loop(conn):
    print("SECOND PROCESS STARTED")
    while True:
        data = conn.recv()
        print(data)

if __name__ == '__main__':
    function_start()
Let's do an experiment: Before function_start(), add this line:
print(__name__, "calling function_start()")
Now, you get the following output:
__main__ calling function_start()
GAME DETECTED AND STARTED
slept 1
slept 1
__mp_main__ calling function_start()
GAME DETECTED AND STARTED
__mp_main__ calling function_start()
GAME DETECTED AND STARTED
FIRST PROCESS STARTED
SECOND PROCESS STARTED
1
1
2
2
...
Clearly, function_start() is called by the child process every time you start it. This is because Python loads the entire script and then calls the function you want from that script. The new processes get the name __mp_main__ to differentiate them from the main process, and you can make use of that to prevent these processes from calling function_start().
So instead of function_start(), call it this way:
if __name__ == "__main__":
print(__name__, "calling function_start()")
function_start()
and now you get what you wanted:
__main__ calling function_start()
GAME DETECTED AND STARTED
slept 1
slept 1
FIRST PROCESS STARTED
SECOND PROCESS STARTED
1
1
2
2
...
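As a small aside (not part of the original answer), you can check which start method your interpreter uses; that is what determines whether the whole module is re-imported in every child process:

import multiprocessing

if __name__ == '__main__':
    # typically 'spawn' on Windows and macOS, 'fork' on most Linux systems
    print(multiprocessing.get_start_method())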

Why won't this Python child-thread allow the Parent thread to do its work?

I have the following python multi-threading program
#!/usr/bin/env python
from multiprocessing import Process
import time

child_started = False

def child_func():
    global child_started
    child_started = True
    print "Child Started"
    while True:
        time.sleep(1)
        print "X"

if __name__ == '__main__':
    global child_started
    child_thread = Process(target=child_func)
    child_thread.start()
    while child_started is False:
        time.sleep(2)
    print "Parent Starting Process"
    # Do something
    print "Parent Done"
    child_thread.terminate()
    print "Child Cancelled by Parent"
    child_thread.join()
I expected the child process to do some work, and then eventually the parent process to come in and terminate it. However, that's not happening. Why? As you can see below, once the child process starts running, the parent process gets frozen out and never does anything. Why? How can I fix this?
$ ~/threads.py
~/threads.py:20: SyntaxWarning: name 'child_started' is assigned to before global declaration
Child Started
X
X
X
X
X
As #thepaul said, your child_started variable is local to each process and is not shared between them.
I suggest you create a Queue: once the child process has started, put an element into the queue, then check queue.empty() in your main process and do your work.
#!/usr/bin/env python
from multiprocessing import Process
from multiprocessing import Queue
import time

def child_func(queue):
    print "Child Started"
    # put anything into the queue after `child_func` gets invoked; this indicates
    # that your child process is working
    queue.put("started...")
    while True:
        time.sleep(1)
        print "X"

if __name__ == '__main__':
    queue = Queue()
    child_thread = Process(target=child_func, args=(queue,))
    child_thread.start()
    # keep sleeping until the queue is not empty
    while queue.empty():
        time.sleep(2)
    print "Parent Starting Process"
    # Do something
    print "Parent Done"
    child_thread.terminate()
    print "Child Cancelled by Parent"
    child_thread.join()
When you set child_started in your child_func function, you are setting a local variable, and not the global module-level one. Also, since you are using multiprocessing, the two processes won't even share the global variable either.
You should pass something with shared storage across the participating processes, such as a multiprocessing.Event, instead.
edit: whoops, I answered this first as if you were using threading instead of multiprocessing.
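A minimal sketch of that Event-based suggestion (my own code, not from the original answer): the child sets the Event once it is running, and the parent waits on it instead of polling a flag:

from multiprocessing import Process, Event
import time

def child_func(started):
    started.set()          # signal the parent that the child is up and running
    while True:
        time.sleep(1)
        print("X")

if __name__ == '__main__':
    started = Event()
    child_thread = Process(target=child_func, args=(started,))
    child_thread.start()
    started.wait()         # blocks until the child calls started.set()
    print("Parent Starting Process")
    # Do something
    print("Parent Done")
    child_thread.terminate()
    print("Child Cancelled by Parent")
    child_thread.join()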

Python: How to make program wait till function's or method's completion

Often there is a need for the program to wait for a function to complete its work. Sometimes it is the opposite: there is no need for the main program to wait.
I've put together a simple example. There are four buttons. Clicking each will call the same calculate() function. The only difference is the way the function is called.
The "Call Directly" button calls the calculate() function directly. Since there is a 'Function End' printout, it is evident that the program waits for the calculate function to complete its job.
"Call via Threading" calls the same function, this time using the threading mechanism. Since the program prints the ': Function End' message immediately after the button is pressed, I can conclude the program doesn't wait for the calculate() function to complete. How do I override this behavior? How do I make the program wait till the calculate() function is finished?
The "Call via Multiprocessing" button utilizes multiprocessing to call the calculate() function.
Just like with threading, multiprocessing doesn't wait for function completion. What statement do we have to put in to make it wait?
The "Call via Subprocess" button doesn't do anything, since I didn't figure out a way to hook subprocess up to run an internal script function or method. It would be interesting to see how to do it...
Example:
import sys
from PyQt4 import QtCore, QtGui

app = QtGui.QApplication(sys.argv)

def calculate(listArg=None):
    print '\n\t Starting calculation...'
    m = 0
    for i in range(50000000):
        m += i
    print '\t ...calculation completed\n'

class Dialog_01(QtGui.QMainWindow):
    def __init__(self):
        super(Dialog_01, self).__init__()
        myQWidget = QtGui.QWidget()
        myBoxLayout = QtGui.QVBoxLayout()

        directCall_button = QtGui.QPushButton("Call Directly")
        directCall_button.clicked.connect(self.callDirectly)
        myBoxLayout.addWidget(directCall_button)

        Button_01 = QtGui.QPushButton("Call via Threading")
        Button_01.clicked.connect(self.callUsingThreads)
        myBoxLayout.addWidget(Button_01)

        Button_02 = QtGui.QPushButton("Call via Multiprocessing")
        Button_02.clicked.connect(self.callUsingMultiprocessing)
        myBoxLayout.addWidget(Button_02)

        Button_03 = QtGui.QPushButton("Call via Subprocess")
        Button_03.clicked.connect(self.callUsingSubprocess)
        myBoxLayout.addWidget(Button_03)

        myQWidget.setLayout(myBoxLayout)
        self.setCentralWidget(myQWidget)
        self.setWindowTitle('Dialog 01')

    def callUsingThreads(self):
        print '------------------------------- callUsingThreads() ----------------------------------'
        import threading
        self.myEvent = threading.Event()
        self.c_thread = threading.Thread(target=calculate)
        self.c_thread.start()
        print "\n\t\t : Function End"

    def callUsingMultiprocessing(self):
        print '------------------------------- callUsingMultiprocessing() ----------------------------------'
        from multiprocessing import Pool
        pool = Pool(processes=3)
        try: pool.map_async(calculate, ['some'])
        except Exception, e: print e
        print "\n\t\t : Function End"

    def callDirectly(self):
        print '------------------------------- callDirectly() ----------------------------------'
        calculate()
        print "\n\t\t : Function End"

    def callUsingSubprocess(self):
        print '------------------------------- callUsingSubprocess() ----------------------------------'
        import subprocess
        print '-missing code solution'
        print "\n\t\t : Function End"

if __name__ == '__main__':
    dialog_1 = Dialog_01()
    dialog_1.show()
    dialog_1.resize(480, 320)
    sys.exit(app.exec_())
Use a queue: each thread when completed puts the result on the queue and then you just need to read the appropriate number of results and ignore the remainder:
#!python3.3
import queue  # For Python 2.x use 'import Queue as queue'
import threading, time, random

def func(id, result_queue):
    print("Thread", id)
    time.sleep(random.random() * 5)
    result_queue.put((id, 'done'))

def main():
    q = queue.Queue()
    threads = [threading.Thread(target=func, args=(i, q)) for i in range(5)]
    for th in threads:
        th.daemon = True
        th.start()
    result1 = q.get()
    result2 = q.get()
    print("Second result: {}".format(result2))

if __name__ == '__main__':
    main()
Documentation for Queue.get() (with no arguments it is equivalent to Queue.get(True, None)):
Queue.get([block[, timeout]])
Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).
How to wait until only the first thread is finished in Python
You can use the .join() method too.
what is the use of join() in python threading
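For instance, here is a minimal sketch of the .join() approach (my own example, not tied to the PyQt code above; note that calling join() from a GUI thread would freeze the UI while it waits):

import threading

def calculate():
    total = sum(range(50000000))
    print('...calculation completed')

t = threading.Thread(target=calculate)
t.start()
t.join()   # blocks here until calculate() has returned
print('Function End')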
I find that using the "pool" submodule within "multiprocessing" works amazingly well for executing multiple processes at once within a Python script.
See the section: Using a pool of workers
Look carefully at "# launching multiple evaluations asynchronously may use more processes" in the example. Once you understand what those lines are doing, the following example I constructed will make a lot of sense.
import numpy as np
from multiprocessing import Pool

def desired_function(option, processes, data, etc...):
    # your code will go here. option allows you to make choices within your script
    # to execute desired sections of code for each pool or subprocess.
    return result_array  # "for example"

result_array = np.zeros("some shape")  # This is normally populated by 1 loop, lets try 4.
processes = 4
pool = Pool(processes=processes)
args = (processes, data, etc...)  # Arguments to be passed into desired function.
multiple_results = []
for i in range(processes):  # Executes each pool w/ option (1-4 in this case).
    multiple_results.append(pool.apply_async(desired_function, (i+1,) + args))  # Syncs each.
results = np.array([res.get() for res in multiple_results])  # Retrieves results after
                                                             # every pool is finished!
for i in range(processes):
    result_array = result_array + results[i]  # Combines all datasets!
The code will basically run the desired function for a set number of processes. You will have to carefully make sure your function can distinguish between each process (hence why I added the variable "option"). Additionally, it doesn't have to be an array that is being populated in the end, but for my example that's how I used it. Hope this simplifies things or helps you better understand the power of multiprocessing in Python!
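As an aside (my own addition, not from the answers above): to directly answer the "what statement do we have to put in to make it wait?" part for the multiprocessing case, calling .get() on the AsyncResult returned by map_async() blocks until the work is done. A minimal sketch, assuming a calculate() function like the one in the question:

from multiprocessing import Pool

def calculate(arg=None):
    return sum(range(50000000))

if __name__ == '__main__':
    pool = Pool(processes=3)
    async_result = pool.map_async(calculate, ['some'])
    async_result.get()   # blocks here until calculate() has finished in the pool
    pool.close()
    pool.join()
    print('Function End')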

Multiprocessing beside a main loop

I've been struggling with an issue for some time now.
I'm building a little script which uses a main loop. This is a process that needs some attention from the user. The user responds to the steps, and then some magic happens with the use of some functions.
Besides this, I want to spawn another process which monitors the computer system for some specific events, like pressing specific keys. If these events occur, it will launch the same functions as when the user gives in the right values.
So I need to make two processes:
- The main loop (which allows user interaction)
- The background "event scanner", which searches for specific events and then reacts to them.
I try this by launching a main loop and a daemon multiprocessing process. The problem is that when I launch the background process it starts, but after that it does not launch the main loop.
I simplified everything a little to make it more clear:
import multiprocessing, sys, time

def main_loop():
    while 1:
        input = input('What kind of food do you like?')
        print(input)

def test():
    while 1:
        time.sleep(1)
        print('this should run in the background')

if __name__ == '__main__':
    try:
        print('hello!')
        mProcess = multiprocessing.Process(target=test())
        mProcess.daemon = True
        mProcess.start()
        # after starting, main loop does not start while it prints out the test loop fine.
        main_loop()
    except:
        sys.exit(0)
You should do
mProcess = multiprocessing.Process(target=test)
instead of
mProcess = multiprocessing.Process(target=test())
Your code actually calls test in the parent process, and that call never returns.
You can use locking synchronization to have better control over your program's flow. Curiously, the input function raises an EOFError, but I'm sure you can find a workaround.
import multiprocessing, sys, time

def main_loop(l):
    time.sleep(4)
    l.acquire()
    # raises an EOFError, I don't know why.
    #_input = input('What kind of food do you like?')
    print(" raw input at 4 sec ")
    l.release()
    return

def test(l):
    i = 0
    while i < 8:
        time.sleep(1)
        l.acquire()
        print('this should run in the background : ', i + 1, 'sec')
        l.release()
        i += 1
    return

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    #try:
    print('hello!')
    # create the Process objects first, then start them; chaining .start() onto the
    # constructor would only store None in the variables
    mProcess = multiprocessing.Process(target=test, args=(lock,))
    mProcess.start()
    inputProcess = multiprocessing.Process(target=main_loop, args=(lock,))
    inputProcess.start()
    #except:
    #sys.exit(0)
